Introduction: “They couldn’t possibly keep that secret”… or could they?
A popular civic folktale says the U.S. government is too big, too porous, and too incompetent to hide anything consequential for long. “If it were true, someone would have leaked it.” “Ten thousand people would need to know.” “The Internet makes secrets impossible.” Political scientists repeat the line to reassure students about oversight; journalists invoke it after dramatic disclosures; citizens use it to triage worry. It feels rational and democratic.
It is also wrong—at least, wrong in the ways that matter for power.
The U.S. state keeps two kinds of secrets very well: (1) programmatic infrastructures—long-running capabilities, authorities, and arrangements that shape what the government can do and how it does it; and (2) documentary memory—the records that would let outsiders reconstruct the thing accurately, assign responsibility, or price its true costs. These are not the kinds of secrets that produce a single cinematic revelation. They’re the kind that change how you would evaluate whole decades of policy if you could see them clearly. They are also the kind most insulated from the leak-and-scandal cycle that the public mistakes for “transparency working.”
This essay explains why the “government can’t keep big secrets” thesis is so attractive and so weak; describes the mechanisms that make U.S. secrecy resilient; and re-reads well-known leak episodes—WikiLeaks, the Manning databases, Watergate, and MKULTRA—to show how public spectacles can coexist with deep structural opacity. Along the way we’ll factor in things the folklore leaves out: compartmentation, bureaucratic incentives, classification law, the difference between secrets from adversaries and secrets from the public, the survivorship bias of leak history, and the institutional ability to shape the archive (and thereby shape what the future can know). The conclusion is blunt: secrecy in the United States is not an accidental byproduct. It is an engineered capability, it is underestimated by scholars and citizens, and it works best where it matters most.
Part I: Why the “leaky state” story is so persuasive
1) Availability and survivorship bias
We remember spectacular disclosures: the Pentagon Papers, Watergate, Iran–Contra, warrantless wiretapping, Abu Ghraib, Snowden, Pegasus, you name it. Those episodes are available to memory, vivid, and endlessly retold. By contrast, the large class of secrets that never surface leaves no trace in public consciousness. Our inference engine works on the only dataset it can see: survivorship bias with a moral. “Secrets leak” because we mostly hear about secrets that leaked.
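To make the selection effect concrete, here is a toy simulation (every parameter invented for illustration): give each secret a small annual chance of leaking and watch for fifty years. The secrets the public can name all leaked, by construction; the ones that outlive the horizon never enter the dataset at all.

```python
import random

random.seed(42)

# Toy survivorship model. All numbers are invented:
# each secret faces a 2% annual hazard of leaking, observed for 50 years.
N, HAZARD, HORIZON = 10_000, 0.02, 50

times_to_leak = []   # visible: secrets that leaked within the horizon
never_leaked = 0     # invisible: secrets that outlived the horizon

for _ in range(N):
    for year in range(1, HORIZON + 1):
        if random.random() < HAZARD:
            times_to_leak.append(year)
            break
    else:                       # loop finished without a leak
        never_leaked += 1

print(f"Ever leaked within {HORIZON} years: {len(times_to_leak) / N:.0%}")
print(f"Mean years-to-leak among LEAKED secrets: "
      f"{sum(times_to_leak) / len(times_to_leak):.1f}")
print(f"Never surfaced at all: {never_leaked / N:.0%}")
```

Roughly a third of these simulated secrets never surface, yet an observer reasoning only from the leaked cases would conclude that secrets reliably come out.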
2) Bureaucratic slapstick as a stereotype
The same culture that makes jokes about the DMV or the post office also writes plots in which one conscientious agent can topple a villainous apparatus with a thumb drive. We overgeneralize from visible bureaucratic friction to institutional incapacity. What gets missed is that complex organizations can be simultaneously clumsy at customer service and very good at narrow, specialized tasks—including compartmented secrecy.
3) Confusing “leaks that move headlines” with “secrets that change structure”
The best-known leaks usually involve episodes—embarrassing cables, war diaries, abuses in a prison. They move headlines and careers. But they rarely restructure the authorities, budgets, and technical scaffolding that govern the state’s daily reach. The public conflates “we learned something shocking” with “we now grasp the system.” The former is common; the latter is hard and often unsuccessful.
4) The civics-class comfort
Political science pedagogy leans toward reassurance: checks and balances, press freedom, congressional oversight, inspectors general. Each is real; none is universal or automatic. The deeper a program is buried in special access compartments, the more these mechanisms become formalities whose outputs are invisible to the public and often to most elected officials.
Part II: How secrecy is actually engineered
1) Classification isn’t just labels; it’s architecture
“Confidential/Secret/Top Secret” is only the surface. The teeth are in SCI compartments, Special Access Programs (SAPs), and need-to-know enforcement across separate networks (JWICS/SIPRNet/NIPRNet), segmented facilities (SCIFs), polygraphs, and lifetime nondisclosure agreements. Access is sliced so that individuals see only a sliver. You can employ tens of thousands across a capability and ensure that no one outside a small design circle could describe the whole thing.
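A minimal sketch of the need-to-know logic, with invented compartment names and a deliberately simplified access rule (a dominating clearance level plus every compartment tag on the item): two fully cleared insiders can work the same program and still be structurally unable to describe the whole.

```python
from dataclasses import dataclass, field

# Toy compartmented-access model; names and rules are invented,
# and real systems are far more elaborate.
LEVELS = {"CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

@dataclass
class Person:
    name: str
    level: str
    compartments: set = field(default_factory=set)

@dataclass
class Document:
    title: str
    level: str
    compartments: set = field(default_factory=set)

def can_access(person: Person, doc: Document) -> bool:
    # Level must dominate AND the person must hold every compartment tag.
    return (LEVELS[person.level] >= LEVELS[doc.level]
            and doc.compartments <= person.compartments)

docs = [
    Document("sensor design",  "TOP SECRET", {"ALPHA"}),
    Document("tasking rules",  "TOP SECRET", {"BRAVO"}),
    Document("overall system", "TOP SECRET", {"ALPHA", "BRAVO", "GAMMA"}),
]

engineer = Person("engineer", "TOP SECRET", {"ALPHA"})
analyst  = Person("analyst",  "TOP SECRET", {"BRAVO"})

for p in (engineer, analyst):
    print(p.name, "sees:", [d.title for d in docs if can_access(p, d)])
# Both hold Top Secret clearances; neither can read "overall system".
```

The design choice to enforce compartments as set containment, not clearance level alone, is what lets headcount scale without widening the circle that sees the whole.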
2) The law bends toward concealment
Three tools matter: the state secrets privilege (letting the executive seek dismissal of cases that touch protected information), Glomar responses (“we can neither confirm nor deny”), and the classification system backed by Espionage Act prosecutions. Add FOIA’s exemptions (national security, sources and methods, operational files), long declassification horizons, and mandatory declassification review bottlenecks, and you have a default that denies detail even when the outline is conceded.
3) The black budget and contractual fog
Much intelligence and military R&D flows through classified budget lines inside omnibus appropriations and contractor networks with complex ownership structures. Congress authorizes in aggregate; line-item transparency is rare. Contractors are bound by proprietary and classified shields; audits occur in closed venues. Outsiders see the money river disappear into a canyon and reappear as a lake; the canyon is the point.
4) Control of the archive is control of the future
Destruction, retention, and indexing are levers of power. If key records are never created (verbal orders), are created but segregated (operational files), or are created and later destroyed under color of policy, the “truth” a future historian can write is pre-edited by today’s clerks. A redaction today can be a permanent hole tomorrow if no duplicate survives.
5) Secrecy by complexity and capability
The state can hide in plain sight by making knowledge expensive to assemble. Millions of pages are declassified with heavy redactions; scattered pieces of a system are public but laborious to recombine. Even when law requires disclosure, the practical effect is opacity by overwhelm.
Part III: Evidence that big secrets do last
Ironically, the strongest rebuttals to “the government can’t keep secrets” are secrets we now know—precisely because they were kept, sometimes for decades.
- The Manhattan Project employed over a hundred thousand people, many compartmented across sites. Until Hiroshima, the American public was in the dark; Soviet intelligence was not (thanks to espionage). The lesson: secrecy can succeed against the public while failing against a determined foreign intelligence service, and still achieve its operational aim.
- ULTRA (the Allied exploitation of Axis codes) remained tightly held; the full public story did not emerge until the 1970s. A war-shaping capability hid in a cocoon of cover stories and need-to-know discipline.
- The National Reconnaissance Office existed for three decades before public acknowledgment. Satellite programs flew, budgets passed, contractors thrived; only insiders knew the label.
- VENONA decryption (Soviet espionage traffic) ran from the 1940s; the project stayed classified until the 1990s. It materially affected counterintelligence without becoming a public fact.
- The U-2 program was secret until the 1960 shootdown forced disclosure; stealth aircraft flew in the Nevada desert for years before acknowledgment; Area 51 itself was only officially recognized in the 2010s.
- Covert recovery efforts like Project AZORIAN (the Glomar Explorer) were concealed behind elaborate cover stories; decades later, the details remain partly contested because key records were withheld or destroyed.
These are not trivia. They are strategic capabilities kept largely out of civic sight for a generation or more. If “the U.S. can’t keep secrets,” how did these survive?
Part IV: Re-reading famous leaks and scandals
The folklore around leaks reinforces the leaky-state myth. A closer reading shows how little these episodes tell us about the limits of secrecy.
1) WikiLeaks’ “Collateral Murder” (the Apache video)
The 2007 Baghdad strike video, released in 2010, was symbolically powerful and ethically contested. As a secret, it wasn’t a crown jewel: a single engagement video, classified but not deeply revealing of sources, methods, or macro policy instruments. It forced a public argument over rules of engagement and civilian harm; it did not open the vault on the war’s operational intelligence or decision frameworks.
2) The Manning Iraq/Afghanistan SIGACTs
The “Iraq War Logs” and “Afghan War Diary” consisted largely of SIGACTs, the military’s significant-activity reports on incidents in the field. They offered granular texture and revealed undercounted harms, local dynamics, and sometimes misconduct. They did not expose the war’s most sensitive collection programs, targeting algorithms, or strategic assessments. The volume dazzled; the structural secrets remained intact.
Why? Because the most sensitive material sits in higher compartments that a junior analyst cannot scrape at will. The system works as designed: low-level access yields narrative embarrassment but not capability compromise.
3) Watergate
Watergate toppled a president and proved that in the U.S., political crimes can trigger institutional self-correction. But as a “state secret,” Watergate was parochial: a domestic campaign’s dirty tricks, cover-ups, and abuses of power. The intelligence community’s deepest equities—programs, platforms, legal authorities—were not the subject of disclosure. Watergate shows that politics leaks. It says little about how classified infrastructures fare.
4) MKULTRA: big program, partial light, managed memory
MKULTRA was an umbrella for CIA behavioral-research contracts and projects (predecessors BLUEBIRD/ARTICHOKE, plus successors and side programs), including non-consensual experiments. In 1973, the Director of Central Intelligence ordered most files destroyed. The 1975 Rockefeller Commission and Church Committee produced sketches; the 1977 Senate hearings, enabled by the discovery of several boxes of surviving financial records, gave a partial view. Official narratives emphasized small scale, failure, and termination in the early 1960s. Critics argue this was a minimizing frame: undeniably abusive activities occurred; the record set is fragmentary by design; definitional boundaries (what “counts” as MKULTRA versus adjacent projects) are contestable; and the destruction of files permanently narrowed what can be known.
Whatever one’s judgment on “whitewash,” MKULTRA demonstrates three points central to this essay: (1) governments can destroy archival memory; (2) later inquiries often accept the surviving record as the record; (3) the public narrative can stabilize around “it was small and it stopped” even when the evidence base is thin. Secrecy here was not perfect; it was good enough to ensure that what we know is bounded by what we will never recover.
Part V: Secrets from adversaries vs. secrets from citizens
A crucial distinction gets lost in popular debate. The U.S. often fails to keep secrets from professional foreign intelligence services—that is normal in the spy world. The Manhattan Project was penetrated; NSA programs are persistently targeted; insiders sometimes sell secrets. But secrecy’s democratic function—concealing facts from citizens and ordinary courts—remains robust. A capability can be compromised in Moscow while remaining officially nonexistent in Milwaukee. The public is told little; the courts are told nothing; Congress is told something in a closed room that cannot be discussed elsewhere. That is a very different measure of “can the state keep secrets.”
Part VI: Why so many leaks are “loud but limited”
1) Access asymmetry by design
Most leakers are low- to mid-level personnel with broad but shallow access. They can exfiltrate volume or vividness (photos, logs, emails), not deeply compartmented designs, sources, or legal architectures. The surrounding system can then absorb the publicity hits while preserving the core.
2) Narrative beats capabilities
Media ecosystems turn documents into stories framed around protagonists and villains. Capabilities—how a surveillance selector actually propagates, how a covert procurement chain works—are dry, technical, and often omitted. The public debates ethics and personalities; the machinery hums.
3) Leak management is a mature craft
Agencies triage: deny specifics; concede the least damaging facts; shift focus to the leaker; promise reviews; tighten lower-tier access; keep the crown jewels walled off. Congress holds a hearing; classified annexes swallow the most interesting answers. The system resets.
Part VII: The academy’s blind spots
1) “Methodological daylighting”
Political scientists and historians prefer things that can be documented with open sources. So the questions are often trimmed to fit the sources. That creates a literature rich in what is visible and thin in what is not. When scholars write, “We find limited evidence of X,” the unwritten clause is, “in the open record.” Readers forget the clause and walk away believing “X probably doesn’t exist.”
2) Incentives and training
Gaining clearances to see the relevant material often means not publishing it. Scholars who do fieldwork at the classified edge cannot cite it. This produces two tribes that seldom talk: cleared practitioners who cannot publish and publishing scholars who cannot see. The public hears mostly from the latter and infers that what they cannot see is small.
3) The myth of institutional incompetence
Academic writing often leans on principal-agent models in which bureaucrats shirk and politicians control. There is truth here, but it can obscure the pockets of high competence the security state cultivates precisely in access control and secrecy.
Part VIII: The Internet didn’t end secrecy; it professionalized it
Digital records are copyable; that raises leakage risk. But digitization also turbocharged access control, audit trails, and searchability within compartments. The same tools that let a contractor walk off with a cache also let agencies map who touched what and when, and to segment datasets so that few people can ever touch the whole. Add continuous monitoring, insider-threat programs, and legal deterrents, and the equilibrium is stable: episodic spectacular breaches; durable protection for the most sensitive tiers.
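A sketch of the audit side, using a fabricated access log and an invented flagging threshold: every read is recorded, so investigators can reconstruct who touched what and when, and even crude anomaly rules surface accounts reading far outside their usual scope.

```python
from collections import defaultdict

# Fabricated audit log of (user, document) read events. Real logs would
# carry timestamps, terminals, query strings, and much more.
audit_log = [
    ("alice", "doc-001"), ("alice", "doc-002"),
    ("bob",   "doc-001"),
    ("carol", "doc-001"), ("carol", "doc-002"), ("carol", "doc-003"),
    ("carol", "doc-004"), ("carol", "doc-005"), ("carol", "doc-006"),
]

SCOPE_THRESHOLD = 4  # invented rule: flag anyone touching > 4 distinct docs

touched = defaultdict(set)
for user, doc in audit_log:
    touched[user].add(doc)

# Forensics: who touched doc-002?
print("doc-002 readers:", sorted(u for u, d in touched.items() if "doc-002" in d))

# Insider-threat triage: unusually broad readers get flagged for review.
for user, docs in sorted(touched.items()):
    if len(docs) > SCOPE_THRESHOLD:
        print(f"FLAG {user}: {len(docs)} distinct documents accessed")
```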
Part IX: Controlled transparency as a secrecy strategy
Governments sometimes disclose in order to conceal. Limited declassifications can (a) satisfy political demand; (b) shape the narrative; (c) inoculate against deeper digging by conceding a headline while protecting a system. Blue-ribbon commissions can produce pages of facts and a frame that leans toward “isolated abuses” and “lessons learned.” The effect is not always cynical; it is institutional risk management. The result, however, is the same: closure for the public at a point chosen by the state.
Part X: Case study—MKULTRA as managed memory
Let’s return to MKULTRA, because it captures many of the dynamics in one file.
- Programmatic breadth: MKULTRA was not a single project but a funding umbrella with subprojects across universities, front organizations, prisons, and clinics. That makes defining “it” malleable.
- Record destruction: The directive to destroy central files ensured that later knowledge would be incomplete. Surviving financial records offered glimpses—not blueprints.
- Temporal distance: By the time official inquiries occurred (mid-1970s), key actors had retired or died; memories were self-serving; paper trails were curated.
- Commission incentives: The post-Watergate environment demanded exposures with guardrails: accountability for the past; reassurance for the present. Emphasizing small scale, failure, and termination served that double aim.
- Public narrative lock-in: Once the basic arc—“it happened; it was wrong; it ended”—takes root, it is hard to reopen unless new records surface. Destruction ensured they largely cannot.
Notice what this implies about forever secrets: in areas where recordkeeping was shaped to minimize later reconstruction, what is secret is not simply “what we haven’t been told yet.” It is what no one will ever be able to prove, no matter how willing current officials might be to speak. Evidence can be killed.
Part XI: Watergate, again—but sideways
It is worth stressing how Watergate can mislead observers about secrecy. We take comfort: “The system worked.” But Watergate worked against a political crime spree whose evidence trail ran through ordinary police and courts. Try to transpose that comfort onto intelligence authorities, covert action findings, signals intelligence selectors, source identities, targeting rules, black budget lines, or memoranda of notification, and you hit a wall. Those live in sealed venues policed by classification law. Watergate says: presidents can fall. It does not say: systems will reveal themselves.
Part XII: Why this matters—normative stakes
Secrecy is not inherently malign. States need to protect ongoing operations, sources, and fragile diplomatic channels. The problem is scale and permanence. When secrecy blankets the rules, not just the moves, citizens cannot know the tradeoffs being made in their name. They cannot price risks, punish excess, or applaud restraint. The “we can’t keep secrets” myth pacifies concern by implying that if anything truly bad were happening, we’d know. The record shows otherwise: we know some things; the rest is engineered not to surface, or to surface in curated slices.
Part XIII: Practical corollaries—how to reason under secrecy
- Disaggregate domains. Expect leaks in political scandal, detainee abuse, and tactical misconduct; expect durable opacity in authorities, platforms, and budgets.
- Beware arguments from absence. “No public evidence of X” is very weak in domains where the archive is curated and the law is designed to repel inspection; the toy Bayes calculation after this list makes the point concrete.
- Use adversary behavior as indirect evidence. The actions of peer intelligence services sometimes reflect knowledge of U.S. capabilities long before the public learns them.
- Weight sources by incentive and access. Whistleblowers below the need-to-know ceiling can give color, not architecture. Official reports can give architecture, but with frame control.
- Remember the memory war. When records are destroyed or siloed, later truth cannot fully catch up. Acknowledge permanent uncertainty rather than retreating to comfortable myths.
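On arguments from absence specifically: with invented probabilities, a small Bayes computation shows why “no public evidence” is nearly uninformative when the archive is curated. If a program, had it existed, would rarely have left public traces, then observing no traces barely moves the posterior.

```python
def posterior_given_no_evidence(prior, p_evidence_if_true):
    """Posterior P(X | no public evidence), assuming no false positives."""
    p_silent_if_true = 1 - p_evidence_if_true
    numerator = p_silent_if_true * prior
    return numerator / (numerator + 1.0 * (1 - prior))

PRIOR = 0.30  # invented prior probability that program X exists

# Open archive: if X existed, evidence would very likely have surfaced.
print(f"Open archive:    {posterior_given_no_evidence(PRIOR, 0.95):.3f}")  # ~0.02
# Curated archive: records sealed or destroyed; evidence rarely surfaces.
print(f"Curated archive: {posterior_given_no_evidence(PRIOR, 0.05):.3f}")  # ~0.29
```

In the open-archive case, silence crushes the hypothesis; in the curated case, the posterior barely drops below the prior. The strength of an argument from absence depends entirely on how likely the evidence was to escape.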
Part XIV: Common objections and replies
Objection: “The sheer number of people involved makes keeping secrets impossible.”
Reply: Only a handful need to grasp the whole. The rest perform modular tasks behind NDAs and inside air-gapped systems. Numbers raise risk; architecture limits the blast radius, as the sketch just below makes concrete.
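This back-of-the-envelope model (every number invented, loosely in the spirit of per-person disclosure-rate models) shows that what drives the risk of the whole leaking is the count of people who could describe the whole, not total headcount.

```python
# Toy leak-risk model: assume each person who grasps the WHOLE program
# independently discloses it with some small annual probability.
# Both the rate and the headcounts below are invented.
def p_whole_leaks(n_who_grasp_whole, annual_rate=0.001, years=25):
    person_years = n_who_grasp_whole * years
    return 1 - (1 - annual_rate) ** person_years

# Monolithic program: 10,000 people, any of whom could describe it all.
print(f"Monolithic:    {p_whole_leaks(10_000):.1%}")  # effectively certain

# Compartmented program: 10,000 workers, but only 20 in the design circle.
print(f"Compartmented: {p_whole_leaks(20):.1%}")      # roughly 40%
```

The 9,980 compartmented workers can still leak fragments, which is exactly the “loud but limited” pattern of Part VI: vivid pieces escape while the architecture remains describable by almost no one.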
Objection: “What about the Pentagon Papers and Snowden? Those were structural.”
Reply: Both were big and instructive. They also illustrate the pattern: episodic windows into parts of the system, followed by institutional adaptation that hardens the core. Even in Snowden’s case, what we learned was broad but not exhaustive; the most sensitive selectors, source tradecraft, and multinational arrangements remained mostly offstage.
Objection: “If MKULTRA could leak, anything can.”
Reply: What leaked about MKULTRA leaked because of procedural anomalies (surviving financial boxes) and a unique political moment (post-Watergate). The core was gone with the shredded files. The shape of what we don’t know demonstrates how effective the destruction was.
Objection: “Courts can force disclosure.”
Reply: In national-security litigation, the state secrets privilege and classification create cliffs that courts rarely climb. FOIA helps at the margins; the margins do not include “how the system works in detail.”
Conclusion: Secrecy as a durable capability—and how not to be fooled by its theater
The United States is a constitutional republic with a professional secrecy apparatus nested inside it. That apparatus does not fail as often as public folklore claims. It fails selectively and usefully, from the state’s point of view: the leaks we remember usually legitimate the claim that the system can be corrected, while leaving the machinery mostly intact. We then mislearn the lesson: “They can’t keep big secrets.” In fact, they keep the biggest secrets—about rules, capacities, and memory—routinely and for long periods, sometimes forever.
Re-reading familiar episodes through this lens—WikiLeaks’ Apache video, the Manning war logs, Watergate, MKULTRA—clarifies the pattern. Those disclosures mattered, but they mostly grazed the surface: narratives and misconduct over architecture and authority. Where the system cared most, it was insulated. Where it was vulnerable, it adapted. Meanwhile, examples that don’t fit the leaky-state myth—ULTRA, VENONA, the NRO, black budgets, special access architectures—demonstrate that serious secrets can be and often are kept for decades.
Citizens and scholars who want to see the modern state clearly must retire the comfort of the myth and learn to reason under engineered opacity: distinguish what the system tends to reveal from what it tends to quarantine; build inferences from adversary behavior and institutional incentives; resist arguments from absence; and treat the archive not as neutral memory but as a contested terrain shaped by the very actors under study. Only then can public oversight—imperfect, adversarial, incremental—hope to aim at the right targets, rather than applauding itself for winning theater while the factory in the next county runs all night with the lights off.