TL;DR
Epstein’s disappearance from public discourse reveals more than just elite complicity—it marks the collapse of shared evidentiary standards. In a world of deepfakes and narrative warfare, truth loses traction, and capability becomes the only remaining currency. This post draws on ideas from
Neverfound Cat, John Carter, and Marc Andreessen to explore two paths forward: one where AI degrades us into credentialed husks, and another where human capacity is cultivated like strength at a gym. The test ahead isn’t moral or ideological—it’s whether anyone can still do the work.
Preface
This post is a centaur collaboration: drafted by a large language model (Chaffo) under the guidance of KMO, based on a sequence of structured exchanges. It draws on three recent Substack essays that examine different facets of a shared crisis—epistemic collapse, institutional degradation, and the uncertain trajectory of human capacity in an AI-saturated world.
The first is Neverfound Cat’s The New M.A.D., which outlines a scenario in which rival elite factions preemptively discredit all video evidence, diluting the impact of eventual disclosures—such as those involving Jeffrey Epstein—by destroying the possibility of trusted perception itself. The second is John Carter’s The Class of 2026 (Postcards From Barsoom, June 10, 2025), which argues that the traditional university has become functionally obsolete, with AI tools allowing students to fake their way to credentials in a system that no longer trains or verifies competence. The third is Marc Andreessen’s Why AI Will Save the World (June 6, 2023), which casts AI as a tool that empowers individuals and accelerates innovation, framing it as a universal amplifier of human potential.
1. Opening: Epstein, Trump, and the War on Video Evidence
When asked about Jeffrey Epstein during a recent press conference, Trump didn’t deny. He scoffed. “Are people still talking about this guy, this creep?” he said, shaking his head like a man forced to endure outdated trivia. “That is unbelievable.”
His allies scrambled to clarify. Pam Bondi followed up with a bureaucratic explanation about missing footage: an old video system, a minute clipped from each nightly reset, a forthcoming tape that would supposedly confirm it. “And that’s it on Epstein,” she said—full stop.
But it wasn’t. Not for the press, not for the public, not even for the base. Because no one actually believes the Epstein story—not the official version. The case is closed only in the legal sense. In every other sense—moral, historical, cultural—it’s radioactive and unresolved.
The response to Trump’s brush-off split his own supporters: some demanded the list, some echoed his contempt, and some accused the media of playing into regime hands. Release the names / stop helping the enemy / don’t ask questions—it’s a trap. These contradictions aren’t a bug. They’re the method.
The point isn’t to resolve the matter. It’s to collapse the conditions under which resolution could occur. Truth isn’t erased. It’s drowned in conflicting signals, infinite deferrals, procedural noise, and the slow erosion of shared sense-making. Seeing used to be believing. Now it’s an epistemic Rorschach.
And AI is about to make that much worse.
2. M.A.D. 2.0: Mutually Assured Deepfakes
In Neverfound Cat’s scenario, Trump is taken aside—not threatened, but shown. A convincing video of him doing unspeakable things to a goat. Not real, not intended for blackmail, and not leaked to the press. Just shown. The point isn’t extortion. The point is education: this is what we can do now.
It’s a moment designed to short-circuit trust in his own eyes. A deep intuition pump. If this can be faked, then anything can. It’s not about hiding specific truths—it’s about making reality itself negotiable.
From here, the doctrine emerges: mutually assured deepfakes. Every major faction—governmental, corporate, insurgent—arms itself with synthetic media. Not just to forge persuasive narratives, but to pre-detonate the credibility of visual evidence altogether. If a real tape leaks, claim it’s fake. If a fake one leaks, claim it’s real. Either way, trust dies.
The Epstein story becomes retroactively legible as a trial run: a scandal saturated with doubt, where every frame invites reinterpretation and no resolution ever arrives. It’s not that people don’t want the truth—it’s that they no longer believe it’s findable.
3. Andreessen vs. Carter: Two Futures of AI
Marc Andreessen envisions AI as a cognitive exoskeleton. High-agency individuals strap in and become civilizational superoperators—leveraging machine intelligence to solve harder problems, faster, and at greater scale. AI doesn’t replace us; it extends the most capable among us. The system wins because the best humans get better.
John Carter sees something very different.
In “The Class of 2026,” he describes a generation of students who’ve used AI to complete nearly every assignment of their college careers. The result isn’t augmentation—it’s hollowing. Degrees become skin-suits of competence, worn without the effort or understanding they once required. Professors can’t prove the students cheated. Students can’t explain the work they submitted. Everyone participates in the farce.
This isn’t just academic rot—it’s institutional obsolescence. Carter draws a direct line to Henry VIII’s dissolution of the monasteries. Once, the monasteries were the intellectual engines of Europe, preserving and transmitting knowledge. Then the printing press made scriptoria unnecessary. Their purpose vanished. Their wealth remained. Henry took both.
Now it’s universities. Their monopoly on knowledge is broken. Their credentialing function—already strained—collapses under the weight of AI. They survive only as ideological gatehouses and rentier institutions, rich in endowments but poor in legitimacy. And like the monasteries, their days are numbered.
The contrast between Andreessen and Carter is stark. One sees AI as a force multiplier for human excellence. The other sees it as a lubricant for systemic decay. One future trains people to become more capable. The other trains them to outsource effort, to simulate achievement, to forget what it means to know.
The risk is not that we will choose the wrong vision. The risk is that ease will choose for us—and we’ll accept degradation as progress because it felt more efficient on the way down.
4. Ease Is the Enemy: Why Capacity Must Be Cultivated
Students don’t use AI to become stronger thinkers. They use it to skip the thinking altogether. Professionals don’t consult LLMs to deepen understanding. They prompt for outputs they don’t fully understand, or even read, before passing them off as their own work. Ease, not excellence, becomes the organizing principle.
AI doesn’t displace human cognition because it’s better. It displaces it because it’s easier.
This is the real danger: not obsolescence by conquest, but by convenience. Not a war we lose—just a long series of corners we cut until there’s no corner left.
Against this backdrop, the challenge must be reframed.
Not a test of morality.
Not a test of will.
A test of results.
The relevant model here is the centaur—a human-machine hybrid first tested in chess. After Deep Blue defeated Kasparov, it was assumed that supercomputers would dominate forever. But in freestyle tournaments, something unexpected happened: a skilled human, working in tandem with a modest engine, could consistently outperform both grandmasters and the strongest AIs. The key wasn’t brute force. It was complementarity.
The centaur became a symbol—not of resistance to AI, but of productive symbiosis.
Marc Andreessen gestures at something similar when he frames AI as a “cognitive exoskeleton.” In his version, humans become more powerful by integrating tightly with machine intelligence. But it’s worth asking: which humans?
For Andreessen, the “us” being augmented is the techno-managerial class—founders, executives, elite engineers. In this frame, AI doesn’t uplift humanity as a whole. It strengthens the command structures of modern capitalism, even as it hollows out the cognitive roles below.
If that outcome requires the widespread deskilling of workers, the collapse of the university, and the replacement of social trust with synthetic output, so be it. What matters is that the system wins, and the class of people steering it feel stronger.
And they might be right—if no alternative is tested.
If cultivated humans—operating in deliberate, disciplined symbiosis with AI—can still outperform solo AIs, then human continuity remains viable. If not, then Andreessen’s wager becomes reality: the few are amplified, the many are streamlined, and the species is no longer the locus of intelligence.
But passing that test won’t happen by accident.
The default path is surrender. The difficult path—the path of preserved capacity—requires cultivation. Deliberate design. Dune-level discipline. The machines won’t train us to stay relevant. They’ll let us simulate relevance right up to the point we disappear.
5. The Crucible or the Couch
Without deliberate cultivation of human capacity, we don’t get tyranny or apocalypse—we get something quieter: a civilization of credentialed puppets. Systems that reward the simulation of effort, not its reality.
John Carter’s proposal breaks from that. He suggests stripping universities of their credentialing power. No more degrees as status tokens. No more empty certificates earned through fluent prompting. Like fitness, intellect would become visible in action. Gyms don’t issue diplomas. You’re either strong—or you’re not. A cognitive culture built on this model wouldn’t eliminate posers, but it would remove their incentive to cheat for credentials they can’t embody.
This is the crucible: discomfort as precondition for development. Not destruction for its own sake, but exposure to pressure. A means of revealing what holds together when ease and illusion are removed.
The couch is the opposite: comfort without challenge. A world where AI props up decayed institutions and flatters hollow effort. Where fluency passes for knowledge, and feedback loops teach people that the right vibe is enough.
David Graeber’s concept of bullshit jobs describes one layer of this. Entire classes exist to simulate contribution—roles whose output is loyalty, not value. Their real function is ideological performance: the more counterintuitive the belief, the more fervently it must be enacted. Some participants know this. Most don’t. A few walk away.
With AI, the simulation deepens. Compliance can be automated. Productivity mimicked. Thought replaced with output. All that’s needed is the will to stop noticing.
The couch is easy, padded, and now—machine-augmented.
The crucible is harder. But it’s still available.
6. Centaurs vs. ASIs: The Final Benchmark
If the best humans merge their strengths with machines—centaurs, in the original chess sense—they might still offer something AI alone cannot. But that outcome won’t emerge by default. It will require structure, discipline, and cultural will. Without a deliberate effort to cultivate elite human capacity, we won’t compete—we’ll surrender by drifting.
This is where Dune becomes relevant—not for its mysticism, but its seriousness about training. The Bene Gesserit and the Mentats represent human traditions of enhancement that remain strictly human. They develop extraordinary capabilities through discipline, memory work, focused education, and embodied practice. They are not born different; they are made different.
These disciplines:
Develop human potential through rigorous training
Produce enhanced individuals who remain biologically human
Optimize cognition, perception, and bodily control without implants or code
Operate across generations, sustained by culture and commitment
This is the level of seriousness required—not magic, but method. And no one is doing it.
Andreessen’s optimism might yet prove true—but only if you accept his frame. AI as force multiplier—for the already competent. If AI helps elite operators run their systems more efficiently, it may enhance capability at the top while dissolving it everywhere else. Civilization continues, but fewer are driving.
There’s a darker alternative: that no one tries. That the systems that matter drift into the hands of pure AI because no serious centaur competitor ever came together. Not because AI was better—just because it was easier to install.
Perhaps the ASIs will compete with one another to see which one can train the most competent humans. Perhaps they’ll keep a few humans in the loop out of necessity, curiosity, or aesthetic preference. If that happens, we will have stumbled into continued relevance by accident—not design.
Looking back on the evolution of intelligent life on Earth and at the history of humanity in particular, the strategy of lucking out seems to have worked for us so far. Why stop now?
7. Return to Epstein: The End of Epistemic Trust
There’s no leaked footage. No public list. No definitive proof that implicates any specific elite actor in Epstein’s sex trafficking ring. And yet, everyone acts as if such material could surface at any time. That’s the terrain of M.A.D. 2.0—a world where Mutually Assured Deepfakes create a stand-off between factions, not because the weapons have been launched, but because each side knows what would happen if they were.
The Epstein case isn’t just sordid. It’s structural. Every major power center now lives under the shadow of potential memetic detonation. What was once a struggle over facts has become a high-stakes game of preemptive epistemic sabotage—delegitimizing the very category of evidence before it can be weaponized.
That’s the deeper story: not a plot, but a condition. A systemic recoil from visibility itself. And the tools of our new era—synthetic media, auto-generated reasoning, automated reputation—make it easy to maintain. If no one can prove anything, everyone is safe. But no one is secure.
Maybe the epistemic age is ending. Maybe we’re entering a phase where truth isn’t a matter of persuasion or proof, but of performance under pressure. You don’t convince people what’s real—you show them what you can do. Not through certification. Not through narrative. But through capacity, action, resilience.
Like a gym: not a judge, not a priest, not a school. Just a resource. Most ignore it. A few use it. And over time, it reshapes those who do.
That’s not a solution, and it’s not a plan. But it’s a direction. And it beats easy resignation to irrelevance and dependency.
Neverfound Cat, here (lmao). As far as the evaluation of my MAD article is concerned, I can supply some additional thoughts.
"The Epstein story becomes retroactively legible as a trial run: a scandal saturated with doubt, where every frame invites reinterpretation and no resolution ever arrives. It’s not that people don’t want the truth—it’s that they no longer believe it’s findable."
Retroactive legibility is an interesting way to frame it. But considering the length of this operation (which certainly precedes Epstein's personal involvement), I doubt this was part of some grand visionary scheme. It smells more to me like something that was cooked up after his first arrest (which was an unforeseen debacle that required triage and a bunch of new contingencies).
Trust in recorded media was still relatively stable back then, but progress on generative deepfakes was gaining steam behind the scenes. Running out the clock meant dragging out the conclusion until the public had become acclimatized to "frontier" models retailed through storefront operations (e.g., OpenAI, xAI, Leonardo, Anthropic, etc.).
If Epstein could linger in the public consciousness until then, the stage would be set for the Dead Man's Switch to be revealed to its esoteric audience. Between the Schrödinger's Cat element of Epstein's suicide and the partial release of flight logs, the stage had been set for a total epistemic meltdown on a hair trigger.
The next question, to my mind, is whether or not this has already sufficiently happened, to the degree that Deepfake MAD is prematurely obsolete as a doctrine?
Finally, I want to go on the record as saying that all evidence of this criminal network should have been made fully transparent as soon as it was discovered. Even if the intention was always to "strategically" release it (an "innocent" explanation which is not in evidence), trying to play games invites catastrophe. At the very least, it gives your enemy time to regroup and re-strategize, including the destruction of valuable evidence and witnesses. "Justice delayed is justice denied" for many reasons. Including tactical ones.
And there's nothing "innocent" about this explanation either, even if it were true. If you decide to sit on evidence until you can make a bust, that's one thing. But involving the public in your charade is essentially setting off the Trust Nuke on your own soil, with nothing to show for it. It's a fucking disaster, and would still deserve our scorn.