So, the tech gods at OpenAI finally have adult supervision.
Let that sink in. The company on a mission to birth a digital deity, the one whose cofounder literally led chants of "Feel the AGI!", now needs an "independent expert panel" to tell them if they've actually done it. This little detail, tucked into their revised deal with Microsoft (Microsoft and OpenAI revise deal terms, require external panel to evaluate AGI claims), is the most revealing thing to happen in AI this year.
This is a leash. No, 'leash' isn't strong enough—it's a gilded cage built by Microsoft to keep its $13 billion pet from running wild and declaring itself God without permission. For years, we've been fed this myth of Sam Altman and his crew as visionaries steering humanity toward a new dawn. Now, the truth is out: their biggest investor got so spooked by the AGI-or-bust rhetoric that they wrote a "call your parents before you invent Skynet" clause into the contract.
Can you even imagine that first panel meeting? A bunch of academics and suits sitting around a table, staring at a command line, while an OpenAI engineer nervously types, "Are you AGI?" and the model spits back, "As a large language model..." It’s a bureaucratic solution to an existential question, a committee formed to measure the soul of a new machine. What are their metrics? Does it have to cure cancer, or just pass the bar exam in every state simultaneously? Who are these "experts" anyway? And are we really supposed to believe they can't be bought, influenced, or just plain bamboozled?
Saving the World, One Press Release at a Time
While Microsoft was busy installing a babysitter, OpenAI was busy rebranding itself as the world’s most valuable charity. They’ve "recapitalized," turning the for-profit arm into a "public benefit corporation" controlled by a nonprofit "Foundation." This Foundation is now sitting on equity valued at around $130 billion.
Let's translate this from PR-speak into English. They've created a corporate structure so convoluted it makes your head spin, all to maintain the illusion that they’re not just another hyper-capitalist enterprise chasing a trillion-dollar valuation. They even got the Attorneys General of California and Delaware to sign off on it, which tells you this wasn't just some friendly whiteboard session. This was a negotiation.
And what will this benevolent Foundation do with its mountain of cash? They've pledged $25 billion to "curing diseases" and building "AI resilience." It’s a classic tech-billionaire move: create a world-altering, potentially catastrophic technology, then promise to use a fraction of the profits to clean up the mess. The same guys who signed a statement putting AI extinction risk on par with nuclear war are the ones building the bomb, and we're supposed to just… trust them?
This is the core of the AGI conspiracy theory, as one MIT Tech Review piece, How AGI became the most consequential conspiracy theory of our time, rightly called it. It’s a belief system. You have the prophets (Altman, Hassabis) promising utopia—we’ll travel the stars!—and the doomsayers (also Altman, Hassabis) warning of hellfire. By selling both the dream and the nightmare, they justify everything: the insane energy consumption, the monopolistic power, the dizzying down payments on a future that doesn't even exist. It's a perfect, self-sustaining hype loop.
Your New Intern Is a Compulsive Liar
Amidst all this corporate drama, Sam Altman is still out there, selling the timeline. By 2026, we’ll all have AI "research interns." By 2028, they’ll be "fully independent researchers."
Give me a break. We're talking about a technology that still, to this day, confidently makes things up. We call them "hallucinations," a soft, almost psychedelic term for what is, essentially, digital bullshit. An intern that lies to you isn't an asset; it's a liability. An "independent researcher" that invents its sources would be fired and academically disgraced. Yet OpenAI plans to productize this very flaw and call it progress.
An AI that hallucinates ain't a researcher; it's a chaos agent. Imagine a pharmaceutical company using one of these "AI researchers" to analyze trial data. What happens when it confidently omits adverse side effects or invents a correlation that sends human scientists down a billion-dollar dead end? Who's responsible then? The goalposts will just keep moving, and of course the money will keep flowing.
Then again, maybe I'm the crazy one. Maybe a world where we have to double-check the work of our "autonomous" digital colleagues is the future we deserve. It just feels less like a leap into a new age of intelligence and more like a descent into a new age of verification, where the hardest job isn't discovery, but simply figuring out what's real.
The Emperor's New Algorithm
When you strip away the messianic language, the billion-dollar valuations, and the apocalyptic warnings, what’s really left? AGI isn't a destination we're approaching. It's a marketing slogan. It's the narrative that keeps the GPUs spinning and the venture capital flowing. The new Microsoft deal proves it. This isn't about humanity's future; it's about protecting a massive investment from a CEO who might just be starting to believe his own hype. The "independent panel" isn't there to verify God; it's there to make sure the stock price doesn't collapse when He fails to show up on schedule.