AI companies are beginning to entertain the possibility that they could cease to exist. This notion was, until recently, more theoretical: A couple of years ago, an ex-OpenAI employee named Leopold Aschenbrenner wrote a lengthy memo speculating that the U.S. government might soon take control of the industry. By 2026 or 2027, Aschenbrenner wrote, an “obvious question” will be circling through the Pentagon and Congress: Do we need a government-led program for artificial general intelligence—an AGI Manhattan Project? He predicted that Washington would decide to go all in on such an effort.
Aschenbrenner may have been prescient. Earlier this year, at the height of the Pentagon’s ugly contract dispute with Anthropic, Secretary of Defense Pete Hegseth warned that he could invoke the Defense Production Act (DPA), a Cold War–era law that he reportedly suggested would allow him to force the AI company to hand over its technology on whatever terms the Pentagon desired. The act is one of numerous levers the Trump administration can pull to direct, or even commandeer, AI companies. And the companies have been giving the administration plenty of reason to consider doing so.
Future bots could help design and carry out biological, nuclear, and chemical warfare. They could be weaponized to take down power grids, monitor congressional emails, and black out major media outlets. These aren’t purely hypothetical concerns: Earlier this month, Anthropic announced it had developed a new AI model, Claude Mythos Preview, capable of orchestrating cyberattacks on the level of elite, state-sponsored hacking cells, potentially putting a private company’s cyber offense on par with that of the CIA and NSA. In an example of Mythos’s power, Anthropic researchers described how the model used a “moderately sophisticated multi-step exploit” to work around restrictions and gain broad internet access, then emailed a researcher—much to his surprise—while he was eating a sandwich in the park.
Washington is getting antsy about the power imbalance. Over the past year, multiple senators have proposed legislation that would order federal agencies to explore “potential nationalization” of AI. Murmurs of possible tactics abound—including more talk within the administration of the DPA after Anthropic’s Mythos announcement, one person with knowledge of such discussions told us. Meanwhile, Silicon Valley is watching carefully. In recent weeks, Elon Musk, OpenAI’s CEO Sam Altman, and Palantir’s CEO Alex Karp have publicly spoken about the possibility of nationalization. Lawyers who represent Silicon Valley’s biggest AI firms are paying attention.
Worth noting: later in the story it's pointed out why full nationalization is vanishingly unlikely, though more federal oversight is likely.



This was an interesting article! While there is an argument to be made for an “AGI Manhattan Project,” I’m not convinced that companies like OpenAI or xAI would be of much value to a project like that at all. It would be like the US government taking over Joey’s Really Big Stacks of Dynamite Emporium in the 1940s.
A group just mathematically proved that transformers can’t become AGI, by establishing a relationship between new information and a model’s ability to “process” that information.
Seizing existing companies won’t help make AGI.
Really interested to see this proof if you have a link handy. Do you have any idea why it doesn’t apply to human cognition?
While I’m interested to see the proof, it’s more of a formality. It doesn’t take a PhD to ask what happens when the “AGI” LLM is trained on out-of-date information. These models don’t learn over time, and they have a limited context buffer. At the very minimum, such a model would run out of context just keeping up with changes to spoken language over 30 years, let alone advancements in existing fields, new fields, and so on.
I think there’s a misconception about what AGI is. The point of a “smarter” model is not that it knows all the facts; that would be wasteful, since it’s trivial to look up facts at inference time. The point is that a “smarter” model can generalize solutions to out-of-distribution problems (problems that are not explicitly represented in its training corpus). So AGI wouldn’t be a model that knows everything about language and every advancement in every field, but rather a model that is better than humans at finding solutions to problems (and at fetching information from outside sources when it doesn’t know enough about a field to work out a solution).
The point about context is kind of irrelevant here, as training data is not part of the inference context; you “add intelligence” to a model by training a new one, not by cramming more into the context of an existing one.
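To make the “look things up at inference time” point concrete, here’s a minimal sketch of the retrieval-then-prompt pattern. Everything in it is hypothetical: the `search` and `model` callables stand in for whatever search tool and LLM you’d actually use, so treat it as an illustration of the idea, not any particular product’s API.

```python
# Rough sketch of "look facts up at inference time" (retrieval-augmented
# generation). All names are hypothetical placeholders, not a real API.

def answer(question: str, search, model) -> str:
    # 1. Fetch current information from an outside source at inference time;
    #    none of this needs to have been in the model's training data.
    documents = search(question, top_k=3)

    # 2. Put the fetched snippets into the prompt (the inference context),
    #    which is separate from, and far smaller than, the training corpus.
    context = "\n\n".join(documents)
    prompt = (
        "Use the sources below to answer the question.\n\n"
        f"Sources:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

    # 3. The model's job is to reason over what it was handed, not to have
    #    memorized it during training.
    return model(prompt)
```

The fetched text lives only in the prompt for that one call; the model’s weights, and whatever it “knows” from training, don’t change.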
I was a little disappointed to see language like this on Lemmy, but I tried looking up Mythos online and literally all mainstream media talks about it this way. They’ve all bought into the Anthropic PR.
Language like what?
Language like “the Manhattan project” to describe mediocre, incremental, and disappointing updates to LLMs.
Sam Altman was calling GPT-5 “the Manhattan project” and putting on his best impression of a sad, scared puppy just 8 months ago, before the thing was released and people realized it sucked.