AI companies are beginning to entertain the possibility that they could cease to exist. This notion was, until recently, more theoretical: A couple of years ago, an ex-OpenAI employee named Leopold Aschenbrenner wrote a lengthy memo speculating that the U.S. government might soon take control of the industry. By 2026 or 2027, Aschenbrenner wrote, an “obvious question” will be circling through the Pentagon and Congress: Do we need a government-led program for artificial general intelligence—an AGI Manhattan Project? He predicted that Washington would decide to go all in on such an effort.

Aschenbrenner may have been prescient. Earlier this year, at the height of the Pentagon’s ugly contract dispute with Anthropic, Secretary of Defense Pete Hegseth warned that he could invoke the Defense Production Act (DPA), a Cold War–era law that he reportedly suggested would allow him to force the AI company to hand over its technology on whatever terms the Pentagon desired. The act is one of numerous levers the Trump administration can pull to direct, or even commandeer, AI companies. And the companies have been giving the administration plenty of reason to consider doing so.

Future bots could help design and carry out biological, nuclear, and chemical warfare. They could be weaponized to take down power grids, monitor congressional emails, and black out major media outlets. These aren’t purely hypothetical concerns: Earlier this month, Anthropic announced it had developed a new AI model, Claude Mythos Preview, capable of orchestrating cyberattacks on the level of elite, state-sponsored hacking cells, potentially putting a private company’s cyber offense on par with that of the CIA and NSA. In an example of Mythos’s power, Anthropic researchers described how the model used a “moderately sophisticated multi-step exploit” to work around restrictions and gain broad internet access, then emailed a researcher—much to his surprise—while he was eating a sandwich in the park.

Washington is getting antsy about the power imbalance. Over the past year, multiple senators have proposed legislation that would order federal agencies to explore “potential nationalization” of AI. Murmurs of possible tactics abound—including more talk within the administration of the DPA after Anthropic’s Mythos announcement, one person with knowledge of such discussions told us. Meanwhile, Silicon Valley is watching carefully. In recent weeks, Elon Musk, OpenAI’s CEO Sam Altman, and Palantir’s CEO Alex Karp have publicly spoken about the possibility of nationalization. Lawyers who represent Silicon Valley’s biggest AI firms are paying attention.

Worth noting: later in the story, it's pointed out why full nationalization is vanishingly unlikely, though more federal oversight is likely.

  • Kwakigra@beehaw.org

    This is kind of unprecedented. Usually a government only considers nationalizing an industry after it’s established. LLMs are still in the speculative pre-adoption phase, and unlike many other technologies from the last century, LLMs are not very useful at anything other than obfuscating accountability. This is great for racketeers and infuriating for the vast majority of people who have been outspoken in refusing to accept the worthless garbage LLMs can print on demand.

    This is a huge problem for LLMs, as they cost more to run than they can possibly produce. The only value proposition is technically existing in industries which are totally speculative and require no productivity other than from their salespeople. LLMs can only last for as long as our economy remains fundamentally fraudulent. Making a public bet on LLMs to keep the fraud up is a massive risk that the people taking it have never had to worry about understanding.