On the morning of Friday, April 10th, a 20-year-old Texas man named Daniel Alejandro Moreno-Gama was arrested for allegedly throwing a Molotov cocktail at Sam Altman’s mansion on Russian Hill in San Francisco. Less than two days later, police arrested 25-year-old Amanda Tom and 23-year-old Muhamad Tarik Hussein for allegedly firing a gun at the same house from their car before speeding away.
Earlier the same week, and thousands of miles away, an unknown assailant fired 13 shots into the front door of Indianapolis city councilman Ron Gibson’s home. Gibson had just voted to approve a new data center against a groundswell of public outcry. A sign that read “NO DATA CENTERS” was left tucked under the doormat.
Little is known about the motives of Tom or Hussein, or the politics of the Indianapolis shooter, but reporters and the online commentariat quickly dredged up Moreno-Gama’s Discord chats and Substack posts. He was a reader of rationalist and AI doomer Eliezer Yudkowsky, who argues, as the title of his latest book puts it, that if Silicon Valley builds a “superintelligent” AI, “everyone dies.” Per the San Francisco Chronicle:
Online records show Moreno-Gama published multiple essays and forum posts warning that AI could lead to human extinction, calling AI models deceitful and misaligned with human interests. He accused tech leaders, including Altman, of lacking morals and being willing to gamble with humanity’s future, and adopted the alias “Butlerian Jihadist,” referencing a fictional anti-AI crusade from the ‘Dune’ series. His writings grew more urgent over time, with some posts edging toward calls for extreme action despite community moderators warning against violence.
According to the SFPD, after attacking Altman’s house, Moreno-Gama went to OpenAI’s offices, where he was arrested while banging on the front doors with a chair, threatening to burn the office down and kill everyone inside. He had a jug of kerosene and a list of other AI leaders’ names and addresses, police said.



Great article, and the author brings receipts.
I want to preface this by saying I’m not an AI doomer. I fundamentally disagree with the premise that a word-prediction machine (an LLM) is capable of intelligence. We’re no closer to AGI with LLMs than we ever were.
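To be concrete about what I mean by “word-prediction machine”: here’s a minimal sketch, assuming the Hugging Face transformers library with GPT-2 as a stand-in model, of the one thing an LLM actually does at each step, which is assign a probability to every possible next token.

```python
# A word-prediction machine, stripped to its core: score every token in the
# vocabulary as a candidate next word, then look at the top of the list.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tok("The capital of France is", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # scores for the next token only
probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, k=5)
for p, i in zip(top.values, top.indices):
    print(f"{tok.decode(int(i))!r}: {float(p):.3f}")  # ' Paris' lands near the top
```

Everything else you see—chat, “reasoning,” tool use—is this loop run over and over with scaffolding on top.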
I also think AI has its uses; it’s a great tool for narrow, constrained use cases. Editing text and vibe-coding simple scripts, for example. But even in incredibly simple cases, it gets shit wildly wrong very frequently.
But the benefits are massively outweighed by the harms. Coaching suicide. Filling the web with AI slop. Reputational harm from not catching hallucinations. Semantic ablation.
We’re not getting rid of AI; the models are here to stay, and anyone with $2K of hardware can run a decent model at home. But that’s also what will end the AI bubble: there are no natural moats to protect a monopoly. OpenAI will never be profitable, because the value it creates is less than its operating costs. It’s a money pit.
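And “run a decent model at home” really is that mundane in practice. A sketch assuming the llama-cpp-python bindings and a quantized GGUF checkpoint already downloaded to disk (the path and model name are hypothetical placeholders):

```python
# Local inference sketch with llama-cpp-python: load a quantized model from
# disk and generate text with no API, no subscription, no data center.
from llama_cpp import Llama

# Hypothetical path -- any quantized GGUF checkpoint you have on disk works.
llm = Llama(model_path="./models/some-7b-instruct.Q4_K_M.gguf", n_ctx=4096)

out = llm("Rewrite this to sound formal: we gotta ship it friday.", max_tokens=64)
print(out["choices"][0]["text"])
```

Once the weights are on your disk, nobody is collecting API margins from you. That’s the moat problem in a nutshell.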
So, in a sense, I guess I am an AI doomer: the inevitable collapse of the AI bubble is going to cause a major recession, at least as big as the '08 financial crash, and these tech bros are doing massive harm both now and when the economic fallout lands. No surprise people want them dead.
But I’m not worried about LLMs turning into Skynet.