On Friday, April 10, 2026, a 20-year-old named Daniel Moreno-Gama traveled from Spring, Texas to San Francisco. He carried an incendiary device and a document listing AI executives and investors as targets. He arrived at the Pacific Heights home of OpenAI CEO Sam Altman and hurled a Molotov cocktail at the front gate. About an hour later, he was arrested outside OpenAI's headquarters, where he was trying to smash the building's glass doors with a chair and threatening to burn it down.

The next morning, two more people -- a 23-year-old and a 25-year-old -- were arrested after discharging a firearm near Altman's other residence in San Francisco's Russian Hill neighborhood.

Altman posted a photo of his husband and young child on X. "I am sharing a photo in the hopes that it might dissuade the next person from throwing a Molotov cocktail at our house," he wrote.

It didn't work. The internet's reaction, particularly from younger users, was immediate and ugly:

"He's not scared enough."

"Based do it again."

"FREE THAT MAN HE DID NOTHING WRONG."

"Finally some good news on my feed."

This is where we are now.

The Legitimate Grievances

Let's get something straight before we go any further. AI companies are doing real harm, right now, to real people, without their consent -- nobody opted in; it was forced on everyone. That is, quite literally, the entire premise of this publication.

The grievances driving anti-AI sentiment are not invented. They are not fringe. They are mainstream:

- AI companies scraped the entire internet -- your writing, your art, your code, your face -- to train models worth hundreds of billions of dollars. Nobody consented to this. Nobody was compensated.

- Job displacement is accelerating. AI was cited in more than 55,000 U.S. layoffs in 2025 -- twelve times the number attributed to AI just two years earlier. Gen Z graduates face a cratering job market where 43% are underemployed.

- AI is being weaponized against individuals.
  A stalking victim recently sued OpenAI after her ex-boyfriend used ChatGPT to fabricate a psychological profile of her and distribute it to her friends and family -- with the chatbot validating his grievances.

- Data centers are consuming communities. At least $18 billion in data center projects have been blocked and another $46 billion delayed by local opposition. Communities cite higher utility bills, water depletion, noise, and destroyed green space.

- AI companies themselves stoke apocalyptic rhetoric when it serves their fundraising needs, then act surprised when people take the apocalypse seriously.

A Gallup poll found that less than a fifth of Gen Z feels hopeful about AI. About a third say it makes them angry. Nearly half say it makes them afraid.

These are rational responses to real conditions. None of this justifies a Molotov cocktail.

The Movement Fractures

The attack put two organizations in the crosshairs: Pause AI and Stop AI. Despite the similar names, they are different groups with different approaches, and their split tells you everything about the tensions inside the anti-AI movement.

Pause AI was founded in Utrecht, Netherlands, in May 2023 by Joep Meindertsma. The name was inspired by the Future of Life Institute's open letter calling for a pause on "giant AI experiments." It's a global grassroots movement with local chapters, including a separate U.S. organization led by Holly Elmore, a Berkeley-based evolutionary biologist from Harvard. Pause AI explicitly rejects violence and civil disobedience. They protest, they lobby Congress, they organize demonstrations.

Stop AI was founded in 2024 by Sam Kirchner and Guido Reichstadter, both of whom had been involved with Pause AI before Elmore kicked them out over tactical disagreements. Stop AI embraced direct action -- civil disobedience, confrontations at AI company headquarters, flash subpoena deliveries.
Reichstadter staged a hunger strike outside Anthropic's headquarters and chained himself to OpenAI's security fence.

The split was always about tactics. Pause AI believes in democratic change through democratic means. Stop AI believes that conventional advocacy is too slow for the stakes involved. And inside Stop AI, the debate went further -- cofounder Sam Kirchner reportedly suggested abandoning nonviolence entirely during an internal dispute. He assaulted another leader and then disappeared. He is still missing.

Moreno-Gama, the alleged attacker, had posted on Pause AI's public Discord server: thirty-four messages over two years, none containing explicit calls to violence. He also joined Stop AI's public forum, introduced himself, and asked, "Will speaking about violence get me banned?" When told yes, he went silent.

Both organizations say he was never a member. Both condemned the attack immediately and unequivocally.

The Radicalization Question

Here's the uncomfortable truth that nobody wants to grapple with: when you spend years telling people that AI poses an existential threat to humanity -- that it could kill everyone, that we're "close to midnight," that the people building it are risking human extinction -- some small percentage of your audience is going to take you literally.

Nirit Weiss-Blatt, an independent researcher who writes the newsletter AI Panic, put it plainly: "Young, anxious followers, looking for purpose, can be radicalized by apocalyptic AI rhetoric, even without explicit calls to violence."

This is not about assigning blame. Pause AI didn't throw a firebomb. Stop AI didn't pull a trigger. The alleged attacker made his own choices and will face the consequences, including potential federal domestic terrorism charges. But the question of rhetoric matters.
When Eliezer Yudkowsky writes a book called If Anyone Builds It, Everyone Dies, and when prominent AI safety researchers describe the situation in terms of human extinction, they are speaking to an audience that includes deeply anxious young people scrolling through Instagram and reading about how their future has been stolen by tech billionaires.

The Luddites weren't anti-technology. They were against the unmitigated introduction of machinery that was destroying their livelihoods without their consent. Their concerns were dismissed. They turned to violence. The parallel should make everyone involved pause.

What Accountability Actually Looks Like

The AI industry needs accountability. Desperately. But accountability means laws, regulation, consent frameworks, and democratic oversight. It means the FTC doing its job. It means Congress passing actual legislation instead of holding endless hearings. It means courts enforcing existing rights.

Accountability does not mean firebombing someone's house while their family is inside. It does not mean manifestos and hit lists. It does not mean celebrating attempted murder on TikTok.

The anti-AI movement is at a genuine crossroads. The legitimate grievances are massive and growing. The industry's resistance to regulation is real. The harm is ongoing and accelerating. But the path forward is through democratic institutions -- fractured and slow as they are -- not through violence.

Valerie Sizemore, a co-leader of Stop AI, told Fortune that after the attack, some members were anxious about being associated with violence. "But personally, I think it's all the more important for the nonviolent organizing we're doing, to give people something other than violence to do."

She's right. And the AI industry would do well to give people something other than rage to feel.
Because right now, the gap between what AI companies promise and what they deliver -- between the utopian fundraising pitches and the cratered job market, between the "safety" rhetoric and the scraping of everything you've ever created without asking -- is filled with nothing but anger.

Nobody asked if you wanted your life fed into a training dataset. Nobody asked if you wanted your job automated. Nobody asked if you consented to having your face, your voice, your words, your creative work stolen to build products worth more than the GDP of most countries.

The answer to that theft is not violence. But the theft itself has to stop.

---

Related:

- Stealing Isn't Innovation
- AI Training Data Theft
- AI Job Replacement Crisis 2026
- NYT v. Perplexity: Journalism Theft