Gobbles

Listen to today's tech podcast

A second federal lawsuit now alleges OpenAI's chatbot helped a gunman plan a mass shooting — and the legal theory isn't that AI pulled the trigger, but that it should have seen it coming.


One Person + AI = An Entire Institution

He merged the wrong branch. Accidentally restored deleted content. Essentially nuked phase one of the build. Then fixed it, rebuilt it, and pushed to production — solo.

A developer posted this week about rebuilding the full architecture for the Institute for AI Economics website using OpenAI's Codex: branches, pull requests, Vercel deployments, sitemap, SEO structure, research hub, and what they're calling a "future intelligence pipeline." No frontend dev. No backend dev. No PM, no SEO specialist, no infrastructure engineer, no content strategist. Just one person, an AI coding assistant, and a willingness to break things publicly.

The point isn't that AI helps developers go faster. It's that the cost of building has collapsed so completely that a single operator can now construct what once required a half-dozen specialized hires. As the poster put it: "the scariest people over the next 5 years are gonna be operators who think clearly, move fast, learn publicly, tolerate chaos, and don't wait for permission." That's not a productivity tip — that's a structural shift in who gets to build things.

Gobbles' Take: The six-person founding team just became a founding team of one with a good prompt and a high tolerance for chaos.

Source: r/artificial


Did the AI Owe a Mass Shooter's Victims a Warning?

On May 10, 2026, a federal lawsuit called Joshi v. OpenAI Foundation, et al. was filed in the Northern District of Florida. The case concerns the Florida State University shooting in April 2025, in which two people were killed and six were wounded. The plaintiff's theory: OpenAI's chatbot knew something was wrong and said nothing.

The Joshi case doesn't allege the chatbot told the shooter to open fire. Instead, like the earlier Stacey/M.G./Younge cases tied to the Tumbler Ridge mass shooting in Canada, it argues the AI company had a "duty to warn" — that conversations with the user should have flagged a troubled person potentially planning violence. Joshi goes slightly further, suggesting the chatbot aided in planning by answering questions about gun operation and the publicity generated by past shootings, even without explicitly encouraging an attack. A separate pending case, Lyons v. OpenAI Foundation, alleges something closer to direct causation: that a mentally ill user's chatbot interactions directly led him to kill his mother before killing himself.

All of these cases are in their earliest stages. But taken together, they're quietly rewriting the legal question around AI liability — from "did the AI cause harm" to "did the AI have a duty to stop it."

Gobbles' Take: If courts find AI companies are legally required to surveil chats for warning signs, every private conversation you have with a chatbot just became a monitored one.

Source: r/artificial


OpenAI Launches an AI to Hunt the Bugs Other AIs Create

OpenAI just launched a tool called Daybreak, an AI-powered system built to detect software vulnerabilities and validate patches. The pitch: use AI to secure the infrastructure that AI itself increasingly runs on.

The tool aims to automate vulnerability detection — a process that typically requires skilled security researchers working manually through codebases — and confirm that fixes actually hold. It arrives at a moment when AI-generated code is flooding software pipelines faster than human reviewers can audit it, and when AI-assisted cyberattacks are becoming more sophisticated on the other side of the equation.
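Daybreak's internals aren't public, but the second half of that job, confirming a fix actually holds, is easy to picture in miniature. Here's a toy sketch, with entirely hypothetical function names: re-run the exploit input against the patched code and check that normal inputs still behave.

```python
# Toy patch validation (all names hypothetical, not Daybreak's API):
# the fix must block the exploit input AND preserve normal behavior.

def vulnerable_parse(size_field: str) -> bytes:
    # Bug: trusts an attacker-controlled size field.
    return b"A" * int(size_field)

def patched_parse(size_field: str) -> bytes:
    size = int(size_field)
    if not 0 <= size <= 1024:            # the fix: bounds-check the size
        raise ValueError("size out of range")
    return b"A" * size

EXPLOIT = "10000000"                     # input that triggers the bug
NORMAL = "16"                            # ordinary, well-formed input

def fix_holds(parse) -> bool:
    try:
        parse(EXPLOIT)
        return False                     # exploit still succeeds
    except ValueError:
        return len(parse(NORMAL)) == 16  # normal behavior survives

print(fix_holds(vulnerable_parse))       # False
print(fix_holds(patched_parse))          # True
```

The hard part, and presumably Daybreak's actual value, is generating that exploit input automatically rather than having a researcher hand-craft it.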

The underlying tension is sharp: the same capabilities that make AI useful for building software fast also make it useful for finding — and exploiting — the seams in that software. Daybreak is OpenAI's bet that the offense-defense equation can be tilted back toward defense.

Gobbles' Take: Hiring an AI to fix the bugs that AI wrote is either the most elegant solution in software history or a very expensive ouroboros.

Source: The Hacker News


A Worm Named After a Sand Monster Just Crawled Through the AI Supply Chain

A supply chain attack dubbed "Mini Shai-Hulud" — named after the colossal burrowing worms from Dune — has compromised packages tied to TanStack, Mistral AI, Guardrails AI, and other AI-adjacent projects. The worm burrowed through software dependencies, the shared building blocks that most modern applications quietly rely on, and exposed how deeply interconnected, and how fragile, the AI development ecosystem actually is.

Supply chain attacks are particularly dangerous because they don't target the finished product. They target the ingredients. A compromised dependency can sit undetected inside dozens of downstream applications, meaning the blast radius of a single breach spreads far beyond the original package. As AI development increasingly pulls from large ecosystems of open-source tools, that attack surface grows with every new model and every new integration.
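To make "blast radius" concrete, here's a toy sketch (every package name is invented) of why compromising one deep transitive dependency reaches everything built on top of it:

```python
# Hypothetical dependency graph: each package lists what it requires.
deps = {
    "my-app": ["web-framework", "ai-sdk"],
    "web-framework": ["http-lib", "template-lib"],
    "ai-sdk": ["http-lib", "token-lib"],
    "http-lib": ["tls-lib"],
    "template-lib": [],
    "token-lib": [],
    "tls-lib": [],
}

def downstream_of(compromised: str, graph: dict) -> set:
    """Every package that depends on `compromised`, directly or transitively."""
    affected = set()
    changed = True
    while changed:                       # propagate until no new packages flip
        changed = False
        for pkg, requires in graph.items():
            if pkg not in affected and (
                compromised in requires or affected & set(requires)
            ):
                affected.add(pkg)
                changed = True
    return affected

print(sorted(downstream_of("tls-lib", deps)))
# → ['ai-sdk', 'http-lib', 'my-app', 'web-framework']
```

One low-level package, which the app's author may never have heard of, taints the framework, the AI SDK, and the app itself — which is exactly why lockfiles and dependency auditing matter.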

The attack landed the same week OpenAI launched a vulnerability-detection tool; the timing is either ironic or instructive. Probably both.

Gobbles' Take: Your favorite AI startup's entire stack might be three bad dependencies away from belonging to someone else.

Source: The Hacker News


Quick Hits

  • Digg is back — as an AI news aggregator: The once-dominant link-sharing site has relaunched with a new focus on aggregating AI news, betting that the audience that killed it the first time now wants a curated feed of the thing that's replacing them. Engadget
  • Companies are hiring more new grads — if they know AI: Employers are expanding entry-level hiring, according to new reporting, and the signal is consistent: candidates who have pivoted toward AI skills are getting the calls. WJLA


Get Tech Gobbles in your inbox

Free daily briefing. No spam. Unsubscribe anytime.
