NVIDIA's newest AI chip isn't supposed to cost less to run than the humans it's meant to replace. Except it does, and that's the problem nobody in the industry wants to talk about.
Why AI Companies Want You to Think They're Building Something Dangerous
There's a quiet PR strategy running underneath every AI safety announcement, Senate hearing appearance, and apocalyptic warning from a tech CEO: fear is good for business. When an AI company tells you its own product might be too powerful to control, it's not a warning — it's a moat.
The BBC has been tracking how frontier AI labs have learned to weaponize existential dread. The playbook works like this: by framing AI as potentially dangerous, companies push regulators toward complex licensing regimes that only well-capitalized incumbents can navigate. OpenAI, Google DeepMind, and Anthropic all lobby for "responsible AI" frameworks that, conveniently, require the kind of safety infrastructure only they can afford to build. Startups and open-source projects get squeezed out not by competition, but by compliance cost.
The tell is in the timing. Safety warnings tend to spike right before funding rounds, regulatory hearings, or competitor launches — not after internal red-team results. One researcher described the dynamic bluntly: the companies most loudly warning about AI risk are the same ones racing hardest to ship. If they actually believed the warnings, they'd slow down. They haven't.
Gobble's Take: When a company tells you to be scared of its own product, check who benefits from the fear.
Source: BBC
The Boring Company Quietly Printing Money Behind the AI Boom
While everyone watches the AI model wars, a much quieter business is having the best year in its history: digital advertising. The New York Times reports that the same AI infrastructure boom driving headlines is also supercharging ad-targeting algorithms — and the companies selling ads are cashing in harder than almost anyone building the underlying models.
The mechanism is straightforward but underappreciated. Better AI means better personalization, which means higher click-through rates, which means advertisers pay more per impression. Google's ad revenue, Meta's ad revenue, and a cluster of smaller programmatic players are all posting numbers that would have seemed impossible three years ago — not because people are clicking more ads, but because the ads are getting eerily good at finding exactly the right person at exactly the right moment.
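To make that chain concrete, here's a minimal back-of-the-envelope sketch using the standard eCPM identity (revenue per thousand impressions = click-through rate × cost per click × 1,000). Every number below is hypothetical, chosen only to illustrate the compounding, and none comes from the Times' reporting:

```python
# Hypothetical illustration of why better ad targeting lifts revenue.
# eCPM (effective revenue per 1,000 impressions) = CTR * CPC * 1000.
# All figures below are invented for the example.

def ecpm(ctr: float, cpc: float) -> float:
    """Revenue per 1,000 impressions, given click-through rate and cost per click."""
    return ctr * cpc * 1000

baseline = ecpm(ctr=0.010, cpc=0.50)  # 1.0% CTR, $0.50/click -> $5.00 eCPM
ai_boost = ecpm(ctr=0.015, cpc=0.65)  # better targeting lifts CTR *and* what
                                      # advertisers will bid per click

print(f"baseline eCPM:          ${baseline:.2f}")      # $5.00
print(f"with better targeting:  ${ai_boost:.2f}")      # $9.75
print(f"revenue lift:           {ai_boost / baseline - 1:.0%}")  # 95%
```

The point of the toy math: a modest bump in click-through rate and a modest bump in per-click bids multiply rather than add, which is how the same impressions can suddenly be worth nearly twice as much.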
There's an irony embedded here that the industry doesn't love to acknowledge: the companies most likely to profit from the AI era aren't necessarily the ones building frontier models. They're the ones who figured out how to bolt AI onto a revenue engine that already worked. The AI labs burn billions. The ad platforms collect the toll.
Gobble's Take: The real AI trade of the decade might not be NVIDIA or OpenAI — it might be whoever owns the pipes the attention flows through.
Source: The New York Times
Meet the People Who Break AI for a Living — and Can't Unsee What They've Found
They call themselves jailbreakers. Their job is to get AI models to say, show, or do things the companies that built them explicitly designed them not to. One researcher who spoke to The Guardian put it simply: "I see the worst things humanity has produced."
These aren't trolls running prompts for fun. Many are contracted red-teamers — hired by the same AI labs whose systems they're breaking — and what they find shapes everything from content policy to model training. The work involves systematically probing models for pathways to harmful outputs: detailed instructions for violence, non-consensual imagery, manipulation scripts targeted at vulnerable users. The jailbreakers find the holes. The labs patch them. Then the jailbreakers find new holes.
What the Guardian's reporting makes clear is how industrialized this has become on both sides. There are now organized communities sharing jailbreak techniques the way security researchers share exploit code — and the sophistication is accelerating. One jailbreaker described spending weeks building a fictional "character" whose in-world logic gradually convinced a model it was operating under different rules. It worked. The gap between what AI companies promise their safety systems can catch and what a determined adversary can actually extract is, by most accounts, still wide.
Gobble's Take: Every "our model is safe" press release is written by people who haven't spent a week talking to the jailbreakers.
Source: The Guardian
Britain Is About to Learn What It Means to Have No AI of Its Own
Rafael Behr's column in The Guardian is the kind of piece that's easy to dismiss as hand-wringing — until you realize the numbers behind it are genuinely alarming. The UK has no frontier AI lab. No sovereign chip manufacturer. No hyperscaler. Every large language model running in British hospitals, schools, courts, and government departments is built, owned, and updated by an American company operating under American law.
Behr's argument isn't that US tech companies are malicious. It's simpler and more uncomfortable: dependence at infrastructure scale is a strategic vulnerability regardless of intent. If OpenAI changes its pricing, the NHS can't negotiate from strength. If the US government restricts AI exports in a future trade dispute, Britain can't route around it. If a model's values drift in ways that conflict with UK regulatory requirements, British institutions are passengers, not drivers.
The EU has at least partially grasped this — pouring money into Mistral, creating AI liability frameworks, mandating data localization. Britain, post-Brexit, has neither the regulatory bloc weight of the EU nor a serious domestic AI champion. The window to build one is closing fast, and right now the strategy appears to be hoping the Americans stay friendly.
Gobble's Take: "Ally" and "supplier you can't replace" are two different things — and Britain is slowly discovering which one it actually has.
Source: The Guardian
Quick Hits
- The next NVIDIA? Motley Fool makes its pick: Analysts are naming one under-the-radar AI chip stock they believe could match NVIDIA's dominance by 2030 — based on its memory bandwidth architecture and data center contract pipeline. The Motley Fool
- AI security hole exploited in 36 hours flat: A critical SQL injection vulnerability in LiteLLM — the open-source tool used by developers to route calls across AI models including GPT-4 and Claude — was weaponized less than two days after researchers published details, exposing API keys and user data across dozens of deployments. The Hacker News
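For readers who don't live in security land: SQL injection happens when attacker-controlled text gets pasted directly into a database query instead of being passed as a bound parameter. The sketch below is a generic, self-contained Python illustration of the bug class and its fix; it is not LiteLLM's actual code, and the table and values are made up.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE api_keys (user TEXT, key TEXT)")
conn.execute("INSERT INTO api_keys VALUES ('alice', 'sk-alice-secret')")

user_input = "nobody' OR '1'='1"  # attacker-controlled string

# VULNERABLE: user input interpolated straight into the SQL string.
# The injected OR '1'='1' clause makes the WHERE match every row.
rows = conn.execute(
    f"SELECT key FROM api_keys WHERE user = '{user_input}'"
).fetchall()
print(rows)  # [('sk-alice-secret',)] -- key leaked despite the bogus username

# SAFE: the same query with a bound parameter. The driver treats the
# input as a literal username, so nothing matches and nothing leaks.
rows = conn.execute(
    "SELECT key FROM api_keys WHERE user = ?", (user_input,)
).fetchall()
print(rows)  # []
```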
In Case You Missed It
Yesterday's top stories:
- One Disney Employee Called Claude 51,000 Times a Day — And Nobody Asked Permission
- Kevin O'Leary Just Got Approved to Build a Data Center That Eats More Power Than All of Utah
- Wall Street Is Spooked: AI Fears Are Quietly Reshaping How Investors Bet on Growth
- Logitech's New Dial Puts Microsoft Office Controls in Your Left Hand
Related reads
Other Gobbles stories on similar themes.
The AI Model So Scary It Got a White House Summons
AI's New Unit of Ambition: The 'Bragawatt' Is a Gigawatt With a God Complex
Fake Pro-Trump Avatars Are Arguing Online—and They're Better at It Than Most Humans
A Canadian-German AI Merger Just Created a $1.2B Rival Aimed Directly at Silicon Valley's Throat
