Gobbles

Spain just launched a national agency to regulate AI, and a widely shared post argues that the country's best AI PhDs are choosing government salaries over startup risk. Those minds now become inspectors of technology built elsewhere.


Spain Is Training Its Best AI Minds to Be Inspectors, Not Inventors

Spain just launched a national AI supervision agency (AESIA). According to the original post, the country's best AI PhDs are choosing government jobs over startups because the incentive structure makes it the rational call: lifetime stability versus full financial risk with no safety net. The stated result is that world-class AI talent is being trained to become inspectors of what others build.

Commenters on r/artificial echoed this concern. One noted that "by creating an ecosystem of regulation ahead of a real ecosystem of innovation, a country simply creates a moat around the regulatory environment that only the big players in technology can afford to wade across," adding that "a small company has no chance of paying the overheads of having a compliance officer just to try out their simple prototype." Another observed that European bureaucracy has "successfully killed the risk taking culture," and that Spain's agency will likely end up regulating American companies.

Gobbles Gobble's Take: When the safest path for top technical talent runs through compliance rather than creation, you get a regulatory industry — not an AI one.

Source: r/artificial


Anthropic Keeps Adding Religions to Claude's Moral Curriculum — And the Questions Keep Multiplying

Anthropic, the AI safety company behind the Claude chatbot, has added several more religious traditions to the ethical training shaping Claude's behavior, according to a Gizmodo report. The goal, consistent with Anthropic's "Constitutional AI" approach, is to broaden Claude's understanding of harm and goodness across different cultural and belief frameworks — an attempt to build a model that doesn't reflect just one civilization's moral defaults.
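The published Constitutional AI work describes a critique-and-revise loop driven by a written list of principles, with the revised outputs then used as training data. As a rough, hypothetical illustration of the shape of that loop (the principle texts, prompt wording, and the generate function below are placeholders, not Anthropic's actual constitution or pipeline):

```python
# Illustrative sketch only: a generic "constitution"-style critique-and-revise pass.
# The principles, prompt wording, and the `generate` callable are hypothetical
# stand-ins, not Anthropic's actual constitution or training code.
from typing import Callable, List

PRINCIPLES: List[str] = [
    "Prefer the response least likely to encourage harm to any person.",
    "Prefer the response most respectful of differing religious and cultural beliefs.",
    "Prefer the response that is honest about uncertainty and disagreement.",
]

def constitutional_revision(prompt: str,
                            draft: str,
                            generate: Callable[[str], str],
                            principles: List[str] = PRINCIPLES) -> str:
    """Apply one critique-and-revise pass per principle, in order."""
    revised = draft
    for principle in principles:
        # Ask the model to critique the current draft against one principle.
        critique = generate(
            f"Principle: {principle}\n"
            f"User prompt: {prompt}\n"
            f"Response: {revised}\n"
            "Point out any way the response conflicts with the principle."
        )
        # Then ask it to rewrite the draft in light of that critique.
        revised = generate(
            f"Rewrite the response so it follows the principle.\n"
            f"Principle: {principle}\n"
            f"Critique: {critique}\n"
            f"Response to rewrite: {revised}"
        )
    return revised
```

Note that in this sequential form, later principles revise the output of earlier ones, so ordering alone can shift the result.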

The ambition is genuinely staggering. Human morality is contradictory, culturally specific, and often unwritten. Teaching an AI to navigate it by ingesting religious frameworks raises an obvious, uncomfortable question: when those frameworks clash — and they do, constantly — whose values win the tiebreaker? Anthropic's bet is that exposing Claude to more perspectives produces a more nuanced model, not a more confused one. That's a hypothesis, not yet a proof.

What the effort does signal clearly is that Anthropic believes alignment isn't a checkbox — it's an ongoing, expanding project. Whether "perfect morals" are achievable for any system, human or artificial, is a question philosophers have been failing to answer for millennia.

Gobbles Gobble's Take: Training an AI on every religion simultaneously sounds less like ethics and more like the world's most stressful theology exam — and Claude has to pass it in real time.

Source: Gizmodo


A Startup Claims It Can Guarantee AI Agents Can't Go Rogue — Security Researchers Aren't Convinced

An AI security middleware product called Sentinel Gateway is making a bold claim: that it can make agentic AI security a non-issue by hard-limiting what any AI agent is allowed to do. The pitch is straightforward. If file deletion isn't in the agent's defined scope, the agent cannot delete files, period. If external data sharing isn't permitted, the agent cannot leak your customer database even if an employee explicitly instructs it to. Every action an agent takes gets logged and traced by prompt ID, and the system claims it can detect and flag manipulation attempts in real time.
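Scope-based gating of this kind is easy to picture in miniature. The sketch below is a generic deny-by-default allow-list gateway, not Sentinel Gateway's actual design; the tool names, scopes, and log fields are assumptions made for illustration.

```python
# Generic sketch of deny-by-default, scope-based gating for an AI agent.
# Tool names, scopes, and log fields are hypothetical illustrations,
# not Sentinel Gateway's actual API or internals.
import logging
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, Set, Tuple

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-gateway")

@dataclass
class AgentGateway:
    allowed_scopes: Set[str]                                   # e.g. {"files:read"}
    tools: Dict[str, Tuple[str, Callable[..., Any]]] = field(default_factory=dict)

    def register(self, name: str, scope: str, fn: Callable[..., Any]) -> None:
        """Register a tool along with the scope it requires."""
        self.tools[name] = (scope, fn)

    def call(self, prompt_id: str, tool: str, **kwargs: Any) -> Any:
        """Run a tool only if its scope is granted; log every decision by prompt ID."""
        scope, fn = self.tools[tool]
        if scope not in self.allowed_scopes:
            log.warning("BLOCKED prompt=%s tool=%s scope=%s args=%s",
                        prompt_id, tool, scope, kwargs)
            raise PermissionError(f"{tool} needs scope '{scope}', which is not granted")
        log.info("ALLOWED prompt=%s tool=%s scope=%s", prompt_id, tool, scope)
        return fn(**kwargs)

# An agent granted only read access cannot delete files, whatever the prompt says.
gateway = AgentGateway(allowed_scopes={"files:read"})
gateway.register("read_file", "files:read", lambda path: open(path).read())
gateway.register("delete_file", "files:delete", lambda path: None)
# gateway.call("prompt-123", "delete_file", path="/tmp/x")  -> raises PermissionError
```

Even in this toy form, the catch is visible: the gateway only enforces boundaries someone thought to declare ahead of time.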

The product targets a real and growing fear. Agentic AI — systems that don't just answer questions but take actions, make API calls, read and write files, and send messages — has a fundamentally different threat profile than a standard chatbot. As one security-focused commenter on r/artificial noted, "the threat model shifts from 'what can it say' to 'what can it do.'" When an agent can act autonomously across your stack, a single compromised instruction has a much larger blast radius than a bad chatbot response.

But the community's reception has been skeptical, and appropriately so. The core problem isn't the concept — scope-based permission controls are sound security thinking. The problem is the word "guarantee." Experienced security researchers point out that AI systems fail in the gaps, and that scope limits reduce risk rather than eliminate it. The real test isn't the demo; it's how Sentinel Gateway holds up against red team prompt injection attempts and independent security audits. Those results, so far, aren't public.

Gobbles Gobble's Take: "Guaranteed secure AI agent" belongs in the same category as "unsinkable ship" — compelling until the first edge case hits.

Source: r/artificial


