Tech Gobbles
Listen to today's tech podcast

Marc Andreessen — the billionaire who helped bankroll the AI boom — just got laughed off the internet for not knowing how AI actually works.


The Man Who Funded the AI Boom Doesn't Know How AI Works

Marc Andreessen, the venture capitalist whose 2023 "techno-optimist manifesto" helped set off the current AI frenzy, shared a lengthy "custom prompt" on Monday to show off his AI skills. The prompt opened by telling the chatbot it was a "world class expert in all domains" whose "intellectual firepower, scope of knowledge, incisive thought process, and level of erudition are on par with the smartest people in the world." The internet was amused. Then readers reached the part where he instructed the AI to "never hallucinate or make anything up," and the laughter got louder.

Hallucinations aren't a confidence problem you can talk a chatbot out of. They're a structural feature of how large language models work: models fill gaps with plausible-sounding details because that's what they're built to do, not because no one asked them nicely enough to stop. Journalist Karl Bode put it bluntly on Bluesky: "Yes, you can just demand that the LLM not make errors. That's definitely how the technology works." He followed up: "I know this isn't a unique observation but these gentlemen are in absolutely no way remarkable outside of their good fortune." Defector editor Alberto Burneko went further, diagnosing the prompt as a symptom of "AI psychosis" — a phenomenon where users spiral into their own delusions while treating the chatbot as an oracle. "You can't make an AI chatbot know everything in the world by telling it to know everything in the world," Burneko wrote.

The sharpest sting isn't the naivety — it's who it's coming from: the person whose investment bets have done more than almost anyone else's to shape which AI gets built and how fast.

Gobbles Gobble's Take: The people deciding which AI gets a billion dollars apparently believe you can fix hallucinations by asking nicely — which explains a lot.

Source: r/artificial


AI Agents Don't Fail with Error Messages — They Fail by Confidently Finishing the Wrong Task

The AI agent hype cycle promises autonomous workflows that handle the boring stuff while you focus on the big picture. The reality, according to developers actually running these systems in production, is messier: agents don't crash — they drift, invent, and stall in ways that are hard to catch precisely because they look like they're working.

One practitioner laid out the failure modes that never make it into the demos. "Context bleed" is when an agent carries memory from a previous task into a new one; by step six of a ten-step workflow, it's confidently producing wrong outputs that are plausible enough to slip past review. Agents also don't say "I don't know" — they fill gaps. In outreach automation, that means a personalized message referencing a detail the model simply invented. The third failure is structural: build a pipeline that's 90% autonomous, and the 10% that needs human review piles up silently. Two days later, 47 items are waiting and the entire pipeline is stalled. "The workflow needed a notification system before it needed the AI," one developer noted. Others in the thread flagged compounding small errors — each step defensible on its own, the chain leading somewhere nobody intended — and agents completing the wrong task entirely because the goal was underspecified rather than because the model was incapable.
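The silent-backlog failure is the most mechanical of the three, and the easiest to sketch. Here is a minimal, hypothetical illustration (the `ReviewQueue` class, threshold, and item names are all invented for this example, not taken from the thread): a review queue that alerts once pending items cross a threshold, instead of letting the "10% that needs human review" pile up unnoticed.

```python
from dataclasses import dataclass, field

@dataclass
class ReviewQueue:
    """Hypothetical queue for the non-autonomous slice of a pipeline."""
    alert_threshold: int = 5
    items: list = field(default_factory=list)
    alerts: list = field(default_factory=list)

    def submit(self, item: str) -> None:
        self.items.append(item)
        # The fix the developers describe: notify a human *before* the
        # backlog stalls the pipeline, rather than discovering 47 waiting
        # items two days later.
        if len(self.items) >= self.alert_threshold:
            self.alerts.append(f"{len(self.items)} items awaiting human review")

queue = ReviewQueue(alert_threshold=5)
for i in range(7):
    queue.submit(f"outreach-draft-{i}")

print(queue.alerts[0])  # fires the moment the backlog crosses the threshold
```

The point of the sketch is the ordering: the notification logic is plain, boring plumbing, and it has to exist before the AI layer is trusted with volume.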

The consistent thread: these aren't model failures. They're systems failures. The AI layer is usually the least broken part of an AI agent.

Gobbles Gobble's Take: Before you automate your next workflow with AI, make sure you've designed the human part first — because the robot will finish the job whether or not it was the right one.

Source: r/artificial


Quick Hits

  • Nintendo raising Switch 2 prices: The company has announced a price increase for the Switch 2, adding to the cost pressure already hitting the gaming hardware market. The Verge
  • Russia's answer to Starlink: Russia has unveiled Rassvet, a satellite internet constellation designed to give the country a communications network independent of Western infrastructure. WIRED

In Case You Missed It

Yesterday's top stories:

Was this briefing useful?

One tap helps Gobbles learn what to cover more carefully.

Get Tech Gobbles in your inbox

Free daily briefing. No spam. Unsubscribe anytime.

See something wrong? Report an inaccuracy