Gobbles Gobbles


Tech Gobbles

Google just picked winners and losers in the AI chip wars—Marvell's stock jumped 12% while Broadcom lost $20 billion in market cap before lunch.


Fake Pro-Trump Avatars Are Arguing Online—and They're Better at It Than Most Humans

Last week, users on X and Facebook started noticing something off. Hundreds of accounts—blond women in red hats, mustached men clutching MAGA flags—were liking, sharing, and commenting in perfect synchrony, echoing identical phrases like "Trump 2024 or bust" with unsettling fluency. Security researchers traced the swarm to a single coordinated operation: AI-generated persona farms running at industrial scale.

These weren't the crude bots of 2016. Each avatar had a backstory—family photos, job histories, even pet names—stitched together from scraped real profiles and AI image generators. One account, "PatriotMom47," posted rally photos of her "kids" that turned out to be deepfake composites of stock imagery. Security firms say the campaign boosted engagement on divisive posts by 300%, slipping past platform filters because the avatars didn't post like robots—they argued, they joked, they pushed back. Researchers say this is the first documented case of AI persona farms operating at political scale.

If a thousand people online agree with you, statistically some of them never existed—and that number is only going up.

Gobbles Gobble's Take: Your next viral political post might have been herded there by ghost accounts—check the profile before you hit share.

Source: The New York Times


A Single Crafted API Call Can Hijack Anthropic's Entire AI Supply Chain

A security researcher revealed last week that Anthropic's MCP—the Model Context Protocol that governs how its Claude AI models receive instructions and talk to external tools—contains a design flaw enabling remote code execution. Translation: a hacker sends one poisoned request, and they're inside the servers where billion-dollar models get trained, tested, and packaged for enterprise customers.
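The details of the flaw aren't public, but MCP itself rides on JSON-RPC: a client asks a server to run a named tool with arbitrary arguments. Here's a minimal, purely illustrative sketch (the tool names and allowlist are hypothetical, not Anthropic's code) of why a single crafted call is the attack surface—and why strict validation of what the caller names is the first line of defense:

```python
import json

# Illustrative sketch only. MCP messages are JSON-RPC 2.0;
# a tool invocation from a client looks roughly like this.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "read_file", "arguments": {"path": "notes.txt"}},
}

# Hypothetical server-side allowlist of tools it's willing to run.
ALLOWED_TOOLS = {"read_file", "web_search"}

def dispatch(raw: str) -> dict:
    """Naive dispatch loop: the danger is trusting caller-supplied params."""
    msg = json.loads(raw)
    name = msg.get("params", {}).get("name", "")
    # Without a check like this, a crafted "name" or argument payload
    # could steer execution into code paths the server never intended.
    if name not in ALLOWED_TOOLS:
        return {"jsonrpc": "2.0", "id": msg.get("id"),
                "error": {"code": -32602, "message": "unknown tool"}}
    return {"jsonrpc": "2.0", "id": msg.get("id"),
            "result": {"ok": True, "tool": name}}

print(dispatch(json.dumps(request)))
print(dispatch(json.dumps({"jsonrpc": "2.0", "id": 2,
                           "method": "tools/call",
                           "params": {"name": "rm -rf /"}})))
```

The point of the sketch: in a protocol where one message names the code to run, everything hinges on how paranoid the server is about that message.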

Anthropic, the AI safety lab founded by former OpenAI researchers, confirmed the vulnerability affects cloud infrastructure used by enterprises worldwide. What makes this worse than a typical bug is where it sits: the AI supply chain. Get in at this layer and you're not just stealing data—you're in a position to quietly poison models before they ship downstream into customer service bots, coding assistants, and medical tools. No exploits have been confirmed in the wild, but the emergency patch Anthropic rushed out Monday signals how seriously they're taking it.

One wrong API call, and your AI assistant becomes someone else's agent.

Gobbles Gobble's Take: If you're building a business on third-party AI, this is your reminder to audit what's actually running underneath it.

Source: The Hacker News


AI Chatbots Give You Instant Answers—and Slowly Kill Your Ability to Find Them Yourself

Watch heavy AI chatbot users for long enough and a pattern emerges: they start skipping steps. Stop working through the problem. Let the bot fill the gap. A BBC investigation tracking 500 users over six months found that people who queried AI tools daily recalled 25% fewer facts when tested without them—and scored 15–20% lower on critical thinking tasks requiring original problem-solving.

The mechanism isn't laziness; it's neurology. When your prefrontal cortex—the part of your brain that reasons through hard problems—gets a polished answer delivered in two seconds, it idles. Do that enough times and the muscle atrophies. Researchers found students using AI tools in classrooms scored well on multiple-choice tests but fell apart on essays requiring synthesis. The smarter the bot gets at filling gaps, the less practice your brain gets at closing them.

The most dangerous feature of a great AI isn't what it gets wrong—it's how effortlessly it makes you stop trying.

Gobbles Gobble's Take: Try solving your next hard problem without the chatbot—your brain needs the reps more than you need the shortcut.

Source: BBC


Google Picked Marvell Over Broadcom—and Erased $20 Billion in an Afternoon

Marvell CEO Matt Murphy didn't need a press release. He needed Google. According to a KeyBanc analyst report that landed Friday, Google's next-generation AI accelerators will run on Marvell's custom silicon instead of Broadcom's—and by Monday morning, the market had issued its verdict: Marvell up 12% to $85 a share, Broadcom down 8% to $142, with $20 billion in market cap gone before the closing bell.

The specifics matter. Google's TPU v6 chips reportedly swapped out Broadcom's Jericho networking switches for Marvell's alternatives, which handle roughly twice the AI workload per watt at lower cost. For Broadcom—long the unchallenged king of data center networking—losing Google's nod on a next-gen platform is a strategic gut punch, not just a quarterly miss. Marvell already supplies Amazon and Microsoft, but landing Google's hyperscaler stamp at this scale rewrites the pecking order. Google spends roughly $100 billion annually on AI infrastructure. When they pick a supplier, Wall Street listens.

In AI chips, one customer's whisper moves mountains of money.

Gobbles Gobble's Take: Your next AI gadget's speed—and its price tag—hinges on who Google taps next; Marvell just jumped to the top of that watch list.

Source: Barron's

