A Meta contractor just fired 1,100 people for revealing that Ray-Ban smart glasses were recording inside homes, doctor's offices, and around children — then handed the footage to AI trainers to label.
A Meta Contractor Fired 1,100 Whistleblowers After They Found Ray-Ban Glasses Recording Inside Homes and Doctor's Offices
The job sounded routine: label and categorize data collected by Meta's Ray-Ban smart glasses to train AI systems. What the 1,100 contractors actually found were recordings from private homes and medical appointments, and footage involving children — all captured without the subjects' knowledge. When they raised the alarm, the Meta contractor's response was to fire all 1,100 of them.
The mass termination exposes something uglier than a single privacy breach: a structure where the humans closest to the raw data have the least power to do anything about what they find. Meta has spent years positioning Ray-Ban smart glasses as a seamless, stylish entry into augmented reality wearables. This incident suggests the "seamless" part extends to seamlessly recording people who never agreed to be recorded — and that the system to catch that was the workers themselves, who are now gone.
If you own a pair of Ray-Ban Meta glasses, the people hired to watch what your glasses saw just got fired for watching.
Gobble's Take: Meta didn't fix the privacy problem — they fired the people who found it.
Source: r/technology
6% of Claude Users Are Asking an AI Whether to Quit Their Job, Who to Date, and Whether to Move Countries
Anthropic, the AI safety company behind the Claude chatbot, analyzed one million real user conversations and found something its researchers almost certainly didn't expect to lead with: 6 out of every 100 people using Claude are asking it to help them make decisions that will reshape their lives — careers, relationships, countries of residence. Not "help me write a cover letter." Whether to leave the job entirely.
The finding lands awkwardly for a company whose founding pitch is that AI should be safe, honest, and careful. Claude is being asked to weigh in on decisions with consequences that will echo for years, often by people who may have no one else to ask. That's not a product failure — it might actually be a product success — but it surfaces a question the industry has mostly avoided: when an AI becomes the most-consulted voice in someone's life, who is responsible for the answer?
Anthropic built a careful AI. Turns out users want a fearless one.
Gobble's Take: The therapy industry should be more worried about this data than the lawyers are.
Source: r/artificial
The Pentagon Is Signing Classified AI Deals With Private Companies — and Not Saying Much Else
The U.S. Department of Defense has quietly expanded its partnerships with private AI companies, bringing commercial models and talent into classified national security work, according to reporting by The New York Times. The deals cover intelligence analysis and strategic defense applications — work that, by definition, won't be publicly audited or debated.
What makes this acceleration unusual is the speed. The military has historically moved slowly on vendor relationships, especially for sensitive work. The current pace suggests the Pentagon views falling behind on AI capabilities as a more urgent risk than the governance gaps these partnerships create. Private AI companies operate under commercial incentives; the work they're now doing operates under military classification. The overlap between those two realities has no established rulebook.
The most consequential AI deployments happening right now are the ones you'll never hear specifics about.
Gobble's Take: Silicon Valley wanted to change the world — turns out the Pentagon is happy to help, just quietly.
Source: The New York Times
A Tech Worker in China Was Laid Off and Told Point-Blank That AI Was the Replacement — Now Lawyers Are Arguing Whether That's Legal
When the tech worker in question received their termination notice, the company didn't offer a restructuring rationale or budget cuts as cover. They were told their role was being taken over by an AI system — directly, explicitly. The case, now drawing legal scrutiny in China, is one of the first to test whether labor law has anything to say about a human being replaced by a named technology rather than by vague "business conditions."
Chinese labor law, like most employment frameworks worldwide, was written without AI displacement in mind. Courts are now being asked to decide in real time whether a company has obligations when the replacement isn't a cheaper human or an outsourced team, but a model. The outcome will set precedent not just in China — it will be watched by employment lawyers and tech companies across every jurisdiction where AI replacement is accelerating. Which is all of them.
The legal system built to protect workers from being treated like machines is now being asked what happens when they're replaced by one.
Gobble's Take: Every jurisdiction's labor law is about to get stress-tested by a technology that moves faster than legislatures.
Source: NPR
Quick Hits
- $21 saved across 9,200 AI tasks — by routing them smarter: A developer built a router that automatically sends each AI task to the cheapest capable model, cutting total spend to $0.14 per task and saving $21 overall — a proof-of-concept that model selection, not prompt engineering, may be the next cost frontier. r/artificial
- Microsoft is trying to get lawyers to trust an AI agent embedded in Word: The new tool is designed to assist with legal document work directly inside Word — a high-stakes pitch, given that legal errors don't get a "suggest an edit" button. The Verge
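The routing idea in the first quick hit — send each task to the cheapest model that can handle it — can be sketched in a few lines. Everything below (the model names, prices, and the difficulty scale) is invented for illustration and does not come from the project described:

```python
# Minimal sketch of cost-based model routing. Model names, per-task
# prices, and the capability scale are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    cost_per_task: float  # dollars per task, assumed flat for simplicity
    capability: int       # higher = handles harder tasks

MODELS = [
    Model("small", 0.01, 1),
    Model("medium", 0.05, 2),
    Model("large", 0.25, 3),
]

def route(task_difficulty: int) -> Model:
    """Return the cheapest model whose capability covers the task."""
    capable = [m for m in MODELS if m.capability >= task_difficulty]
    if not capable:
        raise ValueError("no model can handle this task")
    return min(capable, key=lambda m: m.cost_per_task)
```

With these made-up numbers, `route(1)` picks "small" while `route(3)` falls through to "large" — the savings in the story come from the fact that most real tasks land in the cheap tier.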
Related reads
Other Gobbles stories on similar themes.
Apple Tests Multiple Smart Glasses Designs
A Self-Driving Car Was Just Burned to the Ground in San Francisco
One Disney Employee Called Claude 51,000 Times a Day — And Nobody Asked Permission
The Fake Grandmothers and Invented Fathers Pushing Political Narratives on Your Feed
Get Tech Gobbles in your inbox
Free daily briefing. No spam. Unsubscribe anytime.
