BBB Warns Hudson Valley Shoppers: Prom Dress Sites Are Collecting Money, Then Vanishing
A high school senior finds her dream prom dress at what looks like a professional boutique, pays, waits — and the dress never arrives. The website disappears. Emails bounce. According to a recent alert from the Better Business Bureau, this scenario is repeating across the Hudson Valley and beyond, with fake online retailers targeting shoppers searching for special-occasion clothing. These sites, the BBB reported, typically use photos taken from legitimate businesses, advertise prices well below market, and close up shop once enough orders have been placed.
The BBB's alert flagged several patterns worth knowing before anyone in your family clicks "buy": no verifiable contact information, pressure to pay via Zelle or Venmo rather than a credit card, and reviews that exist only on the retailer's own site rather than on independent platforms. A credit card — not a peer-to-peer payment app — is the only payment method that gives buyers a meaningful shot at a chargeback if the goods don't arrive.
The dress in the photo was real. The store selling it was not.
Gobble's Take: The urgency of prom season is exactly what these sites are counting on — a week's worth of patience before you pay is worth more than a refund you'll never see.
Source: 101.5 WPDH
AI Agents Are a Gift to Scammers, and Boosters Aren't Talking About It
A Hacker News discussion about AI-powered domain registration and business automation surfaced a pointed concern: the technology benefits scammers and spammers more than almost anyone else. Commenters argued that LLM-generated content — emails, articles, images, spam — provides the most practical upside to bad actors, who have no need to verify outputs for accuracy. Legitimate use cases, by contrast, almost always require a human to check the results. Scammers can skip that step entirely.
One commenter made the case bluntly: LLMs allow scammers to generate effectively infinite content without any verification burden, while guardrails and filters remain easy to bypass with carefully worded prompts. Another noted that subtly wrong or hallucinated content may actually serve scam purposes better than accurate content — the goal is to fool the target into a false positive, not to be correct. The thread extended this logic to agentic tools specifically, with one commenter raising the prospect of an AI registering thousands of domains automatically and without user confirmation.
The broader critique in the thread was aimed at AI boosters who enthusiastically promote agentic workflows while ignoring the asymmetry: the people who benefit most from skipping human review are the ones who never needed it to be legitimate in the first place.
Gobble's Take: Until the upside of AI agents is clearer for legitimate use cases, the most honest summary is that this is infrastructure scammers are already better positioned to use than you are.
Source: Hacker News
AI-Generated "Police" Video Calls Are Coercing Victims Into Transferring Money on the Spot
A video call appears to show a uniformed police officer or customs official holding up what looks like a legal document. The "officer" tells the person on the other end that an arrest is imminent — unless funds are transferred immediately. According to a Substack analysis drawing on Indian legal and cybercrime reporting, this pattern, sometimes called a "digital arrest" scam, is a growing form of fraud in India in which criminals use AI-generated or AI-assisted video to impersonate law enforcement and apply intense psychological pressure in real time.
The same source described a related pattern: a few seconds of a family member's voice, taken from a public social media video, can reportedly be enough for AI tools to produce a cloned voice that sounds like that person in a live call. A frantic "parent" or "child" calls, claims to be in legal trouble or a medical emergency, and asks for money to be sent immediately. The analysis also noted that AI is being used to build fake trading platforms and generate fraudulent product reviews, expanding the range of ways synthetic media is being used in financial fraud. India's large digital user base and rapid adoption of mobile payments have made it a prominent case study, but researchers and legal analysts note the underlying tools are not geographically limited.
Fabricated uniforms and cloned voices are now available to fraudsters as off-the-shelf tools — which means the old instinct to trust your eyes and ears on a call needs to be reconsidered.
Gobble's Take: Any call that combines urgency, an authority figure, and a demand for immediate payment — regardless of how real the face or voice sounds — fits a pattern worth pausing on.
Source: Substack / AI & Cyber Frauds in India
Related reads
Other Gobbles stories on similar themes.
Deepfake Video Calls Are Now the Scam: The $25 Million Arup Case
Trafficked Workers, AI Microphones, and Fraud Quotas: How Voice-Cloning Farms Operate
When "Your Grandson's Voice" Costs Three Seconds and Almost Nothing to Fake
A Pediatric Doctor's Face Was Cloned to Sell Supplements — and He Can't Get the Videos Down
