AI voice cloning has made it possible to fake a grandchild's panicked phone call using just a few seconds of audio scraped from a social media video — and victims lost more than $200 million to deepfake-driven fraud in 2025 alone, according to Bank Iowa's fraud advisory.
That Voice Asking for Bail Money May Not Be Your Grandchild
When the phone rings and a familiar voice — a grandchild, a boss, a neighbor — sounds frightened and asks for money fast, the instinct is to help immediately. That urgency is exactly what scammers are engineering. According to IDX, a digital privacy firm, AI tools can now clone a person's voice from just a few seconds of audio found publicly online, then use that clone to place calls that are nearly indistinguishable from the real person.
The scenarios reported by victims follow a pattern: a "grandchild" calls from an unknown number saying they've been arrested and need bail wired right away; a "manager" calls asking for an urgent gift card purchase before a meeting; a voice that sounds like a federal agent demands immediate payment to avoid arrest. The emotional pressure is deliberate. Scammers using AI-generated voices rely on the target not having time to pause and check, according to IDX's consumer advisory.
One practical defense that security advisers and fraud counselors recommend: establish a private code word with close family members, something easy for the family to remember but hard for an outsider to guess, that anyone can ask for during a suspicious call. If the caller doesn't know it, hang up and call back on a number you already have saved.
Gobble's Take: A one-word family code costs nothing and takes about thirty seconds to set up — the math on that seems straightforward.
Source: IDX
The $200 Million Fraud That Uses Your Own Voice Against You
The old markers of a scam — misspelled words, awkward phrasing, a vague sender name — are increasingly unreliable. According to Bank Iowa's fraud advisory, generative AI now allows fraudsters to build convincing fakes using audio, video, and photos drawn from a target's own public digital presence: social media clips, recorded interviews, profile pictures, even brief voice messages left on public platforms.
The resulting losses are not small. Bank Iowa's advisory cites deepfake-driven fraud losses exceeding $200 million in 2025. The fraud takes several forms: fabricated video of a family member appearing to be in danger, AI-generated audio of an executive authorizing a wire transfer, or a synthetic version of someone's face used to pass identity verification. In each case, the raw material — the real person's voice or image — was already publicly available before any scam began.
The practical implication, as fraud advisers note, is that verification can no longer rely on recognizing someone's appearance or voice alone. Bank Iowa recommends treating any unexpected request for money or personal information as unverified until you've confirmed it through a separate channel — meaning you hang up and call back on a number you already know, or send a message through a platform you've used before.
Gobble's Take: "I'll call you right back on your regular number" is one of the most useful sentences a person can say in 2025.
Source: Bank Iowa
Related reads
Other stories from Gobble on similar themes.
The Phone Call That Can Empty Your Account in 10 Minutes: Your Grandchild's Voice, a Stranger's Demand
When "Your Grandson's Voice" Costs Three Seconds and Almost Nothing to Fake
Trafficked Workers, AI Microphones, and Fraud Quotas: How Voice-Cloning Farms Operate
A Pediatric Doctor's Face Was Cloned to Sell Supplements — and He Can't Get the Videos Down