Gobbles Gobbles

A Pediatric Doctor's Face Was Cloned to Sell Supplements — and He Can't Get the Videos Down

5 min read · 3 sources · AI-written, source-linked. Always verify alerts with an official source before acting.

A New Orleans children's doctor has spent months trying to scrub fake videos of "himself" from TikTok — videos he never made, selling supplements he's never seen.


Dr. Maurice Sholas has spent his career treating severely ill children in New Orleans. According to a report from Red Tape, scammers have been using AI to generate deepfake videos of him promoting vitamin supplements — without his knowledge or consent — and targeting those ads specifically at Black consumers on TikTok and Instagram.

The videos are convincing enough that viewers have reached out to Dr. Sholas directly, believing he endorsed the products. He's been working to have them removed, but according to cybersecurity researchers cited in the report, cloning a person's voice requires as little as 10 seconds of existing audio, and AI-generated content, once distributed across platforms, is extremely difficult to fully eliminate. Dr. Sholas has described finding himself studying intellectual property law in his spare time — time he'd rather spend on his patients.

The pattern reported here isn't unique to one doctor or one platform. Researchers and consumer advocates have documented a broader trend of scammers lifting the identities of real medical professionals — people whose faces carry built-in credibility — to sell unverified health products. If a video of a doctor promoting supplements appeared in your feed, the safest step is to search that doctor's name directly on their hospital or practice website before acting on anything they appear to say.

Gobbles Gobble's Take: A recognizable face is not the same as a trustworthy one anymore — and that's a harder adjustment than most people have had time to make.

Source: Red Tape / Substack


"Beng Laotou": The Romance Scam Built on Small Asks, High Frequency

Most people have heard warnings about large-scale romance fraud. A pattern documented in Chinese-language media and summarized in a Weibo Substack report works differently: a scheme called "Beng Laotou" — a northern-dialect phrase roughly meaning "scamming old men" — deliberately keeps each transaction small enough not to trigger alarm.

According to the report, the amounts requested per interaction typically range from 20 to 100 RMB, roughly $3 to $14 USD, gathered through flirtatious chatting, emotional companionship, or mildly suggestive messaging on social media platforms and dating apps. A single target sending those amounts regularly could transfer the equivalent of several hundred to several thousand RMB each month without it ever feeling like a major financial loss. The scheme's low barrier to entry — no elaborate backstory, no fake investment platform — has reportedly allowed it to spread quickly and become, in some online communities, a normalized source of side income.

The dynamic reported by observers is worth passing along to older family members who use social platforms: the person on the other end of a warm, attentive online conversation may be managing several conversations simultaneously, late at night, with the goal of a small but steady cash stream. The cumulative loss over months can rival what a single large scam would have taken.

Gobbles Gobble's Take: The ask being small is part of the design, not a reason to feel safer about it.

Source: Weibo Substack


Financial Regulators Warn: A Loved One's Voice Can Be Cloned from a Short Clip Online

Financial regulators and consumer protection agencies have begun issuing formal warnings about AI voice-cloning technology being used in family impersonation scams — calls in which a victim hears what sounds exactly like a grandchild, sibling, or parent in distress, according to a report from TVW Stories.

The pattern reported by victims follows a consistent structure: the caller sounds panicked, claims to be in an accident, facing arrest, or dealing with another sudden crisis, and asks for money via wire transfer, gift cards, or cryptocurrency. According to researchers cited in the report, scammers can generate a convincing voice clone from as little as 30 seconds of audio pulled from a public social media post, voicemail greeting, or video. The emotional urgency built into the call — a family member apparently in immediate danger — is itself part of the method, designed to push recipients into acting before they stop to question what they're hearing.

Regulators have suggested a practical countermeasure that families can put in place now: agree on a specific code word or phrase that only real family members would know, and use it as a verification step any time someone calls asking for emergency money. If the caller can't produce it, hang up and call the person back on a number you already have saved.

Gobbles Gobble's Take: A family code word is a low-tech fix for a high-tech problem — and worth five minutes of awkward conversation at the next family dinner.

Source: TVW Stories / Substack

