Disclosure: The article may contain affiliate links from partners who may compensate us. However, the words, opinions, and reviews are our own. Learn how we make money to support our mission.
AI voice and deepfake scams can feel unsettling because they make fake messages sound and look real. A call may sound like your grandchild. A video may appear to show a celebrity endorsing an investment. A voicemail may seem to come from your boss. A message may include a familiar face, voice, or writing style.
That is what makes these scams different. They do not only ask you to trust a stranger. They may imitate someone you already trust.
In this guide, you’ll learn how AI voice and deepfake scams work, which warning signs to watch for, and how to verify requests before sending money, sharing information, or taking action.
AI voice and deepfake scams use artificial intelligence to imitate real people or create convincing fake audio, images, or videos.
A scammer may use:
- A cloned voice built from a short audio clip found online
- A deepfake video that puts a real person’s face into fake footage
- AI-generated images or messages that copy someone’s appearance or writing style
The goal is usually to make a scam feel more believable. Instead of reading a suspicious message, you may hear what sounds like a real person asking for help or see what looks like a familiar face promoting something.
The FTC has warned that scammers can clone a loved one’s voice using a short audio clip found online, then use that voice in family emergency scams.
👉 Compare: Identity Protection Tools in the Marketplace →
A familiar voice used to feel like proof. It is no longer enough by itself.
A scammer may use a cloned voice to say:
- “I’m in trouble and need money right away.”
- “I’ve been in an accident. Please don’t tell anyone yet.”
- “Can you send the payment now? It’s urgent.”
The voice may sound like a child, grandchild, spouse, friend, coworker, or boss. That does not mean the request is real.
The FTC says scammers use voice cloning to make requests for money or information more believable, including calls that sound like a boss or family member.
What to do:
Hang up and call the person directly using a number you already know. Do not use a number the caller or message gave you.
Smile Money Tip: A familiar voice can create an emotional shortcut. Verification gives you a way back to clarity.
A family safe word is a simple way to verify emergency requests.
Choose a word or phrase that:
- Cannot be guessed from social media or public information
- Is easy for every family member to remember
- Is never shared outside the family
For example, if someone calls claiming to be your grandchild in trouble, you can ask: “What is our family word?”
If they do not know it, stop the conversation.
What to do:
Create a safe word with parents, grandparents, adult children, teens, college students, and close family members. Make it part of your family fraud plan.
👉 Related: How to Avoid Romance Scams →
AI may make a scam sound real, but the scam pattern is usually familiar.
Watch for:
- Urgent requests for money or personal information
- Pressure to act before you have time to think or verify
- Instructions to keep the request secret
- Payment by gift card, wire transfer, payment app, or cryptocurrency
These are classic scam tactics. The technology may be new, but the pressure is not.
What to do:
If the request involves urgency, secrecy, or hard-to-reverse payments, stop and verify through another channel.
👉 Related: How to Lock Down Your Social Media Privacy Settings →
Deepfake scams are not limited to family emergency calls. Scammers also use fake celebrity or influencer videos to promote:
- Investment opportunities, often involving cryptocurrency
- Giveaways, rewards programs, and prize offers
- Products or deals the person never actually endorsed
Some deepfake videos are created from real interviews or public appearances and altered to make it look like the person is endorsing something they never endorsed. Recent reporting has shown scammers using AI-generated celebrity deepfake ads on TikTok to promote fraudulent rewards programs and collect personal information.
What to do:
Do not invest, buy, or enter personal information because a famous person appears in a video. Go to the person’s official website or verified social media account. Search for the offer independently.
If the video sends you to a third-party site asking for personal information, payment, or account login, treat it with caution.
AI voice and deepfake scams can also target workplaces, small businesses, and professionals.
A scammer may impersonate:
- An executive, manager, or business owner
- A coworker, vendor, or client
- A government official or agency representative
They may ask for:
- Urgent wire transfers or payments
- Login credentials or verification codes
- Access to accounts, systems, or sensitive data
The FBI has warned that malicious actors have used AI-generated voice messages and texts to impersonate senior U.S. officials, build rapport, and move targets toward platforms designed to steal credentials or access accounts.
What to do:
Use a second verification channel. If a request comes by voice memo, confirm by phone, secure company chat, known email, or in person. For financial transfers, use a written approval process and callback rule.
Deepfakes are getting better, so visual clues are not always reliable. Still, some warning signs can help.
Watch for:
- Lip movements that do not match the audio
- Unnatural blinking, lighting, or blurring around the face
- A voice that sounds flat, robotic, or oddly paced
- Pauses or answers that do not fit the conversation
Do not rely only on spotting technical flaws. Some fakes look and sound convincing.
What to do:
Treat the request, not just the media, as the warning. If the request involves money, secrecy, credentials, or urgency, verify independently.
Scammers may use publicly available content to make scams more believable.
They may pull from:
- Social media photos, videos, and voice clips
- Voicemail greetings, podcasts, and recorded interviews
- Public posts that reveal names, relationships, jobs, or travel plans
You do not need to erase your life online, but you can reduce what strangers can collect.
What to do:
- Set social media accounts to private where possible
- Limit who can view or download your photos, videos, and voice clips
- Think twice before posting audio or video of yourself or family members
- Avoid public posts that reveal relationships, routines, or travel plans
The goal is not fear. It is reducing easy material for scammers.
Use this verification checklist:
- Pause before you act, no matter how urgent the request sounds
- Hang up and call back on a number you already know
- Ask for the family safe word or a question only the real person can answer
- Confirm the request through a second channel, such as secure chat, known email, or in person
- Check official websites or verified accounts before you click, pay, or share
If the person pressures you not to verify, that is your answer.
If you received a suspicious AI voice call, fake video, or deepfake message:
- Stop contact and do not send money, codes, or personal information
- Save the phone number, messages, links, or video as evidence
- Report it to the FTC at ReportFraud.ftc.gov and to the FBI at IC3.gov
- Warn the person who was impersonated so they can alert others
The FBI’s 2025 Internet Crime Report found that losses from cyber-enabled crime approached $21 billion, with cryptocurrency- and artificial intelligence-related complaints among the costliest categories.
Act quickly.
If you sent money by bank transfer or wire:
Contact your bank immediately and ask whether the transfer can be stopped, recalled, or investigated.
If you paid by card:
Contact your card issuer and ask about disputing the charge.
If you used a payment app:
Report the transaction in the app and contact your linked bank or card issuer.
If you sent crypto:
Save the wallet address, transaction hash, platform details, and messages. Recovery may be difficult, but documentation matters.
If you shared a password or code:
Change the password immediately, turn on multi-factor authentication, and review account activity.
If you shared personal information:
Monitor accounts, check credit reports, consider a fraud alert, and freeze your credit if sensitive information was exposed.
If a loved one was impersonated:
Tell the family. Create or update a safe word and callback rule.
AI can make scams more convincing, but the safest response is still simple: pause, verify, then decide.
Can scammers really clone someone’s voice?
Yes. The FTC has warned that scammers can use a short audio clip from online content to clone a loved one’s voice and use it in an emergency scam.
What is a deepfake scam?
A deepfake scam uses AI-generated or manipulated video, audio, or images to impersonate a real person, create a fake endorsement, or make a fraudulent request seem believable.
How can I tell if a call from a family member is real?
Do not rely on the voice alone. Hang up and call the person back using a known number. For family emergencies, use a safe word or ask a question that cannot be answered from public information.
Should I trust a video of a celebrity endorsing a product?
Be careful. Scammers can create fake celebrity videos that appear to endorse investments, giveaways, or products. Verify through the celebrity’s official channels and independent sources.
Where do I report an AI voice or deepfake scam?
Report fraud to the FTC at ReportFraud.ftc.gov. If the scam happened online or involved money, you can also report it to the FBI’s Internet Crime Complaint Center at IC3.gov.
AI voice and deepfake scams are designed to make trust feel automatic. They use familiar faces, familiar voices, and familiar emotions to rush your decision.
Your protection is not knowing every new technology. It is having a verification habit. Pause, call back, ask the safe word, check the source, and never let urgency make the decision for you.