The New Guidelines for Identifying AI Deepfakes
When deepfake AI scams began emerging, the fraudsters behind them seemed to have a leg up on everyone. Consumers were caught by surprise and easy to fool. Law enforcement didn't know enough about the technology to help people recognize these scams. And organizations that track scams weren't prepared to start counting them.
Even as AI video and voice scam technology continues to advance, the rest of the world is now catching up. A McAfee survey taken during the 2024 U.S. presidential election found that 43% of respondents had seen deepfake content in the prior 12 months, 26% had encountered a deepfake scam, and 9% had been a victim of one.1
Deepfake video scams have surged 700% over the last three years, according to ScamWatch HQ. Beyond deepfakes, overall AI scams and fraud are up too; Deloitte's Center for Financial Services predicts that generative AI could lead fraud losses to reach $40 billion in the U.S. by 2027.4
That said, consumer protection organizations now have more data about deepfake video and voice scams, along with a sharper understanding of these crimes. In September 2025, the American Bankers Association (ABA), in partnership with the Department of Justice (DOJ), released new guidelines for detecting AI deepfake media.2 While AI-generated scams are likely to continue to evolve, for now, this is the latest on staying savvy to deepfake media.
A Clear Deepfake Definition
As with so many internet terms, it can be hard to pin down what a "deepfake" actually is. According to the ABA and DOJ:
"Deepfakes can be altered images, videos or audio. They may depict people you know — including friends and family — or public figures, including celebrities, government officials and law enforcement."
Victims can experience deepfake scams as:
- Someone who sounds like a friend or family member in an emergency situation asking for money.
- Video calls from a friend or family member asking for help accessing their social media accounts.
- Video of a public figure endorsing an investment or product.
- A compelling image of a natural disaster victim used to solicit charity donations.
There are many variations on deepfake media scams, so consumers should be prepared for new iterations to surface.
How To Detect a Deepfake
When deepfake media first started to emerge, there were often telltale signs, like hands with too many fingers. Technological advancements have made these gaffes less common.
Here's what the ABA and DOJ endorse as deepfake red flags now:
- Blurry or distorted facial features.
- A person blinking too much or too little in a video.
- Inauthentic-looking hair and teeth.
- Audio and video being out of sync.
- A flat or unnatural tone of voice.
- Unnatural shadows or lighting.
- Emotionally charged messages involving fear or urgency.
- Requests for money, passwords, personal information, or secrecy.
- Uncharacteristic behavior from friends, family, or public figures.
How To Avoid Falling Victim to a Deepfake Scam
The partners also recommend these actions to protect yourself from becoming a victim of a scam involving deepfake media:
- Stop and think before reacting to a video, image, or voice or video call. Is the person asking for money, information, or an action like sharing on social media? (These are things scammers typically do.)
- Verify the legitimacy of people and requests. Hang up on any urgent phone or video call and call back using a trusted number. (Remember, Synovus will NEVER call, email or text you asking for personal information, login credentials or computer access.) Use trusted numbers, official websites and reverse image/video search tools (like Google Images) to see whether an image has been used elsewhere on the internet in suspicious ways.
- Create codewords or phrases with loved ones. When a suspicious call or video reaches you, ask for the codeword.
- Do what you can to limit your digital footprint. Scammers use photos, voice clips and videos of people found online to create deepfake media.
- Never share a video or image online without verifying the source.
It can be frustrating to feel like anything you see online could be fake, and a scam at that. Staying attuned to the latest AI developments and adopting a "verify everything" policy can help you stay ahead of scammers. And if you do fall for a deepfake, as many people do, the ABA and DOJ recommend reporting the incident to your local police, the FBI's Internet Crime Complaint Center and, if any money was exchanged, your financial institution.2 3
Important disclosure information
This content is general in nature and does not constitute legal, tax, accounting, financial or investment advice. You are encouraged to consult with competent legal, tax, accounting, financial or investment professionals based on your specific circumstances. We do not make any warranties as to accuracy or completeness of this information, do not endorse any third-party companies, products, or services described here, and take no liability for your use of this information.
1. McAfee, "McAfee Study Reveals People's Deep Concerns About the Impact of AI-Generated Deepfakes During Critical Election Year," published April 18, 2024. Accessed March 2, 2026.
2. American Bankers Association, "Deepfake Media Scams," published September 2, 2025. Accessed March 2, 2026.
3. Federal Bureau of Investigation, "Internet Crime Complaint Center," accessed March 2, 2026.
4. Audrina Sinclair, "AI deepfakes are easier to make, harder to spot and made to fool you," CBS News, published February 13, 2026. Accessed March 2, 2026.