Preemptive Tactics for Deepfake Detection and Prevention
Fraudsters are increasingly adept at perpetrating scams against businesses. One such method uses generative AI (Gen AI) to create deepfakes: sophisticated synthetic content that recreates a person's identity through text, email, voice or appearance. Gen AI enables bad actors to execute social engineering schemes, including deepfakes that pose as executives or finance staff to extract sensitive information or authorize fraudulent payments.
“Deepfakes created with generative AI are an especially worrisome emerging threat as staff and customers can unwittingly initiate and authorize transactions,” says Caleb Callahan, senior director, Financial Intelligence, Synovus Financial Crimes Unit.
Are deepfakes really a security threat?
AI-generated video and audio pose challenging ethical questions. Are deepfakes identity theft? The lines are blurry. For example, some content comprises original images or voices used for training, advertising and entertainment. But the risk lies in content that is recreated using an individual’s likeness or voice without their consent.
Think of the “I am not Morgan Freeman” video or the “Back to the Future” spoof featuring Tom Holland and Robert Downey Jr. The actors didn’t give consent to use their likenesses, but these examples did no real harm. Other AI deepfakes, however, are created with nefarious intent. Fraudsters recently imitated the chief financial officer and other senior leaders of British design firm Arup on a video call, bilking the company of $25 million.
AI deepfakes aren't yet as prevalent as business email compromise and hacking, but they lend themselves readily to fraud. Deepfake-related fraud in the U.S. rose 1,200% in the first quarter of 2023 alone.1
Prevention is crucial, especially in the financial sector, which saw a 700% increase in deepfake incidents last year.2 Deloitte's Center for Financial Services estimates Gen AI will drive U.S. fraud losses to $40 billion by 2027.3
What are current AI regulations in the U.S.?
In 2021, the European Union proposed the "AI Act" to govern generative AI creation and use across member states. The legislation passed in May 2024. The AI Act has far-reaching implications for American companies like Google, Amazon and Apple, which are among the largest U.S. providers of computing infrastructure.
There currently are no federal laws regulating AI in the U.S. However, government agencies are working to develop rules that govern responsible Gen AI use. For example, the Cybersecurity and Infrastructure Security Agency (CISA) established a "Roadmap for AI" that seeks to:4
- Promote the beneficial uses of AI to enhance cybersecurity capabilities.
- Protect AI systems from cyber-based threats.
- Deter malicious use of AI capabilities to threaten the American infrastructure.
Over the past five years, 17 states have enacted 29 bills regulating AI design, development and use, often working through task forces and councils. Their primary goals are ensuring privacy and holding creators accountable. The guiding principles are:5
- Engage in collaborative dialogue with stakeholders from multiple disciplines to inform AI design, development and use.
- Protect individuals from unintended, yet foreseeable, impacts or uses of unsafe or ineffective AI systems.
- Ensure that individuals have agency over how an AI system collects and uses personal data.
- Inform individuals when and how an AI system is being used and allow them to opt out in favor of a human alternative.
- Design AI systems in an equitable manner and protect individuals from discrimination.
AI developers must design and deploy systems that comply with these rules and standards. States have established committees to oversee compliance with the legislation, and developers will be held accountable if they fail to meet AI regulations.
Deepfake detection and prevention are the first lines of defense.
The ability to detect deepfake content, whether text, voice, image or video, and prevent its potential damage is critical. AI-enabled technology is advancing rapidly in this area. In addition to facial recognition, AI-enabled software features deepfake detectors that analyze content for natural human behavior, such as blinking, pitch, tone and inflection, with great precision. Intel's "FakeCatcher" can even examine pixels in videos to map blood flow.6 Some applications can also detect emotion within audio and video files with 96% accuracy.
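To make one of these behavioral signals concrete, here is a minimal sketch of the eye aspect ratio (EAR) calculation commonly used in blink-detection research. The landmark coordinates are assumed inputs from any face-landmark model, and the sample values are illustrative only:

```python
# Minimal sketch: eye aspect ratio (EAR), a common blink-detection signal.
# Landmarks are assumed to come from a face-landmark model (e.g., dlib or
# MediaPipe); this is illustrative, not production detection code.
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: 6x2 array of (x, y) landmarks around one eye."""
    vertical_1 = np.linalg.norm(eye[1] - eye[5])
    vertical_2 = np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return (vertical_1 + vertical_2) / (2.0 * horizontal)

# A closed eye collapses the vertical distances, so EAR drops sharply.
# Unnaturally rare or uniform blinks across a video can flag a deepfake.
eye = np.array([[0, 2], [2, 3], [4, 3], [6, 2], [4, 1], [2, 1]], float)
print(f"EAR: {eye_aspect_ratio(eye):.2f}")  # ~0.33 for an open eye
```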
AI can evaluate text and email content to verify authenticity in the same way it analyzes voice and image files. These programs scan text to determine whether it was created with commonly used language models like ChatGPT or Bard. Most tools explain the basis for their findings and offer additional prevention tips.
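As a simplified illustration of how such detectors can work, the sketch below scores text by its perplexity under an open language model; an unusually low score is one weak signal of machine generation. The GPT-2 model and the threshold are assumptions for the example, not what any commercial tool necessarily uses:

```python
# Hedged sketch: perplexity-based text scoring, one heuristic detectors use.
# Model choice (GPT-2) and the threshold are illustrative assumptions.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean token cross-entropy
    return float(torch.exp(loss))

email_body = "Please wire the funds to the new account before end of day."
score = perplexity(email_body)
# Very low perplexity means the model finds the text highly predictable,
# a weak hint of machine generation; real tools calibrate on labeled data.
print(f"perplexity: {score:.1f}", "- review further" if score < 40 else "")
```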
Blockchain technology enables verification of original text, images or videos against replicas, providing another level of authentication. Watermarked content can also be securely stored within the blockchain.
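The core mechanism here is a cryptographic fingerprint: hash the original at publication, anchor that digest somewhere tamper-evident (a blockchain, in this case), then re-hash any copy you receive and compare. A minimal sketch, with file names as placeholder assumptions and the on-chain write itself out of scope:

```python
# Minimal sketch: fingerprint media at release, then verify a copy later.
# File names are placeholders; the ledger/blockchain write is omitted.
import hashlib

def fingerprint(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

original = fingerprint("ceo_statement_release.mp4")   # recorded at publication
received = fingerprint("ceo_statement_received.mp4")  # copy under review
print("verified original" if received == original else "altered or replaced")
```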
Transform fraud prevention methods with technology.
Generative AI is becoming more sophisticated, making deepfakes harder to detect. It's important to understand how deepfakes work and to integrate practices that guard against them. "Training staff and customers to question and verify unusual transaction requests, even those made by trusted individuals, is imperative," says Callahan.
Continuous learning and evolving technologies help organizations mitigate deepfake risks.
Use multi-factor authentication.
Multi-factor authentication (MFA) is a widely used, effective means for identity verification. MFA requires users to successfully enter log-in credentials, plus other information such as tokens, one-time access codes or biometric patterns.
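As a small illustration of the "one-time access code" factor, the sketch below uses the open-source pyotp library to generate and verify a time-based one-time password (TOTP); secret handling is simplified for the example:

```python
# Minimal TOTP sketch using the open-source pyotp library (pip install pyotp).
# Secret storage and delivery are simplified; production systems keep secrets
# in a vault and enroll users through an authenticator app.
import pyotp

secret = pyotp.random_base32()   # shared once with the user's authenticator
totp = pyotp.TOTP(secret)

code = totp.now()                # what the user's app would display
print("valid" if totp.verify(code) else "rejected")
```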
Adaptive authentication, which analyzes requests based on factors like geolocation, behavior, device type and risk, is also useful in identity verification. It automatically asks for additional information for high-risk interactions. Adaptive authentication not only offers greater security but can also reduce friction and increase conversion.
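A hypothetical sketch of the risk-scoring idea behind adaptive authentication: each signal adds to a score, and only high-risk requests trigger a step-up challenge. The signals, weights and threshold here are invented for illustration, not taken from any specific product:

```python
# Hypothetical adaptive-authentication scoring; signals and weights are
# invented for illustration, not drawn from any specific product.
def risk_score(request: dict) -> int:
    score = 0
    if request["country"] != request["usual_country"]:
        score += 40                  # unfamiliar geolocation
    if request["device_id"] not in request["known_devices"]:
        score += 30                  # new or unrecognized device
    if request["amount"] > 10_000:
        score += 20                  # unusually large payment
    return score

request = {"country": "RO", "usual_country": "US",
           "device_id": "d-9x", "known_devices": {"d-01", "d-02"},
           "amount": 25_000}

# Step up to MFA (or block) only when combined risk is high, keeping
# friction low for routine, recognizable interactions.
action = "require step-up MFA" if risk_score(request) >= 50 else "allow"
print(action)
```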
Adopt voice and other biometric technologies.
Biometrics uses precise measurements of physical characteristics to authenticate that a person is who they claim to be. People are becoming more comfortable with fingerprint scanning, facial recognition and iris scanning as more comprehensive means to validate identity.
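For a concrete sense of how facial biometrics compare an enrolled image with a login attempt, here is a minimal sketch using the open-source face_recognition library; the file names and tolerance are assumptions for the example, and production systems add liveness checks on top:

```python
# Minimal sketch with the open-source face_recognition library
# (pip install face_recognition). File names are placeholders, and the
# code assumes exactly one face is found in each image.
import face_recognition

enrolled_img = face_recognition.load_image_file("enrolled_user.jpg")
attempt_img = face_recognition.load_image_file("login_attempt.jpg")

enrolled = face_recognition.face_encodings(enrolled_img)[0]  # 128-d embedding
attempt = face_recognition.face_encodings(attempt_img)[0]

# Lower tolerance = stricter match; 0.6 is the library's default.
match = face_recognition.compare_faces([enrolled], attempt, tolerance=0.6)[0]
print("identity verified" if match else "verification failed")
```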
Establish and enforce security policies.
A holistic security plan heightens awareness and strengthens fraud prevention efforts.
Benchmark against industry standards and best practices to create policies for a strong overall security posture. Then layer in procedures to identify and manage fraudulent content.
Test security systems.
How easy is it to breach your networks and systems? Viewing assets as bad actors would ("reverse engineering") can reveal vulnerabilities and enable companies to improve security. In 2021, Meta collaborated with Michigan State University to develop a method that reverse-engineers deepfakes, revealing the AI model used to create them. IBM and other technology companies are continuing efforts to disassemble malware and other fraud mechanisms.
Corporations must consistently monitor AI threats and learn to proactively detect and thwart deepfakes.
Seek expertise in preventing financial fraud.
AI deepfakes and other fraud attacks have one goal: to steal corporate data or financial assets. Recognized among the best banks in the Southeast, Synovus is equipped to help companies like yours prevent payment fraud. For more information on how Synovus can help with fraud risk management, complete a short form and a Synovus Treasury & Payment Solutions Consultant will contact you with more details. You can also stop by one of our local branches.
Important disclosure information
This content is general in nature and does not constitute legal, tax, accounting, financial or investment advice. You are encouraged to consult with competent legal, tax, accounting, financial or investment professionals based on your specific circumstances. We do not make any warranties as to accuracy or completeness of this information, do not endorse any third-party companies, products, or services described here, and take no liability for your use of this information.
1. Sumsub, "Deepfakes Are the New Big Threat to Business. How Can We Stop Them?," June 22, 2023.
2. The Wall Street Journal, "Deepfakes Are Coming for the Financial Sector," April 3, 2024.
3. Deloitte Center for Financial Services, "Generative AI Is Expected to Magnify the Risk of Deepfakes and Other Fraud in Banking," May 29, 2024.
4. Cybersecurity and Infrastructure Security Agency, "2023-2024 CISA Roadmap for Artificial Intelligence," November 2023.
5. The Council of State Governments, "Artificial Intelligence in the States: Emerging Legislation," December 6, 2023.
6. The AI Journal, "AI in Emotion Recognition: Does It Work?," February 13, 2024.