In the last two years, the proliferation of and advances in deepfakes have raised concerns about their potential to impede adoption of facial and voice biometrics.

Deepfakes are proliferating because many organizations are migrating identity verification, authentication, and high-value, high-risk transactions (e.g., making payments, taking out an insurance policy) to remote digital interactions, rendering traditional in-person vetting procedures obsolete. Meanwhile, computing power and deepfake generator algorithms have advanced significantly. Deepfakes can cause fraud losses, data breaches, compliance issues, and reputational damage. They are easier to generate and more convincing than ever, and they appear across all channels, including the call center, mobile apps, and the web.

Organizations need a strategy for defending against deepfakes because:

  • People are highly susceptible to falling for deepfakes. A study sponsored by the UK’s Royal Society reports that “when individuals are given a warning that at least one video in a set of five is a deepfake, only 21.6% of respondents correctly identify the deepfake as the only inauthentic video, while the remainder erroneously select at least one genuine video as a deepfake.” Forrester expects that without warnings, detection rates are even lower.
  • Deepfakes affect not just authentication but authorization, too. Beyond login, deepfakes permeate onboarding and the authorization of high-risk, high-value transactions. In 2024, an employee in an organization’s finance department mistakenly paid out $25 million to fraudsters after the fraudsters, using a deepfake video of the chief financial officer, instructed him to do so.
  • Deepfake creation has never been easier. It takes about 10 minutes to register with an online deepfake generation service such as Gooey.AI, Deepfakesweb.com, Deepgram.com, or Wavel AI, optionally pay $10 to $20 (a price that keeps falling) for GPU power, upload the target’s video or audio, and upload the source message as video, audio, or text. Tools such as DeepFaceLab and mobile apps such as Reface and ZAO require no coding.
  • Not all deepfakes are malicious. Government agencies, airport authorities, and chatbot vendors have been creating deepfakes for legitimate purposes, often to create human-looking and -sounding bots with which customers can have natural, convenient, and familiar conversations.

Protection against deepfakes takes many forms, from protecting the channel to understanding user behavior to analyzing data artifacts in the deepfakes themselves. Our just-published report, Detecting And Defending Against Deepfakes, discusses the most relevant methods that, used in combination, strengthen defenses against deepfakes: spectral artifact analysis, liveness detection, behavioral analysis, and generative adversarial networks, as well as human training and processes that aid deepfake detection.
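To make one of these methods concrete, below is a minimal sketch of spectral artifact analysis, which exploits the tendency of GAN-based generators to leave excess energy in the high-frequency band of an image’s Fourier spectrum. The file name, the cutoff radius, and the idea of thresholding against a baseline of known-genuine images are illustrative assumptions for this sketch, not any specific product’s implementation.

```python
# A minimal sketch of spectral artifact analysis for deepfake detection.
# GAN upsampling often leaves periodic artifacts that appear as excess
# energy at high spatial frequencies. The cutoff and usage below are
# illustrative assumptions, not a production detector.
import numpy as np
from PIL import Image

def high_freq_energy_ratio(path: str, cutoff: float = 0.75) -> float:
    """Fraction of spectral energy at radii above `cutoff` (Nyquist-normalized)."""
    img = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    # 2D power spectrum, shifted so the zero frequency sits at the center.
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2

    h, w = spectrum.shape
    yy, xx = np.ogrid[:h, :w]
    # Radial distance of each frequency bin from the spectrum's center,
    # normalized so the shorter image axis reaches radius 1.0 at its edge.
    radius = np.hypot(yy - h / 2, xx - w / 2) / (min(h, w) / 2)

    return spectrum[radius >= cutoff].sum() / spectrum.sum()

if __name__ == "__main__":
    # Hypothetical usage: compare this score against a baseline computed
    # from known-genuine images and flag anomalously high values.
    score = high_freq_energy_ratio("suspect_frame.png")
    print(f"high-frequency energy ratio: {score:.4f}")
```

A frequency heuristic like this is easy to defeat with post-processing such as blurring or recompression, which is why the report recommends combining it with the other signals above rather than relying on it alone.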

If you are looking to better protect your organization from deepfakes, please read our report and schedule an inquiry or guidance session with us.