While it might seem like generative AI is the only use case for AI around today, just a few years ago, deepfakes wore the mantle of attention and hype in the AI universe. Interest has fallen off considerably, but we will likely see a resurgence soon as attackers use deepfakes to scam and defraud enterprises, because the paths to monetization are now clear. We’ve been predicting the impact of deepfakes on the enterprise since 2020. We also wrote about how the tactics used in influence operations would become weaponized against enterprises.

Harmful use cases for deepfakes targeting enterprises include:

  • Fraud. Deepfake technologies can clone faces and voices, two common ways to authenticate and authorize activity. Using deepfake technology to clone and impersonate an individual will lead to fraudulent financial transactions victimizing individuals, but it will also happen in the enterprise (impersonating a senior executive to authorize wire transfers to criminals, for example). This scenario already exists today and will increase in frequency soon.
  • Stock-price manipulation. Stock prices fluctuate based on newsworthy events. For example, when well-regarded senior executives depart a publicly traded company, the stock price will decrease due to anxiety over the new leader. A deepfake of a CEO announcing the departure of a CFO may cause stocks to suffer a short decline in price. While this seems minor, if timed correctly, this could impact employee compensation and the company’s financing efforts.
  • Reputation and brand. Imagine a multiminute tirade from a prominent executive using offensive language, insulting customers, blaming partners, and fabricating information about your products or services. This scenario is what your board and PR team dread, and it’s all too easy to create artificially today. By the time your firm reacts to this hypothetical event, the damage is done. If the video goes viral, it becomes impossible to expunge entirely. This sort of misinformation about your brand will linger for years, resurfacing occasionally and forcing your company to respond repeatedly.
  • Employee experience and HR. The history and origin of deepfakes will always be haunted by their use in creating nonconsensual pornographic material, because 1) that’s what brought the technology to prominence and 2) it still happens. Take a scenario where one employee creates this kind of deepfake content using the likeness of another employee, and it begins to circulate. This may seem hypothetical, but the FBI has released details of sextortion scams targeting individuals with a similar tactic. This is the kind of petty, spiteful sabotage that will damage an employee’s mental health, sidetrack their career, and almost certainly result in costly litigation; it could also target executives, board members, and other prominent spokespeople.
  • Amplification. Most of the time, when we think of deepfakes, we think simply of creating fake content, and rightfully so. But you could just as easily use deepfakes to spread other deepfake content. Think of this as bots spreading content, but instead of giving those bots usernames and post histories, we give them faces and emotions. Now imagine using those deepfakes to create reactions to an original deepfake that damages your brand, increasing the likelihood that a broader audience sees it.

Truth Versus Believability

Refuting these kinds of attacks might seem straightforward and logical. That viewpoint neglects to consider that this is not a dispute between fact and fiction but between truth and believability. Here is how this works:

  1. Something need not be true if it is believable.
  2. Introduce the “illusory truth effect”: People come to believe things they hear repeatedly, regardless of whether those things are true.
  3. Add social media style virality.
  4. Combine all these, and it explains why these attacks will impact your organization — and linger — for longer than you would think.

Obligatory Mentions Of Misinformation And Disinformation

I didn’t mention these (yet), because remember what I said before: This blog focuses on enterprise issues with respect to deepfakes. And yes, some of these events would be categorized as misinformation and disinformation. Widespread influence operations designed to spread misinformation and disinformation require government solutions. While I think that CISOs are heroes, I don’t think we can ask the average cybersecurity leader to solve this problem for society. I also left out consumer scams such as the current AI scam calls that the FTC warned about because those target individuals, not enterprises.

As an example of this: China’s internet regulator recently announced that 41 algorithms from tech giants had been registered as “deep synthesis service providers” and must comply with its Administrative Provisions on Deep Synthesis for Internet Information Service, per the South China Morning Post. In addition, laws in California and Texas cover deepfakes used to influence elections but not those targeting commercial organizations.

Detecting, Investigating, Protecting Against, And Responding To Deepfakes

For most security leaders, this is not a high priority. As I recently told Bloomberg, CISOs aren’t spending money on solving this problem right now (emphasis on right now). But the moment that these use cases impact an organization in a material way, budget dollars will emerge. And that’s where this blog gives you a head start on deepfake countermeasures. You might also notice that I didn’t say “preventing” deepfakes. That’s not possible, so go ahead and set prevention aside.

Academic Research

Deepfake detection currently has plenty of academic and corporate research dedicated to it. For an excellent curated list, check out Awesome Deepfakes Detection on GitHub.

Corporate Research

Intel recently released FakeCatcher, which can detect deepfakes with 96% accuracy in seconds. The tech giants have also released research related to deepfake detection. These aren’t solutions I’d depend on, and they share the drawbacks of academic research: No support models exist, which is a requirement for modern enterprises.

Commercial And Open Source Solutions

Commercial solutions are also available for detecting deepfakes. These solutions span multiple categories. Some focus on brand and reputation, while others prioritize fraud prevention. There are also some open source tools available.

  • Sensity: https://sensity.ai/
  • Deepware: https://Deepware.ai/
  • YPB Systems: https://ypbsystems.com/en/
  • Blackbird.AI: https://blackbird.ai
  • Attestiv: https://attestiv.com/
  • Reality Defender: https://realitydefender.com/
  • Sentinel: https://thesentinel.ai/
  • DeepTrace Technologies: https://www.deeptracetech.com/

Using blockchain as a “source of truth” remains one possible avenue, though it has yet to materialize as the answer for verifying digital content.
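Independent of any particular ledger, the core “source of truth” idea is to record a cryptographic fingerprint of authentic media at publication time and check later copies against it. A minimal sketch in Python, using an in-memory registry as a stand-in for a tamper-evident store (the registry, labels, and byte strings here are illustrative assumptions, not a real provenance API):

```python
import hashlib

# Toy in-memory registry standing in for a ledger or other tamper-evident store.
registry: dict[str, str] = {}

def fingerprint(media: bytes) -> str:
    """SHA-256 hash of the raw media bytes."""
    return hashlib.sha256(media).hexdigest()

def register(label: str, media: bytes) -> None:
    """Record the hash of authentic content when it is published."""
    registry[label] = fingerprint(media)

def is_authentic(label: str, media: bytes) -> bool:
    """True only if the bytes match the originally registered hash exactly."""
    return registry.get(label) == fingerprint(media)

register("ceo-townhall", b"original video bytes")
print(is_authentic("ceo-townhall", b"original video bytes"))     # True
print(is_authentic("ceo-townhall", b"manipulated video bytes"))  # False
```

Note the limitation this sketch shares with any raw-hash approach: the check fails after benign re-encoding or resizing, which is why production provenance schemes sign content credentials and metadata rather than comparing exact bytes.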

Nontechnical Solutions

Several nontechnical solutions also exist that enterprises can integrate into existing processes. These include using code words or passphrases that rotate regularly for any phone- or text-based account transfer requests.
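One way to operationalize rotating code words is to derive them from a shared secret and the current time window, TOTP-style, so both parties always agree on today’s word without distributing a new one manually. A minimal sketch, assuming a daily rotation period, a small illustrative word list, and an example secret (all of which are assumptions for this sketch, not recommendations):

```python
import hashlib
import hmac
import struct
import time

# Illustrative vocabulary and rotation period; a real deployment would use a
# larger word list and a securely provisioned shared secret.
WORDS = ["amber", "falcon", "granite", "harbor", "juniper", "meadow", "onyx", "quartz"]
ROTATION_SECONDS = 24 * 60 * 60  # rotate the code word daily

def code_word(secret: bytes, at: float = None) -> str:
    """Derive the code word for the time window containing `at` (default: now)."""
    window = int((time.time() if at is None else at) // ROTATION_SECONDS)
    digest = hmac.new(secret, struct.pack(">Q", window), hashlib.sha256).digest()
    return WORDS[digest[0] % len(WORDS)]

def verify(secret: bytes, spoken: str, at: float = None) -> bool:
    """Accept the current or immediately previous window's word to tolerate clock skew."""
    now = time.time() if at is None else at
    candidates = {code_word(secret, now), code_word(secret, now - ROTATION_SECONDS)}
    return any(hmac.compare_digest(spoken, w) for w in candidates)

secret = b"example-shared-secret"
print(verify(secret, code_word(secret)))  # True
```

The design choice worth noting is the one-window grace period in `verify`: without it, a legitimate request made just after midnight could be rejected because the caller looked up yesterday’s word.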

History also matters when it comes to defending against reputational damage from deepfakes. If your company has operated in line with the seven levers of trust in Forrester’s trust imperative, it’s significantly less likely that any deepfakes attempting to damage your brand and reputation will take hold. If, instead, the company has a history of operating without empathy and transparency, it becomes much easier for people to believe unfounded assertions about your brand. We will have more research coming out on this topic soon!

Forrester clients looking for a deeper dive on this topic can schedule time with me via inquiry or guidance session.