
By Jim Richberg, Head of Cyber Policy and Global Field CISO, Fortinet, and Fortinet Federal Board Member.
Deepfake Threats To Corporations
Although many people assume misinformation and disinformation concern only governments and politicians, they can also affect the profitability or even the financial viability of corporations.
Many use the term “misinformation” to describe both misinformation and disinformation, but they aren’t the same. Both involve false information, but intent is the distinguishing factor.
According to the American Psychological Association, misinformation is inaccurate information spread by people who may believe it to be true. With disinformation, the perpetrator knows the information is false and is disseminating it for malicious purposes.
Disinformation Using Deepfake Technology
Today, deepfake technology is one of the most powerful tools fueling disinformation. Deepfakes are realistic but fabricated video, image, text or voice content. Advances in AI and machine learning, including techniques such as generative adversarial networks (GANs) now embedded in commercially available tools, have lowered the barrier to entry and increased the sophistication of deepfake content.
GANs are a type of AI model in which two neural networks interact to create highly realistic videos or audio that mimic real individuals. One network (the content generator) creates fake media, while the other (the discriminator) evaluates it and provides feedback. This process continues until the generator produces media realistic enough that the discriminator cannot determine it’s fake.
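To make the generator-discriminator feedback loop concrete, here is a minimal, hypothetical sketch in Python using PyTorch. It trains a toy generator to mimic simple numeric samples rather than actual audio or video; every dimension, name and hyperparameter is an illustrative assumption, not a description of any real deepfake pipeline.

```python
# Minimal GAN sketch (illustrative only): a generator learns to mimic
# "real" samples drawn from a shifted Gaussian, while a discriminator
# learns to tell real samples from generated ones.
import torch
import torch.nn as nn

NOISE_DIM, DATA_DIM, BATCH = 16, 8, 32  # toy sizes, chosen arbitrarily

generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 64), nn.ReLU(),
    nn.Linear(64, DATA_DIM),
)
discriminator = nn.Sequential(
    nn.Linear(DATA_DIM, 64), nn.ReLU(),
    nn.Linear(64, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(1000):
    # Stand-in for real media: samples from a shifted Gaussian.
    real = torch.randn(BATCH, DATA_DIM) + 2.0
    fake = generator(torch.randn(BATCH, NOISE_DIM))

    # Discriminator update: score real samples as 1, fakes as 0.
    d_opt.zero_grad()
    d_loss = (loss_fn(discriminator(real), torch.ones(BATCH, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(BATCH, 1)))
    d_loss.backward()
    d_opt.step()

    # Generator update: push the discriminator to score fakes as real.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(BATCH, 1))
    g_loss.backward()
    g_opt.step()
```

Scaled up to convolutional networks trained on real image, audio or video data, this same adversarial loop is what produces media that the discriminator, and often a human viewer, cannot distinguish from the genuine article.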
Who Targets Companies Using Deepfakes?
Several types of malicious actors are likely to target companies using deepfake technology.
For example, cybercriminals who have stolen samples of a victim's email along with their address book may use GenAI to generate tailored spear-phishing content, matching the language, tone and topics of the victim's previous interactions with each new target to convince them to take an action such as clicking on a malicious attachment.
Other cybercriminals use deepfakes to impersonate customers, business partners or company executives to initiate and authorize fraudulent transactions. According to Deloitte’s Center for Financial Services, GenAI-enabled fraud losses are growing at 32% year-over-year in the United States and could reach $40 billion by 2027.
Some high-profile cases with multimillion-dollar losses have occurred when corporate financial staff fell victim to video impersonation of company executives. However, in the short term, voice deepfakes may pose an even greater threat because of the growing use of voice recognition software in customer identity and access management at financial institution call centers.
According to a 2024 Ironscales report, 75% of surveyed organizations reported experiencing at least one deepfake-related incident within the previous 12 months. Nearly two-thirds expected the volume of these incidents to overtake ransomware within the following 12 to 18 months.
Disgruntled current or former employees may also generate deepfakes to seek revenge or damage a company’s reputation. By leveraging their inside knowledge, they can make the deepfakes appear especially credible.
Deepfake threats may also come from business partners, competitors or unscrupulous market speculators looking to gain leverage in negotiations or to move a company's stock price through bad publicity.
Reputational Deepfake Attacks
Malicious actors use deepfake tools for two primary reasons. The first is to facilitate fraud or compromise a network, and the second is to inflict harm on a company’s reputation or brand.
The viability and impact of deepfakes in reputational attacks involve a trade-off. It's easier to fabricate credible content about a well-known target with a significant public profile and plenty of executive voice and video content online, but it's harder for an attacker to change an established public impression of that target. Conversely, a malicious actor must work harder to gather the data needed to target a more obscure company or executive, yet the ensuing attack can often damage those targets more readily.
Repeated exposure to information increases the likelihood that consumers will believe it, regardless of whether it's true or even credible. Social media platforms provide an environment where misinformation and disinformation spread rapidly and are reinforced through repetition. Companies with a strong preexisting brand reputation can more easily counter this misinformation, but not all firms and brands are well-known.
Mitigating Risks
To counter deepfake-enabled financial fraud, organizations should use many of the same techniques and tools they deploy against other types of malicious cyber activity. Defending your organization against deepfakes requires a multilayered approach that combines technical controls, non-technical controls and new business processes.
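As one illustration of such a business-process control, here is a minimal, hypothetical sketch in Python of out-of-band verification for payment requests. The threshold, field names and channels are all illustrative assumptions, not a prescribed workflow: the point is that a convincing voice or video request alone never releases a large transfer.

```python
# Hypothetical sketch: out-of-band verification for high-value transfers.
# Above the threshold, funds are released only after a second confirmation
# arrives through an independent channel (e.g., a callback to a number on
# file), no matter how convincing the original request looks or sounds.
from dataclasses import dataclass

OUT_OF_BAND_THRESHOLD = 10_000  # USD; an illustrative policy value

@dataclass
class TransferRequest:
    requester: str
    amount_usd: float
    channel: str                 # "email", "voice", "video", ...
    out_of_band_confirmed: bool  # set only by the independent channel

def approve_transfer(req: TransferRequest) -> bool:
    """Return True only if the request satisfies the verification policy."""
    # Small transfers follow the normal approval workflow.
    if req.amount_usd < OUT_OF_BAND_THRESHOLD:
        return True
    # Above the threshold, no single channel is trusted on its own.
    return req.out_of_band_confirmed

# Example: a persuasive "CEO" video call alone is not enough.
request = TransferRequest("ceo@example.com", 250_000, "video", False)
assert approve_transfer(request) is False
```

A deepfake can defeat human judgment on a single call, but it cannot easily defeat a process that routes confirmation through a separate, pre-established channel.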
Although technology is one aspect of a layered defense, organizations should also consider setting up an organizational playbook. According to a Forrester report (via Branding in Asia), only 20% of surveyed companies said they had an incident response and communications plan that covered deepfake attacks. Cybersecurity and IT staff should also keep abreast of the art of the possible in deepfake attacks and work to educate the workforce and the executive team.
The risks of deepfakes should be included in any reputation and brand monitoring efforts. Some specialized providers also offer dark web monitoring that can provide early indicators that your company or key personnel are being targeted for deepfake attacks.
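As a hedged sketch of what such monitoring might look like in practice, assuming a hypothetical feed of brand mentions, the snippet below flags items that pair an executive's name with media-manipulation keywords so an analyst can triage them; the names, terms and feed format are all invented for illustration.

```python
# Hypothetical sketch: flag brand mentions that pair executive names with
# deepfake-related keywords, so analysts can review them early.
EXECUTIVES = {"jane doe", "john smith"}  # illustrative names
RISK_TERMS = {"deepfake", "ai-generated", "fake video", "voice clone"}

def needs_review(mention: str) -> bool:
    """Return True if a mention pairs an executive with a risk term."""
    lowered = mention.lower()
    return (any(name in lowered for name in EXECUTIVES)
            and any(term in lowered for term in RISK_TERMS))

mentions = [
    "Jane Doe announces quarterly results",
    "Leaked fake video appears to show Jane Doe admitting fraud",
]
for m in mentions:
    if needs_review(m):
        print("REVIEW:", m)
```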
Deepfakes target the accuracy of the information we rely on as consumers, employees and investors. In cybersecurity, we often talk about the “human firewall,” and deepfakes are a threat where people genuinely are your first line of defense.