On March 10, 2021, the FBI issued a warning about the threats posed by deepfake technology. “Malicious actors almost certainly will leverage synthetic content for cyber and foreign influence operations in the next 12-18 months,” reads the statement, which points to the rise in AI- and ML-enabled technologies to spread misinformation and supercharge social engineering attacks.
The FBI defines synthetic content as “the broad spectrum of generated or manipulated digital content, which includes images, video, audio, and text.” Deepfakes are one such type of synthetic content, and, given that human error accounts for the majority of successful cybersecurity attacks, it is imperative that your organization educates its employees on this emerging threat.
What is a deepfake?
Deepfakes are synthetic media in which an existing image, video, or audio recording is manipulated to depict something that never happened, typically using generative neural networks. While tools such as Photoshop have been used for years to manipulate images, advances in machine learning and artificial intelligence have made programs that let anyone create a convincing fake image or video widely and easily available.
What organizations often don’t realize – and what cybersecurity experts have been warning about for a while now – is that deepfakes pose a real threat to organizational cybersecurity. Their applications are not limited to entertainment, such as swapping Nicolas Cage’s face into movies he never starred in or posting a funny Instagram video. Deepfakes can be used to produce blackmail material, commit fraud, and trick people through social engineering into granting access to a network.
In response to the increasing threat posed by deepfake technologies, DARPA – the Defense Advanced Research Projects Agency – established the Media Forensics (MediFor) program, which, in the agency’s words, “brings together world-class researchers to attempt to level the digital imagery playing field, which currently favors the manipulator, by developing technologies for the automated assessment of the integrity of an image or video and integrating these in an end-to-end media forensics platform.” However, there is not yet a surefire way to automatically detect deepfakes.
The dangers of deepfakes to organizational cybersecurity
Imagine one of your employees receives an email that appears to be from the head of the IT department. The email contains a video file in which the IT head urges the recipient to share their database login credentials because of an urgent system upgrade. Your employee has no reason to suspect that the video or the email is inauthentic and provides the requested details. Within minutes, the attacker uses those credentials to deploy ransomware on your network, putting all the data you store on your clients, patients, or customers at risk.
Deepfake technologies are a powerful tool that cyberattackers can and do use to launch highly convincing social engineering attacks. Given that social engineering tactics like phishing, vishing, smishing, and pharming have consistently been the most frequent type of internet crime, it is all but inevitable that your organization’s employees will be targeted by deepfake-based attacks.
That’s why it is crucial that 1) you pay close attention to training your employees on the ways to spot and deal with social engineering attacks, and 2) you make sure that the training incorporates the concept of deepfakes and how to respond to them.
How to educate your employees about the dangers of deepfakes
- Start by explaining the concept – what deepfakes are and why malicious actors would want to leverage this technology.
- Explain how deepfakes can be used to get employees to compromise network security – knowingly or unknowingly. For example, a deepfake video could impersonate a colleague, a member of the cybersecurity team, or a senior executive directing an employee to take certain actions. Deepfake technologies can also be used for blackmail: employees may receive a fake video of themselves committing various acts – pornography is a common deepfake application – along with a threat to release the video unless they provide network access information.
- Provide your employees with a step-by-step guide to handling such instances. Often, such social engineering campaigns come with an urgent call to action. “Bob, I am about to get on a flight and if you don’t send me this information right away, we will lose a major client and you will be fired!” or “You have 2 minutes to provide the requested information or this video of you goes public.” Malicious actors exploit human psychology to push people into making rash decisions.
- Make sure your employees have a dedicated person on the IT and cybersecurity team to contact when they suspect that they are a target of a deepfake-based attack. At the same time, ensure that your cybersecurity team develops protocols to respond to such reports in the shortest amount of time possible.
- Create a company-wide policy on how employees should handle suspicious digital activities like social engineering attacks and make sure to explain that employees will not face any repercussions for following the protocol.
As the complexity of cyberattacks increases, so should the vigilance of organizations when it comes to cybersecurity training programs. One training per year is insufficient. Regular education on responsible digital behavior and a robust cybersecurity infrastructure will go a long way toward ensuring your cyber safety in this new digital world.