The UK government is taking decisive action as the threat posed by deepfakes in social engineering attacks continues to grow.
In recent weeks, the UK’s AI Safety Institute announced a new research initiative, the Systemic Safety Grants Programme, which aims to fast-track the adoption of technologies that support the fight against deepfakes and other AI threats. The institute has partnered with Innovate UK and the Engineering and Physical Sciences Research Council (EPSRC) to deliver the programme.
One of the most dangerous trends in the cyber attacks UK enterprises face is the integration of AI into phishing, particularly the use of deepfakes in whaling, the targeted phishing of business leaders. C-suite executives and other senior managers are in the crosshairs because these cybercriminals concentrate their efforts where the potential yield is biggest.
This is not a new tactic: there have been a handful of reported cases of AI-aided whaling over the years, including a 2019 incident that targeted the CEO of a UK-based energy firm. What is unprecedented is how prevalent the tactic has now become.
Enterprise leaders need to understand the threat in order to implement appropriate preventive and mitigating measures. Armed with knowledge of how these scams are created and executed, it is possible to thwart a deepfake-aided whaling attempt. Here’s a look at how AI-enhanced whaling works and how to stop it.
Deconstructing a deepfake whaling scam
A whaling scam starts with reconnaissance. This entails identifying a target and gathering relevant information: the attacker compiles data on the target’s contacts, professional and personal relationships, and written and verbal communications.
These details may be publicly available or obtained through other sources, such as an earlier breach of an organisation’s IT resources. The attacker may also collect information about the target’s schedule and work habits to optimise the timing and elements of the attack.
With all the necessary details compiled and analysed, the threat actor can generate the deepfakes to be used in the attack. Producing convincing deepfakes requires extensive media data, such as recordings of the impersonated person’s face and voice.
The next step is execution. At this point, the attacker initiates contact with the target. The generated deepfakes are not necessarily used immediately; the attacker first has to go through two phases: capturing the target’s attention and earning the target’s trust. To capture attention, the threat actor creates a sense of urgency, usually through phishing emails that demand an immediate response. To earn trust, the attacker deploys the deepfakes. Once trust is obtained, the threat actor can start making fraudulent requests.
The last step is exfiltration. This is when the attacker collects their spoils, which can be data or money. In some cases, an attack’s main objective may not be data or financial theft but operational disruptions or reputational damage.
Deepfake generation and use
Today, tools to produce deepfakes are alarmingly accessible. Open-source tools like DeepFaceLab and FaceSwap make it possible to swap faces in video chats. Proprietary tools like Synthesia and D-ID enable the generation of realistic videos of people speaking based on still images. Meanwhile, voice cloning tools such as ElevenLabs and Speechify can simulate a person’s voice in exchanges over chat apps or during live calls.
Whaling perpetrators use deepfakes in nuanced ways. Since they are dealing with high-value targets, they cannot rely on generic schemes, as doing so would significantly reduce their chances of success. They have to tailor each attack to the situation. When dealing with a security-conscious CEO, for instance, they have to use deepfakes sparingly and strategically: an animated face swap may look slightly off to an alert target, so attackers find plausible excuses to keep communications to voice chat or text messaging.
The main goal of using deepfakes in whaling is to make the scam more convincing. Attackers seek to manipulate victims emotionally to make them more disposed to comply with requests or demands. They usually focus on the following key vulnerabilities:
- Submission to authority or a relationship of trust – Whalers assume the identity of a top-level business executive to pressure key senior employees into carrying out their instructions. An example of this is the attack on supply chain solutions company Scoular, which lost $17 million because of craftily spoofed emails that appeared to come from the company’s CEO.
- Time pressure – Whaling perpetrators take advantage of people’s sense of urgency over a time-sensitive action. For example, the attacker may pose as a vendor and offer a company’s CFO a substantial discount on their payables if payment is made early to a specific account. This time pressure also works on employees when a high-level official is impersonated to compel certain actions, as happened at Mattel, where a finance executive, eager to please her “new boss”, was duped into sending $3 million to an attacker’s account.
- Personal relationships – Few victims have come forward to admit being duped by someone impersonating a known and trusted contact, but it is highly plausible that such tactics are often used in whaling attacks to manipulate top-level executives into damaging actions. They tend to go unreported because of the embarrassing nature of the offence and the negative implications for the affected organisation.
Mitigation and prevention strategies
Deepfakes and whaling target people’s lack of cybersecurity vigilance and tendency to trust familiarity. As such, effective mitigation and prevention solutions have to address these weaknesses.
One crucial solution is cybersecurity awareness and training. It is vital for everyone in an organisation to learn how to detect indicators of attacks. Like any employee, executives need to know how to distinguish deepfakes from real videos or audio of people. Everyone should develop the ability to notice poorly synchronised lip movements, unnatural posture and lighting, and odd movements or sounds during video and audio calls.
Another important preventive strategy is to establish a standardised protocol for communications and identity verification. All communications in an organisation should go through a specific process and be limited to specific approved apps and tools, denying threat actors the opportunity to impersonate anyone.
Additionally, official transactions in an organisation should always undergo authenticity verification. Enterprises should adopt the zero-trust principle and never assume legitimacy based on the identity of the official supposedly making a request or instruction. Every payment, invoice approval, or other transaction should go through strict vetting. Organisations should always employ multi-factor authentication.
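To make the idea concrete, here is a minimal sketch, in Python, of what zero-trust vetting of a payment request could look like. Every name in it (APPROVED_CHANNELS, out_of_band_confirmed, second_approver_signed_off) is hypothetical rather than drawn from any particular product; the point is simply that the apparent identity of the requester is never, on its own, enough to release funds.

```python
from dataclasses import dataclass

# Channels the organisation has approved for financial requests (hypothetical policy).
APPROVED_CHANNELS = {"corporate_email", "erp_workflow"}

@dataclass
class PaymentRequest:
    requester: str       # identity claimed by the sender
    channel: str         # channel the request arrived on
    amount: float
    payee_account: str

def out_of_band_confirmed(request: PaymentRequest) -> bool:
    """Placeholder: call the claimed requester back on a number already on file,
    never one supplied in the request, and confirm the details verbally."""
    ...

def second_approver_signed_off(request: PaymentRequest) -> bool:
    """Placeholder: require sign-off from a second authorised approver,
    each authenticating with multi-factor authentication."""
    ...

def vet_payment(request: PaymentRequest) -> bool:
    """Zero-trust vetting: the claimed identity of the requester is never,
    by itself, grounds for releasing funds."""
    if request.channel not in APPROVED_CHANNELS:
        return False  # requests over unapproved channels (e.g. a surprise video call) are rejected
    if not out_of_band_confirmed(request):
        return False  # independent verbal confirmation via a known-good contact route
    if not second_approver_signed_off(request):
        return False  # dual control, with MFA for every approver
    return True
```

The placeholder steps are where the real controls live: the call-back must use contact details already held by the organisation, and every approver should authenticate with multi-factor authentication before signing off.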
Lastly, it helps to deploy deepfake detection tools. The tools available today are constantly playing catch-up with rapidly improving deepfake generation, but they can provide a useful first line of defence, especially for organisations that are still unfamiliar with how prevalent such attacks have become.
The future of deepfake scams
The deepfake problem is only going to get worse. AI-enhanced systems are getting better at cloning real voices in real time and simulating videos of real people. Hence, it is vital to take the threat seriously. Organisations must recognise that the battlefield has shifted and adapt their strategies accordingly.
However, it is important to emphasise that AI is not only a tool for threat actors; it is also highly useful in establishing defences. Enterprises should not let cybercriminals monopolise AI for their nefarious goals, and should learn to leverage it to provide automated cybersecurity training, evaluate employee behaviours, and spot cybersecurity risks and attacks.