TechNews

Deepfake Phishing: That’s Not Your Boss on the Phone

Phishing attacks occur over more than just email. Any communication platform represents an opening for an attack, from social media to text messages. Email phishing has historically represented the largest threat, stealing over $12 billion in the last five years. However, as email phishing defenses have strengthened, attackers are spreading out across multiple communication channels. Vishing – a portmanteau of “voice” and “phishing” – represents the next step in social engineering attacks.

What is a Deepfake?

A deepfake is a video, image or audio file depicting something that was never actually recorded. Common examples involve an advanced form of face swapping, but by training an AI to recognize the patterns, movements and mannerisms of a person’s face and voice, it’s possible to fabricate almost every component of their virtual presence.

Take the examples of Obama saying that “Killmonger was right”, or of Mark Zuckerberg stating he had “total control of billions of people’s stolen data”. These are only identifiable as deepfakes because their creators explicitly labeled them as such.

These are created through a variety of artificial intelligence algorithms. In one technique, patterns of facial movement are tracked, logged and replicated through a generative adversarial network (GAN). This describes two models working against each other: a “generator” that replicates the target’s face, and a “discriminator” that judges how real or fake the result looks. If an individual wanted to capture the face of a particular celebrity, they’d feed this network existing videos, making sure to include as large a variety of angles as possible.
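The adversarial loop described above can be sketched in miniature. The toy below swaps face data for simple 1-D numbers – “real” samples cluster around 4.0 as a stand-in for genuine footage – and both networks are reduced to single linear units. Everything here (the models, data, and learning rate) is an illustrative assumption, not a real deepfake pipeline; it only shows the generator-versus-discriminator dynamic:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_toy_gan(steps=3000, lr=0.05, seed=0):
    rng = np.random.default_rng(seed)
    g_w, g_b = rng.normal(), 0.0  # generator: fake = g_w * noise + g_b
    d_w, d_b = rng.normal(), 0.0  # discriminator: p(real) = sigmoid(d_w * x + d_b)

    for _ in range(steps):
        z = rng.normal()                 # random noise fed to the generator
        fake = g_w * z + g_b             # the generator's forgery
        real = 4.0 + 0.1 * rng.normal()  # a sample of "genuine" data

        # Discriminator step: push scores on real samples toward 1,
        # and scores on the generator's fakes toward 0.
        for x, label in ((real, 1.0), (fake, 0.0)):
            p = sigmoid(d_w * x + d_b)
            grad = p - label             # cross-entropy gradient w.r.t. the logit
            d_w -= lr * grad * x
            d_b -= lr * grad

        # Generator step: nudge parameters so its fakes score closer to "real",
        # i.e. so the discriminator is fooled.
        p = sigmoid(d_w * fake + d_b)
        g_w -= lr * (p - 1.0) * d_w * z
        g_b -= lr * (p - 1.0) * d_w

    return g_w, g_b

g_w, g_b = train_toy_gan()
```

As training alternates, the discriminator gets better at spotting forgeries and the generator gets better at producing them – the same arms race that, at scale and with real face data, yields convincing deepfakes.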

Once this data set has been thoroughly analyzed, the learned patterns are applied to other media. This way, realistic movements and familiar expressions are implanted into a pre-existing piece of media. Make no mistake: this process is not limited to visual elements. Long reels of speech recordings can also be fed into a similar algorithm, which processes the cadence, tone and inflections of the speaker, allowing a creator to produce entirely synthetic audio. Combine that with video, and you’ve got yourself a convincing recording of someone saying – or doing – something they didn’t.

This fabrication process is becoming easier and easier, too. ‘Shallowfakes’ have already hit the larger market, as apps turn still images into lifelike projections. The overwhelming majority of deepfakes are pornographic, nonconsensually implanting women’s faces onto adult performers.

However, an increasing number of cybercriminals have taken note of the technology. Ransoming with fake but severely compromising videos is on the rise, as attackers threaten people or businesses with a smorgasbord of illicit – but completely fabricated – behaviors. Faced with the reputational damage such a video could wreak if released to the public, many victims choose simply to pay.

The increasing prevalence of deepfake technology has already started seeping into phishing attacks, as attackers can now mimic figures of authority across an even greater number of platforms. Combined with widespread remote work – where you may collaborate for years with colleagues you’ve never once met face-to-face – the threat landscape is ripe for a major deepfake attack.

Already in the Wild 

In 2019, the CEO of a UK-based energy company received a call. The chief executive of the firm’s German parent company was apparently on the other end, requesting that $243,000 be transferred over. The matter was urgent: the caller demanded the money within the hour. The CEO was certain that it was his boss on the phone; he recognized the slight German accent and voice.

Complying, the CEO sent over the six-figure amount and continued with his day. Unfortunately, in one of the first cases of artificial intelligence-based hacking, he had just been duped into sending that money to an attacker-controlled bank account.

More general vishing attacks have replicated this pattern across WhatsApp and Messenger voice messages, lending an extra layer of apparent authenticity to attackers’ attempts.

Furthermore, deepfake vishing can be used directly against phishing protection measures. Many organizations train their employees to verify suspect emails in a specific manner: by ringing up the sender. With a two-pronged email and audio attack, attackers can pull ahead of their victims’ phishing defenses.

Protecting Against Deepfake Phishing

Whereas email phishing attacks are relatively simple to prevent, vishing represents a far harder task for technology-based prevention. Vishing occurs primarily over phone calls – and could soon be occurring over video calls. To defend against this automatically, an organization would need to eavesdrop on all phone and video communications, monitoring them for warning signs of an attack.

This is not necessarily impossible – for example, an initial weakness of even high-quality deepfakes was in the fake’s blinking. Because most deepfake datasets were fed images and video of their targets’ faces with the eyes open, the fake videos created from these would often blink strangely, or not at all. By setting one AI against another, protective technology could in principle pick up on these discrepancies and alert the viewer to a potential deepfake. However, eavesdropping on every communication channel between employees raises serious privacy and ethical concerns.

And of course, as soon as the blinking was identified as a widespread issue, deepfake creators began filling their datasets with blinks, and the weakness was soon eradicated.
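As a historical illustration, a blink-rate check of this kind could be sketched as follows. The eye-aspect-ratio (EAR) values, thresholds, and blink-rate cutoff below are all hypothetical stand-ins for what a real face-landmark detector would produce – the EAR drops sharply while the eyes are closed, so a run of low values marks one blink:

```python
def count_blinks(ear_series, closed_thresh=0.2):
    """Count blinks in a sequence of per-frame eye-aspect-ratio (EAR) values.

    A contiguous run of frames below closed_thresh is counted as one blink.
    The threshold is illustrative, not tuned against real footage.
    """
    blinks = 0
    in_blink = False
    for ear in ear_series:
        if ear < closed_thresh:
            if not in_blink:       # first frame of a new closed-eye run
                blinks += 1
                in_blink = True
        else:
            in_blink = False       # eyes open again; run has ended
    return blinks

def looks_suspicious(ear_series, fps=30, min_blinks_per_min=4):
    """Flag clips whose blink rate is implausibly low for a real human."""
    minutes = len(ear_series) / fps / 60
    return count_blinks(ear_series) < min_blinks_per_min * minutes

# Hypothetical per-frame EAR values: ~0.3 with eyes open, ~0.1 mid-blink.
open_frames, closed_frames = [0.3] * 30, [0.1] * 3
clip = open_frames * 30 + closed_frames + open_frames * 30
print(count_blinks(clip), looks_suspicious(clip))
```

As the article notes, this particular tell was short-lived: once datasets included blinking footage, the heuristic stopped working, which is why detection and generation remain an arms race.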

Instead of attempting to wipe out the deepfake threat at the source, organizations should focus on the attacker’s objectives in their threat modeling. A vishing attack will have one of two goals: gaining access or infecting a device with malware. A threat model should address both of these scenarios, with a separate solution implemented for each.

In the case of illicit access – for example, through a compromised application account – a web application firewall (WAF) can detect and block unauthorized access. Credential theft is one of the major goals of any phishing attack, and represents a severe hole in an organization’s defenses. When credential theft is suspected, a WAF can use device threat intelligence to retain a degree of control over who connects to your network.
