Impersonation is hardly a revolutionary type of fraud, but this summer Patrick Hillman, chief communications officer at cryptocurrency exchange Binance, fell victim to a new approach to spoofing: artificial intelligence (AI)-generated video, also known as a deepfake.

Deepfakes take an existing video of a real person and create a simulation that can be used in criminal activities (photo: Yakov Oskanov/Shutterstock)

In August, Hillman, who has been with the company for two years, received several messages online from people who claimed he had met with them about “potential opportunities to list their assets on Binance” – something he found odd, because he had no oversight of Binance’s listings. The executive also said he had never met any of the people who sent him the messages.

In a company blog post, Hillman alleged that cybercriminals set up Zoom conversations with people through a fake LinkedIn profile, and used his previous news interviews and TV appearances to create a deepfake of him to participate in those conversations. He described it as “sophisticated enough to fool a few highly intelligent members of the crypto community”.

This high-tech take on the well-known “Nigerian prince” email scam could prove costly for victims, and for cybercriminals the prospect is enticing. Instead of putting resources into traditional forms of cyberattack such as DDoS campaigns or account hacking, they can potentially create a deepfake version of a well-known company executive, impersonating their image and, in some cases, their voice.

Bypassing conventional cybersecurity authentication protections, hackers can video call a company employee, or even phone them, and request a money transfer to a “company bank account”. In the case of Binance, the scammers promised victims a token listing on the exchange in return for a payment.

But despite their high profile, confirmed cases of deepfake cyberattacks are few and far between. And while the technology is becoming easier to access and deploy, some experts believe it will retain a complexity that puts it beyond the reach of most cybercriminals. Meanwhile, researchers are developing methods that could neutralize attacks before they start.

Henry Ajder is an expert on deepfake videos and other so-called “synthetic media”. Since 2019 he has been exploring the deepfake landscape, and he hosts a BBC Radio 4 podcast on the disruptive ways in which these images are changing everyday life.

He found that the term “deepfake” originally appeared on Reddit in late 2017, referring to a woman’s face superimposed onto pornographic footage. But since then, he tells Tech Monitor, it has expanded to include other types of generative and synthetic media.


“Deepfake voice audio is cloning someone’s voice, either through text-to-speech or through speech-to-speech, which is like voice skinning,” he explains. Voice skinning is when a speaker overlays someone else’s voice on top of their own in real time.

Ajder continues: “There are also things like generative text models, such as OpenAI’s GPT-3, where you can type a prompt and then get a whole passage of text that sounds as if a particular person wrote it.”
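For illustration, here is a minimal sketch of the prompt-to-passage generation Ajder describes, using the legacy (pre-1.0) `openai` Python SDK from the GPT-3 era; the model name and prompt are placeholders, and newer SDK versions use a different interface:

```python
# Minimal sketch: prompt in, passage out, via a GPT-3-era completions
# endpoint. Assumes the legacy (pre-1.0) `openai` package and an API key
# in the environment; model name and prompt are illustrative placeholders.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # never hard-code credentials

response = openai.Completion.create(
    model="text-davinci-003",  # a GPT-3-family model
    prompt="Write a short memo in the style of a company CEO announcing results.",
    max_tokens=200,
    temperature=0.7,  # higher values produce more varied prose
)

print(response["choices"][0]["text"].strip())
```

A few seconds and a well-chosen prompt are enough to produce plausible-sounding text, which is precisely what makes the technique attractive to fraudsters.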

Although the term has evolved to encompass a broader meaning, Ajder says that the majority of deepfake content has a “malicious origin” and is what he would call image abuse. He adds that the increasing commercialization of the tools used to create deepfakes means they are easy to use and can be deployed on lower-powered devices such as smartphones.

This evolution also means that the end result is ever more realistic. “You’ve got this pretty powerful triad of increasing realism, efficiency and affordability,” Ajder says.

Deepfakes: fraud on steroids

What does this mean for business? While image abuse mostly targets private individuals, in the cybersecurity space Ajder says there are increasing reports of deepfakes being used against businesses, in a technique known as “vishing”.

“Vishing is like voice phishing,” he says. “People are synthetically reproducing the voices of business leaders to extort money or obtain confidential information.” Ajder mentions several reports from the business world of millions of dollars being siphoned off by people posing as financial controllers.

“We’re also seeing people increasingly use real-time puppetry, or facial re-enactment,” Ajder tells Tech Monitor. “It’s the equivalent of having an avatar of someone whose facial movements mirror my own in real time. But the person on the other end of the call doesn’t see my face, they see that avatar’s face.”

This is the method believed to have been used to impersonate Hillman at Binance. Ajder describes using deepfakes in this way as “fraud on steroids” and says it is an increasingly common tactic among cybercriminals.

Limited and conflicting information about deepfake cyberattacks

While there have been reports of the use of deepfakes and the opportunities they offer cybercriminals, confirmed reports of their deployment remain limited.

David Sancho, senior threat researcher at Trend Micro, believes the problem is real. “The potential for abuse is very high,” he says. “There are successful attacks in all three use cases [video, image and audio] and I think we will see more.”

The researcher points to an attack that took place in January 2020 and is being investigated by the US government. On that occasion, cybercriminals managed to convince an employee of a bank based in the United Arab Emirates that they were a director of one of the bank’s client companies, using deepfake audio as well as fake email messages. The bank employee was persuaded to transfer funds to accounts controlled by the fraudsters.

However, Sophos researcher John Shier tells Tech Monitor that there is no real indication that cybercriminals are using deepfakes “at scale”.

“There doesn’t seem to be any real concerted effort to incorporate deepfakes into cybercrime campaigns,” he says.

Shier believes the complexity involved in creating a convincing deepfake is still enough to deter many criminals. “Although it’s getting easier every day, it’s still probably beyond most cybercriminal gangs to do at the scale and speed they’d like, compared to just sending out three million bulk phishing emails at once,” he says.

Scientists are developing ways to identify deepfakes

As deepfakes become more sophisticated, researchers in the cybersecurity space are fighting back. A method developed by academics at New York University, dubbed GOTCHA (the name is an homage to the CAPTCHA system widely used to verify human users of websites), is designed to identify deepfakes before they can do any harm.

The method involves asking participants in a video call to perform tasks, such as covering their face or making an unusual expression, that the deepfake algorithm is unlikely to have been trained on. However, the team behind the study notes that it can be “difficult” to get users to “comply with routine testing”, and it also suggests automated checks, such as applying a filter or sticker to the video stream to confuse the deepfake model.
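The paper’s specifics aside, the general shape of a challenge-response check is easy to sketch. The following is an illustrative outline only, not the NYU team’s implementation: the challenge list, `VideoClip` type and `artifact_score` stub are all hypothetical, and a real system would back the scoring with a trained forgery detector:

```python
# Illustrative challenge-response loop in the spirit of GOTCHA-style
# verification: issue a random task a deepfake model is unlikely to have
# been trained on, then score the caller's video for synthesis artifacts.
# All names here are hypothetical; the scorer is a stub.
import random
from dataclasses import dataclass
from typing import Callable

CHALLENGES = [
    "Cover half of your face with your hand",
    "Turn your head 90 degrees to the left",
    "Press a finger firmly against your cheek",
    "Make an exaggerated, unusual facial expression",
]

@dataclass
class VideoClip:
    frames: list  # placeholder for decoded video frames

def artifact_score(clip: VideoClip) -> float:
    """Stub: a real system would run a trained face-forgery detector here
    and return the estimated probability that the feed is synthetic."""
    return 0.0

def verify_caller(capture_response: Callable[[str], VideoClip]) -> bool:
    challenge = random.choice(CHALLENGES)  # unpredictable, like a CAPTCHA
    print(f"Please respond to this challenge: {challenge}")
    clip = capture_response(challenge)  # record the caller's attempt
    return artifact_score(clip) < 0.5  # below threshold: likely genuine
```

The CAPTCHA analogy holds: the point is not that any single challenge is impossible to fake, but that an attacker cannot anticipate which one will be asked.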

Additionally, researchers at the University of Florida have made advances in audio forgery detection. They have developed a way to measure the acoustic and fluid-dynamic differences between voice samples produced organically by human speakers and those that are synthetically generated.
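As a loose illustration of the feature-comparison idea (and emphatically not the Florida team’s actual acoustic model), one could compare coarse spectral statistics of a known-genuine recording against a suspect one using the `librosa` audio library; the file names below are hypothetical:

```python
# Illustrative only: compare coarse spectral statistics of two recordings.
# The University of Florida work models vocal-tract acoustics and fluid
# dynamics far more rigorously; this merely shows the shape of a
# feature-comparison approach. Requires `librosa` and `numpy`.
import numpy as np
import librosa

def spectral_profile(path: str) -> dict:
    y, sr = librosa.load(path, sr=16000)  # mono audio at 16 kHz
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)
    flatness = librosa.feature.spectral_flatness(y=y)
    return {
        "centroid_mean": float(np.mean(centroid)),  # "brightness" of the voice
        "flatness_mean": float(np.mean(flatness)),  # noisiness vs. tonality
    }

# Hypothetical file names, for illustration.
genuine = spectral_profile("known_genuine_sample.wav")
suspect = spectral_profile("incoming_call_sample.wav")
print(genuine, suspect)  # large divergences may warrant closer scrutiny
```

Simple statistics like these are far too blunt to catch modern synthesis on their own, which is why the Florida researchers ground their measurements in the physics of the human vocal tract.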

Trend Micro’s Sancho says criminals can find ways to bypass such protections. “Note that there is more than one [type of algorithm], so if the results aren’t great with one, the attacker can refine it or try another until the end product is convincing,” he continues. “The starting material can be chosen so that the final product is good enough; attackers are not looking for perfection, just to be convincing to the target they are pursuing.”

Will deepfake cybercrime ever go mainstream?

Deepfakes have already found success in business scams – and even more so in romance scams and revenge porn – and researchers clearly already see them as a threat. How long before deepfake vishing and impersonation are deployed at the same scale as phishing campaigns?

“It’s crazy how far the technology has come in such a short period of time,” says Ajder. “If [a cybercriminal] is trying to impersonate someone to obtain confidential information, or for financial extortion or fraud, they will require very little data to generate good models. We are already seeing huge progress in this regard.”

