2024 is expected to be the biggest global election year in history. This coincides with the rapid rise of deepfakes. In the Asia-Pacific region alone, there was a 1,530% jump in deepfakes from 2022 to 2023, according to a Sumsub report.


Cybersecurity experts fear that AI-generated content has the potential to distort our perception of reality, a concern that is all the more worrisome in a critical election year.

But one leading security expert disputes that view, suggesting instead that the threat deepfakes pose to democracy may be “overblown.”

Martin Lee, technical lead of security research at Cisco’s Talos intelligence and research group, told CNBC that he thinks deepfakes, while a powerful technology in their own right, are not as impactful as fake news.

New generative AI tools, however, “threaten to facilitate the generation of fake content,” he added.

AI-generated material can often contain recognizable indicators that suggest it was not created by a real person.

Visual content, in particular, has proven prone to flaws. For example, AI-generated images may contain visual anomalies, such as a person with more than two arms or a limb that blends into the background of the image.

It can be more difficult to distinguish synthetically generated voice audio from voice clips of real people. But AI is only as good as its training data, experts say.

“However, machine-generated content can often be detected as such when viewed objectively. In any case, generating content is unlikely to limit attackers,” Lee said.

Experts previously told CNBC that they expect AI-generated disinformation to be a key risk in upcoming elections around the world.

“Limited utility”

Matt Calkins, CEO of enterprise technology firm Appian, which makes software tools that help businesses build apps more easily, said AI has “limited utility.”

Many of today’s generative AI tools can be “boring,” he added. “Once it gets to know you, it can go from amazing to helpful [but] it just can’t cross that line right now.”

“Once we’re ready to trust AI with knowledge of ourselves, that’s going to be really amazing,” Calkins told CNBC in an interview this week.

That could make it a more effective — and dangerous — disinformation tool in the future, Calkins warned, adding that he’s not happy with the progress made in U.S. tech regulation efforts.

It may take AI producing something egregiously “offensive” for U.S. lawmakers to act, he added. “Give us a year. Wait until AI insults us. And then maybe we’ll make the right decision,” Calkins said. “Democracies are reactive institutions.”

No matter how advanced AI is, however, Cisco’s Lee says there are some tried-and-true ways to spot misinformation — whether it’s machine-made or human-made.

“People need to be aware that these attacks are happening and be aware of the techniques that can be used. When we come across content that stirs our emotions, we need to stop, pause, and ask ourselves if the information itself is believable,” Lee suggested.

“Is it published by a reputable media source? Are other reputable media sources reporting the same thing?” he said. “If not, it’s probably a scam or disinformation campaign that should be ignored or reported.”

https://www.cnbc.com/2024/05/09/generative-ais-disinformation-threat-overblown-cyber-expert-says.html