2024 is expected to be the biggest global election year in history, and it coincides with the rapid rise of deepfakes. In the Asia-Pacific region alone, deepfakes jumped 1,530% from 2022 to 2023, according to a Sumsub report.
Ahead of Indonesia’s February 14 election, a video of the late Indonesian President Suharto advocating for the political party he once presided over went viral.
The AI-generated deepfake video, which cloned his face and voice, has racked up 4.7 million views on X alone.
This was not a one-off incident.
In Pakistan, a deepfake of former Prime Minister Imran Khan emerged around the national elections, in which he announced that his party was boycotting them. Meanwhile, in the U.S., New Hampshire voters heard a deepfake of President Joe Biden asking them not to vote in the presidential primary.
Deepfakes of politicians are becoming increasingly common, especially with 2024 expected to be the biggest global election year in history.
Reportedly, at least 60 countries and more than four billion people will vote for their leaders and representatives this year, making deepfakes a matter of serious concern.
According to a Sumsub report in November, the number of deepfakes worldwide increased 10-fold from 2022 to 2023. In the Asia-Pacific region alone, deepfakes grew by 1,530% during the same period.
Online media, including social platforms and digital advertising, saw the largest increase in identity fraud rates, at 274%, between 2021 and 2023. Professional services, healthcare, transportation and video games were also among the industries affected by identity fraud.
Asia is not ready to tackle deepfakes in elections in terms of regulation, technology and education, said Simon Chesterman, senior director of AI governance at AI Singapore.
In its Global Threat Report 2024, cybersecurity firm CrowdStrike reported that, with the number of elections scheduled this year, nation-state actors including China, Russia and Iran are highly likely to conduct misinformation or disinformation campaigns to sow disruption.
“The more serious interventions would be if a major power decided it wanted to disrupt elections in a country — that would probably be more impactful than political parties playing on the fringes,” Chesterman said.
Although several governments have tools (to prevent online falsehoods), the concern is that the genie will be out of the bottle before there’s time to push it back in.
Simon Chesterman
Senior director of AI governance, AI Singapore
However, most deepfakes will still be generated by actors in the respective countries, he said.
Carol Soon, principal research fellow and head of the society and culture department at the Institute of Policy Studies in Singapore, said local actors could include opposition parties and political opponents, or extreme right-wing and left-wing groups.
Deepfake dangers
At the very least, deepfakes pollute the information ecosystem and make it harder for people to find accurate information or form an informed opinion about a party or candidate, Soon said.
Voters can also be put off by a particular candidate if they see content about a scandalous issue that goes viral before it’s debunked as fake, Chesterman said. “Although several governments have tools (to prevent online falsehoods), the concern is that the genie will be out of the bottle before there’s time to push it back in.”
“We’ve seen how quickly X can be taken over by deepfake pornography featuring Taylor Swift — these things can spread incredibly quickly,” he said, adding that regulation is often inadequate and incredibly difficult to enforce. “It’s often too little, too late.”
Adam Meyers, head of counter adversary operations at CrowdStrike, said deepfakes can also play into people’s confirmation bias: “Even if they know in their heart that it’s not true, if that’s the message they want and something they want to believe in, they’re not going to let it go.”
Chesterman also said that fake footage that shows election misconduct, such as ballot stuffing, can cause people to lose faith in the validity of the election.
On the other hand, candidates may deny the truth about themselves that is negative or unflattering, and instead attribute it to deepfakes, Soon said.
Who should be responsible?
There is now a recognition that more responsibility needs to be taken on by social media platforms because of the quasi-public role they play, Chesterman said.
In February, 20 leading technology companies, including Microsoft, Meta, Google, Amazon and IBM, as well as artificial intelligence startup OpenAI and social media companies such as Snap, TikTok and X, announced a joint commitment to combat the deceptive use of AI in elections this year.
The tech accord the companies signed is an important first step, Soon said, but its effectiveness will depend on implementation and enforcement. With tech companies adopting different measures across their platforms, a multi-pronged approach is needed, she said.
Tech companies will also need to be very transparent about the kinds of decisions they make and the processes they put in place, Soon added.
But Chesterman said it is also unreasonable to expect private companies to carry out what are essentially public functions. Deciding what content to allow on social media is a difficult call, and companies may take months to decide, he said.
“We shouldn’t just rely on the good intentions of these companies,” Chesterman added. “That’s why regulations need to be put in place and expectations set for these companies.”
To that end, the Coalition for Content Provenance and Authenticity (C2PA), a non-profit organization, has introduced digital content credentials, which show viewers verified information such as who created the content, where and when it was created, and whether generative AI was used to produce the material.
C2PA member companies include Adobe, Microsoft, Google and Intel.
Earlier this year, OpenAI announced it would implement C2PA content credentials for images created with its DALL·E 3 offering.
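For readers curious about what checking these credentials looks like in practice, here is a minimal sketch of inspecting C2PA metadata on a downloaded image. It assumes the open-source c2pa-python bindings; the Reader class and the manifest-store fields used below (claim_generator, assertions, validation_status) reflect one published version of that API and should be verified against the library’s current documentation.

```python
# A sketch of inspecting C2PA content credentials on a downloaded image,
# assuming the open-source c2pa-python bindings (pip install c2pa-python).
# Class and field names reflect one published version of that API; check
# the library's current documentation before relying on them.
import json
import sys

from c2pa import Reader


def describe_credentials(path: str) -> None:
    """Print the provenance metadata embedded in a signed asset, if any."""
    try:
        reader = Reader.from_file(path)  # parse the embedded manifest store
    except Exception as err:
        print(f"No readable credentials: {err}")  # e.g. no manifest embedded
        return

    store = json.loads(reader.json())
    # Tampering or broken signatures are reported here rather than raised.
    for issue in store.get("validation_status", []):
        print("Validation issue:", issue.get("code"))

    active = store["manifests"][store["active_manifest"]]
    # claim_generator names the tool that signed the asset, e.g. an image generator.
    print("Claim generator:", active.get("claim_generator"))
    for assertion in active.get("assertions", []):
        print("Assertion:", assertion.get("label"))  # e.g. c2pa.actions


if __name__ == "__main__":
    describe_credentials(sys.argv[1] if len(sys.argv) > 1 else "image.jpg")
```

The design point is that the credential travels inside the file itself, so a platform or an end user can check provenance without calling back to whichever tool generated the image.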
In an interview with Bloomberg House at the World Economic Forum in January, OpenAI founder and CEO Sam Altman said the company is “pretty focused” on making sure its technology isn’t used to manipulate elections.
“I think it would be terrible if I said, ‘Oh yeah, I’m not worried. I feel great.’ For example, we’ll have to watch this fairly closely this year [with] super strict monitoring [and] super tight feedback.”
“I think our role is very different from the role of a distribution platform” like a social media site or a news publisher, he said. “We have to work with them, so it’s like generate here and distribute here. And there should be a good conversation between them.”
Meyers proposed creating a nonpartisan, non-profit technical organization with the sole mission of analyzing and identifying deepfakes.
“The public can then send them content they suspect has been manipulated,” he said. “It’s not flawless, but at least there’s some mechanism that people can rely on.”
But ultimately, while technology is part of the solution, much of it comes down to consumers, who are not yet ready for it, Chesterman said.
Soon also emphasized the importance of educating the public.
“We need to continue outreach and engagement efforts to increase vigilance and awareness when the public comes across information,” she said.
Society must be more vigilant; in addition to fact-checking when something is highly suspicious, users should also fact-check critical information, especially before sharing it with others, she said.
“There’s something for everyone to do,” Soon said. “It’s all hands on deck.”
— CNBC’s Mackenzie Sigalos and Ryan Brown contributed to this report.
https://www.cnbc.com/2024/03/14/as-asia-enters-a-deepfake-era-is-it-ready-to-handle-election-interference.html