2024 is set to be the biggest global election year in history. It coincides with the rapid rise of deepfakes: in APAC alone, there was a 1,530% increase in deepfakes from 2022 to 2023, according to a Sumsub report.
Cybersecurity experts fear that AI-generated content has the potential to distort our perception of reality, a concern that is all the more pressing in a year filled with crucial elections.
But one leading expert is going against the grain, suggesting instead that the threat deepfakes pose to democracy may be “overblown.”
Martin Lee, CTO of Cisco’s Talos intelligence and security research group, told CNBC that he believes deepfakes — while a powerful technology in their own right — are not as impactful as fake news.
But new artificial intelligence generation tools “threaten to make the creation of fake content easier,” he added.
AI-generated material often contains identifiable indicators that it was not produced by a real person.
Visual content, in particular, has proven prone to such flaws. For example, AI-generated images may contain visual anomalies, such as a person with more than two arms or a limb that has merged into the background of the image.
It can be harder to distinguish between synthetically generated voice audio and voice clips of real people. But AI is still only as good as its training data, experts say.
“However, machine-generated content can often be detected as such when viewed objectively. In any case, content generation is unlikely to limit attackers,” Lee said.
Experts previously told CNBC that they expect AI-generated disinformation to be a key risk in upcoming elections around the world.
‘Limited Use’
Matt Calkins, CEO of enterprise technology company Appian, which helps businesses build apps with software tools, said artificial intelligence has “limited utility.”
Many of today’s AI tools can be “boring,” he added. “Once it gets to know you, it can go from amazing to useful [but] it just can’t cross that line right now.”
“Once we’re willing to trust AI with knowledge of ourselves, it’s going to be really incredible,” Calkins told CNBC in an interview this week.
That could make it a more effective, and more dangerous, disinformation tool in the future, Calkins warned, adding that he is unhappy with the progress of U.S. efforts to regulate the technology.
It may take AI producing something extremely “offensive” for U.S. lawmakers to act, he added. “Give us a year. Wait until AI offends us. And then maybe we’ll make the right decision,” Calkins said. “Democracies are reactive institutions.”
No matter how advanced AI gets, though, Cisco’s Lee says there are some proven ways to spot disinformation — whether it’s machine-generated or human-generated.
“People need to be aware that these attacks are happening and be aware of the techniques that may be used. When faced with content that triggers our emotions, we should stop, pause and ask ourselves if the information itself is plausible,” Lee suggested.
“Is it published by a reliable media source? Are other reliable media sources reporting the same thing?” he said. “If not, it’s probably a scam or misinformation campaign that should be ignored or reported.”