Disinformation is expected to be among the top cyber risks for the 2024 election.
Britain is expected to face a barrage of cyberattacks and state-backed disinformation campaigns as it heads to the polls in 2024 — and artificial intelligence is a key risk, according to cyber experts who spoke to CNBC.
Britons will vote in local elections on May 2, and a general election is expected in the second half of this year, although British Prime Minister Rishi Sunak has yet to commit to a date.
The votes come as the country grapples with a range of problems, including a cost-of-living crisis and sharp divisions over immigration and asylum.
“With most UK citizens voting at the polls on Election Day, I expect the majority of cybersecurity risks to emerge in the months leading up to that day,” said Todd McKinnon, CEO of identity security company Okta.
It wouldn’t be the first time.
In 2016, the US presidential election and the UK Brexit vote were found to have been disrupted by disinformation shared on social media platforms, allegedly by groups linked to the Russian state, although Moscow denies these claims.
According to cyber experts, state actors have since routinely carried out attacks in various countries in attempts to manipulate the outcome of elections.
Meanwhile, last week, the UK claimed that Chinese state hacking group APT 31 had tried to access the email accounts of British lawmakers, but said such attempts were unsuccessful. London has imposed sanctions on Chinese individuals and a tech company in Wuhan believed to be a front for APT 31.
The US, Australia and New Zealand followed with their own sanctions. China has denied the allegations of state-sponsored hacking, calling them “baseless.”
Cybercriminals using AI
Cyber experts expect malicious actors to interfere in the upcoming election in a number of ways — especially through disinformation, which is expected to be even worse this year due to the widespread use of artificial intelligence.
Synthetic images, videos and audio created using computer graphics, simulation methods and artificial intelligence – commonly referred to as “deepfakes” – will become commonplace as it gets easier for people to create them, experts say.
“Nation-state actors and cybercriminals are likely to use AI-powered identity-based attacks such as phishing, social engineering, ransomware and supply chain compromise to target politicians, campaign staff and election-related institutions,” Okta’s McKinnon added.
“We’re also confident we’ll see an influx of AI and bot-based content generated by threat actors to push out disinformation on an even greater scale than we’ve seen in previous election cycles.”
The cybersecurity community has called for increased awareness of this type of AI-generated disinformation, as well as international cooperation to mitigate the risk of such malicious activity.
Top electoral risk
Adam Meyers, head of counter adversary operations at cybersecurity firm CrowdStrike, said AI-powered disinformation is a top risk for the 2024 elections.
“Right now, generative AI can be used for bad or for good, and so we’re seeing both applications being adopted more and more every day,” Meyers told CNBC.
China, Russia and Iran are most likely to conduct misinformation and disinformation operations against various global elections with the help of tools such as generative AI, according to CrowdStrike’s latest annual threat report.
“This democratic process is extremely fragile,” Meyers told CNBC. “When you start to look at how hostile nation-states like Russia or China or Iran can leverage generative AI and some of the newer technologies to craft messages and use deepfakes to create a story or narrative that is compelling for people to accept, especially when people already have this kind of confirmation bias, it’s extremely dangerous.”
A key problem is that AI lowers the barrier to entry for criminals looking to exploit people online. That has already happened in the form of scam emails created using easily accessible AI tools like ChatGPT.
Hackers are also developing more advanced — and personal — attacks by training AI models on our own data available on social media, according to Dan Holmes, fraud prevention specialist at regulatory tech firm Feedzai.
“You can train these voice AI models very easily … through exposure to social [media],” Holmes told CNBC in an interview. “It’s [about] getting that emotional level of engagement and coming up with something creative.”
In the run-up to the election, a fake AI-generated audio clip of Keir Starmer, leader of the opposition Labour Party, verbally abusing party officials was posted on social media platform X in October 2023. The post garnered as many as 1.5 million views, according to fact-checking charity Full Fact.
It’s just one of many deepfakes that have cybersecurity experts worried about what’s to come as the UK approaches elections later this year.
Elections a test for tech giants
However, deepfake technology is becoming much more advanced. And for many tech companies, the race to beat it now comes down to fighting fire with fire.
“Deepfakes went from a theoretical thing to being very much live in production today,” Mike Tuchen, Onfido’s CEO, said in an interview with CNBC last year.
“There’s now a cat-and-mouse game where it’s ‘AI vs. AI’ — using artificial intelligence to detect deepfakes and mitigate the impact for our customers is the big battle right now.”
Cyber experts say it’s getting harder to tell what’s real — but there can be some telltale signs that content has been digitally manipulated.
AI uses prompts to generate text, images and video, but it doesn’t always get things right. If, for example, you’re watching an AI-generated video of a dinner and a spoon suddenly disappears, that’s a telltale AI flaw.
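As a purely illustrative sketch of that idea, the short Python example below uses the open-source OpenCV library to flag abrupt frame-to-frame changes in a video clip, the kind of sudden inconsistency described above. It is a toy heuristic under assumed parameters (the threshold value is arbitrary), not any vendor’s production deepfake detector.

```python
# Toy heuristic only: flags frames where the average pixel change from the
# previous frame is unusually large (e.g., an object vanishing between frames).
# Real deepfake detectors rely on far richer signals than this.
import cv2  # pip install opencv-python


def flag_abrupt_changes(video_path: str, threshold: float = 30.0) -> list[int]:
    """Return indices of frames whose mean absolute difference from the
    preceding frame exceeds `threshold` (an arbitrary demo value)."""
    cap = cv2.VideoCapture(video_path)
    flagged, prev_gray, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:  # end of video
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev_gray is not None:
            # Mean per-pixel difference between consecutive frames
            if cv2.absdiff(gray, prev_gray).mean() > threshold:
                flagged.append(idx)  # candidate frame for human review
        prev_gray, idx = gray, idx + 1
    cap.release()
    return flagged


# Example usage: suspicious = flag_abrupt_changes("clip.mp4")
```

A production system would examine far richer signals, such as facial landmarks, lighting consistency and compression artifacts, but the underlying “AI vs. AI” principle is the same.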
“We’re sure to see more deepfakes throughout the election process, but one easy step we can all take is to verify the authenticity of something before we share it,” added Okta’s McKinnon.