With the election season underway and AI evolving at a rapid pace, the manipulation of AI in political advertising is becoming an issue of greater concern to the market and the economy. A new report from Moody’s on Wednesday warned that generative artificial intelligence and deepfakes are among the election integrity issues that could pose a risk to US institutional credibility.
“The election is likely to be closely contested, raising concerns that AI deepfakes could be used to mislead voters, exacerbate division and sow discord,” wrote Moody’s assistant vice president and analyst Gregory Sobel and senior vice president William Foster. “If successful, disinformation agents could sway voters, affect the outcome of elections, and ultimately influence policymaking, which would undermine the credibility of US institutions.”
The government is stepping up its efforts to combat deepfakes. On May 22, Federal Communications Commission Chair Jessica Rosenworcel proposed a new rule that would require political TV, video and radio ads to disclose whether they used AI-generated content. The FCC has been concerned about the use of artificial intelligence in ads this election cycle, with Rosenworcel pointing to potential problems with deepfakes and other falsified content.
Social media has been outside the purview of FCC regulations, but the Federal Election Commission is also weighing broad AI disclosure rules that would extend to all platforms. In a letter to Rosenworcel, the FEC encouraged the FCC to delay its decision until after the election, arguing that because the FCC’s changes would not be binding on digital political ads, they could mislead voters into assuming that online ads without disclosures contained no AI even when they did.
While the FCC’s proposal may not fully cover social media, it opens the door for other agencies to regulate advertising in the digital world as the US government seeks to establish itself as a strong regulator of AI content. And, perhaps, these rules could be extended to even more types of advertising.
“This would be a groundbreaking decision that could change disclosures and advertising in traditional media for years to come around political campaigns,” said Dan Ives, Wedbush Securities managing director and senior equity analyst. “The concern is that you can’t put the genie back in the bottle and there are a lot of unintended consequences with that decision.”
Some social media platforms have already adopted some form of AI disclosure ahead of the regulations. Meta, for example, requires AI disclosure for all its ads and will ban all new political ads in the week before the November election. Google requires disclosures on political ads with modified content that “depicts real or realistic-looking people or events,” but does not require AI disclosures in all political ads.
Social media companies have good reason to be seen as proactive on the issue, as brands worry about being associated with the spread of misinformation at a pivotal time for the nation. Google and Facebook are expected to capture 47% of the projected $306.94 billion spent on US digital advertising in 2024. “This is a third-rail issue for big tech players focused on advertising, with a very divisive election cycle ahead and AI misinformation. It’s a very complicated time for online advertising,” Ives said.
Despite this self-policing, AI-manipulated content still slips through untagged, owing to the sheer volume of content published daily. Between AI-generated spam and the large quantities of AI imagery being produced, it is hard to catch everything.
“The lack of industry standards and the rapid evolution of technology make this effort challenging,” said Tony Adams, senior threat researcher at the Secureworks Counter Threat Unit. “Fortunately, these platforms have reported success in policing the most harmful content on their sites through technical controls, ironically powered by artificial intelligence.”
It’s easier than ever to create fake content. In May, Moody’s warned that deepfakes were “already weaponized” by governments and non-governmental entities as propaganda, to create social unrest and, at worst, to enable terrorism.
“Until recently, creating a convincing deepfake required significant technical knowledge of specialized algorithms, computing resources and time,” Moody’s Ratings assistant vice president Abhi Srivastava wrote. “With the advent of easily accessible, affordable Gen AI tools, creating a sophisticated deepfake can be done in minutes. This ease of access, combined with the limitations of existing social media safeguards against the spread of fake content, creates a fertile environment for the widespread abuse of deepfakes.”
Deepfaked audio has already been deployed via robocall in a presidential primary race in New Hampshire this election cycle.
One potential silver lining, according to Moody’s, is the decentralized nature of the US electoral system, alongside existing cybersecurity policies and general awareness of looming cyber threats, which together provide some protection. State and local governments are enacting measures to further curb deepfakes and unlabeled AI content, but free speech protections and concerns about stifling technological progress have slowed the process in some state legislatures.
Since February, roughly 50 pieces of AI-related legislation have been introduced per week in state legislatures, according to Moody’s, including measures focused on deepfakes. Thirteen states now have laws covering election interference and deepfakes, eight of them enacted since January.
Moody’s noted that the US remains vulnerable to cyber risks, ranking 10th out of 192 countries in the United Nations E-Government Development Index.
The public perception that deepfakes can influence political outcomes, even absent concrete examples, is enough to “undermine public confidence in the electoral process and the credibility of government institutions, which constitutes a credit risk,” according to Moody’s. The more a population struggles to separate fact from fiction, the greater the risk that the public becomes disengaged and distrustful of government. “Such trends would be credit negative, potentially leading to increased political and social risks and jeopardizing the effectiveness of government institutions,” Moody’s wrote.
“The response from law enforcement and the FCC may deter other domestic actors from using AI to defraud voters,” said Secureworks’ Adams. “But there is no question that foreign actors will continue, as they have for years, to interfere in American politics by exploiting artificial intelligence tools and systems. To voters, the message is to stay calm, stay alert and vote.”