Artificial intelligence-related lobbying reached new heights in 2023, with more than 450 organizations participating. That’s a 185 percent increase from 2022, when just 158 organizations did so, according to federal lobbying disclosures analyzed by OpenSecrets for CNBC.
The rise of the AI lobby comes amid growing calls for AI regulation and a push by the Biden administration to begin codifying those rules. Companies that began lobbying in 2023 to have a say in how regulation might affect their businesses include TikTok owner ByteDance, Tesla, Spotify, Shopify, Pinterest, Samsung, Palantir, Nvidia, Dropbox, Instacart, DoorDash, Anthropic and OpenAI.
The hundreds of organizations lobbying on AI in 2023 ran the gamut from big tech and AI startups to pharmaceuticals, insurance, finance, academia, telecom and more. Until 2017, the number of organizations that reported lobbying on AI remained in the single digits, according to the analysis. The practice grew slowly but steadily in the following years, then exploded in 2023.
More than 330 organizations that lobbied on AI last year had not done the same in 2022. The data showed a range of industries as new entrants to AI lobbying: chip companies such as AMD and TSMC, venture firms such as Andreessen Horowitz, biopharmaceutical companies such as AstraZeneca, conglomerates such as Disney, and AI training data companies such as Appen.
Organizations that reported lobbying on AI issues in 2023 typically lobby the government on a range of other issues. In total, they reported spending more than $957 million to lobby the federal government in 2023 on issues including but not limited to artificial intelligence, according to OpenSecrets.
In October, President Joe Biden issued an executive order on artificial intelligence, the first U.S. government action of its kind, requiring new safety assessments, equity and civil rights guidance, and research into the technology’s impact on the labor market. The order tasked the U.S. Commerce Department’s National Institute of Standards and Technology, or NIST, with developing guidelines for evaluating certain AI models, including test environments for them, and with sharing responsibility for developing “consensus standards” for artificial intelligence.
After the executive order was unveiled, a frenzy of lawmakers, industry groups, civil rights groups, labor unions and others began digging into the 111-page document and noting the priorities, specific deadlines and broad implications of the landmark action.
A key debate centered on the issue of AI fairness. Many civil society leaders told CNBC in November that the order doesn’t go far enough in recognizing and addressing real-world harms from AI models, especially those affecting marginalized communities, but they called it an important step nonetheless.
As of December, NIST has been collecting public comments from businesses and individuals on how best to shape these rules; the comment period ends Friday. In its request for information, NIST specifically asked respondents to weigh in on developing responsible AI standards, testing AI systems for vulnerabilities, managing the risks of generative AI, and helping to reduce the risk of “synthetic content,” which includes misinformation and deepfakes.
— CNBC’s Mary Catherine Wellons and Megan Cassella contributed reporting.