Microsoft CEO Satya Nadella (R) speaks as OpenAI CEO Sam Altman (L) looks on during the OpenAI DevDay event in San Francisco on November 6, 2023.
Justin Sullivan | Getty Images
Microsoft has given up its observer seat on the OpenAI board. Apple, which was reportedly expected to take a similar observer position, will no longer pursue one, according to the Financial Times. But whatever clarity this week’s changes were intended to provide, many of the same concerns remain.
Regulators aren’t going away, and for those focused on ethics in AI, the same fears that profits will trump safety remain. Amba Kak, co-executive director of the nonprofit AI Now Institute, described the announcement as a maneuver designed to obscure the relationships between big tech companies and emerging players in artificial intelligence.
“The timing of this move matters,” Kak wrote in a message to CNBC. “It should be seen as a direct response to global regulatory scrutiny of these unconventional relationships.”
The close ties between Microsoft and OpenAI, and the outsized control the two companies hold in the artificial intelligence industry, will continue to be scrutinized by the Federal Trade Commission, according to a person with knowledge of the matter, who asked not to be identified due to confidentiality concerns.
Meanwhile, the many AI developers and researchers concerned about safety and ethics in the increasingly lucrative AI industry remain unmoved. Current and former OpenAI employees published an open letter on June 4, describing concerns about rapid advancement in artificial intelligence despite a lack of oversight and whistleblower protections.
“AI companies have strong financial incentives to avoid effective oversight, and we do not believe that tailored corporate governance structures are sufficient to change this,” the workers wrote in the letter. They added that AI companies “currently have only weak obligations to share some of this information with governments and none with civil society” and cannot “be relied upon to share it voluntarily”.
Days after the letter was published, a source familiar with the matter confirmed to CNBC that the FTC and the Justice Department were set to open antitrust investigations into OpenAI, Microsoft and Nvidia, focusing on the companies’ conduct.
FTC Chair Lina Khan described her agency’s action as a “market inquiry into the investments and partnerships being formed between AI developers and major cloud service providers.” Kak told CNBC that the regulatory scrutiny is helping to force answers and transparency.
Microsoft did not mention regulators at all in its explanation for stepping down as a board observer. The software giant said it is comfortable giving up the seat because it is satisfied with the makeup of the startup’s board, which has been revamped in the eight months since the boardroom revolt that led to the brief ouster of CEO Sam Altman and threatened Microsoft’s massive investment in OpenAI.
Microsoft initially took the non-voting observer seat at OpenAI in November, following the Altman saga. The new board includes former National Security Agency director Paul Nakasone, along with Quora CEO Adam D’Angelo, former Treasury Secretary Larry Summers, former Salesforce co-CEO Bret Taylor and Altman. Other additions since March include Dr. Sue Desmond-Hellmann, former CEO of the Bill and Melinda Gates Foundation; Nicole Seligman, former executive vice president of Sony; and Fidji Simo, CEO of Instacart.
Following the announcement this week from Microsoft, OpenAI said it is changing its approach to how it works with “strategic partners.” Apple has not commented publicly. None of the three companies provided comment to CNBC for this article.
João Sedoc, assistant professor of technology at New York University’s Stern School of Business, said Microsoft’s latest move is positive for the AI industry, given the company’s perceived influence over OpenAI. He said it was “critical” for Microsoft to “step in and help stabilize” OpenAI after Altman’s sudden firing and quick reinstatement.
“Having Microsoft there presents a mixture of conflict of interest and competitive advantage,” Sedoc said, adding, “Microsoft and OpenAI have a strange relationship, with synergies and competition at the same time.”
“Immense amount of information”
In addition to Microsoft’s approximately $13 billion investment in OpenAI, the two companies work closely together on generative AI products and services. OpenAI’s popular chatbot ChatGPT runs on the startup’s large language models, which are powered by Microsoft’s Azure cloud infrastructure.
But the two companies are not perfectly aligned. Earlier this year, Microsoft paid $650 million to license Inflection AI’s technology and hire key talent from the company, notably CEO Mustafa Suleyman, who previously co-founded DeepMind, the artificial intelligence startup acquired by Google in 2014.
Ben Miller, CEO of investment platform Fundrise, said that after the Inflection deal, Microsoft is “now on the path to being a real competitor with OpenAI,” meaning it shouldn’t be in the startup’s boardroom.
“Having a voice at the table is extremely influential in the company and gives Microsoft an immense amount of information about the operations of the business,” Miller said.
Mustafa Suleyman, co-founder of Inflection.ai & DeepMind, speaking on CNBC’s Squawk Box at the World Economic Forum Annual Meeting in Davos, Switzerland on January 17, 2024.
Adam Galici | CNBC
Sedoc told CNBC that the split sets the right precedent as Big Tech companies become increasingly big investors in artificial intelligence. He cited AI startups such as Amazon-backed Anthropic and Hugging Face, whose investors include Google, Amazon, Nvidia and others.
“They’re probably thinking about the downstream implications of what that would mean for the general movement of the industry,” Sedoc said.
However, one area where Sedoc said the move could be problematic is AI safety.
“I think Microsoft has a lot of expertise and a longer history of thinking about this in a lot of different areas that OpenAI doesn’t have,” Sedoc said. “In that sense, I think there will be some downside to not being at the table.”
AI safety practices were at the center of the dispute between Altman and OpenAI’s previous board, and the issue continues to cause rifts at the company.
In May, OpenAI disbanded its team focused on the long-term risks of artificial intelligence, just a year after the company announced the group. The news came days after both team leaders, OpenAI co-founder Ilya Sutskever and Jan Leike, announced their departures from the company. In a post on X, Leike wrote that OpenAI’s “safety culture and processes have taken a backseat to shiny products” and that he was “concerned” the company is not on the right track.
“Building smarter-than-human machines is an inherently dangerous endeavor,” Leike wrote. “OpenAI is shouldering an enormous responsibility on behalf of all of humanity.”
Announcing the appointment of former NSA director Nakasone to its board last month, OpenAI said he would join the company’s newly formed Safety and Security Committee. OpenAI said at the time that the committee would take 90 days to evaluate the company’s processes and safeguards before making recommendations to the board and, eventually, updating the public.
—CNBC’s Ryan Browne, Matt Clinch and Steve Kovach contributed reporting.
WATCH: OpenAI was hacked in April 2023 and didn’t disclose it to the public