Sam Altman, CEO of OpenAI, during a session at the World Economic Forum in Davos, Switzerland, on January 18, 2024.
Bloomberg | Getty Images
Executives at some of the world’s leading artificial intelligence labs expect a form of artificial intelligence that will match — or even exceed — human intelligence sometime in the near future. But what it will ultimately look like and how it will be implemented remains a mystery.
Leaders from OpenAI, Cohere, Google’s DeepMind and big tech companies like Microsoft and Salesforce weighed the risks and opportunities presented by artificial intelligence at the World Economic Forum in Davos, Switzerland.
Artificial intelligence has become the focus of the business world over the past year, thanks in large part to the success of ChatGPT, OpenAI’s popular AI chatbot. Generative AI tools like ChatGPT are powered by large language models — algorithms trained on massive amounts of data.
This has caused concern among governments, companies and advocacy groups around the world, owing to a raft of risks surrounding the technology: the lack of transparency and explainability of AI systems; job losses resulting from increased automation; social manipulation through computer algorithms; surveillance; and data privacy.
AGI an ‘extremely vaguely defined term’
OpenAI CEO and co-founder Sam Altman said he believes artificial general intelligence, or AGI, could be developed in the “reasonably near future.”
However, he noted that fears that it will dramatically reshape and disrupt the world are overblown.
“It’s going to change the world a lot less than we all think and it’s going to change jobs a lot less than we think,” Altman said in a Bloomberg-hosted conversation at the World Economic Forum in Davos, Switzerland.
Altman, whose company entered the mainstream after the public launch of its ChatGPT chatbot in late 2022, has softened his tone on the risks of artificial intelligence since his company came under the regulatory spotlight last year, with governments from the United States, the United Kingdom, the European Union and beyond seeking to rein in tech companies over the risks their technologies pose.
In a May 2023 interview with ABC News, Altman said he and his company “fear” the downsides of a super-intelligent AI.
“We have to be careful here,” Altman told ABC. “I think people should be happy that we’re a little scared of that.”
Altman then said he fears the potential for AI to be used for “large-scale disinformation,” adding, “Now that they’re getting better at writing computer code, [they] could be used for aggressive cyber attacks.”
In a separate discussion at the World Economic Forum, Altman said his brief ouster from OpenAI last November was a “microcosm” of the pressures OpenAI and other AI labs are facing internally. “As the world gets closer to AGI, the stakes, the stress, the tension level — all of that is going to go up,” he said.
Aidan Gomez, the CEO and co-founder of AI startup Cohere, echoed Altman’s view that AGI will likely become a reality in the near future.
“I think we will have this technology very soon,” Gomez told CNBC’s Arjun Kharpal in a fireside chat at the World Economic Forum.
However, he said a key issue with AGI is that it is still vague as a technology. “First of all, AGI is an extremely vaguely defined term,” the Cohere boss added. “If we just call it ‘better than humans at almost everything humans can do,’ I agree, pretty soon we’ll be able to get systems to do that.”
However, Gomez said that even when AGI eventually arrives, it will likely take “decades” for companies to truly integrate it.
“The question is really how quickly can we adopt it, how quickly can we get it into production, the scale of these models makes adoption difficult,” Gomez noted.
“And so the focus for us at Cohere has been to compress that: to make them more adaptive, more efficient.”
“The reality is that no one knows”
The issue of defining what AGI actually is and what it will ultimately look like is one that has puzzled many experts in the AI community.
Lila Ibrahim, chief operating officer of Google’s DeepMind artificial intelligence lab, said no one really knows what kind of AI qualifies as having “general intelligence,” adding that it’s important to develop the technology safely.
“The reality is, nobody knows” when AGI will arrive, Ibrahim told CNBC’s Kharpal. “There’s a debate among AI experts who’ve been doing this for a long time, both within the industry and also within the organization.”
“We’re already seeing areas where AI has the potential to unlock our understanding … where humans haven’t been able to make that kind of progress. So it’s AI in collaboration with humans or as a tool,” Ibrahim said.
“I think it’s really a big open question, and I don’t know how to answer it better than how we’re really thinking about this, rather than how much longer it’s going to take,” Ibrahim added. “How do we think about what that might look like, and how do we make sure we’re responsible stewards of the technology?”
Avoiding a “s— show”
Altman wasn’t the only top tech executive asked about the dangers of artificial intelligence at Davos.
Marc Benioff, CEO of business software company Salesforce, said on a panel with Altman that the tech world is taking steps to ensure the AI race doesn’t lead to a “Hiroshima moment.”
Many industry leaders in technology have warned that artificial intelligence could lead to an “extinction-level” event where machines become so powerful that they get out of control and wipe out humanity.
Several leaders in artificial intelligence and technology, including Elon Musk, Steve Wozniak and former presidential candidate Andrew Yang, have called for a halt to AI progress, stating that a six-month moratorium would be beneficial to allow society and regulatory authorities to catch up.
Geoffrey Hinton, an artificial intelligence pioneer often called the “Godfather of AI,” previously warned that advanced programs could get out of control by writing their own computer code to modify themselves.
“One of the ways these systems could get out of control is by writing their own computer code to modify themselves. And that’s something we should be seriously worried about,” Hinton said in an October interview on CBS’s “60 Minutes.”
Hinton left his role as a Google vice president and engineering fellow last year, citing concerns over how the company was handling AI safety and ethics.
Benioff said tech industry leaders and experts should ensure that AI avoids some of the problems that have plagued the web over the past decade or so — from the manipulation of beliefs and behaviors through algorithmic recommendations during election cycles to violations of privacy.
“We really haven’t had this kind of interaction before” with AI-based tools, Benioff told the Davos crowd last week. “But we don’t trust it enough yet. So we have to cross trust.”
“We also need to reach out to these regulators and say, ‘Hey, if you look at social media over the last decade, it’s kind of been a f—ing s— show. It’s pretty bad. We don’t want that in our AI industry. We want to have a good healthy partnership with these moderators and with these regulators.’”
Limitations of LLMs
Jack Hidary, CEO of SandboxAQ, pushed back against the fervor among some tech executives that AI may be approaching the stage of achieving “general” intelligence, saying the systems still have a lot of teething problems to work through.
He said AI chatbots like ChatGPT have passed the Turing test — also called the “imitation game,” devised by British computer scientist Alan Turing to determine whether a person can tell they are communicating with a machine or a human. But, he added, one big area where AI is lacking is common sense.
“One thing we’ve seen from LLMs [large language models] is they’re very powerful. They can write essays for students like there’s no tomorrow, but it’s sometimes hard to find common sense. When you ask it, ‘How do people cross the street?’ it sometimes can’t even recognize what a crosswalk is, versus other kinds of things — things that even a little kid would know. So it’s going to be really interesting to go beyond that in terms of reasoning,” Hidary said.
Hidary has a big prediction for how AI technology will evolve in 2024: This year, he said, will be the first that advanced AI communication software will be loaded into a humanoid robot.
“This year, we will see a ‘ChatGPT’ moment for embodied AI humanoid robots — this year, in 2024, and then into 2025,” Hidary said.
“We’re not going to see robots coming off the assembly line, but we’re going to see them actually demonstrate what they can do using their intelligence, using their brains, perhaps using LLMs and other AI techniques.”
“There are now 20 venture-backed companies creating humanoid robots — plus, of course, Tesla and many others — so I think this is going to be a year of transformation on that front,” Hidary added.