He believes this trait can be built into AI systems — but isn’t sure.
“I think so,” Altman said when asked the question during an interview with Deborah Spar, senior associate dean at Harvard Business School.
The question of an AI uprising was once reserved for Isaac Asimov’s science fiction or James Cameron’s action movies. But since the rise of AI, it has become, if not a hot-button issue, then at least a topic of discussion that warrants real consideration. What was once dismissed as the stuff of cranks is now a genuine regulatory question.
Altman said OpenAI’s relationship with the government has been “quite constructive.” A project as far-reaching and wide-ranging as developing AI should have been a government project, he added.
“In a well-functioning society this would be a government project,” Altman said. “Given that it’s not happening, I think it’s better that it’s happening this way as an American project.”
The federal government has yet to make significant progress on AI safety legislation. California tried to pass a law that would have held AI developers liable for catastrophic events caused by their models, such as the development of weapons of mass destruction or attacks on critical infrastructure. The bill passed the legislature but was vetoed by California Governor Gavin Newsom.
Some leading figures in AI have warned that ensuring it remains fully aligned with the good of humanity is a critical question. Nobel laureate Geoffrey Hinton, known as the godfather of AI, has said he “can’t see a way that guarantees safety.” Tesla CEO Elon Musk has regularly warned that AI could lead to the end of humanity. Musk was instrumental in founding OpenAI, providing critical funding to the nonprofit at its inception, and Altman has said he remains “grateful” to him despite the fact that Musk is now suing him.
Several organizations have devoted themselves entirely to this question in recent years, among them the nonprofit Alignment Research Center and the startup Safe Superintelligence, founded by Ilya Sutskever, OpenAI’s former chief scientist.
OpenAI did not respond to a request for comment.
AI as it’s currently designed is well suited to alignment, Altman said. Because of this, he argues, ensuring that AI does not harm humanity is more achievable than many fear.
“One thing that has worked surprisingly well is the ability to align an AI system to behave in a certain way,” he said. “So if we can define what that means in a bunch of different cases, then yes, I think we can get the system to work that way.”
Altman also has a characteristically novel idea for how OpenAI and other developers can “articulate” the principles and ideals that AI must adhere to: using AI to poll the public at large. He suggested asking users of AI chatbots about their values and then using those answers to determine how AI should be configured to protect humanity.
“I’m interested in the thought experiment [in which] an AI talks to you for a few hours about your value system,” he said. It “does that to me, to everybody else. And then it says, ‘Well, I can’t please everyone all the time.’”
Altman hopes that by interacting with and understanding billions of people “at a deeper level,” AI can identify challenges facing society more broadly. From there, the AI could reach a consensus about what it would need to do to promote the general well-being of the public.
OpenAI previously had an internal team dedicated to superalignment, tasked with making sure future digital superintelligences don’t go rogue and cause untold harm. In December 2023, the group released an early research paper describing a process by which one large language model would oversee another. The following spring, the team’s leaders, Ilya Sutskever and Jan Leike, left OpenAI, and the team was disbanded, CNBC reported at the time.
Leike said he left because of growing disagreements with OpenAI’s leadership over its commitment to safety as the company worked toward artificial general intelligence, a term that refers to an AI as smart as a human.
“Building smarter-than-human machines is an inherently dangerous endeavor,” Leike wrote on X. “OpenAI is shouldering an enormous responsibility on behalf of all of humanity. But over the past years, safety culture and processes have taken a backseat to shiny products.”
When Leike left, Altman wrote on X that he was very appreciative of “[his] contributions to openai’s [sic] alignment research and safety culture.”