Prof. Yoshua Bengio at the One Young World Summit on Friday, September 20, 2024 in Montreal, Canada.
Renowned computer scientist Yoshua Bengio – a pioneer of artificial intelligence – has warned of the potential negative effects of modern technology on society and called for more research to mitigate its risks.
Bengio, a professor at the University of Montreal and head of the Montreal Institute for Learning Algorithms, has won several awards for his work in deep learning, a subset of AI that tries to mimic the activity of the human brain in order to learn how to recognize complex patterns in data.
But he has concerns about the technology and warned that some people with “a lot of power” might even want to see humanity replaced by machines.
“It’s really important that we project ourselves into a future where we have machines that are as smart as we are in many ways, and ask what that would mean for society,” Bengio told CNBC’s Tania Bryer at the One Young World Summit in Montreal.
Machines could soon have most of the cognitive abilities of humans, he said. Artificial general intelligence (AGI) is a type of AI that aims to equal or surpass human intelligence.
“Intelligence gives power. So who will control that power?” he said. “Having systems that know more than most people can be dangerous in the wrong hands and create more instability at the geopolitical level, for example, or terrorism.”
According to Bengio, a limited number of organizations and governments can afford to build powerful AI machines, and the bigger the systems, the smarter they will be.
“These machines, you know, cost billions to build and train, [and] very few organizations and very few countries will be able to do this. This is already the case,” he said.
“There is going to be a concentration of power: economic power, which can be bad for markets; political power, which can be bad for democracy; and military power, which can be bad for the geopolitical stability of our planet. These are open questions that we need to study carefully and start mitigating as soon as possible.”
We don’t have ways to make sure that these systems don’t harm people or turn against people … we don’t know how to do it.
Yoshua Bengio
Head of the Montreal Institute for Learning Algorithms
He said that such outcomes are possible within decades. “But if it’s five years, we’re not ready … because we don’t have ways to make sure that these systems don’t harm people or turn against people … We don’t know how to do that,” he added.
There are arguments to suggest that the way AI machines are currently being trained “will create systems that are anti-human,” Bengio said.
“Besides, there are people who might want to abuse that power, and there are people who might be happy to see humanity replaced by machines. I mean, it’s a fringe, but these people can have a lot of power, and they can do it unless we put the right guardrails in place,” he said.
AI guardrails and regulation
Bengio endorsed an open letter published in June, titled “A Right to Warn about Advanced Artificial Intelligence.” It was signed by current and former employees of OpenAI – the company behind the viral AI chatbot ChatGPT.
The letter warns of “serious risks” from AI development and calls for guidance from scientists, policymakers and the public in mitigating them. OpenAI has faced mounting safety concerns over the past few months, with its “AGI Readiness” team disbanded in October.
“The first thing governments need to do is regulation that compels [companies] to register when they build these frontier systems that are like the biggest systems, which cost millions of dollars to train,” Bengio told CNBC. “Governments should know where they are, you know, the details of these systems.”
Because AI is evolving so rapidly, Bengio said, governments need to be “a little bit more creative” and legislate in ways that can adapt to changes in the technology.
It is not too late to steer the evolution of societies and humanity in a positive and beneficial direction.
Yoshua Bengio
Head of the Montreal Institute for Learning Algorithms
Companies developing AI should also be held accountable for their actions, according to the computer scientist.
“Liability is also another tool that can compel [companies] to behave well, because … if it’s about their money, the fear of being sued – that will push them to do things that protect the public. If they know they can’t be sued, because it’s kind of a gray zone right now, they won’t necessarily behave well,” he said. “[Companies] compete with each other, and, you know, they think that the first to reach AGI will prevail. So it’s a race, and it’s a risky race.”
Bengio said the legislative process to make AI safer would be similar to the ways in which rules were set for other technologies, such as airplanes or cars. “To enjoy the benefits of AI, we have to regulate. We have to put guardrails [in] place. We have to have democratic oversight of how the technology is developed,” he said.
Misinformation
The spread of misinformation, especially around elections, is a growing concern as AI advances. In October, OpenAI said it had “disrupted more than 20 operations and deceptive networks from around the world that attempted to use our models.” These included social media posts by fake accounts ahead of elections in the US and Rwanda.
“One of the biggest short-term concerns, but one that will grow as we move toward more capable systems, is AI’s ability to influence politics and opinion through disinformation and misinformation,” Bengio said. “As we move forward, we’ll have machines that can generate more realistic images, more realistic-sounding imitations of voices, more realistic videos,” he said.
Bengio said this influence could extend to interactions with chatbots. He pointed to a study by Italian and Swiss researchers showing that OpenAI’s GPT-4 large language model can persuade people to change their minds better than humans can. “It was just a scientific study, but you can imagine some people reading that and wanting to interfere with our democratic processes,” he said.
‘The hardest question’
Bengio said the “hardest question” is this: “If we create entities that are smarter than us and have their own goals, what does that mean for humanity? Are we in danger?”
“These are all very difficult and important questions, and we don’t have all the answers. We need a lot more research and caution to minimize the potential risks,” Bengio said.
He appealed to people to act, saying that we have agency and it is not too late to steer the evolution of societies and humanity in a positive and beneficial direction. “But for that, we need enough people to understand both the benefits and the risks, and we need enough people to work on solutions. And the solutions can be technical, they can be political … policy, but we need a lot of effort in these directions,” Bengio said.
— CNBC’s Hayden Field and Sam Shead contributed to this report.