Should we worry about AI’s emotions?


The author is a science commentator.

Discussions about whether AI will achieve or surpass human intelligence are usually framed as an existential threat to Homo sapiens. A robot army rises, Frankenstein-style, and turns on its creators. Or autonomous AI systems that quietly manage government and corporate business one day calculate that the world would run more smoothly if humans were cut out of the loop.

Now philosophers and AI researchers are asking a different question: will these machines develop the capacity to be bored or harmed? In September, the AI company Anthropic hired an “AI welfare” researcher to assess whether its systems are edging towards consciousness or agency and, if so, whether their wellbeing should be taken into account. Last week, an international group of researchers published a report on the same issue. It argues that the pace of technological progress “brings a realistic possibility that in the near future some AI systems will be conscious and/or strongly agentic, and thus morally significant”.

The idea of worrying about AI’s emotions may seem strange, but it exposes a paradox at the heart of the great AI push: companies are racing to create artificial systems that are more intelligent and more like us, while also worrying that artificial systems will become too intelligent and too like us. Because we do not fully understand how consciousness, or a sense of self, arises in the human brain, we cannot really be confident that it will never emerge in artificial ones. What seems remarkable, given the profound implications for our species of creating digital “minds”, is how little external oversight there is of where these systems are heading.

The report, titled Taking AI Welfare Seriously, was written by researchers at Eleos AI, an organisation dedicated to “investigating AI sentience and wellbeing”, together with authors including the New York University philosopher David Chalmers, who argues that virtual worlds are real worlds, and Jonathan Birch, an academic at the London School of Economics whose recent book, The Edge of Sentience, offers a framework for thinking about animal and AI minds.

The report does not claim that AI sentience (the capacity to feel sensations such as pain) or consciousness is possible or imminent, only that “there is considerable uncertainty about these possibilities”. The authors draw a parallel with our historical ignorance of the moral status of non-human animals, which enabled factory farming; it was only in 2022, thanks in part to Birch’s work, that crabs, lobsters and octopuses became protected under the UK Animal Welfare (Sentience) Act.

Human intuition, they warn, is a poor guide: our own species is prone both to anthropomorphism, which attributes human traits to non-humans that do not have them, and to anthropodenial, which denies human traits to non-humans that do have them.

The report recommends that companies take the issue of AI welfare seriously; that researchers find ways of investigating AI consciousness, following the lead of scientists who study non-human animals; and that policymakers begin to consider the possibility of sentient or conscious AI, perhaps even convening citizens’ assemblies to explore the issues.

These arguments have found some support in the mainstream research community. “I think true artificial consciousness is unlikely, but not impossible,” says Anil Seth, professor of cognitive and computational neuroscience at the University of Sussex and a leading consciousness researcher. He believes our sense of self is bound up with our biology and amounts to more than mere computation.

But if he is wrong, as he admits he might be, the consequences could be far-reaching: “The creation of conscious AI would be a moral disaster, because we’d introduce new forms of moral subjectivity into the world and potentially new forms of suffering.” No one, Seth adds, should be trying to build such machines.

The illusion of consciousness feels like a more immediate concern. In 2022, a Google engineer was fired after saying he believed the company’s AI chatbot was sentient. Anthropic has been “character training” its large language model to give it traits such as thoughtfulness.

As machines everywhere, especially LLMs, are engineered to be more humanlike, we risk being deceived on a grand scale by companies facing few checks and balances. We risk caring for machines that cannot care back, diverting our limited moral resources away from the relationships that matter. My own flawed human intuition worries less about AI minds acquiring the capacity to feel, and more about human minds losing the capacity to care.

Letter in response to this article:

We should do more than worry about AI’s emotions / From Daniel Hulme and Calum Chace, Co-Founders, Conscium, London SE1, UK

