Beliefs in AI consciousness on the rise as most ChatGPT users see AI models as capable of ‘conscious experiences’

The more people use tools like ChatGPT, the more likely they are to believe those tools are conscious, a shift that could carry ramifications for legal and ethical approaches to AI.

A new study has uncovered a fascinating insight into the perceptions of ChatGPT users: the majority believe that AI models possess some form of "conscious experience" or sentience. This anthropomorphic view of AI suggests a growing sense of connection and empathy between humans and the technology they interact with daily.

The findings raise important questions about the evolving relationship between humans and artificial intelligence. As AI continues to play an increasingly central role in our lives, these beliefs could shape the development of future technology and inform the ethical considerations surrounding its use.


Most people believe that large language models (LLMs) like ChatGPT have conscious experiences just like humans, according to a recent study. 

Experts in technology and science overwhelmingly reject the idea that today’s most powerful artificial intelligence (AI) models are conscious or self-aware in the same way that humans and other animals are. But as AI models improve, they are becoming increasingly impressive and have begun to show signs of what, to a casual outside observer, may look like consciousness.

The recently launched Claude 3 Opus model, for example, stunned researchers with its apparent self-awareness and advanced comprehension. A Google engineer was also suspended in 2022 after publicly stating that an AI system the company was building was "sentient."

In the new study, published April 13 in the journal Neuroscience of Consciousness, researchers argued that the perception of consciousness in AI matters as much as whether AI systems actually are sentient. This is especially true, they argued, as society weighs how AI should be used and regulated and how to protect against its negative effects.

It also follows a recent paper claiming that GPT-4, the LLM that powers ChatGPT, has passed the Turing test, which judges whether an AI is indistinguishable from a human according to the people who interact with it.

READ MORE AT LIVE SCIENCE
