In one paper, Eleos AI argued that AI consciousness should be evaluated using an approach called “computational functionalism.” Hilary Putnam once advocated a similar idea, though he criticized it later in his career. The theory suggests that human minds can be thought of as specific kinds of computational systems. From there, you can figure out whether other computational systems, such as a chatbot, have indicators of sentience similar to a human’s.
Eleos AI said in the paper that “a major challenge in applying” this approach “is that it involves significant judgment calls, both in formulating the indicators and in evaluating their presence or absence in AI systems.”
Model welfare is, of course, a new and still-evolving field. It has plenty of critics, including Mustafa Suleyman, the CEO of Microsoft AI, who recently published a blog post about “seemingly conscious AI.”
“This is both premature, and frankly dangerous,” Suleyman wrote, referring generally to the field of model welfare research. “All of this will exacerbate delusions, create yet more dependence-related problems, prey on our psychological vulnerabilities, introduce new dimensions of polarization, complicate existing struggles for rights, and create a huge new category error for society.”
Suleyman wrote that “there is zero evidence” today that conscious AI exists. He included a link to a paper Long coauthored in 2023 that proposed a framework for evaluating whether AI systems display “indicator properties” of consciousness. (Suleyman did not respond to WIRED’s request for comment.)
I spoke with Long and Campbell shortly after Suleyman’s blog was published. They told me that, while they agree with much of what he said, they don’t believe model welfare research should stop. Rather, they argue that the harms Suleyman cited are the very reasons they want to study the topic in the first place.
“When you have a big, confusing problem or question, the one way to guarantee you’re not going to solve it is to throw your hands up and be like ‘Oh wow, this is too complicated,’” Campbell says. “I think we should at least try.”
Testing Consciousness
Model welfare researchers are primarily concerned with questions of consciousness. If we can prove that you and I are conscious, they argue, then the same logic could be applied to large language models. To be clear, neither Long nor Campbell thinks that AI is conscious today, and they aren’t sure it ever will be. But they want to develop tests that would allow us to prove it.
“The delusions are from people who are concerned with the actual question, ‘Is this AI conscious?’ and having a scientific framework for thinking about that, I think, is just robustly good,” Long says.
But in an era when AI research can be packaged into sensationalized headlines and shared widely on social media, heady philosophical questions and mind-bending experiments can easily be misconstrued. Take what happened when Anthropic published a safety report showing that Claude Opus may take “harmful actions” in extreme circumstances, like blackmailing a fictional engineer to prevent being shut down.

