The Blake Lemoine incident is remembered today as a high‑water mark of AI hype. It pushed the idea of conscious AI into the public eye for just a few news cycles, but it also sparked a discussion among computer scientists and consciousness researchers that has intensified over time. The tech community continued to ridicule the idea publicly (and poor Lemoine along with it), but in private it began to consider the possibilities much more seriously. Conscious AI may lack a commercially sound rationale (how do you make money with it?), and it raises moral dilemmas (such as how to treat an AI capable of experiencing pain). Yet some AI engineers have come to think that the holy grail of artificial general intelligence—a machine that is not only supersmart but also endowed with a human level of understanding, creativity, and common sense—might require something like consciousness to attain. In the tech community, what had been an informal taboo surrounding conscious AI—as a prospect the public would find creepy—suddenly began to crumble.
The turning point came in the summer of 2023, when a group of 19 leading computer scientists and philosophers posted an 88‑page report titled “Consciousness in Artificial Intelligence,” known informally as the Butlin report. Within days, seemingly everyone in the AI and consciousness-research communities had read it. The draft report’s headline finding: “Our analysis suggests that no current AI systems are conscious, but also suggests that there are no obvious barriers to building conscious AI systems.”
The authors acknowledged that their inspiration for convening the group and writing the report was “the case of Blake Lemoine.” “If AIs can give the impression of consciousness,” one coauthor told Science magazine, “that makes it an urgent priority for scientists and philosophers to weigh in.”
One phrase in the preprint’s abstract caught everyone’s attention: “no obvious barriers to building conscious AI systems.” The first time I heard those words, it felt as if a threshold of importance had been reached—and not merely a technological one. This had to do with the very nature of our species.
What would it mean for humanity to discover, one day in the not‑so‑distant future, that a fully conscious machine had come into the world? It would likely be a Copernican event, shaking our sense of importance and uniqueness. For thousands of years, humans have defined themselves in opposition to other, “lesser” animals. That has meant denying animals such supposedly unique human traits as feelings, language, reason, and consciousness (one of Descartes’s most flagrant mistakes). Scientists have since dismantled most of these distinctions, showing that many species possess intelligence and consciousness, feelings and language, and the capacity to make and use tools—a challenge to centuries-old human exceptionalism. The shift is still under way, and it has prompted difficult questions about our own identity and our obligations toward other species.
With AI, the threat to our exalted self‑conception comes from another quarter entirely. It is against AIs, not other animals, that we will now define ourselves. As computer algorithms surpass us in sheer brainpower—handily beating us at games like chess and Go and at various forms of “higher” thought like mathematics—we can at least take solace in the fact that we (and many other animal species) still have to ourselves the blessings and burdens of consciousness, the ability to feel and have subjective experiences. In this sense, AI is an adversary that draws humans and other animal species closer together. It would make a wonderful story, and welcome news for the animals invited to join Team Conscious. But what happens if AI begins to challenge the human—or animal, I should say—monopoly on consciousness? What will we become then?

