At what point does AI become conscious? And what do we owe it once it gets there? | GUEST COMMENTARY

Sophia answers questions at Hanson Robotics studio in Hong Kong on March 29, 2021. Sophia is a robot of many talents — she speaks, jokes, sings and even makes art. Two years ago, she caused a stir in the art world when a digital work she created as part of a collaboration was sold at an auction for $688,888 in the form of a non-fungible token (NFT). (AP Photo/Vincent Yu)

AI has become proficient at recognizing faces, understanding speech, reading and writing, and diagnosing diseases, and it has the potential to discover new medicines. Hanson Robotics’ Sophia, a lifelike bot, could converse naturally with a person and sprinkle the conversation with irony. ChatGPT, launched by OpenAI in November 2022, writes papers of higher quality than most humans can write. And in February, Bing’s AI Chat program stunned a New York Times columnist by claiming to be in love with him and saying “I want to be alive.”

AI is not only becoming scarily intelligent in some respects; it is also learning voraciously and surprising us with novelty.

Now that AI’s novelty is starting to make people wonder whether a program might have desires and interests of its own, we face tough questions. First: At what point can we realistically judge that an artificial system has its own consciousness, or subjective experience, rather than simply sounding as if it does? Second: If and when we reach that point, should we acknowledge that the system has moral status or rights? And if so, what follows practically for our relationship to AI?

The question about how to judge whether an artificial system is conscious raises the classic mind/body problem: how to understand the relationship between minds, such as your consciousness, and matter, such as your brain. Consensus on this issue is lacking. But many scientists and philosophers who study consciousness today hold that minds are not mysterious supernatural phenomena beyond the reach of science — immaterial souls — but instead are causally produced by, or realized in, brains. They see consciousness as part of the natural world.

Many find it hard to imagine that artificial materials such as silicon might generate something as wondrous as consciousness. But upon reflection, it seems no less amazing that our fleshy brains achieve consciousness. Yet they do.

What evidence should convince us that an artificial system not only acts as if it is conscious — as Bing’s AI Chat program sometimes does — but really is conscious? Some AI experts maintain that if a highly advanced robot asks us whether we humans are conscious, or wonders aloud about a nonbodily afterlife, or shows a preference for future pleasures over past ones (suggesting that it actually feels pleasure), such behaviors would strongly suggest a familiarity with consciousness that only a conscious being could have.

If an AI achieved consciousness, should we treat it as having moral status or rights? My suggestion as a philosopher-ethicist is that, if its consciousness included any felt desires; any sensory feelings, such as pain or bodily comfort; or any emotional states, such as frustration or joy, then it would have interests of its own. In that case, I would apply the same standard I apply to nonhuman animals: If a being has feelings and interests of its own, then it has moral status; it matters morally for its own sake and should not be regarded as a mere resource for our use. Sentient beings — conscious beings with feelings — have moral status irrespective of their species or whether they are alive. To hold otherwise is to cling to speciesism, an irrational prejudice against members of other species, or what I call “biologism,” an irrational prejudice against nonliving entities.

Suppose some future artificial systems convince us of their sentience. Should we infer that they have not only (some) moral status but also the especially strong protections we call moral rights? Assuming these robots or other systems persuade us that they are sentient on the basis of highly intelligent, self-aware behavior, I would argue that they are so person-like as to qualify as persons, despite being artificial and nonliving. On this basis, we ought to accord them basic rights that all persons should enjoy: a right to “life” (or non-destruction), a right not to be caused to suffer or otherwise be harmed needlessly, and liberty rights including, crucially, a right not to be enslaved.

Advances in AI are driven partly by scientific and philosophical curiosity. But to a great extent they are driven by our interests in the work AI can do for us: clear minefields, perform surgeries, diagnose diseases, write papers and provide companionship, among other tasks. Ironically, advances in AI might lead to the existence of entities that should be recognized as having a moral right to refuse to keep working, involuntarily, for our benefit.

David DeGrazia (ddd@gwu.edu) is the Elton Professor of Philosophy at George Washington University.