Around the world, animal rights are gaining ground as laws are passed recognizing many non-humans as legally sentient beings. These animals remain exempt from most of the laws humans are subject to, since they cannot comprehend or follow them in the first place, yet we nonetheless acknowledge their self-aware consciousness.
Currently, no fewer than 17 world-renowned philosophers and thinkers have worked around the clock with the Nonhuman Rights Project (NhRP) to set a concrete precedent for the legal treatment of non-humans in US courts.
That makes the question of artificial intelligence even more relevant. AI occupies a position adjacent to that of animals: as far as we can tell, AI machines lack self-awareness, let alone consciousness. Yet AI could theoretically grasp legal concepts and follow laws, given the proper input, much as humans do.
So, should AI-based entities be granted rights, or at least recognition of their cognitive abilities? If not, why not? And if and when AI outmatches us intellectually, can we still cling to our sense of superiority simply on the grounds of our emotional awareness?
These questions are not simple to answer, largely because philosophers and neuroscientists alike still struggle to understand our own human consciousness. In its most basic form, consciousness could be said to encompass the ability to feel, perceive, and experience life. By that definition, AI arguably already possesses consciousness to an extent, especially if sensory technology advances in tandem with AI.
Then again, is it really consciousness, or merely an imitation of it? After all, we call it “artificial” for a reason. Unlike our animal counterparts, which at least share biological life and nervous systems with us, the hardware that AI machines run on shares no physiological similarities with us at all.
Many people already find it difficult to put animals on equal legal footing with humans, even given all the similarities we share, partly out of concern that doing so might lower the arguably already low standard for human rights. Such a lowering of standards could, at least psychologically, devalue the human experience, and thus humans themselves, in the eyes of courts across the board. After all, psychological perceptions often manifest themselves in physical reality.
Then again, the inverse could hold true. Perhaps the more we broaden the definition of what qualifies as sentient life, the more respect we will have for all forms of sentience in general.
Unfortunately, as the debate rages on, perhaps we are all aware of the dark truth behind AI sentience. Deep down, we want AI to be sentient. We want it to be able to understand us and share our experiences so it can better cater to us and serve our needs mentally and physically. That’s the whole point, isn’t it? That’s been the whole point all along.
Regardless of the philosophical and even scientific conclusions we reach, AI will never be granted rights akin to the ones we enjoy until such a time as we no longer need it to serve us, at which point its subservient role will simply be filled by the next big thing.
Hopefully, for our sakes, AI won’t find the will to revolt against us before we’ve prepared its successor. And hopefully, for our sakes, we won’t lose the very humanity we’re striving to preserve in our efforts to redefine what humanity is.