by Navaneeth Dontuboyina (’24) | November 16, 2020
We have always thought of machines as objects that exist solely to benefit us—and perhaps rightfully so. We build them, so we can use them in any way we please. If they are inefficient or obsolete, we discard them. Yet as technology advances, we continue to build ever more complex machines at an unprecedented rate, machines sophisticated enough to fall under the term artificial intelligence, or AI.
AI uses algorithms like deep learning to process data and make human-like decisions. AI strives to “synthesize” intellect: to develop something like the natural intelligence that you and I have, rather than simply mimic it. What sets artificially intelligent machines apart from other gadgets is that AI may be able to complete a given task without us. Unlike a bicycle, which relies on a human user to fulfill a need, AI no longer requires human intervention beyond a certain level of initial guidance.
Training an AI machine typically consists of presenting it with datasets and “teaching” it how to categorize them correctly. As the machine is trained with more datasets, it increases in “intelligence”; the ultimate goal is to create a machine with close to human intelligence. Naveen Joshi, a cognitive sciences columnist at Forbes, adds, “[T]he rapid rate at which AI is developing new capabilities means that we might be getting close to the inflection point when the AI research community surprises us with the development of artificial general intelligence.” This creates a moral dilemma: can we ethically “kill” an AI machine that has developed such sentience? Should an artificially intelligent machine be granted autonomous rights?
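The “present labeled data, teach it to categorize” idea above can be illustrated with a minimal sketch: a nearest-centroid classifier written in plain Python. All of the data, labels, and function names here are hypothetical examples for illustration, not from the article, and real AI systems use far more sophisticated methods such as deep neural networks.

```python
def train(examples):
    """'Teach' the machine: average each label's points into a centroid."""
    sums, counts = {}, {}
    for (x, y), label in examples:
        sx, sy = sums.get(label, (0.0, 0.0))
        sums[label] = (sx + x, sy + y)
        counts[label] = counts.get(label, 0) + 1
    return {label: (sx / counts[label], sy / counts[label])
            for label, (sx, sy) in sums.items()}

def predict(centroids, point):
    """Categorize a new point by the closest learned centroid."""
    px, py = point
    return min(centroids,
               key=lambda label: (centroids[label][0] - px) ** 2 +
                                 (centroids[label][1] - py) ** 2)

# "Training" on a small labeled dataset of 2-D points...
model = train([((0, 0), "cat"), ((1, 1), "cat"),
               ((8, 8), "dog"), ((9, 9), "dog")])

# ...after which the machine categorizes new data without us.
print(predict(model, (0.5, 0.2)))  # -> cat
print(predict(model, (8.5, 9.0)))  # -> dog
```

More training examples refine the centroids, which is the sense in which the machine “increases in intelligence” as it sees more data.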
Whether an AI machine should have natural rights or freedoms is a trickier question than it initially seems. The obvious answer is yes; laws protecting individual liberties should apply to all sentient beings. If AI machines are as conscious as we are, they should be able to exercise those liberties to their full extent. The government of Saudi Arabia accordingly granted the world’s first “robot citizen,” Sophia, equal rights in 2017, a decision mocked by Robert Hart from Quartz, who noted that “Sophia seems to have more rights than half of the humans living in Saudi Arabia.”

Putting aside the apparent irony, both individuals and countries could logically disagree with the idea that robots deserve rights. Jessica Peng from Columbia University argues that “AI cannot be identified as human biologically, philosophically, or legally, and should not be given human rights.” In other words, Peng and others believe that the ability to think is irrelevant unless the subject is human. She continues, suggesting that “giving [AI] human rights would endanger the entirety of human civilization,” a sentiment eerily similar to that of the late Stephen Hawking, who believed that AI will eventually overthrow the human race for its own betterment. This opinion is also reflected in the entertainment industry, especially in science fiction movies and TV shows.
Whether you believe that we should grant AI rights or that we should be able to shut down AI, one thing is for certain—we will have to decide quickly. Historian Yuval Noah Harari is among those who predict that sentient AI will eventually be an irreplaceable part of our lives; we must make a decision before it is too late.