Should AI have basic human rights?

Dec 05 2020


AI is all around us, and it’s developing at a remarkable pace. We don’t always know when we’re using AI systems, or that we’re constantly teaching them. From Gmail’s suggested responses to the playlists Spotify curates for us, many of us interact with AI systems daily, feeding them more and more data so they can become better at their jobs.


These AI systems are intelligent, but the reason we might not even think of them as AI is that they bear little resemblance to the AI portrayed in the media and in movies. Science fiction tends to imagine what a future of dispassionate, or even dangerous, sentient AI robots might look like. The Terminator, Her, and Ex Machina, for example, all depict AI that can think and feel like humans, but also outwit and even threaten us. On the other hand, we sometimes see more sympathetic portrayals, as in The Good Place, Star Trek, or WALL-E – humanoids or AIs that have feelings but are not of this world, and struggle to fit in, or else are sent to protect or save us.


These depictions aren’t totally fictional. AI, robotics, and haptics experts from across the globe are currently attempting to build the foundations for a sentient AI system, or at least one that can do more of the things that humans can do. This is why philosophers and ethicists are busy pondering the questions this possible future throws up: How soon will machines become as smart as humans, and how quickly could they become even smarter? What happens if these systems start to perceive humans as a threat, and put us in danger? And if AI will one day be able to think and feel just as humans do, should we ensure it has basic human rights?


According to Neama Dadkhahnikoo, the Technical Lead on the IBM Watson AI XPRIZE and an AI industry expert, these questions are all interconnected. “It’s not the topic of AI having human rights that is divisive per se; it’s that if AI is advanced enough that it should have human rights, it could be a danger to the human species,” he explains. “If you’re creating an AI system that’s so advanced and independent that it actually requires human rights, then it will surpass us as human beings in terms of intelligence very quickly. Then, the question of whether it should have human rights becomes moot, and that’s what people are worried about.”


Yet, says Neama, this is still the stuff of sci-fi or hypothesis. While the AI we use can sound like a human, or have human resemblances – think Siri or Alexa – in reality, these systems are nowhere close to humans in their intellect or decision-making capabilities. The AI we currently have is impressive, but it’s mostly based on pattern recognition. We’ve created incredible neural networks that can learn everything from languages to how to recognize a dog or a car – but this AI is not meant for novel thinking, and so does not require human rights.


“We’ve been talking about sentient AI – AI at a stage that is so smart and powerful it can rival humans – for many, many years, but it’s mostly in the realm of science fiction – and I believe it exists as science fiction now,” says Neama. “We are nowhere near generalized AI, which is AI that can think for itself.”


As for the future, while there are different schools of thought about how long it will take to invent sentient AI, Neama estimates that we could be decades away from building the underlying technologies needed for this to become a reality. So, while it makes sense to think ahead about what kind of precautions and ethics we want to consider, debating whether AI should have basic human rights at this moment can be a distraction from more important questions about how we can use AI… for good. What if we flipped the question, says Neama, and instead of asking “Should AI have basic human rights?” we asked: “How can AI help us uphold human rights?”


“Let’s say we do get to a point where we need to debate this, I think it comes down to a question of sentience. ‘Is an AI system alive? And does it have free will?’ When you’re starting to approach that area is when AI should have human rights. But until then, AI is just a tool that enables humans. So, I believe we should be focusing on making sure that AI is not displacing humans or infringing on the human rights that people have now, and instead that it’s working collaboratively with humans and empowering humans to do better at the things that we want to do.”


This is, Neama continues, the key focus of the $5M IBM Watson AI XPRIZE – a prize that challenges teams to demonstrate how humans can work with AI to tackle important global challenges. That could be combating malaria, improving infant biometrics, finding lost children, or advancing care for depression. Next year, XPRIZE will announce the winner from a handful of finalist teams from around the world who have been looking at issues like these – the biggest issues facing humanity – and who have developed the most groundbreaking AI technology to solve them.


Ethics have been built into the evaluation of teams since the beginning, Neama explains, but this was not about the ethics of whether AI should have sentience; rather, it was about the ethics of using AI to help humans: accountability, accessibility, lack of bias, transparency, trust, and the protection of human rights. “These are things everyone who talks about AI should be focusing on,” Neama urges. “Ethical AI is very important now for big companies and small companies, and we have to be very cognizant of how we’re using AI technology to ensure it’s not doing harm.”

Here he cites the examples of using datasets in the wrong context, or not testing AI on the correct group of people. “AI systems frequently do well in the lab under controlled settings, but then when you apply those to the real world they can fail to perform. Say you’re using data from North America and then you want to deploy it in the developing world, but the system doesn’t recognize the nuances of local language and customs – if you don't teach AI about the culture that you're applying it to, it can have very negative outcomes.” AI can learn the biases in the datasets it is fed as well, he adds: “We’ve seen the Tay chatbot trained by humans to be racist, or things around data bias, like resume screeners that only hire men because the datasets that engineers used taught them existing hiring biases. When you train AI in a controlled environment and import that to the real world, it sometimes breaks down.”

These are the ethics we should be thinking about, Neama concludes, and they present an exciting challenge to make AI a whole lot better. The scope for using AI to tackle global issues is huge – if we get the technology right. Questions around AI and human rights will become important, but they should not hijack the conversations around how AI can be a tool for good. At XPRIZE, we believe AI is here to benefit us, not replace us, and to solve the potential dystopian problems of the future and create utopias in the now.