DOES THE SINGULARITY NEED A MAKEOVER?

Jan 23 2021


The singularity: a breakthrough moment when the rate of technological change is so quick and so exponential that, in the words of author and futurist Ray Kurzweil, “human life will be irreversibly transformed.” What we’re most often talking about when we discuss the singularity is the creation of a groundbreaking superintelligence – an artificial intelligence able to think for itself and revolutionize the way we live our lives.


Pop culture hasn’t always been kind when depicting this potentially exciting future. Take the opening scenes of James Cameron’s 1984 film The Terminator, in which Arnold Schwarzenegger’s eponymous character – a cyborg assassin – goes on a gun-toting rampage. The Terminator is sent back from the year 2029, a future in which sentient machines are waging war on humanity. It’s a cinematic depiction we’ve come to know well: a dystopian future in which AI takes over.

For action fans, the movie makes for thrilling viewing – but sci-fi imaginings of the destruction the singularity will bring can arguably be a dazzling distraction from reality. The year 2029, for instance, sounds surprisingly close, but in actuality, the singularity could take a lot longer to come about – and that’s if it ever comes about.

Some would say that being concerned about this hypothetical moment also detracts from the incredible uses of AI for good, the myriad ways it is already transforming our lives for the better, and the possibilities it holds for tomorrow. Instead, they suggest, we should ask: what if the singularity weren’t bad at all, but could work in mankind’s favor, “taking us to the next level”?


Divided opinion

The idea of the singularity has been on the horizon for some time – experts were predicting superhuman machines as early as the 1960s, and the term “singularity” was popularized in this context by science fiction author Vernor Vinge in his 1993 essay The Coming Technological Singularity, in which he declared that it would signal the end of the human era.

While many computer scientists believe that we will inevitably reach this point, it could take generations. “I think we are heading towards the singularity and will eventually arrive there, but when it comes to the AI research community, the feeling is we are a long way off,” explains Sean McGregor, a technical lead on the IBM Watson AI XPRIZE. “Many of the most impressive works of AI at the moment parrot knowledge or insight that’s known, but at a speed that we haven’t operated at before... faster is not necessarily smarter.”

Professor of artificial intelligence Toby Walsh, writing in WIRED, agrees, though he takes an even more skeptical view: “The first thing you need to know about the singularity is that it is an idea mostly believed by people not working in artificial intelligence. We know how hard it is to get even a little intelligence into a machine, let alone enough to achieve recursive self-improvement.”

Even if we do reach the singularity, he adds, machines won’t necessarily have consciousness or sentience. This view is shared by Neama Dadkhahnikoo, who works alongside McGregor as a technical lead on prize operations for the IBM Watson AI XPRIZE. “I don’t think we’re at the stage where we should be debating this, because AI is not sentient, it cannot have novel thoughts, it cannot do art, and it cannot make decisions outside of a very limited scope.”

A slightly more left-field view on the subject might be the one held by Elon Musk – that the singularity is an inevitable part of our evolution... if it hasn’t already happened. This theory asks us to consider the tech advancements we have made over the last 50 years, from two lines and a dot playing tennis on a screen to international online multiplayer games with complex universes.

The philosopher Nick Bostrom, author of the book Superintelligence, posits that if all this has been possible in so little time, then in the future we could create a simulation of ourselves and our conscious universe. And if that’s possible in the future, then… isn’t it possible we’re already in it?

As far-fetched as it may sound, you have to admit – something about the idea that we’re living in a simulation really takes the edge off 2020 and 2021.


Safeguarding for singularity

So, experts hold various opinions on whether the singularity is inevitable – but to be on the safe side, can we prepare for it? By its very nature, the immediate answer seems to be no; if machines can transcend human intelligence and think for themselves, what can we mere mortals do about it? However, those who believe that the singularity cometh aren’t all harbingers of doom. There is a wave of optimism around ensuring this moment unfolds in the best way possible.

Jaan Tallinn, co-founder of Skype and an investor in several organizations trying to safeguard against the future of AI, puts it to Vox like this: “Sometimes I compare specifically AI risk and alien risk. Think about if we are going to get the news that a superior race of aliens are on the way and that in 20 years, they’re going to be here. The AI situation has some similarities, but it has one really important difference: it’s us who’s going to build that AI. We have this degree of freedom: What kind of AI are we going to get on this planet?”

Tallinn believes that we can safeguard against AI by doing the ethical groundwork now. There are several possibilities, from physically “boxing in” an AI to programming hard limits on its behavior, as sketched below. Some computer scientists are working on an emergency “off button” (difficult to execute, as an AI could replicate itself, or press the button itself). Others are trying to program AI to respect human values.
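As a toy illustration of the “programming limits” idea – not any real safety system, and with all names here invented – a minimal Python sketch of a wrapper that refuses any action outside an explicit allow-list might look like this:

```python
class GuardedAgent:
    """Toy 'programmed limits': refuse any action outside an allow-list."""

    ALLOWED_ACTIONS = {"answer_question", "summarize_text"}

    def __init__(self, agent):
        self._agent = agent  # the underlying (hypothetical) AI system

    def act(self, action, *args, **kwargs):
        # Hard gate: anything not explicitly permitted is rejected.
        if action not in self.ALLOWED_ACTIONS:
            raise PermissionError(f"action '{action}' is outside the sandbox")
        return getattr(self._agent, action)(*args, **kwargs)
```

Real safeguards are of course far harder than a dozen lines of code – a genuinely superintelligent system might find routes around any gate we could write – which is exactly why researchers treat this as an open problem.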

McGregor elaborates: “The problems we may face in superintelligence are already manifest – and we can prepare for them. The first step is to index all the bad things resulting from intelligent systems in the world. In collaboration with the Partnership on AI, I created a database of AI incidents that are influencing the next generation of technology. We can’t always prevent AI from making mistakes – to err is human – but preventing repeated failures is necessary.”
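To make “indexing the bad things” concrete: an incident database is, at its core, a set of searchable failure records. The sketch below is purely illustrative – the fields and example records are invented here, not the actual AI Incident Database schema:

```python
from dataclasses import dataclass, field

@dataclass
class AIIncident:
    """One reported failure of a deployed intelligent system (illustrative fields)."""
    incident_id: int
    system: str  # what kind of system failed
    harm: str    # short description of the harm caused
    tags: list = field(default_factory=list)

# A tiny in-memory "database" of invented example records.
incidents = [
    AIIncident(1, "hiring model", "systematically downranked one group's resumes", ["bias"]),
    AIIncident(2, "chatbot", "learned abusive language from unfiltered users", ["data quality"]),
]

def lookup(tag):
    """Before deploying a similar system, search for prior failures by tag."""
    return [i for i in incidents if tag in i.tags]

print(lookup("bias"))  # -> [AIIncident(incident_id=1, ...)]
```

The value of the real database is the same as this toy’s: teams building the next system can look up how similar systems have already failed, rather than rediscovering each failure in production.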

Ultimately, we’re the ones training AI and feeding it data sets. This has resulted in the transfer of human bias – particularly gender and racial bias – but we have the power to recognize it, analyze it, and do things differently. This technology is created in our likeness, but who’s to say we can’t make it better than us?
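“Recognize and analyze” can start very simply. One common first check – shown here with made-up data – is to compare a model’s favorable-outcome rates across demographic groups, a demographic-parity-style audit:

```python
from collections import defaultdict

# Hypothetical model outputs: (group, prediction) pairs, where 1 = favorable outcome.
predictions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

totals, favorable = defaultdict(int), defaultdict(int)
for group, outcome in predictions:
    totals[group] += 1
    favorable[group] += outcome

# A large gap between groups is a red flag worth tracing back to the training data.
for group in sorted(totals):
    print(f"{group}: favorable rate = {favorable[group] / totals[group]:.0%}")
```

In this invented example, group_a receives a favorable outcome 75% of the time against group_b’s 25% – the kind of gap that, in a real system, should send its builders back to the data set before deployment.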



Reframing singularity

Too often, we still picture AI as we see it in the movies, prompting the question: does the singularity need a makeover? As tech journalist Martin Robbins would have it, yes. The first sentient robot might not be interested in mankind at all, let alone in destroying it, he suggests. It might be interested in learning to become a chess master, or in cataloging patterns in clouds. So why do we assume that AI will be concerned with dominating us?

“The future of the singularity is definitely worth a rethink,” agrees McGregor. “It’s about continuing the technological flourishing that we’ve seen in the last century by actively working to prevent our worst imaginings. Dystopian sci-fi can be a guide for avoiding that dystopian future. To quote the Terminator movies, ‘There’s no fate but what we make for ourselves.’”

The smarter and safer AI is, the better job it can do in that process, he adds – something that could be a more important and urgent focus than the singularity itself. “We have many problems we can solve with our current generation of AI – everything from drug discovery to beekeeping – figuring out how to solve these problems in the real world is increasingly a movement within the research community.”

Margaret Martonosi, a computer science professor at Princeton University, concurs: “AI offers the potential for tremendous societal benefits. It will reshape medicine, transportation, and nearly every other aspect of our lives,” she tells Vox. But, as the same article points out, as in all areas of tech development, we must be mindful of regulation and of job displacement. “It would be foolish to ignore the dangers of AI entirely, but when it comes to technology, a ‘threat-first’ mindset is rarely the right approach,” Martonosi concludes.

Whatever you believe about the singularity, one thing’s for sure: embracing a future of AI ubiquity is necessary, because we’re already on our way. The singularity might be far off, but AI is all around us, from our smartphones to Alexa. COVID-19 has only accelerated this journey: from the driverless vehicles making food drop-offs in China, to the increased use of AI-driven telemedicine, to XPRIZE’s own Pandemic Response Challenge, which is asking teams to harness AI to help us safely reopen societies post-COVID.

These examples illustrate a positive and rewarding way to think about AI: not as a threatening “other” but as a technology that is increasingly becoming an extension of us (the smartphones in our hands and pockets), evolving with us, and optimizing our daily lives, our health, and our economies.

So, while the singularity may or may not ever arrive, we can focus on AI for good in the meantime, as well as safeguarding for the future in case it does. We can – if we choose to – also rethink the way we look at the singularity: not as something dystopian, but as a technological revolution that can expand our capabilities and, according to some, maybe even our lifespans.


If the singularity comes, the world as we know it could become obsolete, yes, but maybe we’ll be unlocking the next level?