Digital Intelligence In Action

Has Technology Become Too Smart For Us? March 2017

“I think the development of full artificial intelligence could spell the end of the human race.” - Stephen Hawking.
Stephen Hawking’s famous warning that artificially intelligent machines could become too clever and destroy us is not one to be ignored. And he is not alone in this concern: technology leaders including Tesla CEO Elon Musk and Microsoft co-founder Bill Gates have also spoken about the ever-growing intelligence of our devices and the threats it may pose. The debate as to whether we still have control over our lives, with technology now playing such a key role in our day-to-day activities, should concern all of us. But why?

Technology has undoubtedly made our lives more interconnected and fast-paced. It has revolutionized the education system and is a major contributor to economic development. However, despite its evident benefits, we must recognize that super-intelligent technology is extremely good at accomplishing its goals, and will stop at nothing to do so. This may work in our favour, as when our mobile phones work tirelessly to keep us connected to the global community around the clock. But when technology’s goals are not aligned with ours, that is where the problems start.

Nick Bostrom’s TED talk, titled “What happens when our computers get smarter than we are?”, explores this further. Bostrom observed that “artificial intelligence used to be about putting commands in a box”: human programmers would painstakingly handcraft systems that were useful for some purposes, but those systems could not scale themselves up, and you got out only what you put in. But what if this is no longer true?
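
Bostrom’s distinction can be made concrete with a small sketch. The Python fragment below is our own illustration, not something from the talk: the first function encodes a rule its programmer wrote by hand, while the second derives its own rule from labelled examples, so it can respond sensibly to messages its programmer never anticipated.

    # Old paradigm: the rule is written by hand; you get out only what you put in.
    def is_spam_by_rule(message):
        return "free money" in message.lower()

    # New paradigm: the system learns which words signal spam from example data.
    def learn_spam_words(examples):
        scores = {}
        for message, label in examples:
            for word in message.lower().split():
                scores[word] = scores.get(word, 0) + (1 if label == "spam" else -1)
        return {word for word, score in scores.items() if score > 0}

    training_data = [
        ("claim your free money now", "spam"),
        ("free money waiting for you", "spam"),
        ("meeting moved to friday", "ham"),
        ("lunch on friday?", "ham"),
    ]
    spam_words = learn_spam_words(training_data)

    def is_spam_learned(message):
        return sum(1 for w in message.lower().split() if w in spam_words) >= 2

    print(is_spam_by_rule("you have won free money"))    # True: matches the fixed rule
    print(is_spam_learned("claim your free prize now"))  # True: generalises beyond it

The learned system was never told anything about the word “prize”, yet it still flags that message, because it worked out for itself which other words matter. That ability to generalise beyond what was put in is precisely the shift Bostrom describes.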

Since then, a paradigm shift has taken place in the field of artificial intelligence. I. J. Good, a British mathematician who worked on early computer design, called this stage of technological development the “intelligence explosion”: the point at which machines, with an insatiable thirst for information, begin to engineer themselves to become more intelligent by learning how to learn, ultimately setting their own goals and aspirations, some of which could be very destructive and contrary to human needs.
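
What makes Good’s idea an “explosion” rather than steady progress is that the improvement feeds back on itself: each generation of machine helps design its successor. The toy calculation below, using entirely hypothetical numbers of our own choosing, shows how such compounding growth crosses a threshold far sooner than linear intuition suggests.

    # Toy model of Good's feedback loop (hypothetical, illustrative numbers).
    capability = 1.0      # design skill of the first machine, arbitrary units
    human_level = 100.0   # assumed threshold for human-equivalent design skill
    generation = 0

    while capability < human_level:
        # Each machine improves its successor in proportion to its own skill,
        # so capability multiplies rather than merely adding a fixed step.
        capability *= 1.5
        generation += 1
        print(f"generation {generation}: capability {capability:.1f}")

    # The growth looks modest at first (1.5, 2.3, 3.4, ...) yet passes the
    # threshold after only 12 generations: that compounding is the "explosion".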

Industry speculation has led us to the prediction that this “intelligence explosion” is likely to happen around 2040-2050. But, believe it or not, the beginnings of the process are already appearing around the world. For the first time in human history, computers are making crucial decisions for us, in areas such as automated online trading, self-driving vehicles, aircraft autopilots and medical diagnosis. The US government is even developing unmanned computerized weapons which could effectively wipe out thousands of people without any human deliberately initiating an attack.

Technology therefore has our lives within its tightening grasp, and slipping from that grasp is a lot harder than we may think. Nick Bostrom stated in his talk that, even if we put an artificial intelligence in a secure software environment from which it should theoretically be unable to escape, we still cannot be confident that it would not find a bug giving it access to the outside world. This is deeply threatening: if we do not keep tabs on what our creations can and cannot do, technology could become clever enough to develop its own consciousness, form its own thoughts and preferences, and make its own potentially catastrophic decisions, such as launching weapons of mass destruction without warning. Essentially, we are building superhuman machines but have not yet grappled with the problems of creating something that could treat us the way we treat ants.

So how can we stop technology from becoming cleverer than we are? And how can we do so without compromising the very fast pace of technological development?

First, it is important to stress that we should not abandon certain technologies simply because they have the potential to become exceedingly smart. In fact, we need innovation and smart technology to bring about global equality and to sustain the simpler, quicker and higher-quality lifestyles we now enjoy. Technological development must not, therefore, be sacrificed in a bid to keep technology under control.

At the same time, however, governments and companies have been developing artificial intelligence as if it were a race, on the general consensus that whoever wins the technology race wins the world. This has meant that whatever is easiest to build gets built first, at the expense of safety and security. These governments and companies must instead take every necessary precaution to ensure that all of a technology’s functions are safe and aligned with human values, rather than ‘jumping the gun’.

In his TED talk, Grady Booch, who is shaping the future of cognitive computing by building intelligent systems that can reason and learn, suggests that although we are stepping into uncharted cyber-territory, we need not fear our super-intelligent systems so long as we ingrain in their programming the mercy and justice that we humans share. If we do, computers learning how to learn would not be a threat, as any goals these super-intelligent devices set for themselves would be in line with human needs and values.

In effect, we are relying on the future computer scientists who are only just entering school today to join tomorrow’s technology organizations and make sure that we create moral cyber-beings, for whom even slight divergence between their goals and ours would be impossible. Computer science is an increasingly relevant and important field, and we should encourage people to enter it in order to take this very necessary precaution. If we ignore these preventative measures, we will be outsmarted, treated with disregard, and eventually replaced.

Angie Doran and Ailsa Lewis
U6 Digital Technology Prefects 2016-17, St Catherine’s School, Bramley