We all remember that immortal phrase, “I can’t do that, Dave,” especially now, almost 20 years after it was purported to have happened. But is the menace real? Are there dangers in the full-scale deployment of AI systems? Will they ever be moral and adhere to the ethics of society? Or will they go rogue?

[Image: HAL 9000’s red eye]

The Machine Intelligence Research Institute (MIRI) published an interesting report in 2008, “Artificial Intelligence as a Positive and Negative Factor in Global Risk”, which delves in depth into the risks AI poses to humanity. It was followed in 2018 by another high-level report, “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation”, led by the University of Oxford.

In their executive summary, the authors highlighted four high-level recommendations formulated in response to the changing threat landscape (paraphrased):

  1. Policymakers and technical researchers should collaborate closely
  2. Researchers and engineers should take the dual-use nature of AI seriously
  3. Best practices should be formulated to address dual-use concerns, especially in computer and software security
  4. The range of stakeholders and domain experts involved in discussions of these challenges should be expanded

It is clear that the focus was on preventing the weaponisation of AI and on securing its use so that it does not lend itself to malicious practices. As experts have noted, drones turned into missiles, fake videos manipulating public opinion, and the automated hacking of computer systems are just some of the threats AI poses.

The machines are coming. So said The Economist, the well-known business weekly, in its issue of 19 December 2017, pointing out that half of all jobs were threatened by AI.

[Image: Android typing on a laptop]

But the menace is deeper than that. What about the societal fabric itself? The mostly unrecognised contract we have, “we work - society provides security, opportunities, education, and so on”? How will this change once AI encroaches more and more on our daily lives and takes our jobs?

But this has happened before. Every ‘revolution’, the agricultural one, the invention of writing and reading, printing, the industrial revolution, the new information age, has affected us in the same manner, to varying degrees to be sure, but the changes have been profound. We will cope, not by refusing to change or rejecting the revolution, but by taking it in our stride: mitigating where necessary, adapting, and changing our own lives.

More worrying, in my opinion, is that we are also building our own biases, prejudices, and attitudes into the AI solutions we deploy. AI will reflect the society in which it is built; if we do not make it ethical, it will not be. Witness the so-called decision support systems employed in judiciaries world-wide: they embody and have codified the biases, racism, and attitudes of the old systems on which they have been built. Teaching a neural network to find precedent cases from old cases and their past resolutions will not make the system impartial; on the contrary, it will make it exactly like the one it replaces, only faster and more efficient.
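
To make this concrete, here is a minimal sketch (synthetic data, invented feature names, scikit-learn assumed) of how a classifier trained on historically biased decisions reproduces the bias even when the sensitive attribute itself is withheld, because an innocent-looking proxy feature correlates with it:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# "group" is the sensitive attribute (e.g. a demographic group);
# it is NOT given to the model.
group = rng.integers(0, 2, size=n)

# "neighbourhood" is an innocent-looking feature that matches the
# group about 80% of the time, i.e. a proxy for it.
neighbourhood = (group + (rng.random(n) < 0.2)) % 2

# The true underlying risk is identically distributed in both groups.
risk = rng.random(n)

# Historical decisions were biased against group 1.
historical = (risk + 0.3 * group > 0.8).astype(int)

# Train only on the "neutral" features.
X = np.column_stack([risk, neighbourhood])
model = LogisticRegression().fit(X, historical)

pred = model.predict(X)
for g in (0, 1):
    print(f"group {g}: predicted positive rate = {pred[group == g].mean():.2f}")
# The disparity survives: the model has learned the old bias through
# the proxy feature, only faster and more efficiently.
```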

How can we ensure that the old is not replicated in the new? By recognising the values we want to impart, and by changing the old cases, reviewing and modifying the justifications to reflect the desired outcomes. In a changing world this is an ongoing task, and a gigantic one. It is much easier to use the old cases as a foundation and rely on the people using the system to exercise judgement.
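
One way to “change the old cases” without hand-editing each one is the reweighing idea of Kamiran and Calders: weight each training example so that group membership becomes statistically independent of the historical label, then retrain. The sketch below is a self-contained toy illustration, not a production implementation:

```python
import numpy as np

def reweigh(group: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Weight each example by P(group) * P(y) / P(group, y), so that
    group and label are independent under the weighted distribution."""
    w = np.empty(len(y), dtype=float)
    for g in np.unique(group):
        for label in np.unique(y):
            mask = (group == g) & (y == label)
            if mask.any():
                w[mask] = (group == g).mean() * (y == label).mean() / mask.mean()
    return w

# Tiny demo: group 1 was historically labelled positive far more often.
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])
label = np.array([0, 0, 0, 1, 1, 1, 1, 0])
print(reweigh(group, label))
# Under-represented pairs such as (group 0, positive) get weight 2.0;
# over-represented pairs get weight ~0.67. The weights can then be
# passed as sample_weight to an estimator's .fit() when retraining.
```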

So the danger is that we are blind, that we just lazily accept whatever the machine says. We need to ask, to question, to demand explainability from AI. This means that the systems must be able to justify, somehow, what they come up with, and we must be ready to question the answers. We cannot let go of the steering wheel, just as Tesla emphasises for its self-driving cars.
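
As a rough sketch of what demanding explainability can look like in practice (invented feature names, scikit-learn assumed), permutation importance measures how much a model degrades when each input is shuffled, exposing what the model actually relies on:

```python
import numpy as np
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.random((500, 2))                          # columns: "risk", "proxy"
y = (X[:, 0] + 0.5 * X[:, 1] > 0.9).astype(int)   # outcome leans on the proxy

model = LogisticRegression().fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, imp in zip(["risk", "proxy"], result.importances_mean):
    print(f"{name}: importance {imp:.3f}")
# A heavy reliance on "proxy" would be the cue to question the system
# rather than accept its answer.
```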

It is often maintained that AI speeds up technological and social change so much that people cannot adapt to it. This runaway process is called the singularity; it may also be thought of as the moment when a machine builds a better machine than itself. Compared with biological and cultural change, this could happen very quickly. A super-AI might, for instance, revolutionise financial markets, outdo human researchers and scientists, and at worst create weapons that humans would not even understand. The dangers of unchecked AI development have led the founder of Microsoft, Bill Gates, the physicist Stephen Hawking, and the entrepreneur and inventor Elon Musk, among others, to raise the alarm.

Today, financial, governmental and managerial clout is increasingly dependent on who wields the power of AI. We need to be cognisant of this whenever we accept an AI answer.

Once our machines become intelligent enough, we need to ask ourselves another question: what do they want, the machines, that is? I once read an interesting article about this (“The Basic AI Drives” by Stephen Omohundro), in which the author argues that we should design “friendly AI” and impart the values we want. He details the basic AI drives as follows.

  • AIs will want to self-improve
  • AIs will want to be rational
  • AIs will try to preserve their utility functions
  • AIs will try to prevent counterfeit utility
  • AIs will be self-protective
  • AIs will want to acquire resources and use them efficiently

It is clear that we are still the masters, and we will design AI to reflect what we feel is right. However, it is critically important that we recognise this and do what is right, not what is expedient.