Hans Moravec, the futurist, once said: 'Machines find routine human tasks hard. Humans find routine machine tasks hard.' Regardless, the momentum of advances in AI continues to create an augmented world where humans and machines can work together synergistically.
Artificial intelligence is both new and old. We have always been intrigued by the concept of mind, and intelligence, and the problem of computation as a human endeavour. However, it can only be said that the concept of AI in the modern meaning took shape in the 1950s.
The famous scientist Alan Turing formulated what later became known as the Turing test. Proposed in 1950, it is a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human.
The key point is, if you cannot tell which is which, the machine is intelligent.
From a legal point of view, the situation is murkier. It is true that Congress in the United States has developed a Bill to formalise the definition of AI, as well as the relationship of AI to human intelligence.
In the beginning, AI programs could play checkers fairly well, but checkers was a simple problem compared to the then Holy Grail of chess.
The path towards that goal went through machine learning, first in the form of genetic algorithms (GAs), and then more and more towards neural networks (NNs). Today, NNs have developed further, as we will see later for both GAs and NNs, and are now exemplified by deep learning algorithms.
The path there was not without its detours, however. One was the all-singing, all-dancing database of all the world's knowledge, first begun in the 1980s. Today it is known as Cyc, and it still exists. The real problem turned out to be its fragility: it could not generalise at all. It is certainly better today, using newer technologies, but it is basically something of a dead end: excellent for very domain-specific queries, but not for general knowledge.
The pinnacle of the traditional approaches to AI came in 1997, when IBM's Deep Blue system defeated world chess champion Garry Kasparov in a dramatic contest of chess.
This contest exemplified the brute-force approach to AI: the strategy was to just look as far forward as necessary to find a winning tactic.
It turned out that chess wasn't as hard a problem as previously thought. The match can also be said to have spelt the end of an era: it was realised that this brute-force strategy could only take you so far.
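The brute-force, look-ahead strategy described above can be sketched as a minimax search over a game tree. The sketch below is a minimal illustration on a hypothetical toy tree, not Deep Blue's actual engine, which combined specialised hardware with many pruning heuristics.

```python
def minimax(node, maximizing=True):
    """Exhaustively evaluate a game tree given as nested lists.

    Leaves are static scores from the maximizing player's point of
    view; at each inner node the player to move picks the best child.
    """
    if isinstance(node, (int, float)):  # leaf: no deeper search needed
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Two plies deep: for each of our moves, the opponent replies
# with the move that is worst for us; we pick the best of those.
tree = [[3, 5], [2, 9], [1, 8]]
print(minimax(tree))  # prints 3
```

Exhaustive search like this is exactly what blows up on deep game trees: every extra ply multiplies the work by the branching factor, which is why brute force alone only takes you so far.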
The modern approach is different, however. It learns. And finds out things nobody could imagine an algorithm could do.
The real difficulty is that computational problems come in two very different forms.
- P. Polynomial: the solution is found in time T(n) = n^2 or n^3 or … n^k; that is, the running time grows as some fixed power of n, the size of the problem. These problems can in practice always be tackled by using a bigger, faster, better computer and algorithm.
- NP. Nondeterministic polynomial (often loosely glossed as 'non-polynomial'), where the best known solutions take time T(n) = a^n for some constant a > 1. In other words, the time grows exponentially with the size of the problem. These cannot be solved simply by bigger/better/faster systems, since the running time grows without bound far faster than hardware improves.
It turns out that many of the interesting problems in the world are NP.
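The gap between the two forms is easy to see numerically. The particular exponent and base below (n^3 versus 2^n) are illustrative choices, not measurements of any real algorithm:

```python
# Compare polynomial growth (fixed exponent, growing base) with
# exponential growth (fixed base, growing exponent) as n increases.
for n in (10, 20, 30, 40):
    poly = n ** 3   # a P-style cost
    expo = 2 ** n   # an NP-style worst case
    print(f"n={n:2d}  n^3={poly:8,d}  2^n={expo:15,d}")
```

Already at n = 40 the exponential cost exceeds the polynomial one by a factor of more than ten million, which is why faster hardware alone cannot rescue an exponential algorithm.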