When it comes to technology, nature was there first. The same holds for intelligence, whether natural or artificial. But even if the inspiration for artificial intelligence comes from nature, the implementation does not.

[Image: Neurons]

Artificial intelligence is both new and old. As described in an earlier post, it all started in the 1950s. Two significant developments in AI technology have happened since, both taking their cue from nature: evolutionary algorithms in the 1960s, and neural networks a little bit later.

Genetic algorithms (GAs)

A GA is a search algorithm that looks for optimal solutions to a problem according to some criteria. The criteria may be fixed at the start, or evolve over time in response to changes in the environment. Put another way, GAs are a class of probabilistic optimisation algorithms inspired by biological evolution. They use the concepts of natural selection and genetic inheritance, and were originally developed by John Holland in 1975. They are also well suited to parallel computing.

[Image: DNA]

The "no free lunch" theorem says that, averaged over all possible problems, every search algorithm performs equally well. It follows that if an algorithm achieves superior results on some problems, it must pay with inferior results on other problems. In this sense there is no free lunch in search. Usually search is interpreted as optimisation, and this leads to the observation that there is no free lunch in optimisation either.

Some computational problems are solved by searching for good solutions in a space of candidate solutions. A description of how to repeatedly select candidate solutions for evaluation is called a search algorithm. On a particular problem, different search algorithms may obtain different results, but averaged over all problems they are indistinguishable; this is exactly what the no free lunch theorem states. The practical upshot is that matching algorithms to problems gives higher average performance than applying a single fixed algorithm to everything.
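For the curious, the result is usually credited to Wolpert and Macready. Roughly paraphrased (this is a sketch of their formulation, not a derivation): for any two algorithms a_1 and a_2 and any fixed number of evaluations m, the probability of observing a given sequence of objective values is the same once you sum over all possible objective functions f:

    \sum_{f} P(d^y_m \mid f, m, a_1) = \sum_{f} P(d^y_m \mid f, m, a_2)

Here d^y_m denotes the sequence of m objective values the algorithm has sampled so far.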

To make matters more concrete, consider an optimisation practitioner confronted with a problem. Given some knowledge of how the problem arose, the practitioner may be able to exploit that knowledge when selecting an algorithm that will perform well on the problem. If the practitioner does not understand how to exploit the knowledge, or simply has no knowledge, then he or she faces the question of whether some algorithm generally outperforms others on real-world problems.

A genetic algorithm is, at its core, very simple. In pseudo-code, it may be written thus:

    Repeat
      generate a random possible solution
      test it, and see how good it is
    until solution is good enough.

As can be seen, it is basically just a loop. Of course, in practice it's more complicated: we have to generate the possible solutions (preferably in an intelligent manner), and make sure the 'surviving' solutions in each generation are well chosen (if we so wish). We also add a little bit of evolution by combining solutions and mutating them. The stopping criterion must also be set somehow.
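To make this concrete, here is a minimal sketch of such a loop in Python, applied to a toy problem: evolving a bit string towards all ones. The fitness function, population size, mutation rate and stopping criterion are all arbitrary choices made for this illustration, not a recipe for real problems.

    import random

    STRING_LEN = 20       # toy problem: evolve a bit string towards all ones
    POP_SIZE = 30         # population size (arbitrary for this sketch)
    MUTATION_RATE = 0.05  # probability of flipping each bit
    GENERATIONS = 100     # fallback stopping criterion

    def fitness(individual):
        # Test a solution and see how good it is: here, count the ones.
        return sum(individual)

    def crossover(a, b):
        # Combine two solutions by cutting both at a random point.
        point = random.randint(1, STRING_LEN - 1)
        return a[:point] + b[point:]

    def mutate(individual):
        # A little bit of evolution: occasionally flip a bit.
        return [bit ^ 1 if random.random() < MUTATION_RATE else bit
                for bit in individual]

    # Generate a population of random possible solutions.
    population = [[random.randint(0, 1) for _ in range(STRING_LEN)]
                  for _ in range(POP_SIZE)]

    for generation in range(GENERATIONS):
        # Keep the better ('surviving') half of the population.
        population.sort(key=fitness, reverse=True)
        survivors = population[:POP_SIZE // 2]
        if fitness(survivors[0]) == STRING_LEN:
            break  # solution is good enough
        # Refill the population by combining and mutating survivors.
        children = [mutate(crossover(random.choice(survivors), random.choice(survivors)))
                    for _ in range(POP_SIZE - len(survivors))]
        population = survivors + children

    best = max(population, key=fitness)
    print("best:", "".join(map(str, best)), "fitness:", fitness(best))

Real GAs differ mainly in how selection, crossover and mutation are designed for the problem at hand.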

The GA will be treated in detail in a future article series.

Neural networks (NNs)

An Artificial Neural Network (ANN), or just NN, is a computing system whose central idea is borrowed from biological neural networks. A NN consists of a large collection of units, interconnected in some pattern that allows communication between them.

These units, also referred to as nodes, or neurons, are simple processors which operate in parallel.

Sometimes we see the term 'perceptron' used for the whole net or, confusingly, just for the individual neurons. The following diagram shows a simple NN.

[Image: Neural network layers]

This network has two hidden layers (between the input and output layers). Note that the structure is reminiscent of a brain, with its neurons, dendrites and axons. This particular network has three inputs and one output; the numbers are of course problem-dependent. A network with just one output can act as a simple oracle, giving a yes-or-no answer depending on the inputs.
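As a concrete (if contrived) sketch, the following Python snippet feeds an input through a network of that shape: three inputs, two hidden layers and one output. The hidden-layer width, the random weights and the sigmoid activation are arbitrary choices made for illustration.

    import numpy as np

    def sigmoid(x):
        # Squash any value into the range (0, 1).
        return 1.0 / (1.0 + np.exp(-x))

    rng = np.random.default_rng(0)

    # Weight matrices: 3 inputs -> 4 hidden -> 4 hidden -> 1 output.
    # (A width of 4 per hidden layer is an arbitrary choice.)
    w1 = rng.standard_normal((3, 4))
    w2 = rng.standard_normal((4, 4))
    w3 = rng.standard_normal((4, 1))

    def forward(inputs):
        # Each layer: multiply by the connection weights, then apply the activation.
        h1 = sigmoid(inputs @ w1)
        h2 = sigmoid(h1 @ w2)
        return sigmoid(h2 @ w3)

    answer = forward(np.array([0.5, -1.0, 2.0])).item()
    print("yes" if answer > 0.5 else "no")  # one output acting as a yes/no oracle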

The key to NNs is that the connections between the units are not fixed; instead, they are modified in a learning process where the network is presented with test inputs and, depending on whether the answer is correct, the connections are strengthened or weakened.
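Here is a minimal sketch of that idea in Python, using a single perceptron and the classic perceptron learning rule on a tiny made-up task (learning the logical AND of two inputs); the learning rate and number of passes are arbitrary choices for this sketch.

    import numpy as np

    # Training data for an AND gate: inputs and the correct yes/no answers.
    inputs = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
    targets = np.array([0, 0, 0, 1])

    weights = np.zeros(2)   # the connection strengths, initially zero
    bias = 0.0
    learning_rate = 1.0     # arbitrary for this sketch

    for _ in range(10):     # a few passes over the training data
        for x, target in zip(inputs, targets):
            prediction = int(x @ weights + bias > 0)
            error = target - prediction   # 0 if correct, +1/-1 if wrong
            # Strengthen or weaken the connections depending on the error.
            weights += learning_rate * error * x
            bias += learning_rate * error

    for x in inputs:
        print(x, "->", int(x @ weights + bias > 0))

A full multi-layer network replaces this rule with backpropagation, but the principle is the same: adjust the connection strengths in the direction that reduces the error.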

A more detailed description of how NNs work and are used will be coming in a future article series.