October 31, 2020

ML misconceptions (8): neural networks may need to be retrained

by Sam Sandqvist

There are many misconceptions regarding neural networks. This article, and subsequent ones on the topic, will present the major ones one needs to take into account.(1)

[Image: Blog 15-1 — a neural network]
Even if you were able to train a neural network to work successfully both in and out of sample, it may still stop working over time. There are many reasons why this can happen.

This is not a poor reflection on neural networks but rather an accurate reflection of, e.g., the financial markets. Financial markets are complex adaptive systems, meaning they are constantly changing: what worked yesterday may not work tomorrow. Anything dependent on time will change!

Problems with this characteristic are called non-stationary, or dynamic, optimisation problems, and neural networks are not particularly good at handling them.

It is clear that the more complex, i.e. deeper, a network is (for instance, the network depicted above), the more dependent its output is on the weights in the net: since depth is thought to extract salient features, if those features are no longer present or change dramatically, the weights become inappropriate for the task at hand. The result is typically nonsense output. This is alleviated by periodic retraining, especially in dynamic environments.
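The effect of a drifting environment, and how periodic retraining counters it, can be sketched with a toy example. The drifting-slope data generator and the simple least-squares "model" below are illustrative assumptions, not anything from the article; the point is only the comparison between a stale model and one refit on fresh data.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_batch(t, n=200):
    """Synthetic data stream whose true slope drifts over time (concept drift)."""
    x = rng.uniform(-1, 1, n)
    slope = 1.0 + 0.1 * t          # the 'environment' changes each period
    y = slope * x + rng.normal(0, 0.05, n)
    return x, y

def fit_linear(x, y):
    """Least-squares slope through the origin: w = sum(xy) / sum(x^2)."""
    return float(x @ y / (x @ x))

# Train once on period 0, then keep using the stale model...
x0, y0 = make_batch(0)
stale_w = fit_linear(x0, y0)

for t in range(5):
    x, y = make_batch(t)
    stale_err = np.mean((stale_w * x - y) ** 2)
    retrained_w = fit_linear(x, y)        # periodic retraining on fresh data
    fresh_err = np.mean((retrained_w * x - y) ** 2)
    print(f"t={t}: stale MSE={stale_err:.4f}, retrained MSE={fresh_err:.4f}")
```

As the slope drifts away from what the stale model learned, its error grows period by period, while the periodically retrained model stays near the noise floor.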

[Image: Blog 15-2 — a static environment]

Consider the image above, depicting a static environment. Once the network has been trained to perform well, it will continue to do so.

However, in the image below, this is no longer the case. We need to retrain the network if the optimum changes.

[Image: Blog 15-3 — a dynamic environment with a changing optimum]

Dynamic environments, such as financial markets, are extremely difficult for neural networks to model.

Two approaches are to keep retraining the neural network over time, or to use a dynamic neural network.

Dynamic neural networks 'track' changes to the environment over time and adjust their architecture and weights accordingly. They are adaptive over time.
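The idea of tracking a changing environment by continuously adjusting weights can be illustrated with a single adaptive "neuron" trained online. The drifting target function and the learning rate below are assumptions for the sake of the sketch; a real dynamic network would apply the same principle to many weights at once.

```python
import numpy as np

rng = np.random.default_rng(1)

# A single adaptive 'neuron' that tracks a drifting target y = w_true(t) * x
# by taking one gradient step on every new observation (online learning).
w = 0.0
lr = 0.1   # learning rate: higher values track drift faster, but more noisily

errors = []
for t in range(2000):
    w_true = np.sin(t / 200.0) + 1.5        # slowly drifting environment
    x = rng.uniform(-1, 1)
    y = w_true * x + rng.normal(0, 0.01)
    pred = w * x
    w += lr * (y - pred) * x                # online gradient step on squared error
    errors.append(abs(w - w_true))

print(f"mean |w - w_true| over last 500 steps: {np.mean(errors[-500:]):.3f}")
```

Because every new sample nudges the weight toward the current target, the neuron follows the drift with a small lag instead of going stale, which is the behaviour a dynamic network aims for.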

For dynamic problems, multi-solution meta-heuristic optimisation algorithms can be used to track changes to local optima over time. One such algorithm is multi-swarm optimisation, a derivative of particle swarm optimisation. Additionally, genetic algorithms with enhanced diversity or memory have also been shown to be robust in dynamic environments.

(1) The inspiration for the misconceptions is adapted from an article by Stuart Reid from 8 May 2014 available at http://www.turingfinance.com/misconceptions-about-neural-networks/.

Sam Sandqvist
AUTHOR

Dr Sam Sandqvist is our in-house Artificial Intelligence Guru. He holds a Dr. Sc. in Artificial Intelligence and is a published author. He specialises in AI theory, AI models, and simulations. He also has industry experience in FinServ, Sales and Marketing.