Neural Network Learning Rules For OpenAI

Glyn Matthews


Table of Contents
Architectures
Learning Rules
Conclusion
Bibliography

Outlined in this document are some neural network learning algorithms intended for the OpenAI project. The intention of this report is to provide a basis for developing implementations within the artificial neural network (henceforth ANN) framework. It is not intended as a tutorial on neural networks, nor as an exhaustive discussion of learning rules. The examples in the report are, according to much of the literature, the most widely used learning rules, which is the reason for presenting them. A final implementation may include only some of these, or may include others that aren't discussed here. Of course, there is a wide variety of supervised and unsupervised learning algorithms to choose from (a list of some of these can be found in the comp.ai.neural-nets FAQ).

The primary focus of this report is therefore to describe ANN algorithms that perform particularly strongly on the following common applications:

  1. Function approximation;

  2. Time series processing;

  3. Classification;

  4. Pattern recognition.

These are suggested because they are well established tasks for which ANNs are used extensively.

The methods described here require the definition of new learning rules that inherit from the LearningRule base class; for networks that aren't feedforward MLPs, new architectures that inherit from the abstract Architecture class also need to be defined.

The public methods which need to be implemented are:

    LearningRule.correctLayer(Layer layer, DataElement dataElement);
    LearningRule.ready(Layer layer);
    Architecture.connectNetwork(Network network);
    Architecture.reconnectNetwork(Network network);
    Architecture.iterateNetwork(Network network);
    Architecture.createDefaultConnections(Network network);
    Architecture.connectAll(Vector fromNeurons, Vector toNeurons);
    Architecture.disconnectAllFrom(Vector neurons);
    Architecture.disconnectAllTo(Vector neurons);
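
As a hedged sketch of how the LearningRule hooks might be used, the following toy rule nudges each weight toward the corresponding data value. The Layer and DataElement classes here are minimal stand-ins for the framework's real classes, whose actual fields and signatures may differ.

```java
// Minimal stand-ins for the framework's classes (illustrative only).
class Layer { double[] weights; Layer(double[] w) { weights = w; } }
class DataElement { double[] values; DataElement(double[] v) { values = v; } }

abstract class LearningRule {
    public abstract void correctLayer(Layer layer, DataElement dataElement);
    public abstract void ready(Layer layer);
}

// A toy rule: move each weight a fraction of the way toward the data value.
class ToyRule extends LearningRule {
    double rate = 0.1;

    public void correctLayer(Layer layer, DataElement d) {
        for (int i = 0; i < layer.weights.length; i++)
            layer.weights[i] += rate * (d.values[i] - layer.weights[i]);
    }

    // Called before training; a real rule might reset per-epoch state here.
    public void ready(Layer layer) { }
}
```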

Architectures

Any learning rule is intimately tied to the network topology, or architecture, in such a way as to make the two almost inseparable. A key difficulty is to separate the parameters and functions of a given architecture from those of a learning rule.

Feedforward Neural Networks

The multilayer perceptron architecture (fully connected feedforward with biases) is already implemented in the architecture package (FeedforwardArchitecture.java). However, there is no scope to include techniques such as pruning, which require networks that are not fully connected. This would mean including methods that add or remove single connections.

Additionally, a feedforward architecture would also need a function to add or remove nodes. As will be explained later, some learning rules (such as cascade correlation) require the addition or removal of nodes.
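
One way single-connection removal might be represented is a boolean mask alongside the weight matrix, as sketched below; PrunableLayer and its methods are illustrative names, not part of the existing framework, and the magnitude threshold is just one possible pruning criterion.

```java
// Illustrative sketch: a connection mask lets individual connections be
// pruned without restructuring the whole layer.
public class PrunableLayer {
    final double[][] weights;     // weights[i][j]: from-neuron i to to-neuron j
    final boolean[][] connected;  // false once a connection is pruned

    public PrunableLayer(double[][] weights) {
        this.weights = weights;
        this.connected = new boolean[weights.length][weights[0].length];
        for (boolean[] row : connected) java.util.Arrays.fill(row, true);
    }

    // Remove a single connection, as pruning requires.
    public void disconnect(int from, int to) { connected[from][to] = false; }

    // Prune every connection whose weight magnitude is below a threshold;
    // returns the number of connections removed.
    public int pruneBelow(double threshold) {
        int pruned = 0;
        for (int i = 0; i < weights.length; i++)
            for (int j = 0; j < weights[i].length; j++)
                if (connected[i][j] && Math.abs(weights[i][j]) < threshold) {
                    connected[i][j] = false;
                    pruned++;
                }
        return pruned;
    }

    public int connectionCount() {
        int n = 0;
        for (boolean[] row : connected) for (boolean c : row) if (c) n++;
        return n;
    }
}
```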

Figure 1. Fully Connected Feedforward Multilayer Perceptron With Biases

Feedforward networks have the advantage that they are very simple and often very fast. Also, traditional numerical methods can be applied to feedforward network training, as well as algorithms invented especially for neural networks, so there is a wide range of algorithms to choose from.
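
The simplicity of the feedforward pass can be sketched in a few lines; the logistic activation and array layout here are illustrative choices, not those of the existing FeedforwardArchitecture implementation.

```java
// Sketch of one fully connected feedforward layer with biases:
// outputs[j] = sigmoid(bias[j] + sum_i inputs[i] * weights[i][j])
public class FeedforwardSketch {
    static double sigmoid(double x) { return 1.0 / (1.0 + Math.exp(-x)); }

    static double[] forward(double[] inputs, double[][] weights, double[] biases) {
        double[] outputs = new double[biases.length];
        for (int j = 0; j < outputs.length; j++) {
            double net = biases[j];
            for (int i = 0; i < inputs.length; i++)
                net += inputs[i] * weights[i][j];
            outputs[j] = sigmoid(net);
        }
        return outputs;
    }
}
```

A full network is just this layer computation repeated, each layer's outputs feeding the next layer's inputs.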

Recurrent Network Architectures

Recurrent networks have the same characteristics as standard feedforward networks, but with feedback connections. Because of these feedback connections, cycles are present in the network, so training is sometimes iterated for a long period of time before a response is produced.

Figure 2. Neural Network With Feedback Connection

Recurrent networks tend to be more difficult to train than feedforward networks as a result of the cycles, though there are still a fair number of algorithms in frequent use, including Jordan, Elman and time delay networks. Some problems, such as time series prediction, are particularly suited to recurrent networks rather than feedforward networks.
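
One Elman-style step can be sketched as follows: the hidden activations are copied back as "context" inputs on the next time step, which is how the feedback cycle is unrolled in practice. All names here are illustrative.

```java
// Sketch of one Elman-style recurrent step:
// hidden = tanh(Wx * x + Wc * context), where context is the previous hidden.
public class ElmanSketch {
    static double[] step(double[] x, double[] context,
                         double[][] wx, double[][] wc) {
        double[] hidden = new double[context.length];
        for (int j = 0; j < hidden.length; j++) {
            double net = 0.0;
            for (int i = 0; i < x.length; i++) net += wx[i][j] * x[i];
            for (int i = 0; i < context.length; i++) net += wc[i][j] * context[i];
            hidden[j] = Math.tanh(net);
        }
        return hidden; // the caller stores this as the next step's context
    }
}
```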

Self Organising Maps (SOM)

A Self Organising Map neural network defines a mapping from an input signal of arbitrary dimension to a one or two dimensional array of nodes. This array of nodes corresponds to a discrete map. Figure 3 shows a one dimensional SOM structure; Figure 4 displays an SOM in two dimensions.

Figure 3. One Dimensional Self Organising Map

As can be seen from figure 3, the SOM is essentially a two layer neural network with full connections, only now with additional connections between neurons in the output layer. Methods for learning in such an architecture are discussed in the following section.

Figure 4. Two Dimensional Self Organising Map

SOMs represent a different topology because of the connections between neurons in the output layer. Connections don't exist in this way for other network types. Although recurrent networks may have connections between neurons in the same layer, they are ...

The biological basis of SOMs is that sensory inputs (visual, auditory, motor, etc.) are mapped onto corresponding areas of the cerebral cortex in an orderly fashion. The neurons are close together and interact via short synaptic connections.

This gives rise to several key ideas for the Kohonen SOM:

  • Competition: For each input pattern, the neurons in the network compete to see which one is the closest to the input.

  • Cooperation: The winning neuron determines a neighbourhood, a group of nearby neurons, providing the basis for the neighbouring neurons to cooperate, that is, to adjust their weights together.

  • Adaptation: The excited neurons steadily adapt their values to be closer to those of the input pattern through adjustments to their weights.

These will be elaborated in the next section.
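
Ahead of that elaboration, the three ideas can be sketched as a single training step over a one dimensional map; the Gaussian neighbourhood function and all names here are illustrative choices, not the framework's API.

```java
// Sketch of one Kohonen SOM training step on a 1-D map of nodes.
public class SomSketch {
    // Competition: index of the node whose weight vector is closest to x.
    static int winner(double[][] w, double[] x) {
        int best = 0;
        double bestDist = Double.MAX_VALUE;
        for (int n = 0; n < w.length; n++) {
            double d = 0.0;
            for (int i = 0; i < x.length; i++)
                d += (w[n][i] - x[i]) * (w[n][i] - x[i]);
            if (d < bestDist) { bestDist = d; best = n; }
        }
        return best;
    }

    // Cooperation + adaptation: every node moves toward x, scaled by a
    // Gaussian neighbourhood centred on the winning node.
    static void train(double[][] w, double[] x, double rate, double sigma) {
        int win = winner(w, x);
        for (int n = 0; n < w.length; n++) {
            double dist = n - win;
            double h = Math.exp(-(dist * dist) / (2.0 * sigma * sigma));
            for (int i = 0; i < x.length; i++)
                w[n][i] += rate * h * (x[i] - w[n][i]);
        }
    }
}
```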