Model of an artificial neural network

Network structure

Component-based representation of a neural network.

A neural network is generally composed of a succession of layers, each of which takes its inputs from the outputs of the previous one. Each layer i is made up of N_i neurons, which take their inputs from the N_(i-1) neurons of the previous layer. Each synapse carries a synaptic weight, so that the N_(i-1) inputs are multiplied by these weights and then summed by the neurons of layer i, which is equivalent to multiplying the input vector by a transformation matrix. Stacking the layers of a neural network one behind the other would amount to cascading several transformation matrices, and the whole could be reduced to a single matrix (the product of the others), were it not for the output function applied at each layer, which introduces a nonlinearity at each step. This shows the importance of a judicious choice of output function: a neural network whose outputs are linear is of no interest.
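As a rough illustration, here is a minimal NumPy sketch of one layer computed as a matrix-vector product followed by a nonlinear output function (the names W, b and layer are illustrative assumptions, not taken from the text):

    import numpy as np

    def layer(x, W, b):
        # Weighted sum of the inputs: one row of synaptic weights per neuron,
        # so W @ x multiplies the input vector by the layer's transformation matrix.
        # The nonlinearity (here tanh) is what prevents stacked layers from
        # collapsing into a single matrix product.
        return np.tanh(W @ x + b)

    # Layer i with N_(i-1) = 3 inputs and N_i = 2 neurons
    x = np.array([0.5, -1.0, 2.0])
    W = np.random.randn(2, 3)
    b = np.zeros(2)
    print(layer(x, W, b))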

Beyond this simple structure, the neural network may also contain loops, which radically change its possibilities but also its complexity. In the same way that loops turn combinational logic into sequential logic, loops in a neural network turn a simple input-recognition device into a complex machine capable of all sorts of behaviors.
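As a sketch of what such a loop means in practice, the network's previous state can be fed back into the next computation (the names W_in, W_rec and step below are illustrative assumptions):

    import numpy as np

    def step(x, h, W_in, W_rec):
        # The new internal state depends on the current input and on the
        # previous state h, so the same input can yield different behaviour
        # depending on the network's history.
        return np.tanh(W_in @ x + W_rec @ h)

    h = np.zeros(4)                    # initial internal state
    W_in = np.random.randn(4, 3)
    W_rec = np.random.randn(4, 4)      # the feedback loop
    for x in np.random.randn(5, 3):    # a short sequence of inputs
        h = step(x, h, W_in, W_rec)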

Combination function

Consider an arbitrary neuron.

It receives a number of input values from upstream neurons through its synaptic connections, and produces an output value using a combination function. This function can therefore be formalized as a function from a vector to a scalar; in particular (both variants are sketched after the list below):

  • MLP-type networks (Multi-Layer Perceptron) compute a linear combination of the inputs, that is, the combination function returns the dot product between the input vector and the vector of synaptic weights.
  • RBF-type networks (Radial Basis Function) compute a distance between the inputs, that is, the combination function returns the Euclidean norm of the vector difference between the input vector and the neuron's weight (centre) vector.
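A minimal sketch of the two combination functions for a single neuron, assuming an input vector x and a weight (or centre) vector w; the function names are illustrative:

    import numpy as np

    def mlp_combination(x, w):
        # MLP: linear combination, i.e. the dot product of inputs and weights
        return np.dot(w, x)

    def rbf_combination(x, c):
        # RBF: Euclidean norm of the difference between the input vector
        # and the neuron's centre vector c
        return np.linalg.norm(x - c)

    x = np.array([1.0, 2.0, 3.0])
    w = np.array([0.2, -0.5, 0.1])
    print(mlp_combination(x, w), rbf_combination(x, w))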

Activation function

The activation function (also called thresholding function or transfer function) is used to introduce a nonlinearity into the functioning of the neuron.

Thresholding functions generally exhibit three intervals:

  1. below the threshold, the neuron is not active (often in this case, its output is 0 or -1);
  2. around the threshold, a transition phase;
  3. above the threshold, the neuron is active (often in this case, its output is 1).

Classic examples of activation functions (sketched in the example after this list) are:

  1. The sigmoid function.
  2. The hyperbolic tangent function.
  3. The Heaviside function.
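A minimal sketch of these three activation functions (a plain illustration, not taken from the text):

    import numpy as np

    def sigmoid(x):
        # Smooth S-shaped curve with outputs in (0, 1)
        return 1.0 / (1.0 + np.exp(-x))

    def hyperbolic_tangent(x):
        # S-shaped curve with outputs in (-1, 1)
        return np.tanh(x)

    def heaviside(x):
        # Hard threshold: 0 below the threshold (here 0), 1 at or above it
        return np.where(x >= 0, 1.0, 0.0)

    z = np.array([-2.0, 0.0, 2.0])
    print(sigmoid(z), hyperbolic_tangent(z), heaviside(z))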


Bayesian logic, whose Cox-Jaynes theorem formalizes learning problems, also involves an S-shaped function which comes up repeatedly:

S(x) = e^x / (1 + e^x) = 1 / (1 + e^(-x))

Propagation of information

Following this calculation, the neuron propagates its new internal state along its axon. In a simple model, the neuron's function is simply a thresholding function: it is equal to 1 if the weighted sum exceeds a certain threshold, and 0 otherwise. In a richer model, the neuron operates with real numbers (usually in the range [0,1] or [-1,1]). The neural network is said to go from one state to another when all of its neurons recalculate their internal state in parallel, according to their inputs.
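A minimal sketch of this state-to-state transition, using the simple thresholding model described above (the weight matrix W and the helper names are illustrative assumptions):

    import numpy as np

    def threshold_neuron(inputs, weights, threshold=0.0):
        # Simple model: output 1 if the weighted sum exceeds the threshold, else 0
        return 1.0 if np.dot(weights, inputs) > threshold else 0.0

    def next_state(state, W):
        # All neurons recompute their internal state in parallel from the
        # current state: one transition of the network.
        return np.array([threshold_neuron(state, w) for w in W])

    state = np.array([1.0, 0.0, 1.0])
    W = np.random.randn(3, 3)          # fully connected, illustrative weights
    state = next_state(state, W)
    print(state)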

