Different types of neural networks
The functioning of a neural network is determined by the set of its synaptic weights. Patterns are presented to a subset of the network's neurons: the input layer. When a pattern is applied, the network seeks a stable state; once that state is reached, the activation values of the output neurons form the result. Neurons that belong to neither the input layer nor the output layer are called hidden neurons.
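The flow described above (a pattern enters the input layer, activations of the output neurons are read as the result) can be sketched for a small feed-forward network. A minimal sketch, assuming a 2-input, 2-hidden, 1-output layout with arbitrary placeholder weights and a weighted sum followed by a sigmoid at each neuron:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical synaptic weights: one row per neuron in the layer.
W_hidden = [[0.5, -0.3], [0.8, 0.2]]   # hidden layer, 2 neurons
W_output = [[1.0, -1.0]]               # output layer, 1 neuron

def forward(pattern):
    # Each neuron aggregates its inputs with a weighted sum,
    # then applies the sigmoid thresholding function.
    hidden = [sigmoid(sum(w * x for w, x in zip(row, pattern)))
              for row in W_hidden]
    output = [sigmoid(sum(w * h for w, h in zip(row, hidden)))
              for row in W_output]
    return output

print(forward([1.0, 0.0]))  # activation values of the output layer
```

In a feed-forward network the stable state is reached in a single pass; relaxation networks (discussed below) instead iterate until the state stops changing.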
The types of neural network differ by several parameters:
- the topology of connections between neurons;
- the aggregation function used (weighted sum, pseudo-Euclidean distance, ...);
- the thresholding function used (sigmoid, step, linear function, Gaussian function, ...);
- the learning algorithm (gradient backpropagation, cascade correlation, ...);
- other parameters specific to certain types of neural network, such as the relaxation method for networks (e.g. Hopfield networks) that do not use simple forward propagation (unlike, e.g., the multilayer perceptron).
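The relaxation idea mentioned in the last point can be illustrated with a minimal Hopfield-style sketch: the network's state is updated repeatedly until a full pass changes nothing, i.e. until a stable state is reached. The stored pattern and the Hebbian weight rule below are illustrative assumptions, not a full treatment of Hopfield networks:

```python
def train_hopfield(pattern):
    # Hebbian rule for a single stored pattern: w[i][j] = p[i] * p[j],
    # with no self-connections (w[i][i] = 0).
    n = len(pattern)
    return [[pattern[i] * pattern[j] if i != j else 0 for j in range(n)]
            for i in range(n)]

def relax(weights, state):
    # Sequentially update neurons until a full pass changes nothing:
    # that fixed point is the network's stable state.
    state = list(state)
    changed = True
    while changed:
        changed = False
        for i in range(len(state)):
            s = sum(weights[i][j] * state[j] for j in range(len(state)))
            new = 1 if s >= 0 else -1
            if new != state[i]:
                state[i] = new
                changed = True
    return state

stored = [1, -1, 1, -1]
W = train_hopfield(stored)
print(relax(W, [1, 1, 1, -1]))  # → [1, -1, 1, -1]: the noisy input relaxes to the stored pattern
```

Reading the output neurons only makes sense once this relaxation has converged, which is why such networks are parameterized differently from simple feed-forward ones.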
Many other parameters can be introduced as part of the learning procedure of these neural networks, for example:
- weight decay (gradual degradation of the weights), which helps avoid side effects and counteract over-learning (overfitting).
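Weight decay can be folded directly into the gradient update by shrinking each weight slightly toward zero at every step. A minimal sketch; the learning rate and decay coefficient are illustrative values, not recommendations:

```python
def update_weight(w, gradient, learning_rate=0.1, decay=0.01):
    # Standard gradient step plus a decay term that pulls the weight
    # toward zero, discouraging the large weights typical of over-learning.
    return w - learning_rate * (gradient + decay * w)

w = 2.0
for _ in range(5):
    w = update_weight(w, gradient=0.0)  # with a zero gradient, only decay acts
print(w)  # the weight shrinks slightly toward 0 at each step
```

With a nonzero gradient the decay term simply biases each update, so unused connections fade away while strongly driven ones persist.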