Artificial Neural Network

An artificial neural network is a computational model whose design is schematically inspired by the functioning of biological neurons (human or otherwise). Neural networks are generally optimized by statistical learning methods, so they belong on the one hand to the family of statistical applications, which they enrich with a set of paradigms for generating large, flexible, and partially structured functional spaces, and on the other hand to the family of artificial intelligence methods, which they enrich by allowing decisions based more on perception than on formal logical reasoning. In modeling biological circuits, they can be used to test functional hypotheses derived from neurophysiology, or the consequences of those hypotheses, in order to compare them with real networks.

History

Simplified view of an artificial neural network

Neural networks are built on a biological paradigm, that of the formal neuron (in the same way that genetic algorithms are built on natural selection). Such metaphors became commonplace with the ideas of cybernetics.

The neurologists Warren McCulloch and Walter Pitts carried out the early work on neural networks, following their seminal article What the frog's eye tells the frog's brain. They formulated a simplified model of the biological neuron, commonly known as the formal neuron. They also showed theoretically that networks of formal neurons can perform complex logical, arithmetic, and symbolic functions.

The formal neuron is designed as an automaton with a transfer function that transforms its inputs into an output according to precise rules. For example, a neuron sums its inputs, compares the resulting sum to a threshold value, and responds by emitting a signal if the sum is greater than or equal to this threshold (an ultra-simplified model of how a biological neuron works). These neurons are also assembled into networks whose connection topology can vary: feed-forward, recurrent, and so on. Finally, the efficiency with which signals are transmitted from one neuron to another can vary: one speaks of "synaptic weights", and these weights can be adjusted by learning rules (which mimics the synaptic plasticity of biological networks).
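
As an illustration of the description above, here is a minimal sketch of such a threshold neuron in Python; the function name, the weights, and the threshold value are chosen for illustration and do not come from the article.

    def formal_neuron(inputs, weights, threshold):
        """Ultra-simplified formal neuron: weighted sum of the inputs,
        compared to a threshold (illustrative sketch, not a reference model)."""
        total = sum(w * x for w, x in zip(weights, inputs))
        # Emit a signal (1) if the weighted sum reaches the threshold, otherwise stay silent (0).
        return 1 if total >= threshold else 0

    # Example: with these weights and this threshold the neuron fires only when both inputs are active.
    print(formal_neuron([1, 1], [1.0, 1.0], threshold=2.0))  # 1
    print(formal_neuron([1, 0], [1.0, 1.0], threshold=2.0))  # 0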

The purpose of neural networks, like that of the living model, is to solve problems. In contrast to traditional computational methods of problem-solving, one does not build a program step by step from an understanding of the problem. The most important parameters of this model are the synaptic coefficients; it is they that build the resolution model from the information given to the network. One must therefore find a mechanism for computing them from the quantities that can be acquired about the problem. This is the fundamental principle of learning: in a model of formal neural networks, learning consists first of computing the values of the synaptic coefficients from the available examples.

The work of McCulloch and Pitts gave no indication of a method for adapting the synaptic coefficients. This question, at the heart of reflections on learning, received an initial answer through the work of the Canadian physiologist Donald Hebb on learning, described in 1949 in his book The Organization of Behavior. Hebb proposed a simple rule that allows the value of the synaptic coefficients to be modified according to the activity of the units they connect. This rule, now known as "Hebb's rule", is present in almost all current models, even the most sophisticated.
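
Hebb's rule is often summarized as "units that fire together wire together". The following is a minimal sketch of that idea; the learning rate and the activity values are illustrative assumptions and are not given in the text.

    def hebb_update(weight, pre_activity, post_activity, learning_rate=0.1):
        """Hebbian update: strengthen the synaptic coefficient in proportion
        to the joint activity of the units it connects."""
        return weight + learning_rate * pre_activity * post_activity

    # Example: two co-active units (activity 1 and 1) reinforce their connection.
    w = 0.5
    w = hebb_update(w, pre_activity=1, post_activity=1)
    print(w)  # 0.6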
 

Neural network with feedback

Following this article, the idea took root over time in people's minds, and it germinated in the mind of Frank Rosenblatt in 1957 with the perceptron model. This was the first artificial system capable of learning from experience, even when its instructor makes some mistakes (in which it differs markedly from a formal-logic learning system). Other contributions, such as Donald Hebb's work of 1949, had also marked the field.
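
To make "learning from experience" concrete, here is a minimal sketch of the classical perceptron learning rule in Python; the article does not give the rule itself, and the data, learning rate, and number of passes are illustrative assumptions.

    def train_perceptron(samples, labels, epochs=10, learning_rate=0.1):
        """Classical perceptron rule: nudge the weights and bias whenever
        the current prediction disagrees with the desired label (0 or 1)."""
        weights = [0.0] * len(samples[0])
        bias = 0.0
        for _ in range(epochs):
            for x, target in zip(samples, labels):
                prediction = 1 if sum(w * xi for w, xi in zip(weights, x)) + bias >= 0 else 0
                error = target - prediction
                weights = [w + learning_rate * error * xi for w, xi in zip(weights, x)]
                bias += learning_rate * error
        return weights, bias

    # Example: learning the linearly separable logical OR function.
    X = [[0, 0], [0, 1], [1, 0], [1, 1]]
    y = [0, 1, 1, 1]
    print(train_perceptron(X, y))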

In 1969, a severe blow was dealt to the scientific community gravitating around neural networks: Marvin Lee Minsky and Seymour Papert published a book highlighting some theoretical limitations of the perceptron, and more generally of linear classifiers, notably the inability to deal with nonlinear or connectedness problems. They implicitly extended these limitations to all models of artificial neural networks. Appearing to have reached an impasse, research on neural networks lost much of its public funding, and industry turned away from it as well. Funds intended for artificial intelligence were redirected mainly towards formal logic, and research stagnated for about ten years. However, the solid adaptive qualities of certain neural networks (e.g. Adaline), enabling them to model in an evolving way phenomena that are themselves evolving, led them to be integrated, in more or less explicit forms, into the corpus of adaptive systems used in telecommunications and industrial process control.

In 1982, John Joseph Hopfield, a recognized physicist, gave new impetus to the neural network by publishing an article introducing a new, fully recurrent network model. This article was successful for several reasons, the main one being that it imbued the theory of neural networks with the rigor characteristic of physicists. The neural network again became an acceptable subject of study, although the Hopfield model suffered from the major limitations of the models of the 1960s, notably the inability to handle nonlinear problems.

At the same time, algorithmic approaches to artificial intelligence were the subject of disillusionment, as their applications did not live up to expectations. This disappointment motivated a reorientation of artificial intelligence research towards neural networks (although these networks concern artificial perception rather than artificial intelligence strictly speaking). Research was relaunched and industry regained some interest in the neural approach (in particular for applications such as the guidance of cruise missiles). In 1984, backpropagation of the error gradient became the most debated topic in the field.

A revolution then occurred in the field of artificial neural networks: a new generation of networks capable of successfully dealing with nonlinear phenomena. The multilayer perceptron does not have the defects highlighted by Marvin Minsky. First proposed by Paul Werbos, the multilayer perceptron appeared in 1986, introduced by Rumelhart and, simultaneously, under a similar name, by Yann Le Cun. These systems rely on backpropagation of the error gradient in systems with several layers, each of the Adaline type of Bernard Widrow, close to Rumelhart's perceptron.
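
The article does not reproduce the backpropagation algorithm; as a rough illustration of the idea, the sketch below trains a tiny two-layer sigmoid network on the XOR problem, the archetypal nonlinear case that a single-layer perceptron cannot solve. The network size, learning rate, number of epochs, and random seed are assumptions made for the example.

    import math
    import random

    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))

    def train_xor(epochs=20000, lr=0.5, hidden=2, seed=1):
        """Tiny multilayer perceptron trained by backpropagation of the
        error gradient on XOR (illustrative sketch only)."""
        random.seed(seed)
        w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(hidden)]
        b1 = [0.0] * hidden
        w2 = [random.uniform(-1, 1) for _ in range(hidden)]
        b2 = 0.0
        data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
        for _ in range(epochs):
            for x, target in data:
                # Forward pass through the hidden layer and the output unit.
                h = [sigmoid(sum(w1[j][i] * x[i] for i in range(2)) + b1[j]) for j in range(hidden)]
                y = sigmoid(sum(w2[j] * h[j] for j in range(hidden)) + b2)
                # Backward pass: propagate the output error towards the hidden layer.
                delta_out = (y - target) * y * (1 - y)
                delta_h = [delta_out * w2[j] * h[j] * (1 - h[j]) for j in range(hidden)]
                # Gradient-descent update of the synaptic weights.
                for j in range(hidden):
                    w2[j] -= lr * delta_out * h[j]
                    b1[j] -= lr * delta_h[j]
                    for i in range(2):
                        w1[j][i] -= lr * delta_h[j] * x[i]
                b2 -= lr * delta_out
        return w1, b1, w2, b2

    # With enough epochs and a favourable initialisation, the trained network
    # separates the XOR classes, which no single linear unit can do.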

Neural networks subsequently grew considerably, and were among the first systems to benefit from the insight of the theory of statistical regularization introduced by Vladimir Vapnik in the Soviet Union and popularized in the West since the fall of the Wall. This theory, one of the most important in the field of statistics, makes it possible to anticipate, study, and regulate the phenomena related to overfitting. One can thus regulate a learning system so that it arbitrates as well as possible between a poor model (e.g. the mean) and a model so rich that it would be optimized, illusorily, on too small a number of examples and would be ineffective on examples not yet learned, even those close to the examples already learned. Overfitting is a difficulty faced by all systems that learn from examples, whether they use direct optimization methods (e.g. linear regression), iterative methods (e.g. gradient descent), or semi-direct iterative methods (conjugate gradient, expectation-maximization, ...), and whether they are applied to classical statistical models, hidden Markov models, or formal neural networks.
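
The paragraph above stays at the level of principle; as one common concrete instance of regularization (not the specific apparatus of Vapnik's theory), the sketch below adds an L2 penalty, often called weight decay, to a gradient-descent step for a linear model, so that the fit arbitrates between faithfulness to the examples and simplicity of the model. The function name and the coefficient values are illustrative assumptions.

    def regularized_gradient_step(weights, samples, targets, lr=0.01, l2=0.1):
        """One gradient-descent step for a linear model minimizing
        sum((w.x - t)^2) + l2 * sum(w^2).  The L2 penalty pulls the weights
        towards zero, discouraging a model too rich for a small sample."""
        grad = [0.0] * len(weights)
        for x, t in zip(samples, targets):
            error = sum(w * xi for w, xi in zip(weights, x)) - t
            for i, xi in enumerate(x):
                grad[i] += 2 * error * xi
        # Gradient contribution of the regularization term.
        for i, w in enumerate(weights):
            grad[i] += 2 * l2 * w
        return [w - lr * g for w, g in zip(weights, grad)]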

