What Is a Backpropagation Neural Network?

A back-propagation network [1] (BP network for short) is a kind of neural network. It underlies a method of machine learning based on neural networks, in which a machine acquires new knowledge and skills and identifies existing knowledge.

The BP network is a kind of feed-forward neural network, proposed in 1985. Two kinds of signals flow through the network. The first is the working signal: once an input is applied, it propagates forward until it produces the actual output at the output layer, which is a function of the inputs and the weights. The second is the error signal: the difference between the actual output of the network and the desired output. It starts at the output layer and propagates backward, layer by layer.

Let the output of the $j$-th output unit at iteration $n$ be $y_j(n)$ and its desired output be $d_j(n)$. Then the error signal of this unit is $e_j(n) = d_j(n) - y_j(n)$, and the squared error of unit $j$ is defined as $\tfrac{1}{2} e_j^2(n)$. The instantaneous value of the total squared error at the output layer is

$$E(n) = \frac{1}{2} \sum_{j \in C} e_j^2(n),$$

where $C$ is the set of output units. $E(n)$ is the objective function of learning, and the purpose of learning is to make $E(n)$ as small as possible. The BP network is the most widely used network at present.
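The error quantities defined above can be written out directly. The following is a minimal sketch in Python with NumPy; the specific values of the desired outputs `d` and the actual outputs `y` are illustrative placeholders, not values from the article.

```python
import numpy as np

d = np.array([1.0, 0.0, 0.0])   # desired (target) outputs d_j(n)
y = np.array([0.8, 0.1, 0.2])   # actual network outputs y_j(n)

e = d - y                        # error signal e_j(n) = d_j(n) - y_j(n)
unit_sq_error = 0.5 * e**2       # squared error of each output unit j
E = 0.5 * np.sum(e**2)           # instantaneous total squared error E(n)

print(e, unit_sq_error, E)
```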
The BP algorithm is a supervised learning algorithm. Its purpose is to adjust the network's weights using the error between the actual output of the network and the target vector, so that the actual output A and the expected output T come as close as possible; that is, the sum of squared errors at the network's output layer is minimized. It approaches this goal gradually by repeatedly computing changes to the network's weights and biases along the direction in which the error function decreases most steeply. Each change in a weight or bias is directly proportional to its effect on the network error and is transmitted to each layer in a back-propagation manner.
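For a single output layer, this gradient-descent update can be sketched as follows. The sigmoid activation, the learning rate `eta`, and the array shapes are illustrative assumptions, not specified by the article.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

eta = 0.5                          # learning rate (assumed value)
W = np.random.randn(3, 2) * 0.1    # weights of one output layer: 3 units, 2 inputs
b = np.zeros(3)                    # biases
x = np.array([0.2, 0.7])           # input vector
d = np.array([1.0, 0.0, 0.0])      # target vector T

y = sigmoid(W @ x + b)             # actual output A
e = d - y                          # output error
delta = e * y * (1 - y)            # local gradient for sigmoid units

# Each change in weight and bias is proportional to the (negative)
# gradient of the squared error with respect to that parameter.
W += eta * np.outer(delta, x)
b += eta * delta
```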
The BP algorithm consists of two phases: forward propagation of information and back propagation of errors. During forward propagation, the input is processed layer by layer, from the input layer through the hidden layers to the output layer; the state of the neurons in each layer affects only the state of the neurons in the next layer. If the desired output is not obtained at the output layer, the error of the output layer is computed and the algorithm switches to back propagation: the error signal is transmitted back through the network along the original connection paths, and the weights of the neurons in each layer are modified until the desired goal is reached.
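The two phases can be sketched for a network with one hidden layer. This is a minimal illustration, assuming sigmoid units and NumPy; the layer sizes, learning rate, and variable names are assumptions for clarity, not part of the article.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
W1, b1 = rng.normal(scale=0.5, size=(4, 2)), np.zeros(4)   # input -> hidden
W2, b2 = rng.normal(scale=0.5, size=(1, 4)), np.zeros(1)   # hidden -> output
eta = 0.5

x = np.array([0.0, 1.0])     # input vector
d = np.array([1.0])          # desired output

# Forward propagation: each layer's output only affects the next layer.
h = sigmoid(W1 @ x + b1)
y = sigmoid(W2 @ h + b2)

# Back propagation: the error signal starts at the output layer and is
# sent back along the original connections to the hidden layer.
delta2 = (d - y) * y * (1 - y)
delta1 = (W2.T @ delta2) * h * (1 - h)

# Weight corrections proportional to the propagated error.
W2 += eta * np.outer(delta2, h); b2 += eta * delta2
W1 += eta * np.outer(delta1, x); b1 += eta * delta1
```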
BP networks are mainly used for:
1) Function approximation: train a network to approximate a function using input vectors and the corresponding output vectors (a training sketch follows this list);
2) Pattern recognition: associate a specific output vector with an input vector;
3) Classification: classify input vectors in an appropriate, user-defined manner;
4) Data compression: reduce the dimension of the output vector for transmission or storage.
In practical applications of artificial neural networks, 80%-90% of the models use the BP network or one of its variants. The BP network is also a forward network, and its core embodies the most essential part of artificial neural networks.
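As one concrete illustration of application 1), the sketch below trains a small BP network to approximate sin(x) on [0, pi]. The layer sizes, learning rate, number of epochs, and the choice of a linear output unit are illustrative assumptions only.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
X = np.linspace(0, np.pi, 50).reshape(-1, 1)   # input vectors
D = np.sin(X)                                  # corresponding target outputs

W1, b1 = rng.normal(scale=0.5, size=(8, 1)), np.zeros((8, 1))
W2, b2 = rng.normal(scale=0.5, size=(1, 8)), np.zeros((1, 1))
eta = 0.1

for epoch in range(2000):
    for x, d in zip(X, D):
        x = x.reshape(-1, 1); d = d.reshape(-1, 1)
        h = sigmoid(W1 @ x + b1)           # forward pass through the hidden layer
        y = W2 @ h + b2                    # linear output unit
        delta2 = d - y                     # output-layer error
        delta1 = (W2.T @ delta2) * h * (1 - h)   # error propagated to hidden layer
        W2 += eta * delta2 @ h.T; b2 += eta * delta2
        W1 += eta * delta1 @ x.T; b1 += eta * delta1

# Final total squared error over the training set (should be small after training).
Y = (W2 @ sigmoid(W1 @ X.T + b1) + b2).T
print(0.5 * np.sum((D - Y) ** 2))
```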
