What Is Neural Backpropagation?
The BP algorithm (that is, the back-propagation algorithm) is a learning algorithm suited to multi-layer neural networks and is based on the gradient descent method. The input-output relationship of a BP network is essentially a mapping: a BP neural network with n inputs and m outputs realizes a continuous mapping from n-dimensional Euclidean space to a finite region of m-dimensional Euclidean space. The mapping is highly non-linear. Its information-processing ability comes from composing simple non-linear functions many times, which gives it a strong ability to reproduce functions. This is the basis for the application of the BP algorithm.
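To make this mapping view concrete, the sketch below composes two simple non-linear functions to send a point of n-dimensional space to a point of m-dimensional space. It is only an illustration; the layer sizes (n = 3 inputs, 5 hidden units, m = 2 outputs), the random weights, and the NumPy code are assumptions, not material from the cited sources.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical sizes: n = 3 inputs, 5 hidden units, m = 2 outputs.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(5, 3)), np.zeros(5)   # input -> hidden weights
W2, b2 = rng.normal(size=(2, 5)), np.zeros(2)   # hidden -> output weights

def forward(x):
    """Map an n-dimensional vector to an m-dimensional vector by
    composing two simple non-linear functions (the BP network's mapping)."""
    h = sigmoid(W1 @ x + b1)        # first non-linear stage
    y = sigmoid(W2 @ h + b2)        # second non-linear stage
    return y

print(forward(np.array([0.2, -0.7, 1.3])))  # a point in 2-dimensional space
```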
- The back-propagation algorithm is designed to reduce the number of common sub-expressions without regard to the storage overhead (a toy illustration appears after this list). [1]
- The BP algorithm (that is, the error back-propagation algorithm) is a learning algorithm suitable for multilayer neural networks and is built on the gradient descent method.
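As a toy illustration of the sub-expression point in the first item above (the one-weight-per-layer function and all names here are hypothetical, not taken from reference [1]), the gradient computation below reuses the cached forward values a and y in both derivatives instead of recomputing them:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward_and_grad(x, w1, w2):
    """Compute y = sigmoid(w2 * sigmoid(w1 * x)) and dy/dw1, dy/dw2.
    The forward activations a and y are cached and reused by the
    chain rule rather than being recomputed for each derivative."""
    a = sigmoid(w1 * x)      # common sub-expression, computed once
    y = sigmoid(w2 * a)      # common sub-expression, computed once
    dy_dw2 = y * (1 - y) * a                     # reuses y and a
    dy_dw1 = y * (1 - y) * w2 * a * (1 - a) * x  # reuses y and a again
    return y, dy_dw1, dy_dw2

print(forward_and_grad(0.5, w1=1.2, w2=-0.8))
```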
Back-propagation algorithm mapping capability
- The learning process of a BP network is an error-correction learning algorithm consisting of forward propagation and back propagation. During forward propagation, the input signal passes from the input layer through the activation functions and is transmitted layer by layer to the hidden layer and then to the output layer; the state of the neurons in each layer affects only the state of the neurons in the next layer. If the desired output is not obtained at the output layer, the process switches to back propagation: the error signal is returned along the original connection path, and the connection weights of the neurons in each layer are modified to reduce the error (a minimal training-step sketch follows this list).
- Neuron model and its learning process
- Wieland and Leighton (1987) gave an example of a three-layer network that divides space into concave subregions. Huang and Lippmann (1987) demonstrated that a three-layer network can handle several very complex pattern-recognition problems. These studies promoted the widespread application of three-layer networks. Funahashi (1989) and Hecht-Nielsen (1989) proved, respectively, that as the number of hidden units increases, the mappings realized by a three-layer network can uniformly approximate continuous functions on compact sets, or approximate square-integrable functions in the L² norm, revealing the rich mapping capability of the three-layer network.
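As referenced in the first item above, here is a minimal sketch of the forward-propagation / back-propagation / weight-correction cycle. The task (XOR), the 2-3-1 layer sizes, the mean-squared error, and the learning rate are assumptions chosen for illustration, not the networks studied in the cited papers.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical toy task: learn XOR with a 2-3-1 BP network.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(2, 3)), np.zeros((1, 3))
W2, b2 = rng.normal(size=(3, 1)), np.zeros((1, 1))
lr = 0.5

for _ in range(20000):
    # Forward propagation: the signal passes layer by layer to the output.
    H = sigmoid(X @ W1 + b1)              # hidden-layer states
    Y = sigmoid(H @ W2 + b2)              # output-layer states

    # Back propagation: the output error is returned along the original
    # connections and each layer's weights are corrected by gradient descent.
    delta_out = (Y - T) * Y * (1 - Y)             # output-layer error signal
    delta_hid = (delta_out @ W2.T) * H * (1 - H)  # hidden-layer error signal

    W2 -= lr * H.T @ delta_out
    b2 -= lr * delta_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ delta_hid
    b1 -= lr * delta_hid.sum(axis=0, keepdims=True)

print(np.round(Y, 2))  # outputs should be close to the XOR targets 0, 1, 1, 0
```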
Back propagation algorithm memory mechanism
- Mitchison and Durbin (1989) gave estimates of the upper and lower bounds on the learning capacity of three-layer networks under certain conditions. The input and output units of a three-layer network are fixed by the application problem; only the number of hidden units is variable. Ying Xingren (1990) analyzed the memory mechanism of the three-layer neural network in detail and pointed out that, with enough hidden units, a three-layer neural network can memorize any given sample set. When asymptotic functions (a very general class that includes step functions, sigmoid functions, etc.) are used as the hidden-unit activation functions, k − 1 hidden units can exactly memorize k real-valued samples. When a step activation function is used, the probability that k + 1 randomly given real-valued samples can be memorized by k hidden units is zero; the same holds for associative memory in networks with sigmoid activation functions.
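The sketch below is only a numerical illustration of the k − 1 hidden-unit claim, not the constructive argument in the cited work; the sample points, the 1-3-1 network with a linear output unit, and the learning rate are all assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical data: k = 4 real-valued samples, k - 1 = 3 sigmoid hidden units.
x = np.array([[-1.0], [0.0], [1.0], [2.0]])
t = np.array([[0.3], [0.9], [0.1], [0.7]])

rng = np.random.default_rng(2)
W1, b1 = rng.normal(size=(1, 3)), np.zeros((1, 3))
W2, b2 = rng.normal(size=(3, 1)), np.zeros((1, 1))
lr = 0.5

for _ in range(50000):
    H = sigmoid(x @ W1 + b1)                 # hidden-layer activations
    y = H @ W2 + b2                          # linear output unit
    err = y - t
    delta_hid = (err @ W2.T) * H * (1 - H)   # hidden-layer error signal
    # Gradient-descent updates on the mean-squared error.
    W2 -= lr * H.T @ err / len(x)
    b2 -= lr * err.mean(axis=0, keepdims=True)
    W1 -= lr * x.T @ delta_hid / len(x)
    b1 -= lr * delta_hid.mean(axis=0, keepdims=True)

print(float(np.max(np.abs(y - t))))  # the memorization error should approach zero
```

Because the linear output layer alone already has k free parameters (three weights plus a bias), the four points can in principle be fitted exactly once the hidden activations are linearly independent, which is what the training loop converges toward.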
Backpropagation algorithm fault tolerance
- In addition to its distributed-memory characteristics, the BP network also has a certain degree of fault tolerance and anti-interference ability. Sun Debao and Gao Chao studied the fault tolerance and anti-interference ability of the three-layer BP network, relating the result to the product of the layer-to-layer connection weight matrices (a rough sensitivity sketch follows this list). [3]
- In situations with a wide frequency band, a low signal-to-noise ratio, and relatively few signal patterns, the BP network can achieve good signal recognition and signal-noise separation.
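As referenced above, the following rough sketch illustrates the sensitivity idea numerically. The 4-8-2 network, the noise level, and the use of the standard Lipschitz estimate (the sigmoid slope is at most 1/4, so the output shift is limited by the product of the spectral norms of the weight matrices) are assumptions for illustration, not the analysis of reference [3].

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(3)
W1, b1 = rng.normal(scale=0.5, size=(8, 4)), np.zeros(8)   # hypothetical 4-8-2 net
W2, b2 = rng.normal(scale=0.5, size=(2, 8)), np.zeros(2)

def forward(x):
    return sigmoid(W2 @ sigmoid(W1 @ x + b1) + b2)

x = rng.normal(size=4)
noise = 0.05 * rng.normal(size=4)           # small input disturbance

output_shift = np.abs(forward(x + noise) - forward(x)).max()
# Crude sensitivity bound: each sigmoid layer is 1/4-Lipschitz, so the shift
# is limited by the product of the layer weight-matrix (spectral) norms.
bound = 0.25 * np.linalg.norm(W2, 2) * 0.25 * np.linalg.norm(W1, 2) * np.linalg.norm(noise)

print(output_shift, bound)   # the observed shift should stay below the bound
```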