What Is a Feedforward Neural Network?
A feedforward neural network is the simplest kind of neural network: its neurons are arranged in layers, and each neuron is connected only to neurons in the preceding layer. Every neuron receives the outputs of the previous layer and passes its own output on to the next layer, with no feedback between layers. Feedforward networks are among the most widely used and most actively developed artificial neural networks. Research on them began in the 1960s, and both the theory and the practical applications have since reached a high level of maturity. [1]
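As a minimal sketch of this layer-by-layer flow (the layer sizes, the tanh activation, and the random weights are illustrative assumptions, not part of the definition), the forward pass of a small feedforward network can be written as follows: information moves strictly from one layer to the next, with no feedback connections.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative layer sizes: 3 inputs -> 4 hidden units -> 2 outputs.
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)

def forward(x):
    """Feedforward pass: each layer consumes only the previous layer's output."""
    h = np.tanh(W1 @ x + b1)  # hidden layer receives the input layer's output
    y = W2 @ h + b2           # output layer receives the hidden layer's output
    return y                  # no feedback connections anywhere

print(forward(np.array([1.0, -0.5, 0.2])))
```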
Feedforward neural network (FNN)
Three kinds of methods are commonly used to design the structure of a feedforward neural network: direct training, pruning, and growth.

Designing a working network by direct training provides useful guidance for choosing the initial network of the pruning method. Because pruning has to start from a sufficiently large initial network, the pruning process is inevitably long and complicated; worse, BP training is only a steepest-descent optimization, so for a large initial network there is no guarantee of convergence to the global minimum, or even to a sufficiently good local minimum. The pruning method is therefore not always effective. The growth method, which builds the network up incrementally, is closer to the way people come to understand things and accumulate knowledge, and it has a self-organizing character, so it may be the more promising approach with the greater potential for development. [3]
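As a purely illustrative example of what a single pruning step can look like, the sketch below zeroes out the smallest-magnitude weights of one layer. Magnitude-based pruning is only an assumption made here for concreteness; the text above does not commit to any particular pruning criterion.

```python
import numpy as np

def magnitude_prune(W, fraction=0.5):
    """Zero out the smallest-magnitude entries of a weight matrix.

    This is just one possible pruning criterion, used here for illustration.
    """
    threshold = np.quantile(np.abs(W), fraction)
    mask = np.abs(W) >= threshold      # keep only the larger weights
    return W * mask, mask

W = np.random.default_rng(1).normal(size=(4, 3))
W_pruned, mask = magnitude_prune(W, fraction=0.5)
print(mask.sum(), "of", mask.size, "weights kept")
```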
Feedforward neural networks have a simple structure and are very widely used. They can approximate any continuous function and any square-integrable function to arbitrary precision, and they can exactly reproduce any finite set of training samples. From a system point of view, a feedforward network is a static nonlinear mapping: by composing simple nonlinear processing units, it obtains complex nonlinear processing capability. From a computational point of view, however, it lacks rich dynamics. Most feedforward networks are learning networks, and their classification and pattern-recognition capabilities are generally stronger than those of feedback networks. [1]
Perceptron network
The perceptron is the simplest feedforward network. It is used mainly for pattern classification, and can also be applied to learning control and multi-modal control based on pattern classification. Perceptron networks can be divided into single-layer perceptron networks and multi-layer perceptron networks.
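A minimal single-layer perceptron sketch, trained with the classic perceptron learning rule; the AND-function data, learning rate, and epoch count are illustrative choices only.

```python
import numpy as np

# Illustrative task: the AND function, which a single-layer perceptron can learn.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([0, 0, 0, 1], dtype=float)

w = np.zeros(2)
b = 0.0
lr = 0.1

for epoch in range(20):
    for x, target in zip(X, t):
        y = 1.0 if w @ x + b > 0 else 0.0     # hard-threshold (step) activation
        # Perceptron learning rule: update only when the prediction is wrong.
        w += lr * (target - y) * x
        b += lr * (target - y)

print([1.0 if w @ x + b > 0 else 0.0 for x in X])  # -> [0.0, 0.0, 0.0, 1.0]
```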
BP network
A BP network is a feedforward network that adjusts its connection weights with the back-propagation (BP) learning algorithm. It differs from the perceptron in that its neurons use the sigmoid function as the transfer function, so the output is a continuous quantity between 0 and 1, which allows the network to realize an arbitrary nonlinear mapping from input to output.
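A minimal sketch of a BP network with one sigmoid hidden layer trained by backpropagation on squared error; the XOR task, layer sizes, learning rate, and number of epochs are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative task: XOR, which requires a hidden layer.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
lr = 1.0

for epoch in range(10000):
    # Forward pass: sigmoid units give continuous outputs in (0, 1).
    h = sigmoid(X @ W1 + b1)
    y = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the output error back through the layers.
    delta_out = (y - t) * y * (1 - y)             # error signal at the output layer
    delta_hid = (delta_out @ W2.T) * h * (1 - h)  # error signal at the hidden layer

    W2 -= lr * h.T @ delta_out
    b2 -= lr * delta_out.sum(axis=0)
    W1 -= lr * X.T @ delta_hid
    b1 -= lr * delta_hid.sum(axis=0)

print(np.round(y.ravel(), 2))  # typically approaches [0, 1, 1, 0]
```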
RBF network
An RBF network is a feedforward network whose hidden-layer neurons are RBF neurons, i.e. neurons whose transfer function is a radial basis function (RBF). A typical RBF network consists of three parts: an input layer, one or more hidden layers composed of RBF neurons, and an output layer composed of linear neurons.
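A minimal forward-pass sketch of such a network, using Gaussian basis functions; the centres, widths, and output weights below are placeholder values chosen only for illustration.

```python
import numpy as np

def rbf_forward(x, centers, widths, W_out, b_out):
    """Input layer -> RBF hidden layer -> linear output layer."""
    # Each hidden unit responds to the distance between x and its own centre.
    dist2 = np.sum((centers - x) ** 2, axis=1)
    phi = np.exp(-dist2 / (2.0 * widths ** 2))  # Gaussian radial basis function
    return W_out @ phi + b_out                  # linear output neurons

# Placeholder parameters: 2 inputs, 3 RBF hidden units, 1 linear output.
centers = np.array([[0.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
widths = np.array([0.5, 0.5, 0.5])
W_out = np.array([[1.0, -1.0, 0.5]])
b_out = np.array([0.0])

print(rbf_forward(np.array([0.2, 0.8]), centers, widths, W_out, b_out))
```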