What Is a Neural Network Algorithm?
Logical thinking is the process of reasoning according to logical rules: information is first converted into concepts and represented with symbols, and reasoning then proceeds serially through symbolic operations. This process can be written out as serial instructions and executed by a computer. Intuitive thinking, by contrast, synthesizes information that is stored in a distributed fashion, and its result is a sudden idea or a solution to a problem. This way of thinking rests on two fundamental points: (1) information is stored on the network as a distribution of excitation patterns across neurons; (2) information processing is carried out through a dynamic process of simultaneous interactions between neurons.
Neural network algorithm
- Thinking science generally holds that the thinking of the human brain divides into three basic modes: abstract (logical) thinking, image (intuitive) thinking, and inspirational (epiphany) thinking.
- Of these, logical thinking, as described above, is what the serial, symbol-processing computer models; intuitive thinking is the mode that neural network algorithms set out to capture.
- Artificial Neural Network (ANN) systems have been studied since the 1940s. An ANN is composed of many neurons connected by adjustable weights, and it features large-scale parallel processing, distributed information storage, and good self-organizing and self-learning capabilities. The BP (Back Propagation) algorithm, also called the error back-propagation algorithm, is a supervised learning algorithm for artificial neural networks. In theory, a BP neural network can approximate arbitrary functions: its basic structure is built from nonlinear units, giving it strong nonlinear mapping capability. In addition, parameters such as the number of intermediate layers, the number of processing units in each layer, and the learning coefficient of the network can be set according to the specific problem, which gives the approach great flexibility. It is widely applied in fields such as optimization, signal processing, pattern recognition, intelligent control, and fault diagnosis. A minimal sketch of such a network follows.
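To make the BP procedure concrete, here is a minimal sketch of a one-hidden-layer network trained by error back propagation on the XOR problem. All specifics (the layer sizes, learning rate, iteration count, and sigmoid activation) are illustrative choices, not values prescribed by this article.

```python
import numpy as np

# Toy data: XOR, a classic nonlinear mapping a single linear unit cannot learn.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.5, size=(2, 4))   # input -> hidden weights
b1 = np.zeros(4)
W2 = rng.normal(scale=0.5, size=(4, 1))   # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(10000):
    # Forward pass through the nonlinear units.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the output error toward the input layer.
    d_out = (out - y) * out * (1 - out)   # squared-error gradient at the output
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent weight updates.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(np.round(out, 2))   # typically approaches [[0], [1], [1], [0]]
```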
How neural network algorithms work
- The study of artificial neurons originated in the theory of brain neurons. In the late 19th century, Waldeyer and others in biology and physiology established the neuron doctrine: a complex nervous system is composed of a vast number of neurons. The cerebral cortex contains more than 10 billion neurons, on the order of tens of thousands per cubic millimeter, interconnected to form a neural network. Through the sensory organs and nerves they receive information from inside and outside the body and transmit it to the central nervous system, which analyzes and synthesizes it and then sends control information out along the motor nerves, linking the body with its internal and external environment and coordinating functional activities throughout the body.
- Neurons, like other types of cells, have a cell membrane, cytoplasm, and a nucleus. Nerve cells, however, have a very distinctive shape, with many processes, and are therefore divided into three parts: the cell body, the axon, and the dendrites. The nucleus lies in the cell body, and the processes serve to transmit information. Dendrites are the processes that bring input signals in, while the axon serves as the output terminal; each neuron has only one axon.
- A dendrite is an extension of the cell body that gradually becomes thinner as it branches out. Along its entire length it can connect with the axon terminals of other neurons to form so-called "synapses." At a synapse the two neurons are not physically joined; it is merely the junction at which information transfer takes place, and the gap between the contact surfaces is roughly (15–50) × 10⁻⁹ m. Synapses come in two types, excitatory and inhibitory, corresponding to the polarity of the coupling between neurons. The number of synapses per neuron varies and can reach into the thousands or more. The strength and polarity of the connections differ between neuron pairs and can all be adjusted, and it is on this feature that the human brain's ability to store information rests. An artificial neural network built from a large number of interconnected neurons can exhibit some characteristics of the human brain.
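The biological picture above maps directly onto the standard artificial neuron model: dendrites become weighted inputs, excitatory and inhibitory synapses become positive and negative weights, and the axon becomes a single output obtained by thresholding the weighted sum. A minimal sketch, with made-up weights and threshold:

```python
import numpy as np

def neuron(inputs, weights, threshold):
    """Weighted sum of inputs compared against a threshold.

    Positive weights play the role of excitatory synapses,
    negative weights the role of inhibitory ones.
    """
    return 1 if np.dot(inputs, weights) >= threshold else 0

# Illustrative values: two excitatory inputs, one inhibitory input.
print(neuron(np.array([1, 1, 0]), np.array([0.6, 0.6, -1.0]), 1.0))  # fires: 1
print(neuron(np.array([1, 1, 1]), np.array([0.6, 0.6, -1.0]), 1.0))  # inhibited: 0
```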
- An artificial neural network is an adaptive nonlinear dynamic system composed of a large number of simple basic elements (neurons) interconnected with one another. The structure and function of each neuron are relatively simple, but the collective behavior produced by many neurons in combination is very complicated.
- An artificial neural network reflects some basic characteristics of human brain function, yet it is not a faithful description of a biological system; it is an imitation, simplification, and abstraction of one.
- Compared with digital computers, artificial neural networks are closer to the human brain in their organizing principles and functional characteristics. Instead of executing operations step by step according to a given program, they can adapt to their environment, summarize regularities, and carry out tasks such as computation, recognition, and process control.
- An artificial neural network must first learn, according to some learning criterion, before it can work. Take the recognition of the two handwritten letters "A" and "B" as an example: the network is required to output "1" when "A" is presented and "0" when "B" is presented.
- The learning criterion should therefore be: if the network makes a wrong decision, learning should reduce the likelihood that it makes the same mistake again. First, each connection weight of the network is given a random value in the interval (0, 1), and the image pattern corresponding to "A" is presented to the network. The network computes a weighted sum of the inputs, compares it with a threshold, and applies a nonlinear operation to obtain its output. At this point the network outputs "1" or "0" with probability 50% each, that is, completely at random. If the output is "1" (the correct result), the relevant connection weights are increased so that the network will still judge correctly the next time it encounters the "A" pattern.
- If the output is "0" (a wrong result), the connection weights are adjusted in the direction that reduces the weighted sum of the inputs, so that the network will be less likely to make the same mistake the next time it encounters the "A" pattern. After several handwritten samples of "A" and "B" have been presented in turn and the network has gone through a number of rounds of this learning procedure, its judgment accuracy improves greatly. This shows that the network has successfully learned the two patterns and has memorized them in its connection weights; when either pattern appears again, it can identify it rapidly and accurately. In general, the more neurons a network contains, the more patterns it can memorize and recognize.
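Here is a minimal sketch of this learning procedure, using the classical perceptron rule as a stand-in for the weight adjustments described above. The 3×3 "A" and "B" patterns, the learning rate, and the threshold are invented for illustration:

```python
import numpy as np

# Made-up 3x3 binary "images" standing in for handwritten "A" and "B".
A = np.array([0, 1, 0, 1, 0, 1, 1, 1, 1], dtype=float)   # target output: 1
B = np.array([1, 1, 0, 1, 1, 0, 1, 1, 1], dtype=float)   # target output: 0

rng = np.random.default_rng(0)
w = rng.random(9)        # random initial weights in (0, 1), as in the text
theta = 0.5              # threshold

def output(x):
    # Weighted sum, threshold comparison, nonlinear (step) operation.
    return 1 if np.dot(w, x) >= theta else 0

lr = 0.1
for _ in range(20):                    # present the patterns repeatedly
    for x, target in [(A, 1), (B, 0)]:
        err = target - output(x)
        w += lr * err * x              # adjust weights only on mistakes

print(output(A), output(B))            # 1 0 after learning
```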
Neural Network Algorithm Features
- (1) Self-adaptive and self-organizing capability
- The human brain has strong adaptive and self-organizing characteristics: through learning and training it develops many distinctive abilities. For example, blind people are very sensitive to hearing and touch, deaf people are skilled at using gestures, and well-trained athletes can display extraordinary motor skills.
- The function of an ordinary computer, by contrast, depends on the knowledge and abilities encoded in its program, and writing programs for intelligent activities is obviously very difficult.
- Artificial neural networks also have preliminary adaptive and self-organizing capabilities: they change their synaptic weights during learning or training to meet the requirements of the environment, so the same network can serve different functions depending on how and on what it is trained. An artificial neural network is a system capable of learning, and it can develop knowledge beyond the original knowledge level of its designer. Its learning and training methods fall into two broad types. One is supervised (tutored) learning, which classifies or imitates according to given labeled samples; the other is unsupervised (untutored) learning, in which only a learning method or certain rules are specified, and the specific content learned varies with the environment the system is placed in (that is, with the input signals). The system can then automatically discover the features and regularities of its environment, a function much more similar to that of the human brain.
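The difference between the two kinds of learning shows up in the weight-update rules themselves. In the sketch below, the supervised update (a delta rule) needs an externally supplied target, while the unsupervised update (a simple Hebbian rule) uses only the input and the network's own response; both rules are shown in deliberately simplified form.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.array([1.0, 0.5, -0.3])       # one input pattern
w = rng.normal(scale=0.1, size=3)    # initial weights
lr = 0.1

# Supervised (tutored) learning: a teacher supplies the target, and the
# weight change is driven by the error relative to that target.
target = 1.0
y = np.dot(w, x)
w_supervised = w + lr * (target - y) * x

# Unsupervised (untutored) learning: no target exists. A Hebbian-style rule
# strengthens each weight in proportion to the co-activity of its input and
# the unit's own output, letting the network pick up regularities by itself.
y = np.dot(w, x)
w_unsupervised = w + lr * y * x

print(w_supervised, w_unsupervised)
```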
- (2) Generalization ability
- Generalization ability means that the network predicts and controls well on samples it was never trained on; in particular, it retains good predictive ability even when some of the samples are corrupted by noise (the sketch under point (3) below also demonstrates this).
- (3) Non-linear mapping capability
- When a system is transparent or clearly understood by its designers, mathematical tools such as numerical analysis and partial differential equations are generally used to build an accurate mathematical model. But when the system is complex or unknown and only limited information about it is available, an accurate model is hard to build, and the nonlinear mapping capability of neural networks shows its advantage: without requiring a thorough understanding of the system, a network can still realize the mapping between inputs and outputs, which greatly reduces the difficulty of design.
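As an illustration of both this nonlinear mapping capability and the generalization property above, the following sketch trains a small tanh network on noisy samples of an "unknown" system (here secretly sin(x)) and then evaluates it on unseen points. The architecture and hyperparameters are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# The "system" is observed only through noisy input/output samples.
x_train = rng.uniform(-np.pi, np.pi, size=(200, 1))
y_train = np.sin(x_train) + rng.normal(scale=0.1, size=(200, 1))

W1 = rng.normal(scale=0.5, size=(1, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros(1)

lr = 0.1
for _ in range(10000):
    h = np.tanh(x_train @ W1 + b1)            # nonlinear hidden layer
    pred = h @ W2 + b2                        # linear output
    d_pred = (pred - y_train) / len(x_train)  # mean-squared-error gradient
    d_h = (d_pred @ W2.T) * (1 - h**2)
    W2 -= lr * h.T @ d_pred; b2 -= lr * d_pred.sum(axis=0)
    W1 -= lr * x_train.T @ d_h; b1 -= lr * d_h.sum(axis=0)

# Generalization check: unseen, noise-free test points.
x_test = np.linspace(-np.pi, np.pi, 9).reshape(-1, 1)
h_test = np.tanh(x_test @ W1 + b1)
print(np.abs(h_test @ W2 + b2 - np.sin(x_test)).mean())  # typically small,
                                                         # comparable to the noise
```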
- (4) High parallelism
- The parallelism claim is somewhat controversial. The argument for it runs: neural networks are mathematical models abstracted from the human brain, and since people can do several things at once, a functional simulation of the brain should likewise exhibit strong parallelism.
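Whatever one's view of the controversy, the arithmetic of a network layer is naturally parallel: every neuron's weighted sum is independent of every other's, so a whole layer applied to a whole batch of inputs reduces to one matrix product, as this small sketch shows.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((1000, 64))     # 1000 inputs processed at once
W = rng.random((64, 32))       # one layer of 32 neurons

# Each of the 1000 x 32 weighted sums is independent of the others, so the
# whole layer is one matrix product that parallel hardware can evaluate
# simultaneously.
H = np.maximum(X @ W, 0.0)
print(H.shape)                 # (1000, 32)
```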
- For many years, people have tried to understand and answer such questions from angles including medicine, biology, physiology, philosophy, informatics, computer science, cognitive science, and synergetics. In the course of this search, an interdisciplinary field gradually took shape, known as "neural networks." Its study spans many subject areas, which combine with, permeate, and promote one another, as scientists in different fields pose different questions and investigate from different angles according to the interests and characteristics of their own disciplines.
- The following compares the working characteristics of artificial neural networks with those of ordinary computers:
- In terms of speed, signal transmission between neurons in the human brain is much slower than in a computer: the former operates on the order of milliseconds, while the latter's clock often reaches hundreds of megahertz. Yet because the human brain is a massively parallel-and-serial combined processing system, it can judge, decide, and act on many problems far faster than an ordinary serial computer. The basic structure of an artificial neural network imitates the human brain, and this parallel-processing character can greatly increase working speed.
- The human brain stores information by adjusting synaptic efficacy; that is, information is stored in the distribution of connection strengths between neurons, so storage and processing are integrated rather than separated. Although a large number of nerve cells die every day (on average about 1,000 per hour), this does not impair the brain's normal thinking activity.
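This fault tolerance of distributed storage is easy to demonstrate with a Hopfield-style associative memory (a model discussed later in this article): a pattern stored across all the connection weights can still be recalled after a sizable fraction of those connections is destroyed. The pattern size and damage fraction below are arbitrary illustration values.

```python
import numpy as np

rng = np.random.default_rng(1)
pattern = rng.choice([-1, 1], size=64)        # one stored memory

# Hebbian outer-product rule: the pattern lives in ALL the weights at once.
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0.0)

# Damage the network: destroy 20% of the connections outright.
W[rng.random(W.shape) < 0.2] = 0.0

# Recall from a corrupted cue (10 flipped bits).
state = pattern.copy()
state[rng.choice(64, size=10, replace=False)] *= -1
for _ in range(5):                            # synchronous updates
    state = np.sign(W @ state)
    state[state == 0] = 1

print(np.mean(state == pattern))              # typically 1.0 despite the damage
```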
- An ordinary computer, by contrast, has separate memory and arithmetic units: knowledge storage and data computation are unrelated, and they can only be linked by a human-written program, a link that cannot exceed what the programmer anticipated. Local damage to components, or minor errors in the program, can cause serious malfunction.
Application and development of neural network algorithms
- The purpose of psychologists and cognitive scientists in studying neural networks is to explore the mechanisms by which the human brain processes, stores, and retrieves information, to clarify the mechanisms of brain function, and to establish a microstructural theory of human cognitive processes.
- Experts in biology, medicine, and brain science hope, through the study of neural networks, to push brain science toward a quantitative, precise, and theoretical framework; others seek in it new ways to solve large classes of problems that are currently unsolvable or extremely difficult, and to construct a new generation of computers that comes closer to the workings of the human brain.
- Early research work on artificial neural networks dates back to the 1940s. The following is a brief chronological sketch of their development, using famous figures and landmark results as its thread.
- In 1943, the psychologist W. McCulloch and the mathematical logician W. Pitts, analyzing and summarizing the basic characteristics of neurons, first proposed a mathematical model of the neuron. The model is still in use today and has directly shaped research in the field; they may fairly be called the pioneers of artificial neural network research.
- In 1945, a design team led by von Neumann successfully developed the stored-program electronic computer, marking the beginning of the electronic computer era. In 1948, in his own research, he compared the fundamental differences between the structure of the human brain and that of stored-program computers, and proposed a network structure of regenerative automata composed of simple neurons. But the rapid advance of stored-program computing drew him away from this new approach to neural network research, and he devoted himself instead to stored-program computing, where he made enormous contributions. Although von Neumann's name is tied to ordinary computers, he is also one of the pioneers of artificial neural network research.
- In the late 1950s, F. Rosenblatt designed the "perceptron," a multilayer neural network. This work moved artificial neural network research from theoretical discussion into engineering practice for the first time. Many laboratories around the world then built imitations of the perceptron and applied them to text recognition, speech recognition, sonar signal recognition, and studies of learning and memory. But this first wave of enthusiasm did not last long, and many researchers abandoned the field. One reason was that digital computers were then in their heyday, and many people mistakenly believed they could solve all the problems of artificial intelligence, pattern recognition, expert systems, and so on, so the perceptron work was not taken seriously. Another was that the electronics of the time were primitive: the main components were vacuum tubes or transistors, so the neural networks built with them were bulky and expensive, and a network even approaching the scale of a real nervous system was completely out of reach. In addition, the 1969 book "Perceptrons" by Minsky and Papert pointed out that the capability of the linear perceptron is fundamentally limited and that no effective training method for multilayer networks was known; this caused a large number of researchers to lose confidence in the future of artificial neural networks. By the late 1960s, research on artificial neural networks had entered a trough.
- In addition, in the early 1960s, Widrow proposed the adaptive linear element (Adaline) network, a continuous-valued linear weighted-sum-and-threshold network, on which nonlinear multilayer adaptive networks were later built. Although this work did not yet carry the name "neural network," it was in fact an artificial neural network model.
- As interest in the perceptron declined, neural network research lay quiet for a long time. In the early 1980s, the manufacture of analog and digital very-large-scale integrated circuits reached a new level and fully entered practical use, while the development of digital computers was running into difficulty in several application areas. This background signaled that the time was ripe for artificial neural networks to find a way forward. The American physicist J. Hopfield published two papers on artificial neural networks in the Proceedings of the National Academy of Sciences, in 1982 and 1984, which drew a great response. People came to appreciate anew the power of neural networks and their practical applicability, and a large number of scholars and researchers immediately took up further work around Hopfield's method, forming the wave of artificial neural network research that began in the mid-1980s.
- In 1985, Ackley, Hinton, and Sejnowski applied the simulated annealing algorithm to neural network training and proposed the Boltzmann machine. The algorithm has the advantage of being able to escape local minima, but training takes a long time.
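A sketch of the simulated-annealing idea on a toy one-dimensional loss: occasionally accepting uphill moves, with a probability that shrinks as the temperature is lowered, is what lets the search escape local minima, and the slow cooling schedule is why training takes so long. The loss function and schedule here are invented for illustration; this is not the Boltzmann machine itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(w):
    # Toy loss: local minimum near w ~ 1.6, global minimum near w ~ -0.5.
    return 0.5 * w**2 + 2.0 * np.sin(3 * w)

w = 2.0                        # start in the basin of the local minimum
T = 2.0                        # initial "temperature"
while T > 1e-3:
    w_new = w + rng.normal(scale=0.5)
    d = loss(w_new) - loss(w)
    # Accept uphill moves with probability exp(-d/T): this is what allows
    # the search to climb out of local minima while T is still high.
    if d < 0 or rng.random() < np.exp(-d / T):
        w = w_new
    T *= 0.999                 # gradual annealing schedule

print(w, loss(w))              # typically ends near the global minimum
```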
- In 1986, Rumelhart, Hinton, and Williams proposed a learning algorithm for multilayer feedforward neural networks, the BP algorithm. The algorithm's correctness was established by derivation and proof, giving network learning a theoretical basis; from the standpoint of learning algorithms this was a major advance.
- In 1988, Broomhead and Lowe first proposed the radial basis function (RBF) network.
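A minimal RBF-network sketch under simple assumptions: Gaussian basis functions on hand-picked centers with a shared width, and a linear output layer fitted in closed form by least squares. The target function, centers, and width are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=100)
y = np.sinc(x)                               # function to approximate

centers = np.linspace(-3, 3, 10)             # basis-function centers
width = 0.8                                  # shared Gaussian width

# Hidden layer: Gaussian radial basis functions of distance to each center.
Phi = np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

# The output layer is linear, so its weights have a closed-form
# least-squares solution (no iterative training needed).
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)

x_new = np.array([0.5])
phi = np.exp(-((x_new[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))
print(phi @ w, np.sinc(x_new))               # prediction close to the true value
```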
- In general, neural network research has followed a tortuous path: from a first climax into a trough, and then on to a new climax.