What Are the Different Types of Neural Network Tools?
Artificial Neural Networks (ANNs), also referred to as Neural Networks (NNs) or connectionist models, are algorithmic, mathematical models that imitate the behavioral characteristics of animal neural networks and perform distributed parallel information processing. Depending on the complexity of the system, such a network adjusts the interconnections among a large number of internal nodes in order to process information. [1]
Neural Networks (communications definition)
- Biological neural networks mainly refer to the neural networks of the human brain, which are the technical prototype of artificial neural networks. The human brain is the material basis of human thinking. The function of thinking is located in the cerebral cortex, which contains about 10^11 neurons, each connected to roughly 10^3 others through synapses, forming a highly complex and highly flexible dynamic network. As a discipline, the study of biological neural networks mainly concerns the structure, function, and working mechanism of the brain's neural networks, and aims to explore the laws of human thinking and intelligent activity.
- An artificial neural network is a technical reproduction of a biological neural network in a simplified sense. As a discipline, its main task is to build practical artificial neural network models according to the principles of biological neural networks and the needs of practical applications, to design corresponding learning algorithms that simulate certain intelligent activities of the human brain, and then to implement them technically to solve practical problems. Biological neural network research thus mainly studies the mechanism of intelligence, while artificial neural network research mainly studies the realization of that mechanism; the two complement each other. [2]
- The research content of neural networks is quite extensive, reflecting its multidisciplinary, cross-technology character. The main research work focuses on the following aspects:
- Biological prototype
- Study the biological prototype structure and functional mechanism of nerve cells, neural networks, and the nervous system from the perspectives of physiology, psychology, anatomy, brain science, and pathology.
- Modeling
- Based on the research of biological prototypes, the theoretical models of neurons and neural networks are established. These include conceptual models, knowledge models, physical and chemical models, and mathematical models.
- Algorithm
- Based on theoretical model research, specific neural network models are constructed for computer simulation or hardware implementation, including the study of network learning algorithms. This aspect of the work is also called technical model research.
- The basic operation used in neural networks is vector multiplication, and sign functions and their various approximations are widely used as nonlinearities (see the sketch below). Parallelism, fault tolerance, hardware realizability, and self-learning are among the basic advantages of neural networks, and they distinguish neural computing methods from traditional ones.
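- As a minimal sketch of this basic operation (all names and values below are illustrative assumptions, not taken from the text), a single neuron computes a weighted sum of its inputs by vector multiplication and passes the result through a sign function or a smooth approximation of it:

```python
import numpy as np

def neuron(x, w, b, activation="sign"):
    """Weighted sum (vector multiplication) followed by a sign-like nonlinearity."""
    s = np.dot(w, x) + b              # vector multiplication plus bias
    if activation == "sign":
        return np.sign(s)             # hard threshold
    return np.tanh(s)                 # smooth approximation of the sign function

x = np.array([0.5, -1.0, 2.0])        # example input vector
w = np.array([0.2, 0.4, -0.1])        # example connection weights
print(neuron(x, w, b=0.1))            # hard-thresholded output (+1 or -1)
print(neuron(x, w, b=0.1, activation="tanh"))
```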
- According to their model structure, artificial neural networks can be roughly divided into two types: feedforward networks (also known as multilayer perceptron networks) and feedback networks (also known as Hopfield networks). The former can be regarded mathematically as a class of large-scale nonlinear mapping systems, while the latter are a class of large-scale nonlinear dynamic systems (a minimal contrast is sketched below). According to the way of learning, artificial neural networks can be divided into three categories: supervised, unsupervised, and semi-supervised; according to their mode of operation, they can be divided into deterministic and stochastic; according to their time characteristics, they can be divided into continuous and discrete types; and so on.
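- The following sketch contrasts the two structural types using illustrative random weights: the feedforward network maps an input to an output in a single pass (a nonlinear mapping), while the feedback (Hopfield-style) network repeatedly updates its own state (a nonlinear dynamic system).

```python
import numpy as np

def feedforward(x, W1, W2):
    """Two-layer feedforward network: a static nonlinear mapping."""
    h = np.tanh(W1 @ x)               # hidden layer
    return np.tanh(W2 @ h)            # output layer

def feedback_step(state, W):
    """One synchronous update of a Hopfield-style feedback network."""
    return np.sign(W @ state)         # the state evolves over time

rng = np.random.default_rng(0)
x = rng.standard_normal(4)
W1, W2 = rng.standard_normal((3, 4)), rng.standard_normal((2, 3))
print("feedforward output:", feedforward(x, W1, W2))

state = np.sign(rng.standard_normal(4))
W = rng.standard_normal((4, 4))
for _ in range(5):                    # iterate the dynamics for a few steps
    state = feedback_step(state, W)
print("feedback state:    ", state)
```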
- Regardless of the type of artificial neural network, their common characteristics are massive parallel processing, distributed storage, flexible topology, high redundancy, and nonlinear operation. As a result, they offer very fast computation, strong associative ability, strong adaptability, strong fault tolerance, and self-organization. These characteristics and capabilities constitute the technical basis for artificial neural networks to simulate intelligent activities, and they have found important applications in a wide range of fields. In communications, for example, artificial neural networks can be used for data compression, image processing, vector coding, error control (error-correcting and error-detecting codes), adaptive signal processing, adaptive equalization, signal detection, pattern recognition, ATM flow control, routing, communication network optimization, and intelligent network management.
- Research on artificial neural networks has been combined with research on fuzzy logic and, on this basis, supplemented by research on artificial intelligence, becoming a main direction for the new generation of intelligent systems. This is because artificial neural networks mainly simulate the intelligent behavior of the human right brain, while artificial intelligence mainly simulates the intelligent mechanism of the human left brain; combining the two can better simulate human intelligent activity. The new generation of intelligent systems will help humans extend their intelligence and thinking abilities more powerfully and will become smart tools for understanding and transforming the world. It will therefore continue to be an important frontier of contemporary scientific research.
- "How does the human brain work?"
- "Can humans make artificial neurons that mimic the human brain?"
- For many years, people have tried to understand and answer these questions from angles such as medicine, biology, physiology, philosophy, informatics, computer science, cognitive science, and organizational synergetics. In the search for answers, an emerging multidisciplinary field of technology gradually formed, called "neural networks". The study of neural networks involves many subject areas, which combine with, permeate, and promote one another. Scientists in different fields start from the interests and characteristics of their own disciplines, ask different questions, and conduct research from different angles.
- An artificial neural network must first learn according to a certain learning criterion before it can work. Take recognition of the two letters "A" and "B" as an example: when "A" is input to the network, it should output "1", and when "B" is input, it should output "0".
- The criterion for network learning should therefore be: if the network makes a wrong decision, learning should reduce the possibility of the network making the same mistake next time. First, each connection weight of the network is given a random value in the (0, 1) interval, and the image pattern corresponding to "A" is input to the network. The network weights and sums the input pattern, compares the result with a threshold, and then performs a nonlinear operation to obtain the network's output. In this initial state, the probability of the output being "1" or "0" is 50% each, that is, completely random. If the output is "1" (the correct result), the connection weights are increased so that the network can still make a correct judgment when it encounters the "A" pattern again (a sketch of this kind of threshold learning rule follows).
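- The sketch below realizes this kind of threshold learning with the classical perceptron rule; the "A" and "B" patterns are toy binary vectors standing in for letter images, and the pattern size, threshold, and learning rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
patterns = {
    "A": (np.array([1, 0, 1, 1, 0, 1]), 1),   # toy pattern for "A" -> target output 1
    "B": (np.array([1, 1, 0, 1, 1, 0]), 0),   # toy pattern for "B" -> target output 0
}

w = rng.uniform(0, 1, size=6)         # connection weights start in the (0, 1) interval
theta = 1.5                           # threshold
lr = 0.2                              # learning rate

for epoch in range(20):
    for name, (x, target) in patterns.items():
        output = 1 if np.dot(w, x) > theta else 0   # weighted sum compared with threshold
        # When the decision is wrong, shift the weights so that the same
        # mistake becomes less likely the next time the pattern appears.
        w += lr * (target - output) * x

for name, (x, target) in patterns.items():
    print(name, "->", 1 if np.dot(w, x) > theta else 0)
```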
- The function of an ordinary computer depends on the knowledge and capabilities given to it in its program, and writing such programs for intelligent activities is obviously very difficult.
- Artificial neural networks also have preliminary adaptive and self-organizing capabilities: they change their synaptic weight values during learning or training to meet the requirements of the surrounding environment. The same network may exhibit different functions depending on the learning method and content. An artificial neural network is a system with learning ability, and it can develop knowledge beyond the original knowledge level of its designer. In general, its learning and training methods fall into two types. One is supervised (or tutored) learning, which uses given sample standards for classification or imitation; the other is unsupervised (or untutored) learning, in which only the learning method or certain rules are specified and the specific learning content varies with the environment in which the system is placed (that is, with the input signals). In the latter case the system can automatically discover the characteristics and regularities of the environment, a function more similar to that of the human brain (both styles are sketched below).
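- A minimal sketch of the two training styles, using illustrative data: the supervised update moves the output toward a given target, while the unsupervised update (here Oja's normalized Hebbian rule, used as a stand-in) follows only a fixed rule and lets structure emerge from the inputs themselves.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((50, 3))              # 50 samples, 3 features
targets = (X[:, 0] > 0).astype(float)         # labels exist only in the supervised case

# Supervised ("tutored") learning: delta-rule update toward the given targets.
w_sup = np.zeros(3)
for x, t in zip(X, targets):
    y = 1.0 / (1.0 + np.exp(-w_sup @ x))      # sigmoid output
    w_sup += 0.1 * (t - y) * x                # move the output toward the target

# Unsupervised ("untutored") learning: only a rule is given, no targets at all.
w_unsup = rng.standard_normal(3) * 0.01
for x in X:
    y = w_unsup @ x
    w_unsup += 0.01 * y * (x - y * w_unsup)   # Oja's rule (normalized Hebbian update)

print("supervised weights:  ", w_sup)
print("unsupervised weights:", w_unsup)       # tends toward a principal direction of X
```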
- A neural network is like a child who loves to learn: whatever you teach it, it does not forget, and it applies what it has learned. We feed each input in the Learning Set to the neural network and tell it what the output classification should be. After the whole learning set has been run through, the neural network summarizes its own ideas from these examples; exactly how it summarizes them is a black box. We can then use the neural network on the examples in the Testing Set. If the test passes (for example, 80% or 90% accuracy), the neural network has been constructed successfully, and we can use it to classify new cases (a sketch of this workflow follows).
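- The train-then-test workflow might look like the sketch below; the toy two-class data and the nearest-centroid "network" are stand-ins chosen only to keep the example short, and the 80% bar follows the accuracy figure mentioned above.

```python
import numpy as np

rng = np.random.default_rng(7)

def make_data(n):
    """Toy two-class data: points scattered around two different centers."""
    X0 = rng.normal(loc=-1.0, scale=0.5, size=(n, 2))
    X1 = rng.normal(loc=+1.0, scale=0.5, size=(n, 2))
    return np.vstack([X0, X1]), np.array([0] * n + [1] * n)

X_train, y_train = make_data(100)     # Learning Set: inputs with known classes
X_test, y_test = make_data(30)        # Testing Set: held out for evaluation

# "Training": summarize the learning set (here, one centroid per class).
centroids = np.array([X_train[y_train == c].mean(axis=0) for c in (0, 1)])

# Testing: classify each held-out example and measure the accuracy.
pred = np.array([np.argmin(np.linalg.norm(centroids - x, axis=1)) for x in X_test])
accuracy = (pred == y_test).mean()
print(f"test accuracy: {accuracy:.0%}")
print("network accepted" if accuracy >= 0.80 else "network rejected")
```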
- A neural network is a model that abstracts and connects the basic unit of the human brain, the neuron, in order to explore models that simulate the functions of the human nervous system and to develop artificial intelligence systems with information-processing functions such as learning, association, memory, and pattern recognition. An important feature of a neural network is that it can learn from its environment and store the results of that learning in the synaptic connections of the network. Learning is a process: stimulated by its environment, the network is successively presented with sample patterns, and the weight matrix of each layer is adjusted according to certain rules (learning algorithms); when the weights converge to stable values, the learning process ends (see the sketch below). The resulting network can then be used to classify real data.
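- A sketch of that convergence criterion, under the assumption of a single-layer sigmoid model and a synthetic dataset: the weights are adjusted repeatedly and training stops once they effectively stop changing.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((200, 4))
true_w = np.array([1.0, -2.0, 0.5, 0.0])
y = (X @ true_w + 0.5 * rng.standard_normal(200) > 0).astype(float)  # synthetic labels

W = np.zeros(4)
lr, tol = 0.5, 1e-4
for epoch in range(5000):
    out = 1.0 / (1.0 + np.exp(-X @ W))        # forward pass through the network
    grad = X.T @ (out - y) / len(X)           # learning rule: gradient of the log-loss
    W_new = W - lr * grad                     # adjust the weight vector
    if np.linalg.norm(W_new - W) < tol:       # weights have converged; learning ends
        print(f"converged after {epoch} epochs")
        break
    W = W_new

pred = (1.0 / (1.0 + np.exp(-X @ W)) > 0.5).astype(float)
print("training accuracy:", (pred == y).mean())
```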
- In 1943, psychologist W. McCulloch and mathematical logician W. Pitts first proposed a mathematical model of neurons on the basis of analyzing and summarizing the basic characteristics of neurons. This model is still in use today and directly affects the progress of research in this field. Therefore, they can be called the pioneers of artificial neural network research.
- In 1945, a design team led by von Neumann successfully built a stored-program electronic computer, marking the beginning of the electronic computer era. In 1948, in his research work he compared the fundamental differences between the structure of the human brain and stored-program computers, and proposed a regenerative automaton network structure composed of simple neurons. However, the rapid development of stored-program computer technology led him to abandon this new approach to neural network research and to continue devoting himself to stored-program computing, where he made great contributions. Although von Neumann's name is associated with the ordinary computer, he is also one of the pioneers of artificial neural network research.
- In the late 1950s, F. Rosenblatt designed the "perceptron", a multilayer neural network. This work was the first to move artificial neural network research from theoretical discussion into engineering practice. At the time, many laboratories around the world imitated the perceptron and applied it to research on text recognition, speech recognition, sonar signal recognition, and learning and memory problems. However, this first climax of artificial neural network research did not last long, and many people gave up their research in succession. One reason was that digital computers were then at their heyday, and many people mistakenly believed they could solve all the problems of artificial intelligence, pattern recognition, expert systems, and so on, so work on the perceptron was not taken seriously. Another was that the electronics of the time were relatively backward: the main components were vacuum tubes or transistors, so the neural networks built with them were bulky and expensive, and making a network comparable in scale to a real neural network was completely out of reach. In addition, the 1969 book "Perceptrons" pointed out that the capabilities of the linear perceptron are limited, that it cannot solve problems such as XOR (illustrated briefly below), and that no effective learning method for multilayer networks was known. These arguments led a large number of researchers to lose confidence in the future of artificial neural networks, and in the late 1960s research on artificial neural networks entered a low tide.
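- The XOR limitation can be seen in a few lines: no single threshold unit separates the XOR patterns, while a two-layer network with hand-chosen weights does. The weights below are one standard construction, given purely as an illustration.

```python
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
xor = np.array([0, 1, 1, 0])

def step(z):
    return (z > 0).astype(int)

# Single threshold unit trained with the perceptron rule: it never succeeds,
# because no straight line separates {(0,1), (1,0)} from {(0,0), (1,1)}.
w, b = np.zeros(2), 0.0
for _ in range(100):
    for x, t in zip(X, xor):
        y = step(w @ x + b)
        w, b = w + (t - y) * x, b + (t - y)
print("single layer:", step(X @ w + b), "  target:", xor)

# Two-layer network: the hidden units act as OR and AND detectors, and the
# output computes OR AND NOT(AND), which is exactly XOR.
W1 = np.array([[1.0, 1.0],            # OR-like hidden unit
               [1.0, 1.0]])           # AND-like hidden unit
b1 = np.array([-0.5, -1.5])
W2 = np.array([1.0, -1.0])            # OR minus AND
b2 = -0.5
hidden = step(X @ W1.T + b1)
print("two layers:  ", step(hidden @ W2 + b2), "  target:", xor)
```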
- In addition, in the early 1960s, Widrow proposed the adaptive linear element (Adaline), a continuous-valued linear weighted-sum threshold network, and later developed nonlinear multilayer adaptive networks on this basis. Although this work did not carry the name "neural network" at the time, it was in fact an artificial neural network model.
- As interest in the perceptron declined, research on neural networks fell quiet for a long time. In the early 1980s, the manufacture of analog and digital very-large-scale integrated circuits reached a new level and entered full practical use, while the development of digital computers ran into difficulties in several application areas. Against this background, the time had come for artificial neural networks to find a way forward. The American physicist Hopfield published two papers on artificial neural network research in the Proceedings of the National Academy of Sciences in 1982 and 1984, which caused a great response. People came to appreciate anew the power of neural networks and the practicality of applying them, and a large number of scholars and researchers soon carried out further work around Hopfield's method, forming the boom in artificial neural network research that began in the mid-1980s.
- Among the many neural network tools, NeuroSolutions has long held a leading position in the industry. It is a highly graphical neural network development tool for Windows XP/7 that combines a modular, icon-based network design interface with advanced learning procedures and genetic optimization. As a neural network design tool for studying and solving complex real-world problems, its uses are almost unlimited.
- The research of neural network can be divided into two aspects: theoretical research and applied research.
- Theoretical research can be divided into the following two categories:
- 1. Use neurophysiology and cognitive science to study human thinking and intelligent mechanisms.
- 2. Use the results of basic neural theory research and mathematical methods to explore neural network models with more complete functions and better performance, and to study network algorithms and properties in depth, such as stability, convergence, fault tolerance, and robustness; and to develop new mathematical theories of networks, such as neural network dynamics and nonlinear neural fields.
- Applied research can be divided into the following two categories:
- 1. Research on the software simulation and hardware implementation of neural networks.
- 2. Research on the application of neural networks in various fields. These areas include:
- Pattern recognition, signal processing, knowledge engineering, expert systems, combinatorial optimization, robot control, and so on. With the continuing development of neural network theory and related theories and technologies, the applications of neural networks will surely deepen.