What Is Optimal Control Theory?

Optimal control theory is a major branch of modern control theory. It studies the basic conditions under which the performance index of a control system attains its optimum and the synthesis methods for achieving it; in other words, it is the discipline of finding the best control scheme among all admissible ones.

Research object: control systems. Typical application: the synthesis and design of time-optimal control systems.

Introduction to Optimal Control Theory

The groundbreaking work in this area is mainly the dynamic programming proposed by R. E. Bellman and the maximum principle proposed by L. S. Pontryagin and others. Earlier work can be traced back to the founding of cybernetics by N. Wiener and others. In 1948, Wiener published "Cybernetics: Or Control and Communication in the Animal and the Machine", which for the first time scientifically put forward the concepts of information, feedback, and control, and laid the foundation for the field.

Research content of optimal control theory

The problems studied by optimal control theory can be summarized as follows: for a controlled dynamic system or motion process, find an optimal control scheme among a class of admissible ones so that, as the system moves from a given initial state to a specified target state, its performance index attains an optimal value. Such problems arise widely in both technical and social settings.
For example: determining a control law that minimizes the fuel consumed by a spacecraft transferring from one orbit to another; choosing a temperature-regulation law and the corresponding raw-material ratio that maximize the yield of a chemical reaction process; and formulating a population policy that optimizes indices such as aging, dependency, and labor force during population development. These are all typical optimal control problems. Optimal control theory took shape and developed in the mid-1950s, driven by space technology. The maximum principle proposed by the Soviet scholar L. S. Pontryagin in 1958 and the dynamic programming proposed by the American scholar R. Bellman in 1956 played key roles in its formation and development. The optimal control problem for linear systems under a quadratic performance index was posed and solved by R. E. Kalman in the early 1960s.

Main methods of optimal control theory

To solve an optimal control problem, one must establish the equations of motion describing the controlled process, specify the admissible range of the control variables, prescribe the initial and target states, and define a performance index for evaluating the quality of the motion. In general, the performance index depends on the chosen control function and the resulting state trajectory; the state is constrained by the equations of motion, and the control can only be chosen within the admissible set. Mathematically, the optimal control problem can therefore be stated as: subject to the equations of motion and the admissible control set, find the extremum (maximum or minimum) of the performance index, a functional of the control function and the state trajectory. The main methods for solving such problems are the classical calculus of variations, the maximum principle, and dynamic programming.
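As a minimal, hedged illustration of this formulation, consider the linear-quadratic case mentioned later in this article, reduced to a scalar discrete-time system x[k+1] = a·x[k] + b·u[k] with a quadratic performance index. The sketch below (all names and numbers are illustrative, not from the source) computes the optimal feedback gains by a backward Riccati recursion:

```python
def scalar_lqr(a, b, q, r, qf, horizon):
    """Backward Riccati recursion for a scalar discrete-time LQR problem.

    Dynamics:  x[k+1] = a*x[k] + b*u[k]
    Index:     J = sum_k (q*x[k]**2 + r*u[k]**2) + qf*x[N]**2
    Returns the time-varying optimal gains, so that u[k] = -K[k]*x[k].
    """
    p = qf                      # cost-to-go weight at the final time
    gains = []
    for _ in range(horizon):
        k = (a * b * p) / (r + b * b * p)    # optimal gain at this stage
        p = q + a * a * p - k * (a * b * p)  # Riccati update, one step back
        gains.append(k)
    gains.reverse()             # gains were generated backward in time
    return gains

def simulate(a, b, x0, gains):
    """Roll the closed loop forward and return the state trajectory."""
    xs = [x0]
    for k in gains:
        xs.append(a * xs[-1] + b * (-k * xs[-1]))
    return xs
```

For a = b = q = r = 1 the gains settle near 0.618 and the closed loop drives the state to zero: the performance index is made extremal subject to the dynamic constraint, exactly the problem statement above.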

Classical variational method of optimal control theory

A mathematical method for studying the extrema of functionals. The classical calculus of variations applies only when the range of the control variables is unrestricted. In many practical control problems, however, the control is confined by closed boundaries: a rudder can only rotate between two limits, and a motor can only produce torque between its positive and negative maxima. The classical variational method is therefore powerless for many important practical optimal control problems.

Maximum principle of optimal control theory

The maximum principle is a generalization of the Hamiltonian method of analytical mechanics. Its prominent advantage is that it applies even when the control variables are constrained, and it gives the necessary conditions that an optimal control must satisfy.

Dynamic programming in optimal control theory

Dynamic programming is a branch of mathematical programming that also applies when the control variables are constrained. It is an effective method that is well suited to implementation on a computer.
Optimal control theory has been applied to the most fuel-efficient control systems, minimum energy consumption control systems, and linear regulators.

Optimization techniques in optimal control theory

The realization of optimal control is inseparable from optimization techniques. Optimization is the discipline that studies how to find the best solution among all possible ones; in other words, it addresses two problems: how to formulate an optimization problem as a mathematical model, and how to find the model's optimal solution as efficiently as possible. Generally speaking, applying optimization methods to practical engineering problems involves three steps:
According to the problem posed, establish a mathematical model of the optimization problem: determine the variables and list the constraints and the objective function;
Analyze and study the established model, and select an appropriate optimization method;
Draw the flow chart and write the program for the chosen algorithm, use a computer to find the optimal solution, and evaluate the algorithm's convergence, generality, simplicity, computational efficiency, and error.

Solution Methods of Optimal Control Theory

The so-called optimization problem is to find an optimal control scheme or optimal control law that lets the system achieve its intended goal in the best possible way. Once the mathematical model of the optimization problem has been established, the main task is to solve it. Broadly, optimization methods divide into offline static methods and online dynamic methods, and the solution techniques fall into roughly four categories:
1. Analytical method
Objective functions and constraints with simple, explicit mathematical expressions can usually be optimized analytically: first obtain candidate solutions from the necessary conditions for an extremum by mathematical analysis, then determine the optimal one indirectly from the sufficient conditions or from the physical meaning of the problem.
This method suits problems whose performance index and constraints have explicit analytic expressions. The general procedure is to derive the necessary conditions for optimality, by differentiation or by the calculus of variations, as a set of equations or inequalities, and then solve them; the resulting analytical solution is the optimal control sought. Analytical methods fall roughly into two classes: when there are no constraints, differential or variational methods are used; when constraints are present, the maximum principle or dynamic programming is used. [1]
(1) Variational method: when the control vector is unconstrained, a Hamiltonian function is introduced, and the calculus of variations yields the necessary conditions for optimal control, namely the canonical equations, the governing (stationarity) equation, the boundary conditions, and the transversality conditions.
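As a hedged sketch of these conditions in their standard textbook form (the notation below is generic, not taken from this article): for dynamics ẋ = f(x, u, t) and index J = φ(x(t_f)) + ∫ L(x, u, t) dt, introduce the costate λ(t) and define the Hamiltonian; for unconstrained u the necessary conditions then read:

```latex
H(x,u,\lambda,t) = L(x,u,t) + \lambda^{\top} f(x,u,t)
\quad\text{(Hamiltonian)}

\dot{x} = \frac{\partial H}{\partial \lambda}
\quad\text{(state equation)}
\qquad
\dot{\lambda} = -\frac{\partial H}{\partial x}
\quad\text{(costate / canonical equation)}

\frac{\partial H}{\partial u} = 0
\quad\text{(governing equation, $u$ unconstrained)}

\lambda(t_f) = \left.\frac{\partial \varphi}{\partial x}\right|_{t=t_f}
\quad\text{(transversality condition, free terminal state)}
```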
(2) Maximum principle: when an optimal control problem is solved by the variational method, the control vector u(t) is assumed unconstrained, i.e. the admissible control set is the whole (open) control space, so the variation δu is arbitrary; at the same time the Hamiltonian H is required to be continuously differentiable in u. In practical engineering, however, the control is usually constrained to some degree, and then the maximum principle can be used to solve the optimal control problem. The method is in fact derived from the calculus of variations, but because it applies when u(t) is restricted to a bounded closed set and does not require H to be continuously differentiable in u, it has found wide application.
(3) Dynamic programming: like the maximum principle, this is an effective mathematical method for optimal control problems in which the control vector is restricted to a closed set. It converts a complex optimal control problem into a recursive functional relationship over a multi-stage decision process. Its foundation and core is the principle of optimality: in a multi-stage decision problem, whatever the initial state and initial decision, the remaining decisions, taken as a multi-stage decision process starting from the stage and state that result, must themselves constitute an optimal policy. Using this principle, a multi-stage decision problem reduces to a sequence of single-stage decision problems, in which the decision at each stage depends only on the current state, not on any earlier decision. For continuous systems, the dynamic programming method can be applied by first discretizing the system, replacing the continuous equations by finite-difference approximations, and then solving with discrete dynamic programming.
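The discretize-then-recurse procedure described above can be sketched as a backward value iteration over a finite grid (all grids, costs, and horizons here are illustrative assumptions):

```python
def backward_dp(states, controls, step, stage_cost, terminal_cost, horizon):
    """Backward dynamic programming on a discretized state space.

    states, controls : finite grids (iterables of hashable values)
    step(x, u)       : next-state map, assumed to land back on the grid
    Returns the final cost-to-go table V and the stage-indexed policy.
    """
    V = {x: terminal_cost(x) for x in states}
    stages = []
    for _ in range(horizon):
        newV, pi = {}, {}
        for x in states:
            # principle of optimality: best first move + optimal cost-to-go
            best = min((stage_cost(x, u) + V[step(x, u)], u) for u in controls)
            newV[x], pi[x] = best
        V = newV
        stages.append(pi)
    stages.reverse()            # stages were generated backward in time
    return V, stages
```

For instance, on the integer grid −5…5 with controls {−1, 0, 1}, a quadratic stage cost makes the first-stage policy push every state toward zero.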
2. Numerical solution (direct method)
For optimization problems whose objective function is complicated or has no explicit mathematical expression, or which cannot be solved analytically, direct methods are usually used. The basic idea of the direct method is to generate, by direct search through successive iterations, a sequence of points that gradually approaches the optimal point. Direct methods are often derived from experience or experiment. [1]
When the performance index is complicated or cannot be expressed as an explicit function of the variables, a direct search can thus be used to reach the optimal point after a number of iterations. Numerical methods fall into two classes:
(1) Interval elimination methods, also called one-dimensional search methods, suited to single-variable extremum problems; the main ones are the golden-section method and polynomial interpolation.
(2) Hill-climbing methods, also called multidimensional search methods, suited to multivariable extremum problems; the main ones are the coordinate rotation method and the step acceleration (pattern search) method.
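As an illustration of the one-dimensional search in (1), here is a standard golden-section method (the test function is an arbitrary example, not from the source):

```python
def golden_section(f, a, b, tol=1e-6):
    """Golden-section search for the minimizer of a unimodal f on [a, b].

    Each step discards the subinterval that cannot contain the minimum,
    shrinking [a, b] by the golden-ratio conjugate (~0.618) per iteration.
    """
    phi = (5 ** 0.5 - 1) / 2
    c = b - phi * (b - a)
    d = a + phi * (b - a)
    while b - a > tol:
        if f(c) < f(d):         # minimum lies in [a, d]
            b, d = d, c
            c = b - phi * (b - a)
        else:                   # minimum lies in [c, b]
            a, c = c, d
            d = a + phi * (b - a)
    return (a + b) / 2
```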
3. Analytical and numerical optimization method (gradient method)
This combines analysis with numerical computation. There are two main categories: unconstrained gradient methods, such as the steepest descent method and quasi-Newton methods; and constrained gradient methods, such as the feasible direction method and the gradient projection method.
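A minimal sketch of the first category, fixed-step steepest descent (step size, tolerance, and test function are illustrative assumptions; quasi-Newton and constrained variants are substantially more involved):

```python
def steepest_descent(grad, x0, lr=0.1, tol=1e-10, max_iter=10000):
    """Unconstrained gradient method: step against the gradient until
    the squared gradient norm falls below tol."""
    x = list(x0)
    for _ in range(max_iter):
        g = grad(x)
        if sum(gi * gi for gi in g) < tol:
            break
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x
```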
4. Network optimization methods
This method uses the network graph as a mathematical model, and uses the graph theory method to search for optimization methods.

Recent advances in optimal control theory

Online optimization methods in optimal control theory

Offline optimization based on a mathematical model of the plant is an idealization. Although an industrial process is designed to run continuously under given nominal conditions, disturbances such as environmental changes, aging of catalysts and equipment, and variations in raw-material composition mean that the actual working conditions are rarely optimal.
Common approaches to such problems include the following.
(1) Design method of local parameter optimization and global optimization
The basic idea of local parameter optimization is to adjust the controller's tunable parameters according to the difference between the outputs of a reference model and of the controlled process, so as to minimize the integral of the squared output error. The controlled process is thereby made to match the reference model as closely and as quickly as possible.
In addition, combining static with dynamic optimization, and local with overall optimization, gives global optimization, whose quality is expressed by an overall objective function. It consists of two parts: static (offline) optimization, whose objective function is constant over a period of time or a range of conditions; and dynamic (online) optimization, which refers to optimization of the entire industrial process. An industrial process is dynamic: to keep the system in an optimized state, disturbances must be rejected as they occur and the local optimization parameters or field controllers coordinated so that the whole system remains optimal.
(2) Rolling optimization algorithm in predictive control
Predictive control, also known as model-based control, is an optimizing control algorithm that arose in the late 1970s. Unlike the usual discrete optimal control algorithms, it does not use a fixed global optimization objective but a rolling, finite-horizon optimization strategy: the optimization is performed not once offline but repeatedly online. Because each objective is local and finite, the result is in general only globally suboptimal, but the rolling implementation can absorb the uncertainties caused by model mismatch, time variation, and disturbances, always basing the new optimization on the actual process state so that the control remains optimal in practice. This heuristic rolling strategy weighs ideal optimization against actual uncertainty over a sufficient future horizon; in a complex industrial environment it is more practical and effective than optimal control based on idealized assumptions.
The optimization mode of predictive control has distinctive features: the discrete, finite optimization objective and the rolling implementation yield dynamic optimization over the whole control horizon and static parameter optimization within each step. With this approach, more complex situations can be handled, such as constraints, multiple objectives, nonlinearity, and even nonparametric models. Borrowing hierarchical ideas from planning, objectives can also be layered by importance and type and optimized at different levels. Clearly, hierarchical decision-making and artificial-intelligence methods from large-scale system control can be introduced into predictive control to form multi-layer intelligent predictive control, which overcomes the shortcomings of single-model predictive control algorithms and is an important direction of current research.
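The rolling-horizon idea above can be caricatured in a few lines: at every step a finite-horizon problem is solved (here by brute-force enumeration over a tiny control set, purely for illustration), only the first control is applied, and the optimization is repeated from the newly measured state:

```python
from itertools import product

def receding_horizon(x0, a, b, horizon, steps, controls=(-1.0, 0.0, 1.0)):
    """Rolling finite-horizon optimization for x[k+1] = a*x[k] + b*u[k]."""
    def cost(x, seq):
        total = 0.0
        for u in seq:
            x = a * x + b * u
            total += x * x + 0.1 * u * u    # illustrative stage cost
        return total

    x, traj = x0, [x0]
    for _ in range(steps):
        best = min(product(controls, repeat=horizon), key=lambda s: cost(x, s))
        x = a * x + b * best[0]   # apply only the first move, then re-plan
        traj.append(x)
    return traj
```

Because the state is re-measured before each re-optimization, model error or disturbances entering between steps are automatically taken into account, which is the practical appeal described above.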
(3) Steady-state hierarchical control
For the control of complex, large industrial processes, a distributed control mode is often used, and the computer's online steady-state optimization then often adopts a hierarchical structure. This structure has both a control layer and an optimization layer, the optimization layer itself being a two-level structure of local decision units and a coordinator. The optimization proceeds as follows: each decision unit optimizes its sub-process in parallel, coordinated by the upper-level coordinator; the units and the coordinator find the optimal solution by mutual iteration. The important contributions of the Polish scholar Findeisen and co-workers must be mentioned here.
Because accurate mathematical models of industrial processes are hard to obtain, and the processes tend to be nonlinear and slowly time-varying, Findeisen pointed out that the solution obtained from the model in the optimization algorithm is an open-loop optimal solution. In the design phase of online steady-state control of a large industrial process, the open-loop solution can be used to determine the optimal operating point; in actual use, however, it may fail to put the process at its optimum and may even violate constraints. The new idea he and his colleagues proposed is to extract steady-state information about the associated variables from the actual process and feed it back to the upper-level coordinator (global feedback) or to the local decision units (local feedback), using it to correct the model-based optimal solution and bring it closer to the true optimum.
(4) Integrated research method for system optimization and parameter estimation
The difficulty of steady-state hierarchical control is that the actual input-output characteristics of the process are unknown. The feedback correction mechanism proposed by the Polish scholars yields only a suboptimal solution, and its main drawback is that the deviation of this solution from the true optimum is generally hard to estimate; the degree of suboptimality often depends on the choice of initial point. A natural idea is to separate optimization from parameter estimation and alternate between them until the iteration converges to a solution. The computer's online optimizing control then comprises two tasks: optimization based on a rough model (a rough model is usually available) and correction of the model at the resulting set point. This is called integrated system optimization and parameter estimation (Integrated System Optimization and Parameter Estimation).

Intelligent optimization methods in optimal control theory

As control objects become ever more complex, on the one hand the required control performance is no longer limited to one or two indices; on the other hand, all the optimization methods above presuppose an accurate mathematical model of the optimization problem, whereas for many practical engineering problems accurate models are difficult or impossible to obtain. This limits the practical application of the classical methods. With the development of fuzzy theory, neural networks, and other intelligent techniques, together with computer technology, intelligent optimization methods have attracted attention and developed rapidly.
(1) neural network optimization method
The study of artificial neural networks originated with the work of McCulloch and Pitts in 1943. For optimization, Hopfield first introduced a Lyapunov (energy) function to judge network stability in 1982 and proposed the single-layer discrete Hopfield model; Hopfield and Tank then developed the single-layer continuous model. In 1986, Hopfield and Tank mapped the model directly onto electronic circuits for hardware simulation. Kennedy and Chua proposed analog circuit models based on nonlinear circuit theory and used the Lyapunov function of the governing differential equations to study the circuits' stability. These works strongly promoted research on neural network optimization methods.
According to neural network theory, the minima of a network's energy function correspond to the stable equilibria of the system, so finding a minimum of the energy function is converted into finding a stable equilibrium. As time evolves, the network trajectory always moves in the direction of decreasing energy and finally reaches an equilibrium point of the system, i.e. a minimum of the energy function. Thus, if the stable attractors of the network dynamics are regarded as the minima of a suitable (possibly augmented) energy function, the optimization computation consists of letting the system flow from an initial point down to a minimum. Applied to a control system from a global-optimization viewpoint, the system's objective function will finally reach the desired minimum. This is the basic principle of neural optimization computation.
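This energy-descent principle can be demonstrated with a tiny discrete Hopfield network (the Hebbian weights and the stored pattern below are invented for the example); each asynchronous update can only lower the energy E(s) = −(1/2)·sᵀWs:

```python
import random

def energy(W, s):
    n = len(s)
    return -0.5 * sum(W[i][j] * s[i] * s[j] for i in range(n) for j in range(n))

def hopfield_descent(W, s, sweeps=10, seed=0):
    """Asynchronous +-1 updates; W must be symmetric with zero diagonal."""
    rng = random.Random(seed)
    s, n = list(s), len(s)
    for _ in range(sweeps):
        for i in rng.sample(range(n), n):      # random update order
            h = sum(W[i][j] * s[j] for j in range(n))
            s[i] = 1 if h >= 0 else -1         # each flip lowers (or keeps) E
    return s

# Hebbian weights storing pattern p: the stored pattern becomes an
# energy minimum, i.e. a stable equilibrium of the dynamics.
p = [1, -1, 1, -1]
W = [[(p[i] * p[j] if i != j else 0) for j in range(4)] for i in range(4)]
```

Starting from a corrupted copy of p, the dynamics fall down the energy landscape and recover the stored pattern.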
Like general mathematical programming, neural network methods have the weakness of requiring many analyses. How to combine them with structural optimization techniques such as approximate structural reanalysis, so as to reduce the number of iterations, is one direction of future research.
Since the Hopfield model applies to both discrete and continuous problems, it is expected to solve effectively the nonlinear optimization problems with mixed discrete variables commonly found in control engineering.
(2) Genetic algorithm
Genetic algorithms and genetic programming are relatively new search and optimization techniques. They imitate biological evolution and inheritance and, following the principle of survival of the fittest, move step by step from an initial population toward the optimal solution. In many cases genetic algorithms are markedly better than traditional optimization methods: they allow the problem to be nonlinear and discontinuous, and they search the whole feasible solution space for global optima and near-optima, avoiding entrapment in a single local optimum; this provides more useful reference information for system control. At the same time, the search for the optimum is guided, which avoids the dimensionality problems of some general-purpose optimization algorithms. With the development of computer technology, these advantages will play a growing role in the control field.
Research shows that the genetic algorithm is a structural optimization method with great potential; it has clear advantages for complex optimization problems such as nonlinear structural optimization, dynamic structural optimization, shape optimization, and topology optimization.
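A toy real-coded genetic algorithm conveys the idea (truncation selection, blend crossover, Gaussian mutation; every operator choice and constant here is an illustrative assumption, not a recommendation):

```python
import random

def genetic_minimize(f, bounds, pop_size=40, gens=80, mut=0.5, seed=1):
    """Minimize f over the box given by bounds = [(lo, hi), ...]."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=f)
        elite = pop[: pop_size // 2]       # survival of the fittest
        children = []
        while len(children) < pop_size - len(elite):
            pa, pb = rng.sample(elite, 2)
            child = [(x + y) / 2 for x, y in zip(pa, pb)]   # blend crossover
            for i, (lo, hi) in enumerate(bounds):
                if rng.random() < mut:                      # Gaussian mutation
                    child[i] = min(hi, max(lo, child[i] + rng.gauss(0, 1.0)))
            children.append(child)
        pop = elite + children
    return min(pop, key=f)
```

Elitism keeps the best individual from one generation to the next, so the best objective value never worsens; mutation maintains the population diversity the text attributes to global search.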
(3) Fuzzy optimization method
Optimization problems have always been one of the most widely used areas of fuzzy theory.
Since Bellman and Zadeh pioneered this research in the early 1970s, work has focused on theory in the general sense, fuzzy linear programming, multi-objective fuzzy programming, fuzzy programming within stochastic programming, and applications to many practical problems. The main research method is to use α-cuts of fuzzy sets, or to determine their membership functions, so as to transform a fuzzy programming problem into a classical programming problem.
Fuzzy optimization has the same structure as ordinary optimization: it still seeks a control scheme (a set of design variables) that satisfies given constraints and makes the objective function optimal; the only difference is that fuzzy factors are involved. Ordinary optimization reduces to an ordinary mathematical programming problem, fuzzy optimization to a fuzzy mathematical programming problem. Both involve control variables, an objective function, and constraints, but any of these may be fuzzy, or some fuzzy and the others crisp; for instance, in design problems with fuzzy constraints the fuzziness resides in the constraints (geometric, performance, human, and so on). The basic idea of solving a fuzzy mathematical program is to transform it into a non-fuzzy, i.e. ordinary, optimization problem. The methods fall into two classes: those that give a fuzzy solution, and those that give a specific crisp solution. It must be noted that these approaches were proposed for fuzzy linear programming, whereas most practical engineering problems are described by fuzzy nonlinear programming; methods such as the level-cut method, the bounded search method, and the maximum-level method have therefore been proposed and have achieved some promising results.
In the control field, fuzzy control has been integrated with self-learning algorithms and with genetic algorithms. By improving the learning and genetic algorithms, the controlled object is optimized and learned step by step according to a given performance index, which can effectively determine the structure and parameters of the fuzzy controller.

Case study of optimal control theory

Application of Optimal Control Theory in Excitation Control of Power System

With the continuous development of modern control theory and its practical applications, research on using it for optimal control of power system performance has advanced rapidly, and much progress has been made in designing multi-parameter excitation regulators in an optimized way.
(1) Comprehensive excitation regulator based on nonlinear optimization and PID technology
For a synchronous generator, a nonlinear system, a power system stabilizer based on PID technology alone produces errors when the machine deviates from its operating point or the system suffers a large disturbance. An excitation regulator based on nonlinear optimal control can be used instead, but such a regulator has the drawback of weak voltage-control capability. A new design principle therefore combines the nonlinear optimal excitation regulator organically with a PID-based power system stabilizer.
This comprehensive excitation regulator draws on the results of nonlinear optimal control theory. It uses an exact linearization in the nonlinear excitation control, so there is no truncation error from linearizing about the equilibrium point and the mathematical model is theoretically accurate at all operating points of the generator; at the same time, a PID link compensates for the weak voltage-regulation capability of nonlinear excitation control. The device shows strong voltage-regulation characteristics; tests on a small unit achieved very good results, with good regulation both near the equilibrium point and when deviating far from it.
(2) Adaptive Optimal Excitation Controller
Combining adaptive control theory with optimal control theory, through the three links of multi-variable parameter identification, optimal feedback coefficient calculation and control algorithm operation, adaptive optimal control of synchronous generator excitation can be achieved.
This adaptive optimal excitation scheme uses a multivariable real-time identifier based on a least-squares algorithm with a variable forgetting factor, so that the entries of the coefficient matrices A and B of the system state equation track the synchronous machine's parameters as the operating conditions change; the optimal feedback coefficients are then recomputed to realize adaptive optimal excitation control of the synchronous machine.
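A scalar sketch of such an identifier (pure illustration: a two-parameter model x[k+1] = a·x[k] + b·u[k], with a fixed forgetting factor standing in for the variable one described in the text):

```python
def rls_forgetting(phi_seq, y_seq, lam=0.95):
    """Recursive least squares with forgetting factor lam.

    Model: y = theta^T phi with theta = [a, b], phi = (x[k], u[k]),
    y = x[k+1].  The forgetting factor discounts old data so the
    estimate can track slowly varying parameters.
    """
    th = [0.0, 0.0]
    P = [[1000.0, 0.0], [0.0, 1000.0]]     # large initial covariance
    for phi, y in zip(phi_seq, y_seq):
        Pphi = [P[0][0] * phi[0] + P[0][1] * phi[1],
                P[1][0] * phi[0] + P[1][1] * phi[1]]
        denom = lam + phi[0] * Pphi[0] + phi[1] * Pphi[1]
        K = [Pphi[0] / denom, Pphi[1] / denom]          # gain vector
        err = y - (th[0] * phi[0] + th[1] * phi[1])     # prediction error
        th = [th[0] + K[0] * err, th[1] + K[1] * err]
        P = [[(P[i][j] - K[i] * Pphi[j]) / lam for j in range(2)]
             for i in range(2)]                          # covariance update
    return th
```

In the scheme described above, estimates like these would feed the recomputation of the optimal feedback coefficients at each step.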
Although linear optimal control theory is used to obtain the feedback coefficients, because the entries of the state-equation coefficient matrices change with the system's operating conditions, the control effect reflects the nonlinear characteristics of the power system; it is in essence a kind of nonlinear control.
Digital simulation results show that this excitation control system automatically tracks the system's operating conditions and identifies the continuously changing system parameters online, keeping the control effect in an optimal state. The dynamic quality of the control system and the operational stability of the power system are thereby improved.
(3) Nonlinear excitation control based on neural network inverse system method
The neural network inverse system method combines the neural network's ability to approximate non-linear functions and the linearization ability of the inverse system method to construct a physically achievable neural network inverse system, thereby achieving large-scale linearization of the controlled system. A pseudo-linear composite system can be constructed without the need for system parameters, thereby transforming the control problem of a non-linear system into a control problem of a linear system.
Under large disturbances, the transient time of the neural-network inverse-system controller is very short and the overshoot small, effectively improving the transient response quality and the stability of the power system; the controller is also very robust. In addition, the method needs neither the mathematical model and parameters of the original system nor measurement of the controlled system's state; it only requires that the controlled system be invertible and that the order of its input-output differential equation be known. The structure is simple and easy to realize in engineering.
(4) Optimal excitation control based on grey predictive control algorithm
Predictive control is a computer algorithm that uses multi-step prediction to enlarge the information reflecting the future trend of the process, and so can overcome the effects of uncertainty and complex variation. Grey predictive control is a branch of predictive control; it builds grey differential equations, from which the system can be analyzed comprehensively. A GM(1, N) model is built from the series of the generator's power deviation, speed deviation, and voltage deviation; after comprehensive analysis, predicted values of each state variable are obtained, and from these, using optimal control theory, the optimal feedback gains of the excitation control system are computed, yielding an optimal excitation control signal that incorporates prediction information.
The grey-modeling and "control in advance" concepts of grey predictive control theory compensate for the exact linearization and "after-the-fact control" of linear optimal control theory. Simulation results on a single-machine infinite-bus system show that this excitation control responds quickly and accurately, giving the power system good dynamic characteristics under both large and small disturbances.

Application of Optimal Control Theory in the Field of Control

At present, the most active research areas of optimal control theory are neural network optimization, the simulated annealing algorithm, the chemotaxis algorithm, genetic algorithms, robust control, predictive control, chaos optimization control, and steady-state hierarchical control. [2]
(1) Hopfield neural network optimization
Artificial neural network design is generally based on expert experience and practice. The most widely used is the error back-propagation network (BP network), a hierarchical neural network with three or more layers. According to neural network theory, a Hopfield network always moves in the direction of decreasing energy function and finally reaches an equilibrium point of the system. In other words, the minimum points of the Hopfield energy function are the stable equilibrium points of the system, so finding an equilibrium point of the system amounts to finding a minimum point of the energy function. If this optimization idea is applied to a control system, the objective function of the control system can be driven to exactly the desired minimum point.
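The energy-descent property can be demonstrated with a tiny discrete Hopfield network: with symmetric weights and zero self-connections, asynchronous updates never increase the energy, so the state settles at a stable equilibrium. This is a minimal sketch (the helper name `hopfield_settle` and the ±1 state convention are illustrative assumptions):

```python
import random

def hopfield_settle(W, s, sweeps=10, seed=0):
    """Asynchronous updates of a discrete Hopfield net with states in {-1, +1}.

    Returns the final state and the energy trace; for symmetric W with zero
    diagonal the trace is non-increasing, illustrating descent on the
    Hopfield energy function E = -1/2 * sum_ij w_ij s_i s_j.
    """
    rng = random.Random(seed)
    n = len(s)

    def energy(state):
        return -0.5 * sum(W[i][j] * state[i] * state[j]
                          for i in range(n) for j in range(n))

    trace = [energy(s)]
    for _ in range(sweeps):
        for i in rng.sample(range(n), n):      # random asynchronous order
            h = sum(W[i][j] * s[j] for j in range(n))  # local field
            s[i] = 1 if h >= 0 else -1
        trace.append(energy(s))
    return s, trace
```

Storing a pattern via outer-product weights and starting from a corrupted copy, the network descends in energy and recovers the stored pattern, i.e. it reaches the stable equilibrium point that is the energy minimum.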
(2) Simulated annealing algorithm
In 1983, Kirkpatrick and his collaborators proposed the simulated annealing (SA) method, a Monte Carlo technique for solving single-objective multivariable optimization problems. The method is an artificial simulation of a physical process, based on the crystallization of liquids or the annealing of metals. After a liquid or metal object is heated to a certain temperature, all of its molecules and atoms move freely in the state space. As the temperature decreases, these molecules and atoms gradually settle into different states, and when the temperature drops low enough they rearrange into a crystal structure composed of ordered atoms. The simulated annealing method has been widely used in production scheduling, neural network training, and image processing.
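The physical analogy translates directly into code: the Metropolis rule accepts some uphill moves while the "temperature" is high, then the search freezes into a low-cost state as the temperature is lowered. Below is a minimal 1-D sketch (the step size, cooling rate, and function name are illustrative assumptions, not fixed standards):

```python
import math
import random

def simulated_annealing(f, x0, t0=2.0, cooling=0.95, steps=400, seed=1):
    """Minimize f over the reals with a bare-bones simulated annealing loop."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        y = x + rng.gauss(0.0, 0.5)          # random neighbour move
        fy = f(y)
        # Metropolis criterion: always accept downhill, sometimes uphill
        if fy < fx or rng.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling                          # geometric cooling schedule
    return best, fbest
```

The occasional uphill acceptances early on are exactly what lets SA escape shallow local minima before the cooling schedule makes the search greedy.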
(3) Chemotaxis algorithm
The chemotaxis algorithm (CA) is a stochastic optimization method that simulates the chemotaxis of bacteria during their growth process. Its characteristics are a simple structure and ease of use. During the search, CA moves only in directions that improve the solution; whether it can escape a local minimum depends on the size of the step variance. Its global search ability is weaker than that of simulated annealing and genetic algorithms, but its local search ability is stronger and its convergence is faster.
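A toy version of this improving-moves-only search makes the contrast with simulated annealing concrete: no uphill move is ever accepted, so convergence to a nearby minimum is fast but there is no mechanism to escape it beyond the step variance `sigma`. (The function name and parameter values are illustrative assumptions.)

```python
import random

def chemotaxis_search(f, x0, sigma=0.5, iters=300, seed=2):
    """Chemotaxis-style local search: accept a random step only if it improves."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    for _ in range(iters):
        y = x + rng.gauss(0.0, sigma)  # tentative random "tumble"
        fy = f(y)
        if fy < fx:                    # keep the move only if it helps
            x, fx = y, fy
    return x, fx
```

A larger `sigma` makes stepping over a small local basin more likely, at the cost of slower fine convergence, which is the trade-off the text describes.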
(4) Genetic Algorithm
Genetic algorithm (GA) is a random search algorithm that simulates natural selection and genetics. It is a highly parallel, random, adaptive optimization algorithm that models the "survival of the fittest" evolutionary process in nature. It represents the solution of a problem as the survival process of "chromosomes": through continuous evolution of the chromosome population, including operations such as replication, crossover, and mutation, the population finally converges to the best-adapted individual, yielding an optimal or satisfactory solution to the problem. GA is a general-purpose optimization algorithm; its coding technique and genetic operations are relatively simple, and its optimization is largely unrestricted by constraint conditions. Its two most significant features are implicit parallelism and global search of the solution space. With the development of computer technology, GA has attracted increasing attention and has been successfully applied in machine learning, pattern recognition, image processing, neural networks, optimal control, combinatorial optimization, VLSI design, genetics, and other fields.
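The three operators named above can be sketched in a few lines for binary chromosomes. The operator choices here (tournament selection for replication, one-point crossover, bit-flip mutation) and the rates are illustrative assumptions; many variants exist.

```python
import random

def genetic_maximize(fitness, n_bits=12, pop_size=30, gens=60, seed=3):
    """Bare-bones binary GA: replication, crossover, and mutation.

    fitness maps a bit-list to a number to be maximized.
    """
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]

    def tournament():
        a, b = rng.sample(pop, 2)        # replication via tournament selection
        return max(a, b, key=fitness)

    for _ in range(gens):
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, n_bits)           # one-point crossover
            child = p1[:cut] + p2[cut:]
            for i in range(n_bits):                  # bit-flip mutation
                if rng.random() < 1.0 / n_bits:
                    child[i] ^= 1
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)
```

On a simple objective such as maximizing the number of ones in the string, the population converges toward the all-ones chromosome, the "best-adapted" individual.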
(5) Robust control
Robust control is a control system design method for uncertain systems. The main research issues in its theory are the description of uncertain systems, the analysis and design of robust control systems, and the application fields of robust control theory. One of the most prominent signs of the development of robust control theory is H∞ control, which is essentially an optimal control theory in the frequency domain. The combination of robust control and optimal control has solved many practical problems, such as linear-quadratic control, motor speed regulation, tracking control, sampled-data control, stabilization of discrete systems, and disturbance suppression.
(6) Predictive control
Predictive control, also known as model-based predictive control, is a newer class of computer-optimized control algorithms. Its essential characteristics are the predictive model, rolling optimization, and feedback correction. The rolling optimization is performed repeatedly online, so the time window covered by the performance index shifts at each sampling instant: the optimization is relative and repeated, not performed once in absolute form. This rolling optimization can compensate in time for uncertainties caused by various factors and always builds the next optimization on the latest measurements, so that the control system maintains actual optimality.
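The receding-horizon loop can be sketched for a scalar plant. The sketch below assumes a toy model x⁺ = a·x + b·u, a quadratic stage cost, and brute-force search over a small candidate input set held constant across the horizon; all of these are simplifying assumptions to expose the structure (re-optimize, apply only the first input, repeat from the measured state), not a production MPC design.

```python
def mpc_step(x, a=0.9, b=0.5, horizon=5, candidates=None):
    """One rolling-optimization step for the scalar plant x+ = a*x + b*u."""
    if candidates is None:
        candidates = [i / 10.0 for i in range(-20, 21)]  # u in [-2, 2]

    def horizon_cost(u):
        xp, cost = x, 0.0
        for _ in range(horizon):
            xp = a * xp + b * u              # predictive model
            cost += xp * xp + 0.1 * u * u    # quadratic stage cost
        return cost

    return min(candidates, key=horizon_cost)

def run_mpc(x0, steps=20):
    """Rolling optimization: re-solve at every sample, apply the first input."""
    x, traj = x0, [x0]
    for _ in range(steps):
        u = mpc_step(x)        # optimize over the shifted horizon
        x = 0.9 * x + 0.5 * u  # "real" plant (here identical to the model)
        traj.append(x)
    return traj
```

Because each step restarts the optimization from the newly measured state, a model-plant mismatch would be corrected at the next sample, which is the feedback-correction property described above.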
(7) Chaos optimization control
Chaos is a universal nonlinear phenomenon: random-like behavior can arise in deterministic nonlinear systems without any additional random factors, yet it possesses delicate inherent regularity. Chaotic motion is characterized by randomness, ergodicity, and regularity. Its basic feature is the instability of the motion orbit, manifested as sensitive dependence on initial values, i.e., extreme sensitivity to small disturbances. Within a certain range, chaotic motion traverses all states repeatedly according to its own laws; this ergodicity can be exploited in optimization searches to avoid falling into local minima. Chaos optimization has therefore become an emerging search-optimization technique.
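A common way to use this ergodicity, sketched below under the assumption of the logistic map as the chaos generator, is to map a chaotic sequence onto the search interval so that the probes keep visiting the whole range instead of clustering near one local minimum. (The function name and the single-phase search are illustrative; practical chaos optimization often adds a second, refined search phase.)

```python
def chaos_search(f, lo, hi, iters=500, z0=0.123):
    """Chaos optimization sketch driven by the logistic map z+ = 4 z (1 - z)."""
    z = z0                      # z0 must avoid fixed/periodic points like 0, 0.75
    best_x, best_f = lo, f(lo)
    for _ in range(iters):
        z = 4.0 * z * (1.0 - z)         # logistic map in its chaotic regime
        x = lo + (hi - lo) * z          # carrier mapping onto [lo, hi]
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f
```

The deterministic chaotic orbit plays the role that random sampling plays elsewhere, but with the ergodic coverage property the text emphasizes.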
(8) Steady-state hierarchical control
Hierarchical control is a computer-based online steady-state optimizing control structure. Its guiding idea is to decompose a large system into several interrelated subsystems, that is, to decompose the optimal control problem of the large system into subproblems for each subsystem. A coordinator is set above the subsystems to determine whether the results the subsystems obtain for their subproblems are suitable for the optimal control of the whole large system; if not, it instructs each subsystem to modify its subproblem and recalculate. Through this iterative exchange between the coordinator and the subsystems, the optimal solution can be obtained.
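One classical instance of this two-level iteration, sketched below as an assumption-laden toy, is price-type coordination: each subsystem solves its own small problem for a given "price" `lam`, and the coordinator adjusts `lam` until the subsystem answers satisfy a shared resource constraint. (The quadratic subproblems, the gradient-step coordinator, and all names here are illustrative.)

```python
def coordinate(subproblems, resource, steps=200, rate=0.2):
    """Two-level price coordination for a shared constraint sum(x_i) = resource.

    Each entry of subproblems maps a price lam to that subsystem's own
    minimizer of f_i(x) + lam * x; the coordinator iterates on lam until
    the independently computed answers balance the shared resource.
    """
    lam = 0.0
    xs = [0.0] * len(subproblems)
    for _ in range(steps):
        # lower level: subsystems solve their modified subproblems independently
        xs = [solve(lam) for solve in subproblems]
        # upper level: coordinator moves the price toward constraint balance
        lam += rate * (sum(xs) - resource)
    return xs, lam
```

For quadratic subsystem costs (x - c)², the subproblem minimizer is c - lam/2, so the coordinator's price iteration converges geometrically, and the final allocation is optimal for the whole system even though no level ever solved the full problem at once.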

Application of optimal control theory in other fields

The application of optimal control theory in management science has produced many valuable results. Representative among them is the book Optimal Control Theory: Applications to Management Science by the American scholars S. P. Sethi and G. L. Thompson [3]. The book outlines applications of optimal control theory to problems of optimal investment in finance, production and inventory, sales, and the maintenance and replacement of machinery and equipment; its economic applications are mainly based on econometric models of macroeconomic interdependence, providing economic forecasts and explaining the dynamic behavior of economic problems. In Zhu Daoli's Large System Optimization Theory and Application, optimal control theory is used to establish an economic model, and the GRG algorithm is used to explain economic problems, forming the economic optimal control branch of economics [4]. Many experts have also demonstrated the prominent role of optimal control in economics when studying dynamic optimal-stability economic policies. In terms of natural resources and population, optimal control theory can be applied to the allocation of non-renewable and renewable resources. Applications of optimal control to talent allocation have also been reported.
