What Is Constrained Optimization?

A constrained optimization problem is a nonlinear programming problem with constraints. Constrained optimization problems with only equality constraints can be converted into unconstrained optimization problems and solved by the elimination method, the Lagrange multiplier method, or the penalty function method. For problems with both equality and inequality constraints, the following approaches can be used: reducing the inequality constraints to equality constraints; reducing the constrained problem to an unconstrained problem; approximating the nonlinear programming problem by linear approximation methods; and performing a one-dimensional search along a direction within the feasible region to find the optimal solution [1].
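As a small numerical illustration of such a problem, the sketch below assumes SciPy is available and uses its SLSQP solver; the objective $(x-1)^2 + (y-2)^2$, the equality constraint $x + y = 1$, and the starting point are illustrative choices, not taken from this article.

```python
# Minimal sketch (assumes SciPy): minimize f(x, y) = (x - 1)^2 + (y - 2)^2
# subject to the equality constraint x + y = 1, using the SLSQP method.
import numpy as np
from scipy.optimize import minimize

def objective(v):
    x, y = v
    return (x - 1) ** 2 + (y - 2) ** 2

# Equality constraint written as fun(v) == 0, i.e. x + y - 1 = 0.
constraints = [{"type": "eq", "fun": lambda v: v[0] + v[1] - 1}]

result = minimize(objective, x0=np.array([0.0, 0.0]),
                  method="SLSQP", constraints=constraints)
print(result.x)  # expected to be close to (0, 1), the constrained minimizer
```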

A constrained optimization problem is a nonlinear programming problem with constraints. The general form of the minimization problem is
\[
\min\ f(x) \quad \text{s.t.} \quad g_i(x) \le 0,\ i = 1, \dots, m, \qquad h_j(x) = 0,\ j = 1, \dots, l .
\]
In the two-variable case discussed below, the constrained optimization problem is to find the extreme values of an objective function $z = f(x, y)$ subject to the constraint $\varphi(x, y) = 0$. For this reason, constrained optimization is also called the conditional extreme value problem [2].
There are two methods for solving such a constrained optimization problem:

Constrained optimization problem

Example 1 (maximum area). Let the sum of the length and width of a rectangle equal $a$.
How should the length and width be chosen so that the area of the rectangle is as large as possible?
Solution: This is a constrained optimization problem. Let the length and width of the rectangle be $x$ and $y$; we seek the maximum of the objective function $A = xy$ under the condition $x + y = a$.
Since it is easy to solve $y = a - x$ from the constraint $x + y = a$, substituting into the objective function gives $A(x) = x(a - x)$.
The problem thus reduces to finding the extreme value of the one-variable function $A(x)$.
From $A'(x) = a - 2x = 0$ we obtain the stationary point $x = a/2$. Since this is a practical problem, a maximum must exist, so $x = a/2$ is the maximum point. Therefore, when $x = y = a/2$, that is, when the rectangle is a square, its area is largest, and the maximum value is $A = a^2/4$.
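A quick symbolic check of this example (a sketch assuming SymPy is available; the variable names are illustrative):

```python
# Symbolic check of Example 1 (assumes SymPy): maximize A = x*y
# subject to x + y = a by eliminating y and differentiating.
import sympy as sp

x, a = sp.symbols("x a", positive=True)
A = x * (a - x)                                # substitute y = a - x into A = x*y
stationary = sp.solve(sp.diff(A, x), x)        # solve A'(x) = a - 2x = 0
print(stationary)                              # [a/2]
print(sp.simplify(A.subs(x, stationary[0])))   # a**2/4
```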
From the above example we can see the idea of turning a constrained optimization problem into an unconstrained one: from the constraint $\varphi(x, y) = 0$, solve for $y = \psi(x)$ (or for $x$ as a function of $y$) and substitute it into the objective function $f(x, y)$. The problem then becomes the unconstrained problem of finding the extreme values of the one-variable function $f(x, \psi(x))$.
However, this approach has limitations, because it is sometimes not easy to solve the constraint $\varphi(x, y) = 0$ for $y$ or $x$ explicitly. Therefore, another method is introduced below [2].

Lagrange multiplier method for constrained optimization problems

The idea of this method is to turn the constrained optimization problem into an unconstrained one, and to ask what conditions an extreme point must satisfy.
Assume that $(x_0, y_0)$ is an extreme point of the function $z = f(x, y)$ under the constraint $\varphi(x, y) = 0$, that is, an extreme point of the constrained optimization problem. Suppose the functions $f(x, y)$ and $\varphi(x, y)$ have continuous partial derivatives in a neighborhood of the point $(x_0, y_0)$, and $\varphi_x(x_0, y_0)$, $\varphi_y(x_0, y_0)$ are not both 0 (say $\varphi_y(x_0, y_0) \ne 0$), so that the equation $\varphi(x, y) = 0$ determines an implicit function $y = \psi(x)$ near $x_0$. According to Fermat's lemma, the derivative of the one-variable function $z = f(x, \psi(x))$ at the point $x_0$ is zero:
\[
f_x(x_0, y_0) + f_y(x_0, y_0)\,\frac{dy}{dx}\Big|_{x = x_0} = 0 .
\]
By implicit differentiation of $\varphi(x, y) = 0$, there is
\[
\varphi_x(x_0, y_0) + \varphi_y(x_0, y_0)\,\frac{dy}{dx}\Big|_{x = x_0} = 0 ,
\]
and since $\varphi_y(x_0, y_0) \ne 0$, we obtain
\[
\frac{dy}{dx}\Big|_{x = x_0} = -\,\frac{\varphi_x(x_0, y_0)}{\varphi_y(x_0, y_0)} .
\]
Substituting this into the above formula and eliminating $\dfrac{dy}{dx}$, we get
\[
f_x(x_0, y_0) - f_y(x_0, y_0)\,\frac{\varphi_x(x_0, y_0)}{\varphi_y(x_0, y_0)} = 0 .
\]
Let
\[
\lambda = -\,\frac{f_y(x_0, y_0)}{\varphi_y(x_0, y_0)} .
\]
Then $(x_0, y_0)$ and $\lambda$ satisfy the system of equations
\[
\begin{cases}
f_x(x, y) + \lambda\,\varphi_x(x, y) = 0,\\[2pt]
f_y(x, y) + \lambda\,\varphi_y(x, y) = 0,\\[2pt]
\varphi(x, y) = 0 .
\end{cases}
\tag{1}
\]
A point $(x, y)$ satisfying this system of equations (1) is called a possible extreme point.
To make the system (1) easier to remember and write down, we construct the function
\[
L(x, y) = f(x, y) + \lambda\,\varphi(x, y),
\]
which is called the Lagrange function. Then the system of equations (1) can be written as
\[
\begin{cases}
L_x(x, y) = f_x(x, y) + \lambda\,\varphi_x(x, y) = 0,\\[2pt]
L_y(x, y) = f_y(x, y) + \lambda\,\varphi_y(x, y) = 0,\\[2pt]
\varphi(x, y) = 0 .
\end{cases}
\]
Therefore, the steps for solving a constrained optimization problem by the Lagrange multiplier method can be summarized as follows:
1. Construct the Lagrange function $L(x, y) = f(x, y) + \lambda\,\varphi(x, y)$, where $\lambda$ is called the Lagrange multiplier;
2. Solve the system of equations $L_x = 0$, $L_y = 0$, $\varphi(x, y) = 0$; each point $(x, y)$ obtained is a possible extreme point;
3. According to the nature of the actual problem, determine whether a possible extreme point gives an extreme value, and find that value [2].
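These steps can be carried out symbolically. The sketch below assumes SymPy is available and applies the method to Example 1, with $f = xy$ and $\varphi = x + y - a$:

```python
# Lagrange multiplier method applied to Example 1 (assumes SymPy):
# maximize f = x*y subject to phi = x + y - a = 0.
import sympy as sp

x, y, a = sp.symbols("x y a", positive=True)
lam = sp.symbols("lam")
f = x * y
phi = x + y - a
L = f + lam * phi                    # Lagrange function L = f + lambda*phi

# System (1): L_x = 0, L_y = 0, phi = 0
solutions = sp.solve([sp.diff(L, x), sp.diff(L, y), phi], [x, y, lam], dict=True)
print(solutions)                     # [{x: a/2, y: a/2, lam: -a/2}]
```

The possible extreme point is $x = y = a/2$, in agreement with the elimination method, and the nature of the problem tells us it is indeed the maximum.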
