Calculus Optimization Methods/Lagrange Multipliers

The method of Lagrange multipliers solves the constrained optimization problem of optimizing f subject to a constraint g(x_1, x_2, \dots, x_n) = k by transforming it into an unconstrained optimization problem of the form:

  • \mathcal{L}(x_1, x_2, \dots, x_n, \lambda) = f(x_1, x_2, \dots, x_n) + \lambda\,(k - g(x_1, x_2, \dots, x_n))

Then finding the gradient and Hessian of \mathcal{L}, as was done above, will determine any optimum values of \mathcal{L}(x_1, x_2, \dots, x_n, \lambda).
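
As an illustration of this construction (not part of the original example), here is a minimal SymPy sketch; the objective f_expr, constraint function g_expr, and constant k are hypothetical placeholders:

  import sympy as sp

  x1, x2, lam = sp.symbols('x1 x2 lambda', real=True)

  # Hypothetical objective f and constraint g(x1, x2) = k
  f_expr = x1**2 + x2**2
  g_expr = x1 + 2*x2
  k = 3

  # Unconstrained Lagrangian: L = f + lambda * (k - g)
  L = f_expr + lam*(k - g_expr)

  # Stationary points come from solving grad L = 0 in (x1, x2, lambda)
  grad = [sp.diff(L, v) for v in (x1, x2, lam)]
  print(sp.solve(grad, [x1, x2, lam], dict=True))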

Suppose we now want to find optimum values for f(x, y) = 2x^2 + y^2 subject to x + y = 1, from [2].

Then the Lagrangian method results in the unconstrained function:

  • \mathcal{L}(x, y, \lambda) = 2x^2 + y^2 + \lambda(1 - x - y)

The gradient of this new function is

  • \mathcal{L}_x(x, y, \lambda) = 4x - \lambda = 0
  • \mathcal{L}_y(x, y, \lambda) = 2y - \lambda = 0
  • \mathcal{L}_\lambda(x, y, \lambda) = 1 - x - y = 0
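
These three equations can be reproduced symbolically; the following is a minimal sketch, assuming SymPy is available:

  import sympy as sp

  x, y, lam = sp.symbols('x y lambda', real=True)

  # Lagrangian for f(x, y) = 2x^2 + y^2 with constraint x + y = 1
  L = 2*x**2 + y**2 + lam*(1 - x - y)

  # Partial derivatives give the three stationarity equations above:
  # 4x - lambda, 2y - lambda, 1 - x - y
  eqs = [sp.diff(L, v) for v in (x, y, lam)]
  print(eqs)
  print(sp.solve(eqs, [x, y, lam], dict=True))  # x = 1/3, y = 2/3, lambda = 4/3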

The stationary points of the above equations can be obtained from their matrix form.

\begin{bmatrix} 4 & 0 & -1 \\ 0 & 2 & -1 \\ 1 & 1 & 0 \end{bmatrix} \begin{bmatrix} x \\ y \\ \lambda \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}

This results in x = 1/3, y = 2/3, \lambda = 4/3.
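
The same linear system can also be solved numerically; a minimal sketch, assuming NumPy is available:

  import numpy as np

  # Coefficient matrix and right-hand side of the stationarity system
  A = np.array([[4.0, 0.0, -1.0],
                [0.0, 2.0, -1.0],
                [1.0, 1.0,  0.0]])
  b = np.array([0.0, 0.0, 1.0])

  x, y, lam = np.linalg.solve(A, b)
  print(x, y, lam)   # approximately 0.333, 0.667, 1.333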

Next we can use the Hessian as before to determine the type of this stationary point.

H(\mathcal{L}) = \begin{bmatrix} 4 & 0 & -1 \\ 0 & 2 & -1 \\ -1 & -1 & 0 \end{bmatrix}

Since \det H(\mathcal{L}) = -6 < 0, which for a problem with two variables and one constraint indicates a constrained minimum, the solution (1/3, 2/3, 4/3) minimizes f(x, y) = 2x^2 + y^2 subject to x + y = 1, with f(1/3, 2/3) = 2/3.
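
A quick numerical check of this classification; a minimal sketch, assuming NumPy, that evaluates the determinant of the bordered Hessian and the objective at the stationary point:

  import numpy as np

  # Hessian of L with respect to (x, y, lambda) -- the bordered Hessian
  H = np.array([[ 4.0,  0.0, -1.0],
                [ 0.0,  2.0, -1.0],
                [-1.0, -1.0,  0.0]])

  # With two variables and one constraint, a negative determinant
  # indicates a constrained minimum at the stationary point.
  print(np.linalg.det(H))          # approximately -6

  # Objective value at the stationary point (1/3, 2/3)
  x, y = 1.0/3.0, 2.0/3.0
  print(2*x**2 + y**2)             # approximately 2/3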
