2-504-09 Lecture Notes - Lecture 5: Column Generation, Row And Column Vectors, Level Set

Sensitivity analysis in linear programming
Jacques Desrosiers and Jean Bertrand Gauthier (September 2016, January 2017)
Introduction
Linear programming is a powerful mathematical tool to model and solve a variety of
management problems. While it may not be perfect, it is often used to approximate more
complex models, typically involving nonlinear components. You may even recall the
nonlinear variant of the ADA restaurant model, whose resolution time was greatly affected. In
this spirit, LP solvers are extremely reliable and efficient; so much so that the
approximation becomes warranted. It remains to evaluate the quality of the solution found.
Consider the following linear program, denoted LP, of dimension m × n:

    minimize    z = Σ_{j=1..n} c_j x_j
    subject to  Σ_{j=1..n} a_ij x_j = b_i    [π_i],    i = 1, ..., m,
                l_j ≤ x_j ≤ u_j,             j = 1, ..., n.

This model contains n bounded variables such that each variable x_j is restricted to the
interval [l_j, u_j], where at least one of these bounds is finite. This formulation is more
general than the standard form, which imposes non-negativity, that is, x_j ≥ 0, j = 1, ..., n.
We also find m equality constraints (slack variables added as needed), where the symbol π_i
appears in brackets next to the i-th constraint. Sensitivity analysis otherwise being an irrelevant
exercise, let us further assume that LP is feasible with a finite optimal solution, say x*. The
latter is an extreme point of the feasible domain.
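As a concrete illustration of this formulation, the following sketch solves a tiny bounded LP with scipy.optimize.linprog. The instance (two variables, one equality constraint, finite bounds) is invented for illustration and does not come from the notes:

```python
# Illustrative instance of LP with bounded variables (n = 2, m = 1):
# minimize x0 + 2*x1 subject to x0 + x1 = 3, with x0 in [0, 2], x1 in [0, 5].
from scipy.optimize import linprog

c = [1, 2]                  # objective coefficients c_j
A_eq = [[1, 1]]             # constraint matrix coefficients a_ij
b_eq = [3]                  # right-hand side terms b_i
bounds = [(0, 2), (0, 5)]   # intervals [l_j, u_j]

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
print(res.x, res.fun)       # optimal x* = [2, 1] and objective value z* = 4
```

The cheaper variable x0 is pushed to its upper bound 2, and x1 = 1 covers the rest of the constraint, giving z* = 4.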
Aside from the variables, whose values are dictated by the outcome of the solver, the
remainder of the linear program is composed of known data or parameters. The latter term
is customarily used to refer to the objective function coefficients (c_j), right-hand side
(RHS) terms (b_i), variable bounds (l_j and u_j), as well as the constraint matrix coefficients (a_ij).
Of course, the optimized solution for the variables (x_j) is then influenced by the parameters.
What happens to this solution if certain parameters are modified? Enter sensitivity
analysis.
1. Dissecting an optimal solution
An optimal solution of LP consists of three components:
- the objective function value z*,
- a primal vector x* = (x_1*, ..., x_n*) of the model's variables,
- a dual vector π* = (π_1*, ..., π_m*) associated with the constraints,
where z* = Σ_{j=1..n} c_j x_j*. Whether one solves LP through Excel or using industrial-grade
software such as Gurobi or IBM CPLEX, the solver grants access to all three of these
components for any optimal solution. Sometimes the retained solution is not unique, that
is, the optimal value of the objective function can be obtained with multiple vectors x*
or π*. Since in-depth primal and dual properties of linear programs are not mandatory to
comprehend sensitivity analysis, let us state without proof the stopping conditions satisfied
by any LP solver. The latter are useful to interpret the retained optimal solution.
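With scipy's linprog (HiGHS backend), for instance, the three components can be read off the result object of a small invented instance; the attribute names below follow scipy's documented result structure, and dual sign conventions can differ between solvers:

```python
# Retrieving the three components of an optimal solution with scipy:
# z* = res.fun, x* = res.x, and the dual vector pi* = res.eqlin.marginals
# (documented as the sensitivity of the objective to b_eq).
from scipy.optimize import linprog

res = linprog([1, 2], A_eq=[[1, 1]], b_eq=[3],
              bounds=[(0, 2), (0, 5)], method="highs")

z_star = res.fun                # objective function value
x_star = res.x                  # primal vector
pi_star = res.eqlin.marginals   # dual vector, one entry per equality constraint
print(z_star, x_star, pi_star)
```

Here the single dual value has magnitude 2: increasing the right-hand side from 3 to 3 + ε must be absorbed by the interior variable x1 at cost 2 per unit.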
Definition 1. Let x_j be a variable of cost c_j and constraint column a_j = (a_1j, ..., a_mj). For a
vector π of dimension m, we define the reduced cost as

    d_j = c_j − π·a_j = c_j − (π_1 a_1j + ... + π_m a_mj),

where π·a_j is the dot product between the vector π and the column
vector a_j.
It is a simple computation: with the data shown below, it yields a
reduced cost of d_j = 50 − (−63) = 113.
Theorem 1. Optimality conditions (minimization)
Let x* be a feasible solution to LP. This solution is optimal if and only if there exists a
vector π such that, for all variables x_j, we have:

    d_j > 0 ⇒ x_j* = l_j;    d_j < 0 ⇒ x_j* = u_j;    l_j < x_j* < u_j ⇒ d_j = 0.

These conditions are necessary and sufficient to ascertain optimality. They can be read as
follows: for any optimal solution,
- if the reduced cost of a variable is positive, then this variable is at its lower bound;
- if the reduced cost is negative, then it is at its upper bound;
- if the variable has a value strictly between its bounds, then its reduced cost is null.
Example data for Definition 1:

    c_j = 50,    a_j = (8, −1, 3, 4, −6),    π = (−5, 4, 3, 2, 6).
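The dot-product computation of Definition 1 can be carried out directly on this data (the pairing of the two five-entry columns with a_j and π is assumed from the original layout; the dot product is the same either way):

```python
# Reduced cost d_j = c_j - pi . a_j for the example data above.
import numpy as np

c_j = 50
a_j = np.array([8, -1, 3, 4, -6])
pi = np.array([-5, 4, 3, 2, 6])  # dual vector (assumed pairing of the columns)

d_j = c_j - pi @ a_j             # 50 - (-63)
print(d_j)                       # → 113
```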
Observe the direction of the implications: there are no other interpretations such
as x_j* = l_j ⇒ d_j > 0, nor x_j* = u_j ⇒ d_j < 0, nor d_j = 0 ⇒ l_j < x_j* < u_j.
These optimality conditions are in fact analogous to those encountered upon minimizing a
continuous convex function f defined over the interval [l, u]:
- if the derivative at x* is positive, then x* is at its lower bound;
- if the derivative at x* is negative, then x* is at its upper bound;
- if x* is strictly within its bounds, then the derivative at x* is null.
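A concrete convex example (invented for illustration): minimizing f(x) = (x − 5)² over [0, 3]. The unconstrained minimizer x = 5 lies beyond the upper bound, so the constrained minimum sits at the upper bound with a negative derivative there:

```python
# Minimize the convex function f(x) = (x - 5)**2 over the interval [0, 3].
def f(x):
    return (x - 5) ** 2

def df(x):
    return 2 * (x - 5)

l, u = 0, 3
x_star = min(max(5, l), u)   # project the unconstrained minimizer x = 5 onto [l, u]
print(x_star, df(x_star))    # → 3 -4: negative derivative, x* at its upper bound
```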
Going back to linear programs, the fact that these conditions are expressed using reduced
costs simply comes from an equivalent linear program which rewrites the objective
function as

    z = π·b + Σ_{j=1..n} d_j x_j,

using a vector π which satisfies the constraints of LP.
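This rewriting of the objective can be checked numerically on a small invented instance, with π derived by hand from the optimality conditions (the interior variable must have a null reduced cost):

```python
import numpy as np

# Illustrative LP: minimize x0 + 2*x1 s.t. x0 + x1 = 3, x0 in [0, 2], x1 in [0, 5].
c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0]])
b = np.array([3.0])
x_star = np.array([2.0, 1.0])   # optimal: x0 at its upper bound, x1 interior
pi = np.array([2.0])            # chosen so that the interior variable has d_1 = 0

d = c - pi @ A                  # reduced costs: [-1, 0]
z_direct = c @ x_star           # c . x* = 4
z_rewrite = pi @ b + d @ x_star # pi . b + sum_j d_j x_j = 6 + (-1)(2) + 0 = 4
print(z_direct, z_rewrite)
```

Both expressions agree, and the negative reduced cost d_0 = −1 correctly places x0 at its upper bound.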
In the case of a maximization problem, similar conditions can be deduced.
Theorem 1. Optimality conditions (maximization)
Let x* be a feasible solution to LP. This solution is optimal if and only if there exists a
vector π such that, for all variables x_j, we have:

    d_j > 0 ⇒ x_j* = u_j;    d_j < 0 ⇒ x_j* = l_j;    l_j < x_j* < u_j ⇒ d_j = 0.
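The maximization case can be tried on the same invented instance; since scipy.optimize.linprog minimizes, the objective is negated:

```python
# Maximize x0 + 2*x1 subject to x0 + x1 = 3, x0 in [0, 2], x1 in [0, 5].
# linprog minimizes, so negate the objective coefficients and the optimal value.
from scipy.optimize import linprog

res = linprog([-1, -2], A_eq=[[1, 1]], b_eq=[3],
              bounds=[(0, 2), (0, 5)], method="highs")
print(res.x, -res.fun)   # → [0. 3.] 6.0
```

All weight now goes to the more valuable x1, while x0 sits at its lower bound, consistent with a negative reduced cost under the maximization conditions.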
Theorem 2. Extreme point solutions
If there exists an optimal solution to LP, then there exists an optimal solution which lies on
an extreme point of the feasible region.
Granted we assume the optimal value z* is finite for LP, it is then possible to only consider extreme point
solutions. In the primal simplex algorithm (Dantzig, 1947), such solutions are qualified as
basic. In any basic solution, there are at most m variables which take their value strictly
within the interval bounds, the remaining variables then being at either their lower or upper
bound.
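This property can be observed on the small invented instance used above, which has m = 1 constraint: the basic optimal solution leaves at most one variable strictly between its bounds.

```python
# Count the variables strictly between their bounds in the optimal basic
# solution of the illustrative LP (m = 1 equality constraint).
from scipy.optimize import linprog

bounds = [(0, 2), (0, 5)]   # intervals [l_j, u_j]
res = linprog([1, 2], A_eq=[[1, 1]], b_eq=[3], bounds=bounds, method="highs")

tol = 1e-9                  # tolerance against floating-point round-off
interior = sum(1 for xj, (l, u) in zip(res.x, bounds) if l + tol < xj < u - tol)
print(interior)             # → 1, which is at most m = 1
```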

