Given a linear programming problem (the primal)
max Cx
subject to
Ax ≤ b
x ≥ 0
where A ∈ M_{m×n}, rank(A) = m, b ∈ R^m, C ∈ R^n and x ∈ R^n,
we associate with it another problem, which we call its dual:
min bw
subject to
Atw ≥ C
w ≥ 0
with the same A ∈ M_{m×n}, rank(A) = m, b ∈ R^m, C ∈ R^n, but now w ∈ R^m.
That is, the cost vector of the primal becomes the constraints (right-hand-side) vector of the dual and vice versa, and the constraint matrix is transposed in the dual.
In some cases one problem is easier to solve than the other, or conditions imposed on one determine properties of the solutions of the other.
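For instance, a concrete primal–dual pair (an illustrative example, not taken from the text) can be written as:

```latex
\text{Primal:}\quad
\begin{aligned}
\max\;& 3x_1 + 5x_2\\
& x_1 + 2x_2 \le 4\\
& 3x_1 + x_2 \le 6\\
& x_1, x_2 \ge 0
\end{aligned}
\qquad
\text{Dual:}\quad
\begin{aligned}
\min\;& 4w_1 + 6w_2\\
& w_1 + 3w_2 \ge 3\\
& 2w_1 + w_2 \ge 5\\
& w_1, w_2 \ge 0
\end{aligned}
```

Here the primal cost vector (3, 5) reappears as the dual right-hand side, the primal right-hand side (4, 6) becomes the dual cost vector, and the dual constraint matrix is the transpose of the primal one.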
More generally, suppose we have a linear programming problem
max Σ_{j∈N} c_j x_j
subject to
Σ_{j∈N} a_ij x_j ≤ b_i,   i ∈ M1
Σ_{j∈N} a_ij x_j = b_i,   i ∈ M − M1
x_j ≥ 0,   j ∈ N1
That is, only some of the variables are restricted in sign, and only some of the constraints are inequalities.
Then the dual problem becomes:
min Σ_{i∈M} b_i w_i
subject to
Σ_{i∈M} a_ij w_i ≥ c_j,   j ∈ N1
Σ_{i∈M} a_ij w_i = c_j,   j ∈ N − N1
w_i ≥ 0,   i ∈ M1
That is:
1. An inequality constraint in the primal corresponds to a sign-restricted variable in the dual.
2. An equality constraint in the primal corresponds to a variable unrestricted in sign in the dual.
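As an illustration of rules 1 and 2, take a hypothetical two-by-two instance with M1 = {1}, M − M1 = {2}, N1 = {1}, N − N1 = {2}:

```latex
\text{Primal:}\;
\begin{aligned}
\max\;& c_1 x_1 + c_2 x_2\\
& a_{11}x_1 + a_{12}x_2 \le b_1 && (1 \in M_1)\\
& a_{21}x_1 + a_{22}x_2 = b_2 && (2 \in M - M_1)\\
& x_1 \ge 0,\; x_2 \text{ free} && (1 \in N_1)
\end{aligned}
\qquad
\text{Dual:}\;
\begin{aligned}
\min\;& b_1 w_1 + b_2 w_2\\
& a_{11}w_1 + a_{21}w_2 \ge c_1 && (1 \in N_1)\\
& a_{12}w_1 + a_{22}w_2 = c_2 && (2 \in N - N_1)\\
& w_1 \ge 0,\; w_2 \text{ free} && (1 \in M_1)
\end{aligned}
```

The inequality (row 1) and sign-restricted x_1 of the primal produce the sign-restricted w_1 and the dual inequality, while the equality (row 2) and the free x_2 produce the free w_2 and the dual equality.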
a) If one of the two problems has a finite optimal solution, then so does the other.
b) If the primal problem is unbounded, then the dual has no feasible solution (the converse is not true).
Given a pair of dual problems, exactly one of the following conditions holds:
1) Neither problem has a feasible solution.
2) One problem is feasible but unbounded, and the other has no feasible solution.
3) Both problems have finite optimal solutions.
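Case 1 can actually occur; a standard illustrative pair (not from the text) is:

```latex
\text{Primal:}\quad \max\; x_1 + x_2 \quad\text{s.t.}\quad
x_1 - x_2 \le 1,\;\; -x_1 + x_2 \le -2,\;\; x_1, x_2 \ge 0
```

The two primal constraints force x_1 − x_2 ≤ 1 and x_1 − x_2 ≥ 2 simultaneously, so the primal is infeasible; the dual constraints w_1 − w_2 ≥ 1 and −w_1 + w_2 ≥ 1 sum to 0 ≥ 2, so the dual is infeasible as well.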
Corollary 2 (Duality Theorem)
Given a pair of dual problems, a necessary and sufficient condition for a primal feasible solution x to be optimal is that there exists a dual feasible solution w such that
Cx = bw
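The theorem can be checked numerically on a small instance. The sketch below uses hypothetical data and a pure-Python vertex enumeration (not a real LP solver): for a two-variable LP a finite optimum is attained at a vertex, i.e. at the intersection of two constraint lines, so it suffices to enumerate those intersections, keep the feasible ones, and compare the two optimal values.

```python
# Brute-force check of Cx = bw at optimality on an illustrative pair:
#   Primal:  max 3x1 + 5x2   s.t.  x1 + 2x2 <= 4,  3x1 + x2 <= 6,  x >= 0
#   Dual:    min 4w1 + 6w2   s.t.  w1 + 3w2 >= 3,  2w1 + w2 >= 5,  w >= 0
from itertools import combinations

def solve_2d_lp(constraints, obj, maximize):
    """constraints: list of (a, b, c) meaning a*x + b*y <= c.
    Returns (best_value, best_point) over the feasible vertices."""
    eps = 1e-9
    best = None
    for (a1, b1, c1), (a2, b2, c2) in combinations(constraints, 2):
        det = a1 * b2 - a2 * b1
        if abs(det) < eps:
            continue  # parallel lines: no unique intersection
        # Cramer's rule for the intersection of the two boundary lines
        x = (c1 * b2 - c2 * b1) / det
        y = (a1 * c2 - a2 * c1) / det
        # keep the vertex only if it satisfies every constraint
        if all(a * x + b * y <= c + eps for a, b, c in constraints):
            val = obj[0] * x + obj[1] * y
            if best is None or (val > best[0] if maximize else val < best[0]):
                best = (val, (x, y))
    return best

# Primal in <= form (x >= 0 rewritten as -x <= 0).
primal = [(1, 2, 4), (3, 1, 6), (-1, 0, 0), (0, -1, 0)]
p_val, p_pt = solve_2d_lp(primal, (3, 5), maximize=True)

# Dual in <= form: w1 + 3w2 >= 3  becomes  -w1 - 3w2 <= -3, etc.
dual = [(-1, -3, -3), (-2, -1, -5), (-1, 0, 0), (0, -1, 0)]
d_val, d_pt = solve_2d_lp(dual, (4, 6), maximize=False)

print(p_val, d_val)  # the two optimal values coincide (strong duality)
```

Both optima occur at the intersection of the two structural constraints, and the optimal values agree, as the Duality Theorem requires.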