
The simplex algorithm iterates over the set of extreme points of the feasible region, checking at each one whether the optimality criterion holds.

The Simplex Algorithm, invented by George Dantzig in 1947 (an achievement that earned him the National Medal of Science in 1975), is the main method for solving linear programming problems.
Given a linear programming problem P in standard form,
Min Cᵗx
subject to
Ax = b
x ≥ 0
where
A ∈ M_{m×n}, rk(A) = m, b ∈ Rᵐ, C ∈ Rⁿ.
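For concreteness, here is a small made-up instance of this standard form (the numbers are hypothetical; x3 and x4 are slack variables that turn two inequalities into equalities, so rk(A) = m = 2):

```python
import numpy as np

# Hypothetical instance: min -3*x1 - x2
# subject to x1 + x2 + x3 = 4, x1 - x2 + x4 = 2, x >= 0.
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [1.0, -1.0, 0.0, 1.0]])   # A in M_{2x4}, rk(A) = 2
b = np.array([4.0, 2.0])                # b in R^2
c = np.array([-3.0, -1.0, 0.0, 0.0])    # C in R^4

x = np.array([3.0, 1.0, 0.0, 0.0])      # a feasible point: Ax = b, x >= 0
print(c @ x)                            # objective value C^t x = -10.0
```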
If P has a finite optimal solution x*, we know from linear programming theory that x* is contained in the set of extreme points of the feasible set S.
The basic idea of the Simplex Algorithm is to iterate over the set of extreme points until a condition called the "optimality criterion" holds. If it holds at a certain extreme point x*, then that x* is the solution we are looking for.
Suppose x is an extreme point of S. Because the extreme points are solutions of subsystems of Ax = b, x can be written as

x = [B⁻¹b, 0]ᵗ

where B is an m×m square submatrix of A with rank m. We can decompose A into submatrices B and N: B as above, and N the matrix formed by the remaining columns of A.
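With NumPy, the basic solution x = [B⁻¹b, 0]ᵗ attached to a choice of basic columns can be computed directly; the matrices below are made up for this sketch:

```python
import numpy as np

# Hypothetical data: A is m x n with rank m = 2.
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [1.0, -1.0, 0.0, 1.0]])
b = np.array([4.0, 2.0])

basic = [2, 3]        # indices of the columns forming B
B = A[:, basic]       # m x m submatrix, invertible since rk(B) = m

x = np.zeros(A.shape[1])
x[basic] = np.linalg.solve(B, b)   # x_B = B^(-1) b, the rest stays 0

print(x)        # -> [0. 0. 4. 2.], and A @ x equals b
```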
Let w be another point of the feasible region S. Then Aw = b, i.e.

B w_B + N w_N = b

Because B is invertible,

w_B = B⁻¹b − B⁻¹N w_N
Applying Cᵗ to w we obtain

Cᵗw = C_Bᵗ w_B + C_Nᵗ w_N = C_Bᵗ(B⁻¹b − B⁻¹N w_N) + C_Nᵗ w_N   (*)
As w belongs to the feasible region, we know that w_N ≥ 0; so, if x is the optimal solution, then it must hold that

Cᵗw > Cᵗx = C_Bᵗ B⁻¹ b
Substituting into (*), we obtain

C_Nᵗ − C_Bᵗ B⁻¹ N > 0   (1)
This is the basic idea of the Simplex Algorithm that we referred to before, and it is called the Optimality Criterion.

The Simplex Algorithm Optimality Criterion
C_Nᵗ − C_Bᵗ B⁻¹ N > 0
If this condition holds, then x is the optimal solution of P.
The criterion vector has coordinates

c_j − C_Bᵗ B⁻¹ a_j = c_j − C_Bᵗ y_j = c_j − z_j

where a_j is the j-th column vector of N and y_j = B⁻¹ a_j.
In summary, the optimality criterion is

z_j − c_j < 0 for every nonbasic index j, with z_j = C_Bᵗ B⁻¹ a_j
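The reduced costs z_j − c_j for all nonbasic columns can be evaluated in one shot; the data below are made up for this sketch, and if every entry came out negative the criterion would hold:

```python
import numpy as np

# Hypothetical standard-form data.
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [1.0, -1.0, 0.0, 1.0]])
b = np.array([4.0, 2.0])
c = np.array([-3.0, -1.0, 0.0, 0.0])   # minimize c^t x

basic = [2, 3]
nonbasic = [0, 1]
B = A[:, basic]

# Each column of y is y_j = B^(-1) a_j for a nonbasic a_j.
y = np.linalg.solve(B, A[:, nonbasic])
z = c[basic] @ y                       # z_j = C_B^t B^(-1) a_j
reduced = z - c[nonbasic]              # z_j - c_j

print(reduced)    # -> [3. 1.]: positive entries, so x is not optimal yet
```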
Suppose now that (1) does not hold, i.e. there is some index j with

z_j − c_j ≥ 0   (2)

Then there are two possibilities...
1) y_j = B⁻¹ a_j ≤ 0 componentwise. Then we can build

x = w + λ d_j, with d_j = (−B⁻¹ a_j, e_j)ᵗ

We know that d_j is an extreme direction of the feasible set S; in particular

A d_j = 0

so x = w + λ d_j is also a feasible solution for every λ ≥ 0.
Moreover,

Cᵗx = Cᵗw + λ Cᵗd_j = Cᵗw + λ(c_j − C_Bᵗ B⁻¹ a_j) = Cᵗw − λ(z_j − c_j) → −∞ as λ → ∞
This means that we can make the objective value as small as we want without leaving the feasible set S, so this is an unbounded problem: P has no finite optimal solution.
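Once y_j is available, detecting this case is a one-line check; the basis and entering column below are hypothetical:

```python
import numpy as np

# Hypothetical basis B and entering column a_j with z_j - c_j > 0.
B = np.array([[1.0, 0.0],
              [0.0, 1.0]])
a_j = np.array([-1.0, -2.0])

y_j = np.linalg.solve(B, a_j)   # y_j = B^(-1) a_j
if np.all(y_j <= 0):
    # d_j = (-y_j, e_j) is a feasible direction for every lambda >= 0,
    # so the objective decreases without bound along it.
    print("unbounded")
```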
2) The vector y_j = B⁻¹ a_j has some positive component. Then it is possible to build another feasible solution (it will be an extreme point of S) where the objective function is smaller.
The new solution is built as follows:

x = w + λ d_j, with d_j = (−B⁻¹ a_j, e_j)ᵗ

and

λ = min{β_i / y_ij : y_ij > 0}, where β = B⁻¹ b.
With this value of λ, x is a feasible solution, and if, for example, λ = β_r / y_rj, then x is the basic solution associated with the matrix

B' = (a_1, a_2, ..., a_{r−1}, a_j, a_{r+1}, ..., a_m)
That is, we have exchanged one vector for another: the vector in position r has been replaced by column a_j. In summary, we have built one basic feasible solution from another.
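The minimum-ratio test and the column exchange can be sketched as follows (the data and the entering index j are made up for this sketch):

```python
import numpy as np

# Hypothetical data and entering column index.
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [1.0, -1.0, 0.0, 1.0]])
b = np.array([4.0, 2.0])
basic = [2, 3]
j = 0                                   # entering column

B = A[:, basic]
beta = np.linalg.solve(B, b)            # beta = B^(-1) b
y = np.linalg.solve(B, A[:, j])         # y_j = B^(-1) a_j

# lambda = min over { beta_i / y_ij : y_ij > 0 }
ratios = [beta[i] / y[i] if y[i] > 0 else np.inf for i in range(len(y))]
r = int(np.argmin(ratios))              # leaving position r
lam = ratios[r]

basic[r] = j                            # a_j replaces the column in position r
print(lam, basic)                       # -> 2.0 [2, 0]
```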
Note: if there are several indices satisfying z_j − c_j > 0, then the operation is performed on the index k that satisfies

z_k − c_k = max{z_j − c_j : z_j − c_j > 0}
This is the basic idea of the simplex algorithm: iterate among the extreme points of the feasible set, which are solutions of linear systems of equations taken from square submatrices of the constraint matrix A, until one of them meets the optimality criterion.
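Putting the pieces together, a minimal sketch of this iteration could look like the following (made-up data, the naive most-positive entering rule, no anti-cycling safeguard, and explicit linear solves in place of the pivoting scheme discussed next):

```python
import numpy as np

def simplex(A, b, c, basic):
    """Sketch of the simplex iteration: assumes `basic` indexes a
    feasible starting basis and that the problem is non-degenerate."""
    basic = list(basic)
    while True:
        B = A[:, basic]
        beta = np.linalg.solve(B, b)               # current x_B = B^(-1) b
        nonbasic = [j for j in range(A.shape[1]) if j not in basic]
        y = np.linalg.solve(B, A[:, nonbasic])     # columns y_j = B^(-1) a_j
        reduced = c[basic] @ y - c[nonbasic]       # z_j - c_j
        if np.all(reduced < 1e-9):                 # optimality criterion
            x = np.zeros(A.shape[1])
            x[basic] = beta
            return x
        k = int(np.argmax(reduced))                # most positive z_j - c_j
        j = nonbasic[k]
        yj = y[:, k]
        if np.all(yj <= 0):                        # case 1: unbounded
            raise ValueError("problem is unbounded")
        ratios = [beta[i] / yj[i] if yj[i] > 1e-12 else np.inf
                  for i in range(len(yj))]
        basic[int(np.argmin(ratios))] = j          # case 2: swap columns

# Hypothetical instance from the standard form above.
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [1.0, -1.0, 0.0, 1.0]])
b = np.array([4.0, 2.0])
c = np.array([-3.0, -1.0, 0.0, 0.0])
print(simplex(A, b, c, [2, 3]))    # -> [3. 1. 0. 0.]
```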
There remains the problem of actually computing these solutions. In the section Simplex calculations we will see how the simplex algorithm offers a scheme of calculation in which there is no need to compute the inverse of the basis matrix B at each step. This scheme is called pivoting and is another basic element of the simplex algorithm.