The Simplex Algorithm, invented by George Dantzig in 1947 (an achievement that earned him the National Medal of Science in 1975), is the main method for solving linear programming problems.

The simplex algorithm iterates over the set of extreme points of the feasible region, checking at each one whether the optimality criterion holds.

Given a linear programming problem P in standard form,
Minimize \( C^t x \)
Subject to
\( \begin{matrix} Ax=b \\ x\geqslant 0 \end{matrix} \)
where
\( A \in \mathbb{M}_{m\times n}, \; \operatorname{rank}(A)=m, \;\; b \in \mathbb{R}^{m}, \; C \in \mathbb{R}^{n}, \;\; n \geqslant m \)
Thus, m is the number of constraints (rows of matrix A, or the dimension of vector b) and n is the number of variables (columns of matrix A, or the dimension of vector C).
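As an aside, a small problem of this standard form can be solved directly with SciPy's `linprog`; the data below are made up for illustration and are not part of the original derivation:

```python
import numpy as np
from scipy.optimize import linprog

# Made-up standard-form data: minimize C^t x  s.t.  Ax = b, x >= 0
# m = 2 constraints, n = 4 variables (n >= m, rank(A) = m)
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [2.0, 1.0, 0.0, 1.0]])
b = np.array([4.0, 6.0])
C = np.array([-3.0, -2.0, 0.0, 0.0])

res = linprog(C, A_eq=A, b_eq=b, bounds=[(0, None)] * 4, method="highs")
print(res.x, res.fun)  # optimal point and optimal objective value
```

The last two columns of A act as slack variables, so this is the LP "maximize 3x1 + 2x2 subject to x1 + x2 ≤ 4, 2x1 + x2 ≤ 6" rewritten in standard form.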
If P has a finite optimal solution \( x_k \), we know from Linear Programming Theory that \( x_k \) belongs to the subset of extreme points of the feasible set S.
The basic idea of the Simplex Algorithm is to iterate over the extreme points until a condition called the "optimality criterion" holds. If it holds at a certain extreme point \( x_k \), then \( x_k \) is the solution we are looking for.
Let x be an extreme point of S. Because the extreme points are solutions of subsystems of Ax=b, x can be written as
\( x=\begin{bmatrix} B^{-1}b \\ 0\end{bmatrix} \),
where B is an m×m square submatrix of A with rank m.
We can decompose A into two submatrices B and N as follows:
\( A=\begin{bmatrix} B & N\end{bmatrix} \)
B, as said, is an m×m submatrix with rank m, corresponding to the solution of the system \( Bx_B=b \),
and N is the matrix formed by the remaining columns of A (remember that n ≥ m).
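The basic-solution construction above can be sketched numerically with NumPy; the constraint data and the choice of basis columns below are assumptions for illustration:

```python
import numpy as np

# Made-up constraint data: Ax = b, m = 2, n = 4
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [2.0, 1.0, 0.0, 1.0]])
b = np.array([4.0, 6.0])

basis = [2, 3]               # columns chosen for B (assumed basis)
B = A[:, basis]              # m x m invertible submatrix of A
x_B = np.linalg.solve(B, b)  # B^{-1} b, the basic part of x

x = np.zeros(A.shape[1])
x[basis] = x_B               # extreme point: nonbasic coordinates are 0
print(x)
```

Any choice of m linearly independent columns yields such a basic solution; it is feasible only when \( B^{-1}b \geq 0 \), as it is here.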
Let w be another point of the feasible region S. Then
Aw = b, i.e.
\( Bw_{B} + Nw_{N} = b \)
Because B is invertible,
\( w_{B} = B^{-1}b - B^{-1}Nw_{N} \)
Applying \( C^t \) to w we obtain
\( C^{t}w = C_{B}^{t}w_{B} + C_{N}^{t}w_{N} \)\( = C_{B}^{t}(B^{-1}b - B^{-1}Nw_{N}) + C_{N}^{t}w_{N} \)\( = C_B^t x_B + (C_{N}^{t} - C_{B}^{t}B^{-1}N)w_{N} \)
Because w belongs to the feasible region, we know that \( w_N \geq 0 \); so, if x is to be the optimal solution, the following must hold:
\((1) \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,C_{N}^{t} - C^t_{B}B^{-1}N\, \geq \, 0\)
This is the basic idea of the Simplex Algorithm that we were referring to before, and it is called the
Optimality Criterion
The Simplex Algorithm Optimality Condition
$$ C_{N}^{t} - C^t_{B}B^{-1}N\, \geq\, 0 $$
If this condition holds, then x is the optimal solution of P.
The optimality condition has as coordinates
\(c_{j} - C^t_{B}B^{-1}a_{j} = c_{j} - z_{j} \)
where \( a_{j} \) is a column vector of N.
In summary, the optimality criterion is
$$ c_{j} - z_{j} \geq 0 $$
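A minimal sketch of this reduced-cost check (the data and starting basis are assumptions for illustration):

```python
import numpy as np

# Made-up standard-form data: minimize C^t x  s.t.  Ax = b, x >= 0
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [2.0, 1.0, 0.0, 1.0]])
b = np.array([4.0, 6.0])
C = np.array([-3.0, -2.0, 0.0, 0.0])

basis = [2, 3]                     # assumed starting basis (slack columns)
nonbasis = [0, 1]
B, N = A[:, basis], A[:, nonbasis]
c_B, c_N = C[basis], C[nonbasis]

# z_j = C_B^t B^{-1} a_j for every nonbasic column a_j
z = c_B @ np.linalg.solve(B, N)
reduced_costs = c_N - z            # c_j - z_j
optimal = np.all(reduced_costs >= 0)
print(reduced_costs, optimal)
```

Here both reduced costs are negative, so the starting extreme point is not optimal and the algorithm must continue iterating.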
Let us suppose now that (1) does not hold, i.e., some component of
\( C_{N}^{t} - C^t_{B}B^{-1}N \)
is negative. Then there are two possible cases.
Case 1
\( y_{j} = B^{-1}a_{j} \leqslant 0 \) (every component is non-positive); then we can build
\( x= w+\lambda d_{j} \), where \( d_{j}=\begin{bmatrix} -B^{-1}a_{j} \\ e_{j} \end{bmatrix} \)
We know that \( d_j \) is an extreme direction of the feasible set S; in particular,
\( Ad_{j} = 0 \)
so \( x= w+\lambda d_{j} \) is also a feasible solution \( \forall \lambda \geqslant 0 \).
On the other hand,
\( C^{t}x = C^{t}(w+\lambda d_{j}) \)\( = C^{t}w + \lambda C^{t}d_{j} \)\( = C^{t}w - \lambda(z_{j}-c_{j}) \mapsto -\infty \) when \( \lambda \mapsto \infty \)
This means that we can make the objective value as small as we want without leaving the feasible set S, so this is an
Unbounded solution case
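Detecting this unbounded case amounts to checking the sign of \( y_j = B^{-1}a_j \); a sketch with made-up data (the problem, basis, and entering index are assumptions):

```python
import numpy as np

# Made-up problem: minimize -x2  s.t.  x1 - x2 = 1, x >= 0  (unbounded below)
A = np.array([[1.0, -1.0]])
b = np.array([1.0])
C = np.array([0.0, -1.0])

basis, j = [0], 1                  # assumed basis; j is the entering candidate
B = A[:, basis]
y_j = np.linalg.solve(B, A[:, j])  # y_j = B^{-1} a_j
z_j = C[basis] @ y_j
print(C[j] - z_j)                  # negative reduced cost: not optimal

unbounded = np.all(y_j <= 0)       # Case 1: no positive component in y_j
print(unbounded)
```

Geometrically, increasing \( x_2 \) while moving along the constraint line never violates \( x \geq 0 \), so the objective decreases without bound.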
Case 2
The vector \( y_{j} \) has some positive component.
We determine the index s that satisfies
$$ s = \min \{ k: \, c_{k} - z_{k} < 0 \} $$
That is, take the smallest index k whose reduced cost \( c_{k} - z_{k} \) is negative.
Then it is possible to build another feasible solution (it will again be an extreme point of S) where the objective function is smaller.
The new solution is built as follows:
\( x= w+\lambda d_{s} \)
where \( d_{s}=\begin{bmatrix} -B^{-1}a_{s} \\ e_{s} \end{bmatrix} \) and
\( \lambda = \min \left\{ \frac {x_i}{y_{is}} : y_{is} > 0 \right\} \), with \( x = B^{-1}b \).
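The ratio test above can be sketched as follows (the constraint data, basis, and entering index s are assumptions for illustration):

```python
import numpy as np

# Made-up data: Ax = b with basis columns [2, 3] (identity), entering s = 0
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [2.0, 1.0, 0.0, 1.0]])
b = np.array([4.0, 6.0])

basis, s = [2, 3], 0
B = A[:, basis]
x_B = np.linalg.solve(B, b)        # current basic values, B^{-1} b
y_s = np.linalg.solve(B, A[:, s])  # y_s = B^{-1} a_s

# Ratio test: lambda = min over rows with y_is > 0 of x_i / y_is
ratios = np.full(len(b), np.inf)
pos = y_s > 0
ratios[pos] = x_B[pos] / y_s[pos]
r = int(np.argmin(ratios))         # leaving row r
lam = ratios[r]
print(lam, r)
```

The index r attaining the minimum ratio identifies which basic variable drops to zero first as λ grows, i.e., which column leaves the basis.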
With this value of λ, x is a feasible solution, and if for some index r
\( \lambda = \frac {x_r}{y_{rs}} \)
then x is the basic solution associated with the matrix
\( B' = (a_1, a_2, ..., a_{r-1},a_s,a_{r+1},...,a_m) \)
I.e.:
s: index entering into basis.
r: index leaving from basis.
That is, we have exchanged one vector for another (we have substituted the vector in position r with the one in position s).
In summary, we build one solution from another.
Now we have to perform the change of basis to obtain the updated matrix and the new \( c_j - z_j \). We will see that in the section
Simplex Calculations.
This is the simplex algorithm's basic idea: iterate among the extreme points of the feasible set, which are solutions of linear systems of equations taken from square
submatrices of the constraint matrix A, until one of them, say x, meets the optimality condition; in that case the optimal solution is attained at x.
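Putting the pieces together, the whole loop can be sketched as a toy revised simplex; this is an illustrative sketch, not a robust implementation (it assumes a feasible starting basis, no degeneracy, and well-conditioned bases), and all data are made up:

```python
import numpy as np

def simplex(A, b, C, basis):
    """Toy revised simplex: minimize C^t x  s.t.  Ax = b, x >= 0.

    `basis` is an assumed feasible starting basis (column indices).
    Returns (x, status) with status 'optimal' or 'unbounded'.
    """
    m, n = A.shape
    basis = list(basis)
    while True:
        B = A[:, basis]
        x_B = np.linalg.solve(B, b)
        # Reduced costs c_j - z_j for all columns (zero on basic columns)
        reduced = C - C[basis] @ np.linalg.solve(B, A)
        candidates = np.where(reduced < -1e-9)[0]
        if candidates.size == 0:             # optimality criterion holds
            x = np.zeros(n)
            x[basis] = x_B
            return x, "optimal"
        s = int(candidates[0])               # smallest index with c_s - z_s < 0
        y_s = np.linalg.solve(B, A[:, s])
        if np.all(y_s <= 1e-9):              # Case 1: unbounded direction
            return None, "unbounded"
        ratios = np.full(m, np.inf)          # Case 2: ratio test
        pos = y_s > 1e-9
        ratios[pos] = x_B[pos] / y_s[pos]
        r = int(np.argmin(ratios))
        basis[r] = s                         # a_s enters the basis, a_r leaves

A = np.array([[1.0, 1.0, 1.0, 0.0],
              [2.0, 1.0, 0.0, 1.0]])
b = np.array([4.0, 6.0])
C = np.array([-3.0, -2.0, 0.0, 0.0])

x, status = simplex(A, b, C, basis=[2, 3])
print(status, x, C @ x)
```

Each pass of the loop is exactly one visit to an extreme point: solve for the basic values, test optimality, and either stop, report unboundedness, or pivot to the adjacent extreme point chosen by the ratio test.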