The Simplex Algorithm, invented by George Dantzig in 1947 (an achievement that earned him the National Medal of Science in 1975), is the main method for solving linear programming problems.

The simplex algorithm iterates over the set of extreme points of the feasible region, checking at each one whether the optimality criterion holds.

Given a linear programming problem P in standard form,
Minimize \( C^t x \)
Subject to
\( \begin{matrix} Ax=b \\ x\geqslant 0 \end{matrix} \)
where
\( A \in \mathbb{M}_{m \times n}, \; \operatorname{rank}(A) = m, \; b \in \mathbb{R}^{m}, \; C \in \mathbb{R}^{n}, \; n \geqslant m \)
Thus, m is the number of constraints (rows of matrix A, or the dimension of vector b) and n is the number of variables (columns of matrix A, or the dimension of vector C).
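As a concrete illustration, a small problem in this standard form can be set up with NumPy; the data below is invented for the example, with slack columns already included so that the constraints read Ax = b:

```python
import numpy as np

# A small LP in the standard form above (the data is illustrative):
#   minimize C^t x  subject to  A x = b,  x >= 0
# with m = 2 constraints and n = 4 variables, so n >= m.
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0]])
b = np.array([4.0, 2.0])
C = np.array([-1.0, -2.0, 0.0, 0.0])

m, n = A.shape
print(m, n, np.linalg.matrix_rank(A))  # rank(A) = m, as required
```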
If P has a finite optimal solution \( x_k \), we know from linear programming theory that \( x_k \) belongs to the set of extreme points of the feasible set S.
The basic idea of the Simplex Algorithm is to iterate over the extreme points until a condition called the "optimality criterion" holds.
If it holds at a certain extreme point \( x_k \), then \( x_k \) is the solution we are looking for.
Suppose that x is an extreme point of S.
Because the set of extreme points is formed by solutions of subsystems of Ax = b, x can be written as
\( x=\begin{bmatrix} B^{-1}b \\ 0\end{bmatrix} \),
where B is an m×m square submatrix of A with rank m.
We can decompose A into submatrices B and N as
\( A=\begin{bmatrix} B & N\end{bmatrix} \)
where B, as said, is a submatrix with rank m,
and N is the matrix formed by the remaining columns of A (remember that n ≥ m).
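To make the decomposition concrete, here is a short NumPy sketch (the matrix and the choice of basis columns are invented for illustration) that extracts B and N and forms the extreme point \( x=\begin{bmatrix} B^{-1}b \\ 0\end{bmatrix} \):

```python
import numpy as np

A = np.array([[1.0, 1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0]])
b = np.array([4.0, 2.0])

# Choose the basis columns (an illustrative choice): here the last two
# columns of A form an invertible 2x2 submatrix B; N holds the rest.
basis = [2, 3]
nonbasis = [0, 1]
B = A[:, basis]
N = A[:, nonbasis]

# Basic solution x = [B^{-1} b ; 0], reassembled in the original column order.
x_B = np.linalg.solve(B, b)
x = np.zeros(A.shape[1])
x[basis] = x_B
print(x)  # the extreme point associated with this basis
```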
Let w be another point of the feasible region S. Then
Aw = b, i.e.
\( Bw_{B} + Nw_{N} = b. \)
Because B is invertible,
\( w_{B} = B^{-1}b - B^{-1}Nw_{N}. \)
Applying \( C^{t} \) to w, we obtain
\( C^{t}w = C_{B}^{t}w_{B} + C_{N}^{t}w_{N} = C_{B}^{t}(B^{-1}b - B^{-1}Nw_{N}) + C_{N}^{t}w_{N} = C_B^t x_B + C_{N}^{t}w_{N} - C_B^t B^{-1}Nw_{N}. \)
Because w belongs to the feasible region, we know that
\( w_{N} \geq 0 \); so, if x is the optimal solution, then it must hold that
\( (1) \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, C_{N}^{t} - C^t_{B}B^{-1}N \, \geq \, 0 \)
This is the basic idea behind the Simplex Algorithm that we referred to before, and it is called the
Optimality Criterion
The Simplex Algorithm Optimality Criterion
$$ C_{N}^{t} - C^t_{B}B^{-1}N \, \geq \, 0 $$
If this condition holds, then x is the optimal solution of P.
The criterion has as coordinates
\( c_{j} - C^t_{B}B^{-1}a_{j} = c_{j} - C^t_{B}y_{j} = c_{j} - z_{j}, \)
where \( a_j \) is a column vector of N.
In summary, the optimality criterion is
$$ c_{j} - z_{j} \geq 0 $$
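A short sketch of how the coordinates \( c_j - z_j \) can be computed for each nonbasic column (same illustrative data and basis as before; the variable names are mine):

```python
import numpy as np

A = np.array([[1.0, 1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0]])
C = np.array([-1.0, -2.0, 0.0, 0.0])
basis, nonbasis = [2, 3], [0, 1]
B, N = A[:, basis], A[:, nonbasis]
C_B = C[basis]

# Reduced cost of each nonbasic column j: c_j - z_j = c_j - C_B^t B^{-1} a_j.
y = np.linalg.solve(B, N)        # columns y_j = B^{-1} a_j
reduced = C[nonbasis] - C_B @ y
optimal = bool(np.all(reduced >= 0))
print(reduced, optimal)          # any negative entry violates the criterion
```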
Suppose now that (1) does not hold, i.e.
\( c_{j} - z_{j} < 0 \) for some index j;
then there are two possible cases:
Case 1
\( y_{j} = B^{-1}a_{j} \leqslant 0 \) (every component nonpositive); then we can build
\( x= w+\lambda d_{j} \), where \( d_{j}=\begin{bmatrix} -B^{-1}a_{j} \\ e_{j} \end{bmatrix} \).
We know that \( d_{j} \) is an extreme direction of the feasible set S; in particular,
\( Ad_{j} = 0, \)
so \( x= w+\lambda d_{j} \) is also a feasible solution for all \( \lambda \geqslant 0. \)
On the other hand,
\( C^{t}x = C^{t}(w+\lambda d_{j}) = C^{t}w + \lambda C^{t}d_{j} = C^{t}w + \lambda(c_{j} - z_{j}) = C^{t}w - \lambda(z_{j}-c_{j}) \to -\infty \) as \( \lambda \to \infty. \)
This means that we can make the objective value as small as we want without leaving the feasible set S, so this is the
unbounded solution case.
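The Case 1 test can be sketched as follows (B and the column \( a_j \) are made up so that \( y_j = B^{-1}a_j \) has no positive component):

```python
import numpy as np

# Unboundedness test for a candidate entering column a_j (data illustrative).
B = np.array([[1.0, 0.0],
              [0.0, 1.0]])
a_j = np.array([-1.0, 0.0])

y_j = np.linalg.solve(B, a_j)   # y_j = B^{-1} a_j
# If y_j <= 0 componentwise, moving along d_j = [-y_j; e_j] stays feasible
# for every lambda >= 0 and the objective decreases without bound.
unbounded = bool(np.all(y_j <= 0))
print(unbounded)
```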
Case 2
The vector \( y_{j} \) has some positive component.
We determine the index j that satisfies
$$ c_{j} - z_{j} = \min \{ c_{k} - z_{k} \, : \, c_{k} - z_{k} < 0 \} $$
i.e., we take the most negative of the \( c_{k} - z_{k} \).
Then it is possible to build another feasible solution (it will be an extreme point of S) where the objective function is smaller.
The new solution is built as follows:
\( x= w+\lambda d_{j}, \)
where \( d_{j}=\begin{bmatrix} -B^{-1}a_{j} \\ e_{j} \end{bmatrix} \) and
\( \lambda = \min \{ \frac {x_i}{y_{ij}} \, : \, y_{ij} > 0 \} \), with \( x = B^{-1}b \).
With this value of λ, x is a feasible solution, and if for some index r
\( \lambda = \frac {x_r}{y_{rj}}, \)
then x is the basic solution associated to the matrix
\( B' = (a_1, a_2, \ldots, a_{r-1}, a_j, a_{r+1}, \ldots, a_m) \)
That is:
j: index entering the basis.
r: index leaving the basis.
In other words, we exchanged one vector for another (we substituted the vector in position r with the one in position j).
In summary, we have built a new solution from a previous one.
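The ratio test and basis swap of Case 2 can be sketched like this (continuing the illustrative data from earlier; j = 1 is assumed to be the entering index with the most negative reduced cost):

```python
import numpy as np

A = np.array([[1.0, 1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0]])
b = np.array([4.0, 2.0])
basis = [2, 3]
j = 1                              # assumed entering column

B = A[:, basis]
x_B = np.linalg.solve(B, b)        # current B^{-1} b
y_j = np.linalg.solve(B, A[:, j])  # y_j = B^{-1} a_j

# Ratio test: lambda = min over rows with y_ij > 0 of x_i / y_ij.
rows = np.where(y_j > 0)[0]
ratios = x_B[rows] / y_j[rows]
r = rows[np.argmin(ratios)]        # leaving row r
lam = ratios.min()

basis[r] = j                       # column a_j replaces the column in position r
print(lam, basis)
```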
Now we have to perform the change of basis to obtain the new basis matrix and the new \( c_j - z_j \). We will see that in the section
Simplex Calculations.
This is the basic idea of the simplex algorithm: iterate over the extreme points of the feasible set,
which are solutions of linear systems of equations taken from square submatrices of the constraint matrix A,
until one of them meets the optimality criterion.
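Putting the pieces together, here is a minimal revised-simplex sketch of the whole iteration (optimality check, unboundedness check, ratio test, basis change). The function name and data are illustrative, and an initial basis giving a basic feasible solution is assumed to be known:

```python
import numpy as np

def simplex(A, b, C, basis):
    """Minimal sketch of the revised simplex method for
    min C^t x  s.t.  A x = b, x >= 0.
    `basis` is an initial list of m column indices whose submatrix B
    is invertible with B^{-1} b >= 0 (a basic feasible solution)."""
    m, n = A.shape
    basis = list(basis)
    while True:
        B = A[:, basis]
        x_B = np.linalg.solve(B, b)
        C_B = C[basis]
        nonbasis = [k for k in range(n) if k not in basis]
        # Reduced costs c_k - z_k for the nonbasic columns.
        reduced = np.array([C[k] - C_B @ np.linalg.solve(B, A[:, k])
                            for k in nonbasis])
        if np.all(reduced >= -1e-12):          # optimality criterion
            x = np.zeros(n)
            x[basis] = x_B
            return x, C @ x
        j = nonbasis[int(np.argmin(reduced))]  # entering index (most negative)
        y_j = np.linalg.solve(B, A[:, j])
        if np.all(y_j <= 0):                   # Case 1: unbounded
            raise ValueError("unbounded problem")
        rows = np.where(y_j > 1e-12)[0]        # Case 2: ratio test
        r = rows[np.argmin(x_B[rows] / y_j[rows])]
        basis[r] = j                           # change of basis

A = np.array([[1.0, 1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0]])
b = np.array([4.0, 2.0])
C = np.array([-1.0, -2.0, 0.0, 0.0])
x, obj = simplex(A, b, C, basis=[2, 3])
print(x, obj)
```

This sketch recomputes \( B^{-1} \) factorizations from scratch at every step; the "Simplex Calculations" section mentioned above covers the cheaper tableau update.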