There are two objectives in this section. On the one hand, we want to find necessary and sufficient conditions for a point x to be a solution of a nonlinear programming problem.

On the other hand, we would like to find an algorithm that locates those points, starting from any given initial point x0.

# Nonlinear Programming in Convex Regions and Functions

### Basic Concepts and Principles

### Extended Theory

**Properties of Convex Functions**

**1.** If f is convex, then every level set

Ck = {x ∈ S : f(x) ≤ k}, k ∈ R

is convex too.
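As a minimal numerical sketch of property 1 (the function f(x, y) = x² + y², the bound k and the sampling scheme are illustrative choices, not part of the original text), we can check that a convex combination of two points in a level set stays in the level set:

```python
import random

def f(p):
    # A convex function on R^2: f(x, y) = x^2 + y^2.
    x, y = p
    return x * x + y * y

k = 4.0  # level defining C_k = {p : f(p) <= k}
random.seed(0)

for _ in range(1000):
    # Sample two candidate points and keep only those inside C_k.
    a = (random.uniform(-2, 2), random.uniform(-2, 2))
    b = (random.uniform(-2, 2), random.uniform(-2, 2))
    if f(a) > k or f(b) > k:
        continue
    # Any convex combination of a and b must remain in C_k.
    t = random.random()
    mid = (t * a[0] + (1 - t) * b[0], t * a[1] + (1 - t) * b[1])
    assert f(mid) <= k, "level set is not convex"
```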

**2.** If f is convex on S, then it is continuous in the interior of S.

**3.** If f is convex on S, an open set of Rn, then f has one-sided directional derivatives everywhere, i.e. for all x ∈ S and d ∈ Rn the limit

f'(x; d) = lim(k → 0⁺) ( f(x + kd) − f(x) ) / k

exists and is finite.
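Property 3 can be illustrated numerically (a sketch with an illustrative function and step size, not from the original text): for the convex, non-differentiable f(x) = max(x, 0), the one-sided difference quotient approximates f'(x; d) in every direction, even at the kink x = 0:

```python
def f(x):
    # Convex but not differentiable at x = 0.
    return max(x, 0.0)

def directional_derivative(f, x, d, k=1e-8):
    # One-sided difference quotient with a small positive step k,
    # approximating f'(x; d) = lim_{k -> 0+} (f(x + k*d) - f(x)) / k.
    return (f(x + k * d) - f(x)) / k

# At x = 0 the limit exists in every direction even though f'(0) does not:
print(directional_derivative(f, 0.0, 1.0))   # ~1.0
print(directional_derivative(f, 0.0, -1.0))  # ~0.0
```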

**4.** f is convex if and only if its epigraph

Epi(f) = { (x, y) ∈ S × R : y ≥ f(x) }

is a convex set of Rn+1.

**5.** f is concave if and only if its hypograph

Hypo(f) = { (x, y) ∈ S × R : y ≤ f(x) }

is a convex set of Rn+1.

Convexity is not a sufficient condition for differentiability. For example, the function

f(x) = |x|

is not differentiable at x = 0, although it is convex at that point.

By property 3, we know that f has directional derivatives at x = 0.

Obviously, if a function is differentiable we can define its gradient. For functions that are not differentiable we define the subgradient, which plays the role of the gradient: ξ is a subgradient of f at x0 if f(x) ≥ f(x0) + ξt (x − x0) for all x ∈ S.
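As a numerical sketch of the subgradient inequality f(x) ≥ f(x0) + ξ(x − x0) (the grids below are illustrative choices, not from the original text), every slope ξ in [−1, 1] is a subgradient of f(x) = |x| at x0 = 0, while a steeper slope is not:

```python
def f(x):
    # Convex, non-differentiable at 0.
    return abs(x)

x0 = 0.0
xs = [i / 10.0 for i in range(-50, 51)]   # test points in [-5, 5]
xis = [i / 10.0 for i in range(-10, 11)]  # candidate subgradients in [-1, 1]

# Each xi in [-1, 1] satisfies the subgradient inequality at x0 = 0.
for xi in xis:
    assert all(f(x) >= f(x0) + xi * (x - x0) for x in xs)

# A slope outside [-1, 1] violates the inequality somewhere:
assert not all(f(x) >= f(x0) + 1.5 * (x - x0) for x in xs)
```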

**Theorem (Existence of subgradients for convex functions)**

Let f: S ⊂ Rn → R be convex. Then

f has a subgradient at every interior point of S.

**Theorem (Characterization of convexity for differentiable functions)**

Let f: S ⊂ Rn → R be differentiable, with S a convex, open and nonempty set. Then f is convex if and only if

f(x) ≥ f(y) + ∇f(y)t (x − y) ∀ x, y ∈ S
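The first-order characterization can be checked numerically (a sketch using the illustrative convex function f(x) = x², not part of the original text): the tangent line at any y stays below the graph everywhere:

```python
import random

def f(x):
    # Convex function f(x) = x^2 ...
    return x * x

def grad(y):
    # ... with gradient (derivative) f'(y) = 2*y.
    return 2.0 * y

random.seed(1)
for _ in range(1000):
    x = random.uniform(-10, 10)
    y = random.uniform(-10, 10)
    # First-order condition: f(x) >= f(y) + f'(y)*(x - y).
    # A tiny tolerance absorbs floating-point rounding.
    assert f(x) >= f(y) + grad(y) * (x - y) - 1e-9
```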

The following theorem, another characterization of convexity for differentiable functions, generalizes the idea of an increasing derivative to several variables.

**Theorem (Second characterization of convexity for differentiable functions)**

Let f: S ⊂ Rn → R be differentiable, with S a convex, open and nonempty set. Then f is convex if and only if

(∇f(x) − ∇f(y))t (x − y) ≥ 0 ∀ x, y ∈ S
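Gradient monotonicity can also be sanity-checked numerically (a sketch with the illustrative convex function f(x) = x⁴, not from the original text), since its derivative 4x³ is increasing:

```python
import random

def grad(x):
    # Derivative of the convex function f(x) = x^4.
    return 4.0 * x ** 3

random.seed(2)
for _ in range(1000):
    x = random.uniform(-3, 3)
    y = random.uniform(-3, 3)
    # Monotonicity of the gradient: (f'(x) - f'(y)) * (x - y) >= 0.
    assert (grad(x) - grad(y)) * (x - y) >= 0.0
```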

Note that in the one-dimensional case this says that f' is increasing, which for twice-differentiable functions recovers the familiar second-derivative criterion for convexity (f'' ≥ 0).

**Theorem (Necessary and sufficient condition for a point to be a solution of a nonlinear programming problem)**

Let f: S ⊂ Rn → R be convex, with S convex and nonempty. Consider the nonlinear programming problem

Min f(x)

subject to x ∈ S

Then x0 is an optimal solution if and only if f has a subgradient ξ at x0 such that

ξt (x − x0) ≥ 0 ∀ x ∈ S
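As a numerical sketch of this optimality condition (the problem min x² over S = [1, 3] is an illustrative choice, not from the original text): the optimum is x0 = 1, and since f is differentiable its gradient at x0 serves as the subgradient ξ in the condition:

```python
def f(x):
    # Convex objective f(x) = x^2, minimized over S = [1, 3].
    return x * x

x0 = 1.0
xi = 2.0 * x0  # f'(x0) = 2, a subgradient since f is differentiable

S = [1.0 + 2.0 * i / 100.0 for i in range(101)]  # grid over [1, 3]

# Optimality condition: xi * (x - x0) >= 0 for every x in S ...
assert all(xi * (x - x0) >= 0.0 for x in S)
# ... and indeed x0 attains the minimum over S.
assert all(f(x) >= f(x0) for x in S)
```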


**Corollary** Under the same hypotheses, if S is open, then

x0 is an optimal solution if and only if 0 is a subgradient of f at x0.
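For a differentiable f on an open set, the corollary reduces to the familiar condition ∇f(x0) = 0. A minimal sketch (the function f(x) = (x − 2)² is an illustrative choice, not from the original text):

```python
def f(x):
    # Convex, differentiable objective minimized over the open set R.
    return (x - 2.0) ** 2

def grad(x):
    # f'(x) = 2*(x - 2); the only subgradient at each point.
    return 2.0 * (x - 2.0)

# 0 is a subgradient of f exactly at x0 = 2 ...
x0 = 2.0
assert grad(x0) == 0.0
# ... and x0 is indeed the global minimizer.
assert all(f(x) >= f(x0) for x in [-10.0, -1.0, 0.0, 1.9, 2.1, 5.0])
```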
