# Non-Linnar Optimizer, an Online Application

### Basic Concepts and Principles

We present the online application Non-Linnar Optimizer, which is intended to help solve nonlinear programming problems.
It works by calculating the optimum (a maximum or a minimum, depending on whether the problem is a maximization or a minimization) nearest to a given initial point.

### Extended Theory

To use it properly, you must enter the details of your problem, i.e.:
Objective function: it can be a function of several variables, written as follows:
1) If there are 2 variables: use x and y as their names. Example: sin(x + y)
2) If there are 3 variables: use t, x and y as names. Example: exp(t*(x + 2y))
3) If there are 4 variables: use t, x, y and z as names. Example: sqrt(x^2 + 2y + z)
4) If there are 5 variables: use t, x, y, z and w as names. Example: cos(x + y*z + t*w)
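As an illustration, this naming convention can be mimicked in Python by evaluating the expression string over the fixed variable order. This is a sketch of my own, not part of the application; `make_objective` and `NAMES_BY_ARITY` are hypothetical names (and note that Python spells powers as `**`, not the `^` used in the input boxes):

```python
import math

# Variable names in the order the application assigns them, keyed by
# how many variables the objective function uses.
NAMES_BY_ARITY = {
    2: ("x", "y"),
    3: ("t", "x", "y"),
    4: ("t", "x", "y", "z"),
    5: ("t", "x", "y", "z", "w"),
}

def make_objective(expr, arity):
    """Turn an expression string such as 'sin(x + y)' into a callable."""
    names = NAMES_BY_ARITY[arity]
    # Expose a few math functions to the expression, nothing else.
    funcs = {name: getattr(math, name) for name in ("sin", "cos", "exp", "sqrt")}
    def f(point):
        scope = dict(zip(names, point))   # bind x, y, ... to the coordinates
        scope.update(funcs)
        return eval(expr, {"__builtins__": {}}, scope)
    return f

f = make_objective("sin(x + y)", 2)
```

For example, `f((0.0, math.pi / 2))` evaluates sin(0 + pi/2), i.e. 1.0.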

Constraint table: enter the restriction functions. These functions take as many variables as the objective function and may be linear or nonlinear.

Starting point: a point satisfying the constraints of the problem. It is the point at which the algorithm begins its iterations, and it has as many dimensions as the objective function has variables. Note that you can add (or delete) constraints through the links "Add" and "Delete", respectively.

The program window opens with a default problem whose immediate solution is the point (0, 0): minimize the function f(x, y) = x^2 + y^2 within the circle x^2 + y^2 ≤ 1.
The application provides the following menu:

1) Manual: shows a small application manual.
2) Reset: redisplays the default initial problem mentioned above.
3) Add Restriction: adds a restriction function.
4) Delete Restriction: deletes a restriction function.
5) Run: performs iterations looking for an optimal solution.

How does the algorithm work?
The algorithm starts from the initial point, where it calculates the gradient of the function, and then moves in the direction of greatest growth or decline of the function (depending on whether the problem is a maximization or a minimization). This direction is the one the gradient vector points in, or its opposite (if the problem is a minimization).
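The gradient move described above can be sketched as follows. This is a minimal sketch using a numerical (central-difference) gradient; the function names and the step rate are my own choices, not the application's:

```python
def gradient(f, p, h=1e-6):
    """Central-difference estimate of the gradient of f at point p."""
    g = []
    for i in range(len(p)):
        plus = list(p); plus[i] += h
        minus = list(p); minus[i] -= h
        g.append((f(plus) - f(minus)) / (2 * h))
    return g

def step(f, p, rate=0.1, maximize=False):
    """Take one step along the gradient (maximization) or against it
    (minimization), as the algorithm does at each iteration."""
    g = gradient(f, p)
    sign = 1.0 if maximize else -1.0
    return [pi + sign * rate * gi for pi, gi in zip(p, g)]
```

For f(x, y) = x^2 + y^2, a minimization step from (1, 1) moves the point toward the origin.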

The algorithm stops in one of these cases:
1) It has reached a stationary point. At such a point the gradient is zero, which indicates a minimum, a maximum, or a saddle point (neither a maximum nor a minimum).
2) It has left the feasible region. In this case, the optimum nearest to the starting point lies on the boundary of the feasible region.
3) The solution is unbounded. In this case, the algorithm has reached the largest value representable by the computer's arithmetic.
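These stopping rules can be sketched in a small descent loop, here applied to the default problem (minimize x^2 + y^2 inside the unit circle). The helper names, tolerances, and step rate are assumptions of this sketch, not the application's actual values:

```python
def minimize(f, feasible, p, rate=0.1, tol=1e-10, max_iter=10_000):
    """Gradient descent that stops on: a (near-)zero gradient, leaving
    the feasible region, or overflow toward an unbounded solution."""
    h = 1e-6
    for _ in range(max_iter):
        # Central-difference gradient at the current point.
        g = [(f([p[j] + h * (j == i) for j in range(len(p))])
              - f([p[j] - h * (j == i) for j in range(len(p))])) / (2 * h)
             for i in range(len(p))]
        if sum(gi * gi for gi in g) < tol:       # case 1: stationary point
            return p, "stationary point"
        q = [pi - rate * gi for pi, gi in zip(p, g)]
        if not feasible(q):                      # case 2: left the feasible region
            return p, "boundary of feasible region"
        if any(abs(qi) > 1e308 for qi in q):     # case 3: unbounded
            return q, "unbounded"
        p = q
    return p, "iteration limit"

point, reason = minimize(
    lambda v: v[0] ** 2 + v[1] ** 2,         # objective f(x, y) = x^2 + y^2
    lambda v: v[0] ** 2 + v[1] ** 2 <= 1.0,  # constraint x^2 + y^2 <= 1
    [0.5, 0.5],                              # feasible starting point
)
```

Starting from (0.5, 0.5), the iterates shrink toward the origin and the loop stops at a stationary point near (0, 0), matching the default problem's solution.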

Note that both the optimal value and the optimal point reached may contain errors due to:
1) The limitation of computer arithmetic (rounding error, which propagates across the iterations of the algorithm).
2) The truncation error of the algorithm itself; the error committed in each iteration is O(1e-14).

Note that, due to the nature of nonlinear programming problems, finding an optimal point does not mean finding the global optimum: the final value may vary depending on the starting point, especially when the objective function is nonconvex.

We could define the solution provided by Non-Linnar Optimizer as the optimal value "nearest" to the given starting point, where "nearest" must be understood in the sense of the direction of greatest growth or decrease of the objective function.
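This dependence on the starting point is easy to demonstrate with a nonconvex objective such as f(x, y) = cos(x) + y^2, which has a local minimum at every odd multiple of pi in x. The plain gradient-descent sketch below (my own, not the application's exact method) reaches a different minimum from each side:

```python
import math

def descend(f, p, rate=0.1, steps=2000, h=1e-6):
    """Plain gradient descent with a central-difference gradient."""
    for _ in range(steps):
        g = [(f([p[j] + h * (j == i) for j in range(len(p))])
              - f([p[j] - h * (j == i) for j in range(len(p))])) / (2 * h)
             for i in range(len(p))]
        p = [pi - rate * gi for pi, gi in zip(p, g)]
    return p

f = lambda v: math.cos(v[0]) + v[1] ** 2
a = descend(f, [1.0, 0.5])    # starts right of 0: converges toward x =  pi
b = descend(f, [-1.0, 0.5])   # starts left of 0:  converges toward x = -pi
```

Both runs stop at points with the same optimal value, but at different optimal points: the one "nearest" (in the descent sense) to each starting point.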
