function f1=gattainObjfun(x)
+A few examples displaying the various functionalities of fgoalattain have been provided below. You will find a series of problems and the appropriate code snippets to solve them.
+Example
+ Here we solve a simple objective function, subject to no constraints.
+ 
+
+
+
+function f1=gattainObjfun(x)
f1(1)=2*x(1)*x(1)+x(2)*x(2)-48*x(1)-40*x(2)+304
f1(2)=-x(1)*x(1)-3*x(2)*x(2)
f1(3)=x(1)+3*x(2)-18
f1(4)=-x(1)-x(2)
f1(5)=x(1)+x(2)-8
endfunction
+
x0=[-1,1];
+
goal=[-5,-3,-2,-1,-4];
weight=abs(goal)
-
-
[x,fval,attainfactor,exitflag,output,lambda]=fgoalattain(gattainObjfun,x0,goal,weight)
+Example
+ We proceed to add simple linear inequality constraints.
+ 
+
+
+
+function f1=gattainObjfun(x)
+f1(1)=2*x(1)*x(1)+x(2)*x(2)-48*x(1)-40*x(2)+304
+f1(2)=-x(1)*x(1)-3*x(2)*x(2)
+f1(3)=x(1)+3*x(2)-18
+f1(4)=-x(1)-x(2)
+f1(5)=x(1)+x(2)-8
+endfunction
+
+x0=[-1,1];
+
+goal=[-5,-3,-2,-1,-4];
+weight=abs(goal)
+
+A=[1,1 ; 1,1/4 ; 1,-1];
+b=[2;1;1];
+
+[x,fval,attainfactor,exitflag,output,lambda]=fgoalattain(gattainObjfun,x0,goal,weight,A,b)
+
+Example
+Here we build up on the previous example by adding linear equality constraints.
+We add the following constraints to the problem specified above: 
+
+
+
+function f1=gattainObjfun(x)
+f1(1)=2*x(1)*x(1)+x(2)*x(2)-48*x(1)-40*x(2)+304
+f1(2)=-x(1)*x(1)-3*x(2)*x(2)
+f1(3)=x(1)+3*x(2)-18
+f1(4)=-x(1)-x(2)
+f1(5)=x(1)+x(2)-8
+endfunction
+
+x0=[-1,1];
+
+goal=[-5,-3,-2,-1,-4];
+weight=abs(goal)
+
+A=[1,1 ; 1,1/4 ; 1,-1];
+b=[2;1;1];
+
+Aeq = [1,-1; 2, 1];
+beq = [1;2];
+
+[x,fval,attainfactor,exitflag,output,lambda]=fgoalattain(gattainObjfun,x0,goal,weight,A,b,Aeq,beq)
+
+Example
+In this example, we proceed to add the upper and lower bounds to the objective function.
+ 
+
+
+
+function f1=gattainObjfun(x)
+f1(1)=2*x(1)*x(1)+x(2)*x(2)-48*x(1)-40*x(2)+304
+f1(2)=-x(1)*x(1)-3*x(2)*x(2)
+f1(3)=x(1)+3*x(2)-18
+f1(4)=-x(1)-x(2)
+f1(5)=x(1)+x(2)-8
+endfunction
+
+x0=[-1,1];
+
+goal=[-5,-3,-2,-1,-4];
+weight=abs(goal)
+
+A=[1,1 ; 1,1/4 ; 1,-1];
+b=[2;1;1];
+
+Aeq = [1,-1; 2, 1];
+beq = [1;2];
+
+lb = [-1;-%inf];
+ub = [%inf;1];
+
+[x,fval,attainfactor,exitflag,output,lambda]=fgoalattain(gattainObjfun,x0,goal,weight,A,b,Aeq,beq,lb,ub)
+
+
+Example
+Finally, we add the non-linear constraints to the problem. Note that this is done differently from the way the linear constraints are defined.
+ 
+
+
+
+function f1=gattainObjfun(x)
+f1(1)=2*x(1)*x(1)+x(2)*x(2)-48*x(1)-40*x(2)+304
+f1(2)=-x(1)*x(1)-3*x(2)*x(2)
+f1(3)=x(1)+3*x(2)-18
+f1(4)=-x(1)-x(2)
+f1(5)=x(1)+x(2)-8
+endfunction
+
+x0=[-1,1];
+
+goal=[-5,-3,-2,-1,-4];
+weight=abs(goal)
+
+A=[1,1 ; 1,1/4 ; 1,-1];
+b=[2;1;1];
+
+Aeq = [1,-1; 2, 1];
+beq = [1;2];
+
+lb = [-1;-%inf];
+ub = [%inf;1];
+
+function [c, ceq]=nlc(x)
+c=[x(1)^2-5,x(1)^2+x(2)^2-8];
+ceq=[];
+endfunction
+
+[x,fval,attainfactor,exitflag,output,lambda]=fgoalattain(gattainObjfun,x0,goal,weight,A,b,Aeq,beq,lb,ub,nlc)
+
+
+Example
+ We can further enhance the functionality of fgoalattain by setting input options. We can pre-define the gradient of the objective function and/or the Hessian of the Lagrangian and thereby improve the speed of computation. This is elaborated on in this example. We take the following problem, specify the gradients and the Jacobian matrix of the constraints, and set solver parameters using the options.
+ 
+
+
+function f1=gattainObjfun(x)
+f1(1)=2*x(1)*x(1)+x(2)*x(2)-48*x(1)-40*x(2)+304
+f1(2)=-x(1)*x(1)-3*x(2)*x(2)
+f1(3)=x(1)+3*x(2)-18
+f1(4)=-x(1)-x(2)
+f1(5)=x(1)+x(2)-8
+endfunction
+
+x0=[-1,1];
+
+goal=[-5,-3,-2,-1,-4];
+weight=abs(goal)
+
+function G=myfungrad(x)
+G = [ 4*x(1) - 48, -2*x(1), 1, -1, 1;
+2*x(2) - 40, -6*x(2), 3, -1, 1; ]'
+endfunction
+
+
+function [c, ceq]=confun(x)
+
+c = [1.5 + x(1)*x(2) - x(1) - x(2), -x(1)*x(2) - 10]
+
+ceq=[]
+endfunction
+
+function [DC, DCeq]=cgrad(x)
+
+
+
+
+
+DC= [
+x(2)-1, -x(2)
+x(1)-1, -x(1)
+]'
+DCeq = []'
+endfunction
+
+Options = list("MaxIter", [3000], "CpuTime", [600],"GradObj",myfungrad,"GradCon",cgrad);
+
+
+
+[x,fval,maxfval,exitflag,output] = fgoalattain(gattainObjfun,x0,goal,weight,[],[],[],[],[],[], confun, Options)
+
+Example
+Infeasible Problems: Find x in R^2 such that it minimizes the objective function used above under the following constraints:
+ 
+
+
+
+function f1=gattainObjfun(x)
+f1(1)=2*x(1)*x(1)+x(2)*x(2)-48*x(1)-40*x(2)+304
+f1(2)=-x(1)*x(1)-3*x(2)*x(2)
+f1(3)=x(1)+3*x(2)-18
+f1(4)=-x(1)-x(2)
+f1(5)=x(1)+x(2)-8
+endfunction
+
+x0=[-1,1];
+
+goal=[-5,-3,-2,-1,-4];
+weight=abs(goal)
+
+A=[1,1 ; 1,1/4 ; 1,-1];
+b=[2;1;1];
+
+Aeq = [1/3,-5; 2, 1];
+beq = [11;8];
+
+[x,fval,maxfval,exitflag,output,lambda] = fgoalattain(gattainObjfun,x0,goal,weight,A,b,Aeq,beq)
+
Authors
- Prajwala TM, Sheetal Shalini , 2015
@@ -169,11 +356,11 @@ It has type "struct" and contains the following fields.
Report an issue |
- << FOSSEE Optimization Toolbox
+ << cbcintlinprog
|
- FOSSEE Optimization Toolbox
+ FOSSEE Optimization Toolbox
|
diff --git a/help/en_US/scilab_en_US_help/fminbnd.html b/help/en_US/scilab_en_US_help/fminbnd.html
index d60efaa..7e71531 100644
--- a/help/en_US/scilab_en_US_help/fminbnd.html
+++ b/help/en_US/scilab_en_US_help/fminbnd.html
@@ -16,7 +16,7 @@
|
- FOSSEE Optimization Toolbox
+ FOSSEE Optimization Toolbox
|
@@ -29,7 +29,7 @@
- FOSSEE Optimization Toolbox >> FOSSEE Optimization Toolbox > fminbnd
+ FOSSEE Optimization Toolbox >> FOSSEE Optimization Toolbox > fminbnd
fminbnd
@@ -44,53 +44,58 @@
[xopt,fopt,exitflag,output]=fminbnd(.....)
[xopt,fopt,exitflag,output,lambda]=fminbnd(.....)
-Parameters
+ Input Parameters
- f :
-
a function, representing the objective function of the problem
- - x1 :
-
a vector, containing the lower bound of the variables of size (1 X n) or (n X 1) where 'n' is the number of Variables, where n is number of Variables
- - x2 :
-
a vector, containing the upper bound of the variables of size (1 X n) or (n X 1) or (0 X 0) where 'n' is the number of Variables. If x2 is empty it means upper bound is +infinity
+ A function, representing the objective function of the problem.
+ :
+ A vector, containing the lower bound of the variables of size (1 X n) or (n X 1) where n is the number of variables. If it is empty, the lower bound is -infinity.
+ :
+ A vector, containing the upper bound of the variables of size (1 X n) or (n X 1) or (0 X 0) where n is the number of variables. If it is empty, the upper bound is +infinity.
- options :
-
a list, containing the option for user to specify. See below for details.
- - xopt :
-
a vector of doubles, containing the the computed solution of the optimization problem.
+ A list, containing the options for the user to specify. See below for details.
+ Outputs
+ - xopt :
+
A vector of doubles, containing the computed solution of the optimization problem.
- fopt :
-
a scalar of double, containing the the function value at x.
+ A double, containing the function value at x.
- exitflag :
-
a scalar of integer, containing the flag which denotes the reason for termination of algorithm. See below for details.
+ An integer, containing the flag which denotes the reason for termination of algorithm. See below for details.
- output :
-
a structure, containing the information about the optimization. See below for details.
+ A structure, containing the information about the optimization. See below for details.
- lambda :
-
a structure, containing the Lagrange multipliers of lower bound and upper bound at the optimized point. See below for details.
+ A structure, containing the Lagrange multipliers of lower bound, upper bound and constraints at the optimized point. See below for details.
Description
Search the minimum of a multi-variable function on bounded interval specified by :
Find the minimum of f(x) such that
- 
- The routine calls Ipopt for solving the Bounded Optimization problem, Ipopt is a library written in C++.
- The options allows the user to set various parameters of the Optimization problem.
-It should be defined as type "list" and contains the following fields.
- - Syntax : options= list("MaxIter", [---], "CpuTime", [---], TolX, [----]);
-- MaxIter : a Scalar, containing the Maximum Number of Iteration that the solver should take.
-- CpuTime : a Scalar, containing the Maximum amount of CPU Time that the solver should take.
-- TolX : a Scalar, containing the Tolerance value that the solver should take.
-- Default Values : options = list("MaxIter", [3000], "CpuTime", [600], TolX, [1e-4]);
- The exitflag allows to know the status of the optimization which is given back by Ipopt.
- - exitflag=0 : Optimal Solution Found
-- exitflag=1 : Maximum Number of Iterations Exceeded. Output may not be optimal.
-- exitflag=2 : Maximum CPU Time exceeded. Output may not be optimal.
-- exitflag=3 : Stop at Tiny Step.
-- exitflag=4 : Solved To Acceptable Level.
-- exitflag=5 : Converged to a point of local infeasibility.
- For more details on exitflag see the ipopt documentation, go to http://www.coin-or.org/Ipopt/documentation/
+ 
+ fminbnd calls Ipopt which is an optimization library written in C++, to solve the bound optimization problem.
+
+ Options
+The options allow the user to set various parameters of the Optimization problem. The syntax for the options is given by:
+ options= list("MaxIter", [---], "CpuTime", [---], "TolX", [---]);
+ The options should be defined as type "list" and consist of the following fields:
+ - MaxIter : A scalar, specifying the maximum number of iterations that the solver should take.
+- CpuTime : A scalar, specifying the maximum amount of CPU Time in seconds that the solver should take.
+- TolX : A scalar, containing the tolerance value that the solver should take.
+The default values for the various items are given as:
+ options = list("MaxIter", [3000], "CpuTime", [600]);
+
+ The exitflag allows the user to know the status of the optimization which is returned by Ipopt. The values it can take and what they indicate are described below:
+ - 0 : Optimal Solution Found
+- 1 : Maximum Number of Iterations Exceeded. Output may not be optimal.
+- 2 : Maximum amount of CPU Time exceeded. Output may not be optimal.
+- 3 : Stop at Tiny Step.
+- 4 : Solved To Acceptable Level.
+- 5 : Converged to a point of local infeasibility.
+ For more details on exitflag, see the Ipopt documentation at http://www.coin-or.org/Ipopt/documentation/
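A minimal sketch of how these status flags might be checked in practice (assuming f, x1 and x2 have been defined as in the examples below):

```scilab
// Inspect the solver status before trusting the solution.
// The exitflag values follow the list above.
[xopt, fopt, exitflag, output] = fminbnd(f, x1, x2);
select exitflag
case 0 then
    disp("Optimal solution found");
case 1 then
    disp("Iteration limit reached; output may not be optimal");
case 2 then
    disp("CPU time limit reached; output may not be optimal");
else
    disp(output.Message);   // remaining cases are explained by the solver message
end
```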
The output data structure contains detailed information about the optimization process.
-It has type "struct" and contains the following fields.
- - output.Iterations: The number of iterations performed during the search
-- output.Cpu_Time: The total cpu-time spend during the search
-- output.Objective_Evaluation: The number of Objective Evaluations performed during the search
-- output.Dual_Infeasibility: The Dual Infeasiblity of the final soution
-- output.Message: The output message for the problem
+It is of type "struct" and contains the following fields.
+ - output.Iterations: The number of iterations performed.
+- output.Cpu_Time : The total cpu-time taken.
+- output.Objective_Evaluation: The number of Objective Evaluations performed.
+- output.Dual_Infeasibility : The dual infeasibility of the final solution.
+- output.Message: The output message for the problem.
The lambda data structure contains the Lagrange multipliers at the end
of optimization. In the current version, the values are returned only when the solution is optimal.
It has type "struct" and contains the following fields.
@@ -98,10 +103,30 @@ It has type "struct" and contains the following fields.
lambda.upper: The Lagrange multipliers for the upper bound constraints.
-Examples
-
-
-
+ A few examples displaying the various functionalities of fminbnd have been provided below. You will find a series of problems and the appropriate code snippets to solve them.
+Example
+ Here we solve a simple non-linear objective function, bounded in the interval [0,1000].
+ Find x in R such that it minimizes:
+ 
+
+ Example 1: Minimizing a bounded function in R.
+
+function y=f(x)
+y=1/x^2
+endfunction
+
+x1 = [0];
+x2 = [1000];
+
+[x,fval,exitflag,output,lambda] =fminbnd(f, x1, x2)
+
+
+Example
+ Here we solve a bounded objective function in R^6. We use this function to illustrate how we can further enhance the functionality of fminbnd by setting input options. We can pre-define the gradient of the objective function and/or the Hessian of the Lagrangian and thereby improve the speed of computation. This is elaborated on in this example. We also set solver parameters using the options.
+ Find x in R^6 such that it minimizes:
+ 
+
+
function y=f(x)
y=0
@@ -115,29 +140,13 @@ It has type "struct" and contains the following fields.
options=list("MaxIter",[1500],"CpuTime", [100],"TolX",[1e-6])
-[x,fval] =fminbnd(f, x1, x2, options)
- |  |  | |
+[x,fval] =fminbnd(f, x1, x2, options)
-Examples
-
-
-
-
-function y=f(x)
-y=1/x^2
-endfunction
-
-x1 = [0];
-x2 = [1000];
-
-[x,fval,exitflag,output,lambda] =fminbnd(f, x1, x2)
- |  |  | |
-
-Examples
-
-
-
-
+Example
+ Unbounded Problems: Find x in R^2 such that it minimizes:
+ 
+
+
function y=f(x)
y=-((x(1)-1)^2+(x(2)-1)^2);
@@ -163,7 +172,7 @@ It has type "struct" and contains the following fields.
|
- FOSSEE Optimization Toolbox
+ FOSSEE Optimization Toolbox
|
diff --git a/help/en_US/scilab_en_US_help/fmincon.html b/help/en_US/scilab_en_US_help/fmincon.html
index 5dc1d2b..c36f58f 100644
--- a/help/en_US/scilab_en_US_help/fmincon.html
+++ b/help/en_US/scilab_en_US_help/fmincon.html
@@ -16,7 +16,7 @@
|
- FOSSEE Optimization Toolbox
+ FOSSEE Optimization Toolbox
|
@@ -29,11 +29,11 @@
- FOSSEE Optimization Toolbox >> FOSSEE Optimization Toolbox > fmincon
+ FOSSEE Optimization Toolbox >> FOSSEE Optimization Toolbox > fmincon
fmincon
- Solves a multi-variable constrainted optimization problem
+ Solves a multi-variable constrained optimization problem.
Calling Sequence
@@ -49,73 +49,75 @@
[xopt,fopt,exitflag,output,lambda,gradient]=fmincon(.....)
[xopt,fopt,exitflag,output,lambda,gradient,hessian]=fmincon(.....)
-Parameters
+ Input Parameters
- f :
-
a function, representing the objective function of the problem
+ A function, representing the objective function of the problem.
- x0 :
-
a vector of doubles, containing the starting values of variables of size (1 X n) or (n X 1) where 'n' is the number of Variables
+ A vector of doubles, containing the starting values of variables of size (1 X n) or (n X 1) where 'n' is the number of variables.
- A :
-
a matrix of doubles, containing the coefficients of linear inequality constraints of size (m X n) where 'm' is the number of linear inequality constraints
+ A matrix of doubles, containing the coefficients of linear inequality constraints of size (m X n) where 'm' is the number of linear inequality constraints.
- b :
-
a vector of doubles, related to 'A' and containing the the Right hand side equation of the linear inequality constraints of size (m X 1)
+ A vector of doubles, related to 'A' and containing the right-hand side of the linear inequality constraints, of size (m X 1).
- Aeq :
-
a matrix of doubles, containing the coefficients of linear equality constraints of size (m1 X n) where 'm1' is the number of linear equality constraints
+ A matrix of doubles, containing the coefficients of linear equality constraints of size (m1 X n) where 'm1' is the number of linear equality constraints.
- beq :
-
a vector of doubles, related to 'Aeq' and containing the the Right hand side equation of the linear equality constraints of size (m1 X 1)
+ A vector of doubles, related to 'Aeq' and containing the right-hand side of the linear equality constraints, of size (m1 X 1).
- lb :
-
a vector of doubles, containing the lower bounds of the variables of size (1 X n) or (n X 1) where 'n' is the number of Variables
+ A vector of doubles, containing the lower bounds of the variables of size (1 X n) or (n X 1) where 'n' is the number of variables.
- ub :
-
a vector of doubles, containing the upper bounds of the variables of size (1 X n) or (n X 1) where 'n' is the number of Variables
+ A vector of doubles, containing the upper bounds of the variables of size (1 X n) or (n X 1) where 'n' is the number of variables.
- nlc :
-
a function, representing the Non-linear Constraints functions(both Equality and Inequality) of the problem. It is declared in such a way that non-linear inequality constraints are defined first as a single row vector (c), followed by non-linear equality constraints as another single row vector (ceq). Refer Example for definition of Constraint function.
+ A function, representing the non-linear constraint functions (both equality and inequality) of the problem. It is declared in such a way that the non-linear inequality constraints (c) and the non-linear equality constraints (ceq) are defined as separate single-row vectors.
- options :
-
a list, containing the option for user to specify. See below for details.
- - xopt :
-
a vector of doubles, cointating the computed solution of the optimization problem
+ A list, containing the options for the user to specify. See below for details.
+ Outputs
+ - xopt :
+
A vector of doubles, containing the computed solution of the optimization problem.
- fopt :
-
a scalar of double, containing the the function value at x
+ A double, containing the value of the function at x.
- exitflag :
-
a scalar of integer, containing the flag which denotes the reason for termination of algorithm. See below for details.
+ An integer, containing the flag which denotes the reason for termination of algorithm. See below for details.
- output :
-
a structure, containing the information about the optimization. See below for details.
+ A structure, containing the information about the optimization. See below for details.
- lambda :
-
a structure, containing the Lagrange multipliers of lower bound, upper bound and constraints at the optimized point. See below for details.
+ A structure, containing the Lagrange multipliers of the lower bounds, upper bounds and constraints at the optimized point. See below for details.
- gradient :
-
a vector of doubles, containing the Objective's gradient of the solution.
+ A vector of doubles, containing the objective's gradient of the solution.
- hessian :
-
a matrix of doubles, containing the Lagrangian's hessian of the solution.
+ A matrix of doubles, containing the Lagrangian's hessian of the solution.
Description
- Search the minimum of a constrained optimization problem specified by :
-Find the minimum of f(x) such that
- 
- The routine calls Ipopt for solving the Constrained Optimization problem, Ipopt is a library written in C++.
- The options allows the user to set various parameters of the Optimization problem.
-It should be defined as type "list" and contains the following fields.
- - Syntax : options= list("MaxIter", [---], "CpuTime", [---], "GradObj", ---, "Hessian", ---, "GradCon", ---);
-- MaxIter : a Scalar, containing the Maximum Number of Iteration that the solver should take.
-- CpuTime : a Scalar, containing the Maximum amount of CPU Time that the solver should take.
-- GradObj : a function, representing the gradient function of the Objective in Vector Form.
-- Hessian : a function, representing the hessian function of the Lagrange in Symmetric Matrix Form with Input parameters x, Objective factor and Lambda. Refer Example for definition of Lagrangian Hessian function.
-- GradCon : a function, representing the gradient of the Non-Linear Constraints (both Equality and Inequality) of the problem. It is declared in such a way that gradient of non-linear inequality constraints are defined first as a separate Matrix (cg of size m2 X n or as an empty), followed by gradient of non-linear equality constraints as a separate Matrix (ceqg of size m2 X n or as an empty) where m2 & m3 are number of non-linear inequality and equality constraints respectively.
-- Default Values : options = list("MaxIter", [3000], "CpuTime", [600]);
- The exitflag allows to know the status of the optimization which is given back by Ipopt.
- - exitflag=0 : Optimal Solution Found
-- exitflag=1 : Maximum Number of Iterations Exceeded. Output may not be optimal.
-- exitflag=2 : Maximum amount of CPU Time exceeded. Output may not be optimal.
-- exitflag=3 : Stop at Tiny Step.
-- exitflag=4 : Solved To Acceptable Level.
-- exitflag=5 : Converged to a point of local infeasibility.
- For more details on exitflag see the ipopt documentation, go to http://www.coin-or.org/Ipopt/documentation/
- The output data structure contains detailed informations about the optimization process.
-It has type "struct" and contains the following fields.
- - output.Iterations: The number of iterations performed during the search
-- output.Cpu_Time: The total cpu-time spend during the search
-- output.Objective_Evaluation: The number of Objective Evaluations performed during the search
-- output.Dual_Infeasibility: The Dual Infeasiblity of the final soution
-- output.Message: The output message for the problem
- The lambda data structure contains the Lagrange multipliers at the end
-of optimization. In the current version the values are returned only when the the solution is optimal.
+ Search the minimum of a constrained optimization problem specified by:
+ Find the minimum of f(x) such that
+ 
+ fmincon calls Ipopt, an optimization library written in C++, to solve the Constrained Optimization problem.
+ Options
+The options allow the user to set various parameters of the Optimization problem. The syntax for the options is given by:
+ options= list("MaxIter", [---], "CpuTime", [---], "GradObj", ---, "Hessian", ---, "GradCon", ---);
+ The options should be defined as type "list" and consist of the following fields:
+ - MaxIter : A Scalar, specifying the maximum number of iterations that the solver should take.
+- CpuTime : A Scalar, specifying the maximum amount of CPU time in seconds that the solver should take.
+- GradObj : A function, representing the gradient function of the Objective in vector form.
+- Hessian : A function, representing the Hessian of the Lagrangian in the form of a symmetric matrix, with input parameters x, the objective factor and Lambda. Refer to Example 5 for the definition of the Lagrangian Hessian function.
+- GradCon : A function, representing the gradient of the non-linear constraints (both equality and inequality) of the problem. It is declared in such a way that the gradient of the non-linear inequality constraints is defined first as a separate matrix (cg of size m2 X n, or empty), followed by the gradient of the non-linear equality constraints as a separate matrix (ceqg of size m3 X n, or empty), where m2 and m3 are the numbers of non-linear inequality and equality constraints respectively.
+The default values for the various items are given as:
+ options = list("MaxIter", [3000], "CpuTime", [600]);
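As a sketch of the GradCon convention described above, consider a hypothetical problem in n = 3 variables with m2 = 2 non-linear inequality constraints and m3 = 1 equality constraint (these constraint functions are illustrative only, not taken from the examples below):

```scilab
// m2 = 2 inequality constraints (row vector c), m3 = 1 equality constraint (ceq)
function [c, ceq]=nlc(x)
    c   = [x(1)^2 + x(2)^2 - 4, x(3) - x(1)*x(2)];
    ceq = [x(1) + x(2) + x(3) - 1];
endfunction

// cg is (m2 X n): one row per inequality constraint, one column per variable.
// ceqg is (m3 X n): one row per equality constraint.
function [cg, ceqg]=cGrad(x)
    cg   = [2*x(1), 2*x(2), 0;
            -x(2), -x(1), 1];
    ceqg = [1, 1, 1];
endfunction

options = list("MaxIter", [3000], "CpuTime", [600], "GradCon", cGrad);
```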
+ The exitflag allows the user to know the status of the optimization which is returned by Ipopt. The values it can take and what they indicate are described below:
+ - 0 : Optimal Solution Found
+- 1 : Maximum Number of Iterations Exceeded. Output may not be optimal.
+- 2 : Maximum amount of CPU Time exceeded. Output may not be optimal.
+- 3 : Stop at Tiny Step.
+- 4 : Solved To Acceptable Level.
+- 5 : Converged to a point of local infeasibility.
+ For more details on exitflag, see the Ipopt documentation at http://www.coin-or.org/Ipopt/documentation/
+ The output data structure contains detailed information about the optimization process.
+It is of type "struct" and contains the following fields.
+ - output.Iterations: The number of iterations performed.
+- output.Cpu_Time : The total cpu-time taken.
+- output.Objective_Evaluation: The number of Objective Evaluations performed.
+- output.Dual_Infeasibility : The dual infeasibility of the final solution.
+- output.Message: The output message for the problem.
+ The lambda data structure contains the Lagrange multipliers at the end of optimization. In the current version, the values are returned only when the solution is optimal.
It has type "struct" and contains the following fields.
- lambda.lower: The Lagrange multipliers for the lower bound constraints.
- lambda.upper: The Lagrange multipliers for the upper bound constraints.
@@ -123,52 +125,99 @@ It has type "struct" and contains the following fields.
- lambda.ineqlin: The Lagrange multipliers for the linear inequality constraints.
- lambda.eqnonlin: The Lagrange multipliers for the non-linear equality constraints.
- lambda.ineqnonlin: The Lagrange multipliers for the non-linear inequality constraints.
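A short sketch of how the lambda structure might be read after a solve (assuming f, x0, A and b as in the examples below; the field values depend on the particular run):

```scilab
[xopt, fopt, exitflag, output, lambda] = fmincon(f, x0, A, b);
if exitflag == 0 then
    // A non-zero entry in lambda.ineqlin marks an active row of A*x <= b.
    disp(lambda.ineqlin);
    disp(lambda.lower);   // multipliers for the lower bound constraints
end
```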
-
+ A few examples displaying the various functionalities of fmincon have been provided below. You will find a series of problems and the appropriate code snippets to solve them.
-Examples
-
-
-
-
-
-
-
-
-
-
+Example
+ Here we solve a simple non-linear objective function, subjected to three linear inequality constraints.
+ Find x in R^2 such that it minimizes:
+ 
+
+
function y=f(x)
-y=-x(1)-x(2)/3;
+y=x(1)^2 - x(1)*x(2)/3 + x(2)^2;
endfunction
-
+
x0=[0 , 0];
-A=[1,1 ; 1,1/4 ; 1,-1 ; -1/4,-1 ; -1,-1 ; -1,1];
-b=[2;1;2;1;-1;2];
-Aeq=[1,1];
-beq=[2];
-lb=[];
-ub=[];
-nlc=[];
-
-function y=fGrad(x)
-y= [-1,-1/3];
+A=[1,1 ; 1,1/4 ; 1,-1];
+b=[2;1;1];
+[x,fval,exitflag,output,lambda,grad,hessian] =fmincon(f, x0,A,b)
+
+Example
+ Here we build up on the previous example by adding linear equality constraints.
+We add the following constraints to the problem specified above:
+ 
+
+
+
+function y=f(x)
+y=x(1)^2 - x(1)*x(2)/3 + x(2)^2;
endfunction
-
-function y=lHess(x, obj, lambda)
-y= obj*[0,0;0,0]
+
+x0=[0 , 0];
+A=[1,1 ; 1,1/4 ; -1,1];
+b=[2;1;2];
+
+Aeq = [1,-1; 2, 1];
+beq = [1;2];
+[x,fval,exitflag,output,lambda,grad,hessian] =fmincon(f, x0,A,b,Aeq,beq);
+
+Example
+ In this example, we proceed to add the upper and lower bounds to the objective function.
+ 
+
+
+
+function y=f(x)
+y=x(1)^2 - x(1)*x(2)/3 + x(2)^2;
endfunction
-
-options=list("GradObj", fGrad, "Hessian", lHess);
-
-[x,fval,exitflag,output,lambda,grad,hessian] =fmincon(f, x0,A,b,Aeq,beq,lb,ub,nlc,options)
- |  |  | |
+
+x0=[0 , 0];
+A=[1,1 ; 1,1/4 ; -1,1];
+b=[2;1;2];
+
+Aeq = [1,-1; 2, 1];
+beq = [1;2];
+
+lb = [-1;-%inf];
+ub = [%inf;1];
+[x,fval,exitflag,output,lambda,grad,hessian] =fmincon(f, x0,A,b,Aeq,beq,lb,ub);
-Examples
-
-
-
-
-
+Example
+ Finally, we add the non-linear constraints to the problem. Note that this is done differently from the way the linear constraints are defined.
+ 
+
+
+
+function y=f(x)
+y=x(1)^2 - x(1)*x(2)/3 + x(2)^2;
+endfunction
+
+x0=[0 , 0];
+A=[1,1 ; 1,1/4 ; -1,1];
+b=[2;1;2];
+
+Aeq = [1,-1; 2, 1];
+beq = [1;2];
+
+lb = [-1;-%inf];
+ub = [%inf;1];
+
+function [c, ceq]=nlc(x)
+c=[x(1)^2-1,x(1)^2+x(2)^2-1];
+ceq=[];
+endfunction
+[x,fval,exitflag,output,lambda,grad,hessian] =fmincon(f, x0,A,b,Aeq,beq,lb,ub,nlc);
+
+Example
+ Additional Functionality:
+ We can further enhance the functionality of fmincon by setting input options. We can pre-define the gradient of the objective function and/or the hessian of the lagrange function and thereby improve the speed of computation. This is elaborated on in example 5. We take the following problem and add simple non-linear constraints, specify the gradients and the hessian of the Lagrange Function. We also set solver parameters using the options.
+
+ 
+
+
function y=f(x)
y=x(1)*x(2)+x(2)*x(3);
@@ -190,7 +239,7 @@ It has type "struct" and contains the following fields.
function y=fGrad(x)
y= [x(2),x(1)+x(3),x(2)];
endfunction
-
+
function y=lHess(x, obj, lambda)
y= obj*[0,1,0;1,0,1;0,1,0] + lambda(1)*[2,0,0;0,-2,0;0,0,2] + lambda(2)*[2,0,0;0,2,0;0,0,2]
endfunction
@@ -203,84 +252,38 @@ It has type "struct" and contains the following fields.
options=list("MaxIter", [1500], "CpuTime", [500], "GradObj", fGrad, "Hessian", lHess,"GradCon", cGrad);
[x,fval,exitflag,output] =fmincon(f, x0,A,b,Aeq,beq,lb,ub,nlc,options)
-Examples
-
-
-
-
-
-
-
-
+Example
+
+ Infeasible Problems: Find x in R^2 such that it minimizes:
+ 
+
+
+
function y=f(x)
-y=-(x(1)^2+x(2)^2+x(3)^2);
+y=x(1)^2 - x(1)*x(2)/3 + x(2)^2;
endfunction
-
-x0=[0.1 , 0.1 , 0.1];
-A=[];
-b=[];
-Aeq=[];
-beq=[];
-lb=[];
-ub=[0,0,0];
-
-options=list("MaxIter", [1500], "CpuTime", [500]);
-
-[x,fval,exitflag,output,lambda,grad,hessian] =fmincon(f, x0,A,b,Aeq,beq,lb,ub,[],options)
- |  |  | |
+x0=[0 , 0];
+A=[1,1 ; 1,1/4 ; 1,-1];
+b=[2;1;1];
+Aeq = [1,1];
+beq = 3;
+[x,fval,exitflag,output,lambda,grad,hessian] =fmincon(f, x0,A,b,Aeq,beq)
-Examples
-
-
-
-
-
-
-
-
-
-
-
-
-
+Example
+ Unbounded Problems: Find x in R^2 such that it minimizes:
+ 
+
+
function y=f(x)
-y=x(1)*x(2)+x(2)*x(3);
-endfunction
-
-x0=[1,1,1];
-A=[];
-b=[];
-Aeq=[];
-beq=[];
-lb=[0 0.2,-%inf];
-ub=[0.6 %inf,1];
-
-function [c, ceq]=nlc(x)
-c=[x(1)^2-1,x(1)^2+x(2)^2-1,x(3)^2-1];
-ceq=[x(1)^3-0.5,x(2)^2+x(3)^2-0.75];
-endfunction
-
-function y=fGrad(x)
-y= [x(2),x(1)+x(3),x(2)];
-endfunction
-
-function y=lHess(x, obj, lambda)
-y= obj*[0,1,0;1,0,1;0,1,0] + lambda(1)*[2,0,0;0,0,0;0,0,0] + ..
-lambda(2)*[2,0,0;0,2,0;0,0,0] +lambda(3)*[0,0,0;0,0,0;0,0,2] + ..
-lambda(4)*[6*x(1),0,0;0,0,0;0,0,0] + lambda(5)*[0,0,0;0,2,0;0,0,2];
+y=-(x(1)^2 - x(1)*x(2)/3 + x(2)^2);
endfunction
-
-function [cg, ceqg]=cGrad(x)
-cg = [2*x(1),0,0;2*x(1),2*x(2),0;0,0,2*x(3)];
-ceqg = [3*x(1)^2,0,0;0,2*x(2),2*x(3)];
-endfunction
-
-options=list("MaxIter", [1500], "CpuTime", [500], "GradObj", fGrad, "Hessian", lHess,"GradCon", cGrad);
-
-[x,fval,exitflag,output,lambda,grad,hessian] =fmincon(f, x0,A,b,Aeq,beq,lb,ub,nlc,options)
- |  |  | |
+x0=[0 , 0];
+A=[-1,-1 ; 1,1];
+b=[-2;1];
+[x,fval,exitflag,output,lambda,grad,hessian] =fmincon(f, x0,A,b);
+
Authors
- R.Vidyadhar , Vignesh Kannan
@@ -295,7 +298,7 @@ It has type "struct" and contains the following fields.
|
- FOSSEE Optimization Toolbox
+ FOSSEE Optimization Toolbox
|
diff --git a/help/en_US/scilab_en_US_help/fminimax.html b/help/en_US/scilab_en_US_help/fminimax.html
index 22b6ad9..2aa5c38 100644
--- a/help/en_US/scilab_en_US_help/fminimax.html
+++ b/help/en_US/scilab_en_US_help/fminimax.html
@@ -16,7 +16,7 @@
|
- FOSSEE Optimization Toolbox
+ FOSSEE Optimization Toolbox
|
@@ -29,7 +29,7 @@
- FOSSEE Optimization Toolbox >> FOSSEE Optimization Toolbox > fminimax
+ FOSSEE Optimization Toolbox >> FOSSEE Optimization Toolbox > fminimax
fminimax
@@ -49,64 +49,72 @@
[xopt, fval, maxfval, exitflag, output]= fminimax(.....)
[xopt, fval, maxfval, exitflag, output, lambda]= fminimax(.....)
-Parameters
+ Input Parameters
- fun:
-
The function to be minimized. fun is a function that accepts a vector x and returns a vector F, the objective functions evaluated at x.
+ The function to be minimized. fun is a function that accepts a vector x and returns a vector F containing the objective functions evaluated at x.
- x0 :
-
a vector of double, contains initial guess of variables.
+ A vector of doubles, containing the starting values of variables of size (1 X n) or (n X 1) where 'n' is the number of Variables.
- A :
-
a matrix of double, represents the linear coefficients in the inequality constraints A⋅x ≤ b.
+ A matrix of doubles, containing the coefficients of linear inequality constraints of size (m X n) where 'm' is the number of linear inequality constraints.
- b :
-
a vector of double, represents the linear coefficients in the inequality constraints A⋅x ≤ b.
+ A vector of doubles, related to 'A' and containing the right-hand side of the linear inequality constraints, of size (m X 1).
- Aeq :
-
a matrix of double, represents the linear coefficients in the equality constraints Aeq⋅x = beq.
+ A matrix of doubles, containing the coefficients of linear equality constraints of size (m1 X n) where 'm1' is the number of linear equality constraints.
- beq :
-
a vector of double, represents the linear coefficients in the equality constraints Aeq⋅x = beq.
+ A vector of doubles, related to 'Aeq', representing the linear coefficients in the equality constraints of size (m1 X 1).
- lb :
-
a vector of double, contains lower bounds of the variables.
+ A vector of doubles, containing the lower bounds of the variables of size (1 X n) or (n X 1) where 'n' is the number of variables.
- ub :
-
a vector of double, contains upper bounds of the variables.
+ A vector of doubles, containing the upper bounds of the variables of size (1 X n) or (n X 1) where 'n' is the number of variables.
- nonlinfun:
-
function that computes the nonlinear inequality constraints c⋅x ≤ 0 and nonlinear equality constraints c⋅x = 0.
- - xopt :
-
a vector of double, the computed solution of the optimization problem.
- - fopt :
-
a double, the value of the function at x.
+ A function, representing the Non-linear Constraints functions (both Equality and Inequality) of the problem. It is declared in such a way that the non-linear inequality constraints (c) and the non-linear equality constraints (ceq) are defined as separate single-row vectors.
+ - options :
+
A list, containing the options for the user to specify. See below for details.
+ Outputs
+ - xopt :
+
A vector of doubles, containing the computed solution of the optimization problem.
+ - fval :
+
A vector of doubles, containing the values of the objective functions at the end of the optimization problem.
- maxfval:
-
a 1x1 matrix of doubles, the maximum value in vector fval
+ A double, representing the maximum value in the vector fval.
- exitflag :
-
The exit status. See below for details.
+ An integer, containing the flag which denotes the reason for termination of algorithm. See below for details.
- output :
-
The structure consist of statistics about the optimization. See below for details.
+ A structure, containing the information about the optimization. See below for details.
- lambda :
-
The structure consist of the Lagrange multipliers at the solution of problem. See below for details.
+ A structure, containing the Lagrange multipliers of lower bound, upper bound and constraints at the optimized point. See below for details.
Description
- fminimax minimizes the worst-case (largest) value of a set of multivariable functions, starting at an initial estimate. This is generally referred to as the minimax problem.
- 
- Currently, fminimax calls fmincon which uses the ip-opt algorithm.
+ fminimax minimizes the worst-case (largest) value of a set of multivariable functions, starting at an initial estimate, a problem generally referred to as the minimax problem.
+ 
+ Currently, fminimax calls fmincon which uses the Ipopt solver.
max-min problems can also be solved with fminimax, using the identity

- The options allows the user to set various parameters of the Optimization problem.
-It should be defined as type "list" and contains the following fields.
- - Syntax : options= list("MaxIter", [---], "CpuTime", [---], "GradObj", ---, "GradCon", ---);
-- MaxIter : a Scalar, containing the Maximum Number of Iteration that the solver should take.
-- CpuTime : a Scalar, containing the Maximum amount of CPU Time that the solver should take.
-- GradObj : a function, representing the gradient function of the Objective in Vector Form.
-- GradCon : a function, representing the gradient of the Non-Linear Constraints (both Equality and Inequality) of the problem. It is declared in such a way that gradient of non-linear inequality constraints are defined first as a separate Matrix (cg of size m2 X n or as an empty), followed by gradient of non-linear equality constraints as a separate Matrix (ceqg of size m2 X n or as an empty) where m2 & m3 are number of non-linear inequality and equality constraints respectively.
-- Default Values : options = list("MaxIter", [3000], "CpuTime", [600]);
- The objective function must have header :
+
+ Options
+The options allow the user to set various parameters of the Optimization problem. The syntax for the options is given by:
+ options= list("MaxIter", [---], "CpuTime", [---], "GradObj", ---, "GradCon", ---);
+ - MaxIter : A Scalar, specifying the Maximum Number of iterations that the solver should take.
+- CpuTime : A Scalar, specifying the Maximum amount of CPU Time in seconds that the solver should take.
+- GradObj : A function, representing the gradient function of the Objective in Vector Form.
+- GradCon : A function, representing the gradient of the Non-Linear Constraints (both Equality and Inequality) of the problem. It is declared in such a way that the gradient of the non-linear inequality constraints is defined first as a separate matrix (cg of size m2 X n or as an empty), followed by the gradient of the non-linear equality constraints as a separate matrix (ceqg of size m3 X n or as an empty), where m2 & m3 are the number of non-linear inequality and equality constraints respectively.
+The default values for the various items are given as:
+ options = list("MaxIter", [3000], "CpuTime", [600]);
+ The objective function must have a header :
F = fun(x)
where x is a n x 1 matrix of doubles and F is a m x 1 matrix of doubles where m is the total number of objective functions inside F.
On input, the variable x contains the current point and, on output, the variable F must contain the objective function values.
- By default, the gradient options for fminimax are turned off and and fmincon does the gradient opproximation of minmaxObjfun. In case the GradObj option is off and GradConstr option is on, fminimax approximates minmaxObjfun gradient using numderivative toolbox.
- If we can provide exact gradients, we should do so since it improves the convergence speed of the optimization algorithm.
- Furthermore, we must enable the "GradObj" option with the statement :
+ By default, the gradient options for fminimax are turned off and fmincon does the gradient approximation of minmaxObjfun. In case the GradObj option is off and the GradCon option is on, fminimax approximates the minmaxObjfun gradient using the numderivative toolbox.
+ Syntax
+ Some syntactic details about fminimax, including the syntax for the gradient, defining the non-linear constraints, and the constraint derivative function have been provided below:
+ If the user can provide exact gradients, they should be provided, since doing so improves the convergence speed of the optimization algorithm.
+ Furthermore, we can enable the "GradObj" option with the statement :
minimaxOptions = list("GradObj",fGrad);
This will let fminimax know that the exact gradient of the objective function is known, so that it can change the calling sequence to the objective function. Note that fGrad should be of size N x n, where n is the number of variables and N is the number of functions in the objective function.
- The constraint function must have header :
+ The constraint function must have header:
[c, ceq] = confun(x)
-where x is a n x 1 matrix of dominmaxUbles, c is a 1 x nni matrix of doubles and ceq is a 1 x nne matrix of doubles (nni : number of nonlinear inequality constraints, nne : number of nonlinear equality constraints).
+Where x is a n x 1 matrix of doubles, c is a 1 x nni matrix of doubles and ceq is a 1 x nne matrix of doubles (nni : number of nonlinear inequality constraints, nne : number of nonlinear equality constraints).
On input, the variable x contains the current point and, on output, the variable c must contain the nonlinear inequality constraints and ceq must contain the nonlinear equality constraints.
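A minimal sketch of this convention, assuming the illustrative constraints x(1)^2 <= 1 and x(1) + x(2) = 2 (these constraints and the function name are examples, not part of the toolbox's own demo problems):

```scilab
// Sketch of the constraint-function header described above.
// c collects the nonlinear inequalities (c(x) <= 0),
// ceq the nonlinear equalities (ceq(x) = 0).
function [c, ceq]=confun(x)
    c   = [x(1)^2 - 1];        // x(1)^2 <= 1
    ceq = [x(1) + x(2) - 2];   // x(1) + x(2) = 2
endfunction
```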
By default, the gradient options for fminimax are turned off and fmincon does the gradient approximation of confun. In case the GradObj option is on and the GradCon option is off, fminimax approximates the confun gradient using the numderivative toolbox.
If we can provide exact gradients, we should do so since it improves the convergence speed of the optimization algorithm.
@@ -116,35 +124,55 @@ This will let fminimax know that the exact gradient of the objective function is
The constraint derivative function must have header :
[dc,dceq] = confungrad(x) |  |  | |
where dc is a nni x n matrix of doubles and dceq is a nne x n matrix of doubles.
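For example, assuming the illustrative constraints c = x(1)^2 - 1 and ceq = x(1) + x(2) - 2 (so nni = 1, nne = 1 and n = 2), the derivative function could be sketched as:

```scilab
// Sketch of the constraint-derivative header: dc is nni x n, dceq is nne x n.
// The constraints themselves are assumptions for illustration only.
function [dc, dceq]=confungrad(x)
    dc   = [2*x(1), 0];   // gradient of x(1)^2 - 1
    dceq = [1, 1];        // gradient of x(1) + x(2) - 2
endfunction
```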
- The exitflag allows to know the status of the optimization which is given back by Ipopt.
- - exitflag=0 : Optimal Solution Found
-- exitflag=1 : Maximum Number of Iterations Exceeded. Output may not be optimal.
-- exitflag=2 : Maximum amount of CPU Time exceeded. Output may not be optimal.
-- exitflag=3 : Stop at Tiny Step.
-- exitflag=4 : Solved To Acceptable Level.
-- exitflag=5 : Converged to a point of local infeasibility.
- For more details on exitflag see the ipopt documentation, go to http://www.coin-or.org/Ipopt/documentation/
- The output data structure contains detailed informations about the optimization process.
-It has type "struct" and contains the following fields.
- - output.Iterations: The number of iterations performed during the search
-- output.Cpu_Time: The total cpu-time spend during the search
-- output.Objective_Evaluation: The number of Objective Evaluations performed during the search
-- output.Dual_Infeasibility: The Dual Infeasiblity of the final soution
- The lambda data structure contains the Lagrange multipliers at the end
-of optimization. In the current version the values are returned only when the the solution is optimal.
+ The exitflag allows the user to know the status of the optimization which is returned by Ipopt. The values it can take and what they indicate are described below:
+ - 0 : Optimal Solution Found
+- 1 : Maximum Number of Iterations Exceeded. Output may not be optimal.
+- 2 : Maximum amount of CPU Time exceeded. Output may not be optimal.
+- 3 : Stop at Tiny Step.
+- 4 : Solved To Acceptable Level.
+- 5 : Converged to a point of local infeasibility.
+ For more details on exitflag, see the Ipopt documentation at http://www.coin-or.org/Ipopt/documentation/
+ The output data structure contains detailed information about the optimization process.
+It is of type "struct" and contains the following fields.
+ - output.Iterations: The number of iterations performed.
+- output.Cpu_Time : The total cpu-time taken.
+- output.Objective_Evaluation: The number of Objective Evaluations performed.
+- output.Dual_Infeasibility: The Dual Infeasibility of the final solution.
+- output.Message: The output message for the problem.
+ The lambda data structure contains the Lagrange multipliers at the end of optimization. In the current version the values are returned only when the solution is optimal.
It has type "struct" and contains the following fields.
- lambda.lower: The Lagrange multipliers for the lower bound constraints.
- lambda.upper: The Lagrange multipliers for the upper bound constraints.
- lambda.eqlin: The Lagrange multipliers for the linear equality constraints.
- lambda.ineqlin: The Lagrange multipliers for the linear inequality constraints.
- lambda.eqnonlin: The Lagrange multipliers for the non-linear equality constraints.
-- lambda.ineqnonlin: The Lagrange multipliers for the non-linear inequality constraints.
-
+- lambda.ineqnonlin: The Lagrange multipliers for the non-linear inequality constraints.
+A few examples displaying the various functionalities of fminimax have been provided below. You will find a series of problems and the appropriate code snippets to solve them.
+Example
+ Here we solve a simple objective function, subjected to no constraints.
+ 
+
+
+
+function f=myfun(x)
+f(1)= 2*x(1)^2 + x(2)^2 - 48*x(1) - 40*x(2) + 304;
+f(2)= -x(1)^2 - 3*x(2)^2;
+f(3)= x(1) + 3*x(2) -18;
+f(4)= -x(1) - x(2);
+f(5)= x(1) + x(2) - 8;
+endfunction
+
+x0 = [0.1,0.1];
+
+[x,fval,maxfval,exitflag,output,lambda] = fminimax(myfun, x0)
+
+Example
+ We proceed to add simple linear inequality constraints.
- Examples
-
-
-
+
+
+
+
function f=myfun(x)
f(1)= 2*x(1)^2 + x(2)^2 - 48*x(1) - 40*x(2) + 304;
f(2)= -x(1)^2 - 3*x(2)^2;
@@ -154,18 +182,102 @@ It has type "struct" and contains the following fields.
endfunction
x0 = [0.1,0.1];
-
-xopt = [4 4]
-fopt = [0 -64 -2 -8 0]
-maxfopt = 0
+
+A=[1,1 ; 1,1/4 ; 1,-1];
+b=[2;1;1];
+
+[x,fval,maxfval,exitflag,output,lambda] = fminimax(myfun, x0,A,b)
+
+Example
+Here we build on the previous example by adding linear equality constraints.
+We add the following constraints to the problem specified above:
+ 
+
+
+
+function f=myfun(x)
+f(1)= 2*x(1)^2 + x(2)^2 - 48*x(1) - 40*x(2) + 304;
+f(2)= -x(1)^2 - 3*x(2)^2;
+f(3)= x(1) + 3*x(2) -18;
+f(4)= -x(1) - x(2);
+f(5)= x(1) + x(2) - 8;
+endfunction
+
+x0 = [0.1,0.1];
+
+A=[1,1 ; 1,1/4 ; 1,-1];
+b=[2;1;1];
+
+Aeq = [1,-1; 2, 1];
+beq = [1;2];
+
+[x,fval,maxfval,exitflag,output,lambda] = fminimax(myfun, x0,A,b,Aeq,beq)
+
+Example
+In this example, we proceed to add upper and lower bounds on the variables.
+ 
+
+
+
+function f=myfun(x)
+f(1)= 2*x(1)^2 + x(2)^2 - 48*x(1) - 40*x(2) + 304;
+f(2)= -x(1)^2 - 3*x(2)^2;
+f(3)= x(1) + 3*x(2) -18;
+f(4)= -x(1) - x(2);
+f(5)= x(1) + x(2) - 8;
+endfunction
+
+x0 = [0.1,0.1];
+
+A=[1,1 ; 1,1/4 ; 1,-1];
+b=[2;1;1];
+
+Aeq = [1,-1; 2, 1];
+beq = [1;2];
+
+lb = [-1;-%inf];
+ub = [%inf;1];
-[x,fval,maxfval,exitflag,output,lambda] = fminimax(myfun, x0)
- |  |  | |
+[x,fval,maxfval,exitflag,output,lambda] = fminimax(myfun, x0,A,b,Aeq,beq,lb,ub)
- Examples
-
-
-
+
+Example
+Finally, we add the non-linear constraints to the problem. Note that there is a notable difference in the way this is done as compared to defining the linear constraints.
+ 
+
+
+
+function f=myfun(x)
+f(1)= 2*x(1)^2 + x(2)^2 - 48*x(1) - 40*x(2) + 304;
+f(2)= -x(1)^2 - 3*x(2)^2;
+f(3)= x(1) + 3*x(2) -18;
+f(4)= -x(1) - x(2);
+f(5)= x(1) + x(2) - 8;
+endfunction
+
+x0 = [0.1,0.1];
+
+A=[1,1 ; 1,1/4 ; 1,-1];
+b=[2;1;1];
+
+Aeq = [1,-1; 2, 1];
+beq = [1;2];
+
+lb = [-1;-%inf];
+ub = [%inf;1];
+
+function [c, ceq]=nlc(x)
+c=[x(1)^2-1,x(1)^2+x(2)^2-1];
+ceq=[];
+endfunction
+
+[x,fval,maxfval,exitflag,output,lambda] = fminimax(myfun, x0,A,b,Aeq,beq,lb,ub,nlc)
+
+Example
+ We can further enhance the functionality of fminimax by setting input options. We can pre-define the gradient of the objective function and the jacobian matrix of the constraints, and thereby improve the speed of computation. We take the following problem, specify the gradients and the jacobian matrix of the constraints, and also set solver parameters using the options.
+ 
+
+
function f=myfun(x)
f(1)= 2*x(1)^2 + x(2)^2 - 48*x(1) - 40*x(2) + 304;
f(2)= -x(1)^2 - 3*x(2)^2;
@@ -173,6 +285,7 @@ It has type "struct" and contains the following fields.
f(4)= -x(1) - x(2);
f(5)= x(1) + x(2) - 8;
endfunction
+
function G=myfungrad(x)
G = [ 4*x(1) - 48, -2*x(1), 1, -1, 1;
@@ -200,7 +313,7 @@ It has type "struct" and contains the following fields.
DCeq = []'
endfunction
-minimaxOptions = list("GradObj",myfungrad,"GradCon",cgrad);
+minimaxOptions = list("MaxIter", [3000], "CpuTime", [600],"GradObj",myfungrad,"GradCon",cgrad);
x0 = [0,10];
@@ -210,6 +323,30 @@ It has type "struct" and contains the following fields.
[x,fval,maxfval,exitflag,output] = fminimax(myfun,x0,[],[],[],[],[],[], confun, minimaxOptions)
+Example
+Infeasible Problems: Find x in R^2 such that it minimizes the objective function used above under the following constraints:
+ 
+
+
+
+function f=myfun(x)
+f(1)= 2*x(1)^2 + x(2)^2 - 48*x(1) - 40*x(2) + 304;
+f(2)= -x(1)^2 - 3*x(2)^2;
+f(3)= x(1) + 3*x(2) -18;
+f(4)= -x(1) - x(2);
+f(5)= x(1) + x(2) - 8;
+endfunction
+
+x0 = [0.1,0.1];
+
+A=[1,1 ; 1,1/4 ; 1,-1];
+b=[2;1;1];
+
+Aeq = [1/3,-5; 2, 1];
+beq = [11;8];
+
+[x,fval,maxfval,exitflag,output,lambda] = fminimax(myfun, x0,A,b,Aeq,beq)
+
@@ -223,7 +360,7 @@ It has type "struct" and contains the following fields.
|
- FOSSEE Optimization Toolbox
+ FOSSEE Optimization Toolbox
|
diff --git a/help/en_US/scilab_en_US_help/fminunc.html b/help/en_US/scilab_en_US_help/fminunc.html
index d5f0786..b2e9059 100644
--- a/help/en_US/scilab_en_US_help/fminunc.html
+++ b/help/en_US/scilab_en_US_help/fminunc.html
@@ -16,11 +16,11 @@
|
- FOSSEE Optimization Toolbox
+ FOSSEE Optimization Toolbox
|
- linprog >>
+ intfminbnd >>
|
@@ -29,7 +29,7 @@
- FOSSEE Optimization Toolbox >> FOSSEE Optimization Toolbox > fminunc
+ FOSSEE Optimization Toolbox >> FOSSEE Optimization Toolbox > fminunc
fminunc
@@ -45,59 +45,81 @@
[xopt,fopt,exitflag,output,gradient]=fminunc(.....)
[xopt,fopt,exitflag,output,gradient,hessian]=fminunc(.....)
- Parameters
+ Input Parameters
- f :
-
a function, representing the objective function of the problem
+ A function, representing the objective function of the problem.
- x0 :
-
a vector of doubles, containing the starting of variables.
- - options:
-
a list, containing the option for user to specify. See below for details.
- - xopt :
-
a vector of doubles, the computed solution of the optimization problem.
+ A vector of doubles, containing the starting values of variables of size (1 X n) or (n X 1) where 'n' is the number of Variables.
+ - options :
+
A list, containing the options for user to specify. See below for details.
+ Outputs
+ - xopt :
+
A vector of doubles, containing the computed solution of the optimization problem.
- fopt :
-
a scalar of double, the function value at x.
+ A double, containing the function value at x.
- exitflag :
-
a scalar of integer, containing the flag which denotes the reason for termination of algorithm. See below for details.
- - output :
-
a structure, containing the information about the optimization. See below for details.
+ An integer, containing the flag which denotes the reason for termination of algorithm. See below for details.
+ - output :
+
A structure, containing the information about the optimization. See below for details.
- gradient :
-
a vector of doubles, containing the the gradient of the solution.
+ A vector of doubles, containing the gradient of the objective at the solution.
- hessian :
-
a matrix of doubles, containing the the hessian of the solution.
+ A matrix of doubles, containing the hessian of the Lagrangian at the solution.
Description
- Search the minimum of an unconstrained optimization problem specified by :
-Find the minimum of f(x) such that
+ Search the minimum of an unconstrained optimization problem specified by :
+ Find the minimum of f(x) such that

- The routine calls Ipopt for solving the Un-constrained Optimization problem, Ipopt is a library written in C++.
- The options allows the user to set various parameters of the Optimization problem.
-It should be defined as type "list" and contains the following fields.
- - Syntax : options= list("MaxIter", [---], "CpuTime", [---], "Gradient", ---, "Hessian", ---);
-- MaxIter : a Scalar, containing the Maximum Number of Iteration that the solver should take.
-- CpuTime : a Scalar, containing the Maximum amount of CPU Time that the solver should take.
-- Gradient : a function, representing the gradient function of the Objective in Vector Form.
-- Hessian : a function, representing the hessian function of the Objective in Symmetric Matrix Form.
-- Default Values : options = list("MaxIter", [3000], "CpuTime", [600]);
- The exitflag allows to know the status of the optimization which is given back by Ipopt.
- - exitflag=0 : Optimal Solution Found
-- exitflag=1 : Maximum Number of Iterations Exceeded. Output may not be optimal.
-- exitflag=2 : Maximum CPU Time exceeded. Output may not be optimal.
-- exitflag=3 : Stop at Tiny Step.
-- exitflag=4 : Solved To Acceptable Level.
-- exitflag=5 : Converged to a point of local infeasibility.
- For more details on exitflag see the ipopt documentation, go to http://www.coin-or.org/Ipopt/documentation/
- The output data structure contains detailed informations about the optimization process.
-It has type "struct" and contains the following fields.
- - output.Iterations: The number of iterations performed during the search
-- output.Cpu_Time: The total cpu-time spend during the search
-- output.Objective_Evaluation: The number of Objective Evaluations performed during the search
-- output.Dual_Infeasibility: The Dual Infeasiblity of the final soution
-- output.Message: The output message for the problem
+ fminunc calls Ipopt, an optimization library written in C++, to solve the unconstrained optimization problem.
+ Options
+The options allow the user to set various parameters of the optimization problem. The syntax for the options is given by:
+ options= list("MaxIter", [---], "CpuTime", [---], "GradObj", ---, "Hessian", ---);
+ - MaxIter : A Scalar, specifying the Maximum Number of Iterations that the solver should take.
+- CpuTime : A Scalar, specifying the Maximum amount of CPU Time in seconds that the solver should take.
+- GradObj : A function, representing the gradient function of the objective in Vector Form.
+- Hessian : A function, representing the hessian function of the Lagrangian in the form of a Symmetric Matrix with input parameters x, objective factor and lambda. Refer to Example 5 for the definition of the Lagrangian hessian function.
+The default values for the various items are given as:
+ options = list("MaxIter", [3000], "CpuTime", [600]);
+ The exitflag allows the user to know the status of the optimization which is returned by Ipopt. The values it can take and what they indicate are described below:
+ - 0 : Optimal Solution Found
+- 1 : Maximum Number of Iterations Exceeded. Output may not be optimal.
+- 2 : Maximum amount of CPU Time exceeded. Output may not be optimal.
+- 3 : Stop at Tiny Step.
+- 4 : Solved To Acceptable Level.
+- 5 : Converged to a point of local infeasibility.
+ For more details on exitflag, see the Ipopt documentation at http://www.coin-or.org/Ipopt/documentation/
+ The output data structure contains detailed information about the optimization process.
+It is of type "struct" and contains the following fields.
+ - output.Iterations: The number of iterations performed.
+- output.Cpu_Time : The total cpu-time taken.
+- output.Objective_Evaluation: The number of objective evaluations performed.
+- output.Dual_Infeasibility : The Dual Infeasibility of the final solution.
+- output.Message: The output message for the problem.
- Examples
-
-
+ A few examples displaying the various functionalities of fminunc have been provided below. You will find a series of problems and the appropriate code snippets to solve them.
+
+Example
+ We begin with the minimization of a simple non-linear function.
+ Find x in R^2 such that it minimizes:
+ 
+
+
+
+function y=f(x)
+y= x(1)^2 + x(2)^2;
+endfunction
+
+x0=[2,1];
+
+[xopt,fopt]=fminunc(f,x0)
+
+Example
+ We now look at the Rosenbrock function, a non-convex performance test problem for optimization routines. We use this example to illustrate how we can enhance the functionality of fminunc by setting input options. We pre-define the gradient of the objective function and the hessian, thereby improving the speed of computation, and also set solver parameters using the options.
+ 
+
+
function y=f(x)
y= 100*(x(2) - x(1)^2)^2 + (1-x(1))^2;
@@ -113,28 +135,16 @@ It has type "struct" and contains the following fields.
y= [1200*x(1)^2- 400*x(2) + 2, -400*x(1);-400*x(1), 200 ];
endfunction
-options=list("MaxIter", [1500], "CpuTime", [500], "Gradient", fGrad, "Hessian", fHess);
+options=list("MaxIter", [1500], "CpuTime", [500], "GradObj", fGrad, "Hessian", fHess);
[xopt,fopt,exitflag,output,gradient,hessian]=fminunc(f,x0,options)
-Examples
-
-
-
-function y=f(x)
-y= x(1)^2 + x(2)^2;
-endfunction
-
-x0=[2,1];
-
-[xopt,fopt]=fminunc(f,x0)
- |  |  | |
-
-Examples
-
-
-
+Example
+ Unbounded Problems: Find x in R^2 such that it minimizes:
+ 
+
+
function y=f(x)
y= -x(1)^2 - x(2)^2;
@@ -150,7 +160,7 @@ It has type "struct" and contains the following fields.
y= [-2,0;0,-2];
endfunction
-options=list("MaxIter", [1500], "CpuTime", [500], "Gradient", fGrad, "Hessian", fHess);
+options=list("MaxIter", [1500], "CpuTime", [500], "GradObj", fGrad, "Hessian", fHess);
[xopt,fopt,exitflag,output,gradient,hessian]=fminunc(f,x0,options)
@@ -167,11 +177,11 @@ It has type "struct" and contains the following fields.
|
- FOSSEE Optimization Toolbox
+ FOSSEE Optimization Toolbox
|
- linprog >>
+ intfminbnd >>
|
diff --git a/help/en_US/scilab_en_US_help/index.html b/help/en_US/scilab_en_US_help/index.html
index 59a4ed9..a12788e 100644
--- a/help/en_US/scilab_en_US_help/index.html
+++ b/help/en_US/scilab_en_US_help/index.html
@@ -31,8 +31,14 @@
FOSSEE Optimization Toolbox
-- FOSSEE Optimization Toolbox
-
- fgoalattain — Solves a multiobjective goal attainment problem
+- FOSSEE Optimization Toolbox
+
- cbcintlinprog — Solves a mixed integer linear programming constrained optimization problem in intlinprog format.
+
+
+
+
+
+- fgoalattain — Solves a multiobjective goal attainment problem
@@ -44,7 +50,7 @@
-- fmincon — Solves a multi-variable constrainted optimization problem
+- fmincon — Solves a multi-variable constrained optimization problem.
@@ -62,6 +68,36 @@
+- intfminbnd — Solves a multi-variable optimization problem on a bounded interval
+
+
+
+
+
+- intfmincon — Solves a constrained multi-variable mixed integer non linear programming problem
+
+
+
+
+
+- intfminimax — Solves minimax constraint problem
+
+
+
+
+
+- intfminunc — Solves an unconstrained multi-variable mixed integer non linear programming optimization problem
+
+
+
+
+
+- intqpipopt — Solves a linear quadratic problem.
+
+
+
+
+
- linprog — Solves a linear programming problem.
@@ -106,7 +142,7 @@
- symphonymat — Solves a mixed integer linear programming constrained optimization problem in intlinprog format.
-- Symphony Native Functions
+
- Symphony Native Functions
- sym_addConstr — Add a new constraint
diff --git a/help/en_US/scilab_en_US_help/intfminbnd.html b/help/en_US/scilab_en_US_help/intfminbnd.html
new file mode 100644
index 0000000..c494c4d
--- /dev/null
+++ b/help/en_US/scilab_en_US_help/intfminbnd.html
@@ -0,0 +1,178 @@
+
+
+ intfminbnd
+
+
+
+
+
+
+
+ FOSSEE Optimization Toolbox >> FOSSEE Optimization Toolbox > intfminbnd
+
+
+ intfminbnd
+ Solves a multi-variable optimization problem on a bounded interval
+
+
+Calling Sequence
+ xopt = intfminbnd(f,intcon,x1,x2)
+xopt = intfminbnd(f,intcon,x1,x2,options)
+[xopt,fopt] = intfminbnd(.....)
+[xopt,fopt,exitflag]= intfminbnd(.....)
+[xopt,fopt,exitflag,output]=intfminbnd(.....)
+[xopt,fopt,exitflag,gradient,hessian]=intfminbnd(.....)
+
+Input Parameters
+ - f :
+
A function, representing the objective function of the problem.
+ :
+ A vector, containing the lower bound of the variables of size (1 X n) or (n X 1) where n is number of variables. If it is empty it means that the lower bound is .
+ :
+ A vector, containing the upper bound of the variables of size (1 X n) or (n X 1) or (0 X 0) where n is the number of variables. If it is empty it means that the upper bound is .
+ - intcon :
+
A vector of integers, representing the variables that are constrained to be integers.
+ - options :
+
A list, containing the options for user to specify. See below for details.
+ Outputs
+ - xopt :
+
A vector of doubles, containing the computed solution of the optimization problem.
+ - fopt :
+
A double, containing the function value at x.
+ - exitflag :
+
An integer, containing the flag which denotes the reason for termination of algorithm. See below for details.
+ - gradient :
+
A vector of doubles, containing the gradient of the objective at the solution.
+ - hessian :
+
A matrix of doubles, containing the hessian of the Lagrangian at the solution.
+
+Description
+ Search the minimum of a multi-variable function on a bounded interval specified by :
+Find the minimum of f(x) such that
+ 
+ intfminbnd calls Bonmin, which is an optimization library written in C++, to solve the bound optimization problem.
+ Options
+The options allow the user to set various parameters of the Optimization problem. The syntax for the options is given by:
+ options= list("IntegerTolerance", [---], "MaxNodes",[---], "MaxIter", [---], "AllowableGap",[---], "CpuTime", [---],"gradobj", "off", "hessian", "off" );
+ - IntegerTolerance : A Scalar, specifying the tolerance within which a value is considered an integer.
+- MaxNodes : A Scalar, containing the maximum number of nodes that the solver should search.
+- CpuTime : A scalar, specifying the maximum amount of CPU Time in seconds that the solver should take.
+- AllowableGap : A Scalar, specifying the allowed gap between the computed solution and the objective value of the best known solution, at which the tree search can be stopped.
+- MaxIter : A scalar, specifying the maximum number of iterations that the solver should take.
+- gradobj : A string, to turn on or off the user supplied objective gradient.
+- hessian : A string, to turn on or off the user supplied objective hessian.
+ The default values for the various items are given as:
+ options = list('integertolerance',1d-06,'maxnodes',2147483647,'cputime',1d10,'allowablegap',0,'maxiter',2147483647,'gradobj',"off",'hessian',"off")
+
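As a sketch of overriding only a few of these options (the values chosen here are illustrative, not recommendations; any option not listed keeps its default):

```scilab
// Hypothetical sketch: tighten the integer tolerance and limit the search.
// Field names follow the Options syntax above; values are illustrative.
options = list("IntegerTolerance", [1d-06], "MaxNodes", [10000], "CpuTime", [60]);
```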
+ The exitflag allows the user to know the status of the optimization which is returned by Bonmin. The values it can take and what they indicate are described below:
+ - 0 : Optimal Solution Found
+- 1 : Maximum Number of Iterations Exceeded. Output may not be optimal.
+- 2 : Maximum amount of CPU Time exceeded. Output may not be optimal.
+- 3 : Stop at Tiny Step.
+- 4 : Solved To Acceptable Level.
+- 5 : Converged to a point of local infeasibility.
+ For more details on exitflag, see the Bonmin documentation at http://www.coin-or.org/Bonmin
+
+A few examples displaying the various functionalities of intfminbnd have been provided below. You will find a series of problems and the appropriate code snippets to solve them.
+Example
+ We start with a simple objective function. Find x in R^6 such that it minimizes:
+ 
+
+
+
+function y=f(x)
+y=0;
+for i =1:6
+y=y+sin(x(i));
+end
+endfunction
+
+x1 = [-2, -2, -2, -2, -2, -2];
+x2 = [2, 2, 2, 2, 2, 2];
+intcon = [2 3 4]
+[x,fval] =intfminbnd(f ,intcon, x1, x2)
+
+Example
+ Here we solve a bounded objective function in R^6. We use this example to illustrate how we can further enhance the functionality of intfminbnd by setting input options, here the maximum number of iterations and the maximum CPU time that the solver may take.
+
+
+function y=f(x)
+y=0;
+for i =1:6
+y=y+sin(x(i));
+end
+endfunction
+
+x1 = [-2, -2, -2, -2, -2, -2];
+x2 = [2, 2, 2, 2, 2, 2];
+intcon = [2 3 4]
+
+options=list("MaxIter",[1500],"CpuTime", [100])
+[x,fval] =intfminbnd(f ,intcon, x1, x2, options)
+
+
+Example
+ Unbounded Problems: Find x in R^2 such that it minimizes:
+ 
+
+
+
+function y=f(x)
+y=-((x(1)-1)^2+(x(2)-1)^2);
+endfunction
+
+x1 = [-%inf , -%inf];
+x2 = [ %inf , %inf];
+
+options=list("MaxIter",[1500],"CpuTime", [100]);
+intcon = [1 2];
+[x,fval,exitflag,output,lambda] =intfminbnd(f,intcon, x1, x2, options)
+
+
+
+
+
+
+
diff --git a/help/en_US/scilab_en_US_help/intfmincon.html b/help/en_US/scilab_en_US_help/intfmincon.html
new file mode 100644
index 0000000..8a19596
--- /dev/null
+++ b/help/en_US/scilab_en_US_help/intfmincon.html
@@ -0,0 +1,339 @@
+
+
+ intfmincon
+
+
+
+
+
+
+
+ FOSSEE Optimization Toolbox >> FOSSEE Optimization Toolbox > intfmincon
+
+
+ intfmincon
+ Solves a constrained multi-variable mixed integer non linear programming problem
+
+
+Calling Sequence
+ xopt = intfmincon(f,x0,intcon,A,b)
+xopt = intfmincon(f,x0,intcon,A,b,Aeq,beq)
+xopt = intfmincon(f,x0,intcon,A,b,Aeq,beq,lb,ub)
+xopt = intfmincon(f,x0,intcon,A,b,Aeq,beq,lb,ub,nlc)
+xopt = intfmincon(f,x0,intcon,A,b,Aeq,beq,lb,ub,nlc,options)
+[xopt,fopt] = intfmincon(.....)
+[xopt,fopt,exitflag]= intfmincon(.....)
+[xopt,fopt,exitflag,gradient]=intfmincon(.....)
+[xopt,fopt,exitflag,gradient,hessian]=intfmincon(.....)
+
+Input Parameters
+ - f :
+
A function, representing the objective function of the problem.
+ - x0 :
+
A vector of doubles, containing the starting values of variables of size (1 X n) or (n X 1) where 'n' is the number of variables.
+ - intcon :
+
A vector of integers, representing the variables that are constrained to be integers.
+ - A :
+
A matrix of doubles, containing the coefficients of linear inequality constraints of size (m X n) where 'm' is the number of linear inequality constraints.
+ - b :
+
A vector of doubles, related to 'A', containing the right-hand sides of the linear inequality constraints, of size (m X 1).
+ - Aeq :
+
A matrix of doubles, containing the coefficients of linear equality constraints of size (m1 X n) where 'm1' is the number of linear equality constraints.
+ - beq :
+
A vector of doubles, related to 'Aeq', containing the right-hand sides of the linear equality constraints, of size (m1 X 1).
+ - lb :
+
A vector of doubles, containing the lower bounds of the variables of size (1 X n) or (n X 1) where 'n' is the number of variables.
+ - ub :
+
A vector of doubles, containing the upper bounds of the variables of size (1 X n) or (n X 1) where 'n' is the number of variables.
+ - nlc :
+
A function, representing the non-linear constraint functions (both equality and inequality) of the problem. It is declared in such a way that the non-linear inequality constraints (c) and the non-linear equality constraints (ceq) are defined as separate single-row vectors.
+ - options :
+
A list, containing the options for the user to specify. See below for details.
+Outputs
+ - xopt :
+
A vector of doubles, containing the computed solution of the optimization problem.
+ - fopt :
+
A double, containing the value of the function at xopt.
+ - exitflag :
+
An integer, containing the flag which denotes the reason for termination of algorithm. See below for details.
+ - gradient :
+
A vector of doubles, containing the gradient of the objective at the solution.
+ - hessian :
+
A matrix of doubles, containing the hessian of the objective at the solution.
+
+Description
+ Searches for the minimum of a mixed-integer constrained optimization problem specified by:
+Find the minimum of f(x) such that
+ 
+ intfmincon calls Bonmin, an optimization library written in C++, to solve the Constrained Optimization problem.
+ Options
+The options allow the user to set various parameters of the Optimization problem. The syntax for the options is given by:
+ options= list("IntegerTolerance", [---], "MaxNodes", [---], "MaxIter", [---], "AllowableGap", [---], "CpuTime", [---], "gradobj", "off", "hessian", "off");
+ - IntegerTolerance : A scalar, specifying the tolerance within which a value is considered an integer.
+- MaxNodes : A scalar, containing the maximum number of nodes that the solver should search.
+- CpuTime : A scalar, specifying the maximum amount of CPU time in seconds that the solver should take.
+- AllowableGap : A scalar, specifying the gap between the computed solution and the objective value of the best known solution at which the tree search can be stopped.
+- MaxIter : A scalar, specifying the maximum number of iterations that the solver should take.
+- gradobj : A string, to turn on or off the user-supplied objective gradient.
+- hessian : A string, to turn on or off the user-supplied objective hessian.
+ The default values for the various items are given as:
+ options = list('integertolerance',1d-06,'maxnodes',2147483647,'cputime',1d10,'allowablegap',0,'maxiter',2147483647,'gradobj',"off",'hessian',"off")
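+For instance, keeping the remaining defaults, a user might tighten the integer tolerance and cap the run time as sketched below (the option values here are illustrative, not recommendations):
+
+```scilab
+// Illustrative only: override two options; unspecified options keep their defaults
+options = list("IntegerTolerance", 1d-4, "CpuTime", 60);
+
+// The options list is then passed as the last argument, e.g.:
+// [xopt, fopt] = intfmincon(f, x0, intcon, A, b, Aeq, beq, lb, ub, nlc, options)
+```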
+
+ The exitflag indicates the status of the optimization, as returned by Bonmin.
+ - 0 : Optimal Solution Found
+- 1 : Infeasible Solution.
+- 2 : Objective Function is Continuous Unbounded.
+- 3 : Limit Exceeded.
+- 4 : User Interrupt.
+- 5 : MINLP Error.
+ For more details on exitflag, see the Bonmin documentation which can be found on http://www.coin-or.org/Bonmin
+
+A few examples displaying the various functionalities of intfmincon have been provided below. You will find a series of problems and the appropriate code snippets to solve them.
+Example
+ Here we solve a simple objective function, subjected to three linear inequality constraints.
+ Find x in R^2 such that it minimizes:
+ 
+
+
+
+function [y, dy]=f(x)
+y=-x(1)-x(2)/3;
+dy= [-1,-1/3];
+endfunction
+
+x0=[0 , 0];
+
+intcon = [1];
+
+A=[1,1 ; 1,1/4 ; 1,-1 ;];
+b=[2;1;2];
+
+[x,fval,exitflag,grad,hessian] =intfmincon(f, x0,intcon,A,b)
+
+Example
+ Here we build up on the previous example by adding linear equality constraints.
+We add the following constraints to the problem specified above:
+ 
+
+
+
+function [y, dy]=f(x)
+y=-x(1)-x(2)/3;
+dy= [-1,-1/3];
+endfunction
+
+x0=[0 , 0];
+
+intcon = [1];
+
+A=[1,1 ; 1,1/4 ; 1,-1 ;];
+b=[2;1;2];
+
+Aeq=[1,-1;2,1];
+beq=[1;2];
+
+[x,fval,exitflag,grad,hessian] =intfmincon(f, x0,intcon,A,b,Aeq,beq)
+
+Example
+ In this example, we proceed to add the upper and lower bounds to the objective function.
+ Find x in R^2 such that it minimizes:
+ 
+
+
+
+function [y, dy]=f(x)
+y=-x(1)-x(2)/3;
+dy= [-1,-1/3];
+endfunction
+
+x0=[0 , 0];
+
+intcon = [1];
+
+A=[1,1 ; 1,1/4 ; 1,-1 ;];
+b=[2;1;2];
+
+Aeq=[1,-1;2,1];
+beq=[1;2];
+
+lb=[-1, -%inf];
+ub=[%inf, 1];
+
+[x,fval,exitflag,grad,hessian] =intfmincon(f, x0,intcon,A,b,Aeq,beq,lb,ub)
+
+Example
+ Finally, we add the non-linear constraints to the problem. Note that this is done differently from the way the linear constraints are defined.
+ 
+
+
+
+function [y, dy]=f(x)
+y=x(1)*x(2)+x(2)*x(3);
+dy= [x(2),x(1)+x(3),x(2)];
+endfunction
+
+x0=[0.1 , 0.1 , 0.1];
+intcon = [2]
+A=[];
+b=[];
+Aeq=[];
+beq=[];
+lb=[];
+ub=[];
+
+function [c, ceq, cg, cgeq]=nlc(x)
+c = [x(1)^2 - x(2)^2 + x(3)^2 - 2 , x(1)^2 + x(2)^2 + x(3)^2 - 10];
+ceq = [];
+cg=[2*x(1) , -2*x(2) , 2*x(3) ; 2*x(1) , 2*x(2) , 2*x(3)];
+cgeq=[];
+endfunction
+
+options=list("MaxIter", [1500], "CpuTime", [500], "GradObj", "on","GradCon", "on");
+
+[x,fval,exitflag,output] =intfmincon(f, x0,intcon,A,b,Aeq,beq,lb,ub,nlc,options)
+
+Example
+ We can further enhance the functionality of intfmincon by setting input options. We can pre-define the gradient of the objective function and/or the hessian of the Lagrange function and thereby improve the speed of computation. Here we take the preceding problem with its non-linear constraints, specify the gradients of the objective and the constraints, and set solver parameters using the options.
+
+
+
+function [y, dy]=f(x)
+y=x(1)*x(2)+x(2)*x(3);
+dy= [x(2),x(1)+x(3),x(2)];
+endfunction
+
+x0=[0.1 , 0.1 , 0.1];
+intcon = [2]
+A=[];
+b=[];
+Aeq=[];
+beq=[];
+lb=[];
+ub=[];
+
+function [c, ceq, cg, cgeq]=nlc(x)
+c = [x(1)^2 - x(2)^2 + x(3)^2 - 2 , x(1)^2 + x(2)^2 + x(3)^2 - 10];
+ceq = [];
+cg=[2*x(1) , -2*x(2) , 2*x(3) ; 2*x(1) , 2*x(2) , 2*x(3)];
+cgeq=[];
+endfunction
+
+options=list("MaxIter", [1500], "CpuTime", [500], "GradObj", "on","GradCon", "on");
+
+[x,fval,exitflag,output] =intfmincon(f, x0,intcon,A,b,Aeq,beq,lb,ub,nlc,options)
+
+Example
+ Infeasible Problems: Find x in R^3 such that it minimizes:
+ 
+
+
+
+function [y, dy]=f(x)
+y=x(1)*x(2)+x(2)*x(3);
+dy= [x(2),x(1)+x(3),x(2)];
+endfunction
+
+x0=[1,1,1];
+intcon = [2]
+A=[];
+b=[];
+Aeq=[];
+beq=[];
+lb=[0 0.2,-%inf];
+ub=[0.6 %inf,1];
+
+function [c, ceq, cg, cgeq]=nlc(x)
+c=[x(1)^2-1,x(1)^2+x(2)^2-1,x(3)^2-1];
+ceq=[x(1)^3-0.5,x(2)^2+x(3)^2-0.75];
+cg = [2*x(1),0,0;2*x(1),2*x(2),0;0,0,2*x(3)];
+cgeq = [3*x(1)^2,0,0;0,2*x(2),2*x(3)];
+endfunction
+
+options=list("MaxIter", [1500], "CpuTime", [500], "GradObj", "on","GradCon", "on");
+
+[x,fval,exitflag,grad,hessian] =intfmincon(f, x0,intcon,A,b,Aeq,beq,lb,ub,nlc,options)
+
+Example
+ Unbounded Problems: Find x in R^3 such that it minimizes:
+ 
+
+
+
+
+
+
+
+
+
+
+function y=f(x)
+y=-(x(1)^2+x(2)^2+x(3)^2);
+endfunction
+
+x0=[0.1 , 0.1 , 0.1];
+intcon = [3]
+A=[];
+b=[];
+Aeq=[];
+beq=[];
+lb=[];
+ub=[0,0,0];
+
+options=list("MaxIter", [1500], "CpuTime", [500]);
+
+[x,fval,exitflag,grad,hessian] =intfmincon(f, x0,intcon,A,b,Aeq,beq,lb,ub,[],options)
+
+
+
+
+
+
+
+
diff --git a/help/en_US/scilab_en_US_help/intfminimax.html b/help/en_US/scilab_en_US_help/intfminimax.html
new file mode 100644
index 0000000..8c3e809
--- /dev/null
+++ b/help/en_US/scilab_en_US_help/intfminimax.html
@@ -0,0 +1,364 @@
+
+
+ intfminimax
+
+
+
+
+
+
+
+ FOSSEE Optimization Toolbox >> FOSSEE Optimization Toolbox > intfminimax
+
+
+ intfminimax
+ Solves a mixed-integer minimax constraint problem
+
+
+Calling Sequence
+ xopt = intfminimax(fun,x0,intcon)
+xopt = intfminimax(fun,x0,intcon,A,b)
+xopt = intfminimax(fun,x0,intcon,A,b,Aeq,beq)
+xopt = intfminimax(fun,x0,intcon,A,b,Aeq,beq,lb,ub)
+xopt = intfminimax(fun,x0,intcon,A,b,Aeq,beq,lb,ub,nonlinfun)
+xopt = intfminimax(fun,x0,intcon,A,b,Aeq,beq,lb,ub,nonlinfun,options)
+[xopt, fval] = intfminimax(.....)
+[xopt, fval, maxfval]= intfminimax(.....)
+[xopt, fval, maxfval, exitflag]= intfminimax(.....)
+
+Input Parameters
+ - fun:
+
The function to be minimized. fun is a function that has a vector x as an input argument, and contains the objective functions evaluated at x.
+ - x0 :
+
A vector of doubles, containing the starting values of variables of size (1 X n) or (n X 1) where 'n' is the number of Variables.
+ - A :
+
A matrix of doubles, containing the coefficients of linear inequality constraints of size (m X n) where 'm' is the number of linear inequality constraints.
+ - b :
+
A vector of doubles, related to 'A', containing the right-hand sides of the linear inequality constraints, of size (m X 1).
+ - Aeq :
+
A matrix of doubles, containing the coefficients of linear equality constraints of size (m1 X n) where 'm1' is the number of linear equality constraints.
+ - beq :
+
A vector of doubles, related to 'Aeq', containing the right-hand sides of the linear equality constraints, of size (m1 X 1).
+ - intcon :
+
A vector of integers, representing the variables that are constrained to be integers.
+ - lb :
+
A vector of doubles, containing the lower bounds of the variables of size (1 X n) or (n X 1) where 'n' is the number of variables.
+ - ub :
+
A vector of doubles, containing the upper bounds of the variables of size (1 X n) or (n X 1) where 'n' is the number of variables.
+ - nonlinfun:
+
A function, representing the non-linear constraint functions (both equality and inequality) of the problem. It is declared in such a way that the non-linear inequality constraints (c) and the non-linear equality constraints (ceq) are defined as separate single-row vectors.
+ - options :
+
A list, containing the options for the user to specify. See below for details.
+Outputs
+ - xopt :
+
A vector of doubles, containing the computed solution of the optimization problem.
+ - fopt :
+
A vector of doubles, containing the values of the objective functions at the end of the optimization problem.
+ - maxfval:
+
A double, representing the maximum value in the vector fval.
+ - exitflag :
+
An integer, containing the flag which denotes the reason for termination of algorithm. See below for details.
+ - output :
+
A structure, containing the information about the optimization. See below for details.
+ - lambda :
+
A structure, containing the Lagrange multipliers of lower bound, upper bound and constraints at the optimized point. See below for details.
+
+Description
+ intfminimax minimizes the worst-case (largest) value of a set of multivariable functions, starting at an initial estimate. This is generally referred to as the minimax problem.
+ 
+ max-min problems can also be solved with intfminimax, using the identity
+ 
+ Currently, intfminimax calls intfmincon, which uses Bonmin, an optimization library written in C++.
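+As a sketch of the identity above, a max-min problem can be posed by negating each objective and minimizing the maximum of the negated set. The objectives g1 and g2 below are made up purely for illustration:
+
+```scilab
+// Hypothetical objectives g1, g2; we maximize min(g1, g2) by
+// minimizing max(-g1, -g2) and negating the result.
+function f=negfun(x)
+f(1) = -(x(1) + 2*x(2));   // -g1
+f(2) = -(3*x(1) - x(2));   // -g2
+endfunction
+
+x0 = [1, 1];
+intcon = [1];
+lb = [0, 0];
+ub = [4, 4];
+
+[x, fval, maxfval] = intfminimax(negfun, x0, intcon, [], [], [], [], lb, ub);
+// The maximin value of the original problem is -maxfval.
+```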
+
+ Options
+ The options allow the user to set various parameters of the Optimization problem. The syntax for the options is given by:
+ options= list("IntegerTolerance", [---], "MaxNodes", [---], "MaxIter", [---], "AllowableGap", [---], "CpuTime", [---], "gradobj", "off", "hessian", "off");
+ - IntegerTolerance : A scalar, specifying the tolerance within which a value is considered an integer.
+- MaxNodes : A scalar, containing the maximum number of nodes that the solver should search.
+- CpuTime : A scalar, specifying the maximum amount of CPU time in seconds that the solver should take.
+- AllowableGap : A scalar, specifying the gap between the computed solution and the objective value of the best known solution at which the tree search can be stopped.
+- MaxIter : A scalar, specifying the maximum number of iterations that the solver should take.
+- gradobj : A string, to turn on or off the user-supplied objective gradient.
+- hessian : A string, to turn on or off the user-supplied objective hessian.
+ The default values for the various items are given as:
+ options = list('integertolerance',1d-06,'maxnodes',2147483647,'cputime',1d10,'allowablegap',0,'maxiter',2147483647,'gradobj',"off",'hessian',"off")
+ The objective function must have the header:
+ F = fun(x)
+where x is an n x 1 matrix of doubles and F is an m x 1 matrix of doubles, where m is the total number of objective functions inside F.
+On input, the variable x contains the current point and, on output, the variable F must contain the objective function values.
+ By default, the gradient options for intfminimax are turned off, and intfmincon approximates the gradient of minmaxObjfun numerically. In case the GradObj option is off and the GradCon option is on, intfminimax approximates the minmaxObjfun gradient using the numderivative toolbox.
+ If we can provide exact gradients, we should do so, since this improves the convergence speed of the optimization algorithm.
+
+ The exitflag indicates the status of the optimization, as returned by Bonmin.
+ - 0 : Optimal Solution Found
+- 1 : Infeasible Solution.
+- 2 : Objective Function is Continuous Unbounded.
+- 3 : Limit Exceeded.
+- 4 : User Interrupt.
+- 5 : MINLP Error.
+ For more details on exitflag, see the Bonmin documentation which can be found on http://www.coin-or.org/Bonmin
+
+A few examples displaying the various functionalities of intfminimax have been provided below. You will find a series of problems and the appropriate code snippets to solve them.
+Example
+ Here we solve a simple objective function, subjected to no constraints.
+ 
+
+
+
+function f=myfun(x)
+f(1)= 2*x(1)^2 + x(2)^2 - 48*x(1) - 40*x(2) + 304;
+f(2)= -x(1)^2 - 3*x(2)^2;
+f(3)= x(1) + 3*x(2) -18;
+f(4)= -x(1) - x(2);
+f(5)= x(1) + x(2) - 8;
+endfunction
+
+x0 = [0.1,0.1];
+intcon = [1];
+
+// Expected answer:
+// xopt = [4 4], fopt = [0 -64 -2 -8 0], maxfval = 0
+
+[x,fval,maxfval,exitflag] = intfminimax(myfun, x0, intcon)
+
+Example
+ We proceed to add simple linear inequality constraints.
+
+ 
+
+
+
+function f=myfun(x)
+f(1)= 2*x(1)^2 + x(2)^2 - 48*x(1) - 40*x(2) + 304;
+f(2)= -x(1)^2 - 3*x(2)^2;
+f(3)= x(1) + 3*x(2) -18;
+f(4)= -x(1) - x(2);
+f(5)= x(1) + x(2) - 8;
+endfunction
+
+x0 = [0.1,0.1];
+
+A=[1,1 ; 1,1/4 ; 1,-1];
+b=[2;1;1];
+
+intcon = [1];
+
+[x,fval,maxfval,exitflag,output,lambda] = intfminimax(myfun, x0, intcon, A, b)
+
+Example
+Here we build up on the previous example by adding linear equality constraints.
+We add the following constraints to the problem specified above:
+ 
+
+
+
+function f=myfun(x)
+f(1)= 2*x(1)^2 + x(2)^2 - 48*x(1) - 40*x(2) + 304;
+f(2)= -x(1)^2 - 3*x(2)^2;
+f(3)= x(1) + 3*x(2) -18;
+f(4)= -x(1) - x(2);
+f(5)= x(1) + x(2) - 8;
+endfunction
+
+x0 = [0.1,0.1];
+
+A=[1,1 ; 1,1/4 ; 1,-1];
+b=[2;1;1];
+
+Aeq = [1,-1; 2, 1];
+beq = [1;2];
+
+intcon = [1];
+
+[x,fval,maxfval,exitflag,output,lambda] = intfminimax(myfun, x0, intcon, A, b, Aeq, beq)
+
+Example
+In this example, we proceed to add the upper and lower bounds to the objective function.
+ 
+
+
+
+function f=myfun(x)
+f(1)= 2*x(1)^2 + x(2)^2 - 48*x(1) - 40*x(2) + 304;
+f(2)= -x(1)^2 - 3*x(2)^2;
+f(3)= x(1) + 3*x(2) -18;
+f(4)= -x(1) - x(2);
+f(5)= x(1) + x(2) - 8;
+endfunction
+
+x0 = [0.1,0.1];
+
+A=[1,1 ; 1,1/4 ; 1,-1];
+b=[2;1;1];
+
+Aeq = [1,-1; 2, 1];
+beq = [1;2];
+
+lb = [-1;-%inf];
+ub = [%inf;1];
+
+intcon = [1];
+
+[x,fval,maxfval,exitflag,output,lambda] = intfminimax(myfun, x0, intcon, A, b, Aeq, beq, lb, ub)
+
+
+Example
+Finally, we add the non-linear constraints to the problem. Note that this is done differently from the way the linear constraints are defined.
+ 
+
+
+
+function f=myfun(x)
+f(1)= 2*x(1)^2 + x(2)^2 - 48*x(1) - 40*x(2) + 304;
+f(2)= -x(1)^2 - 3*x(2)^2;
+f(3)= x(1) + 3*x(2) -18;
+f(4)= -x(1) - x(2);
+f(5)= x(1) + x(2) - 8;
+endfunction
+
+x0 = [0.1,0.1];
+
+A=[1,1 ; 1,1/4 ; 1,-1];
+b=[2;1;1];
+
+Aeq = [1,-1; 2, 1];
+beq = [1;2];
+
+lb = [-1;-%inf];
+ub = [%inf;1];
+
+function [c, ceq]=nlc(x)
+c=[x(1)^2-1,x(1)^2+x(2)^2-1];
+ceq=[];
+endfunction
+intcon = [1];
+
+[x,fval,maxfval,exitflag,output,lambda] = intfminimax(myfun, x0, intcon, A, b, Aeq, beq, lb, ub, nlc)
+
+Example
+ We can further enhance the functionality of intfminimax by setting input options. We can pre-define the gradient of the objective function and/or the hessian of the Lagrange function and thereby improve the speed of computation. Here we take the following problem, specify the gradients and the jacobian matrix of the constraints, and set solver parameters using the options.
+ 
+
+
+function f=myfun(x)
+f(1)= 2*x(1)^2 + x(2)^2 - 48*x(1) - 40*x(2) + 304;
+f(2)= -x(1)^2 - 3*x(2)^2;
+f(3)= x(1) + 3*x(2) -18;
+f(4)= -x(1) - x(2);
+f(5)= x(1) + x(2) - 8;
+endfunction
+
+
+function G=myfungrad(x)
+G = [ 4*x(1) - 48, -2*x(1), 1, -1, 1;
+2*x(2) - 40, -6*x(2), 3, -1, 1; ]'
+endfunction
+
+
+function [c, ceq]=confun(x)
+
+c = [1.5 + x(1)*x(2) - x(1) - x(2), -x(1)*x(2) - 10]
+
+ceq=[]
+endfunction
+
+function [DC, DCeq]=cgrad(x)
+DC= [
+x(2)-1, -x(2)
+x(1)-1, -x(1)
+]'
+DCeq = []'
+endfunction
+
+Options = list("MaxIter", [3000], "CpuTime", [600],"GradObj",myfungrad,"GradCon",cgrad);
+
+x0 = [0,10];
+intcon = [1];
+
+// Expected answer:
+// xopt = [0.92791 7.93551], fopt = [6.73443 -189.778 6.73443 -8.86342 0.86342], maxfval = 6.73443
+
+[x,fval,maxfval,exitflag,output] = intfminimax(myfun, x0, intcon, [], [], [], [], [], [], confun, Options)
+
+Example
+Infeasible Problems: Find x in R^2 such that it minimizes the objective function used above under the following constraints:
+ 
+
+
+
+function f=myfun(x)
+f(1)= 2*x(1)^2 + x(2)^2 - 48*x(1) - 40*x(2) + 304;
+f(2)= -x(1)^2 - 3*x(2)^2;
+f(3)= x(1) + 3*x(2) -18;
+f(4)= -x(1) - x(2);
+f(5)= x(1) + x(2) - 8;
+endfunction
+
+x0 = [0.1,0.1];
+
+A=[1,1 ; 1,1/4 ; 1,-1];
+b=[2;1;1];
+
+Aeq = [1/3,-5; 2, 1];
+beq = [11;8];
+
+intcon = [1];
+
+[x,fval,maxfval,exitflag,output,lambda] = intfminimax(myfun, x0, intcon, A, b, Aeq, beq)
+
+
+
+
+
+
+
+
diff --git a/help/en_US/scilab_en_US_help/intfminunc.html b/help/en_US/scilab_en_US_help/intfminunc.html
new file mode 100644
index 0000000..a188795
--- /dev/null
+++ b/help/en_US/scilab_en_US_help/intfminunc.html
@@ -0,0 +1,173 @@
+
+
+ intfminunc
+
+
+
+
+
+
+
+ FOSSEE Optimization Toolbox >> FOSSEE Optimization Toolbox > intfminunc
+
+
+ intfminunc
+ Solves an unconstrained multi-variable mixed-integer nonlinear programming optimization problem
+
+
+Calling Sequence
+ xopt = intfminunc(f,x0)
+xopt = intfminunc(f,x0,intcon)
+xopt = intfminunc(f,x0,intcon,options)
+[xopt,fopt] = intfminunc(.....)
+[xopt,fopt,exitflag]= intfminunc(.....)
+[xopt,fopt,exitflag,gradient,hessian]= intfminunc(.....)
+
+Input Parameters
+ - f :
+
A function, representing the objective function of the problem.
+ - x0 :
+
A vector of doubles, containing the starting values of variables of size (1 X n) or (n X 1) where 'n' is the number of Variables.
+ - intcon :
+
A vector of integers, representing the variables that are constrained to be integers.
+ - options :
+
A list, containing the options for the user to specify. See below for details.
+Outputs
+ - xopt :
+
A vector of doubles, containing the computed solution of the optimization problem.
+ - fopt :
+
A double, containing the function value at xopt.
+ - exitflag :
+
An integer, containing the flag which denotes the reason for termination of algorithm. See below for details.
+ - gradient :
+
A vector of doubles, containing the objective's gradient of the solution.
+ - hessian :
+
A matrix of doubles, containing the Lagrangian's hessian of the solution.
+
+Description
+ Searches for the minimum of a multi-variable mixed-integer nonlinear unconstrained optimization problem specified by:
+Find the minimum of f(x) such that
+ 
+ intfminunc calls Bonmin, which is an optimization library written in C++, to solve the unconstrained optimization problem.
+ Options
+The options allow the user to set various parameters of the Optimization problem. The syntax for the options is given by:
+ options= list("IntegerTolerance", [---], "MaxNodes", [---], "MaxIter", [---], "AllowableGap", [---], "CpuTime", [---], "gradobj", "off", "hessian", "off");
+ - IntegerTolerance : A scalar, specifying the tolerance within which a value is considered an integer.
+- MaxNodes : A scalar, containing the maximum number of nodes that the solver should search.
+- CpuTime : A scalar, specifying the maximum amount of CPU time in seconds that the solver should take.
+- AllowableGap : A scalar, specifying the gap between the computed solution and the objective value of the best known solution at which the tree search can be stopped.
+- MaxIter : A scalar, specifying the maximum number of iterations that the solver should take.
+- gradobj : A string, to turn on or off the user-supplied objective gradient.
+- hessian : A string, to turn on or off the user-supplied objective hessian.
+ The default values for the various items are given as:
+ options = list('integertolerance',1d-06,'maxnodes',2147483647,'cputime',1d10,'allowablegap',0,'maxiter',2147483647,'gradobj',"off",'hessian',"off")
+
+ The exitflag indicates the status of the optimization, as returned by Bonmin.
+ - 0 : Optimal Solution Found
+- 1 : Infeasible Solution.
+- 2 : Objective Function is Continuous Unbounded.
+- 3 : Limit Exceeded.
+- 4 : User Interrupt.
+- 5 : MINLP Error.
+ For more details on exitflag, see the Bonmin documentation which can be found on http://www.coin-or.org/Bonmin
+
+A few examples displaying the various functionalities of intfminunc have been provided below. You will find a series of problems and the appropriate code snippets to solve them.
+
+Example
+ We begin with the minimization of a simple non-linear function.
+ Find x in R^2 such that it minimizes:
+ 
+
+
+
+function y=f(x)
+y= x(1)^2 + x(2)^2;
+endfunction
+
+x0=[2,1];
+intcon = [1];
+[xopt,fopt]=intfminunc(f,x0,intcon)
+
+Example
+ We now look at the Rosenbrock function, a non-convex performance test problem for optimization routines. We use this example to illustrate how we can enhance the functionality of intfminunc by setting input options. We can pre-define the gradient of the objective function and/or the hessian of the Lagrange function and thereby improve the speed of computation, as demonstrated in the next example. Here we set solver parameters using the options.
+ 
+
+
+
+function y=f(x)
+y= 100*(x(2) - x(1)^2)^2 + (1-x(1))^2;
+endfunction
+
+x0=[-1,2];
+intcon = [2]
+
+options=list("MaxIter", [1500], "CpuTime", [500]);
+
+[xopt,fopt,exitflag,gradient,hessian]=intfminunc(f,x0,intcon,options)
+
+
+
+Example
+ Unbounded Problems: Find x in R^2 such that it minimizes:
+ 
+
+
+
+
+
+function [y, g, h]=f(x)
+y = -x(1)^2 - x(2)^2;
+g = [-2*x(1),-2*x(2)];
+h = [-2,0;0,-2];
+endfunction
+
+x0=[2,1];
+intcon = [1]
+options = list("gradobj","on","hessian","on");
+[xopt,fopt,exitflag,gradient,hessian]=intfminunc(f,x0,intcon,options)
+
+
+
+
+
diff --git a/help/en_US/scilab_en_US_help/intqpipopt.html b/help/en_US/scilab_en_US_help/intqpipopt.html
new file mode 100644
index 0000000..2fbad90
--- /dev/null
+++ b/help/en_US/scilab_en_US_help/intqpipopt.html
@@ -0,0 +1,267 @@
+
+
+ intqpipopt
+
+
+
+
+
+
+
+ FOSSEE Optimization Toolbox >> FOSSEE Optimization Toolbox > intqpipopt
+
+
+ intqpipopt
+ Solves a mixed-integer linear quadratic problem.
+
+
+Calling Sequence
+ xopt = intqpipopt(H,f)
+xopt = intqpipopt(H,f,intcon)
+xopt = intqpipopt(H,f,intcon,A,b)
+xopt = intqpipopt(H,f,intcon,A,b,Aeq,beq)
+xopt = intqpipopt(H,f,intcon,A,b,Aeq,beq,lb,ub)
+xopt = intqpipopt(H,f,intcon,A,b,Aeq,beq,lb,ub,x0)
+xopt = intqpipopt(H,f,intcon,A,b,Aeq,beq,lb,ub,x0,options)
+xopt = intqpipopt(H,f,intcon,A,b,Aeq,beq,lb,ub,x0,options,"file_path")
+[xopt,fopt,exitflag,output] = intqpipopt( ... )
+
+Input Parameters
+ - H :
+
A symmetric matrix of doubles, representing the Hessian of the quadratic problem.
+ - f :
+
A vector of doubles, representing coefficients of the linear terms in the quadratic problem.
+ - intcon :
+
A vector of integers, representing the variables that are constrained to be integers.
+ - A :
+
A matrix of doubles, containing the coefficients of linear inequality constraints of size (m X n) where 'm' is the number of linear inequality constraints.
+ - b :
+
A vector of doubles, related to 'A', containing the right-hand sides of the linear inequality constraints, of size (m X 1).
+ - Aeq :
+
A matrix of doubles, containing the coefficients of linear equality constraints of size (m1 X n) where 'm1' is the number of linear equality constraints.
+ - beq :
+
A vector of doubles, related to 'Aeq', containing the right-hand sides of the linear equality constraints, of size (m1 X 1).
+ - lb :
+
A vector of doubles, containing the lower bounds of the variables of size (1 X n) or (n X 1) where 'n' is the number of variables.
+ - ub :
+
A vector of doubles, containing the upper bounds of the variables of size (1 X n) or (n X 1) where 'n' is the number of variables.
+ - x0 :
+
A vector of doubles, containing the starting values of variables of size (1 X n) or (n X 1) where 'n' is the number of variables.
+ - options :
+
A list, containing the options for the user to specify. See below for details.
+ - file_path :
+
A string, containing the path to the Bonmin options file, if used.
+Outputs
+ - xopt :
+
A vector of doubles, containing the computed solution of the optimization problem.
+ - fopt :
+
A double, containing the value of the function at xopt.
+ - exitflag :
+
An integer, containing the flag which denotes the reason for termination of algorithm. See below for details.
+ - output :
+
A structure, containing the information about the optimization. See below for details.
+
+Description
+ Searches for the minimum of a constrained linear quadratic optimization problem specified by:
+ 
+ intqpipopt calls Bonmin, an optimization library written in C++, to solve the quadratic problem.
+ The exitflag indicates the status of the optimization, as returned by Bonmin.
+ - 0 : Optimal Solution Found
+- 1 : Infeasible Solution.
+- 2 : Objective Function is Continuous Unbounded.
+- 3 : Limit Exceeded.
+- 4 : User Interrupt.
+- 5 : MINLP Error.
+ For more details on exitflag, see the Bonmin documentation which can be found on http://www.coin-or.org/Bonmin
+
+ The output data structure contains detailed information about the optimization process.
+It is of type "struct" and contains the following fields.
+ - output.constrviolation: The max-norm of the constraint violation.
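+For example, after a solve one might inspect the exit status and the reported constraint violation as follows (a minimal sketch reusing the problem data from the examples below):
+
+```scilab
+f = [1; 2; 3; 4; 5; 6]; H = eye(6,6);
+intcon = [2, 4];
+
+[xopt, fopt, exitflag, output] = intqpipopt(H, f, intcon);
+
+if exitflag == 0 then
+    disp("Optimal solution found");
+    disp(output.constrviolation);   // max-norm of the constraint violation
+end
+```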
+
+A few examples displaying the various functionalities of intqpipopt have been provided below. You will find a series of problems and the appropriate code snippets to solve them.
+Example
+ Here we solve a simple objective function.
+ Find x in R^6 such that it minimizes:
+ 
+
+
+
+f=[1; 2; 3; 4; 5; 6]; H=eye(6,6);
+
+intcon = [2 ,4];
+
+[xopt,fopt,exitflag,output]=intqpipopt(H,f,intcon)
+
+Example
+ We proceed to add simple linear inequality constraints.
+
+ 
+
+
+f=[1; 2; 3; 4; 5; 6]; H=eye(6,6);
+
+A= [0,1,0,1,2,-1;
+-1,0,2,1,1,0];
+b = [-1; 2.5];
+
+intcon = [2 ,4];
+
+[xopt,fopt,exitflag,output]=intqpipopt(H,f,intcon,A,b)
+
+Example
+ Here we build up on the previous example by adding linear equality constraints.
+We add the following constraints to the problem specified above:
+ 
+
+
+
+f=[1; 2; 3; 4; 5; 6]; H=eye(6,6);
+
+A= [0,1,0,1,2,-1;
+-1,0,2,1,1,0];
+b = [-1; 2.5];
+
+Aeq= [1,-1,1,0,3,1;
+-1,0,-3,-4,5,6;
+2,5,3,0,1,0];
+beq=[1; 2; 3];
+
+intcon = [2 ,4];
+
+[xopt,fopt,exitflag,output]=intqpipopt(H,f,intcon,A,b,Aeq,beq)
+
+Example
+ In this example, we proceed to add the upper and lower bounds to the objective function.
+ 
+
+
+
+f=[1; 2; 3; 4; 5; 6]; H=eye(6,6);
+
+A= [0,1,0,1,2,-1;
+-1,0,2,1,1,0];
+b = [-1; 2.5];
+
+Aeq= [1,-1,1,0,3,1;
+-1,0,-3,-4,5,6;
+2,5,3,0,1,0];
+beq=[1; 2; 3];
+
+lb=[-1000; -10000; 0; -1000; -1000; -1000];
+ub=[10000; 100; 1.5; 100; 100; 1000];
+
+intcon = [2 ,4];
+
+[xopt,fopt,exitflag,output]=intqpipopt(H,f,intcon,A,b,Aeq,beq,lb,ub)
+
+Example
+ In this example, we initialize the values of x to speed up the computation. We further enhance the functionality of intqpipopt by setting input options.
+
+
+f=[1; 2; 3; 4; 5; 6]; H=eye(6,6);
+
+A= [0,1,0,1,2,-1;
+-1,0,2,1,1,0];
+b = [-1; 2.5];
+
+Aeq= [1,-1,1,0,3,1;
+-1,0,-3,-4,5,6;
+2,5,3,0,1,0];
+beq=[1; 2; 3];
+
+lb=[-1000; -10000; 0; -1000; -1000; -1000];
+ub=[10000; 100; 1.5; 100; 100; 1000];
+
+x0 = repmat(0,6,1);
+options = list("MaxIter", 300, "CpuTime", 100);
+
+intcon = [2 ,4];
+
+[xopt,fopt,exitflag,output]=intqpipopt(H,f,intcon,A,b,Aeq,beq,lb,ub,x0,options)
+
+Example
+Infeasible Problems: Find x in R^6 such that it minimizes the objective function used above under the following constraints:
+ 
+
+
+
+f=[1; 2; 3; 4; 5; 6]; H=eye(6,6);
+
+A= [0,1,0,1,2,-1;
+-1,0,2,1,1,0];
+b = [-1; 2.5];
+
+Aeq= [0,1,0,1,2,-1;
+-1,0,-3,-4,5,6];
+beq=[4; 2];
+
+intcon = [2 ,4];
+
+[xopt,fopt,exitflag,output]=intqpipopt(H,f,intcon,A,b,Aeq,beq)
+
+Example
+ Unbounded Problems: Find x in R^6 such that it minimizes the objective function used above under the following constraints:
+ 
+
+
+
+f=[1; 2; 3; 4; 5; 6]; H=eye(6,6); H(1,1) = -1;
+
+A= [0,1,0,1,2,-1;
+-1,0,2,1,1,0];
+b = [-1; 2.5];
+
+Aeq= [1,-1,1,0,3,1;
+-1,0,-3,-4,5,6];
+beq=[1; 2];
+intcon = [2 ,4];
+
+[xopt,fopt,exitflag,output]=intqpipopt(H,f,intcon,A,b,Aeq,beq)
+
+Authors
+ - Akshay Miterani and Pranav Deshpande
+
+
+
+
+
diff --git a/help/en_US/scilab_en_US_help/jhelpmap.jhm b/help/en_US/scilab_en_US_help/jhelpmap.jhm
index eb373d9..565fb48 100644
--- a/help/en_US/scilab_en_US_help/jhelpmap.jhm
+++ b/help/en_US/scilab_en_US_help/jhelpmap.jhm
@@ -2,12 +2,18 @@