fminimax
Solves a minimax optimization problem.
Calling Sequence
xopt = fminimax(fun,x0)
xopt = fminimax(fun,x0,A,b)
xopt = fminimax(fun,x0,A,b,Aeq,beq)
xopt = fminimax(fun,x0,A,b,Aeq,beq,lb,ub)
xopt = fminimax(fun,x0,A,b,Aeq,beq,lb,ub,nonlinfun)
xopt = fminimax(fun,x0,A,b,Aeq,beq,lb,ub,nonlinfun,options)
[xopt, fval] = fminimax(.....)
[xopt, fval, maxfval]= fminimax(.....)
[xopt, fval, maxfval, exitflag]= fminimax(.....)
[xopt, fval, maxfval, exitflag, output]= fminimax(.....)
[xopt, fval, maxfval, exitflag, output, lambda]= fminimax(.....)
Input Parameters
fun:
The function to be minimized. fun is a function that takes a vector x as an input argument and returns the vector of objective function values evaluated at x.
x0 :
A vector of doubles, containing the starting values of variables of size (1 X n) or (n X 1) where 'n' is the number of Variables.
A :
A matrix of doubles, containing the coefficients of linear inequality constraints of size (m X n) where 'm' is the number of linear inequality constraints.
b :
A vector of doubles, related to 'A', representing the right-hand sides of the linear inequality constraints, of size (m X 1).
Aeq :
A matrix of doubles, containing the coefficients of linear equality constraints of size (m1 X n) where 'm1' is the number of linear equality constraints.
beq :
A vector of doubles, related to 'Aeq', representing the right-hand sides of the linear equality constraints, of size (m1 X 1).
lb :
A vector of doubles, containing the lower bounds of the variables of size (1 X n) or (n X 1) where 'n' is the number of variables.
ub :
A vector of doubles, containing the upper bounds of the variables of size (1 X n) or (n X 1) where 'n' is the number of variables.
nonlinfun:
A function, representing the non-linear constraints (both equality and inequality) of the problem. It is declared in such a way that the non-linear inequality constraints (c) and the non-linear equality constraints (ceq) are returned as separate single-row vectors.
options :
A list, containing the options to be specified by the user. See below for details.
Outputs
xopt :
A vector of doubles, containing the computed solution of the optimization problem.
fval :
A vector of doubles, containing the values of the objective functions at the end of the optimization problem.
maxfval:
A double, representing the maximum value in the vector fval.
exitflag :
An integer, containing the flag which denotes the reason for termination of the algorithm. See below for details.
output :
A structure, containing the information about the optimization. See below for details.
lambda :
A structure, containing the Lagrange multipliers of lower bound, upper bound and constraints at the optimized point. See below for details.
Description
fminimax minimizes the worst-case (largest) value of a set of multivariable functions, starting at an initial estimate, a problem generally referred to as the minimax problem.
\min_{x} \max_{i} F_{i}(x)\\
\textrm{Such that} \:\begin{cases}
& c(x) \leq 0 \\
& ceq(x) = 0 \\
& A.x \leq b \\
& Aeq.x = beq \\
& minmaxLb \leq x \leq minmaxUb
\end{cases}
Currently, fminimax calls fmincon which uses the Ipopt solver.
Max-min problems can also be solved with fminimax, using the identity
\max_{x} \min_{i} F_{i}(x) = -\min_{x} \max_{i} \left( -F_{i}(x) \right)
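The identity above can be applied in practice by negating each objective and the returned maximum. A minimal sketch (the two component functions below are illustrative placeholders, not from a specific problem):

```scilab
// Maximize min_i F_i(x) by minimizing max_i ( -F_i(x) )
function F = negObjfun(x)
    F(1) = -(x(1) + x(2));        // -F_1(x)
    F(2) = -(x(1) - 2*x(2) + 3);  // -F_2(x)
endfunction

x0 = [0, 0];  // starting guess (an assumption)
[xopt, fval, maxfval] = fminimax(negObjfun, x0);
maxminValue = -maxfval;  // value of the original max-min problem
```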
Options
The options allow the user to set various parameters of the Optimization problem. The syntax for the options is given by:
options= list("MaxIter", [---], "CpuTime", [---], "GradObj", ---, "GradCon", ---);
MaxIter : A Scalar, specifying the Maximum Number of iterations that the solver should take.
CpuTime : A Scalar, specifying the Maximum amount of CPU Time in seconds that the solver should take.
GradObj : A function, representing the gradient function of the Objective in Vector Form.
GradCon : A function, representing the gradient of the Non-Linear Constraints (both Equality and Inequality) of the problem. It is declared in such a way that the gradient of the non-linear inequality constraints is defined first as a separate matrix (cg of size m2 X n, or empty), followed by the gradient of the non-linear equality constraints as a separate matrix (ceqg of size m3 X n, or empty), where m2 and m3 are the numbers of non-linear inequality and equality constraints respectively.
The default values for the various items are given as:
options = list("MaxIter", [3000], "CpuTime", [600]);
The objective function must have a header :
F = fun(x)
where x is a n x 1 matrix of doubles and F is a m x 1 matrix of doubles where m is the total number of objective functions inside F.
On input, the variable x contains the current point and, on output, the variable F must contain the objective function values.
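For instance, an objective function conforming to this header might be written as follows (the two component functions are illustrative placeholders):

```scilab
// A minimal objective conforming to F = fun(x), with m = 2 components
function F = fun(x)
    F(1) = x(1)^2 + x(2)^2;  // first objective function
    F(2) = -x(1) - x(2);     // second objective function
endfunction
```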
By default, the gradient options for fminimax are turned off and fmincon does the gradient approximation of minmaxObjfun. In case the GradObj option is off and the GradCon option is on, fminimax approximates the minmaxObjfun gradient using the numderivative toolbox.
Syntax
Some syntactic details about fminimax, including the syntax for the gradient, defining the non-linear constraints, and the constraint derivative function have been provided below:
If the user can provide exact gradients, it should be done, since it improves the convergence speed of the optimization algorithm.
Furthermore, we can enable the "GradObj" option with the statement :
minimaxOptions = list("GradObj",fGrad);
This will let fminimax know that the exact gradient of the objective function is known, so that it can change the calling sequence to the objective function. Note that fGrad should return a matrix of size N x n, where n is the number of variables and N is the number of functions in the objective function.
The constraint function must have header:
[c, ceq] = confun(x)
where x is a n x 1 matrix of doubles, c is a 1 x nni matrix of doubles and ceq is a 1 x nne matrix of doubles (nni : number of nonlinear inequality constraints, nne : number of nonlinear equality constraints).
On input, the variable x contains the current point and, on output, the variable c must contain the nonlinear inequality constraints and ceq must contain the nonlinear equality constraints.
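A minimal constraint function conforming to this header (the constraint itself is an illustrative placeholder):

```scilab
// A constraint function conforming to [c, ceq] = confun(x)
function [c, ceq] = confun(x)
    c = [x(1) + x(2) - 1];  // nonlinear inequality: x1 + x2 - 1 <= 0
    ceq = [];               // no nonlinear equality constraints
endfunction
```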
By default, the gradient options for fminimax are turned off and fmincon does the gradient approximation of confun. In case the GradObj option is on and the GradCon option is off, fminimax approximates the confun gradient using the numderivative toolbox.
If we can provide exact gradients, we should do so since it improves the convergence speed of the optimization algorithm.
Furthermore, we must enable the "GradCon" option with the statement :
minimaxOptions = list("GradCon",confunGrad);
This will let fminimax know that the exact gradient of the constraint function is known, so that it can change the calling sequence to the constraint function.
The constraint derivative function must have header :
[dc,dceq] = confungrad(x)
where dc is a nni x n matrix of doubles and dceq is a nne x n matrix of doubles.
The exitflag allows the user to know the status of the optimization, which is returned by Ipopt. The values it can take and what they indicate are described below:
0 : Optimal Solution Found
1 : Maximum Number of Iterations Exceeded. Output may not be optimal.
2 : Maximum amount of CPU Time exceeded. Output may not be optimal.
3 : Stop at Tiny Step.
4 : Solved To Acceptable Level.
5 : Converged to a point of local infeasibility.
For more details on exitflag, see the ipopt documentation which can be found on http://www.coin-or.org/Ipopt/documentation/
The output data structure contains detailed information about the optimization process.
It is of type "struct" and contains the following fields.
output.Iterations: The number of iterations performed.
output.Cpu_Time : The total cpu-time taken.
output.Objective_Evaluation: The number of Objective Evaluations performed.
output.Dual_Infeasibility : The dual infeasibility of the final solution.
output.Message: The output message for the problem.
The lambda data structure contains the Lagrange multipliers at the end of optimization. In the current version the values are returned only when the solution is optimal.
It has type "struct" and contains the following fields.
lambda.lower: The Lagrange multipliers for the lower bound constraints.
lambda.upper: The Lagrange multipliers for the upper bound constraints.
lambda.eqlin: The Lagrange multipliers for the linear equality constraints.
lambda.ineqlin: The Lagrange multipliers for the linear inequality constraints.
lambda.eqnonlin: The Lagrange multipliers for the non-linear equality constraints.
lambda.ineqnonlin: The Lagrange multipliers for the non-linear inequality constraints.
A few examples displaying the various functionalities of fminimax have been provided below. You will find a series of problems and the appropriate code snippets to solve them.
Example
Here we solve a simple objective function, subjected to no constraints.
\begin{eqnarray}
\min_{x} \max_{i}\ f_{i}(x)
\end{eqnarray}
\begin{eqnarray}
&f_{1}(x) &= 2 \boldsymbol{\cdot} x_{1}^{2} + x_{2}^{2} - 48x_{1} - 40x_{2} + 304\\
&f_{2}(x) &= -x_{1}^{2} - 3x_{2}^{2}\\
&f_{3}(x) &= x_{1} + 3x_{2} - 18\\
&f_{4}(x) &= -x_{1} - x_{2}\\
&f_{5}(x) &= x_{1} + x_{2} - 8
\end{eqnarray}
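The problem above can be solved as follows (the starting point x0 is an assumption):

```scilab
// The five component objective functions defined above
function f = myfun(x)
    f(1) = 2*x(1)^2 + x(2)^2 - 48*x(1) - 40*x(2) + 304;
    f(2) = -x(1)^2 - 3*x(2)^2;
    f(3) = x(1) + 3*x(2) - 18;
    f(4) = -x(1) - x(2);
    f(5) = x(1) + x(2) - 8;
endfunction

x0 = [0.1, 0.1];  // starting guess (an assumption)
[xopt, fval, maxfval, exitflag] = fminimax(myfun, x0)
```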
Example
We proceed to add simple linear inequality constraints.
\begin{eqnarray}
\hspace{70pt} &x_{1} + x_{2}&\leq 2\\
\hspace{70pt} &x_{1} + x_{2}/4&\leq 1\\
\hspace{70pt} &-x_{1} + x_{2}&\geq -1\\
\end{eqnarray}
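Written in the form A.x <= b (note that -x1 + x2 >= -1 becomes x1 - x2 <= 1), the problem can be sketched as follows, with the same objective and starting point as before:

```scilab
function f = myfun(x)
    f(1) = 2*x(1)^2 + x(2)^2 - 48*x(1) - 40*x(2) + 304;
    f(2) = -x(1)^2 - 3*x(2)^2;
    f(3) = x(1) + 3*x(2) - 18;
    f(4) = -x(1) - x(2);
    f(5) = x(1) + x(2) - 8;
endfunction

// Linear inequality constraints A*x <= b
A = [1, 1; 1, 0.25; 1, -1];
b = [2; 1; 1];
x0 = [0.1, 0.1];  // starting guess (an assumption)
[xopt, fval, maxfval] = fminimax(myfun, x0, A, b)
```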
Example
Here we build up on the previous example by adding linear equality constraints.
We add the following constraints to the problem specified above:
\begin{eqnarray}
&x_{1} - x_{2}&= 1
\\&2x_{1} + x_{2}&= 2
\end{eqnarray}
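These equality constraints enter through Aeq and beq. A sketch, keeping the objective and inequality constraints from the previous examples:

```scilab
function f = myfun(x)
    f(1) = 2*x(1)^2 + x(2)^2 - 48*x(1) - 40*x(2) + 304;
    f(2) = -x(1)^2 - 3*x(2)^2;
    f(3) = x(1) + 3*x(2) - 18;
    f(4) = -x(1) - x(2);
    f(5) = x(1) + x(2) - 8;
endfunction

A = [1, 1; 1, 0.25; 1, -1];
b = [2; 1; 1];
// Linear equality constraints Aeq*x = beq
Aeq = [1, -1; 2, 1];
beq = [1; 2];
x0 = [0.1, 0.1];  // starting guess (an assumption)
[xopt, fval, maxfval] = fminimax(myfun, x0, A, b, Aeq, beq)
```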
Example
In this example, we proceed to add the upper and lower bounds to the objective function.
\begin{eqnarray}
-1 &\leq x_{1} &\leq \infty\\
-\infty &\leq x_{2} &\leq 1
\end{eqnarray}
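Bounds are passed as lb and ub, with %inf standing in for an unbounded side. A sketch continuing the running example:

```scilab
function f = myfun(x)
    f(1) = 2*x(1)^2 + x(2)^2 - 48*x(1) - 40*x(2) + 304;
    f(2) = -x(1)^2 - 3*x(2)^2;
    f(3) = x(1) + 3*x(2) - 18;
    f(4) = -x(1) - x(2);
    f(5) = x(1) + x(2) - 8;
endfunction

A = [1, 1; 1, 0.25; 1, -1];
b = [2; 1; 1];
Aeq = [1, -1; 2, 1];
beq = [1; 2];
// Variable bounds: -1 <= x1, x2 <= 1
lb = [-1, -%inf];
ub = [%inf, 1];
x0 = [0.1, 0.1];  // starting guess (an assumption)
[xopt, fval, maxfval] = fminimax(myfun, x0, A, b, Aeq, beq, lb, ub)
```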
Example
Finally, we add the non-linear constraints to the problem. Note that there is a notable difference in the way this is done as compared to defining the linear constraints.
\begin{eqnarray}
x_{1}^2-1&\leq 0\\
x_{1}^2+x_{2}^{2}-1&\leq 0\\
\end{eqnarray}
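Unlike the linear constraints, these are supplied as a function returning c and ceq as row vectors. A sketch (empty matrices are passed for the unused linear constraint and bound arguments):

```scilab
function f = myfun(x)
    f(1) = 2*x(1)^2 + x(2)^2 - 48*x(1) - 40*x(2) + 304;
    f(2) = -x(1)^2 - 3*x(2)^2;
    f(3) = x(1) + 3*x(2) - 18;
    f(4) = -x(1) - x(2);
    f(5) = x(1) + x(2) - 8;
endfunction

// Nonlinear inequality constraints c(x) <= 0; no equality constraints
function [c, ceq] = confun(x)
    c = [x(1)^2 - 1, x(1)^2 + x(2)^2 - 1];
    ceq = [];
endfunction

x0 = [0.1, 0.1];  // starting guess (an assumption)
[xopt, fval, maxfval] = fminimax(myfun, x0, [], [], [], [], [], [], confun)
```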
Example
We can further enhance the functionality of fminimax by setting input options. We can pre-define the gradient of the objective function and/or the gradients of the constraints, and thereby improve the speed of computation. We take the following problem, specify the gradients and the Jacobian matrix of the constraints, and also set solver parameters using the options.
\begin{eqnarray}
1.5 + x_{1} \boldsymbol{\cdot} x_{2} - x_{1} - x_{2} &\leq 0\\
-x_{1}\boldsymbol{\cdot} x_{2} - 10 &\leq 0
\end{eqnarray}
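A sketch of this example, with hand-derived gradients for the five objective functions of the running example and for the two constraints above (the starting point is an assumption):

```scilab
function f = myfun(x)
    f(1) = 2*x(1)^2 + x(2)^2 - 48*x(1) - 40*x(2) + 304;
    f(2) = -x(1)^2 - 3*x(2)^2;
    f(3) = x(1) + 3*x(2) - 18;
    f(4) = -x(1) - x(2);
    f(5) = x(1) + x(2) - 8;
endfunction

// Gradient of the objective: an N x n matrix (N = 5 functions, n = 2 variables)
function G = myfunGrad(x)
    G = [4*x(1) - 48, 2*x(2) - 40;
         -2*x(1),     -6*x(2);
          1,           3;
         -1,          -1;
          1,           1];
endfunction

// Nonlinear constraints and their gradients (dc is nni x n, dceq empty)
function [c, ceq] = confun(x)
    c = [1.5 + x(1)*x(2) - x(1) - x(2), -x(1)*x(2) - 10];
    ceq = [];
endfunction

function [dc, dceq] = confunGrad(x)
    dc = [x(2) - 1, x(1) - 1;
          -x(2),    -x(1)];
    dceq = [];
endfunction

options = list("MaxIter", [3000], "CpuTime", [600], ..
               "GradObj", myfunGrad, "GradCon", confunGrad);
x0 = [-1, 1];  // starting guess (an assumption)
[xopt, fval, maxfval, exitflag] = fminimax(myfun, x0, [], [], [], [], [], [], confun, options)
```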
Example
Infeasible Problems: Find x in R^2 such that it minimizes the objective function used above under the following constraints:
\begin{eqnarray}
&x_{1}/3 - 5x_{2}&= 11
\\&2x_{1} + x_{2}&= 8
\\ \end{eqnarray}
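A sketch of this infeasible case, reusing the objective from the earlier examples; the solver is expected to report infeasibility through exitflag (e.g. the "local infeasibility" status described above) rather than an optimal point:

```scilab
function f = myfun(x)
    f(1) = 2*x(1)^2 + x(2)^2 - 48*x(1) - 40*x(2) + 304;
    f(2) = -x(1)^2 - 3*x(2)^2;
    f(3) = x(1) + 3*x(2) - 18;
    f(4) = -x(1) - x(2);
    f(5) = x(1) + x(2) - 8;
endfunction

// Mutually inconsistent linear equality constraints
Aeq = [1/3, -5; 2, 1];
beq = [11; 8];
x0 = [0.1, 0.1];  // starting guess (an assumption)
[xopt, fval, maxfval, exitflag] = fminimax(myfun, x0, [], [], Aeq, beq)
```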
Authors
Animesh Baranawal