110112 NatSciLab - Numerical Software: Introduction to MATLAB
Onur Oktay
Jacobs University Bremen
Spring 2010
Optimization with Matlab Least Squares problem 1–norm minimization
Outline
1 Optimization with Matlab
2 Least Squares problem
  Linear Least Squares Method
  Nonlinear Least Squares Method
3 1–norm minimization
Optimization Toolbox - Function list
Minimization
  bintprog     Binary integer programming
  fgoalattain  Multiobjective goal attainment
  fminbnd      Minimize a single-variable function on a fixed interval
  fmincon      Constrained minimization
  fminimax     Minimax constraint problem
  fminsearch   Derivative-free unconstrained minimization
  fminunc      Unconstrained minimization
  fseminf      Semi-infinitely constrained minimization
  linprog      Solve linear programming problems
  quadprog     Solve quadratic programming problems
Least Squares (Curve Fitting)
  lsqcurvefit  Nonlinear least-squares data fitting
  lsqlin       Constrained linear least-squares data fitting
  lsqnonlin    Nonlinear least-squares data fitting
  lsqnonneg    Least squares with nonnegativity constraint
Optimization - Constrained minimization
minimize  E(u)
subject to  A1 u = b1    (linear equality constraint)
            A2 u ≤ b2    (linear inequality constraint)
            G(u) = 0     (nonlinear equality constraint)
            H(u) ≤ 0     (nonlinear inequality constraint)
            r1 ≤ u ≤ r2  (domain constraint, entrywise)
E : R^N → R is the objective function to be minimized.
Linear constraints:
- u is a solution of the linear equation A1 u = b1.
- Each entry of the vector A2 u − b2 is ≤ 0.
Nonlinear constraints:
- n equality constraints G(u) = (g1(u), g2(u), ..., gn(u)):
  g1(u) = 0, g2(u) = 0, ..., gn(u) = 0.
- m inequality constraints H(u) = (h1(u), h2(u), ..., hm(u)):
  h1(u) ≤ 0, h2(u) ≤ 0, ..., hm(u) ≤ 0.
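In Matlab these constraints map directly onto the arguments of fmincon. A minimal sketch (the objective and constraint data below are made up for illustration, not from the slides):

```matlab
% minimize E(u) = u1^2 + u2^2 subject to u1 + u2 = 1, u1 - u2 <= 0.5,
% and 0 <= u <= 1.  Argument order: fmincon(f, u0, A2, b2, A1, b1, lb, ub)
E  = @(u)( u(1)^2 + u(2)^2 );
A2 = [1, -1];  b2 = 0.5;      % linear inequality  A2*u <= b2
A1 = [1,  1];  b1 = 1;        % linear equality    A1*u  = b1
lb = [0; 0];   ub = [1; 1];   % domain constraint  lb <= u <= ub
u0 = [1; 0];                  % starting point
u  = fmincon(E, u0, A2, b2, A1, b1, lb, ub);
```

Nonlinear constraints G and H would go in an optional ninth argument (a function returning the inequality and equality constraint values).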
Optimization - fminunc
fminunc finds a local minimum of a function E : R^N → R, starting at an initial point x0 ∈ R^N. This is generally referred to as unconstrained nonlinear optimization.
Usage
x = fminunc(f,x0,options)
  f        is a function handle
  x0       is the starting point
  options  [OPTIONAL] optimization options (see optimset)
x is a local minimum of the input function.
Simple Example:
>> f = @(x)( x(1)^2 + x(2)^2 );   % anonymous function handle
>> x0 = [1,1];
>> x = fminunc(f,x0);
Optimization - optimset
We use optimset to set the optimization options. For example,
If the gradient Df and the Hessian D2f are available,
>> options = optimset( 'GradObj', 'on', 'Hessian', 'on' )
Both are set to 'off' by default. When 'GradObj' and 'Hessian' are 'on', the input function f.m must return
- the function value f as the first output,
- the gradient Df as the second output,
- the Hessian D2f as the third output.
This will speed up the calculations.
Remember that, for a function F : R^N → R,
- the gradient DF = (∂F/∂x1, ∂F/∂x2, ..., ∂F/∂xN) is the vector of partial derivatives of F,
- the Hessian D2F = [∂²F/∂xi∂xj] is the N × N matrix of second partial derivatives of F.
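Before turning 'GradObj' on, it is worth checking a hand-coded gradient against central finite differences; a wrong gradient silently misleads the solver. A minimal sketch (the check itself is not from the slides; F and DF here are the sumsin example):

```matlab
% Compare the analytic gradient DF against central finite differences.
F  = @(x)( sum(sin(x).^2) );
DF = @(x)( sin(2*x) );            % analytic gradient: 2 sin(x) cos(x) = sin(2x)
x  = [4, 1];  h = 1e-6;
fd = zeros(size(x));
for i = 1:numel(x)
    e = zeros(size(x));  e(i) = h;
    fd(i) = ( F(x+e) - F(x-e) ) / (2*h);   % central difference in direction i
end
max(abs( fd - DF(x) ))            % should be close to 0 if DF is correct
```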
Optimization - optimset
Some (not all) of the other uses of optimset.
Display
  'off'     displays no output.
  'iter'    displays output at each iteration.
  'notify'  displays output only if the algorithm fails to converge.
  'final'   (default) displays just the final output.
FunValCheck
  'on'   displays an error when the objective function returns a complex number, Inf, or NaN.
  'off'  (default) displays no error.
MaxFunEvals
  Maximum number of function evaluations. Default = 200*numberOfVariables.
MaxIter
  Maximum number of iterations. Default = 200*numberOfVariables.
TolFun
  Termination tolerance on the function value. Default = 1e-4.
TolX
  Termination tolerance on x. Default = 1e-4.
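Several options can be combined in one optimset call; a sketch (the values are illustrative, not the defaults):

```matlab
% Tighter tolerances and per-iteration output for a small test problem.
options = optimset( 'Display', 'iter', ...
                    'MaxIter', 500, ...
                    'TolFun',  1e-6, ...
                    'TolX',    1e-6 );
f = @(x)( x(1)^2 + x(2)^2 );
x = fminunc(f, [1,1], options);
```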
Optimization - fminunc
Example - sumsin.m
function [ f, Df, D2f ] = sumsin(x)
% for simplicity, for the moment we assume that the input is a vector
N = length(x);                 % number of variables
f = sum( sin(x).^2 );
if nargout > 1                 % compute Df only if it is requested
    Df = 2*sin(x).*cos(x);
end
if nargout > 2                 % compute D2f only if it is requested
    D2f = diag( 2*cos(2*x) );
end

>> options = optimset( 'GradObj', 'on', 'Hessian', 'on' );
>> x0 = [4,1];
>> x = fminunc(@sumsin,x0,options);
Optimization - fminsearch
Use fminsearch if E : R^N → R is not differentiable, e.g., E(x) = sum(abs(x)).
fminsearch finds a local minimum of a function E : R^N → R, starting at an initial point x0 ∈ R^N.
Usage
x = fminsearch( f, x0, options )
  f        is a function handle
  x0       is the starting point
  options  [OPTIONAL] optimization options (see optimset)
x is a local minimum of the input function.
Example:
>> f = @(x)( x(1)^2 + x(2)^2 );   % anonymous function handle
>> x0 = [4,0];
>> x = fminsearch(f,x0);
>> options = optimset( 'TolFun', 1e-6 );
>> xx = fminsearch(@sumsin, x0, options);
Description of the Least Squares problem
We start with
Data: {(xk, yk), k = 1, 2, ..., r}.
Goal: Fit the data to a model Y = f(X, c).
- f is the modeling function
- c = (c1, c2, ..., cN) is the set of parameters to be determined by LS
Least squares problem
Find c = (c1, c2, ..., cN) which minimizes the sum of squares

    E(c) = ∑_{k=1}^{r} ( yk − f(xk, c) )²

among all possible choices (unconstrained) of parameters c ∈ R^N.
We say that the LS method is linear if f is of the form

    f(X, c) = ∑_{n=1}^{N} cn fn(X)
Example 1
- Data: week3-example1.mat
- Model: f(X, c) = c1 + c2 X + c3 X²
LS method gave c1 = 3.0159, c2 = −2.0464, c3 = 1.0204.
[Figure: data (xk, yk) and the fitted model Y = X² − 2X + 3; x axis from 0 to 1, y axis from 1.8 to 3.]
Linear Least Squares Method
LS method is linear ⇔ f is of the form

    f(X, c) = ∑_{n=1}^{N} cn fn(X).
Linear LS can be written in matrix form. Designate
- x = [x1; x2; ...; xr], y = [y1; y2; ...; yr], c = [c1; c2; ...; cN] as column vectors.
- A = [f1(x), f2(x), ..., fN(x)] as an r × N matrix, where fn applies to x entrywise: fn(x) = [fn(x1); fn(x2); ...; fn(xr)].
- Now, f(x, c) = A*c, and

    E(c) = ∑_{k=1}^{r} ( yk − f(xk, c) )² = ‖y − A*c‖₂²

Then, the linear LS problem is:
Find c = (c1, c2, ..., cN) which minimizes

    E(c) = ‖y − A*c‖₂² = norm(y − A*c, 2)^2

among all possible choices (unconstrained) of parameters c ∈ R^N.
Linear Least Squares Method
y − f(x, c) =

  [ y1 ]   [ f1(x1)  f2(x1)  ...  fN(x1) ] [ c1 ]
  [ y2 ]   [ f1(x2)  f2(x2)  ...  fN(x2) ] [ c2 ]
  [ y3 ] − [ f1(x3)  f2(x3)  ...  fN(x3) ] [ ⋮  ]
  [ ⋮  ]   [   ⋮       ⋮     ⋱     ⋮    ] [ cN ]
  [ yr ]   [ f1(xr)  f2(xr)  ...  fN(xr) ]
The best method to solve linear LS is mldivide (the backslash operator \). Back to Example 1:
>> load week3-example1.mat
>> x = data(:,1); y = data(:,2); r = length(x);
>> A = [ones(r,1), x, x.^2];   % f1(x) = 1, f2(x) = x, f3(x) = x^2
>> c = A \ y                   % LS solution
>> yLS = A*c; LSerror = y - A*c;
>> figure(1); plot(x,y,'o'); hold on; plot(x, yLS, 'r')
>> figure(2); plot(1:r, LSerror);
Linear Least Squares Method
Example 2
Cost minimization. A producer wants to model its total cost TC as a function of the number of items produced, X. TC is the sum of the electricity usage Y and the maintenance costs Z.
{(xk , yk , zk), k = 1, ..., r} is gathered over a period of r days,
We want to fit this data to the model

    f(X, c) = Y + Z,  Y = c1 + c2 X²,  Z = c3 log(X) + c4 sin(0.01πX)

Since f(X, c) = [1, X², log(X), sin(0.01πX)] * c, the LS problem is linear.
>> load week3-example2.mat
>> x = data2(:,1); y = data2(:,2); z = data2(:,3); r = length(x);
>> A = [ ones(r,1), x.^2, log(x), sin(0.01*pi*x) ];
>> TC = y + z;
>> c = A \ TC
>> yLS = A(:, [1,2]) * c([1;2]); LSerrory = y - yLS;
>> zLS = A(:, [3,4]) * c([3;4]); LSerrorz = z - zLS;
Nonlinear Least Squares Method
When f is not linear, we directly minimize E : RN → R
    E(c) = ∑_{k=1}^{r} ( yk − f(xk, c) )².
A twice differentiable function E : R^N → R has a local minimum at c = (c1, c2, ..., cN) if
- the gradient, the vector of the 1st partial derivatives at c, is zero, and
- the Hessian, the matrix of the 2nd partial derivatives at c, has all positive eigenvalues.
Use, e.g., fminunc or lsqnonlin. Provide the gradient and the Hessian if possible, which will speed up the calculations.
If E : R^N → R is not differentiable, use fminsearch.
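The slides list lsqnonlin but do not show its usage. Unlike fminunc, it takes the residual vector yk − f(xk, c) rather than the sum of squares, and squares and sums it internally. A minimal sketch (the data and starting point are illustrative):

```matlab
% Fit Y = c1 + cos(c2*X + c3) by nonlinear LS with lsqnonlin.
res = @(c,x,y)( y - ( c(1) + cos(c(2)*x + c(3)) ) );  % residual vector
x  = (0:0.1:6)';              % illustrative data
y  = 4 + cos(2*x - 3);
c0 = [1, 2, 1];               % starting guess
c  = lsqnonlin(@(c)res(c,x,y), c0);
```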
Now, let's see examples of how to use fminunc, lsqnonlin, and fminsearch with nonlinear LS problems. →
Nonlinear Least Squares Method
Example 3
{(xk , yk), k = 1, ..., r} in week3-example3.mat,
We want to fit this data to the model Y = f(X, c), where

    f(X, c) = c1 + cos(c2 X + c3)
Form

    E(c1, c2, c3) = ∑_{k=1}^{r} ( yk − c1 − cos(c2 xk + c3) )²
Write a Matlab function E.m, where the inputs are c, x, y.
Use a Matlab function to find the minimum of E. For example,
>> c0 = [1,2,3]; c = fminsearch(@(c)E(c,x,y), c0)
>> c0 = [1,0,0]; c = fminunc(@(c)E(c,x,y), c0)
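The slides ask for E.m without showing it; a minimal sketch of such a function:

```matlab
function val = E(c, x, y)
% Sum-of-squares error for the model Y = c(1) + cos(c(2)*X + c(3)).
% c is the parameter vector; x, y are column vectors of data.
val = sum( ( y - ( c(1) + cos(c(2)*x + c(3)) ) ).^2 );
end
```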
1–norm minimization
We use 1–norm minimization if we know that the vector y − f(x, c) has only a few nonzero entries.
1–norm minimization problem
Find c = (c1, c2, ..., cN) which minimizes

    E(c) = ∑_{k=1}^{r} | yk − f(xk, c) |

among all possible choices (unconstrained) of parameters c ∈ R^N.
When only a few entries of y − f(x, c) are corrupted, 1–norm minimization typically gives a better outcome than LS, which is sensitive to outliers.
1–norm minimization
Example 4
{(xk , yk), k = 1, ..., r} in week3-example4.mat,
We know that the data strictly obeys the model Y = f(X, c),

    f(X, c) = c1 + cos(c2 X + c3)
When collecting data, only a few yk ’s are corrupted.
We want to find the unknown parameters c1, c2, c3.
We use fminsearch since E is not differentiable everywhere.
>> E = @(c,x,y)( sum(abs( c(1) + cos(c(2)*x + c(3)) - y )) );
>> c0 = [1,2,1]; c = fminsearch(@(c)E(c,x,y), c0)
1–norm minimization
Example 4
[Figure: data points and the recovered model Y = 4 + cos(2x − 3); x axis from 0 to 6, y axis from 2.5 to 5.5.]
Recommended Reading
“Scientific Computing with MATLAB and Octave” by Quarteroni &Saleri, Chapter 3, Sections 3.1 and 3.4.