
Data Assimilation Training Course, Reading, 10-14 March 2014

Tangent linear and adjoint models for variational data assimilation

Angela Benedetti

with contributions from:

Marta Janisková, Philippe Lopez, Lars Isaksen, Gabor Radnoti and Yannick Tremolet


Introduction

• 4D-Var is based on the minimization of a cost function which measures the distance of the model state from the observations and from the background state

• The cost function and its gradient are needed in the minimization.

• The tangent linear model provides a computationally efficient (although approximate) way to calculate the model trajectory, and from it the cost function. The adjoint model is a very efficient tool to compute the gradient of the cost function.

• Overview:

– Brief introduction to 4D-Var with focus on TL/AD aspects

– General definitions of Tangent Linear and Adjoint models and why they are extremely useful in variational assimilation

– Writing TL and AD models and testing them

– Brief mention of automatic differentiation software


4D-Var

In 4D-Var the cost function can be expressed as follows:

J(x_0) = J_b + J_o = \frac{1}{2}\,(x_0 - x_b)^T B^{-1} (x_0 - x_b)
       + \frac{1}{2}\sum_{i=0}^{n} \big( H_i[M_i(x_0)] - y_i \big)^T R_i^{-1} \big( H_i[M_i(x_0)] - y_i \big)

B is the background error covariance matrix, R the observation error covariance matrix (instrumental + interpolation + observation operator error), M the forward nonlinear forecast model (time evolution of the model state, index i), and H the observation operator (model space → observation space).

\min_{x_0} J \;\Leftrightarrow\; \nabla_{x_0} J = B^{-1}(x_0 - x_b) + \sum_{i=0}^{n} M'^T[t_0,t_i]\, H_i'^T R_i^{-1} \big( H_i[M_i(x_0)] - y_i \big) = 0

H'^T = adjoint of the observation operator and M'^T = adjoint of the forecast model.


Incremental 4D-Var at ECMWF

• In incremental 4D-Var, the cost function is minimized in terms of increments:

\delta x_i = M'[t_0,t_i]\,\delta x_0

with the model state defined at any time t_i as:

x_i = x_i^t + \delta x_i , \qquad x_i^t = M(t_0,t_i)\, x_0^t

where x_i^t is the trajectory around which the linearization is performed (x_0^t = x_b at t = 0).

• The 4D-Var cost function can then be approximated to first order by:

J(\delta x_0) = \frac{1}{2}\,\delta x_0^T B^{-1}\,\delta x_0
 + \frac{1}{2}\sum_{i=0}^{n} \big( H_i' M'[t_0,t_i]\,\delta x_0 - d_i \big)^T R_i^{-1} \big( H_i' M'[t_0,t_i]\,\delta x_0 - d_i \big)

where d_i = y_i - H_i(x_i^t) is the so-called departure, computed using the nonlinear model and observation operator.

• The gradient of the cost function to be minimized is:

\min_{\delta x_0} J \;\Leftrightarrow\; \nabla_{\delta x_0} J = B^{-1}\,\delta x_0 + \sum_{i=0}^{n} M'^T[t_0,t_i]\, H_i'^T R_i^{-1} \big( H_i'\,\delta x_i - d_i \big) = 0

• M' and H_i' are the tangent linear models, which are used in the computation of the incremental updates during the minimization (iterative procedure). M'^T and H_i'^T are the adjoint models, which are used to obtain the gradient of the cost function with respect to the initial condition.


Details on linearisation

In the first order approximation, a perturbation of the control variable (initial condition) evolves according to the tangent linear model:

\delta x_i = \frac{\partial M_i}{\partial x}\bigg|_{x_{i-1}} \delta x_{i-1} = M_i'\,\delta x_{i-1}

where i is the time-step index. The perturbation of the cost function around the initial state is:

J(\delta x_0) = \frac{1}{2}\,\delta x_0^T B^{-1}\,\delta x_0
 + \frac{1}{2}\sum_{i=0}^{n} \big( H_i'\,\delta x_i - d_i \big)^T R_i^{-1} \big( H_i'\,\delta x_i - d_i \big)
 = \frac{1}{2}\,\delta x_0^T B^{-1}\,\delta x_0
 + \frac{1}{2}\sum_{i=0}^{n} \big( H_i' M_i' M_{i-1}' \cdots M_1'\,\delta x_0 - d_i \big)^T R_i^{-1} \big( H_i' M_i' M_{i-1}' \cdots M_1'\,\delta x_0 - d_i \big)

where H_i' is the linearised version of H_i about the trajectory, and d_i = y_i - H_i(x_i) are the departures from observations.


Details of the linearisation (cnt.)

The gradient of the cost function with respect to \delta x_0 is given by (remembering that (AB)^T = B^T A^T):

\nabla_{\delta x_0} J = B^{-1}\,\delta x_0 + \sum_{i=0}^{n} \big( H_i' M_i' \cdots M_1' \big)^T R_i^{-1} \big( H_i' M_i' \cdots M_1'\,\delta x_0 - d_i \big)
 = B^{-1}\,\delta x_0 + \sum_{i=0}^{n} M_1'^T \cdots M_i'^T H_i'^T R_i^{-1} \big( H_i' M_i' \cdots M_1'\,\delta x_0 - d_i \big)
 = B^{-1}\,\delta x_0 + \sum_{i=0}^{n} M'^T[t_0,t_i]\, H_i'^T R_i^{-1} \big( H_i'\,\delta x_i - d_i \big)

The optimal initial perturbation is obtained by finding the value of \delta x_0 for which:

\nabla_{\delta x_0} J = 0

The gradient of the cost function with respect to the initial condition is provided by the adjoint solution at time t = 0. Let's see how…


Definition of adjoint operator

For any linear operator M' there exists an adjoint operator M^* such that:

\langle x, M' y \rangle = \langle M^* x, y \rangle

where \langle\, ,\rangle is an inner scalar product and x, y are vectors (or functions) of the space in which this product is defined. It can be shown that, for the inner product defined in the Euclidean space:

M^* = M'^T

We will now show that the gradient of the cost function at time t = 0 is provided by the solution of the adjoint equations at the same time:

\nabla_{\delta x_0} J = -x_0^*
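As a quick numerical illustration of this identity (a minimal sketch, not part of the original slides), one can check with any small matrix M' and arbitrary vectors x, y that the two Euclidean inner products agree to machine precision:

program adjoint_identity
  implicit none
  real(8) :: m(2,2), x(2), y(2), lhs, rhs
  m = reshape((/ 1.0d0, -0.5d0, 2.0d0, 0.3d0 /), (/ 2, 2 /))  ! an arbitrary linear operator M'
  x = (/ 0.7d0, -1.2d0 /)
  y = (/ 0.4d0,  2.5d0 /)
  lhs = dot_product(x, matmul(m, y))             ! <x, M'y>
  rhs = dot_product(matmul(transpose(m), x), y)  ! <M'^T x, y>
  print *, '<x, M''y>    =', lhs
  print *, '<M''^T x, y> =', rhs                 ! identical to machine precision
end program adjoint_identity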


Adjoint solution

Usually the initial guess is chosen to be equal to the background x_b, so that the initial perturbation \delta x_0 = 0. The gradient of the cost function is hence simplified as:

\nabla_{\delta x_0} J = -\sum_{i=0}^{n} M'^T[t_0,t_i]\, H_i'^T R_i^{-1} d_i

We choose the solution of the adjoint system as follows:

x^*_{n+1} = 0                                                           (arbitrary “final” condition)
x^*_i = M'^T x^*_{i+1} + H_i'^T R_i^{-1} d_i ,   i = n, \dots, 1, 0     (iterative solution, backward down to the initial condition x^*_0)

We then substitute progressively the solution x^*_i into the expression for x^*_0.


Adjoint solution (cnt.)

Substituting progressively:

x^*_0 = M'^T x^*_1 + H_0'^T R_0^{-1} d_0
      = M'^T \big( M'^T x^*_2 + H_1'^T R_1^{-1} d_1 \big) + H_0'^T R_0^{-1} d_0
      = M'^T M'^T x^*_2 + M'^T H_1'^T R_1^{-1} d_1 + H_0'^T R_0^{-1} d_0 = \dots

Finally, regrouping and remembering that x^*_{n+1} = 0, that M^* = M'^T and that H^* = H'^T, we obtain the following equality:

x^*_0 = \sum_{i=0}^{n} M'^T[t_0,t_i]\, H_i'^T R_i^{-1} d_i \;\;\Rightarrow\;\; x^*_0 = -\nabla_{\delta x_0} J

The gradient of the cost function with respect to the control variable (initial condition) is obtained by a backward integration of the adjoint model.


Iterative steps in the 4D-Var Algorithm

1. Integrate the forward model: gives the value of the cost function J.

2. Integrate the adjoint model backwards: gives the gradient ∇_{δx_0}J.

3. If the gradient is sufficiently small (minimum reached), then stop.

4. Compute the descent direction D_m (Newton, CG, …).

5. Compute the step size ρ_m.

6. Update the initial condition: x_{m+1} = x_m + ρ_m D_m. (A minimal toy illustration of this loop is sketched below.)
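A minimal, self-contained sketch of these steps for a scalar toy problem (illustrative only, not from the original slides): the "model" is a single multiplication x_i = a·x_(i-1), the observation operator is the identity, there is one observation at step n, and steepest descent replaces Newton/CG. All names and values are hypothetical.

program toy_4dvar
  implicit none
  ! Scalar toy 4D-Var: model x_i = a*x_(i-1), H = identity, one observation y at step n
  integer, parameter :: n = 5, maxiter = 100
  real(8), parameter :: a = 0.9d0          ! model "dynamics"
  real(8), parameter :: b = 1.0d0          ! background error variance (B)
  real(8), parameter :: r = 0.5d0          ! observation error variance (R)
  real(8), parameter :: xb = 1.0d0         ! background
  real(8), parameter :: y  = 2.0d0         ! observation at step n
  real(8), parameter :: rho = 0.2d0        ! fixed step size
  real(8) :: x0, xn, grad
  integer :: iter, i
  x0 = xb                                  ! first guess = background
  do iter = 1, maxiter
     xn = x0                               ! 1. integrate the forward model
     do i = 1, n
        xn = a*xn
     end do
     grad = (x0 - xb)/b + a**n*(xn - y)/r  ! 2. gradient of J; multiplying by a**n is the
                                           !    adjoint of the n model steps (backward)
     if (abs(grad) < 1.0d-10) exit         ! 3. stop when the gradient is small enough
     x0 = x0 - rho*grad                    ! 4.-6. descent direction, step size, update
  end do
  print *, 'analysis x0 =', x0, '  iterations =', iter
end program toy_4dvar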


Finding the minimum of the cost function J is an iterative minimization procedure.

[Figure: schematic of the cost function J as a function of the control variable, showing the starting point J(x_b) and the successive iterates descending towards the minimum J_min.]


An analysis cycle in 4D-Var

2 minimizations in the old configuration; now 3 minimizations are operational!

1st ifstraj:
• The non-linear model is used to compute the high-resolution trajectory (T1279 operational, 12-h forecast)
• High-resolution departures are computed at the exact observation time and location
• The trajectory is interpolated to low resolution (T159)

1st ifsmin (70 iterations):
• Iterative minimization at T159 resolution
• The tangent linear model with simplified physics is used to calculate the increments δx_i
• The adjoint is used to compute the gradient of the cost function with respect to the increment in the initial condition, ∇_{δx_0}J
• The analysis increment δx_0 at initial time is interpolated back linearly from low to high resolution and provides a new initial state for the 2nd trajectory run

2nd ifstraj:
• Repeats the 1st ifstraj and interpolates at T255 resolution

2nd ifsmin (30 iterations):
• Repeats the 1st ifsmin at T255

Last ifstraj:
• Uses the updated initial condition to run another 12-h forecast and stores the analysis departures in the Observational Data Base (ODB)


Recap on TL and AD models


Simple example of adjoint writing


Simple example of adjoint writing (cnt.)

As an alternative to the matrix method, adjoint coding can be carried out using a line-by-line approach. Often the adjoint variables in mathematical formulations are indicated with an asterisk.

Do not forget the last equation!!! That too is part of the adjoint!


More practical examples of adjoint coding: the Lorenz model

where p is the Prandtl number, r the Rayleigh number, and b the aspect ratio. X is the intensity of convection, Y is the maximum temperature difference, and Z is the stratification change due to convection. Details on the Lorenz model can be found in the references.
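The equations themselves appeared as a figure on the original slide; in the notation of the Fortran code that follows (X = x(1), Y = x(2), Z = x(3) and constants p, r, b), the standard Lorenz (1963) system reads:

\frac{dX}{dt} = -pX + pY , \qquad
\frac{dY}{dt} = rX - Y - XZ , \qquad
\frac{dZ}{dt} = XY - bZ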


The linear code in Fortran

Linearize each line of the code one by one, and set dx/dt=y for simplicity:

y(1) = -p*x(1) +p*x(2)                      :Nonlinear statement (1)
yd(1) = -p*xd(1) +p*xd(2)                   :Tangent linear

y(2) = x(1)*(r-x(3)) -x(2)                  :Nonlinear statement (2)
yd(2) = xd(1)*(r-x(3)) -x(1)*xd(3) -xd(2)   :Tangent linear

…etc

Remember that p, r, b are constants; x(1), x(2) and x(3) are the independent variables; y(1), y(2) and y(3) are the dependent variables. We chose the suffix "d" for the tangent linear variables for consistency with the automatic differentiation software TAPENADE (see optional practical exercise). Adjoint variables are indicated with the suffix "b". This is just a convention. Note that in the ECMWF Integrated Forecast System (IFS) the tangent linear and adjoint variables are indicated without any suffix and the nonlinear trajectory (x) is indicated with the suffix "5" (x5).
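For completeness, the remaining statement (covered by the "…etc" above) and its tangent linear follow by the same rule; this is a sketch assuming the standard Lorenz equations quoted earlier:

y(3) = x(1)*x(2) -b*x(3)                    :Nonlinear statement (3)
yd(3) = xd(1)*x(2) +x(1)*xd(2) -b*xd(3)     :Tangent linear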


Adjoint of one instruction

We start from the tangent linear code:

yd(1) = -p*xd(1) +p*xd(2)

In matrix form, it can be written as:

\begin{pmatrix} \delta x_1 \\ \delta x_2 \\ \delta y_1 \end{pmatrix} =
\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ -p & p & 0 \end{pmatrix}
\begin{pmatrix} \delta x_1 \\ \delta x_2 \\ \delta y_1 \end{pmatrix}

which can easily be transposed (asterisk indicates adjoint variables):

\begin{pmatrix} \delta x_1^* \\ \delta x_2^* \\ \delta y_1^* \end{pmatrix} =
\begin{pmatrix} 1 & 0 & -p \\ 0 & 1 & p \\ 0 & 0 & 0 \end{pmatrix}
\begin{pmatrix} \delta x_1^* \\ \delta x_2^* \\ \delta y_1^* \end{pmatrix}

The corresponding adjoint code in FORTRAN is:

xb(1) = xb(1) -p*yb(1)
xb(2) = xb(2) +p*yb(1)
yb(1) = 0


Adjoint of one instruction (II)

We start again from the tangent linear code:

yd(2) = xd(1)*(r-x(3)) -xd(2) -x(1)*xd(3)

In matrix form, it can be written as:

\begin{pmatrix} \delta x_1 \\ \delta x_2 \\ \delta x_3 \\ \delta y_2 \end{pmatrix} =
\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ (r - x_3) & -1 & -x_1 & 0 \end{pmatrix}
\begin{pmatrix} \delta x_1 \\ \delta x_2 \\ \delta x_3 \\ \delta y_2 \end{pmatrix}

which can easily be transposed (asterisk indicates adjoint variables):

\begin{pmatrix} \delta x_1^* \\ \delta x_2^* \\ \delta x_3^* \\ \delta y_2^* \end{pmatrix} =
\begin{pmatrix} 1 & 0 & 0 & (r - x_3) \\ 0 & 1 & 0 & -1 \\ 0 & 0 & 1 & -x_1 \\ 0 & 0 & 0 & 0 \end{pmatrix}
\begin{pmatrix} \delta x_1^* \\ \delta x_2^* \\ \delta x_3^* \\ \delta y_2^* \end{pmatrix}

The corresponding adjoint code in FORTRAN is:

xb(1) = xb(1) +(r-x(3))*yb(2)
xb(2) = xb(2) -yb(2)
xb(3) = xb(3) -x(1)*yb(2)
yb(2) = 0

The terms (r - x(3)) and x(1) come from the trajectory! They need to be stored in memory or recomputed.
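Applying the same line-by-line procedure to the third tangent linear statement of the Lorenz model (a sketch, not shown in the original slides; it assumes yd(3) = xd(1)*x(2) +x(1)*xd(2) -b*xd(3)), the adjoint would read:

xb(1) = xb(1) +x(2)*yb(3)
xb(2) = xb(2) +x(1)*yb(3)
xb(3) = xb(3) -b*yb(3)
yb(3) = 0

Here again x(1) and x(2) come from the trajectory.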


Trajectory

The trajectory has to be available. It can be: • saved which costs memory, • recomputed which costs CPU time.

Depending on the complexity of the code, one option or the other is adopted (or both options at the same time).


The Adjoint Code

Property of adjoints (transposition):

(M_N' \cdots M_2' M_1')^T = M_1'^T M_2'^T \cdots M_N'^T

Application: if the tangent linear model is the sequence of operations M' = M_N' \cdots M_2' M_1', where M_i' represents one line of the tangent linear code, then its adjoint is M'^T = M_1'^T M_2'^T \cdots M_N'^T. The adjoint code is made of the transpose of each line of the tangent linear code, taken in reverse order.


Adjoint of loops

In the TL code for the Lorenz model we have:

DO i=1,3
  xd(i)=xd(i)+dt*yd(i)
ENDDO

dt is a constant for our purposes. This loop can be written explicitly:

xd(1)=xd(1)+dt*yd(1)
xd(2)=xd(2)+dt*yd(2)
xd(3)=xd(3)+dt*yd(3)

We can now transpose and reverse the lines to get the adjoint:

yb(3)=yb(3)+dt*xb(3)
yb(2)=yb(2)+dt*xb(2)
yb(1)=yb(1)+dt*xb(1)

which is equivalent to:

DO i=3,1,-1   !Reverse order of indices!
  yb(i)=yb(i)+dt*xb(i)
ENDDO


Conditional statements (“IF” statements)

• What we want is the adjoint of the statements which were actually executed in the direct model.

• We need to know which “branch” of the IF statement was executed

• The result of the conditional statement has to be stored: it is part of the trajectory !!!


Summary of basic rules for line-by-line adjoint coding (1)

Adjoint statements are derived from tangent linear ones in reversed order:

  Tangent linear code                    Adjoint code
  -------------------------------------  ----------------------------------------
  δx = 0                                 δx* = 0

  δx = A δy + B δz                       δy* = δy* + A δx*
                                         δz* = δz* + B δx*
                                         δx* = 0

  δx = A δx + B δz                       δz* = δz* + B δx*
                                         δx* = A δx*

  do k = 1, N                            do k = N, 1, −1    (reverse the loop!)
    δx(k) = A δx(k−1) + B δy(k)            δx*(k−1) = δx*(k−1) + A δx*(k)
  end do                                   δy*(k) = δy*(k) + B δx*(k)
                                           δx*(k) = 0
                                         end do

  if (condition) tangent linear code     if (condition) adjoint code

Order of operations is important when a variable is updated!
And do not forget to initialize local adjoint variables to zero!


Summary of basic rules for line-by-line adjoint coding (2)

  Tangent linear code          Trajectory and adjoint code
  ---------------------------  -----------------------------------------
  if (x > x0) then             ------------- Trajectory ----------------
    δx = A δx / x              xstore = x   (storage for use in adjoint)
    x = A Log(x)               if (x > x0) then
  end if                         x = A Log(x)
                               end if
                               --------------- Adjoint -----------------
                               if (xstore > x0) then
                                 δx* = A δx* / xstore
                               end if

The most common sources of error in adjoint coding are: 1) Pure coding errors 2) Forgotten initialization of local adjoint variables to zero 3) Mismatching trajectories in tangent linear and adjoint (even slightly) 4) Bad identification of trajectory updates

To save memory, the trajectory can be recomputed just before the adjoint calculations (again it depends on the complexity of the model).


More facts about adjoints

• The adjoint always exists and it is unique, assuming spaces of finite dimension. Hence, coding the adjoint does not raise questions about its existence, only questions related to the technical implementation.

• In the meteorological literature, the term adjoint is often improperly used to denote the adjoint of the tangent linear of a non-linear operator. In reality, the adjoint can be defined for any linear operator. One must be aware that discussions about the existence of the adjoint usually should address the existence of the tangent linear model.

• Without re-computation, the cost of the TL is usually about 1.5 times that of the non-linear code, the cost of the adjoint between 2 and 3 times.

• The tangent linear model is not strictly necessary to run a 4D-Var system (but it is needed in the incremental 4D-Var formulation in use operationally at ECMWF). It is also needed as an intermediate step to write and test the adjoint.


Test for tangent linear model

[Figure: comparison of the nonlinear and tangent linear perturbation evolutions as a function of the perturbation scaling factor; the agreement improves as the scaling factor decreases, until machine precision is reached.]
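The original slide shows this test only as a figure. In its standard form (an assumption here, since the formula is not spelled out in the text), one compares the nonlinear difference M(x+λδx) − M(x) with the tangent linear prediction M'(λδx) for decreasing scaling factors λ; the ratio must approach 1 until machine precision is reached. A minimal sketch on the second (nonlinear) Lorenz statement:

program tl_test
  implicit none
  ! Taylor test on the statement  y(2) = x(1)*(r-x(3)) - x(2)
  real(8), parameter :: r = 28.0d0
  real(8) :: x(3), dx(3), lambda, ynl, ypert, ytl, ratio
  integer :: k
  x  = (/ 1.5d0, -0.7d0, 20.0d0 /)   ! reference state (trajectory)
  dx = (/ 0.1d0,  0.2d0, -0.3d0 /)   ! perturbation
  ynl = x(1)*(r - x(3)) - x(2)       ! nonlinear statement (2) at the reference state
  lambda = 1.0d0
  do k = 1, 10
     ! nonlinear statement evaluated at the perturbed state
     ypert = (x(1)+lambda*dx(1))*(r - (x(3)+lambda*dx(3))) - (x(2)+lambda*dx(2))
     ! tangent linear applied to the scaled perturbation
     ytl = lambda*dx(1)*(r - x(3)) - x(1)*lambda*dx(3) - lambda*dx(2)
     ratio = (ypert - ynl)/ytl
     print *, 'lambda =', lambda, '   ratio =', ratio   ! ratio -> 1 as lambda -> 0
     lambda = lambda/10.0d0
  end do
end program tl_test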


Test for adjoint model

The adjoint test is truly unforgiving. If the ratio of the norms is not close to 1 within machine precision, you know there is a bug in your adjoint. At the end of your debugging you will have a perfect adjoint. If properly tested, the adjoint is the only piece of code on Earth to be entirely bug-free (although you may still have an imperfect tangent linear)!


Test of adjoint in practice…

• Compute the perturbed variable δy from perturbations of the input variables (δx, δz) with the tangent linear code:

\delta y = \frac{\partial y}{\partial x}\,\delta x + \frac{\partial y}{\partial z}\,\delta z

• Compute the TL norm:

\mathrm{NORM\_TL} = \|\delta y\|^2 = \langle \delta y, \delta y \rangle

• Call the adjoint routine, forced with δy* = δy, to obtain the gradients with respect to x and z:

\delta x^* = \delta x^* + \left(\frac{\partial y}{\partial x}\right)^T \delta y^* , \qquad
\delta z^* = \delta z^* + \left(\frac{\partial y}{\partial z}\right)^T \delta y^*

• Compute the norm from the adjoint calculation, using the input perturbations (\delta x_0, \delta z_0) and the gradients:

\mathrm{NORM\_AD} = \langle \delta x_0, \delta x^* \rangle + \langle \delta z_0, \delta z^* \rangle

• According to the test of the adjoint, NORM_TL must be equal to NORM_AD to machine precision!
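A minimal, self-contained sketch of this test (illustrative only), using the tangent linear and adjoint statements derived earlier for the second Lorenz equation:

program ad_test
  implicit none
  ! Adjoint test on:  TL      yd(2) = xd(1)*(r-x(3)) - xd(2) - x(1)*xd(3)
  !                   Adjoint xb(1) = xb(1) + (r-x(3))*yb(2), etc.
  real(8), parameter :: r = 28.0d0
  real(8) :: x(3), xd(3), xb(3), yd2, yb2, norm_tl, norm_ad
  x  = (/ 1.5d0, -0.7d0, 20.0d0 /)      ! trajectory
  xd = (/ 0.1d0,  0.2d0, -0.3d0 /)      ! input perturbation
  ! --- tangent linear ---
  yd2 = xd(1)*(r - x(3)) - xd(2) - x(1)*xd(3)
  norm_tl = yd2*yd2                     ! <yd, yd>
  ! --- adjoint, forced with the TL output ---
  yb2 = yd2
  xb  = 0.0d0                           ! initialize adjoint variables to zero
  xb(1) = xb(1) + (r - x(3))*yb2
  xb(2) = xb(2) - yb2
  xb(3) = xb(3) - x(1)*yb2
  yb2 = 0.0d0
  norm_ad = xd(1)*xb(1) + xd(2)*xb(2) + xd(3)*xb(3)   ! <xd, xb>
  print *, 'NORM_TL =', norm_tl
  print *, 'NORM_AD =', norm_ad         ! equal to machine precision
end program ad_test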


Automatic differentiation

• Because of the strict rules of tangent linear and adjoint coding, automatic differentiation is possible.

• Existing tools: TAF (TAMC), TAPENADE (Odyssée), ...

– Reverse the order of instructions, – Transpose instructions instantly without typos !!! – Especially good in deriving tangent linear codes!

• There are still unresolved issues:

– It is NOT a black box tool, – Cannot handle non-differentiable instructions (TL is wrong), – Can create huge arrays to store the trajectory, – The codes often need to be cleaned-up and optimised.

• Look in the “Supplementary material” for more information!


Useful References

• Variational data assimilation:
  Lorenc, A., 1986: Quarterly Journal of the Royal Meteorological Society, 112, 1177-1194.
  Courtier, P., et al., 1994: Quarterly Journal of the Royal Meteorological Society, 120, 1367-1387.
  Rabier, F., et al., 2000: Quarterly Journal of the Royal Meteorological Society, 126, 1143-1170.

• The adjoint technique:
  Errico, R.M., 1997: Bulletin of the American Meteorological Society, 78, 2577-2591.

• Tangent-linear approximation:
  Errico, R.M., et al., 1993: Tellus, 45A, 462-477.
  Errico, R.M., and K. Raeder, 1999: Quarterly Journal of the Royal Meteorological Society, 125, 169-195.
  Janisková, M., et al., 1999: Monthly Weather Review, 127, 26-45.
  Mahfouf, J.-F., 1999: Tellus, 51A, 147-166.

• Lorenz model:
  Huang, X.Y., and X. Yang, 1996: Variational data assimilation with the Lorenz model. Technical Report 26, HIRLAM, April 1996. Available on the ftp site (see notes for the practical session).
  Lorenz, E., 1963: Deterministic nonperiodic flow. J. Atmos. Sci., 20, 130-141.

• Automatic differentiation:
  Giering, R., 1997: Tangent Linear and Adjoint Model Compiler, Users Manual. Center for Global Change Sciences, Department of Earth, Atmospheric, and Planetary Science, MIT.
  Giering, R., and T. Kaminski, 1998: Recipes for Adjoint Code Construction. ACM Transactions on Mathematical Software.
  TAMC: http://www.autodiff.org/
  TAPENADE: http://www-sop.inria.fr/tropics/tapenade.html

• Sensitivity studies using the adjoint technique:
  Janisková, M., and J.-J. Morcrette, 2005: Investigation of the sensitivity of the ECMWF radiation scheme to input parameters using the adjoint technique. Quart. J. Roy. Meteor. Soc., 131, 1975-1996.