SPM short course – May 2007: Linear Models and Contrasts
Jean-Baptiste Poline, Neurospin, I2BM, CEA Saclay, France




Page 1

SPM short course – May 2007: Linear Models and Contrasts

Jean-Baptiste Poline

Neurospin, I2BM, CEA Saclay, France

Page 2

[Overview figure: images → realignment & coregistration → smoothing → normalisation (anatomical reference, spatial filter) → General Linear Model, linear fit (design matrix, adjusted data) → statistical map / statistical image (uncorrected p-values) → Random Field Theory → corrected p-values. Your question: a contrast.]

Page 3

Plan

• Make sure we know all about the estimation (fitting) part ...
• Make sure we understand the testing procedures: t- and F-tests
• But what do we test exactly?
• An example – almost real

Page 4

Temporal series fMRI: one voxel = one test (t, F, ...)

[Figure: voxel time course (amplitude vs. time) → General Linear Model fitting → statistical image (SPM)]

Page 5

Regression example ...

voxel time series = β1 × box-car reference function + β2 × mean value + error

Fit the GLM.

[Figure: the voxel time series (values around 90-110), the box-car reference function (scaled from -10 to 10), the constant "mean value" regressor, and the residual error (around -2 to 2)]

Page 6

Regression example ... (same data, box-car rescaled)

voxel time series = β1 × box-car reference function + β2 × mean value + error

Fit the GLM.

[Figure: the same voxel time series; the box-car reference function is now scaled from -2 to 2 instead of -10 to 10]

Page 7

... revisited: matrix form

Y = β1 · f(t) + β2 · 1 + error,   i.e.  Y = X β + ε

Page 8

Box-car regression: design matrix ...

Y = X β + ε

data vector (voxel time series) = design matrix × parameters + error vector
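A minimal numeric sketch of this equation (illustrative values, not SPM code): build a box-car-plus-mean design matrix and estimate the parameters by ordinary least squares.

```python
import numpy as np

n = 120                                                        # hypothetical number of scans
boxcar = np.tile(np.r_[np.zeros(10), np.ones(10)], n // 20)    # off/on blocks of 10 scans
X = np.column_stack([boxcar, np.ones(n)])                      # design matrix: [box-car, mean]

rng = np.random.default_rng(0)
Y = 2.0 * boxcar + 100.0 + rng.standard_normal(n)              # simulated voxel time series

beta = np.linalg.pinv(X) @ Y                                   # beta = (X'X)^+ X'Y  (OLS)
e = Y - X @ beta                                               # error vector (residuals)
print(beta)                                                    # approx. [2, 100]: box-car amplitude, mean
```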

Page 9

Add more reference functions ...

Discrete cosine transform basis functions
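A sketch of how such slow cosine regressors can be generated (sizes and normalisation are illustrative; SPM builds its own set internally):

```python
import numpy as np

def dct_basis(n_scans, n_basis):
    """n_scans x n_basis matrix of slow cosine (DCT) regressors."""
    t = np.arange(n_scans)
    C = np.zeros((n_scans, n_basis))
    for k in range(1, n_basis + 1):
        C[:, k - 1] = np.sqrt(2.0 / n_scans) * np.cos(np.pi * k * (2 * t + 1) / (2 * n_scans))
    return C

# e.g. 7 cosines for a 120-scan run, appended next to the box-car and the mean (9 regressors)
boxcar = np.tile(np.r_[np.zeros(10), np.ones(10)], 6)
X = np.column_stack([boxcar, dct_basis(120, 7), np.ones(120)])
```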

Page 10

... design matrix

Y = X β + ε

data vector = design matrix × parameters (= the betas, here β1 to β9) + error vector

Page 11

Fitting the model = finding some estimate of the betas = minimising the sum of squares of the residuals, S²

[Figure: raw fMRI time series; data adjusted for low-frequency effects; fitted "high-pass filter"; fitted box-car; residuals]

s² = sum of the squared residuals / (number of time points minus the number of estimated betas)
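In code, this step could look like the following sketch (illustrative, not SPM's implementation):

```python
import numpy as np

def fit_and_residuals(Y, X):
    """OLS fit; returns the betas, the residuals, and s2 = e'e / (n - p)."""
    beta = np.linalg.pinv(X) @ Y
    e = Y - X @ beta                               # residuals
    n = X.shape[0]                                 # number of time points
    p = np.linalg.matrix_rank(X)                   # number of (independent) estimated betas
    s2 = (e @ e) / (n - p)                         # sum of squared residuals / degrees of freedom
    return beta, e, s2
```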

Page 12

Take home ...

We put in our model regressors (or covariates) that represent how we think the signal is varying (of interest and of no interest alike). THE QUESTION IS: WHICH ONES TO INCLUDE?

Coefficients (= parameters) are estimated using Ordinary Least Squares (OLS), by minimising the fluctuations (the variability, the variance) of the noise, i.e. the residuals.

Because the parameters depend on the scaling of the regressors included in the model, one should be careful when comparing manually entered regressors.

The residuals, their sum of squares, and the resulting tests (t, F) do not depend on the scaling of the regressors.

Page 13

Plan

• Make sure we all know about the estimation (fitting) part ...
• Make sure we understand t and F tests
• But what do we test exactly?
• An example – almost real

Page 14

T test - one dimensional contrasts - SPM{t}

A contrast = a linear combination of parameters: c'β

c' = [ 1 0 0 0 0 0 0 0 ]

box-car amplitude > 0 ?  ⇔  β1 > 0 ?  ⇒  compute 1×b1 + 0×b2 + 0×b3 + ... and divide by the estimated standard deviation

T = contrast of estimated parameters / variance estimate

T = c'b / sqrt( s² c'(X'X)⁺ c )   →   SPM{t}

Page 15

How is this computed? (t-test)

Estimation [Y, X] → [b, s]

Y = X β + ε,  ε ~ N(0, σ²I)      (Y: the data at one voxel position)
b = (X'X)⁺ X'Y                   (b: estimate of β)  →  beta??? images
e = Y - Xb                       (e: estimate of ε)
s² = e'e / (n - p)               (s: estimate of σ; n: time points, p: parameters)  →  1 image: ResMS

Test [b, s², c] → [c'b, t]

contrast of estimated parameters / variance estimate:
Var(c'b) = s² c'(X'X)⁺ c         (computed for each contrast c, proportional to s²)
t = c'b / sqrt( s² c'(X'X)⁺ c )  (c'b → images spm_con???; the t images → images spm_t???)

Under the null hypothesis H0:  t ~ Student-t(df),  df = n - p
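The same recipe as a code sketch (one voxel, one contrast vector c; variable names mirror the slide, not actual SPM routines):

```python
import numpy as np

def glm_t(Y, X, c):
    """t statistic for the contrast c'b at one voxel."""
    b = np.linalg.pinv(X) @ Y                          # b = (X'X)^+ X'Y
    e = Y - X @ b                                      # residuals
    n, p = X.shape[0], np.linalg.matrix_rank(X)
    s2 = (e @ e) / (n - p)                             # residual variance (ResMS)
    var_cb = s2 * (c @ np.linalg.pinv(X.T @ X) @ c)    # Var(c'b) = s^2 c'(X'X)^+ c
    t = (c @ b) / np.sqrt(var_cb)
    return t, n - p                                    # under H0, t ~ Student-t(n - p)
```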

Page 16

F-test: a reduced model or ...

Tests multiple linear hypotheses. Ex: does X1 model anything?

This (full) model [X1 X0], with residual sum of squares S²?  Or this one [X0 alone], with residual sum of squares S0²?

H0: the true (reduced) model is X0

F = additional variance accounted for by the tested effects / error variance estimate

F ~ ( S0² - S² ) / S²

Page 17

F-test: a reduced model or ... multi-dimensional contrasts?

SPM{F} tests multiple linear hypotheses. Ex: does the DCT set model anything?

test H0: c'β = 0 ?    H0: β3 ... β9 = (0 0 0 0 ...)

c' = [ 0 0 1 0 0 0 0 0 0
       0 0 0 1 0 0 0 0 0
       0 0 0 0 1 0 0 0 0
       0 0 0 0 0 1 0 0 0
       0 0 0 0 0 0 1 0 0
       0 0 0 0 0 0 0 1 0
       0 0 0 0 0 0 0 0 1 ]

This model [X1 (3-9), X0]?  Or this one [X0 alone]?    H0: the true model is X0

Page 18

How is this computed? (F-test)

additional variance accounted for by the tested effects / error variance estimate

Estimation [Y, X0] → [b0, s0]        (X0: X reduced)
Y = X0 β0 + ε0,   ε0 ~ N(0, σ0²I)
b0 = (X0'X0)⁺ X0'Y
e0 = Y - X0 b0                        (e0: estimate of ε0)
s0² = e0'e0 / (n - p0)                (s0: estimate of σ0; n: time points, p0: parameters of the reduced model)

Estimation [Y, X] → [b, s]
Y = X β + ε,   ε ~ N(0, σ²I)

Test [b, s, c] → [ess, F]
F ~ (s0² - s²) / s²                   →  image spm_ess???
                                      →  image of F: spm_F???

Under the null hypothesis: F ~ F(p - p0, n - p)
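A sketch of the same computation via the residual sums of squares of the full and reduced models (textbook extra-sum-of-squares F, not SPM's exact code path):

```python
import numpy as np

def glm_f_reduced(Y, X, X0):
    """F statistic comparing the full model X against the reduced model X0."""
    def rss_and_rank(M):
        b = np.linalg.pinv(M) @ Y
        e = Y - M @ b
        return e @ e, np.linalg.matrix_rank(M)
    S2, p = rss_and_rank(X)                        # full model
    S02, p0 = rss_and_rank(X0)                     # reduced model
    n = X.shape[0]
    F = ((S02 - S2) / (p - p0)) / (S2 / (n - p))   # under H0, F ~ F(p - p0, n - p)
    return F, (p - p0, n - p)
```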

Page 19

T and F tests: take home ...

T tests are simple combinations of the betas; they are either positive or negative (b1 - b2 is different from b2 - b1).

F tests can be viewed as testing for the additional variance explained by a larger model with respect to a simpler model, or

F tests test the sum of the squares of one or several combinations of the betas.

In testing a "single contrast" with an F test, for example b1 - b2, the result will be the same as testing b2 - b1. It will be exactly the square of the t-test, testing for both positive and negative effects.

Page 20

Plan

• Make sure we all know about the estimation (fitting) part ...
• Make sure we understand t and F tests
• But what do we test exactly? Correlation between regressors
• An example – almost real

Page 21

« Additional variance »: Again

Independent contrasts

Page 22

« Additional variance »: Again

Correlated regressors, for example: green = subject age, yellow = subject score

Testing for the green

Page 23

« Additional variance »: Again

correlated contrasts

Testing for the red

Page 24

« Additional variance »: Again

Entirely correlated contrasts ?

Non estimable !

Testing for the green

Page 25

« Additional variance » : Again« Additional variance » : Again

Entirely correlated contrasts? Non estimable! If significant? Could be G or Y!

Testing for the green and yellow

Page 26

Plan

• Make sure we all know about the estimation (fitting) part ...
• Make sure we understand t and F tests
• But what do we test exactly? Correlation between regressors
• An example – almost real

Page 27

A real example (almost!)

Factorial design with 2 factors: modality and category. 2 levels for modality (e.g. Visual/Auditory), 3 levels for category (e.g. 3 categories of words).

[Figure: experimental design (modality V, A × category C1, C2, C3) and the corresponding design matrix with columns V A C1 C2 C3]

Page 28

Asking ourselves some questions ...

Design matrix columns: V A C1 C2 C3

• Design matrix not orthogonal
• Many contrasts are non estimable
• Interactions MxC are not modelled

Test V > A :   c = [ 1 -1 0 0 0 0 ]
Test C1 > C2 : c = [ 0 0 1 -1 0 0 ]

Test C1, C2, C3 ? (F)   c = [ 0 0 1 0 0 0
                              0 0 0 1 0 0
                              0 0 0 0 1 0 ]

Test the interaction MxC ?

Page 29

Modelling the interactions

Page 30

Asking ourselves some questions ...

Design matrix columns: V A  V A  V A  (for C1 C1  C2 C2  C3 C3) + mean

• Design matrix orthogonal
• All contrasts are estimable
• Interactions MxC modelled
• If no interaction ... ? Model is too "big"!

Test V > A :   c = [ 1 -1 1 -1 1 -1 0 ]

Test C1 > C2 : c = [ 1 1 -1 -1 0 0 0 ]

Test the category effect (F):
c = [ 1 1 -1 -1  0  0 0
      0 0  1  1 -1 -1 0
      1 1  0  0 -1 -1 0 ]

Test the interaction MxC (F):
c = [ 1 -1 -1  1  0 0 0
      0  0  1 -1 -1 1 0
      1 -1  0  0 -1 1 0 ]
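As an illustration (a sketch using textbook GLM formulas rather than SPM's internals), these contrasts can be written as arrays, and an F statistic computed directly for any contrast matrix C testing C·β = 0:

```python
import numpy as np

# columns assumed as on the slide: V-C1, A-C1, V-C2, A-C2, V-C3, A-C3, mean
c_modality    = np.array([1, -1, 1, -1, 1, -1, 0])            # V > A (t contrast)
c_interaction = np.array([[1, -1, -1,  1,  0, 0, 0],          # MxC interaction (F contrast)
                          [0,  0,  1, -1, -1, 1, 0]])

def glm_f_contrast(Y, X, C):
    """F statistic for a contrast matrix C (one row per constraint on beta)."""
    C = np.atleast_2d(C)
    b = np.linalg.pinv(X) @ Y
    e = Y - X @ b
    n, p = X.shape[0], np.linalg.matrix_rank(X)
    s2 = (e @ e) / (n - p)
    Cb = C @ b
    mid = np.linalg.pinv(C @ np.linalg.pinv(X.T @ X) @ C.T)
    r = np.linalg.matrix_rank(C)
    F = (Cb @ mid @ Cb) / r / s2                   # under H0, F ~ F(r, n - p)
    return F, (r, n - p)
```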

Page 31

Asking ourselves some questions ... with a more flexible model

Design matrix columns: V A  V A  V A over C1 C1  C2 C2  C3 C3, each condition modelled with two regressors (e.g. response and delay) + mean

Test C1 > C2 ?  Test C1 different from C2 ?
from  c = [ 1 1 -1 -1 0 0 0 ]
to    c = [ 1 0 1 0 -1 0 -1 0 0 0 0 0 0
            0 1 0 1 0 -1 0 -1 0 0 0 0 0 ]
becomes an F test!

Test V > A ?   c = [ 1 0 -1 0 1 0 -1 0 1 0 -1 0 0 ]
is possible, but is OK only if the regressors coding for the delay are all equal.

Page 32

Convolution model

[Figure: design and contrast → SPM(t) or SPM(F); fitted and adjusted data]
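A sketch of what "convolution model" means here: the condition regressors are the stimulus time course convolved with a haemodynamic response function. The gamma-shaped kernel below is only illustrative, not SPM's canonical HRF, and the timing values are made up.

```python
import numpy as np
from scipy.stats import gamma

TR = 2.0                                                   # hypothetical repetition time (s)
t = np.arange(0, 32, TR)
hrf = gamma.pdf(t, a=6)                                    # simple gamma-shaped impulse response
hrf /= hrf.sum()

boxcar = np.tile(np.r_[np.zeros(10), np.ones(10)], 6)      # stimulus on/off time course
regressor = np.convolve(boxcar, hrf)[:len(boxcar)]         # convolved column for the design matrix
```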

Page 33

Toy example: take home ...

F tests have to be used when:
- testing for >0 and <0 effects
- testing for more than 2 levels
- conditions are modelled with more than one regressor

F tests can be viewed as testing for:
- the additional variance explained by a larger model with respect to a simpler model, or
- the sum of the squares of one or several combinations of the betas (here the F test of b1 - b2 is the same as b2 - b1, but two-tailed compared to a t-test).

Page 34

Plan

• Make sure we all know about the estimation (fitting) part ...
• Make sure we understand t and F tests
• But what do we test exactly? Correlation between regressors
• A (nearly) real example
• A bad model ... and a better one

Page 35

A bad model ...

True signal and observed signal (---)
Model (green, peak at 6 sec)
TRUE signal (blue, peak at 3 sec)
Fitting (b1 = 0.2, mean = 0.11)
⇒ Test for the green regressor not significant
Residual (still contains some signal)

Page 36

A bad model ...

Y = X β + ε

β1 = 0.22,  β2 = 0.11
Residual variance = 0.3

P(Y | β1 = 0) ⇒ p-value = 0.1 (t-test)
P(Y | β1 = 0) ⇒ p-value = 0.2 (F-test)

Page 37

A « better » model ...

True signal + observed signal
Model (green and red) and true signal (blue ---)
Red regressor: temporal derivative of the green regressor
Global fit (blue) and partial fit (green & red); adjusted and fitted signal
⇒ t-test of the green regressor significant
⇒ F-test very significant
⇒ t-test of the red regressor very significant
Residual (a smaller variance)
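A sketch of how the extra "red" column can be built from the "green" regressor (illustrative; SPM has its own machinery for temporal derivatives):

```python
import numpy as np

def with_temporal_derivative(reg):
    """Return the regressor plus its temporal derivative as two design-matrix columns."""
    return np.column_stack([reg, np.gradient(reg)])

# e.g. green = convolved box-car, red = its temporal derivative, plus a constant:
# X = np.column_stack([with_temporal_derivative(regressor), np.ones(len(regressor))])
```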

Page 38

A better model ...

Y = X β + ε

β1 = 0.22,  β2 = 2.15,  β3 = 0.11
Residual variance = 0.2

P(Y | β1 = 0) ⇒ p-value = 0.07 (t-test)
P(Y | β1 = 0, β2 = 0) ⇒ p-value = 0.000001 (F-test)

Page 39

Flexible models: Gamma Basis

Page 40

Summary ... (2)

• Test flexible models if there is little a priori information
• The residuals should be looked at ...!
• In general, use the F-tests to look for an overall effect, then look at the response shape
• Interpreting the test on a single parameter (one regressor) can be difficult: cf. the delay or magnitude situation. BRING ALL PARAMETERS TO THE 2nd LEVEL

Page 41

Lunch?

Page 42

Plan

• Make sure we all know about the estimation (fitting) part ...
• Make sure we understand t and F tests
• But what do we test exactly? Correlation between regressors
• A (nearly) real example
• A bad model ... and a better one
• Correlation in our model: do we mind?

Page 43

Correlation between regressors

[Figure: true signal; model (green and red); fit (blue: global fit); residual]

Page 44

Correlation between regressors

Y = X β + ε

β1 = 0.79,  β2 = 0.85,  β3 = 0.06
Residual variance = 0.3

P(Y | β1 = 0) ⇒ p-value = 0.08 (t-test)
P(Y | β2 = 0) ⇒ p-value = 0.07 (t-test)
P(Y | β1 = 0, β2 = 0) ⇒ p-value = 0.002 (F-test)

Page 45

Correlation between regressors - 2

[Figure: true signal; model (green and red); fit; residual]

The red regressor has been orthogonalised with respect to the green one: remove everything that correlates with the green regressor.
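That orthogonalisation is a least-squares projection; a minimal sketch (illustrative code, not SPM's own routine):

```python
import numpy as np

def orthogonalise(red, green):
    """Remove from `red` everything that can be explained by `green`."""
    G = np.column_stack([green])                   # could hold several columns
    fitted = G @ (np.linalg.pinv(G) @ red)         # part of red explained by green
    return red - fitted                            # red, decorrelated from green
```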

Page 46

Correlation between regressors - 2

Y = X β + ε

b1 = 1.47,  b2 = 0.85,  b3 = 0.06
(0.79*** 0.85 0.06)
Residual variance = 0.3

P(Y | β1 = 0) ⇒ p-value = 0.0003 (t-test)
P(Y | β2 = 0) ⇒ p-value = 0.07 (t-test)
P(Y | β1 = 0, β2 = 0) ⇒ p-value = 0.002 (F-test)

See « explore design »

Page 47

Design orthogonality: « explore design »

Black = completely correlated, white = completely orthogonal

[Figure: two example designs with their regressor correlation matrices, Corr(1,1), Corr(1,2), ... for regressors 1 and 2]

Beware: when there are more than 2 regressors (C1, C2, C3, ...), you may think that there is little correlation (light grey) between them, but C1 + C2 + C3 may be correlated with C4 + C5.
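A sketch of the quantity such a display shows: the absolute cosine of the angle between every pair of design columns (0 = orthogonal, shown white; 1 = completely correlated, shown black). It is only pairwise, hence the caveat above about sums of regressors.

```python
import numpy as np

def design_orthogonality(X):
    """Pairwise |cos(angle)| between the columns of the design matrix X."""
    Xn = X / np.linalg.norm(X, axis=0)             # normalise each column
    return np.abs(Xn.T @ Xn)                       # 0 = orthogonal, 1 = fully correlated
```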

Page 48

Summary

• We implicitly test for an additional effect only; be careful if there is correlation
  - Orthogonalisation = decorrelation: not generally needed
  - Parameters and the test on the non-modified regressor change

• In case of correlation, use F-tests to see the overall significance. There is generally no way to decide to which regressor the « common » part should be attributed

• Original regressors may not matter: it's the contrast you are testing which should be as decorrelated as possible from the rest of the design matrix

• It is always simpler to have orthogonal regressors, and therefore orthogonal designs!

Page 49

Conclusion: check your models

• Check your residuals/model - multivariate toolbox
• Check group homogeneity - Distance toolbox
• Check your HRF form - HRF toolbox

www.madic.org !

Page 50

Page 51

Implicit or explicit decorrelation (or orthogonalisation)

[Figure: geometric view. Y is projected onto the space of X (fit Xb, error e); regressors C1 and C2 span that space. LC2 = test of C2 in the implicit model; LC1 = test of C1 in the explicit model.]

cf Andrade et al., NeuroImage, 1999

This generalises when testing several regressors (F tests)

Page 52

« Completely » correlated regressors ...

Y = Xb + e

        Cond 1  Cond 2  Mean
X =   [   1       0      1
          1       0      1
          0       1      1
          0       1      1  ]

Parameters are not unique in general ! Some contrasts have no meaning: NON ESTIMABLE

c = [1 0 0] is not estimable (no specific information in the first regressor);

c = [1 -1 0] is estimable;
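A sketch of one way to check estimability numerically: a contrast c'β is estimable exactly when c lies in the row space of X, i.e. appending c as an extra row does not increase the rank (illustrative code, not SPM's own test):

```python
import numpy as np

def is_estimable(c, X):
    """True if the contrast c is a linear combination of the rows of X."""
    return np.linalg.matrix_rank(np.vstack([X, c])) == np.linalg.matrix_rank(X)

X = np.array([[1, 0, 1],
              [1, 0, 1],
              [0, 1, 1],
              [0, 1, 1]])                          # columns: Cond 1, Cond 2, Mean (= C1 + C2)

print(is_estimable([1,  0, 0], X))                 # False: [1 0 0] is not estimable
print(is_estimable([1, -1, 0], X))                 # True:  [1 -1 0] is estimable
```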

[Figure: geometric view. Y, fit Xb and error e in the space of X; the regressors C1, C2 and Mean = C1 + C2 lie in the same plane.]