Survey Design II
Lecture 9
Survey Research & Design in Psychology
James Neill, 2011
Multiple Linear Regression II
& Analysis of Variance I
7126/6667 Survey Research & Design in Psychology
Semester 1, 2011, University of Canberra, ACT, Australia
James T. Neill
Home page: http://ucspace.canberra.edu.au/display/7126
Lecture page: http://en.wikiversity.org/wiki/Survey_research_methods_and_design_in_psychology
http://ucspace.canberra.edu.au/pages/viewpage.action?pageId=48398957
Image sources:
http://commons.wikimedia.org/wiki/File:Vidrarias_de_Laboratorio.jpg
License: Public domain
http://commons.wikimedia.org/wiki/File:3D_Bar_Graph_Meeting.jpg
License: CC-by-A 2.0
Author: lumaxart (http://www.flickr.com/photos/lumaxart/)
Description: Explains advanced use of multiple linear regression, including residuals, interactions and analysis of change, then introduces the principles of ANOVA starting with explanation of t-tests.
This lecture is accompanied by two computer-lab-based tutorials, the notes for which are available here:
http://ucspace.canberra.edu.au/display/7126/Tutorial+-+Multiple+linear+regression
http://ucspace.canberra.edu.au/display/7126/Tutorial+-+ANOVA
Overview
Multiple Linear Regression II
Analysis of Variance I
Image source: http://commons.wikimedia.org/wiki/File:Information_icon4.svgLicense: Public domain
Multiple Linear Regression II
Summary of MLR I
Partial correlations
Residual analysis
Interactions
Analysis of change
Howell (2009). Correlation & regression [Ch 9]
Howell (2009). Multiple regression [Ch 15; not 15.14 Logistic Regression]
Tabachnick & Fidell (2001). Standard & hierarchical regression in SPSS (includes example write-ups) [Alternative chapter from eReserve]
Readings - MLR
As per previous lecture
Summary of MLR I
Check assumptions: LOM, N, normality, linearity, homoscedasticity, collinearity, MVOs, residuals
Choose type: Standard, Hierarchical, Stepwise, Forward, Backward
Interpret: overall fit (R2, and changes in R2 if hierarchical), coefficients (standardised & unstandardised), partial correlations
Equation: if useful (e.g., is the study predictive?)
These residual slides are based on Francis (2007) MLR (Section 5.1.4) Practical Issues & Assumptions, pp. 126-127 and Allen and Bennett (2008)
Note that, according to Francis, residual analysis can also test additivity (i.e., no interactions between IVs), but this has been left out for the sake of simplicity.
Partial correlations (rp)
rp is the correlation between X and Y after controlling for (partialling out) the influence of a third variable from both X and Y.
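The lecture's worked examples use SPSS, but the "residualise both variables, then correlate the residuals" definition of a partial correlation can be sketched in a few lines of Python; the data and variable names below are fabricated for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Fabricated data: a third variable z drives both x and y,
# so x and y are correlated only through z.
z = rng.normal(size=200)
x = z + rng.normal(size=200)
y = z + rng.normal(size=200)

def partial_corr(x, y, z):
    """Correlation between x and y after removing the linear
    influence of z from both (residualise, then correlate)."""
    rx = x - np.polyval(np.polyfit(z, x, 1), z)  # residuals of x on z
    ry = y - np.polyval(np.polyfit(z, y, 1), z)  # residuals of y on z
    return np.corrcoef(rx, ry)[0, 1]

r_xy = np.corrcoef(x, y)[0, 1]   # zero-order correlation (substantial)
r_xy_z = partial_corr(x, y, z)   # partial correlation (near zero here)
```

Because z fully accounts for the x-y association in this fabricated example, the partial correlation shrinks towards zero while the zero-order correlation stays substantial, which is exactly the comparison the slides recommend making.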
Image source: http://www.gseis.ucla.edu/courses/ed230bc1/notes1/con1.html
If IVs are correlated, then you should also examine the difference between the zero-order and partial correlations.
Partial correlations (rp): Examples
Does years of marriage (IV1) predict marital satisfaction (DV) after number of children is controlled for (IV2)?
Does time management (IV1) predict university student satisfaction (DV) after general life satisfaction is controlled for (IV2)?
Partial correlations (rp) in MLR
When interpreting MLR coefficients, compare the zero-order and partial correlations for each IV in each model; drawing a Venn diagram can help.
Partial correlations will be equal to or smaller than the 0-order correlations
If a partial correlation is the same as the 0-order correlation, then the IV operates independently on the DV.
Partial correlations (rp) in MLR
To the extent that a partial correlation is smaller than the 0-order correlation, then the IV's explanation of the DV is shared with other IVs.
An IV may have a sig. zero-order correlation with the DV but a non-sig. partial correlation, indicating that the IV does not explain significant unique variance in the DV.
Semi-partial correlations (sr2)
in MLR
The sr2 values indicate the percentage of variance in the DV which is uniquely explained by each IV.
In PASW/SPSS, the sr values are labelled "part". You need to square these to get sr2.
For more info, see Allen and Bennett (2008) p. 182
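A squared semi-partial correlation is equivalent to the increment in R2 when an IV is entered last into the model. A minimal Python sketch of that equivalence, with fabricated data and hypothetical variable names (the deck itself reads sr from SPSS output):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300
x1 = rng.normal(size=n)
x2 = 0.5 * x1 + rng.normal(size=n)   # deliberately correlated IVs
y = x1 + x2 + rng.normal(size=n)     # fabricated DV

def r2(X, y):
    """R-squared from a least-squares fit of y on the columns of X
    (an intercept column is added automatically)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1 - (y - X @ beta).var() / y.var()

# sr^2 for x2 = R^2(x1, x2) - R^2(x1 only):
# the variance in y uniquely attributable to x2.
sr2_x2 = r2(np.column_stack([x1, x2]), y) - r2(x1.reshape(-1, 1), y)
```

Because x1 and x2 share variance, sr2_x2 is noticeably smaller than x2's squared zero-order correlation with y, which is the point the slides make about shared versus unique explanation.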
Part & partial correlations in SPSS
In Linear Regression - Statistics dialog box, check Part and partial correlations
Multiple linear regression -
Example
Image source: James Neill, Creative Commons Attribution-Share Alike 2.5 Australia, http://creativecommons.org/licenses/by-sa/2.5/au/
[Venn diagram, fabricated data: correlations of .18, .32, .34, .46, and .52 among Y, X1, and X2]
Image source: James Neill, Creative Commons Attribution-Share Alike 2.5 Australia, http://creativecommons.org/licenses/by-sa/2.5/au/
The partial correlation between Worry and Distress is .46, which uniquely explains considerably more variance than the partial correlation between Ignore and Distress (.18).
Residual analysis
Image source: Unknown
Residuals are the distances between the predicted and actual scores. Standardised residuals: subtract the mean, divide by the standard deviation.
Residual analysis
Three key assumptions can be tested using plots of residuals: Linearity: IVs are linearly related to DV
Normality of residuals
Equal variances (Homoscedasticity)
Residual analysis
Assumptions about residuals:Random noise
Sometimes positive, sometimes negative but, on average, 0
Normally distributed about 0
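The assumptions above (residuals are random noise, average 0, normally distributed) can be checked numerically as well as by the plots the following slides show. A small Python sketch with fabricated data; the Shapiro-Wilk test is one common numerical complement to a histogram, not something the slides themselves use:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
x = rng.normal(size=100)
y = 2 * x + rng.normal(size=100)   # a well-behaved linear model

# Fit a simple regression and compute residuals
slope, intercept = np.polyfit(x, y, 1)
resid = y - (slope * x + intercept)

# With an intercept in the model, residuals average exactly 0
mean_resid = resid.mean()

# Shapiro-Wilk test of the normality of the residuals
w, p = stats.shapiro(resid)
```

A very small p here would flag non-normal residuals; as the slides note, only gross violations are usually cause for concern.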
Residual analysis
Image source: James Neill, Creative Commons Attribution-Share Alike 2.5 Australia, http://creativecommons.org/licenses/by-sa/2.5/au/
Residual analysis
Image source: James Neill, Creative Commons Attribution-Share Alike 2.5 Australia, http://creativecommons.org/licenses/by-sa/2.5/au/
Histogram of the residuals: they should be approximately normally distributed. This plot is very slightly positively skewed; we are only concerned about gross violations.
Residual analysis
The Normal P-P (Probability) Plot of Regression Standardized Residuals can be used to assess the assumption of normally distributed residuals. If the points cluster reasonably tightly along the diagonal line (as they do here), the residuals are normally distributed. Substantial deviations from the diagonal may be cause for concern.
Allen & Bennett, 2008, p. 183
Image source: James Neill, Creative Commons Attribution-Share Alike 2.5 Australia, http://creativecommons.org/licenses/by-sa/2.5/au/
Normal probability plot of regression standardised residuals. Shows the same information as the previous slide: reasonable normality of residuals.
Residual analysis
Image source: James Neill, Creative Commons Attribution-Share Alike 2.5 Australia, http://creativecommons.org/licenses/by-sa/2.5/au/
Histogram compared to normal probability plot.Variations from normality can be seen on both plots.
Residual analysis
The scatterplot of standardised residuals against standardised predicted values can be used to assess the assumptions of normality, linearity, and homoscedasticity of residuals. The absence of any clear patterns in the spread of points indicates that these assumptions are met.
Allen & Bennett, 2008, p. 183
Image source: James Neill, Creative Commons Attribution-Share Alike 2.5 Australia, http://creativecommons.org/licenses/by-sa/2.5/au/
A plot of predicted values (ZPRED) by residuals (ZRESID). This should show a broad, horizontal band of points (it does). Any fanning out of the residuals indicates a violation of the homoscedasticity assumption; any pattern in the plot indicates a violation of linearity.
Why the big fuss
about residuals?
Assumption violations inflate the Type I error rate (i.e., more false positives).
Why the big fuss
about residuals?
Standard error formulae (which are used for confidence intervals and sig. tests) work when residuals are well-behaved.
If the residuals don't meet assumptions, these formulae tend to underestimate coefficient standard errors, giving overly optimistic p-values and too-narrow CIs.
Interactions
Image source: http://commons.wikimedia.org/wiki/File:Color_icon_orange.png
Image license: Public domain
Image author: User:Booyabazooka, http://en.wikipedia.org/wiki/User:Booyabazooka
Interactions
Additivity refers to the assumption that the IVs act independently, i.e., they do not interact.
However, there may also be interaction effects - when the magnitude of the effect of one IV on a DV varies as a function of a second IV.
Also known as a moderation effect.
Interactions potentially occur in situations involving univariate analysis of variance and covariance (ANOVA and ANCOVA), multivariate analysis of variance and covariance (MANOVA and MANCOVA), multiple linear regression (MLR), logistic regression, path analysis, and covariance structure modeling. ANOVA and ANCOVA models are special cases of MLR in which one or more predictors are nominal or ordinal "factors".
Interaction effects are sometimes called moderator effects because the interacting third variable, which changes the relation between the two original variables, is a moderator variable. For instance, the relation between income and conservatism may be moderated by level of education.
Interactions
Some drugs interact with each other to reduce or enhance each other's effects, e.g.: Pseudoephedrine → Arousal; Caffeine → Arousal; Pseudoephedrine × Caffeine → Arousal
X
Image source: http://www.flickr.com/photos/comedynose/3491192647/License: CC-by-A 2.0, http://creativecommons.org/licenses/by/2.0/deed.enAuthor: comedynose, http://www.flickr.com/photos/comedynose/
Image source: http://www.flickr.com/photos/teo/2068161/License: CC-by-SA 2.0, http://creativecommons.org/licenses/by-sa/2.0/deed.enAuthor: Teo, http://www.flickr.com/photos/teo/
Interactions
Physical exercise in natural environments may provide multiplicative benefits in reducing stress, e.g.: Natural environment → Stress; Physical exercise → Stress; Natural env. × Phys. ex. → Stress
Image source: http://www.flickr.com/photos/hamed/258971456/License: CC-by-A 2.0Author: Hamed Saber, http://www.flickr.com/photos/hamed/
Interactions
Model interactions by creating cross-product term IVs, e.g.:
Pseudoephedrine
Caffeine
Pseudoephedrine x Caffeine (cross-product)
In SPSS, compute a cross-product term, e.g.:
Compute PseudoCaffeine = Pseudo*Caffeine.
Example hypotheses:
Income has a direct positive influence on Conservatism.
Education has a direct negative influence on Conservatism.
Income combined with Education may have a strongly negative effect on Conservatism above and beyond that predicted by the direct effects, i.e., there is an interaction between Income and Education.
Interactions
Y = b1x1 + b2x2 + b12x1x2 + a + e
The cross-product term x1x2 is the product of the two predictors (x1 × x2).
b12 can be interpreted as the amount of change in the slope of the regression of Y on x1 when x2 changes by one unit.
Likewise, power terms (e.g., x squared) can be added as independent variables to explore curvilinear effects.
Interactions
Conduct Hierarchical MLR
Step 1:Pseudoephedrine
Caffeine
Step 2:Pseudo x Caffeine (cross-product)
Examine the change in R2 (ΔR2) to see whether the interaction term explains additional variance above and beyond the direct effects of Pseudo and Caffeine.
Interactions
Possible effects of Pseudo and Caffeine on Arousal:
None
Pseudo only (incr./decr.)
Caffeine only (incr./decr.)
Pseudo + Caffeine (additive inc./dec.)
Pseudo x Caffeine (synergistic inc./dec.)
Pseudo x Caffeine (antagonistic inc./dec.)
Interactions
Cross-product interaction terms may be highly correlated (multicollinear) with the corresponding simple IVs, creating problems with assessing the relative importance of main effects and interaction effects.
An alternative approach is to run separate regressions for each level of the interacting variable.
For example, conduct a separate regression for males and females.
Advanced notes: It may be desirable to use centred variables (where one has subtracted the mean from each datum), a transformation which often reduces multicollinearity. Note also that there are alternatives to the cross-product approach to analysing interactions: http://www2.chass.ncsu.edu/garson/PA765/regress.htm#interact
Interactions SPSS example
Image source: James Neill, Creative Commons Attribution 3.0 Australia, http://creativecommons.org/licenses/by/3.0/au/deed.en
Fabricated data
Analysis of Change
Image source: http://commons.wikimedia.org/wiki/File:PMinspirert.jpg
Image license: CC-by-SA, http://creativecommons.org/licenses/by-sa/3.0/ and GFDL, http://commons.wikimedia.org/wiki/Commons:GNU_Free_Documentation_License
Image author: Bjørn som tegner, http://commons.wikimedia.org/wiki/User:Bj%C3%B8rn_som_tegner
Analysis of change
Example research question: In group-based mental health interventions, does the quality of social support from group members (IV1) and group leaders (IV2) explain changes in participants' mental health between the beginning and end of the intervention (DV)?
Analysis of change
Hierarchical MLR
DV = Mental health after the intervention
Step 1: IV1 = Mental health before the intervention
Step 2: IV2 = Support from group members; IV3 = Support from group leader
Step 1: MH1 should be a highly significant predictor. The leftover variance represents the change in MH between Time 1 and Time 2 (plus error).
Step 2: If IV2 and IV3 are significant predictors, then they help to predict changes in MH.
Analysis of change
Strategy: Use hierarchical MLR to partial out pre-intervention individual differences from the DV, leaving only the variances of the changes in the DV b/w pre- and post-intervention for analysis in Step 2.
Analysis of change
Results of interest:
Change in R2: how much variance in change scores is explained by the predictors
Regression coefficients for predictors in step 2
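The analysis-of-change strategy (baseline in Step 1, predictors of change in Step 2) can be sketched as follows; the data are fabricated, with support deliberately built in as a predictor of post-intervention mental health over and above baseline.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
mh_pre = rng.normal(size=n)       # fabricated baseline mental health (Time 1)
support = rng.normal(size=n)      # fabricated support from group members
# Post scores depend on baseline plus support, plus noise
mh_post = 0.7 * mh_pre + 0.5 * support + rng.normal(scale=0.5, size=n)

def r2(X, y):
    """R-squared from a least-squares fit (intercept added)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1 - (y - X @ beta).var() / y.var()

# Step 1: baseline partials out pre-existing individual differences
r2_baseline = r2(mh_pre.reshape(-1, 1), mh_post)
# Step 2: does support predict change over and above baseline?
r2_full = r2(np.column_stack([mh_pre, support]), mh_post)

delta_r2 = r2_full - r2_baseline  # variance in *change* explained by support
```

delta_r2 is the "change in R2" result the slide highlights: the variance in change scores explained by the Step 2 predictor.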
Summary (MLR II)
Partial correlation
Unique variance explained by IVs; calculate and report sr2.
Residual analysis
A way to test key assumptions.
Summary (MLR II)
Interactions
A way to model (rather than ignore) interactions between IVs.
Analysis of change
Use hierarchical MLR to partial out baseline scores in Step 1 in order to use IVs in Step 2 to predict changes over time.
Student questions
?
MLR practice quiz
MLR practice quiz (Wikiversity)
For example, conduct a separate regression for males and females.Advanced notes: It may be desirable to use centered variables (where one has subtracted the mean from each datum) -- a transformation which often reduces multicollinearity. Note also that there are alternatives to the crossproduct approach to analysing interactions: http://www2.chass.ncsu.edu/garson/PA765/regress.htm#interact
ANOVA I
Analysing differences
t-tests:
One sample
Independent
Paired
Image sources:
http://commons.wikimedia.org/wiki/File:Information_icon4.svg
License: Public domain
http://commons.wikimedia.org/wiki/File:3D_Bar_Graph_Meeting.jpg
License: CC-by-A 2.0
Author: lumaxart (http://www.flickr.com/photos/lumaxart/)
Howell (2010):
Ch3 The Normal Distribution
Ch4 Sampling Distributions and Hypothesis Testing
Ch7 Hypothesis Tests Applied to Means
Readings Analysing differences
Analysing differences
Correlations vs. differences
Which difference test?
Parametric vs. non-parametric
Correlational vs. difference statistics
Correlation and regression techniques reflect the strength of association. Tests of differences reflect differences in the central tendency of variables between groups and measures.
Correlation/regression techniques reflect the strength of association between continuous variables
Tests of group differences (t-tests, ANOVA) indicate whether significant differences exist between group means
Correlational vs. difference statistics
In MLR we see the world as made of covariation.
Everywhere we look, we see relationships.
In ANOVA we see the world as made of differences.
Everywhere we look we see differences.
In MLR we see the world as made of covariation; in ANOVA we see the world as made of differences. In MLR, everywhere we look, we see patterns and relationships; in ANOVA we view everything as being the same as, or more or less than, other things.
Correlational vs. difference statistics
LR/MLR e.g., What is the relationship between gender and height in humans?
t-test/ANOVA e.g., What is the difference between the heights of human males and females?
Are the differences in a sample generalisable to a population?
Image source: Unknown
Which difference test? (2 groups)
How many groups (i.e., categories of the IV)?
1 group: one-sample t-test
2 groups: are the groups independent or dependent?
Independent groups: parametric DV = independent samples t-test; non-parametric DV = Mann-Whitney U
Dependent groups: parametric DV = paired samples t-test; non-parametric DV = Wilcoxon
More than 2 groups: ANOVA models
Image source: James Neill, Creative Commons Attribution-Share Alike 2.5 Australia, http://creativecommons.org/licenses/by-sa/2.5/au/
"A t-test is used to determine whether a set or sets of scores are from the same population." - Coakes & Steed (1999), p. 61
Parametric vs. non-parametric statistics
Parametric statistics: inferential tests that assume certain characteristics are true of an underlying population, especially the shape of its distribution.
Non-parametric statistics: inferential tests that make few or no assumptions about the population from which observations were drawn (distribution-free tests).
Parametric vs. non-parametric statistics
There is generally at least one non-parametric equivalent test for each type of parametric test.
Non-parametric tests are generally used when assumptions about the underlying population are questionable (e.g., non-normality).
Parametric statistics commonly used for normally distributed interval or ratio dependent variables.
Non-parametric statistics can be used to analyse DVs that are non-normal or are nominal or ordinal.
Non-parametric statistics are less powerful than parametric tests.
Parametric vs. non-parametric statistics
So, when do I use a non-parametric test?
Consider non-parametric tests when (any of the following):
Assumptions, like normality, have been violated.
Small number of observations (N).
DVs have nominal or ordinal levels of measurement.
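The parametric/non-parametric pairing can be shown in a few lines of Python: fit both tests to the same (fabricated, deliberately skewed) data and compare. The group scores below are illustrative only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
# Fabricated, skewed (non-normal) scores for two independent groups
group_a = rng.exponential(scale=1.0, size=30)
group_b = rng.exponential(scale=2.0, size=30)

# Parametric: independent-samples t-test (assumes roughly normal DVs)
t, p_t = stats.ttest_ind(group_a, group_b)

# Non-parametric equivalent: Mann-Whitney U (distribution-free)
u, p_u = stats.mannwhitneyu(group_a, group_b)
```

With small, skewed samples like these, the Mann-Whitney U result is the more trustworthy of the two, which is the decision rule the slide describes.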
Some commonly used parametric & non-parametric tests

Purpose | Parametric | Non-parametric
Compares two independent samples | t-test (independent) | Mann-Whitney U; Wilcoxon rank-sum
Compares two related samples | t-test (paired) | Wilcoxon matched pairs signed-rank
Compares three or more groups | 1-way ANOVA | Kruskal-Wallis
Compares groups classified by two different factors | 2-way ANOVA | Friedman; χ2 test of independence
Image source: James Neill, Creative Commons Attribution-Share Alike 2.5 Australia, http://creativecommons.org/licenses/by-sa/2.5/au/
Adapted from: http://www.tufts.edu/~gdallal/npar.htm
t-tests
t-tests
One-sample t-tests
Independent sample t-tests
Paired sample t-tests
Why a t-test or ANOVA?
A t-test or ANOVA is used to determine whether a sample of scores are from the same population as another sample of scores.
These are inferential tools for examining differences between group means.
Is the difference between two sample means real or due to chance?
t-tests
One-sample: one group of participants, compared with a fixed, pre-existing value (e.g., population norms)
Independent: compares mean scores on the same variable across different populations (groups)
Paired: same participants, with repeated measures
Major assumptions
Normally distributed variables
Homogeneity of variance
In general, t-tests and ANOVAs are robust to violation of assumptions, particularly with large cell sizes, but don't be complacent.
Use of t in t-tests
t reflects the ratio of between-group variance to within-group variance.
Is the t large enough that it is unlikely that the two samples have come from the same population?
Decision: Is t larger than the critical value for t? (see t tables; depends on the critical α and N)
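The look-it-up-in-the-t-tables step can be reproduced directly; a small sketch with an illustrative df and an assumed obtained t of 2.75 (both hypothetical, not from the slides):

```python
from scipy import stats

# Two-tailed critical t for alpha = .05
alpha = 0.05
df = 18                                  # e.g., N1 + N2 - 2 for two groups of 10
t_crit = stats.t.ppf(1 - alpha / 2, df)  # upper critical value (~2.10 for df=18)

# Decision rule: an obtained |t| beyond t_crit is significant at alpha
t_obtained = 2.75
significant = abs(t_obtained) > t_crit
```

This replaces the printed t table: stats.t.ppf returns the same critical values the table rows contain.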
Image source: Unknown
Ye good ol' normal distribution: 68%, 95%, and 99.7% of scores fall within 1, 2, and 3 SDs of the mean.
Image source: Unknown
One-tail vs. two-tail tests
Two-tailed test: rejects the null hypothesis if the obtained t-value is extreme in either direction.
One-tailed test: rejects the null hypothesis if the obtained t-value is extreme in one direction (you choose: too high or too low).
One-tailed tests are more powerful than two-tailed tests (the critical region is concentrated in one tail), but they can only detect differences in one direction.
One sample t-test
Compare one group (a sample) with a fixed, pre-existing value (e.g., population norms)
Do uni students sleep less than the recommended amount?
e.g., Given a sample of N = 190 uni students who sleep M = 7.5 hrs/day (SD = 1.5), does this differ significantly from 8 hrs/day (α = .05)?
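The slide's example can be computed directly from the summary statistics; a sketch of the one-sample t formula in Python (the deck itself runs this in SPSS):

```python
import math
from scipy import stats

# Summary statistics from the slide's example
n, m, sd, mu = 190, 7.5, 1.5, 8.0

# One-sample t: (sample mean - test value) / standard error of the mean
t = (m - mu) / (sd / math.sqrt(n))
df = n - 1

p_two_tailed = 2 * stats.t.sf(abs(t), df)
p_one_tailed = stats.t.cdf(t, df)   # directional: "sleep less than 8 hrs"
```

Here t is about -4.6, well beyond any conventional critical value, so the students' sleep differs significantly from 8 hrs/day at α = .05.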
Also called: One sample t-test
One-sample t-test
Image source: James Neill, Creative Commons Attribution 3.0 Australia, http://creativecommons.org/licenses/by/3.0/au/deed.en
Fabricated data
Independent groups t-test
Compares mean scores on the same variable across different populations (groups)
Do Americans vs. Non-Americans differ in their approval of Barack Obama?
Do males & females differ in the amount of sleep they get?
Assumptions
(Indep. samples t-test)
LOM: IV is ordinal / categorical; DV is interval / ratio
Homogeneity of variance: if variances are unequal (Levene's test), an adjustment is made
Normality: t-tests are robust to modest departures from normality; otherwise consider use of the Mann-Whitney U test
Independence of observations (one participant's score is not dependent on any other participant's score)
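The homogeneity-of-variance check and the "adjustment made" step (Welch's correction) can be sketched in Python; the group data and means below are fabricated for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
# Fabricated sleep hours for two groups, with unequal spread
males = rng.normal(loc=7.0, scale=1.0, size=40)
females = rng.normal(loc=7.4, scale=2.0, size=40)

# Levene's test for homogeneity of variance
w, p_levene = stats.levene(males, females)

# If variances look unequal, drop the equal-variance assumption
# (Welch's adjustment), mirroring SPSS's "equal variances not assumed" row
equal_var = p_levene >= .05
t, p = stats.ttest_ind(males, females, equal_var=equal_var)
```

This mirrors the two rows of SPSS's independent-samples t-test output: Levene's test decides which t-test row to read.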
Adapted from slide 23 of Howell Ch12 Powerpoint:
IV is ordinal / categorical, e.g., gender
DV is interval / ratio, e.g., self-esteem
Homogeneity of variance: if variances are unequal (Levene's test), an adjustment is made
Normality: t-tests are robust to modest departures from normality (often violated without consequences); look at histograms, skewness, & kurtosis; consider use of the Mann-Whitney U test if the departure from normality is severe, particularly if the sample size is small (e.g., < 50)
Independence of observations (one participant's score is not dependent on any other participant's score)
Do males and females differ in the amount of sleep per night?
Image source: James Neill, Creative Commons Attribution 3.0 Australia, http://creativecommons.org/licenses/by/3.0/au/deed.en
Fabricated data
Do males and females differ in memory recall?
Image source: Slide 23 of Howell Ch12 Powerpoint
Independent samples t-test
Adolescents'
Same Sex Relations in
Single Sex vs. Co-Ed Schools
Image source: James Neill, Creative Commons Attribution-Share Alike 2.5 Australia, http://creativecommons.org/licenses/by-sa/2.5/au/
Adolescents'
Opposite Sex Relations in
Single Sex vs. Co-Ed Schools
Image source: James Neill, Creative Commons Attribution-Share Alike 2.5 Australia, http://creativecommons.org/licenses/by-sa/2.5/au/
Independent samples t-test
1-way ANOVA
Comparison between means of 2 independent groups = t-test (e.g., what is the difference in Educational Satisfaction between male and female students?)
Comparison between means of 3+ independent groups = 1-way ANOVA (e.g., what is the difference in Educational Satisfaction between students enrolled in four different faculties?)
Also called related samples t-test or repeated measures t-test
Paired samples t-test
Same participants, with repeated measures
Data is sampled within subjects
Pre- vs. post- treatment ratings
Different factors, e.g., voters' approval ratings of candidates X and Y
Also known as: related samples t-test and repeated measures t-test.
How much do you like the colour red? How much do you like the colour blue? Is there a difference between people's liking of red and blue?
Assumptions
(Paired samples t-test)
LOM: IV: two measures from the same participants (within subjects), i.e., a variable measured on two occasions or two different variables measured on the same occasion
DV: continuous (interval or ratio)
Normal distribution of difference scores (robust to violation with larger samples)
Independence of observations (one participant's score is not dependent on another's score)
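A paired-samples t-test is just a one-sample t-test on the difference scores, which is why the assumptions above focus on the distribution of differences. A sketch with fabricated pre/post data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
# Fabricated pre/post scores for the same 20 participants
pre = rng.normal(loc=50, scale=10, size=20)
post = pre + rng.normal(loc=3, scale=5, size=20)   # small average gain

# Paired-samples t-test
t, p = stats.ttest_rel(post, pre)

# Equivalent: one-sample t-test of the difference scores against 0
t_alt, p_alt = stats.ttest_1samp(post - pre, 0.0)
```

The two calls return identical statistics, which makes the within-subjects logic concrete: only the differences matter, and the normality assumption applies to them.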
Does an intervention have an effect?
There was no significant difference between pretest and posttest scores (t(19) = 1.78, p = .09).
Image source: James Neill, Creative Commons Attribution 3.0 Australia, http://creativecommons.org/licenses/by/3.0/au/deed.en
Data based on Howell (2010), pp. 485-587
Adolescents' Opposite Sex
vs. Same Sex Relations
Image source: James Neill, Creative Commons Attribution 3.0 Australia, http://creativecommons.org/licenses/by/3.0/au/deed.en
Paired samples t-test
1-way repeated measures ANOVA
Comparison between means of 2 within-subject variables = t-test
Comparison between means of 3+ within-subject variables = 1-way repeated measures ANOVA (e.g., what is the difference in Campus, Social, and Education Satisfaction?)
Summary
(Analysing Differences)
Non-parametric and parametric tests can be used for examining differences between the central tendency of two or more variables.
Develop a conceptualisation of when to use each of the parametric tests, from the one-sample t-test through to MANOVA (e.g., a decision chart).
Summary
(Analysing Differences)
t-tests One-sample
Independent-samples
Paired samples
What will be covered in ANOVA II?
1-way ANOVA
1-way repeated measures ANOVA
Factorial ANOVA
Mixed design ANOVA
ANCOVA
MANOVA
Repeated measures MANOVA
References
Allen, P. & Bennett, K. (2008). SPSS for the health and behavioural sciences. South Melbourne, Victoria, Australia: Thomson.
Francis, G. (2007). Introduction to SPSS for Windows: v. 15.0 and 14.0 with Notes for Studentware (5th ed.). Sydney: Pearson Education.
Howell, D. C. (2010). Statistical methods for psychology (7th ed.). Belmont, CA: Wadsworth.
Open Office Impress
This presentation was made using Open Office Impress.
Free and open source software.
http://www.openoffice.org/product/impress.html