
Multivariate Regression

11/19/2013

Readings

• Chapter 8 (pp 187-206)

• Chapter 9 Dummy Variables and Interaction Effects (Pollock Workbook)

OPPORTUNITIES TO DISCUSS COURSE CONTENT

Office Hours For the Week

• When:
– Tuesday 10-12
– Thursday 8-12
– And by appointment

Course Learning Objectives

1. Students will be able to interpret and explain empirical data.

2. Students will achieve competency in conducting statistical data analysis using the SPSS software program.

BIVARIATE REGRESSION ANALYSIS

Ratio and Intervals!

Bivariate Linear Regression

• Bivariate linear regression is an important statistical technique in the Social Sciences. It allows us to measure the effects of an independent variable on a dependent variable.

• It regresses all the values onto a line that best describes the relationship.

Rules for Regression

• You have a ratio/interval dependent variable that takes on at least 11 values

• Your data are not nominal

• You have 30 or more cases (N>30)

• You have a linear relationship. Regression will not work with curvilinear or exponential relationships.

The Regression Equation!

Y = a + bX

• Y- the dependent variable
• a- the constant, where the line crosses the y-axis
• b- the slope and direction of the line
• X- the independent variable

Weight and MPG

• What is the Constant?
• A positive or negative relationship?
• Is it a significant relationship, and why?
• What is the predicted MPG of a car that weighs 2000 lbs?
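To see how the prediction works, take some hypothetical output values (not the actual SPSS results): if the constant were a = 45 and the slope were b = -0.007 MPG per pound, the predicted MPG of a 2000 lb car would be 45 + (-0.007 × 2000) = 45 - 14 = 31 MPG.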

MULTIPLE REGRESSION

What we can do with it

• Test the significance, strength and direction of more than one independent variable on the dependent variable, while controlling for the other independent variables.

• We can compare the strength of each independent variable against each other

• We can examine an entire model at one time!

The Model

Y = a + b1X1 + b2X2 + ...

• Y is the dependent variable
• a is the constant
• b1X1- the first beta coefficient and the first independent variable
• b2X2- the second beta coefficient and the second independent variable

This allows us to model additive relationships

Computing a Multiple regression

• You put more than one independent variable where you say "independents" (a code sketch of this model follows below)

• D.V.- women09 (% of women in parliament)
– IV1- womenyear2 (date of enfranchisement)
– IV2- pr-sys (PR system)
– IV3- pmat12_3 (postmaterialism)
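The course runs this through the SPSS menus, but here is a minimal sketch of the same model in Python with statsmodels; the file name and the underscore spelling pr_sys are assumptions for illustration:

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical file standing in for the course's SPSS dataset
    df = pd.read_csv("world_data.csv")

    # women09 regressed on the three independent variables;
    # pr_sys stands in for the dataset's pr-sys variable
    model = smf.ols("women09 ~ womenyear2 + pr_sys + pmat12_3", data=df).fit()
    print(model.summary())  # Model Summary, ANOVA F-test, and the coefficients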

Regression Outputs

• These have 3 parts:
1. The Model Summary
2. ANOVA
3. The Variables/Model

THINGS THAT BEGIN WITH "R"

Part I

With So Many, How do we know?

• There are many R's out there:
– lower case "r" for correlation
– upper case "R" for regression

Correlation (small r)

• r- the Pearson product-moment correlation coefficient

• r2- the squared Pearson correlation coefficient.

The R-Square (large R)

• This is a measure of association for the entire model

• This is a PRE measure that tells us what percent of the total variation in the dependent variable is explained by our model.

• The higher the number, the better our model predicts.

• We can increase the R-Square of our model by adding more variables, even insignificant ones!

Adjusted R-Square

• this "adjusts" for the addition of independent variables. In equations with more than 1 independent variable, it will always be smaller than the R Square.

• This is the preferred measure and a PRE model
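For reference, the standard formula behind the adjustment (not shown on the slide): with n cases and k independent variables, Adjusted R2 = 1 - (1 - R2)(n - 1)/(n - k - 1). The penalty grows with k, which is why adding weak variables can lower the Adjusted R-Square even as the raw R-Square creeps up.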

What the R's look like

[SPSS Model Summary output, showing the R Square and the Adj. R-Square (the preferred measure)]

THE ANALYSIS OF VARIANCE (ANOVA)

Part II

ANOVA

• A way of testing the null hypothesis for the entire model- we look at the F-Score

• H0: there is no relationship between our variables and the dependent variable

• HA: there is at least 1 significant variable in the model

What The F-Score tells us

• It is like a chi-square for regression. The F-score tells us if we have a significant regression model

• If the F-Score is not significant, we accept the null hypothesis (no relationship).

• A significant F-Score tells us that at least one of our variables is significant.

• It is a way of examining the entire regression.

The F-Score

• We look at the Sig value and use the p<.05 standard

• In the example model (SPSS output not reproduced here), our p value is .001
– We reject the null hypothesis
– At least one variable is significant
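For reference, the standard arithmetic behind the F-Score (not shown on the slide): F is the ratio of the Mean Square for Regression to the Mean Square for Residual in the ANOVA table; equivalently, with k independent variables and n cases, F = (R2/k) / ((1 - R2)/(n - k - 1)).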

THE MODEL

Part III

The Model

• What it tells us:
– Variable relationships and direction
– Variable significance
– Variable Strength

Old Friends

Beta Values

• Measure the change in the dependent variable for each one-unit change in the independent variable

• Show the direction of the relationship

T-Tests

• Test the significance of each independent variable on the dependent variable

• Accept or reject the null for that variable

Standardized Beta Coefficients

• They show us the variables which have the greatest influence.

• These are measured in absolute value

• The larger the standardized beta, the more influence it has on the dependent variable.
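For reference, the standard formula (not shown on the slide): the standardized beta rescales the raw coefficient by the standard deviations of the two variables, Beta = b(sx/sy), so every variable is expressed in standard-deviation units and the coefficients can be compared directly.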

Looking at our Model

[SPSS Coefficients output, showing the Beta values and the T-scores with their significance]

TRYING IT OUT

Turning Texas Blue (through Turnout)

• Data

• Dependent Variable – Turnout 2012

• Independent Variables:
– COLGRAD- College %
– HISPPER- Hispanic percent
– BLACKPER- African-American percent

Another One

• D.V.- Palin_therm-post (Feeling thermometer for Palin 0-100)

• IV's:
– enviro_jobs (Environment vs. jobs tradeoff) 0=envir, 1=middle, 2=jobs
– educ_r- education in years
– Gunown- do you own a gun (1=yes, 5=no)
– relig_bible_word (Is Bible actual word of God?) 1=yes, 0=No

Another one from the states

• Gay Rights involves many concepts. The Lax-Phillips index uses content validity to address this issue at the state level. It examines support for the following issues:
– Adoption
– Hate Crimes legislation
– Health Benefits
– Housing Discrimination
– Job Discrimination
– Marriage Laws
– Sodomy Laws
– Civil Unions

• It then averages these to get a statewide level of support.

State Example

• Dependent Variable- gay_support (higher is more supportive on Lax-Phillips)

• Independent Variables:
– relig_import (% of people in state that say religion provides a great deal of guidance)
– south (1=south, 0=non-south)
– abortlaw (restrictions on abortion)

Tautology

• It is tempting to use independent variables that are actually components of the dependent variable.

• How you will notice this:
– If the variables seem to be measures of each other (human development vs. education, or female literacy vs. literacy rate), they probably are
– High Adj. R-Squares (above .900)

Multicollinearity

• Your independent variables should not only be independent of the d.v. (non-tautological), they should also be independent of each other!

• Multicollinearity means picking independent variables that are very closely related, or are actually part of the same measure. What can happen here is that these variables negate each other's influence on the dependent variable.

Symptoms of Multicollinearity

• The multiple regression equation is statistically significant (big R values, even a significant ANOVA), but none of the t-ratios are statistically significant

• The addition of the collinear independent variable radically changes the values of the standardized beta coefficients (they go from positive to negative, or weak to strong), without a corresponding change in the Adj. R-Square.

• Variables that you would swear on a stack of Bibles should be related are not

Solving Tautology and Multicollinearity

• Solving tautology- drop the independent variable

• What to do about multicollinearity:
– Run bivariate correlations on each pair of your independent variables
– If the r-square value is >.60, drop one of the variables or combine them into a single measure (a code sketch of this check follows below)
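The course would run these correlations through SPSS; here is a minimal sketch of the same check in Python with pandas. The file name and column names reuse the state example above and are assumptions for illustration:

    import pandas as pd

    # Hypothetical file standing in for the states dataset
    df = pd.read_csv("states.csv")
    ivs = df[["relig_import", "south", "abortlaw"]]

    r = ivs.corr()          # pairwise Pearson r between the independent variables
    r_square = r ** 2       # square it to apply the >.60 rule of thumb
    print(r_square)
    print(r_square > 0.60)  # True flags a pair to drop or combine
                            # (ignore the diagonal, where each variable matches itself)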

DATA COLLECTION

Collecting Primary Data

• Direct Observation

• Document Analysis

• Interview Data

DOCUMENT ANALYSIS

Document Analysis (The Written Record)

• What is it

• When to use it

Types of Document Analysis

• The Episodic Record

• The Running Record

Limitations and Advantages
