Research Methodology, Lecture No. 11: Goodness of Measures

Page 1

Research Methodology

Lecture No. 11

(Goodness of Measures)

Page 2

Recap

• Measurement is the process of assigning numbers or labels to objects, persons, states of nature, or events.

• Scales are sets of symbols or numbers assigned by rule to individuals, their behaviors, or the attributes associated with them.

Page 4

• Using these scales, we complete the development of our instrument.

• It remains to be seen whether these instruments accurately and appropriately measure the concept.

Page 5

Sources of Measurement Differences

Why do 'scores' vary? Among the reasons are legitimate differences as well as differences due to error (systematic or random):

1. There is a true difference in what is being measured.

2. There are differences in the stable characteristics of individual respondents. For example, on satisfaction measures there are systematic differences in response based on the age of the respondent.

Page 6

3. Differences due to short-term personal factors: mood swings, fatigue, time constraints, or other transitory factors. Example: in a telephone survey of the same person (tired versus refreshed), these factors may cause differences in measurement.

4. Differences due to situational factors: calling when someone is distracted by something versus when they can give full attention.

Page 7

5. Differences resulting from variations in administering the survey: voice inflection, non-verbal communication, etc.

6. Differences due to the sampling of items included in the questionnaire.

Page 8

7. Differences due to a lack of clarity in the measurement instrument (measurement instrument error). Example: unclear or ambiguous questions.

8. Differences due to mechanical or instrument factors: blurred questionnaires, bad phone connections.

Page 9

Goodness of Measure

• Once we have operationalized the concept and assigned scales, we want to make sure that the instruments we have developed measure the concept accurately and appropriately.

• Measure what is supposed to be measured

• Measure as well as possible

Page 10

• Validity: checks how well a developed instrument measures the concept it is intended to measure.

• Reliability: checks how consistently an instrument measures the concept.

Page 12

Ways to Check for Reliability

How do we check the reliability of measurement instruments, that is, the stability of measures and the internal consistency of measures?

Two methods are discussed to check stability.

1. Stability

(a) Test-Retest Reliability
Use the same instrument and administer the test to the same participants shortly after the first time, taking the measurement under conditions as close to the original as possible.

Page 13

If there are few differences in scores between the two administrations, the instrument is stable; it has shown test-retest reliability.
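
As a rough sketch (not from the slides), test-retest reliability is commonly summarized as the Pearson correlation between the two administrations; the respondent scores below are invented purely for illustration.

```python
import numpy as np

# Hypothetical scores of 8 respondents on the same instrument,
# administered twice under conditions as close as possible.
time_1 = np.array([12, 15, 11, 18, 14, 16, 13, 17])
time_2 = np.array([13, 14, 11, 17, 15, 16, 12, 18])

# Test-retest reliability is usually reported as the Pearson
# correlation between the two sets of scores.
r = np.corrcoef(time_1, time_2)[0, 1]
print(f"Test-retest correlation: {r:.2f}")  # values close to 1 indicate a stable measure
```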

Problems with this approach:

• It is difficult to get cooperation a second time.

• Respondents may have learned from the first test, and thus their responses are altered.

• Other factors (environment, etc.) may be present that alter the results.

Page 14

(b) Equivalent Form Reliability
This approach attempts to overcome some of the problems associated with the test-retest measurement of reliability.

Two questionnaires, designed to measure the same thing, are administered to the same group on two separate occasions (recommended interval is two weeks).

Page 15

If the scores obtained from the two forms are highly correlated, then the instruments have equivalent form reliability.

It is tough to create two distinct forms that are truly equivalent.

Like test-retest, it is an impractical method and is not used often in applied research.

Page 16

2. Internal Consistency Reliability

This is a test of the consistency of respondents' answers to all the items in a measure. The items should 'hang together' as a set.

That is, if the items are independent measures of the same concept, they will be correlated with one another.
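
The slides do not name a statistic, but internal consistency is most often reported as Cronbach's alpha, which rises as the items correlate more strongly with one another. A minimal sketch with made-up responses:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix of scores."""
    k = items.shape[1]                          # number of items in the measure
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point responses of 6 people to a 4-item measure of
# one concept; if the items 'hang together', alpha will be high.
responses = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 3, 4],
    [4, 4, 5, 4],
    [1, 2, 1, 2],
])
print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")
```

A common rule of thumb treats values above roughly 0.7 as acceptable internal consistency.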

Page 17

Developing questions on the concept 'Enriched Job'

Page 18

Validity

• Definition: whether what was intended to be measured was actually measured.

Page 19

Face Validity

• The weakest form of validity.

• The researcher simply looks at the measurement instrument and concludes that it will measure what is intended.

• Thus it is by definition subjective.

Page 20

Content Validity

The degree to which the instrument items represent the universe of the concepts under study.

In English: did the measurement instrument cover all aspects of the topic at hand?

Page 21

Criterion-Related Validity

• The degree to which the measurement instrument can predict a variable known as the criterion variable.

Page 22

• Two subcategories of criterion-related validity:

• Predictive Validity

– The ability of the test or measure to differentiate among individuals with reference to a future criterion.

– E.g., an instrument that is supposed to measure an individual's aptitude can later be compared with that individual's future job performance. Those whose actual performance is good should also have scored high on the aptitude test, and vice versa.

Page 23

• Concurrent Validity

– Established when the scale discriminates between individuals who are known to be different; that is, they should score differently on the test.

– E.g., individuals who are content to stay on welfare and individuals who prefer to work should score differently on a scale/instrument that measures the work ethic.
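
As a minimal sketch of this known-groups check (the group labels and scores below are invented), the scale supports concurrent validity if the two groups' scores clearly separate:

```python
import numpy as np

# Hypothetical work-ethic scores for two groups that should differ.
welfare_group = np.array([21, 24, 19, 22, 20, 23])   # content to stay on welfare
job_group     = np.array([31, 28, 33, 30, 29, 32])   # prefer to work

# Concurrent validity is supported if the scale separates the groups,
# e.g. a large standardized mean difference (Cohen's d).
diff = job_group.mean() - welfare_group.mean()
pooled_sd = np.sqrt((welfare_group.var(ddof=1) + job_group.var(ddof=1)) / 2)
print(f"Mean difference: {diff:.1f}, Cohen's d: {diff / pooled_sd:.1f}")
```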

Page 24

Construct Validity

• Does the measurement conform to underlying theoretical expectations? If so, the measure has construct validity.

• I.e., if we are measuring consumer attitudes about product purchases, does the measure adhere to the constructs of consumer behavior theory?

• This is the territory of academic researchers.

Page 25

• Two approaches are used to assess construct validity:

• Convergent Validity

– A high degree of correlation between two different measures intended to measure the same construct.

• Discriminant Validity

– A low correlation among variables that are assumed to be different.

Page 26

• Construct validity can be checked through correlation analysis, factor analysis, the multitrait-multimethod correlation matrix, etc.
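
As a simplified, correlation-only stand-in for a full multitrait-multimethod analysis (the constructs and data below are simulated), convergent validity shows up as a high correlation between two measures of the same construct, and discriminant validity as a low correlation with an unrelated construct:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated scores: two different measures of the same construct
# (job satisfaction) and one measure of an unrelated construct (age).
satisfaction_a = rng.normal(50, 10, 100)
satisfaction_b = satisfaction_a + rng.normal(0, 4, 100)   # tracks the same construct
age = rng.normal(35, 8, 100)                              # different construct

# Convergent: the two satisfaction measures should correlate highly.
# Discriminant: each should correlate weakly with age.
print("convergent  r:", round(np.corrcoef(satisfaction_a, satisfaction_b)[0, 1], 2))
print("discriminant r:", round(np.corrcoef(satisfaction_a, age)[0, 1], 2))
```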

Page 27

• Reflective vs. Formative Scales

• In some multi-item measures, the items tap different dimensions of a concept and do not all hang together.

• Such is the case with the Job Description Index, which measures job satisfaction along five different dimensions, i.e., regular promotions, a fairly good chance of promotion, adequate income, being highly paid, and good opportunity for accomplishment.

Page 28

• In this case, items of the dimensions 'adequate income' and 'highly paid' are expected to be correlated, but items of the dimensions 'opportunity for advancement' and 'highly paid' might not be correlated.

• In this measure, not all the items would be related to each other, as its dimensions address different aspects of job satisfaction.

• Such a measure/scale is termed a formative scale.

Page 29

• In other cases, the dimensions and items of a measure do correlate.

• In this kind of measure/scale, the different dimensions share a common basis (a common interest).

• An example is the 'Attitude towards the Offer' scale.

• Since the items are all focused on the price of an item, all the items are related; hence this scale is termed a reflective scale.

Page 30

Recap
