8771 Reliability and Validity-6



Reliability and Validity of Research Instruments: An Overview


    Measurement error

    Error variance--the extent of variability in

    test scores that is attributable to error rather

    than a true measure of behavior.

Observed score (actual score obtained) = true score (stable score) + error variance (chance/random error and systematic error)
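A minimal NumPy sketch of this decomposition (all numbers invented): each observed score is a stable true score plus random error, so error variance inflates the spread of the observed scores.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 200
true_score = rng.normal(loc=50, scale=10, size=n)   # stable "true" scores
error = rng.normal(loc=0, scale=5, size=n)          # chance/random error
observed = true_score + error                       # observed = true + error

print("variance of true scores:    ", round(true_score.var(), 1))
print("variance of error:          ", round(error.var(), 1))
print("variance of observed scores:", round(observed.var(), 1))
# Observed variance is roughly true variance + error variance.
```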


    Validity

    The accuracy of the measure in reflecting

    the concept it is supposed to measure.


    Reliability

    Stability and consistency of the measuring

    instrument.

    A measure can be reliable without being

    valid, but it cannot be valid without being

    reliable.


Instrument validity

Instrument validity is defined as the extent to which an instrument measures
what it is supposed to measure. Validity is established by correlating the
scores with those of a similar instrument. Expert review also establishes
validity.


    Validity

    The extent to which, and how well, a

    measure measures a concept.

    face

    content

    construct

    concurrent

    predictive

    criterion-related


    Face validity

Just on its face, the instrument appears to be a good measure of the concept.
It is intuitive, arrived at through inspection.

e.g. Concept = pain level; Measure = a verbal rating scale ("rate your pain
from 1 to 10").

Face validity is sometimes considered a subtype of content validity.

Question: is there any time when face validity is not desirable?


    Content validity

Content validity is the extent to which a measuring instrument provides
adequate coverage of the topic under study.

If the instrument contains a representative sample of the universe of content,
content validity is good.

It can also be determined by a panel of persons who judge how well the
measuring instrument meets the standards, but the judgment itself has no
numerical expression.

A CVI (content validity index) of .80 or more is desirable.
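One common way to compute an item-level CVI, sketched below with invented panel ratings, is the proportion of experts who rate the item as relevant (e.g. 3 or 4 on a 4-point relevance scale); the item names and ratings are hypothetical.

```python
# Hypothetical relevance ratings from a 5-expert panel (1-4 scale; 3 or 4 = relevant)
ratings = {
    "item_1": [4, 4, 3, 4, 3],
    "item_2": [4, 2, 3, 4, 2],
}

for item, scores in ratings.items():
    relevant = sum(1 for s in scores if s >= 3)
    cvi = relevant / len(scores)   # proportion of experts rating the item relevant
    print(item, "CVI =", cvi, "-> acceptable" if cvi >= 0.80 else "-> revise")
```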


Construct validity

Sensitivity of the instrument to pick up minor variations in the concept
being measured.

Can an instrument that measures anxiety pick up different levels of anxiety,
or only its presence or absence? Measure two groups known to differ on the
construct.

Ways of arriving at construct validity:

Hypothesis-testing method

Convergent and divergent validity

Multitrait-multimethod matrix

Contrasted-groups approach (see the sketch below)

Factor analysis approach
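As a concrete illustration of the contrasted-groups approach (all scores invented), the instrument should separate two groups already known to differ on the construct; an independent-samples t-test is one way to check this.

```python
import numpy as np
from scipy import stats

# Hypothetical anxiety-scale scores for two groups known to differ on anxiety
pre_surgery_patients = np.array([38, 42, 45, 40, 44, 39, 41, 43])   # expected high anxiety
routine_checkup      = np.array([22, 25, 28, 24, 27, 23, 26, 21])   # expected low anxiety

t, p = stats.ttest_ind(pre_surgery_patients, routine_checkup)
print(f"t = {t:.2f}, p = {p:.4f}")
# A large, significant difference in the expected direction supports construct validity.
```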


    Concurrent validity

Correspondence of one measure of a phenomenon with another measure of the
same construct, administered at the same time.

Two tools are used to measure the same concept and a correlational analysis
is performed. The tool that has already been demonstrated to be valid is the
gold standard with which the other measure must correlate.
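A minimal sketch of that correlational analysis (scores invented): administer the new tool and the gold-standard tool to the same subjects on the same occasion and correlate the scores.

```python
import numpy as np
from scipy import stats

# Hypothetical scores from the same subjects at the same time
gold_standard = np.array([10, 14, 18, 22, 25, 30, 33, 37])   # established, validated tool
new_tool      = np.array([12, 15, 17, 24, 26, 29, 35, 36])   # instrument being validated

r, p = stats.pearsonr(gold_standard, new_tool)
print(f"r = {r:.2f}, p = {p:.4f}")   # a strong correlation supports concurrent validity
```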


Concurrent validity: Does your attitude survey give scores that agree with
other things that go along with attitude? For example, if someone scores low,
indicating that they have a negative attitude, are low attitude scores
concurrent with (do they happen at the same time as) negative remarks from
that person? High blood pressure? If you administer your attitude survey to
someone who is cheerful and smiling a lot, but they score low, indicating a
negative attitude, your survey may not have concurrent validity.


    Predictive validity

The ability of one measure to predict another, future measure of the same
concept.

If IQ predicts SAT, and SAT predicts QPA, then shouldn't IQ predict QPA?
(We could skip SATs for admission decisions.)

If scores on a parenthood readiness scale indicate levels of integrity, trust,
intimacy, and identity, couldn't this test be used to predict successful
achievement of the developmental tasks of adulthood?

The researcher is usually looking for a more efficient way to measure a
concept.
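A small sketch of checking predictive validity (all numbers invented): regress the later criterion on the earlier measure and see how well it predicts.

```python
import numpy as np
from scipy import stats

# Hypothetical data: parenthood-readiness score now, developmental-task score two years later
readiness_now   = np.array([55, 60, 62, 68, 70, 75, 80, 85])
tasks_two_years = np.array([50, 58, 61, 65, 72, 74, 79, 88])

result = stats.linregress(readiness_now, tasks_two_years)
print(f"slope = {result.slope:.2f}, r = {result.rvalue:.2f}, r^2 = {result.rvalue**2:.2f}")
# A high r^2 means the earlier measure predicts the later one well (predictive validity).
```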


Predictive validity: Can your attitude survey predict? For example, if
someone scores high, indicating that they have a positive attitude, can high
attitude scores also be predictive of job promotion?


Criterion-related validity

The ability of a measure to measure a criterion (usually set by the
researcher).

If the criterion set for professionalism in nursing is belonging to nursing
organizations and reading nursing journals, then couldn't we just count
memberships and subscriptions to come up with a professionalism score?

Can you think of a simple criterion to measure leadership?

Concurrent and predictive validity are often listed as forms of
criterion-related validity.


Reliability

Reliability has to do with the accuracy and precision of a measurement
procedure: homogeneity, equivalence, and stability of a measure over time and
subjects. The instrument yields the same results over repeated measures and
subjects.

Reliability is expressed as a correlation coefficient (degree of agreement
between times and subjects) ranging from 0 to +1. The reliability coefficient
expresses the relationship between error variance, true variance, and the
observed score.

The higher the reliability coefficient, the lower the error variance; hence,
the higher the coefficient, the more reliable the tool. A coefficient of .70
or higher is generally acceptable.
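In classical test theory that relationship is usually written as reliability = true variance / observed variance, which equals 1 minus the share of error variance. A tiny sketch with invented variance components:

```python
# Invented variance components for illustration
true_variance = 80.0
error_variance = 20.0
observed_variance = true_variance + error_variance

reliability = true_variance / observed_variance   # = 1 - error_variance / observed_variance
print(f"reliability coefficient = {reliability:.2f}")   # 0.80, above the .70 rule of thumb
# Shrinking the error variance (e.g. to 10) pushes the coefficient toward 1.
```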


Reliability can be improved in the following two ways:

a) By standardizing the conditions under which the measurement takes place,
i.e. ensuring that external sources of variation such as boredom and fatigue
are minimized to the extent possible. This improves the stability aspect.

b) By carefully designing the directions for measurement so that they do not
vary from group to group, by using trained and motivated persons to conduct
the research, and by broadening the sample of items used. This improves the
equivalence aspect.


    Stability

    The same results are obtained over repeated

    administration of the instrument.

Test-retest reliability

Parallel, equivalent, or alternate forms
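A minimal sketch of test-retest reliability (scores invented): give the same instrument to the same subjects twice and correlate the two administrations.

```python
import numpy as np
from scipy import stats

# Hypothetical scores from the same subjects at time 1 and two weeks later
time_1 = np.array([31, 25, 40, 28, 36, 33, 22, 38])
time_2 = np.array([30, 27, 41, 26, 35, 34, 24, 37])

r, _ = stats.pearsonr(time_1, time_2)
print(f"test-retest reliability = {r:.2f}")   # close to 1 indicates a stable instrument
```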