Page 1:

Dr Pamela Campanelli, Survey Methods Consultant
Chartered Statistician, Chartered Scientist

The Questionnaire Design Pitfalls of Multiple Modes

Page 2:

Acknowledgements

Other main members of the UK “Mixed Modes and Measurement Error” grant team:
• Gerry Nicolaas, Ipsos MORI
• Peter Lynn, University of Essex
• Annette Jäckle, University of Essex
• Steven Hope, University College London

Grant funding from: UK Economic and Social Research Council (Award RES-175-25-0007)

Page 3:

Larger Project Looked for Evidence of Mode Differences by
• Question content
• Question format
• Type of task
• Characteristics of the task
• Implementation of the task
• Made recommendations

Today: a few highlights

Page 4:

Sensitive Questions

Mixed Mode Context
• Very well known: sensitive questions are prone to social desirability effects in interviewer modes (see Tourangeau and Yan, 2007; Kreuter, Presser and Tourangeau, 2008)
• But not all questions (Fowler, Roman, and Di, 1998)
• Differences by time frame (Fowler, Roman, and Di, 1998)

Modes to Use: SA (self-administered)

Recommendations
• If the mixed mode design includes:
  • F2F interviews – ask sensitive questions in a paper SA form or use CASI
  • TEL interviews – pre-test sensitive questions across the modes that will be used to see if there are differences

Page 5:

Non-Sensitive: Factual Versus Subjective (1)

Mixed Mode Context

• Subjective questions more prone to mode effects than factual questions (see Lozar Manfreda and Vehovar, 2002; Schonlau et al, 2003)

• But factual questions also susceptible (Campanelli, 2010)

• Subjective scalar questions can be prone to TEL positivity bias

Page 6:

TEL (and F2F) Positivity Bias

Dillman et al (2009) – aural versus visual effect
• TEL Rs gave more extreme positive answers

Ye et al (2011) – TEL Rs gave more extreme positive answers
• But found that F2F was like TEL
• Concluded this was caused by a MUM effect

Hope et al (2011) – TEL Rs gave more extreme positive answers
• But no trace of this in F2F (with or without a showcard)

Thus, the actual cause of the TEL positivity bias is still unclear

Page 7:

Non-Sensitive: Factual Versus Subjective (2)

Modes to Use: F2F, TEL?, SA

Recommendations

Factual questions
• Use Dillman’s uni-mode principles and test to see if there are differences across modes

Subjective scalar questions
• Avoid TEL, if possible, due to TEL positivity bias
• Test F2F to see if positivity bias is present

Page 8:

Inherently Difficult Questions (1)

General Questionnaire Design Context

Inherent difficulty: Question is difficult due to conceptual, comprehension and/or recall issues

• Survey satisficing should be greater for inherently difficult questions (Krosnick, 1991)
• But this is not true for all inherently difficult questions (Hunt et al, 1982; Sangster and Fox, 2000; Nicolaas et al, 2011)

Page 9:

Inherently Difficult Questions (2)

EXAMPLE (Nicolaas et al, 2011):

N56y. What are the things that you like about your neighbourhood? Do you like your neighbourhood because of its community spirit?
Yes ....... 1
No ........ 2

N57y. Do you like your neighbourhood because it feels safe?
Yes ....... 1
No ........ 2

Etc.

Page 10:

Inherently Difficult Questions (3)

Modes to Use: F2F, TEL?, SA?

Recommendations

General Questionnaire Design Context
• In general, before use, test inherently difficult questions to see how feasible they are for Rs
• (Testing can be done with cognitive interviewing or Belson’s (1981) respondent debriefing method)

Mixed Modes Context
• In a mixed mode design, pre-test questions with inherent difficulty across the modes that will be used to see if there are differences

Page 11:

Mark All That Apply vs. Yes/No for Each (1)

Mark all that apply:
This card shows a number of different ways for reducing poverty. In your opinion, which of the following would be effective in reducing poverty? MARK ALL THAT APPLY.
Increasing pensions 1
Investing in education for children 2
Improving access to childcare 3
Redistribution of wealth 4
Increasing trade union rights 5
Reducing discrimination 6
Increasing income support 7
Investing in job creation 8
None of these 9

Yes/No for each:
Next are a number of questions about different ways for reducing poverty. In your opinion, which of the following would be effective?
Would increasing pensions reduce poverty? Yes 1 / No 2
Would investing in education for children reduce poverty? Yes 1 / No 2
Etc.

Page 12:

Mark All That Apply vs. Yes/No for Each (2)

General Questionnaire Design Context

‘Mark all that apply’ is problematic

• Sudman and Bradburn (1982)

• Rasinski et al (1994), Smyth et al (2006) and Thomas and Klein (2006)

• Thomas and Klein (2006)

• Smyth et al (2006)

• Nicolaas et al (2011)

Page 13:

Mark All That Apply vs. Yes/No for Each (3)

Mixed Mode Context

• Smyth et al (2008) - student sample

• Nicolaas et al (2011) - probability sample of the adult population

• More research needed

Page 14:

Mark All That Apply vs. Yes/No for Each (4)

Mark all that apply
Modes to Use: F2F?, SA?

Recommendations
• The ‘mark all that apply’ format is prone to lower reporting of items, quicker processing time and primacy effects. Therefore it is probably best avoided.
• However, it may be less likely to show mode effects in a mixed mode design (F2F with showcard versus SA).

Yes/No for each
Modes to Use: F2F, TEL, SA

Recommendations
• The ‘Yes/No for each’ format is strongly supported as superior to ‘mark all that apply’ by Smyth et al (2006, 2008). But:
• It can add to the time taken to complete a questionnaire
• Long lists of items should be avoided to reduce potential R burden
• The results from Nicolaas et al (2011) suggest that the format should be tested across modes before use
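As a purely hypothetical illustration of the analysis side of this comparison (Python with pandas; the format labels, item names and data below are invented for the sketch), endorsement rates per item can be tabulated by format to check the pattern Smyth et al (2006) describe:

```python
import pandas as pd

# Invented split-ballot data: one row per respondent. In the check-all
# format an unmarked item is ambiguous (skipped or "no"); in the
# forced-choice format every item carries an explicit yes (1) / no (0).
df = pd.DataFrame({
    "format":    ["check_all", "check_all", "forced_choice", "forced_choice"],
    "pensions":  [1, 0, 1, 0],
    "education": [1, 0, 1, 1],
    "childcare": [0, 0, 1, 0],
})

items = ["pensions", "education", "childcare"]

# Endorsement rate per item within each format; the literature suggests
# check-all rates tend to sit below the forced-choice "yes" rates.
print(df.groupby("format")[items].mean())
```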

Page 15:

Ranking versus Rating (1)

Ranking:
What would you consider most important in improving the quality of your neighbourhood? Please rank the following 7 items from 1 (meaning most important) to 7 (meaning least important).
Less traffic □
Less crime □
More / better shops □
Better schools □
More / better facilities for leisure activities □
Better transport links □
More parking spaces □

Battery of rating questions:
Next are a number of questions about improving your neighbourhood. How important would less traffic be for improving the quality of your neighbourhood?
Very important 1
Moderately important 2
Somewhat important 3
Or not important at all? 4
Etc.

Page 16:

Ranking versus Rating (2)

General Questionnaire Design Context

Ranking
• Is difficult (Fowler, 1995)
• Shows primacy effects (see Stern, Dillman & Smyth, 2007)
• Better quality (see Alwin and Krosnick, 1985; Krosnick, 1999; Krosnick, 2000)

Page 17:

Ranking versus Rating (3)

Mixed Modes Context
• Rating is more susceptible to non-differentiation in Web than in TEL (Fricker et al, 2005)
• Similarly, rating is sometimes more susceptible to non-differentiation in Web or TEL than in F2F (Hope et al, 2011)
• Ranking is more susceptible to non-differentiation in Web than in F2F (TEL not tested) (Hope et al, 2011)

Page 18:

Ranking versus Rating (4)

Ranking
Modes to Use: F2F, NOT TEL, SA?

Recommendations
Avoid the use of ranking in mixed mode studies
• Ranking could be considered for F2F surveys if the list is short
• Ranking is not feasible for TEL surveys (unless 4 categories or fewer)
• Ranking is often problematic in SA modes
• Ranking with programme controls in Web may irritate or confuse some Rs

Rating
Modes to Use: F2F, TEL?, SA?

Recommendations
• Avoid long sequences of questions using the same rating scale in mixed mode designs that include Web and possibly TEL
• Could try a rating task followed by ranking of the duplicates (except in postal surveys, where the skip patterns would be too difficult)

Page 19:

Agree/Disagree Questions

EXAMPLE:
This neighbourhood is not a bad place to live.
Strongly agree 1
Agree 2
Neither agree nor disagree 3
Disagree 4
Or strongly disagree? 5

General Questionnaire Design Context
• Agree/disagree questions are a problematic format in all modes
• They create a cognitively complex task
• They are susceptible to acquiescence bias
• For additional problems see Fowler (1995), Converse and Presser (1986), Saris et al (2010) and the recent Holbrook AAPOR Webinar

Mixed Modes Context
• Differences across modes were found, with more acquiescence bias in the interviewer modes and, curiously, more middle category selection in SA (Hope et al, 2011)

Modes to Use: should not be used in any mode

Recommendations
• Avoid agree/disagree scales and use alternative formats, such as questions with item-specific (IS) response options

Page 20:

Use of Middle Category (1)

And how satisfied or dissatisfied are you with street cleaning?
Very satisfied 1
Moderately satisfied 2
Slightly satisfied 3
Neither satisfied nor dissatisfied 4
Slightly dissatisfied 5
Moderately dissatisfied 6
Very dissatisfied 7

Page 21:

Use of Middle Category (2)

General Questionnaire Design Context
• Kalton et al (1980)

• Krosnick (1991) and Krosnick and Fabrigar (1997)

• Schuman and Presser (1981)

• Krosnick and Presser (2010)

• Krosnick and Fabrigar (1997)

• O’Muircheartaigh, Krosnick and Helic (1999)

• Hope et al (2011)

Page 22:

Use of Middle Category (3)

Mixed Modes Context

• More use of the middle category in visual (as opposed to aural) mode (Tarnai and Dillman, 1992)

• More selection of middle categories on end-labelled scales than fully labelled scales, but less so for TEL (Hope et al 2011)

• More use of the middle category in Web as opposed to F2F or TEL (Hope et al 2011)

 

Page 23:

Use of Middle Category (4)

Modes to Use: F2F, TEL?, SA?

Recommendations
• Probably best not to use middle categories in a mixed modes study that includes SA
• If the mixed mode design includes TEL interviews, be cautious of the use of end-labelled scales

Page 24:

Overall Typology of Questions

Page 25:

A classification of question characteristics relevant to measurement error

Question content
• Topic: behaviour, other factual, attitude, satisfaction, other subjective
• Sensitivity
• Inherent difficulty: conceptual, comprehension, recall

Question format – type of task
• Open: number, date, short textual/verbal, unconstrained textual/verbal
• Closed – ratio/interval: visual analogue scale
• Closed – ordinal: agree/disagree, rating-unipolar, rating-bipolar, numeric bands, battery of rating scales
• Closed – nominal: yes/no, mark all, ranking

Characteristics of the task
• Number of categories
• Middle categories
• Full/end labels
• Branching

Implementation of question
• Use of instructions, probes, clarification, etc.
• Edit checks
• DK/refused explicit or implicit
• Formatting of response boxes
• Labelling of response boxes
• Size of answer box/text field
• Delineation of answer space
• Formatting of response lists
• Showcards
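If the typology is used operationally, for example to tag questions in a question bank, it maps naturally onto a small data structure. A hypothetical sketch in Python (all field names and the example record are invented, not part of the original typology):

```python
# Invented question-metadata records following the typology above: one dict
# per question, tagging content, format/task, task characteristics and
# implementation details, so questions at risk of mode effects can be filtered.
questions = [
    {
        "name": "satisfaction_street_cleaning",
        "content": {"topic": "satisfaction", "sensitive": False, "difficulty": None},
        "format": {"open_closed": "closed", "scale": "ordinal", "task": "rating-bipolar"},
        "task_characteristics": {"n_categories": 7, "middle_category": True,
                                 "labels": "full", "branching": False},
        "implementation": {"dk_explicit": False, "showcard": True},
    },
]

# Example: flag questions whose characteristics this deck marks as risky
# in mixed mode designs (middle categories, end labels, mark-all, ranking).
risky = [q["name"] for q in questions
         if q["task_characteristics"]["middle_category"]
         or q["task_characteristics"]["labels"] == "end"
         or q["format"]["task"] in ("mark all", "ranking")]
print(risky)
```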

Page 26:

In Summary
1) Mode is a characteristic of a question
2) Good questionnaire design is key to minimising many measurement differences
3) But we are unlikely to eliminate all differences, as there are different types of satisficing in different modes
4) We need to do more to assess any remaining differences and find ways to adjust for these (more on this in the next few slides)

Page 27:

Assessing Mixed Mode Measurement Error (1)

Quality indicators
For example:
• Mean item nonresponse rate
• Mean length of responses to open questions
• Mean number of responses in mark all that apply
• Psychometric scaling properties
• Comparison of survey estimates to a ‘gold’ standard (de Leeuw, 2005; Kreuter et al, 2008; Voogt and Saris, 2005)
  • Although validation data are often hard or impossible to obtain
• Etc.
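A hypothetical sketch of how the first two indicators might be computed (Python with pandas; the variable names and data are invented). Each indicator is tabulated per mode so the modes can be compared directly:

```python
import pandas as pd

# Invented respondent-level file: 'mode', two items where NaN marks item
# nonresponse, and the verbatim text of one open question.
df = pd.DataFrame({
    "mode":      ["F2F", "TEL", "Web", "Web"],
    "q1":        [3.0, None, 2.0, 4.0],
    "q2":        [1.0, 2.0, None, None],
    "open_text": ["quiet and friendly", "nice", "", "good shops nearby"],
})

items = ["q1", "q2"]

# Mean item nonresponse rate per mode: each respondent's share of missing
# answers across the items, averaged within mode.
item_nonresponse = df[items].isna().mean(axis=1).groupby(df["mode"]).mean()

# Mean length (in words) of the open answers, per mode.
open_length = df["open_text"].str.split().str.len().groupby(df["mode"]).mean()

print(item_nonresponse)
print(open_length)
```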

Page 28:

Assessing Mixed Mode Measurement Error (2)

How was the mixed mode data collected? What are the confounding factors or limitations?
• Random assignment
  • Rs randomly assigned to mode (Nicolaas et al, 2011), but this is not always possible
  • A random group changes mode during the interview (Heerwegh, 2009)
  • In both cases non-comparability can occur due to differential nonresponse bias
• R chooses the mode of data collection
  • May reduce nonresponse, but selection and measurement error effects are confounded (Vannieuwenhuyze et al, 2010)

Page 29:

Assessing Mixed Mode Measurement Error (3)

Ways to separate sample composition from mode effects
• Compare mixed mode data to that of a comparable single-mode survey (Vannieuwenhuyze et al, 2010)
• Statistical modelling:
  • Weighting (Lee, 2006)
  • Multivariate models (Dillman et al, 2009)
  • Latent variable models (Biemer, 2001)
• Propensity score matching (Lugtig et al, 2011)
  • Matching Rs from two survey modes who share the same background characteristics
  • Identifies Rs who are unique to a specific survey mode and those who are found in both modes
  • May be a useful technique
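The propensity score matching idea can be sketched as follows (Python with numpy, pandas and scikit-learn; all variable names and the simulated data are invented, and a real application would need balance checks, calipers and survey weights):

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Simulated data: respondents from two modes with shared background variables.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "web":  rng.integers(0, 2, n),            # 1 = Web respondent, 0 = F2F
    "age":  rng.normal(45, 15, n),
    "educ": rng.integers(1, 5, n),
})
df["y"] = 3 + 0.2 * df["web"] + 0.01 * df["age"] + rng.normal(0, 1, n)

# 1) Model the propensity to respond by Web from background characteristics.
X = df[["age", "educ"]]
df["ps"] = LogisticRegression().fit(X, df["web"]).predict_proba(X)[:, 1]

# 2) Nearest-neighbour match: pair each Web respondent with the F2F
#    respondent whose propensity score is closest ("twins" across modes).
web, f2f = df[df["web"] == 1], df[df["web"] == 0]
idx = np.abs(f2f["ps"].values[None, :] - web["ps"].values[:, None]).argmin(axis=1)
matched_f2f = f2f.iloc[idx]

# 3) Having matched on composition, the remaining difference in y is
#    attributed to mode (measurement) rather than to who responds.
print("mode effect estimate:", web["y"].mean() - matched_f2f["y"].mean())
```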

Page 30:

Assessing Mixed Mode Measurement Error (4)

The size of effects between modes depends on the type of analyses, which in turn depends on the type of reporting needed.

For example, reporting of:
• Means
• Percentages for extreme categories
• Percentages for all categories

Page 31:

We hope that today’s talk has given you. . .

• More understanding of the theoretical and practical differences in how Rs react to different modes of data collection

• More awareness of specific question attributes that make certain questions less portable across modes

• More knowledge and confidence in executing your own mixed modes questionnaires

Page 32:

Thank you all for listening

[email protected]

Complete table of results and recommendations available upon request

Page 33:

Appendix

Page 34:

Open Questions (1)

Option 1: Unconstrained textual/verbal open questions (i.e., fully open questions)

General Questionnaire Design Context – SA

Lines in text boxes versus an open box
• Christian and Dillman (2004)
• But Ciochetto et al (2006)

Slightly larger answer spaces (Christian and Dillman, 2004)

Page 35:

Open Questions (2)

Option 1: Fully open questions (continued)

Mixed Mode Context
• TEL Rs give less detailed answers to open-ended questions than F2F Rs (Groves and Kahn, 1979; Sykes and Collins, 1988; de Leeuw and van der Zouwen, 1988)
• Paper SA Rs give less complete answers to open-ended questions than F2F or TEL Rs (Dillman, 2007; de Leeuw, 1992; Groves and Kahn, 1979)
• Web Rs provide 30 more words on average than paper SA Rs (Schaeffer and Dillman, 1998)
• Positive effects of larger answer spaces may also apply to interview surveys (Smith, 1993; 1995)

Page 36:

Open Questions (3)

Option 1: Fully open questions (continued)

Modes to Use: F2F, TEL, SA?

Recommendations
• If the mixed mode design includes SA:
  • Minimise the use of open questions (as less complete answers are obtained)
  • Pre-test the SA visual layout 1) to ensure that the question is understood as intended and 2) to check if there are differences across modes

Page 37:

Open Questions (4)

Option 2: Open question requiring a number, date, or short textual/verbal response

General Questionnaire Design Context – SA
• Small changes in visual design can have a large impact on measurement
• Examples: Couper, Traugott and Lamias (2001); Smith (1993; 1995); Dillman et al (2004); Martin et al (2007)

Page 38:

Open Questions (5)

Option 2: Short number, date or textual/verbal response (continued)

Mixed Modes Context
Modes to Use: F2F, TEL, SA?

Recommendations
• Test the SA visual layout 1) to ensure that the question is understood as intended and 2) to check if there are differences across modes

Page 39:

End-labelled versus Fully-labelled (1)

On the whole, how satisfied are you with the present state of the economy in Great Britain, where 1 is very satisfied and 7 is very dissatisfied?

General Questionnaire Design Context

• Krosnick and Fabrigar (1997) suggest that fully-labelled scales are
  • Easier to answer
  • More reliable and valid
• The two formats are not equivalent
  • Fully-labelled scales produce more positive responses (Dillman and Christian, 2005; Campanelli et al, 2012)
  • End-labelled scales have a higher percentage of Rs in the middle category (Campanelli et al, 2012; not discussed in the text but shown in the tables of Dillman and Christian, 2005)

Page 40:

End-labelled versus Fully-labelled (2)

Mixed Modes Context
• Although there is higher endorsement of middle categories on end-labelled scales
• This is less true for TEL Rs (Campanelli et al, 2012)

Modes to Use: F2F, TEL?, SA

Recommendations
• Be careful with end-labelled scales, as these are more difficult for Rs
• If the mixed mode design includes TEL interviews, be cautious of the use of end-labelled scales

Page 41:

Branching versus No Branching (1)

In the last 12 months would you say your health has been good or not good?
Good 1
Not good 2

IF GOOD: Would you say your health has been fairly good or very good?
Fairly good 1
Very good 2

IF NOT GOOD: Would you say your health has been not very good or not good at all?
Not very good 1
Not good at all 2

General Questionnaire Design Context

• In TEL surveys, ordinal scales are often changed into a sequence of two or more branching questions in order to reduce the cognitive burden (a coded sketch of this follows below)

• Krosnick and Berent (1993)

• Malhotra et al (2009)

• Hunter (2005)

• Nicolaas et al (2011)
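For CAI implementation, the branched health question above can be scripted so that the two steps collapse back onto a single 4-point variable, directly comparable with a non-branched version asked in other modes. A minimal, hypothetical sketch in Python (the function name and numeric codes are invented):

```python
# A CAI-style sketch of the two-step branching above. The branched answers
# are collapsed back onto one 4-point code so the branched (e.g. TEL) and
# non-branched (e.g. F2F/Web) versions yield the same analysis variable.

def code_health(root_good: bool, followup: int) -> int:
    """Map root + follow-up answers onto a single 4-point scale:
    1 = very good, 2 = fairly good, 3 = not very good, 4 = not good at all."""
    if root_good:
        # follow-up codes: 1 = fairly good, 2 = very good
        return {1: 2, 2: 1}[followup]
    # follow-up codes: 1 = not very good, 2 = not good at all
    return {1: 3, 2: 4}[followup]

# R says "good", then "very good" -> code 1 on the common scale
assert code_health(True, 2) == 1
# R says "not good", then "not very good" -> code 3
assert code_health(False, 1) == 3
```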

Page 42:

Branching versus No Branching (2)

Mixed Modes Context
• Nicolaas et al (2000) found more extreme responses to attitude questions in the branched format in TEL mode (but it is unclear whether these are more valid)
• Nicolaas et al (2011) found
  • Mode differences between F2F, TEL and Web, but with no clear patterns
  • No mode differences for the non-branching format
• More research needed

Page 43:

Branching versus No Branching (3)

Modes to Use: F2F, TEL, SA

Recommendations
• As branching may improve reliability and validity, if used, it should be used across all modes
  • But testing is recommended to see if mode differences are present
• Due to R non-compliance with skip patterns in paper SA¹, Dillman (2007) recommends
  • Avoiding branching questions in mixed mode surveys that include a postal component
  • Instead, reducing the number of categories so that branching is not required

¹ Dillman (2007) shows that the skips after a filter question can be missed by a fifth of postal survey Rs

Page 44:

Implementation of task

Page 45:

Use of instructions, probes, clarifications, etc. (1)

Can I check, is English your first or main language?
INTERVIEWER: If ‘yes’, probe: ‘Is English the only language you speak or do you speak any other languages, apart from languages you may be learning at school as part of your studies?’
Yes – English only 1
Yes – English first/main and speaks other languages 2
No, another language is respondent’s first or main language 3
Respondent is bilingual 4

Page 46:

Use of instructions, probes, clarifications, etc. (2)

• It is common practice to provide interviewers with additional information that can be used if necessary to improve the quality of information from Rs

• Although not yet studied in mixed modes, it is likely that this may result in differences across modes in a study that uses SA alongside interviewer modes

Page 47:

Use of instructions, probes, clarifications, etc. (3)

Modes to Use: F2F, TEL, SA

Recommendations
• Where possible, all instructions and clarifications should be added to the question text for all modes (rather than being left to the discretion of the interviewer), or excluded from all modes
• Dillman (2007) recommends that interviewer instructions be evaluated for unintended response effects and that their use in SA modes be considered

Page 48:

Don’t Know (1)

What, if any, is your religion?
None 1
Christian 2
Buddhist 3
Hindu 4
Jewish 5
Muslim 6
Sikh 7
Another religion 8
(Spontaneous only) (Don’t know 98) (Refused 99)

General Questionnaire Design Context
• Offering an explicit ‘don’t know’ response greatly increases cases in this category
  • Particularly true for Rs with lower educational attainment (see Schuman and Presser, 1981; Krosnick et al, 2002)
• It is common practice not to provide an explicit ‘don’t know’ in TEL and F2F
• In SA modes, the ‘don’t know’ option tends to be either an explicit response option or omitted altogether

Page 49:

Don’t Know (2)

Mixed Mode Context

• Treating ‘don’t know’ differently in different modes may result in different rates of ‘don’t know’ across the modes

• Fricker et al (2005)

• Dennis and Li (2007) • Bishop et al (1980)

• Vis-Visschers (2009)

Page 50:

Don’t Know (3)

Modes to Use: F2F, TEL, SA

Recommendations
• A spontaneous ‘don’t know’ can be offered in mixed mode designs that include only interviewer-administered modes (i.e., TEL and F2F)
• For mixed mode designs that include both interviewer-administered and SA modes, it is generally recommended not to allow ‘don’t know’ as a response option
• Further research is required to compare spontaneous ‘don’t know’ in TEL and F2F with alternative methods of dealing with ‘don’t know’ in Web questionnaires (e.g. allowing questions to be skipped without further prompting)
• For questions where it is likely that many Rs may not know the answer, explicit ‘don’t know’ options should be used across all modes
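On the data-processing side, a small hypothetical recode (Python with pandas; the column names and codes are invented to match the religion example) shows one way to put interviewer-mode spontaneous ‘don’t know’/refusal codes and Web skips on a common footing before comparing modes:

```python
import pandas as pd

# Invented data: interviewer modes record spontaneous don't know / refusal
# as 98/99, while the Web version simply lets the item be skipped (missing).
df = pd.DataFrame({
    "mode":     ["F2F", "TEL", "Web", "Web"],
    "religion": [2, 98, None, 7],
})

# Set the interviewer-mode DK/refusal codes to missing, like a Web skip,
# so 'don't know' rates are defined the same way in every mode.
df["religion_harmonised"] = df["religion"].mask(df["religion"].isin([98, 99]))
print(df)
```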

Page 51:

Full References (1)

Alwin, D., & Krosnick, J. (1985). The Measurement of Values in Surveys: A Comparison of Ratings and Rankings. Public Opinion Quarterly, 49(4), 535-552.

Belson, W. (1981). The Design and Understanding of Survey Questions. Aldershot, England: Gower.

Biemer, P. (2001). Non-response Bias and Measurement Bias in a Comparison of Face to Face and Telephone Interviewing. Journal of Official Statistics, 17(2), 295-320.

Bishop, G., Oldendick, R., Tuchfarber, A. & Bennett, S. (1980). Pseudo-Opinions on Public Affairs. Public Opinion Quarterly, 44 (2), 198-209.

Campanelli, P. (2010). Internal analysis documents from ESRC Survey Design and Measurement Initiative grant on Mixed Modes and Measurement Error.

Campanelli, P., Gray, M., Blake, M. and Hope, S. (2012). Suspicious and Non-Suspicious Response Patterns which Are and Are Not Problematic. Unpublished paper.

Page 52:

Full References (2)

Christian, L. & Dillman, D. (2004). The Influence of Graphical and Symbolic Language Manipulations on Responses to Self-Administered Questions. Public Opinion Quarterly, 68(1), 57-80.

Ciochetto, S., Murphy, E & Agarwal, A. (2006). Usability Testing of Alternative, Design Features for the 2005 National Census Test (NCT) Internet Form: Methods, Results, and Recommendations of Round-2 Testing. Human-Computer Interaction Memorandum #85, Washington, DC: U. S. Census Bureau (Usability Laboratory).

Converse, J. & Presser, S. (1986). Survey Questions: Handcrafting the Standardized Questionnaire. Thousand Oaks, California: Sage.

Couper, M., Traugott, M., & Lamias, M. (2001). Web Survey Design and Administration. Public Opinion Quarterly, 65(2), 230.

de Leeuw, E. & van der Zouwen, J. (1988). Data Quality in Telephone and Face-to-Face Surveys: a Comparative Meta-Analysis. In R. Groves et al. (Eds), Telephone survey methodology. Hoboken, New Jersey: Wiley.

Page 53:

Full References (3)

de Leeuw, E. (1992). Data Quality in Mail, Telephone and Face to Face Surveys. Dissertation. Amsterdam: T.T. Publikaties.

de Leeuw, E.D. (2005). To Mix or Not to Mix Data Collection Modes in Surveys. Journal of Official Statistics, 21(2), 233-255.

Dennis, M. & Li, R. (2007). More Honest Answers to Web Surveys? A Study on Data Collection Mode Effects. IMRO’s Journal of Online Research, published 10/10/2007. http://ijor.mypublicsquare.com/view/more-honest-answers

Dillman, D. (2007). Mail and Internet Surveys: The Tailored Design Method, 2nd edition. Hoboken, New Jersey: Wiley.

Dillman, D. & Christian, L. (2005). Survey Mode as a Source of Instability in Responses Across Surveys. Field Methods, 17(1), 30-52.

Dillman, D., Parsons, N. & Mahon-Haft, T. (2004). Cognitive Interview Comparisons of the Census 2000 Form and New Alternatives. Technical Report 04-030 of the Social and Economic Sciences Research Center, Washington State University, Pullman, Washington. http://www.sesrc.wsu.edu/dillman/papers/2004/connectionsbetweenopticalfeatures.pdf

Page 54:

Full References (4)

Dillman, D., Smyth, J. & Christian, L.M. (2009). Internet, Mail and Mixed-Mode Surveys: The Tailored Design Method, 3rd edition. Hoboken, New Jersey: Wiley.

Fricker, S., Galesic, M., Tourangeau, R. & Yan, T. (2005). An Experimental Comparison of Web and Telephone Surveys. Public Opinion Quarterly, 69(3), 370-392.

Fowler, F.J. (1995). Improving Survey Questions: Design and Evaluation. Thousand Oaks, California: Sage.

Fowler, F.J., Roman, A. & Di, Z. (1998). Mode Effects in a Survey of Medicare Prostate Surgery Patients. Public Opinion Quarterly, 62(1), 29.

Groves, R.M. & Kahn, R.L. (1979). Surveys by Telephone: A National Comparison with Personal Interview. New York: Academic Press.

Heerwegh, D. (2009). Mode differences between face-to-face and web surveys: An Experimental Investigation of Data Quality and Social Desirability Effects. International Journal of Public Opinion Research, 21(1), 111-121.

Page 55:

Full References (5)

Hope, S., Campanelli, P., Nicolaas, G., Lynn, P. & Jäckle, A. (2011). The Role of the Interviewer in Producing Mode Effects: Results from a Mixed Modes Experiment. ESRA Conference, 21 July 2011.

Hunt, S., Sparkman, R. & Wilcox, J. (1982). The Pretest in Survey Research: Issues and Preliminary Findings. Journal of Marketing Research, 19(2), 269-273.

Hunter, J. (2005). Cognitive Test of the 2006 NRFU: Round 1, Statistical Research Division Study Series, Survey Methodology #2005–07, U.S. Census Bureau. http://www.census.gov/srd/papers/pdf/ssm2005-07.pdf

Jäckle, A., Lynn, P., Campanelli, P., Nicolaas, G., & Hope, S. (2011). How and When Does the Mode of Data Collection Affect Survey Measurement? ESRA Conference, 21 July 2011.

Kalton, G., Roberts, J. & Holt, D. (1980). The Effects of Offering a Middle Response Option with Opinion Questions. Statistician, 29, 65-78.

Page 56:

Full References (6)

Kreuter, F., Presser, S. & Tourangeau, R. (2008). Social Desirability Bias in CATI, IVR, and Web Surveys: The Effects of Mode and Question Sensitivity. Public Opinion Quarterly, 72(5), 847-865.

Krosnick, J. (1991). Response Strategies for Coping with the Cognitive Demands of Attitude Measures in Surveys. Applied Cognitive Psychology, 5, 213-236.

Krosnick, J. (1999). Survey Research. Annual Review of Psychology, 50: 537–567.

Krosnick, J. (2000). The Threat of Satisficing in Surveys: The Shortcuts Respondents Take in Answering Questions. Survey Methods Newsletter, 20(1), 2000 (Published by the National Centre for Social Research, London, UK).

Krosnick, J. & Berent, M. (1993). Comparisons of Party Identification and Policy Preferences: The Impact of Survey Question Format. American Journal of Political Science, 37(3), 941-964.

Page 57:

Full References (7)

Krosnick, J. & Fabrigar, L. (1997). Designing Rating Scales for Effective Measurement in Surveys. In L. Lyberg et al. (Eds), Survey Measurement and Process Quality (pp. 141-164). Hoboken, New Jersey: Wiley.

Krosnick, J., Holbrook, A., Berent, M., Carson, R., Hanemann, W., Kopp, R., Mitchell, R., et al. (2002). The Impact of "No Opinion" Response Options on Data Quality: Non-Attitude Reduction or an Invitation to Satisfice? Public Opinion Quarterly, 66(3), 371-403.

Krosnick, J., Narayan, S. & Smith, W. (1996). Satisficing in Surveys: Initial Evidence. In M. T. Braverman & J. K. Slater (Eds.), Advances in Survey Research (Vol. 70, pp. 29-44). San Francisco: Jossey-Bass.

Krosnick, J. & Presser, S. (2010). Question and Questionnaire Design. In P.V. Marsden & J.D. Wright (Eds), Handbook of Survey Research, 2nd edition. San Diego, CA: Elsevier.

Lee, S. (2006). Propensity Score Adjustment as a Weighting Scheme for Volunteer Panel Web Surveys. Journal of Official Statistics, 22(2), 329-349.

Lozar Manfreda, K. & Vehovar, V. (2002). Mode Effect in Web Surveys. In the proceedings from The American Association for Public Opinion Research (AAPOR) 57th Annual Conference, 2002.

Page 58:

Full References (8)

Lugtig, P., Lensvelt-Mulders, G., Frerichs, R. & Greven, A. (2011). Estimating Nonresponse Bias and Mode Effects in a Mixed-Mode Survey. International Journal of Market Research, 53(5).

Malhotra, N., Krosnick, J. & Thomas, R. (2009). Optimal Design of Branching Questions to Measure Bipolar Constructs. Public Opinion Quarterly, 73(2): 304-324.

Martin, E., Childs, H., Hill, J., Gerber, E. & Styles, K. (2007). Guidelines for Designing Questionnaires for Administration in Different Modes. Washington, DC: U.S. Census Bureau. http://www.census.gov/srd/mode-guidelines.pdf

Nicolaas, G., Campanelli, P., Hope, S., Jäckle, A. & Lynn, P. (2011). Is It a Good Idea to Optimise Question Format for Mode of Data Collection? Results from a Mixed Modes Experiment, ISER Working paper, no. 2011-31, ISER, University of Essex.

Nicolaas, G., Thomson, K. & Lynn, P. (2000). Feasibility of Conducting Electoral Surveys in the UK by Telephone. National Centre for Social Research.

O'Muircheartaigh, C., Krosnick, J. & Helic, A. (2001). Middle Alternatives, Acquiescence, and the Quality of Questionnaire Data, Irving B. Harris Graduate School of Public Policy Studies, University of Chicago, 2001.

Page 59:

Full References (9)

Rasinski, K., Mingay, D. & Bradburn, N. (1994). Do Respondents Really “Mark All That Apply” on Self-Administered Questions? Public Opinion Quarterly, 58(3), 400-408.

Roy, L., Gilmour, G. & Laroche, D. (2004). The Internet Response Method: Impact on Canadian Census of Population Data. Statistics Canada Internal Report, 2004. http://www.amstat.org/sections/srms/proceedings/y2006/Files/JSM2006-000808.pdf

Sangster, R. & Fox, J. (2000). Housing Rent Stability Bias Study. Washington, DC: U.S. Bureau of Labor Statistics, Statistical Methods Division.

Saris, W., Revilla, M., Krosnick, J. & Shaeffer, E. (2010). Comparing Questions with Agree / Disagree Response Options to Questions with Item-Specific Response Options. Survey Research Methods, 4(1), 61-79.

Schaeffer, D. & Dillman, D. (1998). Development of a Standard E-mail Methodology. Public Opinion Quarterly, 62(3), 378-397.

Page 60:

Full References (10)

Schonlau, M., Zapert, K., Simon, L., Sanstad, K., Marcus, S., Adams, J., Spranca, M., et al. (2003). A Comparison Between Responses From a Propensity-Weighted Web Survey and an Identical RDD Survey. Social Science Computer Review, 22(1), 128-138. http://www.schonlau.net/publication/03socialsciencecomputerreview_propensity_galley.pdf

Schuman, H. & Presser, S. (1981). Questions and Answers in Attitude Surveys: Experiments on Question Form, Wording, & Context. New York: Academic Press.

Smith, T. (1993). Little Things Matter: A Sampler of How Differences in Questionnaire Format Can Affect Survey Responses, GSS Methodological Report no. 78, Chicago: National Opinion Research Center.

Smith, T. (1995). Little Things Matter: A Sample of How Differences in Questionnaire Format Can Affect Survey Responses, paper presented at the annual meeting of the American Association for Public Opinion Research.

Smyth, J., Dillman, D., Christian, L. & Stern, M. (2006). Comparing Check-All and Forced-Choice Question Formats in Web Surveys. Public Opinion Quarterly, 70(1), 66-77.

Page 61:

Full References (11)

Smyth, J., Christian, L. & Dillman, D. (2008). Does “Yes or No” on the Telephone Mean the Same as “Check-All-That-Apply” on the Web? Public Opinion Quarterly, 72(1), 103-113.

Stern, M., Dillman, D. & Smyth, J. (2007). Visual Design, Order Effects, and Respondent Characteristics in a Self-Administered Survey. Survey Research Methods, 1(3), 121-138.

Sudman, S. & Bradburn, N.M. (1982). Asking Questions. San Francisco, California: Jossey-Bass.

Sykes, W. & Collins, M. (1988). Effect of Mode of Interview: Experiments in the UK. In R. Groves et al. (Eds), Telephone survey methodology. Hoboken, New Jersey: Wiley.

Tarnai, J. & Dillman, D. (1992). Questionnaire Context as a Source of Response Differences in Mail vs. Telephone Surveys. In: N. Schwarz, H. J. Hippler, and S. Sudman (Eds.) Order Effects in Social and Psychological Research. New York: Springer-Verlag.

Thomas, R.K. & Klein, J.D. (2006). Merely Incidental? Effects of Response Format on Self-Reported Behaviour. Journal of Official Statistics, 22, 221-244.

Page 62:

Full References (12)

Tourangeau, R. & Yan, T. (2007). Sensitive Questions in Surveys. Psychological Bulletin, 133(5), 859-883.

Vannieuwenhuyze, J., Loosveldt, G. & Molenberghs, G. (2010). A Method for Evaluating Mode Effects in Mixed-Mode Surveys. Public Opinion Quarterly, 74(5), 1027-1045.

Vis-Visschers, R. (2009). Presenting 'don’t know' in web surveys. Paper presented at 7th Quest Workshop Bergen Norway, 18-20 May 2009.

Ye, C., Fulton, J. & Tourangeau, R. (2011). More Positive or More Extreme? A Meta-Analysis of Mode Differences in Response Choice. Public Opinion Quarterly, 75(2), 349–365.