Chapter 20
Deciding on what to evaluate: the strategy
Introduction
• What’s the purpose of the evaluation?
• What data to collect?
• What product, system, or prototype are you testing?
• What constraints do you have?
• Answers to the above form da strategy
Purpose of evaluation
• Qualitative or quantitative?
• Qualitative: not easily defined or measured
  - Sometimes obtained from user comments, e.g., “easy”, “difficult”, “boring”, etc., so…
  - Listen to your subjects (video camera, yes)
• Quantitative: explicit usability metrics
  - Clearly easier to crunch the numbers if you have some numbers to crunch
  - Of course, measurements need to be set up: in the code, with a stopwatch, or via a “wrapper” program like Tobii’s ClearView, which records time, keystrokes, etc. (see the sketch below)
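For the quantitative side, here is a minimal Python sketch of what “setting up measurements in the code” could look like. The TaskLogger class, its method names, and the task name are made up for illustration; they are not part of ClearView or any particular toolkit.

```python
import time

class TaskLogger:
    """Barebones instrumentation for quantitative usability metrics:
    task completion time, keystroke count, and error count."""

    def __init__(self):
        self.records = []      # one dict per completed task
        self._task = None
        self._start = None
        self._keystrokes = 0
        self._errors = 0

    def start_task(self, name):
        self._task = name
        self._start = time.monotonic()
        self._keystrokes = 0
        self._errors = 0

    def key_pressed(self):
        # Call this from the UI's key-event handler.
        self._keystrokes += 1

    def error_made(self):
        # Call this wherever the application detects a user error.
        self._errors += 1

    def end_task(self):
        self.records.append({
            "task": self._task,
            "seconds": time.monotonic() - self._start,
            "keystrokes": self._keystrokes,
            "errors": self._errors,
        })

# Usage with a made-up task name:
log = TaskLogger()
log.start_task("find-the-climate-map")
# ... participant works; the UI calls log.key_pressed() / log.error_made() ...
log.end_task()
print(log.records)
```

The point is only that the metrics (time, keystrokes, errors) have to be wired into the UI before the session, whether by hand like this or by a recording tool.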
Priorities and levels
• Prioritize usability requirements
  - What’s more important: the domain, users, tasks, environment, or constraints (costs, budgets, timescales, technology)?
  - Whatever is most important drives the design
  - Erm, what’s this got to do with evaluation?
• Setting usability metric levels:
  - Has to do with baseline and desired performance levels, e.g., “speed will improve by 50%”
  - Can be based on a model, e.g., Fitts’ law (see the sketch below)
  - Can be stated as a hypothesis
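As a concrete example of a model-based metric level, here is a small Python sketch using the Shannon formulation of Fitts’ law. The constants a and b and the button dimensions are placeholder numbers, not calibrated values.

```python
import math

def fitts_mt(distance, width, a=0.2, b=0.1):
    """Predicted movement time (seconds) from the Shannon formulation of
    Fitts' law: MT = a + b * log2(D/W + 1). The constants a and b are
    placeholders; real values come from regression on calibration trials."""
    return a + b * math.log2(distance / width + 1)

# Hypothetical baseline vs. redesigned target (pixels): smaller distance
# and a bigger button should reduce the predicted selection time.
baseline = fitts_mt(distance=600, width=20)
redesign = fitts_mt(distance=300, width=60)
improvement = (baseline - redesign) / baseline
print(f"predicted speedup: {improvement:.0%}")  # compare against the stated target, e.g., 50%
```

The predicted times give a baseline and a desired level to compare against the stated hypothesis.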
What type of data to collect
• Quantitative or qualitative data?
• Didn’t we already go over this? (Doncha hate it when textbooks are overly repetitive? I don’t know what it is about HCI books, but they tend to be this way)
• Anyway, so this is kind of a wasted slide… Oh wait, I get it: it’s the second question of da strategy
What to test?
• (Question 3 of da strategy)
• What’s being evaluated: low-fidelity prototype or high-fidelity prototype? (Why not an existing system?)
• Low-fidelity: more for guidance and direction of design (more exploratory in nature)
• High-fidelity: used for exposing problems with a preliminary version of the UI
What are the constraints?
• (Question 4 of da strategy)
• Hmm, they say this is the most important one; is it? Practically speaking, I guess so…
• These are the pragmatic concerns:
  - How much time do I have to run the experiment?
  - Money? (Paying subjects, yeah right…)
  - Equipment available?
  - Subjects? Where to get them? (Psych pool!)
  - How much time do I have to analyze?
• Document the strategy (good idea; see the sketch below)
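One lightweight way to document the strategy is simply to record the answers to the four questions in a structured form. The Python dataclass and field names below are my own illustration, not a standard template; the filled-in values preview the Global Warming case summarized on the last slide.

```python
from dataclasses import dataclass, field

@dataclass
class EvaluationStrategy:
    """One record per study, answering the four strategy questions."""
    purpose: str                 # why are we evaluating?
    data_to_collect: list        # qualitative and/or quantitative data
    artifact_under_test: str     # low-fi prototype, hi-fi prototype, or existing system
    constraints: dict = field(default_factory=dict)   # time, money, equipment, subjects

# Filled in with the Global Warming example:
strategy = EvaluationStrategy(
    purpose="Is the navigation effective for students?",
    data_to_collect=["comments on the UI during use"],
    artifact_under_test="prototype (just the UI, no math model)",
    constraints={"evaluators": "newbies", "subjects": "psych pool"},
)
print(strategy)
```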
Global warming example
• Evaluating the Global Warming CD:
  - Learnability: easy to learn?
  - Satisfaction: enjoyable to use?
  - Navigation: easy to install, navigate, and use?
• Exercise 21.1 (a good one):
  - Suppose you’re a consultant and you get hauled in by the Global Warming developers (who think usability testing is a waste of time but want to adhere to ISO 9241)
  - What are you going to tell them your strategy is?
  - What concerns/requirements do you have as the experimenter?
Global Warming strategy
• Purpose: evaluate whether the navigation will be effective for students
• Concerns: will this be an enjoyable learning experience (will students actually learn anything?)
• Data to collect: comments on the UI during use (what about the learning effect?)
• To test: a prototype (just the UI, no math model)
• Constraints: newbie evaluators