A day in the life of Product Development…
(Comic strip: while the team argues over the design, the end user stands forgotten in the corner.)
• Brainstorming – 1st cut! "Not done… we've already wasted 2 months!"
• Suggestions, creative inputs from everybody – 2nd cut! "Change the font… also the language."
• Design by Committee – 3rd cut!
• Finally, the final cut! "Looks OK to me! Are we sure?" "Time cost? Overheads?"
• "Let's get this straight: is it making business sense?!" "It was fun, but can we cut it short?"
In hindsight: there has to be a better way!
Where is the actual user? While the team fights and freezes on the design, this person stands in the corner.
Who is this person, and what does he/she want?
What is User-Centered Design?
• User-centered design (UCD) is a process in which the needs, wants and limitations of the end user of a product are systematically researched and incorporated into each stage of the design process.
• ISO definition of usability:
Usability is the effectiveness, efficiency and satisfaction with which a specified set of users can achieve a specified set of tasks in a particular environment.
(Process diagram) Identify → Analyze → Listen → Observe → Profile and Group → Design → Validate
Focus on the End User – the often neglected end user.
UCD – Design Activities

UX Research and Analysis
• Contextual Inquiry, User Interviews, Surveys and Questionnaires
• Usability Benchmarking Studies
• User Goals
• User Profiles and Personas
• Epic Scenarios
• User Journeys

UX Design
• UX Guidelines
• Information Architecture and Navigation Models
• Paper Prototypes/Wireframes and Visual Design Options
• Heuristic Evaluation/Expert Review
• Expectancy Testing, Affordance Testing, A/B Testing
• Key Screen Designs, UI Style Guide, Asset and Patterns Library
• Master Templates/Specifications

UX Development
• High-Fidelity Prototypes
• UI Design and Development
• Performance Testing, Cognitive Walkthroughs, Surveys and Questionnaires
• Test Reports and Recommendations

Live Site Testing
• Field Testing, Diary Studies, Surveys, Questionnaires
• Test Reports and Recommendations

UX Road Map
Legend (UX activities): UX Research · UI Design and Development · Usability Testing · Usability Analysis · Product Management
Legend (SDLC phases): Requirements and Analysis · Design · Development · Integration, Testing, Training, Implementation
Benefits (ROI?)
• Hear it straight from the horse's mouth.
• Eliminate "you know, I think this works…", "Let me tell you…", "as far as I know…".
• Fix potential roadblocks.
• Track potential business enhancers back into the road map.
• Match tone and pitch to highlight the most-liked features.
• Downplay lower-rated or irritating aspects.
• Understand hidden behavioral aspects, cultural issues, computer literacy levels, etc.
• Cite the reports to defend your product's usability and likeability.
• Clients listen when they know you care; be better informed before going to the client.
We have UI designers – is this not enough?
• Usability is an end result of all product phases, not a by-product of UI design.
• Good UI design provides aesthetic value, but when it is not backed by research-based data and validation it can become a hindrance of its own due to individual perceptions and bias.
• Usability should not be skin deep. It must be woven into the product and business culture by being user-focused and business-goal-oriented; it cannot live only at the surface level.
• As in other practice areas and skill sets, UI designers bring in their experience of what worked before. That in no way guarantees it will work now.
• Validating your product with end users is an industry standard across all major product development firms.
This seems to be more trouble than help – will it not delay the product?
• Choose between unforeseen factors resulting in failure, or some additional effort and cost now.
• Usability findings give us visibility into potential issues.
• They can help fine-tune the product vision in current or future versions.
• Understand and prepare for potential issues rather than be surprised by them.
• Use positive findings from the validation process in better pitches.
We have guidelines and best practices – what else can the end user suggest?
• Go beyond listening to end users: observe them working with similar tools or technologies in their natural environments. This helps uncover hidden issues and unspoken needs.
• The validation process is not about indulging the whims and fancies of the end user. It is a skill and a measure to confirm whether what we think works actually does.
• Guidelines, best practices and internal validations definitely help, but they cannot be the definitive way of ensuring what works.
OK, what are some metrics/measurement criteria?
• Performance metrics (quantitative measures, often used for summative testing)
– Task success (whether the participant can complete the task in the allotted time without assistance from the administrator)
– Time on task (the length of time in seconds required for participants to complete the task)
– Errors (may include clicking on an incorrect menu item or link, or interacting incorrectly with an on-screen control)
– Efficiency (e.g. task time and errors)
– Learnability
• Issues-based metrics (primarily used for formative testing)
– Usability issues (number of issues found, percentage of participants who found an issue)
– Severity ratings (a rating assigned to each usability issue reflecting its impact on the user's satisfaction and ability to complete tasks)
* Metrics derived from NISTIR 7741, NIST Guide to the Processes Approach for Improving the Usability of Electronic Health Records
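The performance metrics above are simple aggregations over raw test logs. A minimal sketch, where the task names and numbers are illustrative only, not data from a real study:

```python
# Sketch: computing performance metrics from per-attempt usability test logs.
# Records are (task, success, seconds, errors); values are made up.
attempts = [
    ("find-patient", True, 42.0, 0),
    ("find-patient", False, 95.0, 3),
    ("find-patient", True, 51.0, 1),
]

n = len(attempts)
successes = sum(1 for _, ok, _, _ in attempts if ok)
success_rate = successes / n * 100                 # task success, as a percentage
mean_time = sum(t for _, _, t, _ in attempts) / n  # time on task, mean seconds
total_errors = sum(e for _, _, _, e in attempts)   # error count across attempts

print(f"success rate: {success_rate:.0f}%")        # 2 of 3 attempts -> 67%
print(f"mean time on task: {mean_time:.1f}s")
print(f"errors observed: {total_errors}")
```

In a real study these would be computed per task and then compared across participants or releases.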
OK, what are some metrics/measurement criteria? (Continued…)
• Self-reported metrics (both quantitative and qualitative measures which provide insights about participant satisfaction)
– Post-task ratings (may be a Likert scale from 1 to 5 where, for example, 1 is "very difficult" and 5 is "very easy")
– Post-session ratings (may be a Likert scale from 1 to 5 where, for example, 1 is "very dissatisfied" and 5 is "very satisfied")
– System Usability Scale (participants rate statements on a five-point scale from "1 – Strongly Disagree" to "5 – Strongly Agree"; the SUS survey yields a single number representing a composite measure of the overall usability of the system)
– NASA TLX (a subjective workload assessment tool that asks users to rate how demanding particular tasks are to perform)
– Specific attribute questions (e.g. rate from Strongly Agree to Strongly Disagree the statement "Overall, I am satisfied with how well this application supported this task")
– Semantic differentials (e.g. "This task was: Easy … Difficult")
– Answers to open-ended questions (for example, "What did you find to be the most difficult or frustrating aspect of this application?")
• Behavioral metrics (often add context to performance, issues-based, and self-reported metrics)
– Verbal (positive/negative) comments and non-verbal behavior; for example, a long pause or a confused look might indicate a lack of understanding.
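The SUS questionnaire mentioned above yields its single 0–100 number via a fixed scoring rule: odd-numbered (positively worded) items contribute (rating − 1), even-numbered (negatively worded) items contribute (5 − rating), and the sum is multiplied by 2.5. A sketch:

```python
# Standard System Usability Scale (SUS) scoring.
def sus_score(ratings):
    """ratings: the ten 1-5 responses, in questionnaire order."""
    if len(ratings) != 10:
        raise ValueError("SUS has exactly 10 items")
    total = 0
    for i, r in enumerate(ratings, start=1):
        # Odd items are positively worded, even items negatively worded.
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5  # scale the 0-40 raw sum to 0-100

print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # best possible -> 100.0
print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))  # -> 75.0
```

Individual SUS scores are then averaged across participants; the example ratings here are illustrative.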
Where do we find some healthcare usability documentation?
• http://www.nist.gov/itl/hit/upload/Guide_Final_Publication_Version.pdf
• www.himss.org/content/files/himss_definingandtestingemrusability
• http://www.humanfactors.com/downloads/whitepapersrequest.asp?whitepaper=UXandHealthcare
• http://healthit.ahrq.gov/portal/server.pt/document/907504/09%2810%29-0091-1-ef_pdf?qid=80974149&rank=5
• http://www.usercentric.com/hit-usability
• http://www.himss.org/ASP/topics_FocusDynamic.asp?faid=358
• http://isvcommunity.com/profiles/blogs/meaningful-usability-in
• http://www.slideshare.net/jinnes/repositioning-user-experience?src=related_normal&rel=13728
* References quoted from some of the influential usability bodies
Some comments from a previous test and redesign
Help Desk:
– "Finally, this looks like a software product."
– "I would feel more comfortable providing support for this than for the previous (product name removed)."
Training Team:
– "Task flows are very clear; there won't be a huge learning curve."
– "I think it will reduce my training time."
– "This restricted navigation works perfectly! We will be able to control users' task flows, which was not possible with (product name removed)."
Account Manager:
– "I thought the demonstration showed major improvement. I appreciate the opportunity to see what is coming and how it would affect the process flow from a site perspective."
Regional Vice President:
– "I thought the flow change was excellent. The pages do not look as busy as before."
– "I think the changes are definitely easier and more user-friendly."
Implementation Manager:
– "I have never seen Jimmy and Tina so appreciative of (product name removed). The fact that they are positive gives me huge confidence."
Some "quotes"…
• "Usability cost-benefit data shows that including usability in product development actually cuts the time to market and increases sales, because usability and ease of use build quality into products and catch many expensive problems early in the cycle, when they can be addressed at lower cost. Finally, working with users from the beginning of a product cycle ensures that the product is being designed so that users will be satisfied." – Claire Marie Karat, human-computer interface researcher at IBM
• "What most separates firms with a disciplined approach to customer-experience management from their peers? The use of primary user research. 68% of the disciplined firms report using primary research to understand customers, compared to only 21% of the undisciplined firms – the largest [gap] that we found." – Forrester Research
• "Most websites today fail basic tests of usability." – Forrester Research
• "Over the last year online banking has attracted 6.3 million users, but a massive 3.1 million of those have closed their accounts already due to poor website design and inefficient service." – Internet Money, Issue 4
What is not Usability
• Usability is not UI design/development. UI design/development is a discipline focused on the technical aspects of front-end development.
• Aesthetics and visual design do not ensure usability. Usability is a result of research, domain expertise, technology and aesthetics.
• Asking users what they want, and collecting laundry lists, do not ensure usability. A usability process that depends solely on asking users what they want is prone to failure, because users cannot precisely articulate what the core issues are. Observe users rather than just asking them.
• Marketing research can provide direction and a base for usability analysis, but it is not sufficient in itself because it lacks an ingrained end-user focus.
• Common sense is not usability. A common-sense-based approach to usability tends to fail miserably, because common sense differs from person to person and is often biased.
Why Usability Validation? Don't we know enough?
• We do know our domain within our own context and experience, but we are not our users – why risk it?
• Target users' likes and dislikes are influenced by the similar experiences they have had before with various software and gadgets, and even by what their neighbor has or had.
• What is usable for me may not be usable for you; the same applies to a group of end users from a particular domain and/or demographic.
• "Usable" is often a very subjective feeling stemming from the difference between the utility and the usability of an app. With no alternative in sight, users adapt to what is available; with more options, they pick and choose!
• Usability that is not measured against, and defended with, metrics can hurt the product, because of varied feedback from external market voices.
Measuring Usability
Usability is the effectiveness, efficiency and satisfaction with which a specified set of users can achieve a specified set of tasks in a particular environment.

Effectiveness:
Task Success – The participant achieves the correct outcome, without assistance, within the time allotted, on a per-task basis. The total number of successes is calculated for each task and then divided by the total number of times that task was attempted; the result is reported as a percentage.
Task Failures – If the participant abandons the task, does not reach the correct answer or performs it incorrectly, or reaches the end of the allotted time before successful completion, the task is counted as a failure. The total number of failures is calculated for each task and then divided by the total number of times that task was attempted. Not all deviations are counted as errors. Failures may also be expressed as the mean number of failed tasks per participant. An enumeration of errors and error types should be collected.
Efficiency:
Task Deviations – The participant's path through the application is recorded. Deviations occur if the participant, for example, visits an incorrect screen, clicks on an incorrect menu item, follows an incorrect link, or interacts incorrectly with an on-screen control. This path is compared with the optimal path; the number of steps in the observed path divided by the number of optimal steps gives a ratio of path deviation. Deviations do not necessarily mean failure – simply a less efficient route through the interface. It is strongly recommended that task deviations be reported, and optimal paths (i.e., procedural steps) should be recorded when constructing tasks.
Task Time – Each task is timed by the test administrator from "Begin" to "Done". Task times are recorded for successes; observed task time divided by the optimal time for each task is a measure of optimal efficiency. Optimal task performance time, as benchmarked by expert performance under realistic conditions, is recorded when constructing tasks. Target task times used in the moderator's guide must be operationally defined by taking multiple measures of optimal performance and multiplying by some factor (e.g., 1.25) that allows a time buffer, because participants are presumably not trained to expert performance. Thus, if expert optimal performance on a task was 100 seconds, the allotted task time would be 125 seconds. This ratio should be aggregated across tasks and reported with mean and variance scores.
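The two efficiency ratios above (path deviation and time efficiency) and the allotted-time buffer are simple divisions and multiplications; a sketch with illustrative numbers:

```python
# Efficiency metrics for one task; all numbers here are illustrative.
observed_steps = 9           # steps the participant actually took
optimal_steps = 6            # steps on the recorded optimal path
deviation_ratio = observed_steps / optimal_steps  # 1.5: path 50% longer than optimal

expert_time = 100.0          # seconds, benchmarked expert performance
buffer_factor = 1.25         # allowance because participants are not experts
allotted_time = expert_time * buffer_factor       # 125.0 s allotted for the task

observed_time = 140.0        # seconds, a successful participant's time
time_efficiency = observed_time / expert_time     # 1.4x the optimal time

print(deviation_ratio, allotted_time, time_efficiency)
```

Per the guidance above, these ratios would be aggregated across tasks and reported with mean and variance scores.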
Satisfaction:
Task Rating – The participant's subjective impression of the ease of use of the application is measured by administering both a simple post-task question and a post-session questionnaire. After each task, the participant is asked to rate "Overall, this task was:" on a scale of 1 (Very Difficult) to 5 (Very Easy). These data are averaged across participants; a common convention is that the average rating for a system judged easy to use should be 3.3 or above. To measure participants' confidence in, and liking of, the application overall, the testing team can administer the System Usability Scale (SUS) post-test questionnaire, with statements such as "I think I would like to use this system frequently", "I thought the system was easy to use", and "I would imagine that most people would learn to use this system very quickly".
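Checking the averaged post-task ratings against the 3.3 convention above can be sketched as follows (the ratings are illustrative):

```python
# Post-task ease-of-use ratings, 1 = Very Difficult ... 5 = Very Easy.
# One rating per participant for a single task; values are made up.
ratings = [4, 3, 5, 2, 4, 4]

mean_rating = sum(ratings) / len(ratings)
meets_convention = mean_rating >= 3.3   # common "judged easy to use" threshold

print(f"mean rating: {mean_rating:.2f}, meets 3.3 convention: {meets_convention}")
```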
Measuring Usability (Continued…)
What users want – and what that means:
• Effective – the completeness and accuracy with which users achieve their goals. (Were user goals met? Were the results correct?)
• Efficient – the speed (with accuracy) with which they complete their tasks. (Involves effective use of navigation, keyboard shortcuts, menus, links, buttons, etc.)
• Engaging – how pleasant or satisfying the interface is to use. (Design must meet audience needs; a game-like interface can be annoying to people doing repetitive tasks.)
• Error tolerant – the ability of the interface to prevent errors or to help users recover from them. (Design to prevent user interaction errors where possible, help the user recover from errors that do occur, and write error messages that are understandable and do not "blame the user".)
• Easy to learn – how well the product supports initial orientation and deeper learning. (The interface should let users build on their knowledge without effort; users must be able to draw on past experiences; learning aids such as tutorials and help must be part of the design.)
Known proponents of Usability and UCD
• IBM – http://www-01.ibm.com/software/ucd/ucd.html
• Apple – http://developer.apple.com/library/mac/#documentation/UserExperience/Conceptual/AppleHIGuidelines/XHIGDesignProcess/XHIGDesignProcess.html#//apple_ref/doc/uid/TP40002718-TPXREF101
• Microsoft – http://msdn.microsoft.com/en-us/library/ms997578.aspx
• Oracle – http://usableapps.oracle.com/getInvolved/labTours.html
• IDEO – http://www.ideo.com/work/human-centered-design-toolkit/
• SAP – http://www.sapdesignguild.org/editions/edition8/measuring_up.asp
• World Usability Day – http://www.worldusabilityday.org/
• UPA – http://www.upassoc.org/
• CHI – http://www.chi2011.org/