
    The complete guide to user testing websites, apps, and prototypes


Contents

INTRO
DEFINE YOUR OBJECTIVE
IDENTIFY WHAT AND WHO YOU’RE STUDYING
    Determine your product and devices
    Select your test participants
    Other considerations
BUILD YOUR TEST PLAN
    Create a series of tasks
    Write clear questions
LAUNCH A DRY RUN
BEWARE OF ERRORS
ANALYZE YOUR RESULTS
CONCLUSION


Intro

User feedback is the key to making any business successful, whether you’re launching an app, redesigning a website, refining product features, or making ongoing improvements to your customer experience.

    So how do you go about getting that feedback?

Remote user research is a fast, reliable, and scalable way to get the insights you need to improve your customer experience. Unlike traditional in-lab usability testing or focus groups, you can recruit participants and get results in a matter of hours, not days or weeks. You’re not limited by your location, schedule, or facilities. You’ll get candid feedback from real people in your target demographic in their natural environment: at home, in a store, or wherever they’d ordinarily interact with your product.

    Performing remote user research on a regular basis helps you:

    VALIDATE YOUR PRODUCT IDEAS BEFORE COMMITTING RESOURCES

    INFLUENCE DECISION MAKERS IN YOUR COMPANY TO MAKE IMPROVEMENTS

    ALIGN YOUR TEAM ON PRIORITIES

    IDENTIFY OPPORTUNITIES TO DIFFERENTIATE YOUR COMPANY FROM YOUR COMPETITION

    TRANSFORM YOUR COMPANY’S APPROACH TO CUSTOMER EXPERIENCE

Remote user research is fast, reliable, and scalable.

In this eBook, we’ll cover how to plan, conduct, and analyze your remote user research. If you’re new to research, you’ll learn the ropes of setting up a study and getting straight to the insights. And if you’re an experienced researcher, you’ll learn best practices for applying your skills in a remote study setting.


Define your objective

The first step toward gathering helpful feedback is setting a clear and focused objective. If you don’t know exactly what sort of information you need to obtain by running your study, it’ll be difficult to stay on track throughout the project.

    Ask yourself: What am I trying to learn?

You don’t need to uncover every usability problem or every user behavior in one exhaustive study. It’s much easier and more productive to run a series of smaller studies with one specific objective each. That way, you’ll get focused feedback in manageable chunks. Be sure to keep your objective clear and concise so that you know exactly what to focus on.

Example of a complex objective: Can users easily find our products and make an informed purchase decision?

This objective actually contains three very different components: 1. finding a product, 2. getting informed, and 3. making a purchase.

    Example of a better objective:

Can users find the information they need?

As you set your objective, think about the outcomes your stakeholders will care about. You may be interested in identifying how users interact with the onboarding flow, for example, but that won’t be helpful if your team needs to identify ways to improve the monetization process.

Keeping your objective front and center will help you structure your studies to gain insights on the right set of activities. It’ll also guide you in whom you recruit to participate in your study, the tasks they’ll perform, and what questions you should ask them.


Identify what and who you’re studying

Once you know your research objective, it’s time to determine a few specifics.

Determine your product and devices

First, think about what type of product you’re studying and which devices will be used to interact with it.

Are you looking for feedback on a prototype? A released or unreleased mobile app? A website? If you have multiple products you want to learn about, then you’ll want to set up multiple studies; it’s best to test one product at a time so you get the most focused feedback.

Keep the URL or app file handy when you set up your study. This will be where your participants start their study.

Consider which devices and/or browsers you’ll want to include in your study. If your product is available on multiple devices, we recommend testing your mobile experience with at least as many users as your desktop experience.

Select your test participants

Next, consider who you will need to recruit to participate in your study.

With remote user testing, you don’t need to search far and wide to find people to give you feedback. You simply select your demographics or other requirements when you set up your study.

PRODUCT TYPE                            STARTING PLACE
Prototype (in a tool like InVision)     Shareable URL
Static image                            Upload to Box or Dropbox; copy shareable URL
Website                                 URL
Released app                            Name of the app in the App Store/Play Store
Unreleased iOS app                      .IPA file
Unreleased Android app                  .APK file

You can user test digital products at any stage of development, from wireframe to prototype to live product.


    So how many test participants should you include?

It’s been demonstrated that five participants will uncover 85% of the usability problems on a website (https://www.nngroup.com/articles/why-you-only-need-to-test-with-5-users/), and additional users will produce diminishing returns. Resist the temptation to double or triple the number of users to uncover 100% of your usability problems. It’s more efficient (and easier) to run a study with five participants, make changes, run another study with another five users, and so on.

Finally, if you are looking for trends and insights beyond basic usability issues, you may want to include a larger sample size. We recommend five to eight participants per audience segment, so if you have three distinct buyer personas, you will want to include a total of 15-24 participants in your study.
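As a quick sanity check on sample size, the arithmetic is just the number of segments multiplied by the participants per segment. A minimal Python sketch (the persona names below are placeholders, not from any real study):

    # Rough sample-size estimate: 5-8 participants per audience segment.
    personas = ["persona A", "persona B", "persona C"]   # hypothetical buyer personas
    per_segment_low, per_segment_high = 5, 8

    low = len(personas) * per_segment_low
    high = len(personas) * per_segment_high
    print(f"Recruit {low}-{high} participants in total.")   # -> Recruit 15-24 participants in total.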

    Other considerations

Sometimes, you may be interested in a more elaborate study. These methodologies are also possible with remote user research:

Longitudinal studies: Watching users interact with a product over a period of time.

Competitive benchmarking studies: Comparing the user experience of several similar companies over time.

Beyond-the-device studies: Observing non-digital experiences, such as unboxing a physical item or making a purchase decision in a store.

Moderated studies: Leading users through your study one-on-one in real time.

Benchmarking studies: Tracking changes in your user experience on a monthly or quarterly basis.

To learn more about how and when to use each of these techniques, check out our UX Research Methodologies Guidebook (http://info.usertesting.com/UX-Research-Methodology-Guidebook.html).

In many cases, you can use basic demographic criteria (such as age, gender, and income level) to recruit your study participants. If you have a broad target market or if you’re simply looking to identify basic usability issues in your product, then it’s fine to use broad demographics. The key is to get fresh eyes on your product and find out whether people understand how to use it.

In other cases, you’ll want to get a closer match for your target audience, such as people who work in a certain industry, make the health insurance decisions for their household, own a particular kind of pet, etc. This is most appropriate if you’re interested in uncovering trends and opinions from your target market rather than pure usability. To recruit the right people for your study, you’ll set up a screener question. Screener questions are qualifying questions that allow only the people who select the “right” answer to participate in your study.
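To make the idea concrete, here is a loose Python sketch of the screener logic: a question, its answer options, and the subset of answers that qualify a participant. The question and criteria below are hypothetical, and this is not the UserTesting screener interface, just an illustration.

    # Hypothetical screener: only participants who pick a qualifying answer join the study.
    screener = {
        "question": "Who makes the health insurance decisions for your household?",
        "options": ["I do", "My partner does", "We decide together", "Not applicable"],
        "qualifying": {"I do", "We decide together"},
    }

    def qualifies(answer: str) -> bool:
        # A participant is admitted only if their answer is in the qualifying set.
        return answer in screener["qualifying"]

    print(qualifies("I do"))             # True  -> admitted to the study
    print(qualifies("Not applicable"))   # False -> screened out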

In rare instances, you may need to use your own existing customers for your study, such as to test out an advanced feature of your product that a non-customer wouldn’t understand.

    USERTESTING TIP:

The broader your demographic requirements are, the faster you’ll get your results, because more people will qualify for your study.


Build your test plan

Your test plan is the series of tasks and questions your participants will follow and respond to during the study.

The key to a successful study is a well-designed plan. Ideally, your test plan will result in qualitative and quantitative feedback. Ask users to perform specific tasks and then answer questions that will give you the insights you need in a measurable way.

    A task should be an action or activity that you want a user to accomplish at that time.

    Example of a task: Go through the checkout process as far as you can without actually making a purchase.

    Use a question when you want to elicit some form of feedback from a user in their own words.

    Example of a question: Was anything difficult or frustrating about this process?

    Using questions to get quantitative measurements:

Rating scale, multiple choice, and written response questions can be helpful when you’re running a large number of studies and you’re looking to uncover trends. You’ll be able to quickly glance at the resulting data rather than having to watch every video. From there, you can zero in on the outliers and the most surprising responses.

We recommend taking a few moments to think back to your objective and consider the best way to convey results to your team. When the results come back, how do you want that feedback to look? Will you want quantitative data so you can create graphs? Written responses that you can use to create word clouds? Verbal responses so you can create a video clip and share it with your team?

Establishing the type of deliverable you need from the outset will help you determine the right way to collect the information you need.
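For example, if you know up front that you want a word cloud from written responses, the raw input you’ll need is just a word-frequency count across those responses. A minimal Python sketch, with invented example answers, to show the idea:

    # Count word frequencies across written responses -- the raw input for a word cloud.
    from collections import Counter

    responses = [  # hypothetical answers to "What three words would you use to describe this app?"
        "simple clean slow",
        "clean modern confusing",
        "slow clean helpful",
    ]

    words = " ".join(responses).lower().split()
    print(Counter(words).most_common(3))
    # -> [('clean', 3), ('slow', 2), ('simple', 1)]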


Create a series of tasks

The key to collecting actionable feedback is getting your participants to take action by performing specific tasks. Tasks are the steps that guide the user from the start to the finish of your study. Along the way, you’ll ask them to perform a series of activities on your app, site, or prototype. As they work, they’ll be sharing their thoughts out loud.

When you’re creating tasks for your study, focus on the insights you want to gather about the specific objective you determined. If you have a lot of areas you want to test, we recommend breaking these up into different studies. In most cases, it’s best to keep your study around 15 minutes in length, so keep this in mind as you plan your tasks.

Consider using both broad and specific tasks in your tests

Broad, open-ended tasks help you learn how your users think. These types of tasks can be useful when considering branding, content, layouts, or any of the “intangibles” of the user experience. They give you the chance to assess usability, content, aesthetics, and users’ emotional responses. Broad tasks are also good for observing natural user behavior. If you give participants the space to explore, they will!

Here’s an example of a broad task: Find a hotel that you’d like to stay in for a vacation to Chicago next month. Share your thoughts out loud as you go.

Specific tasks help pinpoint where users get confused or frustrated trying to do something specific. They’re useful when you’re focused on a particular feature or area of your product and you need to ensure that the participants interact with it.

Here’s an example of a specific task: Use the search bar to find hotels in Chicago, specifying that you want a non-smoking double room from July 10-15. Then sort the results by the highest ratings.

The chart below highlights various question types and shows the kind of results that can be expected when those questions are put to use.

Verbal response
Example: Describe and demonstrate what, if anything, was most frustrating about this app.
Benefits: Produces qualitative feedback and makes great clips for a highlight reel.

Multiple choice
Example: Do you trust this company? (Yes / No)
Benefits: Great for collecting categorical responses. (These can be nominal, dichotomous, or even ordinal.)

Rating scale
Example: How likely are you to return to this site again? (“Not at all likely” to “Very likely”)
Benefits: Good for collecting ordinal variables (low, medium, high).

Written response
Example: What do you think is missing from this page, if anything?
Benefits: Good for running post-study analysis and collecting quotes for user stories.

    USERTESTING TIP:

When you’re not sure where to focus your test, try this: Run a very open-ended test using broad tasks like “Explore this app as you naturally would for about 10 minutes, speaking your thoughts aloud.” You’re sure to find areas of interest to study in a more targeted follow-up test.


Plan tasks using logical flow

The structure of your study is important. We recommend starting with broad tasks (exploring the home page, using search, adding an item to a basket) and moving in a logical flow toward specific tasks. The more natural the flow is, the more realistic the study will be, and the better your results will be.

Example of poor logical flow: Create an account > find an item > check out > search the site > evaluate the global navigation options.

Example of good logical flow: Evaluate the global navigation options > search the site > find an item > create an account > check out.

Ask yourself whether you’re interested in discovering the users’ natural journey or whether you need them to arrive at a particular destination. If it’s about the journey, give the participants the freedom to use the product in their own way. But if you’re more focused on the destination, guide them to the right location through your sequence of tasks.

If you’re interested in both the journey and the destination, give the users the freedom to find the right place on their own. Then, in subsequent tasks, tell them where they should be. You can even include the correct URL or provide instructions on how to navigate to that location.

Also, if you think a specific task will require the user to do something complicated or has a high risk of failure, consider putting that task near the end of the study. This will help prevent the test participants from getting stuck or off track right in the beginning, throwing off the results of your entire test.

    Make task instructions concise

People are notorious for skimming through written content, whether they’re interacting with a digital product or reading instructions for a user test.

One way to ensure that your participants read your whole task is to make the task short and your language concise.

Example of poor task wording: “Add the item to your cart. Now shop for another item and add it to your shopping cart. Then change the quantity of the first item from 1 to 2. Now go through the whole checkout process using the following information...”

Example of good, concise task wording: This single task should be split into at least four separate tasks:

    1. Add the item to your cart.

    2. Shop for another item and add it to your cart.

3. In the shopping cart, update the quantity of the first item you added from 1 to 2.

    4. Now proceed through the entire checkout process.


    WRITE CLEAR QUESTIONS

Once you’ve mapped out a sequence of tasks for users to attempt, it’s time to start drafting your questions. It’s important to structure questions accurately and strategically to get reliable answers and gain the insights that you really want.

    Tips for gathering factual responses

Don’t use industry jargon. Terms like “sub-navigation” and “affordances” probably won’t resonate with the average user, so don’t include them in your questions unless you’re certain your actual target customer uses those words on a daily basis.

Define new terms or concepts in the questions themselves (unless the goal of your study is to see if they understand these terms/concepts).

If you’re asking about some sort of frequency, such as how often a user visits a particular site, make sure you define the timeline clearly. Always put the timeline at the beginning of the sentence.

    BAD: How often do you visit Amazon.com?

    BETTER: How often did you visit Amazon.com in the past six months?

    BEST: In the past six months, how often did you visit Amazon.com?

After you’ve written the question, consider the possible answers. If the respondent could give you the answer “It depends,” then you should make the question more specific. It’s best to ask about first-hand experiences. People are notoriously unreliable in predicting their own future behavior, so ask about what people have actually done, not what they would do. It’s not always possible, but try your best to avoid hypotheticals and hearsay.

Example of asking about what someone will do or would do: How often do you think you’ll visit this site in the next six months?

Example of asking about what someone has done: In the past three months, how often have you visited this site?

    Example of hearsay: How often do your parents log into Facebook?

    Better example: Skip this question and ask the parents directly!


    Tips for gathering subjective data

Opinions are tricky. To accurately gather user opinions in a remote study, the name of the game is making sure that all participants are answering the same question. The stimulus needs to be standardized.

To make sure your participants are all responding to the same stimulus, give them a reminder of which page or screen they should be looking at when they respond to the question. For example, “Now that you’re in the ‘My Contacts’ screen, what three words would you use to describe this section?”

You’re not judging the intelligence of your respondents when analyzing their results, so make sure that your questions don’t make them feel that way. Place the fault on the product, not the test participant.

    Bad example: “I was very lost and confused.” (agree/disagree)

Good example: “The site caused me to feel lost and confused.” (agree/disagree)

    Be fair, realistic, and consistent with the two ends of a rating spectrum.

Bad example: “After going through the checkout process, to what extent do you trust or distrust this company?” I distrust it slightly ←→ I trust it with my life

Good example: “After going through the checkout process, to what extent do you trust or distrust this company?” I strongly distrust this company ←→ I strongly trust this company

    Subjective states are relative. “Happy” in one context can mean something very different from “happy” in another context. For instance:

Happy | Not happy
(Happy = the opposite of not happy)

Happy | Neutral | Unhappy
(Happy = the best!)

Very happy | Happy | Neutral | Unhappy | Very unhappy
(Happy = better than neutral, but not the best)

Plus, emotional states are very personal and mean different things to different people. Being “very confident” to a sheepish person may mean something very different from what it means to an experienced executive.


Avoid asking vague or conceptual questions. Break concepts up when you’re asking the questions and put them back together when you’re analyzing the results. For example, imagine that you want to measure parents’ satisfaction with a children’s online learning portal. Satisfaction is a vague and complex concept.

Is it the thoroughness of the information? Is it the quality of interaction between the teachers and students online? Is it the visual design? Is it the quality or difficulty of the assignments posted there?

Instead of asking about overall satisfaction, ask about all the criteria independently. When you’re analyzing the results, you can create a composite “satisfaction” rating based on the results from the smaller pieces.
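As a rough illustration of that last step (the criteria and ratings below are made up, and a simple mean is only one way to combine them), a composite score can be computed from the per-criterion ratings:

    # Combine per-criterion ratings (on a 1-5 scale) into one composite "satisfaction" score.
    criteria_ratings = {                       # hypothetical averages from a study
        "thoroughness of information": 4.2,
        "teacher-student interaction": 3.6,
        "visual design": 4.5,
        "quality of assignments": 3.9,
    }

    composite = sum(criteria_ratings.values()) / len(criteria_ratings)
    print(f"Composite satisfaction: {composite:.2f} / 5")   # -> Composite satisfaction: 4.05 / 5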

Avoid leading questions

With leading questions, you influence the participants’ responses by including small hints in the phrasing of the questions themselves. More often than not, you’ll subconsciously influence the outcome of the responses in the direction that you personally prefer.

Leading questions will result in biased, inaccurate results, and you won’t actually learn anything helpful. In fact, they might lead you to make flawed decisions. While they may lead to the answer you “want” to hear, they don’t help your team make improvements, so don’t be tempted to use leading questions!

    BAD: “How much better is the new version than the original home page?”

    GOOD: “Compare the new version of the home page to the original. Which do you prefer?”

    USERTESTING TIP:

If you’re asking about task success, remember to define what a success is. If the previous task instructs a user to find a tablet on Amazon and add it to the cart and you ask “Were you successful?”, be sure to clarify whether you are asking about finding a tablet or adding it to the cart.


    Best practices for different question types

Written response questions result in short answers that can be used to collect impressions and opinions.

• Ask questions that can be answered in a couple of words or sentences at most. Typing long responses can become frustrating for participants, especially on mobile devices. Good example: What three words would you use to describe this app?

• Use these questions sparingly. The greatest value in remote user research usually comes from hearing participants speak their thoughts aloud naturally. Written response questions are good for getting a snapshot of the users’ impressions, but if you overuse them, the quality of the responses will often degrade after several questions.

• Create a word cloud from all your users’ responses to quickly see which words they’re using to describe their experience.

Multiple choice questions are great for collecting yes/no responses or answers that can’t be applied to a scale.

• Multiple choice responses should be exhaustive, meaning that every possible response should be included in your response options. At the same time, you want a manageable number of responses. We recommend two to six response options per question. If you suspect that there are just too many options, do your best to guess which options will be mentioned most, and then include an “Other” option.

• Ask only one question at a time. Don’t do this: “Did you find the tablet you were looking for, and was it where you expected to find it? Yes/No” Instead, break it up into two separate questions.

• Choose mutually exclusive responses, since users will only be able to select one answer. If it’s possible for more than one answer to be true, include a “More than one of these” option.

Rating scale questions allow you to measure participants’ reactions on a spectrum. They’re a great way to benchmark common tasks and compare the results with a similar test run on your competitor’s product.

• Use relative extremes: Make the negative feeling have the lowest numerical value and the positive answer have the highest numerical value. In other words, make difficult = 1 and easy = 5, not the other way around.

• Stay consistent throughout the test! Use the same end labels and the same wording when you’re repeating a question.

• Consider asking “why?” after a multiple choice or rating scale question. Then, when you get your results back, you can go back and hear the participants discuss their answers or responses. Asking “why?” also prompts people to think critically about their answer.


Launch a dry run

Before you launch your study to all of your participants, we recommend conducting a dry run (sometimes called a pilot study) with one or two participants. This will help you determine whether there were any flaws or confusing instructions within your test. The primary goal is to make sure that people who read your tasks and questions interpret them in the way they were meant to be understood.

Here’s a good structure for doing this:

1. Release your study to just one or two participants.

2. Listen to them as they process each task and question, and take note of any trouble they encounter while trying to complete their first pass.

3. Once you’ve made adjustments and you feel pretty good about your test, we recommend having one more person run through the test to ensure you’ve cleared up any confusing or leading questions.

If you set your study up in UserTesting, you can easily run one test, review it, and then add more participants to the same study. Or, if you need to make changes, you can create a similar study with the click of a button and then edit the tasks or questions as needed.

At the end of every study, it’s a good practice to review your notes to identify any weak portions of your test and rewrite those tasks or questions for next time. This will save you time in the long run, especially if you’re testing out prototypes first and then running more people through a similar test once you’ve pushed out the new changes to your website or app.

Another reason to spend time getting your questions just right is if you plan to run a benchmark study. When you run an ongoing review of your website or app over time, you shouldn’t change your questions along the way, or you’ll risk skewing the results and ruining any chance of an accurate comparison.


Beware of errors

There are several common errors that can skew your data. The good news is that if you’re aware of them, you can avoid them. Below, we’ve outlined the most common error types along with some suggestions on reducing your chances of running into these problems in your studies.

SAMPLING ERROR

Sampling error occurs when you recruit the wrong participants for your study. When this happens, you may end up with a bunch of opinions from people outside your target market, and therefore they aren’t very helpful.

For example, perhaps your target market includes SaaS sales executives, so you tried to recruit people who work in software sales, but the actual participants ended up being electronics store retail associates.

SOLUTION:

Ask clear and precise screener questions to help qualify potential study participants. If you’re uncertain whether your screener questions are accurately capturing the right users, do a dry run with a small handful of participants. As the first step of your study, have them describe aloud what they do for a living (or how familiar they are with your industry, or whatever criteria will help you determine whether they’re actually your target market).

RESEARCHER ERROR

With this type of error, participants misunderstand a task or question because of the way it was worded. Study participants will often take instructions literally and search for the exact terminology that you include in your tasks.

SOLUTION 1:

Try out your study with several people and monitor their reaction to your questions. You’ll quickly learn whether or not your questions are accurately communicating what you want them to.

SOLUTION 2:

Be aware of your target audience and ask questions in a manner that naturally resonates with them. Use plain language. Slang, jargon, regionalisms, and turns of phrase can easily confuse participants during a study.


    RESPONDENT ERROR

In this case, the participants are giving you inaccurate or false information. There are several reasons that this may occur:

• They don’t trust you with their personal information.
• They’re uncomfortable sharing details of their personal lives.
• They’ve become fatigued and have resorted to bogus responses to get through the test quickly.
• They don’t understand whether you’re looking for their opinion or the “right” answer.

    SOLUTION 1:

    Reassure participants that their responses won’t be shared publicly.

    SOLUTION 2:

At the very beginning of your study, be sure to explain that if they have to fill out any personal information, their responses will be blurred out to protect their identity.

    SOLUTION 3:

Keep your test short (around 15 minutes, in most cases) so you don’t fatigue your participants.

    FAULTY PARTICIPANT RECALL

These errors occur when a participant is unable to correctly remember the event you’re asking about. This happens when your question asks them to recall something too far in the past or in too much detail.

    SOLUTION:

Do a gut check. Can you remember the specifics of something similar? If not, revise your question.

    SOCIAL DESIRABILITY ERROR

With this error, participants feel pressured to give a response that they think is the most socially acceptable, even if it’s not true. For example, if you ask people about their tech-savviness, they may over-report their abilities because they think it’s “better” than not being tech-savvy.

    SOLUTION 1:

When you’re looking for test participants, be sure to explain that you value the skillsets or demographic characteristics you’re requesting. Emphasize that you hope to learn how your product will be useful or beneficial to people like them.

    SOLUTION 2:

    Reassure your participants that they’ll remain anonymous.

ACQUIESCENCE BIAS

When acquiescence occurs, the participant tells you what they think you want to hear out of fear of offending you. For example, they may dislike your app but not want to make you feel bad about your work. This is more common in moderated tests than unmoderated tests.

SOLUTION 1:

If you’re too close to the product (for example, if you’re the designer), you may want to use an impartial moderator to run your tests for you. Skilled researchers can help ensure impartiality, reducing barriers to sharing the truth.

    SOLUTION 2:

Reassure participants that you value their honesty and that none of their feedback will be taken personally.


Analyze your results

Once you’ve launched your study and gotten your results back, it’s time to get to work on analysis. Before you do anything, start by thinking back to your objective. The objective is important and easy to lose sight of.

Be sure not to get distracted by irrelevant findings, and stay focused. (However, you may choose to keep a side record of interesting findings that aren’t related to the objective but may prove useful later.) With your objective in mind, you can jump into the data and videos.

    WHAT TO LOOK FOR

When we’re reviewing full recordings of user tests, we often make annotations along the way. These annotations are captured alongside the video, so you can easily jump back to interesting moments later on.

As you review the data from your questions, keep an eye out for any responses that are outside the norm. If nine participants thought a task was fairly easy but one thought it was extremely difficult, investigate further by watching the video that goes along with the unusual response. What about that participant’s experience was different?

Take note of user frustrations as well as items that users find particularly helpful or exciting. These become discussion points for design teams and can often help to uncover opportunities for improvements to future releases. It’s important to identify the things that people love, too, so that you don’t inadvertently try to “fix” something that’s not broken when trying to improve the user experience of your product.

    Explore correlations

When reviewing rating scale and multiple choice responses, look for correlations between the length of time spent on a task and negative responses from participants. This is a starting point for identifying areas of the user experience that may be particularly troublesome.
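If you export your results, one simple way to do this is to compute the correlation between time on task and the post-task rating. A rough pandas sketch; the column names and numbers are hypothetical, not the actual UserTesting export format:

    # Correlate time on task with post-task ease ratings (1 = difficult, 5 = easy).
    import pandas as pd

    df = pd.DataFrame({                       # hypothetical per-participant results for one task
        "task_time_sec": [45, 160, 62, 210, 75],
        "ease_rating":   [5, 2, 4, 1, 4],
    })

    # A strongly negative value suggests that longer task times coincide with lower ratings.
    print(df["task_time_sec"].corr(df["ease_rating"]))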

It’s important to remember, however, that correlation doesn’t tell the whole story. Don’t jump to conclusions based on correlations alone; be sure to watch the relevant moments of the videos to hear what participants are thinking.


    HARNESS THE POWER OF THE SPREADSHEET

It’s easy to gather, analyze, and share your findings right within the UserTesting dashboard. You’ll be able to see time on task, responses to metrics questions, and more at a glance.

However, if you find that you want to perform more detailed analysis, you can download the data from your study into an Excel spreadsheet. This can be especially helpful for compiling findings from multiple studies side by side, breaking down patterns in studies with lots of participants, and comparing responses from different demographic groups.
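As a rough sketch of that last kind of breakdown, here is how a downloaded spreadsheet might be grouped by a demographic column in pandas. The file name and column names are hypothetical, not the actual export layout:

    # Compare average ratings and task times across demographic groups
    # after downloading study results to a spreadsheet.
    import pandas as pd

    df = pd.read_excel("study_results.xlsx")   # hypothetical export; assumes the columns below exist

    summary = df.groupby("age_group")[["ease_rating", "task_time_sec"]].mean()
    print(summary)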

    SHARE YOUR FINDINGS

Research is meant to be shared! Once you’ve established your findings, it’s time to present them to your stakeholders. In most cases, this means hosting a meeting with the involved departments and sharing the relevant findings in a presentation.

Here are a few ideas for successfully relaying your research findings:

• Create a highlight reel of video clips.

• Use charts to represent any interesting metrics data from your questions.

• Back up your claims with user quotes from the studies.

• Use a word cloud to display the most common words used throughout your study.

• Be careful not to place blame on any of your teammates. If you have a lot of negative findings, choose your words carefully. “Users found this feature frustrating” is much easier to hear than “This feature is terrible.”

• Encourage team members to ask questions about the findings, but remind them not to make excuses. They’re there to learn about the customer experience, not to defend their design decisions.

Sharing research findings with stakeholders and colleagues in multiple departments can be a great way to promote a user-centered culture in your company.

Sharing your research findings will help your company stay focused on the user experience.


Conclusion

We encourage you to make remote user research a standard part of your development process. User feedback is crucial for designing and developing a successful product, and remote research makes it fast and easy to get that feedback.

With a clear objective, the right tasks, and carefully planned and worded questions, you’ll gather useful, actionable insights.

And whether you plan on conducting your research on your own or enlisting the help of UserTesting’s expert research team, you’re well on your way to improving your customer experience.


    Create great experiences

UserTesting is the fastest and most advanced user experience research platform on the market. We give marketers, product managers, and UX teams on-demand access to people in their target audience who deliver audio, video, and written feedback on websites, mobile apps, prototypes, and even physical products and locations.

2672 Bayshore Parkway, Mountain View, CA 94043
www.usertesting.com | 1.800.903.9493 | support@usertesting.com
