
Page 1: Improving Heuristic Evaluations

Assess the Assessment

on improving a heuristic evaluation

Sourav Sarkar

Page 2: Improving Heuristic Evaluations

OBSERVATION

EVALUATION & CATEGORIZATION

DOCUMENTATION

IMPLEMENTATION

Heuristic Evaluation – a journey

Heuristic Evaluation is an important part of any interface evaluation and redesign exercise. This discount usability testing method can provide important information that helps mitigate usability issues.

Scenario

An interaction designer is carrying out a Heuristic Evaluation. The designer has the interface and Nielsen’s 10 principles at their disposal.

Observes the interface. This could be a single webpage or a complete website. Navigates through the website and uses its functionality first-hand. In the absence of a working prototype, falls back on visual inspection alone.

Figures out the usability problems and categorizes them. This exercise can span several unrelated webpages. Marks each problem against one of the 10 principles and further categorizes it by the severity of the issue.

Documents the problems and prepares a usability report. Describes each issue, places screenshots pointing out the specific occurrence(s) of the issue, and recommends solutions to these specific problems (a sketch of one way to record such findings follows these steps).

Uses the findings to bring about improvements in the redesign phase. Implements the previously recommended solutions and also addresses other usability problems throughout the site.
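As an illustration of the categorization and documentation steps above, here is a minimal sketch in Python of one way a finding could be recorded; the field names and the 1–4 severity scale are assumptions for illustration, not part of the original deck.

```python
from dataclasses import dataclass
from enum import IntEnum
from typing import Optional

class Severity(IntEnum):
    # Assumed 1-4 scale, loosely modelled on common severity ratings
    COSMETIC = 1
    MINOR = 2
    MAJOR = 3
    CATASTROPHE = 4

@dataclass
class UsabilityIssue:
    """One finding from a heuristic evaluation (illustrative structure only)."""
    description: str                  # what the evaluator observed
    heuristic: str                    # which of Nielsen's 10 principles is violated
    severity: Severity                # categorization by severity of the issue
    page_url: str                     # where the issue occurs
    screenshot: Optional[str] = None  # path or link to an annotated screenshot
    recommendation: str = ""          # suggested fix for the redesign phase

# Example record, as it might appear in the usability report
issue = UsabilityIssue(
    description="Form gives no feedback after submission",
    heuristic="Visibility of system status",
    severity=Severity.MAJOR,
    page_url="https://example.com/checkout",  # placeholder URL
    recommendation="Show a confirmation message and disable the submit button",
)
print(issue)
```

Keeping every finding in one consistent structure like this makes the later severity counts and parameter ratings straightforward to compute.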

Page 3: Improving Heuristic Evaluations


Heuristic Evaluation – problems and possible solutions

There is ongoing debate about the effectiveness of a Heuristic Evaluation. Merely finding a large number of problems with the interface does not make the method superior or more effective; the quality of the problems themselves is important.

What are the problems…

• Usability problems occur in context and not sporadically. Evaluating discrete pages will highlight issues in isolation, issues that might not crop up at a task-flow level.

• It is a tedious process to scan through a full website to pick out pointers from various pages. On the other hand, without a comprehensive look at the full website, it would not be possible to point out the problems in their entirety.

What can be improved…

• Choose screen-flows based on the primary task-flows in the website; for instance, pick out the top 3 most important flows and then evaluate each screen within them.

• Have multiple evaluators for the portal. That gives a better chance of uncovering different problems within a short time-span, and allows more of the website to be covered than a single evaluator can manage (a rough estimate of the effect is sketched below).
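As a rough illustration of why multiple evaluators help, here is a small Python sketch using the commonly cited estimate that a single evaluator finds about 31% of the usability problems; the exact proportion varies from study to study, so treat the figure as an assumption.

```python
# Estimated share of usability problems found by n independent evaluators,
# using the widely quoted model: found(n) = 1 - (1 - L) ** n
L = 0.31  # assumed probability that one evaluator detects a given problem

for n in (1, 2, 3, 5):
    found = 1 - (1 - L) ** n
    print(f"{n} evaluator(s): ~{found:.0%} of problems found")
```

Under this assumption three evaluators already uncover roughly two thirds of the problems, which is why a small team evaluating the top task-flows is usually a better investment than one evaluator scanning the whole site.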

Page 4: Improving Heuristic Evaluations

Heuristic Evaluation – problems and possible solutions

What are the problems…

• It is not possible to find domain-specific usability problems without domain knowledge.

• A simple heuristic evaluation based on Nielsen’s 10 principles is not enough to categorize all usability problems.

• Even in simple evaluations, the number of issues pointed out will vary depending on the capability and experience of the evaluator.

• Not all findings indicate real usability problems. The actual number (quantitative data) of usability problems detected through user research might not coincide with the evaluation findings.

What can be improved…

• Have SMEs set the heuristics for the business units (BUs) they hold expertise in. Being an expert in both usability and the business domain helps the evaluator find the stark problems and not only the minor ones.

• Have different evaluation parameters (heuristics) based on the type of interface/business. This categorization can be standardized. Also use other types of evaluations, e.g.:
  • Expert Evaluations
  • Connell & Hammond's 30 Usability Principles
  • Gerhardt-Powals' 10 Cognitive Engineering Principles
  • Bastien and Scapin’s set of 18 Ergonomic Criteria
  • Smith & Mosier's 944 guidelines for the design of user interfaces

• Allocate evaluators based on their evaluation experience index, a standard metric indicating the capability of the evaluator.

• Conduct user testing concurrently with heuristic evaluations. This makes sure no false alarms (testing says there is no problem) or misses (testing finds an undetected problem) are left unaccounted for (a sketch of this cross-check follows the list).
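To make the concurrent-testing cross-check concrete, here is a minimal Python sketch; the issue labels are hypothetical and the comparison simply uses set operations.

```python
# Issues flagged by the heuristic evaluation vs. issues observed in user testing,
# identified by short hypothetical labels.
heuristic_findings = {"unclear-error-msg", "hidden-search", "slow-checkout", "tiny-font"}
user_testing_findings = {"unclear-error-msg", "slow-checkout", "confusing-nav"}

confirmed = heuristic_findings & user_testing_findings     # found by both methods
false_alarms = heuristic_findings - user_testing_findings  # flagged, but not reproduced in testing
misses = user_testing_findings - heuristic_findings        # found in testing, overlooked by the evaluation

print("Confirmed issues:     ", sorted(confirmed))
print("Possible false alarms:", sorted(false_alarms))
print("Misses to add:        ", sorted(misses))
```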

Page 5: Improving Heuristic Evaluations

Heuristic Evaluation – problems and possible solutions

What are the problems…

• Sometimes it is difficult to relate to snippets or screenshots in the report, which makes the findings hard to act upon.

• Just pointing out the number of problems in terms of their severity or Nielsen’s 10 principles does not give enough information on where to focus the redesign.

What can be improved…

• Where possible, have hyperlinks to the respective pages in the portal alongside the screenshots. This allows the problem to be displayed instantly in a better context.

• Have more specific contextual pointers/parameters such as content, navigation, branding, adherence to business flows, etc., and rate the interface on each of these parameters to provide an at-a-glance idea of the key affected areas (a sketch of such a roll-up follows the list).
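To illustrate the contextual-parameter idea, here is a small Python sketch that rolls individual issue ratings up into an at-a-glance score per parameter; the parameter names come from the bullet above, while the 1–5 rating scale and the sample data are assumptions.

```python
from collections import defaultdict
from statistics import mean

# Each finding is tagged with the contextual parameter it affects and a
# 1 (poor) to 5 (good) rating for that occurrence -- an assumed scale.
findings = [
    ("navigation", 2), ("navigation", 3),
    ("content", 4),
    ("branding", 5),
    ("adherence to business flows", 2), ("adherence to business flows", 1),
]

scores = defaultdict(list)
for parameter, rating in findings:
    scores[parameter].append(rating)

# Average rating per parameter: the lowest scores show where to focus the redesign.
for parameter, ratings in sorted(scores.items(), key=lambda kv: mean(kv[1])):
    print(f"{parameter:30s} {mean(ratings):.1f}")
```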

Page 6: Improving Heuristic Evaluations

Heuristic Evaluation – problems and possible solutions

What are the problems…

• The evaluation serves only as an appendix and not as a business document. It cannot be used as a basis for redesign time estimates.

• There is no guarantee that the evaluation points are actually being rectified through the redesign. This is probable since the evaluator might not be the one redesigning the interface.

• More often than not, there is no evaluation of the evaluation. The pointers are not checked before being handed out.

What can be improved…

• Give weightage to issues and form a CI-wide issue-rating scale for the UI to help with time estimations for the redesign.

• Check the effectiveness of the redesign through a next level of evaluation by other evaluator(s).

• Have SMEs holding domain knowledge review and validate the evaluation.

Page 7: Improving Heuristic Evaluations

Extended Strategies

• Use at least one novice and one expert evaluator. The expert can even set the parameters or review the evaluation.

• Test the evaluators against real usability tests to gauge their accuracy, and rate them accordingly. Repeat this exercise/rating every 6 months.

• Provide weightage to the parameters on which the ratings are being done. These weightages should be agreed upon in discussion with multiple people (stakeholders, SMEs, usability experts).

• When a final rating is arrived at, use this rating to communicate the effort or complexity required for the redesign (a sketch of such a weighted rating follows this list).
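As a minimal sketch of the weighted final rating, the Python snippet below combines per-parameter ratings with stakeholder-agreed weights and maps the result to a coarse effort band; the weights, scores, and thresholds are illustrative assumptions, not values from the deck.

```python
# Weights agreed with stakeholders, SMEs and usability experts (illustrative values).
weights = {"content": 0.2, "navigation": 0.3, "branding": 0.1, "business flows": 0.4}

# Ratings per parameter from the heuristic evaluation, on an assumed 1-5 scale (5 = best).
ratings = {"content": 4.0, "navigation": 2.5, "branding": 5.0, "business flows": 2.0}

# Weighted overall rating of the interface.
overall = sum(weights[p] * ratings[p] for p in weights) / sum(weights.values())

# Map the rating to a coarse redesign-effort band (thresholds are assumptions).
if overall >= 4.0:
    effort = "low effort: mostly cosmetic fixes"
elif overall >= 3.0:
    effort = "medium effort: targeted redesign of weak flows"
else:
    effort = "high effort: broad redesign needed"

print(f"Overall rating: {overall:.2f} -> {effort}")
```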

Page 8: Improving Heuristic Evaluations

Thank You