Interaction design: building an ePortfolio

Evaluating designs

Evaluate your work as soon and as often as constraints allow.
(Jones and Marsden, 2006, p. 196)

NOTE: It's recommended that all group members complete the two tabs.

 

What, why, where, when

“In practice, evaluation is needed many times during the development process. In early iterations, the choice can be made quickly by the team members themselves, or the captive “clients” who are assigned to the process.” (Moggridge, 2007, p. 735)

Defining evaluation

Evaluation in interaction design means finding out how users respond to the interactive experience you have designed, and what modifications may still be needed to make it work.

  • links
    So what happens when we don’t evaluate? Tog illustrates how the results of US elections were affected by a lack of evaluation. Begin with this 2001 analysis of the Florida ballot and the implications for us as developers of interactive systems.
  • text
    Read your text to find out what, why, where and when to evaluate. Complete the activity in the PowerPoint to define the terms associated with evaluation.

Approaches

There are three main evaluation approaches: usability testing, field studies and analytical evaluation. You have already been introduced to the methods that these approaches use: observing users, and asking users through interviews and questionnaires.

  • text
    Explore these three main approaches as outlined in your text.
  • Note particularly how these approaches can be combined to gain a broad understanding of design issues, and note the summary tables used to describe the characteristics of different evaluation approaches.
  • If you have already done some evaluation of your design, categorise the activities according to this table.
  • Remember that opportunistic or ‘quick and dirty’ approaches tend to be the forms of evaluation used early in the design process.

Case Studies

To help you understand how different evaluation techniques can be used, your text (Preece, Rogers and Sharp, 2007) provides six case studies that outline different types of evaluation in use.

  • text
    Read the text or view the PowerPoint of the cases.
  • For each of the six case studies, think about the role of evaluation in the design of the system and note:
    • the artefacts that were evaluated,
    • when during the design process they were evaluated,
    • which methods were used,
    • what was learned from the evaluations,
    • other issues:
      • how was the design advanced after each evaluation?
      • what were the main constraints that influenced the evaluation?
      • how did the use of different approaches and methods build on and complement each other to give a broader picture of the evaluation?
      • which parts of the evaluation were directed at usability goals and which at user experience goals?
  • Use the comparison document to help you organise your ideas.

Evaluation studies

While ‘quick and dirty’ evaluations can occur when you begin testing your designs, more rigorous evaluations are required as your prototyping progresses. To support this, your text provides a framework that should be used for more formal evaluation studies.

DECIDE Framework

  • text
    Review your text and the supporting PowerPoint to understand what’s involved in the DECIDE framework; a small sketch of a DECIDE-style plan follows this list.
  • How closely does this relate to the process you used for the Design research?
  • Will you need to use this framework for any of your designs?
  • links
    To check your understanding of the framework, apply it to any of the three studies in links.
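
As a way of organising your plan before you run a study, you could capture each DECIDE step as a simple checklist. The Python sketch below is one illustration only: the step wording follows the textbook's framework, and the sample answers for an ePortfolio prototype are hypothetical placeholders for your own project details.

    # Sketch of a DECIDE-style evaluation plan (illustrative only).
    # The sample answers for an ePortfolio prototype are hypothetical.
    decide_plan = {
        "Determine the goals": "Find out whether first-time users can add an artefact to the ePortfolio",
        "Explore the questions": "Can users locate the upload function? Where do they hesitate?",
        "Choose the approach and methods": "Informal usability test with observation and a short post-task interview",
        "Identify the practical issues": "Five volunteer participants, 20 minutes each, paper prototype",
        "Decide on the ethical issues": "Obtain consent, anonymise notes, allow withdrawal at any time",
        "Evaluate the data": "Summarise the problems found, rate their severity, feed them into the next iteration",
    }

    # Print the plan as a quick checklist before the session.
    for step, answer in decide_plan.items():
        print(f"{step}: {answer}")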

Usability testing & field studies

A common approach to design evaluation is usability testing, which assesses how usable a product is either in a laboratory setting or in its context of use.

  • text
    If this approach is likely to be useful to test the usability of your prototypes, explore the approach in more detail in your text or the supporting PowerPoint.
  • As you read, note particularly the difference between experimental usability testing, where a rigorous scientific approach is used, and more general usability testing, which usually focuses on identifying usability issues. A small sketch of how quantitative test data might be summarised follows this list.
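
If you do collect quantitative data during a usability test, even a simple summary of completion rate and time on task makes results easier to compare across design iterations. The Python sketch below assumes hypothetical session records; the participants, tasks and timings are invented purely for illustration.

    # Summarise hypothetical usability-test sessions (illustrative data only).
    # Each record: (participant, task, completed?, time in seconds).
    sessions = [
        ("P1", "add artefact", True, 95),
        ("P2", "add artefact", True, 120),
        ("P3", "add artefact", False, 240),
        ("P1", "share page", True, 60),
        ("P2", "share page", True, 75),
        ("P3", "share page", True, 50),
    ]

    # Report completion rate and mean time on task for each task.
    for task in sorted({t for _, t, _, _ in sessions}):
        results = [(done, secs) for _, t, done, secs in sessions if t == task]
        completion = sum(done for done, _ in results) / len(results)
        mean_time = sum(secs for _, secs in results) / len(results)
        print(f"{task}: {completion:.0%} completed, mean time {mean_time:.0f}s")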

Analytical evaluation

While usability testing involves users as participants in the evaluation study, many analytical evaluation approaches rely on expert reviews rather than interaction with product users.

  • Inspections, heuristic evaluations and walkthroughs rely on experts (people with a background in HCI or usability techniques) to review the design prototype against a range of criteria.
  • text
    If this approach is likely to be useful to evaluate your design prototypes, explore the approach in more detail in your text or the supporting PowerPoint.
  • As you read, note particularly the advantages and disadvantages of this approach, and the site listed where you can design your own heuristic evaluation.
  • Predictive modelling is another approach that evaluates a system without testing with users; instead, expected user performance is predicted from a theoretical model of the interaction.
  • Your text provides an overview of three predictive models: GOMS, the Keystroke-Level Model, and Fitts’ Law. Review these to see if they can help you evaluate your designs; a small worked example of Fitts’ Law follows this list.
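
As one small example of predictive modelling, Fitts’ Law predicts the time to point at a target from its distance D and width W: T = a + b log2(D/W + 1), where a and b are constants found empirically for a particular device. The Python sketch below uses illustrative values for a and b to compare two hypothetical button placements.

    import math

    # Fitts' Law: predicted pointing time T = a + b * log2(D/W + 1).
    # a and b are device-dependent constants; the values here are illustrative only.
    def fitts_time(distance, width, a=0.1, b=0.15):
        return a + b * math.log2(distance / width + 1)

    # Hypothetical comparison: a small, distant button versus a large, nearby one.
    print(f"{fitts_time(distance=400, width=20):.2f} s")  # small, far target: slower
    print(f"{fitts_time(distance=100, width=60):.2f} s")  # large, near target: faster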

Evaluate your designs

You may have already performed some formative evaluations on your designs and received feedback about aspects that require further development.

  • Evaluate one of your low-fidelity prototypes with an informal usability test and another with a heuristic evaluation. Ask members of other groups to perform the heuristic evaluation of one of your prototypes.
  • Note the difference in feedback you receive from each type of evaluation. Which is more useful?
  • With your high-fidelity prototype, use the DECIDE framework to plan and conduct an evaluation using whichever approach you think is most relevant to your needs.
