Wednesday, June 18, 2008

Day 2, Morning - Usability Week San Francisco

This morning's session was delivered by Kara Pernice, NNG's Director of Research, who heads the company's East Coast operations. Her background, like Amy Schade's, is chock full of practitioner experience, usability advocacy, and support for NNG's seemingly lucrative report-writing business. Her focus today was Analyzing and Reporting Findings. Key points that stuck with me included:
  • Affinity diagramming can be a great prioritization tool. Basically, you get a team together, have each person quietly group the issues into categories on their own, and then, as a group, vote/assign ratings (high = 1, medium = 2, low = 3). (I've sketched how the vote tally might work after this list.)
  • The people best able to identify usability issues, in descending order of effectiveness, are: 1) a person with both product knowledge and usability expertise; 2) a person with usability expertise only; and 3) a person with only product knowledge.
  • Avoid mixing criteria during a usability ratings exercise. Focus only on the criticality of the usability issues (don't factor in business priorities, time to fix, etc. -- do that later). A usability issue is still an issue even if other factors may ultimately make it a low priority.
  • Assigning severity ratings to usability issues involves three parameters: Impact, Frequency, and Persistence (is there a learnable work-around?). (A simple scoring sketch follows this list.)
  • When reporting results, don't say "20% of users had this problem" if that 20% is one user. Say "1 user" instead. Otherwise, the "numbers" people will just think you're an idiot.
  • Usability reports should include what happened, why it happened (interpretations), simple quantitative data (e.g., pass/fail rates), positive and negative findings, and recommendations.
  • During testing, if users have nothing to say (while thinking out loud), it could be because they aren't having any major problems -- a good thing! (In my experience, it might be good to check, though... sometimes users forget to think out loud. Simply ask, "What are you thinking?" if it seems too quiet.)
  • NNG's heuristic usability reports tend to be 100 pages or more; they are typically longer than usability test reports because the expert reviewers are able to identify more issues than users surface in testing. (Subtle sales pitch?)
  • Usability issues should be tracked in a database. (Yes, some vehicle for communicating and tracking issues -- similar to QA issues -- amongst a large, dispersed team is helpful. If usability issues only exist in a report, there is a greater likelihood that they'll collect dust rather than be addressed.)
  • Only after assigning usability ratings to issues should you assign other ratings, like time/resources to fix. You can use a grid to total up the ratings and re-prioritize with consideration for those other factors. (Also sketched after this list.)
  • When presenting findings, consider including user quotes, annotated screenshots, videos, photos, and charts/graphs.
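
To make the affinity-vote tally concrete, here's a minimal sketch in Python. The issue names, the per-member votes, and the use of a simple average are all my own illustration, not anything prescribed in the session:

    # Sketch of tallying affinity-diagram votes: each team member privately
    # rates each issue (high = 1, medium = 2, low = 3), then the group
    # averages the votes. Issue names and votes are made up for illustration.
    from statistics import mean

    votes = {
        "Checkout button hidden below the fold": [1, 1, 2],  # one vote per member
        "Tooltip text truncated on long labels": [3, 2, 3],
        "Error messages use internal jargon": [2, 1, 2],
    }

    # Lower averages bubble to the top, matching the high-1/low-3 convention.
    for issue, ratings in sorted(votes.items(), key=lambda kv: mean(kv[1])):
        print(f"{mean(ratings):.1f}  {issue}")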
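
And here's one way the three severity parameters could be combined into a single score. Summing three 1-3 ratings is my own simplification; Kara gave us the parameters, not a formula:

    # Sketch of severity scoring from the three parameters in the session:
    # Impact, Frequency, and Persistence. The 1-3 scale and the summing are
    # assumptions for illustration, not NNG's actual method.
    from dataclasses import dataclass

    @dataclass
    class UsabilityIssue:
        description: str
        impact: int       # 1 = severe impact, 3 = minor
        frequency: int    # 1 = affects many users, 3 = affects few
        persistence: int  # 1 = no learnable work-around, 3 = easy work-around

        @property
        def severity(self) -> int:
            # Lower totals mean more severe, consistent with high = 1, low = 3.
            return self.impact + self.frequency + self.persistence

    issues = [
        UsabilityIssue("Checkout button hidden below the fold", 1, 1, 2),
        UsabilityIssue("Tooltip text truncated on long labels", 3, 2, 3),
    ]

    # Most severe issues first (smallest totals).
    for issue in sorted(issues, key=lambda i: i.severity):
        print(issue.severity, issue.description)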
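
Finally, the prioritization grid: once severity is settled, add the other ratings as extra columns and total across each row. The particular columns here (effort to fix, business priority) and the equal weighting are assumptions for illustration:

    # Sketch of the prioritization grid: usability severity first, then other
    # ratings added as columns and totaled per row. Lower totals = fix sooner.
    grid = [
        # (issue, usability severity, effort to fix, business priority), all 1-3
        ("Checkout button hidden below the fold", 1, 2, 1),
        ("Tooltip text truncated on long labels", 3, 1, 3),
        ("Error messages use internal jargon", 2, 1, 2),
    ]

    for row in sorted(grid, key=lambda r: sum(r[1:])):
        issue, *ratings = row
        print(f"{sum(ratings):>2}  {issue}")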

I'll share more about the Day 2 afternoon session with Jakob soon.

1 comment:

Anonymous said...

Sounds like you're getting a lot out of this, Jodi.

As a fellow follower and practitioner of Nielsenism, I can't wait to get the full report.

John