Here are some carousel usability guidelines I'm documenting for a work project. The guidelines are based on usability tests I've recently observed. I've tried to find examples out there on the web to illustrate key points.
- On initial page load, display some carousel movement (like Apple does) to draw attention to the fact that there's more content available.
- If using arrows, include them on both sides of the carousel even if one is initially disabled. The visibility of the two arrows will help users recognize the component as a carousel.
- If one arrow is initially disabled, make sure it appears that way by graying it out, while making the active arrow "pop".
- Highlight arrows and the carousel thumbnails on rollover to help communicate that they're clickable.
- Make arrows large enough to be easily noticed and clicked on. High button/background contrast also helps.
- Make the carousel images large enough so users can get a sense of what they represent.
- Consider rotating the featured content automatically (like PopPhoto.com does), but give users control by also offering pause, previous, and next options.
- Use clickable text or image links to represent content within the carousel rather than cryptic numbers or icons that tell users little.
- Consider displaying the number of items within the carousel like GameDaily.com to help users understand when they've viewed them all.
- Consider small circles to indicate how many individual items or sets of items are in the carousel; make the circles clickable so they double as an alternative way to navigate (see the sketch after this list).
- Load the carousel on the page before the user's screensaver kicks in -- if you get my drift :)
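To tie several of these guidelines together (clickable dots, auto-rotation with user control, previous/next arrows), here's a minimal TypeScript sketch. The markup it assumes -- .slide elements plus .dots, .prev, .next, and .pause controls -- is hypothetical, not taken from any of the sites mentioned above.

```typescript
// Assumed markup: a container with .slide children, an empty .dots box,
// and .prev / .next / .pause buttons. All class names are hypothetical.
const slides = Array.from(document.querySelectorAll<HTMLElement>(".slide"));
const dotsBox = document.querySelector<HTMLElement>(".dots")!;
let current = 0;
let timer: number | undefined;

// One clickable dot per item, so users can see how many items exist
// and jump directly to any of them.
const dots = slides.map((_, i) => {
  const dot = document.createElement("button");
  dot.setAttribute("aria-label", `Item ${i + 1} of ${slides.length}`);
  dot.addEventListener("click", () => { pause(); show(i); });
  dotsBox.appendChild(dot);
  return dot;
});

function show(i: number): void {
  current = (i + slides.length) % slides.length;
  slides.forEach((s, j) => (s.hidden = j !== current));
  dots.forEach((d, j) => d.classList.toggle("active", j === current));
}

// Auto-rotate to hint that more content exists, but hand over control
// the moment the user pauses or navigates manually.
function play(): void { timer = window.setInterval(() => show(current + 1), 5000); }
function pause(): void { window.clearInterval(timer); }

document.querySelector(".prev")?.addEventListener("click", () => { pause(); show(current - 1); });
document.querySelector(".next")?.addEventListener("click", () => { pause(); show(current + 1); });
document.querySelector(".pause")?.addEventListener("click", pause);

show(0);
play();
```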
Friday, November 7, 2008
Tuesday, November 4, 2008
Creating Tables in Visio
Hey, I learned something new in Visio 2003 today! (Unfortunately, I'm still on the old version at my day job.) I've had occasion to wish there was an easy way to create tables in Visio, and today I discovered how.
- Open File > Shapes > Charts & Graphs > Charting Shapes (US) stencil.
- Drag the various grids and charts onto your page.
- Adjust the number of rows and columns when prompted.
- Right-click to make further adjustments to colors, grid lines, and such.
Enjoy!
Sunday, October 5, 2008
Day 3 - Usability Week in San Francisco
I know, this looks bad. Four months later, I'm finally getting around to summarizing Day 3. The impetus: I'm giving a presentation at work summarizing my conference experience. Since I'm preparing this for work, I figured I might as well share it with anyone else who might be interested. Who knows, maybe I'll get this blog thing going again :)
During Day 3, NNG covered a hodge-podge of topics, including:
- Field Studies
- Ethics of User Research
- Usability Report Analysis
- Variants of User Testing
- Special Areas of Research
- Financing Usability
- Cost Benefit Analysis
- Successful Usability Programs
Field Studies
- Field studies can help define new features, tell you about tasks and work-arounds you may not have known about, and may identify new customers.
- Perform field studies early in the project while the information is still actionable.
- When recruiting for field studies, consider users of competitive or similar products (not just yours).
- Recognize that purchasers of your product may not be the users of your product.
- Make sure users know what their commitment entails (time, job shadowing, communication).
- Remember that a field study is about WATCHING people do their work, not discussing it. (What users say they do and what they actually do may differ.) Users should pretend you're not even there.
- 3-4 field study observers are recommended (e.g., product manager, developer, usability expert), each with an assigned role (e.g., facilitator, photographer, note taker).
- Consider creating a template with some guidelines, high level goals and plenty of room for notes and drawings.
- Count to 20 before you interrupt a user.
- Always let the user name the objects they're interacting with.
- While doing a field study, look for: processes, reasons, pain, tools, people, places, artifacts.
- Capture user quotes.
- Immediately debrief with observers to help you remember details and build consensus.
- Reserve a room to analyze findings ("war" room).
- Study outcomes may include: user profiles/personas, task lists/flows, prioritized issues, new feature ideas, dictionary of user terms, photos, videos, artifacts.
Variants of User Testing
- Co-discovery is when two users attempt to do tasks together (e.g., house hunting). Consider this method when the task is commonly done by two people.
- Remote testing is when the facilitator and participant are in different locations; a good choice if you can't physically be there (e.g., international participants).
- Competitive studies test your own design alongside 1-3 competitors' designs -- they reveal which design elements work and which don't, and let you avoid repeating others' mistakes.
- Longitudinal studies follow users over an extended period of time; users record their experiences and make comments.
- Eye tracking allows you to see where users are looking (first read, scan path, gaze time).
Special Areas of Research
- Users with disabilities
- Low-literacy users
- Senior Citizens
- Children
- International testing
- Hardware testing
I would add "domain-specific" testing here as well (e.g., automotive manufacturer).
Cost/Benefit Analysis
- Before/after metrics can include: sales, support calls, productivity (time to complete a task), training time, and customer satisfaction. (A toy calculation follows this list.)
- The cost of training is the cost of bad usability.
- If you redesign without a usability study, you could actually end up with negative improvement; if you do a usability study, you can always pick the design that demonstrates improvement.
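To make the before/after metrics concrete, here's a toy productivity calculation; every number below is invented purely for illustration.

```typescript
// Hypothetical before/after task-time metrics, for illustration only.
const usersPerYear = 50_000;  // people completing the task annually
const taskTimeBefore = 6.0;   // minutes per task before the redesign
const taskTimeAfter = 4.5;    // minutes per task after the redesign
const costPerMinute = 0.5;    // loaded labor cost, in dollars

const minutesSaved = (taskTimeBefore - taskTimeAfter) * usersPerYear;
const annualSavings = minutesSaved * costPerMinute;
console.log(`Annual savings: $${annualSavings.toLocaleString()}`); // $37,500
```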
Successful Usability Programs
A good usability professional is experienced, balances diplomacy with assertiveness, is somewhat technical, and is driven by data; he/she is NOT timid, persecuted, or judging/finger-pointing.
Wednesday, June 18, 2008
Day 2, Afternoon - Usability Week San Francisco
Dr. Nielsen began the afternoon session with an introduction to the Usability Toolbox -- the arsenal of tools at our disposal to rid the world of really bad design. They include:
- Server traffic log analysis
- Search log analysis
- Heuristic evaluation/expert review
- User testing
- Low-fidelity paper prototyping
- Surveys
- Field studies
- Participatory design
- Competitive studies
- Cardsorting
- Measurement studies
Pre-Design
In Pre-Design, field studies, cardsorting, testing the old design and testing competitive designs are your most valuable tools. Since some people may be resistant to testing the old design, Nielsen points out that, while you may know it's bad, you may not know WHY it's bad. If you just redesign, what you create will be different, but not necessarily better. Your old design is your best prototype for your new design. Your competitors are your second best prototypes.
Design
In the Design phase, iterative testing with low- to high-fidelity prototypes is most valuable, and final polishing can be done with a heuristic evaluation. Nielsen recommends developing low-fidelity paper prototypes early in the design process. During the training, we engaged in a team-based paper prototyping exercise. We used actual paper, markers, scissors, and sticky notes. At first, this seemed rather grade school, but as we got into it, I began to appreciate the team-based approach, the outpouring of creative ideas, and how easy it was to make changes to our design. We then tested the paper prototype, with one person acting as facilitator, another acting as the computer (changing pages or adding stickies as the user clicked), and another person taking notes. The user used the wrong end of a pen to indicate clicking, and the right end to write in forms.
In a heuristic evaluation, a small set (usually 2-3) of usability experts examine the interface to judge its compliance with known usability principles (heuristics). Reviews can be done on draft designs, as well as on specifications and wireframes. Generally, one should take two passes through an interface when doing a heuristic evaluation -- the first to inspect the task flow and the second to inspect page details.
Post-Design
In Post-Design, Nielsen suggests tweaking the design as needed based on log file analysis and surveys. Regarding search log analysis, he said that people generally go for the easiest interaction (that they think will help them achieve their goal). If they use the search feature, they probably couldn't find it easily via other navigation options. Search logs can reveal what people are looking for, and the terms they use to refer to that information.
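As a trivial illustration of mining a search log, here's a sketch that tallies queries to surface what users look for most and the words they use; the log entries below are invented.

```typescript
// Tally search queries; the most frequent ones are likely content
// users couldn't find through regular navigation. Sample data is made up.
const searchLog = [
  "return policy", "shipping cost", "returns",
  "track order", "return policy", "track order", "return policy",
];

const counts = new Map<string, number>();
for (const query of searchLog) {
  const q = query.trim().toLowerCase();
  counts.set(q, (counts.get(q) ?? 0) + 1);
}

// Print most frequent queries first.
[...counts.entries()]
  .sort((a, b) => b[1] - a[1])
  .forEach(([q, n]) => console.log(`${n}  ${q}`));
```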
For surveys, Nielsen recommends a 1-7 rating scale, with 1 being really bad and 7 being really good. In 317 NNG studies, the mean rating is 4.9, leaning slightly toward the positive. Nielsen points out that it's really human nature that people tend to be more positive. The 1 and 2 ratings are hardly ever used, so it's almost like you're really using a 5 point scale. He also indicated that surveys weren't very reliable because what people say and what they do don't always match. One really great survey question, however, is "Why are you visiting our site today?"
Have Web Sites Improved in the Last 10 Years?
Based on the 317 surveys NNG has conducted, the results suggest not. Nielsen explains this by indicating that while web sites have certainly improved (we can do much more), user expectations have also increased. So satisfaction ratings are relative to user expectations. A site could get a good satisfaction rating one year, and even if nothing changes, get a poor satisfaction rating another year simply because user expectations have risen.
At the end of Day 2, I felt the training content was really Usability 101, but I also recognized that I was picking up little nuggets of knowledge here and there that one could only get by spending time with someone like Nielsen and his team, who have many, many years of practical usability experience.
More later about Day 3!
Day 2, Morning - Usability Week San Francisco
This morning's session was delivered by Kara Pernice, Director of Research at NNG who heads the East Coast operations. Her background, like Amy Schade's, is chock full of practitioner experience, usability advocacy, and support for NNG's seemingly lucrative report writing business. Her focus today was Analyzing and Reporting Findings. Key points that stuck with me included:
- Affinity diagramming can be a great prioritization tool. Basically, you get a team together, group the issues into categories (quietly, each person on their own), and then, together, vote and assign ratings (high = 1, medium = 2, low = 3).
- The people best able to identify usability issues are, in order: 1) someone with both product knowledge and usability expertise; 2) someone with usability expertise only; and 3) someone with only product knowledge.
- Avoid mixing issues during a usability ratings exercise. Focus only on the criticality of the usability issues (don't factor in business priorities, time to fix, etc. -- do this later). A usability issue is still an issue even if other factors may ultimately make it a low priority.
- Assigning severity ratings to usability issues involves three parameters: Impact, Frequency, and Persistence (Is there a learnable work-around?).
- When reporting results, don't say 20% of users had this problem if 20% is one user. Say "1 user" instead. Otherwise, the "numbers" people will just think you're an idiot.
- Usability reports should include what happened, why it happened (interpretations), simple quantitative data (e.g., pass/fail rates), positive and negative findings, and recommendations.
- During testing, if users have nothing to say while thinking out loud, it could be because they aren't having any major problems -- a good thing! (In my experience, it might be good to check, though... sometimes users forget to think out loud. Simply ask, "What are you thinking?" if it seems too quiet.)
- NNG's heuristic usability reports tend to be 100 pages or more; they are typically longer than usability test reports because they (the experts) are able to identify more points than users do. (Subtle sales pitch?)
- Usability issues should be tracked in a database. (Yes, some vehicle for communicating and tracking issues -- similar to QA issues -- amongst a large, dispersed team, is helpful. If usability issues only exist in a report, there is more likelihood that they'll collect dust rather than be addressed.)
- After assigning usability ratings to issues, only then assign other ratings like time/resources to fix. You can use a grid to total up the ratings and re-prioritize with consideration for the other factors (see the sketch after this list).
- When presenting findings, consider including user quotes, annotated screenshots, videos, photos, and charts/graphs.
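Here's a toy version of such a grid in TypeScript. The issues, the ratings, and the simple additive scoring are all invented for illustration; it just shows severity being assigned first and other factors layered on afterwards.

```typescript
// Ratings follow the scheme above: 1 = high, 3 = low, so lower
// totals bubble to the top of the priority list. All data is made up.
interface Issue {
  name: string;
  severity: number;    // from the usability-only rating exercise
  effortToFix: number; // assigned afterwards, never mixed into severity
}

const issues: Issue[] = [
  { name: "Checkout error message unclear", severity: 1, effortToFix: 2 },
  { name: "Search results lack sorting", severity: 2, effortToFix: 3 },
  { name: "Footer link contrast too low", severity: 3, effortToFix: 1 },
];

// Total the ratings and sort ascending: lowest total gets fixed first.
const prioritized = issues
  .map((i) => ({ ...i, total: i.severity + i.effortToFix }))
  .sort((a, b) => a.total - b.total);

prioritized.forEach((i) => console.log(`${i.total}  ${i.name}`));
```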
I'll share more about the Day 2, afternoon session with Jakob soon.
Tuesday, June 17, 2008
Day 1, Afternoon - Usability Week San Francisco
After the lunch break, the moment I'd been waiting for arrived as Dr. Nielsen entered the conference room to begin his portion of the presentation, Foundations of Usability. Before he even got started, I was at his feet asking him to autograph one of his books, Usability Engineering, that I just so happened to bring along. He did, of course, oblige and I restrained the urge to also ask for a picture until later in the conference (not wanting to appear too stalker-like).
He began his presentation with a reference to the "old days" when a developer's attitude was that people should be grateful just to be able to use a computer; usability was not a priority. User-centered design (UCD), a more popular concept in recent years, has been a big step forward. Key elements of UCD include:
- Focus on users' needs, tasks & goals
- Spend time on initial research (observations) and requirements (user-defined)
- Emphasize an iterative design process
- Evaluate system using real users
He also covered the classic attributes that define usability:
- Learnability (on the Web, this has to be a matter of seconds)
- Efficiency of use (important if product will be used often)
- Memorability (important if product will be used intermittently)
- Errors (caused by system not being designed well enough to prevent them)
- Subjective satisfaction
His other major points that afternoon related to the ideal number of users to test with. For years we've heard him say 4-6 users, and that's still his message today. Elaborate tests are costly and intimidating; discount methods with 4-6 users can be quick, cheap and more practical. He strongly advises doing more iterative testing with small groups, rather than one large test, strengthening his argument with charts and graphs, and using terms like Lambda, which I strained my brain to understand. He adds that, while some of what each user does during a test will be different, some will be the same, and a pattern will emerge. Once a pattern is established, adding more users really doesn't add much value. I have personally experienced this as a test facilitator and observer, and I can tell you, by the 5th or 6th user, the findings start to get really boring...little new information emerges at that point.
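For anyone else straining their brain: the Lambda he mentioned comes from the Nielsen-Landauer formula, where the share of usability problems found by n users is roughly 1 - (1 - L)^n, with L being the proportion a single user uncovers (about 31% in their published data). A quick sketch of the diminishing returns:

```typescript
// Proportion of problems found by n users: 1 - (1 - L)^n.
// L = 0.31 is the average from Nielsen & Landauer's studies.
const L = 0.31;
for (let n = 1; n <= 10; n++) {
  const found = 1 - Math.pow(1 - L, n);
  console.log(`${n} users: ${(found * 100).toFixed(0)}% of problems`);
}
// Around 5 users you're already near 85%, which is why several small
// iterative tests beat one big expensive one.
```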
Stay tuned for a run-down on Day 2!
Labels:
Jakob Nielsen,
NNG Usability Week,
UCD,
User-centered design
Day 1, Morning - Usability Week San Francisco
Let the training begin! My day began at 8:30 a.m. with continental breakfast along with a few hundred other usability professionals. I sat down at a table with three women -- each coming from a different locale (Florida, Washington D.C., New Zealand) and each working in a different industry (Food, Government, Technology). Looking at the attendance roster, it was interesting to note that attendees came from every corner of North America and beyond, and from a wide range of industries. Dr. Nielsen has made quite a name for himself as a usability practitioner and writer; there are few areas where his expertise isn't valuable.
The program I enrolled in, Usability in Practice, started at 9 a.m. The presenter for the morning session was Amy Schade, a User Experience Specialist who works in NNG's New York office. Her pedigree includes a wide variety of hands-on experience in various industries, numerous training presentations, and she has co-authored many of NNG's reports. During her morning delivery, we covered User Testing Methodology (Planning User Tests, Conducting User Tests). Some key points I felt were especially relevant to our work at Wunderman included:
- The ability to observe user behavior is one of the key advantages of facilitated user testing. Sometimes what people say they do and what they actually do don't match up.
- Listening (not talking) during a user test is of utmost importance. The facilitator should stay quiet, observe and take notes. You want to test the design, not how good your instructions are; don't interrupt.
- When deciding what to test, identify the top 10 tasks to start. Don't try to cover everything at one time. Focus on large concepts or specific features -- ideally the most common tasks or those that have the most impact on the business.
- Metrics to collect can include Success Rate (pass/fail, or partial credit on a 0-4 rating scale), Task Time (use a stopwatch), Error Rate (including why each error occurred), and Satisfaction Ratings (0-7 rating scale). (A toy tally of these follows this list.)
- Recruit the right participants; testing with the wrong users will (most likely) get you the wrong results.
- When developing screeners, ask open-ended questions (e.g., How much money have you spent online in the last year?), and be careful not to reveal the desired answer in the question.
- Ask your test recruiting vendor how they handle "no shows"; consider having "floaters" available -- people who are at the testing facility for an extended period in the event someone doesn't show up.
- Task writing guidelines: avoid wording that appears in the design itself, and avoid micro-steps (too much detail).
- Conduct a pilot study of the test (at least 24 hours beforehand) to ensure tasks can be completed, tasks are clear, time allotted is reasonable, and number of tasks is appropriate.
- It is NOT necessary to return users to the home page at the beginning of a task (not realistic).
- Don't let your notetaking be distracting. (My personal experience is that it's better to have a facilitator focused on facilitating, and another person responsible for notetaking, ideally where the user can't see them.)
- Avoid mixing marketing questions (e.g., Would you use this feature?) with usability questions. If this is a requirement, save the marketing questions until the end.
- Some questions are just bad (e.g., Did you notice this link here? If you have to ask, they probably didn't.)
- You shouldn't have to describe every element on the page. Let the user discover things on their own. If they don't mention something, it's either not an issue or not important to them.
- If you don't have a usability lab, it's not hard to mock one up. Consider a room where the user/facilitator sit at the front, with the screen projected on the wall, and observers sit behind the user/facilitator (out of sight).
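Here's a toy tally of the metrics listed above; the participants, numbers, and summary stats are invented for illustration.

```typescript
// One record per participant per task, following the measures above.
interface TaskResult {
  participant: string;
  success: number;      // 0-4 partial-credit scale (4 = full success)
  timeSeconds: number;  // stopwatch time
  errors: number;       // count; note the reason for each separately
  satisfaction: number; // 0-7 rating
}

const results: TaskResult[] = [
  { participant: "P1", success: 4, timeSeconds: 95, errors: 0, satisfaction: 6 },
  { participant: "P2", success: 2, timeSeconds: 180, errors: 2, satisfaction: 4 },
  { participant: "P3", success: 4, timeSeconds: 110, errors: 1, satisfaction: 5 },
];

const mean = (xs: number[]) => xs.reduce((a, b) => a + b, 0) / xs.length;
console.log("Mean success:", mean(results.map((r) => r.success)).toFixed(1));
console.log("Mean time (s):", mean(results.map((r) => r.timeSeconds)).toFixed(0));
console.log("Mean satisfaction:", mean(results.map((r) => r.satisfaction)).toFixed(1));
```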