Involving older adults in design of the user experience: Inclusive design

Despite the reality of differences due to aging, research has also shown that in many cases, we do not need a separate design for people who are age 50+. We need better design for everyone.

Everyone performs better on web sites where the interaction matches users’ goals; where navigation and information are grouped well; where navigation elements are consistent and follow conventions; where writing is clear, straightforward, in the active voice, and so on. And, much of what makes up good design for younger people helps older adults as well.

For example, we know that most users, regardless of age, are more successful finding information in broad, shallow information architectures than they are with deep, narrow hierarchies. When organizations make their sites easier to use for older adults, all of their users perform better in usability studies. The key is involving older adults in user research and usability testing throughout design and development.

Researcher as director: scripts and stage direction

For most teams, the moderator of user research sessions is the main researcher. Depending on the comfort level of the team, the moderator might be a different person from session to session in the same study. (I often will moderate the first few sessions of a study and then hand the moderating over to the first person on the design team who feels ready to take over.)

To make that work, it’s a good practice to create some kind of checklist for the sessions, just to make sure that the team’s priorities are addressed. For a field study or a formative usability test, a checklist might be all a team needs. But if the team is working on sussing out nuanced behaviors or solving subtle problems, we might want a bit more structure.
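As one illustration of what that extra structure might look like, here is a minimal sketch of a session script in Python. Everything in it — the fields, the task, the prompts — is a hypothetical example, not a standard format; it just shows the shape a more structured script can take.

```python
# Sketch of a structured session script a hand-off moderator could follow.
# All names, tasks, and prompts are hypothetical examples.
from dataclasses import dataclass, field

@dataclass
class Task:
    scenario: str                 # read verbatim to the participant
    priority: int                 # 1 = must cover, 2 = cover if time allows
    watch_for: list[str]          # behaviors the team cares about
    follow_ups: list[str] = field(default_factory=list)

script = {
    "intro": [
        "Remind the participant we're testing the design, not them.",
        "Ask the participant to think aloud as they work.",
    ],
    "tasks": [
        Task(
            scenario="You want to register for an account on this site.",
            priority=1,
            watch_for=["hesitation at required fields", "error recovery"],
            follow_ups=["What did you expect after pressing Submit?"],
        ),
    ],
    "wrap_up": ["Anything you expected to see that wasn't there?"],
}
```

A script like this keeps the team’s priorities explicit, so whoever moderates the next session covers the same ground in the same order.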

Overcoming fear of moderating UX research sessions

It always happens: Someone asks me about screwing up as an amateur facilitator/moderator for user research and usability testing sessions. This time, I had just given a pep talk to a bunch of user experience professionals about sharing responsibility with the whole team for doing research. “But what if the (amateur) designer does a bad job of moderating the session?”

What not to do

There are numerous ways in which a moderator can foul things up. Here are just a few possibilities that might render the data gathered useless:

  • Leading the participant
  • Interrupting or intervening at the wrong time
  • Teaching or training rather than observing and listening
  • Not following a script or checklist
  • Arguing with the participant

Rolf Molich and Chauncey Wilson put together an extensive list of the many wrong things moderators could do. There are dozens of behaviors on the list. I have committed many of these sins myself at some point. It’s embarrassing, but it is not the end of the world. So, here, let’s talk about what to do to be the best possible moderator in your first session.

Beware the Hawthorne Effect

In a clear and thoughtful article in the May 2007 issue of the Journal of Usability Studies (JUS), published by the Usability Professionals’ Association, Rich Macefield blasts the popular myths around the legendary Hawthorne effect. He goes on to explain very specifically how no interpretation of the Hawthorne effect applies to usability testing.

Popular myth – and Mayo’s (1933) original conclusion – says that human subjects in any kind of research will perform better just because they’re aware they’re being studied.

Several researchers have reviewed the original study that generated the finding, and they say that’s not what really happened. Parsons (1974) was the first to say that the improvement in performance of subjects in the original study was more likely due to feedback they got from the researchers about their performance and what they learned from getting that feedback.

Why it doesn’t apply to usability tests

Macefield convincingly demonstrates why the Hawthorne effect just doesn’t figure into well-designed, professionally executed usability tests:

  • The Hawthorne studies were longitudinal; most usability tests are not.
  • The Hawthorne subjects were experts at their jobs; most usability test participants are novices at something, because what they are using is new.
  • The metrics used in the Hawthorne studies were different from those used in most usability tests.
  • The subjects in the Hawthorne studies had horrible, boring jobs, so they may have been motivated to perform better by the attention they got from researchers. In usability tests, it’s just as likely that participants see taking part as an unwanted interruption, or that they’re only doing the test to get paid.
  • The Hawthorne subjects may have thought that taking part in the study would improve their chances for raises or promotions; the days of usability test participants thinking that participating in studies might help them get jobs are probably over.

What about feedback and learning effects?

We want feedback to be part of a good user interface, don’t we? Yes. And we want people to learn from using an interface, don’t we? Again, yes. But, as Macefield says, let’s make sure that all the feedback and learning in a usability test comes from the UI and not from the researcher/moderator. Get to the cause of problems through qualitative data, such as the verbal protocol from participants’ thinking aloud, to see how they’re thinking about the problem.

Look at effects across tasks or functions

Macefield suggests that if you’re getting grief, you can add a control group to compare against and then look at performance across tasks. For example, you might expect the test group (using an “improved” UI) to be more efficient or effective than the control group on every element of a test. But suppose the test group did better on one task while both groups had a similar level of problems on a different task. In that case, it is unlikely that the moderator gave feedback or prompted learning to create the improvement, because that kind of effect should show up globally, across tasks and across groups.
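Here is a minimal sketch of that cross-task check in Python. The data, task names, and significance threshold are all illustrative assumptions, not anything from Macefield’s article; the point is only the pattern — a task-specific difference implicates the UI, while an across-the-board lift would be suspicious.

```python
# Sketch: compare test vs. control performance task by task.
# If the test group is better on *every* task, suspect a global
# (possibly moderator-induced) effect; a task-specific difference
# points back at the UI itself. All data here is hypothetical.
from scipy.stats import mannwhitneyu

# Task completion times in seconds (made-up numbers).
control = {
    "register": [212, 198, 240, 225, 260],
    "checkout": [180, 165, 150, 172, 190],
}
test = {
    "register": [150, 140, 160, 155, 145],  # improved UI helps here...
    "checkout": [175, 168, 158, 181, 170],  # ...but not here
}

for task in control:
    stat, p = mannwhitneyu(test[task], control[task], alternative="less")
    verdict = "test group faster" if p < 0.05 else "no clear difference"
    print(f"{task}: U={stat:.0f}, p={p:.3f} -> {verdict}")
```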

Macefield closes the article with a couple of pages that could be a lesson out of Defense Against the Dark Arts, setting out very specific ways to argue against any assertion that your findings might be “contaminated.” But don’t just zoom to the end of the piece. The value of the article is in knowing the whole story.

Moderating tips and techniques

Getting the right information from the participant can be difficult. As the moderator, you must attend to many things besides what the participant is doing and saying. Focusing on a few specific behaviors of your own will help you run a better test.

Focus your attention on what’s happening now

  • Quickly build rapport with the participant
  • Listen attentively
  • Be open to what might happen in a session – be ready to learn from the participant

Tips for being a better moderator

Be the neutral observer – avoid priming or teaching. If you’re too close to the product or the domain, you may train participants without realizing it by using keywords in your task scenarios or materials.

Observe at the expense of collecting data, if you must. It is difficult to take notes and to watch the participant at the same time. If things are happening quickly or you find yourself missing things the participant is saying or doing, just stop taking notes. Instead, listen and spend time between sessions making notes about what happened. Go through your recordings later if you need to, or ask observers to share their notes.

Play dumb – don’t answer questions. If participants perceive that you are an expert on the product, they may ask you questions about it or look for your approval of their actions. Instead, let them know that you are learning too, and that you’ll note their questions but won’t always be able to answer them.

Flex the script and test plan. Even after you pilot your test, you may have to adjust on the fly when participants do unpredictable things. That’s okay. You’re learning important things that fit into your aggregate patterns of use.

Practice and get feedback. Ask co-workers and observers to give you feedback about how you conduct sessions and how you ask questions.

Your own self-awareness is your best tool for moderating test sessions successfully. Following these guidelines should help you get valid, reliable data from your participants, even if your attention is slightly divided.

When to ask participants to think out loud

I was taught that one of the most important aspects of moderating usability study sessions was to encourage participants to think out loud as they worked on tasks. While the technique is good and useful in many usability test situations, it isn’t always the best approach.

Get data about how and why people do things
This “verbal protocol,” as it is known, can be an extremely useful thing to have. If the participant is good at thinking aloud, you will hear how she is forming the task and how she is thinking about reaching her goal. You will hear why she is doing the things she is doing, and the words she uses to describe it all. You also will get verbal feedback about how the participant feels about what is happening, because she may say that she’s frustrated, or annoyed, or even happy.

What the data means
Hearing how a participant forms a task tells you whether the designers and the user are thinking of (modeling) the task in the same way.

Hearing why a participant is taking a particular step tells you where your design does and does not support users’ goals.

Hearing the words gives you labels for navigation, links, and buttons. Your information architecture should match the participant’s vocabulary.

Hearing the emotion tells you how severe a problem may be.

These are all good things to know.
How to get a good think-aloud
Some people think aloud naturally, or at least will verbalize their questions and frustrations just because there’s someone else in the room (that would be you, the moderator). But most people need to be primed to do it, and some even need practice doing it.

  • In your introduction to the session, ask participants to tell you what’s going through their minds as they do tasks.
  • Consider incorporating a practice task that lasts for half a minute, just to get participants to try it out. Encourage them and quickly move on.
  • When you describe the task scenario to participants, remind them to think aloud.
  • During the task, when something seems to be frustrating, annoying, or hindering – and the participant isn’t talking – ask her to tell you what she’s thinking.

Know that there’s more going on than you can hear

People filter automatically
Participants can’t tell you everything they’re thinking. And really, you don’t want that. Humans can process on a number of cognitive tracks at the same time. Most study participants will automatically be able to distinguish between what is related to the situation and what isn’t.

This is a test
They also may filter what they tell you beyond this basic distinction. For example, they want to do well. Although you tell participants you are not testing them, a participant might still feel tested, even if she’s just in competition with The Machine.

Participants may fear failure or embarrassment. In usability studies, people often persist at times when they would normally ask for help.

People tend to give positive feedback
Participants want to give you a good session. People are conditioned to say and do things for the approval of others. They want the moderator to approve of their performance.
Participants take responsibility for bad design
People who are novices at a task or are working with something outside their experience may excuse the design by taking responsibility for a design problem. For example, they may say that they could do it now that they (have failed and) have done it once, or that they just need more time to learn the site. This is especially common among older adults who are unsure of their computer skills or other relevant skills.

When you might not want to use think-aloud
There are times when using think-aloud can contaminate or dilute your data. There are other situations in which using think-aloud is just difficult, or won’t work for the type of participants you have in your study.

Time on task
If you want to measure how much time it takes people to complete a task because you are particularly concerned with efficiency, introducing think-aloud is probably a bad idea. Talking about what you’re thinking slows you down while you choose words to convey your ideas about what’s happening and why.
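If you do measure time on task, it helps to instrument the timing so the moderator isn’t juggling a stopwatch on top of everything else. Here is a minimal sketch in Python; the task name, prompts, and CSV format are my own assumptions, not a standard tool.

```python
# Sketch: silent time-on-task logging so the moderator can keep
# watching the participant. Task names and CSV format are illustrative.
import csv
import time

def run_task(task_name: str, log_path: str = "task_times.csv") -> None:
    input(f"Press Enter when the participant starts '{task_name}'...")
    start = time.perf_counter()
    input("Press Enter when the task ends (success or abandon)...")
    elapsed = time.perf_counter() - start
    with open(log_path, "a", newline="") as f:
        csv.writer(f).writerow([task_name, f"{elapsed:.1f}"])
    print(f"{task_name}: {elapsed:.1f}s logged")

run_task("find store hours")
```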

Audio feedback in a user interface
Some interfaces incorporate audio feedback to indicate statuses or modes. The participant’s talking may overlap these auditory cues, so the participant may miss something important happening – or you might. Also, many blind people and people with severe vision impairments use screen readers to work with software and web sites. If you’re tuned in, you can learn things by listening to the screen reader as it works. And although most of the screen reader users I have observed can listen and talk at the same time (much as sighted people can read and talk at the same time), as a sighted moderator, my auditory channel is challenged by listening to both the screen reader and the participant at once.

You’re interrupting a taxed thought process
People who have short-term memory loss, are medicated, or have other cognitive limitations tend to stop talking when they encounter obstacles to reaching their goals. You might be tempted to prompt these people to “tell me what you’re thinking,” but try not to. They’re concentrating on working around the obstacle. If you watch closely, you can see their solution unfold. After it does, then ask them about how they got to it.

An alternative to think-aloud: Retrospective review
“Retrospective review” is just a fancy name for asking people to tell you what happened after the fact. Go back to a particular point in the task, set the context, and ask the participant to tell you what was happening. For example, say something like this: “When you got to this point on the registration form [pointing to a field], you stopped talking. Tell me about what you were trying to do and what was happening.” The participant may revise what happened, but you will have good notes and the memory of someone who was observing closely rather than trying to perform, so you can check the issues you thought you saw. Invite the participant to correct your perceptions.

If you have the tools and time available, you can go to the video recording so the participant can see what he did and respond to that by giving you a play-by-play commentary.
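One way to make that easier is to timestamp notable moments as the session runs, so you can cue the recording straight to them afterward. Here is a minimal sketch, assuming the recording starts when the script does; the helper name and the example notes are hypothetical.

```python
# Sketch: timestamp notable moments during a session so you can
# cue the recording for retrospective review afterward.
# Assumes the recording starts when this script starts.
import time

session_start = time.monotonic()
events = []

def mark(note: str) -> None:
    """Record an offset into the session recording with a short note."""
    offset = time.monotonic() - session_start
    events.append((offset, note))
    print(f"[{offset//60:02.0f}:{offset%60:04.1f}] {note}")

# The note-taker calls mark() at key moments during the session:
mark("stopped talking at registration form")
mark("hesitated over 'Submit' vs 'Save'")
```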

It’s a great tool, used at the right time with the right participants
Think-aloud or verbal protocol can give you rich data about vocabulary and effectiveness of design. From it, you can also get some impression of the severity of problems for a particular participant or the level of satisfaction for someone who had a positive experience. Use think-aloud in exploratory or formative studies to help you understand how users are modeling their tasks. Consider carefully whether to use it in other situations, though, to ensure that you’re not adding to the cognitive load that the participant is already experiencing.