Testing in the wild defined

Lately I’ve been talking a lot about “usability testing in the wild.” People who make their living as usability practitioners know that the conventional way to do usability testing is in a laboratory setting. If you have come to this blog from outside the world of user experience research, that may never have occurred to you.

Some of the groups I’ve been working with recently do all their testing in the wild. That is, they never set foot in a lab, but instead conduct evaluations wherever their users normally do the tasks the groups want to observe. That setting could be a grocery store, City Hall, a bus, a home, or a workplace – or any number of other places.

A “wild” usability test sometimes has another feature: it is lightly planned or even ad hoc. Just last night I was on a flight from Boston to San Francisco. I’ve been working with a team to develop a web site that lists course offerings and lets people sign up to take the courses. As I was working through the navigation and checking wireframes, the guy in the seat next to me couldn’t help looking over at my screen. He asked me about the site and the offerings, explaining that the topics looked interesting. I didn’t have a prototype, but I did have the wireframes. So, after we talked for a moment about what he did for a living and what interested him about the topics listed, I showed him the wireframe for the first page of the site and said, “Okay, from the list of courses here, is there something you would want to take?” He said yes, so I said, “What do you want to do next, then?” He told me, and I showed him the next appropriate wireframe. And we were off.

I learned heaps for the team about whether this user found the design useful and what he valued about it. It also gave me some great input for a more formal usability test later. Testing in the wild is great for early testing of concepts and ideas you have about a design. It’s one quick, cheap way to gain insights about designs so teams can make better design decisions.

The importance of rehearsal

You have designed a study. Everyone seems to be buying in. Scheduling participants is working out and the mix looks good. What’s left to be done except just doing the sessions? Three things:

  1. Practice.
  2. Practice.
  3. Practice.

There are three rounds of practice that I do before I run a “real” session. Jeez, I can hear you say, why would I need to practice so much? Why would you, Dana, who have been doing usability testing for so many years, need to rehearse at all? I do it for a couple of reasons:

  • It gives me multiple opportunities to clarify the intent of the test, the tasks, and the data measures.
  • I can focus on observing the participant in each regular session because any kinks have been worked out.

Walk through the script and gather tools and materials
The first round is to walk through my test plan and script. I read the script aloud even though I’m by myself. While I’m doing that, I do two things: adjust the wording to sound more natural, and gather the tools and materials I’ll need for the sessions.

Do a dress rehearsal or dry run
For the second round of practice, I do a dry run of the now-refined script with someone I know filling the role of the participant. We do everything that would normally happen in a session, from greeting and filling out forms, to doing tasks, to closing the session. I might occasionally stop the session to adjust the script or to make notes about what to do differently next time. I might even ask the participant (usually a friend, neighbor, or colleague) whether the test is making sense. It’s a combination of dress rehearsal and “logic and accuracy” test to get the sequence down and to make sure I’ve got all the necessary pieces.

Pilot the protocol
Finally, there’s the pilot test session. In this pilot, I work with a “real” participant – someone who was screened and scheduled along with all of the other participants. I conduct the session the same way I intend to conduct all of the sessions that follow. The twist this time is that observers from the design team are present. At the end of the session, I debrief with them about the protocol.

Don’t waste good participant data
There have been times when I’ve been rushed by a client, or was just too cavalier going into a usability test, and did not rehearse. I paid for it with rough sessions whose data I couldn’t fully use. Every time, it’s a reminder that preparation and practice are as important to getting good data as a good test design is.