Researcher as director: scripts and stage direction

For most teams, the moderator of user research sessions is the main researcher. Depending on the comfort level of the team, the moderator might be a different person from session to session in the same study. (I often will moderate the first few sessions of a study and then hand the moderating over to the first person on the design team who feels ready to take over.)

To make that work, it’s a good practice to create some kind of checklist for the sessions, just to make sure that the team’s priorities are addressed. For a field study or a formative usability test, a checklist might be all a team needs. But if the team is working on sussing out nuanced behaviors or solving subtle problems, we might want a bit more structure.

Looking for love: Deciding what to observe for

The team I was working with wanted to find out whether a prototype they had designed for a new intranet worked for users. Their new design was a radical change from the site that had been in place for five years and in use by 8,000 users. Going to this new design was a big risk. What if users didn’t like it? Worse, what if they couldn’t use it?

We went on tour. Not to show the prototype, but to test it. Leading up to this moment we had done heaps of user research: stakeholder interviews, field observations (ethnography, contextual inquiry – pick your favorite name), card sorting, taxonomy testing. We learned amazing things, and as our talented interaction designer started translating all that into wireframes, we got pressure to show them. We knew what we were doing. But we wanted to be sure. So we made the wireframes clickable and strung them together to make them feel like they were doing something. And then we asked (among other things)…

Popping the big question(s): How well? How easily? How valuable?

When teams decide to do usability testing on a design, it is often because there’s some design challenge to overcome. Something isn’t working. Or, there’s disagreement among team members about how to implement a feature or a function. Or, the team is trying something risky. Going to the users is a good answer. Otherwise, even great teams can get bogged down. But how do you talk about what you want to find out? Testing with users is not binary – you probably are not going to get an up or down, yes or no answer. It’s a question of degree. Things will happen that were not expected. The team should be prepared to learn and adjust. That is what iterating is for (in spite of how Agile talks about iterations).

Ask: How well
Want to find out whether something fits into the user’s mental model? Think about questions like these:

  • How well does the interaction/information architecture support users’ tasks?
  • How well do headings, links, and labels help users find what they’re looking for?
  • How well does the design support the brand in users’ minds?

Ask: How easily
Want to learn whether users can quickly and easily use what you have designed? Here are some questions to consider:

  • How easily and successfully do users reach their task goals?
  • How easily do users recognize this design as belonging to this company?
  • How easily and successfully do they find the information they’re looking for?
  • How easily do users understand the content?
  • How easy is it for users to understand that they have found what they were looking for?
  • How easy or difficult is it for them to understand the content?

Ask: How valuable

  • What do users find useful about the design?
  • What about the design do they value and why?
  • What comments do participants have about the usefulness of the feature?

Ask: What else?

  • What questions do your users have that the content is not answering?
  • What needs do they have that the design is not addressing?
  • Where do users start the task?

Teams that frame their design issues this way find that users show them what to do through the way they perform with a design. Rarely is the result of usability testing an absolute win or lose for a design. Instead, you get clues about what’s working – and what’s not – and why. From that, you can make a great design.

Getting ready for sessions: Don’t forget…

There are a bunch of things to do to get ready for any test besides designing the test and recruiting participants.

  • make sure you know the design well enough to know what should happen as the participant uses it
  • copy any materials you need for taking notes
  • copy all the forms and questionnaires for participants, including honorarium receipts
  • organize the forms in some way that makes sense for you. (I like a stand-up accordion file folder, in which I sort a set of forms for each participant into each slot. I stand up the unused sets and then when they’ve been filled out, they go back in on their sides.)
  • check in with Accounting (or whoever handles the money) about honoraria or goodies for giveaways
  • get a status report from the recruiter
  • double-check the participant mix
  • make sure you have contact information for each participant
  • check that you have all the equipment, software, and anything else you need for the participant to be able to do tasks
  • run through the test a couple of times yourself
  • double-check the equipment you’re going to use (I use a digital audio recorder, so I need memory sticks for that, along with rechargeable batteries)
  • charge all the batteries
  • double-check the location

Which gets us to where you’re going to do the sessions. But let’s talk about that later.

Data collecting: Tips and tricks for taking notes

A common mistake people make when they’re new to conducting usability tests is taking verbatim notes.

Note taking for summative tests can be pretty straightforward. For those you should have benchmark data that you’re comparing against or at least clear success criteria. In that case, data collecting could (and probably should) be done mostly by the recording software (such as Morae). But for formative or exploratory tests, note taking can be more complex.
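
For instance, here is a minimal sketch (in Python) of the kind of benchmark comparison that data supports; the task names, counts, and benchmark values are invented for illustration, not taken from any real study:

    # Hypothetical example: comparing logged summative results against benchmarks.
    # Task names, counts, and benchmark values are made up for illustration.
    results = {
        "Find plan comparison": {"successes": 17, "attempts": 20, "benchmark": 0.80},
        "Update beneficiary":   {"successes": 12, "attempts": 20, "benchmark": 0.75},
    }

    for task, r in results.items():
        rate = r["successes"] / r["attempts"]
        status = "meets benchmark" if rate >= r["benchmark"] else "below benchmark"
        print(f"{task}: {rate:.0%} success ({status}, target {r['benchmark']:.0%})")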

Why is it so tempting to write down everything?

Interesting things keep happening! Just last week I was the note taker for a summative test in which I noticed (after about 30 sessions) that women and men seemed to hold the stylus for marking what we were testing differently, and that the difference seemed to be causing a specific category of errors.

But the test wasn’t about using the hardware. This issue wasn’t something we had listed in our test plan as a measure. It was interesting, but not something we could investigate for this test. We will include it as an incidental observation in the report as something to research later.

Note taking don’ts

  • Don’t take notes yourself while you are moderating the session, if you can help it.
  • Don’t take verbatim notes. Ever. If you want that, record the sessions and get transcripts. (Or do what Steve Krug does, and listen to the recordings and re-dictate them into a speech recognition application.)
  • Don’t take notes on anything that doesn’t line up with your research questions.
  • Don’t take notes on anything that you aren’t going to report on (either because you don’t have time or it isn’t in the scope of the test).


Tips and tricks

  • DO get observers to take notes. This is, in part, what observers are for. Give them specific things to look for. Some usability specialists like to get observer notes on large sticky notes, which is handy for the debriefing sessions.
  • DO create pick lists, use screen shots, or draw trails. For example, for one study, I was trying to track a path through a web site to see if the IA worked. I printed out the first 3 levels of the IA as nested lists in 2 columns so it fit on one legal-sized sheet of paper. Then I used colored highlighters to draw arrows from one topic label to the next as the participant moved through the site, numbering as I went. It was reasonably easy to transfer this data to Excel spreadsheets later to do further analysis (a rough sketch of that kind of tallying appears after this list).
  • DO get participants to take notes for you. If the session is very formative, get the participants to mark up wireframes, screen flows, or other paper widgets to show where they had issues. For example, you might want to find out if a flow of screens matches the process a user typically follows. Start the session by asking the participant to draw a boxes-and-arrows diagram of their process. At the end of the session, ask the participant to revise the diagram to a) capture any refinements they may have forgotten, b) see gaps between their process and how the application works, or c) some variation or combination of a and b.
  • DO think backward from the report. If you have written a test plan, you should be able to use that as a basis for the final report. What are you going to report on? (Hint: the answers to your research questions, using the measures you said you were going to collect.)
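
If you do end up with trail data like that, most of the spreadsheet work is transcription; here is a minimal sketch (in Python) of the kind of tallying you might do afterward. The participant IDs, page labels, and target page are invented for illustration, not taken from the intranet study described earlier.

    # Hypothetical example: tallying navigation trails transcribed from marked-up printouts.
    from collections import Counter

    # Each trail is the ordered list of IA labels one participant moved through for a task.
    trails = {
        "P01": ["Home", "Benefits", "Health Plans", "Compare Plans"],
        "P02": ["Home", "Search", "Health Plans", "Compare Plans"],
        "P03": ["Home", "Benefits", "Retirement"],  # wandered off the expected path
    }

    target = "Compare Plans"

    # Count how often each label-to-label transition occurred across participants.
    transitions = Counter(
        (trail[i], trail[i + 1])
        for trail in trails.values()
        for i in range(len(trail) - 1)
    )

    for (src, dst), n in transitions.most_common():
        print(f"{src} -> {dst}: {n}")

    # Flag participants whose trail never reached the target page.
    print("Did not reach target:", [p for p, t in trails.items() if target not in t])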

Why create a test design?

I get a lot of clients who are in a hurry. They get to a point in the product cycle where they’re supposed to have done some usability activity to exit the development phase they’re in, and now they find they have to scramble to pull it together. How long can it take to arrange and execute a discount usability test, anyway?

Well, to do a usability test right, it does take a few steps. How much time those steps take depends on your situation. Every step in the process is useful.

The steps of a usability test
Jeff Rubin and I break the process of conducting a usability test into these steps:

  1. Develop a test plan
  2. Set up the testing environment and plan logistics
  3. Find and select participants
  4. Prepare test materials
  5. Conduct the sessions
  6. Debrief participants and observers
  7. Analyze data and observations
  8. Create findings and recommendations

Notice that “develop a test plan” and “prepare test materials” are different steps.

It might seem like a shortcut to go directly to scripting the test session without designing the test. But the test plan is a necessary step.
Test plan or test design?
There’s a planning aspect to this deliverable. Why are you testing? Where will you test? What are the basic characteristics of the participants? What’s the timing for the test? For the tasks? What other logistics are involved in making this particular test happen? Do you need bogus data to play with, userids, or other props?

To some of us, a test design would be about experiment design. Will you test a hypothesis or is this an exploratory test? What are your research questions? What task scenarios will get you to the answers? Will you compare anything? If so, is it between subjects or within subjects? Will the moderator sit in the testing room or not? What data will you collect and what are you measuring?

It all goes together.

 

Why not just script the session without writing a plan?
Having a plan that you’ve thought through is always useful. You can use the test plan to get buy-in from stakeholders, too. As a representation of what the study will be, it’s like reviewing the blueprints and renderings before you give the building contractor approval to start building.

With a test plan, you also have a tool for documenting requirements (a frozen test environment, anyone?) for the test and a set of unambiguous details that define the scope of the test. Here, in a test plan, you define the approach to the research questions. In a session script, you operationalize the research questions. Writing a test plan helps you know what you’re going to collect data about and what you’re going to report on, as well as what the general content of the report will be.
Writing a test plan (or design, or whatever you want to call it) gives you a framework for the test into which a session script will fit. All the other deliverables of a usability test stem from the test plan. If you don’t have a plan, you risk using inappropriate participants and getting unreliable data.