What to do with the data: Moving from observations to design direction


This article was originally published on December 7, 2009. 


What is data but observation? Observations are what was seen and what was heard. As teams work on early designs, the data is often about obvious design flaws and higher-order behaviors, not necessarily about tallying details. In this article, let’s talk about tools for working with observations made in exploratory or formative user research.

Many teams have a sort of intuitive approach to analyzing observations that relies on anecdote and aggression: whoever is the loudest gets their version accepted by the group. Over the years, I’ve learned a few techniques for getting past that dynamic and on to informed inferences that lead to smart design direction and to solution theories that can then be tested.


Collaborative techniques give better designs

The idea is to collaborate. Let’s start with the assumption that the whole design team is involved in the planning and doing of whatever the user research project is.

Now, let’s talk about some ways to expedite analysis and consensus. Doing this has the side benefit of minimizing reporting – if everyone involved in the design direction decisions has been involved all along, what do you need reporting for? (See more about this in the last section of this article.)


The essence of usability testing, in your pocket

I’ve encountered a lot of user researchers and designers lately who say to me, “I can’t do all the testing there is to do. The developers are going to have to evaluate usability of the design themselves. But they’re not trained! I’m worried about how to give them enough skills to get good data.”

What if you had a tiny guide that would give your team just the tips they need to guide them in creating and performing usability tests? It’s here!


Usability Testing Pocket Guide

This is a 32-page, 3.5 x 5-inch book that includes 11 simple steps along with a quick checklist at the end to help you know whether you’re ready to run your test.

The covers are printed on 100% recycled chipboard, and the interior pages are printed with vegetable-based inks on 100% recycled paper. The Pocket Guides are printed by Scout Books and designed by Oxide Design Co.

You can order yours here.


Wilder than testing in the wild: usability testing by flash mob

It was a spectacularly beautiful Saturday in San Francisco. Exactly the perfect day to do some field usability testing. But this was no ordinary field usability test. Sure, there’d been plenty of planning and organizing ahead of time. And there would be data analysis afterward. What made this test different from most usability tests?

Popping the big question(s): How well? How easily? How valuable?

When teams decide to do usability testing on a design, it is often because there’s some design challenge to overcome. Something isn’t working. Or, there’s disagreement among team members about how to implement a feature or a function. Or, the team is trying something risky. Going to the users is a good answer. Otherwise, even great teams can get bogged down. But how do you talk about what you want to find out? Testing with users is not binary – you probably are not going to get an up or down, yes or no answer. It’s a question of degree. Things will happen that were not expected. The team should be prepared to learn and adjust. That is what iterating is for (in spite of how Agile talks about iterations).

Ask: How well
Want to find out whether something fits into the user’s mental model? Think about questions like these:

  • How well does the interaction/information architecture support users’ tasks?
  • How well do headings, links, and labels help users find what they’re looking for?
  • How well does the design support the brand in users’ minds?

Ask: How easily
Want to learn whether users can quickly and easily use what you have designed? Here are some questions to consider:

  • How easily and successfully do users reach their task goals?
  • How easily do users recognize this design as belonging to this company?
  • How easily and successfully do they find the information they’re looking for?
  • How easily do users understand the content?
  • How easy is it for users to recognize that they have found what they were looking for?

Ask: How valuable

  • What do users find useful about the design?
  • What about the design do they value and why?
  • What comments do participants have about the usefulness of the feature?

Ask: What else?

  • What questions do your users have that the content is not answering?
  • What needs do they have that the design is not addressing?
  • Where do users start the task?

Teams that think of their design issues this way find that users show them what to do through the way they perform with a design. Rarely is the result of usability testing an absolute win or lose for a design. Instead, you get clues about what’s working – and what’s not – and why. From that, you can make a great design.

Getting ready for sessions: Don’t forget…

There are a bunch of things to do to get ready for any test besides designing the test and recruiting participants.

  • make sure you know the design well enough to know what should happen as the participant uses it
  • copy any materials you need for taking notes
  • make copies of all the forms and questionnaires for participants, including honorarium receipts
  • organize the forms in some way that makes sense for you. (I like a stand-up accordion file folder, in which I sort a set of forms for each participant into each slot. I stand up the unused sets and then when they’ve been filled out, they go back in on their sides.)
  • check in with Accounting (or whoever handles the money) about honoraria or goodies for giveaways
  • get a status report from the recruiter
  • double-check the participant mix
  • make sure you have contact information for each participant
  • check that you have all the equipment and software you need for the participant to be able to do the tasks
  • run through the test a couple of times yourself
  • double-check the equipment you’re going to use (I use a digital audio recorder, so I need memory sticks for that, along with rechargeable batteries)
  • charge all the batteries
  • double-check the location

Which brings us to where you’re going to do the sessions. But let’s talk about that later.

Moderating tips and techniques

Getting the right information from the participant can be difficult. As the moderator, you must attend to many things besides what the participant is doing and saying. Focusing on a few specific behaviors of your own will help you run a better test.

Focus your attention on what’s happening now

  • Quickly build rapport with the participant
  • Listen attentively
  • Be open to what might happen in a session – be ready to learn from the participant

Tips for being a better moderator

Be the neutral observer – avoid priming or teaching. If you’re too close to the product or the domain, you may train participants without realizing it by using keywords in your task scenarios or materials.

Observe at the expense of collecting data, if you must. It is difficult to take notes and to watch the participant at the same time. If things are happening quickly or you find yourself missing things the participant is saying or doing, just stop taking notes. Instead, listen and spend time between sessions making notes about what happened. Go through your recordings later if you need to, or ask observers to share their notes.

Play dumb – don’t answer questions. If participants perceive that you are an expert on the product, they may ask you questions about it or look for your approval of their actions. Instead, let them know that you are learning too, and that you’ll note their questions but won’t always be able to answer them.

Flex the script and test plan. Even after you pilot your test, you may have to adjust on the fly when participants do unpredictable things. That’s okay. You’re learning important things that fit into your aggregate patterns of use.

Practice and get feedback. Ask co-workers and observers to give you feedback about how you conduct sessions and how you ask questions.

Your own self-awareness is your best tool for moderating test sessions successfully. Following these guidelines should help you get valid, reliable data from your participants, even if your attention is slightly divided.

Why create a test design?

I get a lot of clients who are in a hurry. They get to a point in their product cycle where they’re supposed to have done some usability activity to exit the development phase they’re in, and now they find they have to scramble to pull it together. How long can it take to arrange and execute a discount usability test, anyway?

Well, to do a usability test right, it does take a few steps. How much time those steps take depends on your situation. Every step in the process is useful.

The steps of a usability test
Jeff Rubin and I break the process of conducting a usability test into these steps:

  1. Develop a test plan
  2. Set up the testing environment and plan logistics
  3. Find and select participants
  4. Prepare test materials
  5. Conduct the sessions
  6. Debrief participants and observers
  7. Analyze data and observations
  8. Create findings and recommendations

Notice that “develop a test plan” and “prepare test materials” are different steps.

It might seem like a shortcut to go directly to scripting the test session without designing the test. But the test plan is a necessary step.

Test plan or test design?
There’s a planning aspect to this deliverable. Why are you testing? Where will you test? What are the basic characteristics of the participants? What’s the timing for the test? For the tasks? What other logistics are involved in making this particular test happen? Do you need bogus data to play with, userids, or other props?

To some of us, a test design would be about experiment design. Will you test a hypothesis, or is this an exploratory test? What are your research questions? What task scenarios will get you to the answers? Will you compare anything? If so, is the comparison between subjects (each participant uses only one of the designs) or within subjects (every participant tries them all)? Will the moderator sit in the testing room or not? What data will you collect, and what are you measuring?

It all goes together.


Why not just script the session without writing a plan?
Having a plan that you’ve thought through is always useful. You can use the test plan to get buy-in from stakeholders, too. As a representation of what the study will be, it’s like reviewing the blueprints and renderings before you give the building contractor approval to start building.

With a test plan, you also have a tool for documenting requirements for the test (a frozen test environment, anyone?) and a set of unambiguous details that define its scope. Here, in a test plan, you define the approach to the research questions. In a session script, you operationalize them. Writing a test plan helps you know what you’re going to collect data about, what you’re going to report on, and what the general content of the report will be.

A test plan (or design, or whatever you want to call it) gives you a framework for the test, one that a session script fits into. All the other deliverables of a usability test stem from the test plan. If you don’t have a plan, you risk using inappropriate participants and getting unreliable data.