Where usability testing fits into your research strategy

What, you don’t have a research strategy? Let’s think about the future here.

It’s not uncommon – and not bad – to be working in the present, reacting to the ever-growing demand for usability testing in your organization. “Ever-growing” is good. But when Jared Spool recently asked me to do a podcast with him about what I think makes the difference between a good user experience team and a great one, it got me thinking.

The recipe, based on my observations in dozens of corporations, comes down to these three main ingredients:

  • Vision
  • Strategy
  • Involvement

Vision is an overused word, but here I mean that you and your team have visualized the ideal customer experience – no limits, no constraints. Imagine the best possible interactions a customer could have with your organization at every touch point. Write it down.

Strategy means that you have a plan for reaching the vision. Over the long term, you can learn about customers’ contexts and goals and match them to the goals and objectives of the business.

Involvement means bringing all interested people in the business together (and that really should be everyone: management, design, development, support, and anyone else in the organization) to embrace the vision and carry out the strategy across disciplines.

But I haven’t said much about usability testing yet. Where does it fit in? Everywhere. Part of my strategy would be to teach as many people in the organization as possible to do usability testing. You probably can’t do all the testing that is wanted (let alone needed). If you teach others to do it and coach them along the way, the customer ultimately benefits: the organization gains a closer, smarter understanding of the customer experience and can make evidence-based decisions about how to reach the ideal experience it has envisioned.

Are you doing “user testing” or “usability testing”?

Calling anything “user testing” just seems wrong. Okay, contrary to the usual content on this blog – which I’ve tried to make about method and technique – this discussion is philosophical and political. If you feel it isn’t decent to talk about the politics of user research in public, then perhaps you should click away right now.

I know, talking about “users” opens up another whole discussion that we’re not going to have here and now. In this post, I want to focus on the difference between “usability testing” and “user testing” and why we should be specific.

When I say “usability test,” what I’m talking about is testing a design for how usable it is. Or rather, how unusable it is, because that’s what we can measure: how hard it is to use, how many errors people make, how frustrated people feel when using it. Usability testing is about finding the issues that leave a design lacking. By observing usability test sessions, a team can learn what the issues are, make inferences about why they are happening, and then implement informed design solutions.

If someone says “user testing,” what does that mean? Let’s talk about the two words separately.

First, what’s a “user”? It’s true that we ask people who use (or might use) a design to take part in studying how usable the design is, and some of us might refer to those people as “users” of the product.

Now, “testing” means using some specified method to evaluate something. If you call it “user testing,” it sure sounds like you’re evaluating users, when what you probably mean is that you’re putting a design in front of users to see how they evaluate it. It’s shorthand, but I think it’s the wrong shorthand.

If the point is to observe people interacting with a design to see where the flaws in the design are and why those elements aren’t successful, then you’re going beyond user testing. You’re doing usability testing. That’s what I do as part of my user research practice. I try not to test the users in the process.