Ending the opinion wars: fast, collaborative design direction

I’ve seen it dozens of times. The team meets after observing people use their design, and they’re excited and energized by what they saw and heard during the sessions. They’re all charged up about fixing the design. Everyone comes in with ideas, certain they have the right solution to remedy the frustrations users had. Then what happens?

On a super collaborative team, everyone is in the design together, just with different skills. Splendid! Everyone was involved in the design of the usability test, they all watched most of the sessions, and they participated in debriefs between sessions. They took detailed, copious notes. And now the “what ifs” begin:

What if we just changed the color of the icon? What if we made the type bigger? What if we moved the icon to the other side of the screen? Or a couple of pixels? What if?

How do you know you’re solving the right problem? Well, the team thinks they’re on the right track because they paid close attention to what participants said and did. But teams often leave that data behind when they’re trying to decide what to do. This is not ideal.

Getting beyond “what if”

On a super collaborative team, everyone is rewarded for doing the right thing for the user, which in turn, is the right thing for the business. Everyone is excited about learning about the goodness (or badness) of the design by watching users use it. But a lot of teams get stuck in the step after observation. They’re anxious to get to design direction. Who can blame them? That’s where the “what ifs” and power plays happen. Some teams get stuck and others try random things because they’re missing one crucial step: going back to the evidence for the design change.

Observing with an open mind

Observations tell you what happened. That is, you heard participants say things and you saw them do things — many, many interesting, sometimes baffling things. Good things, and bad things. Some of those things backed up your theories about how the design would work. Some of the observations blew your theories out of the water. And that’s why we do usability testing: to see, in a low-risk situation like a small, closed test, what it will be like when our design is out in the wild.

Brainstorming the why

The next natural step is to make inferences. These are guesses or judgments about why the things you observed happened. We all do this. It’s usually what the banter is all about in the observation room.

“Why” is why we do this usability testing thing. You can’t get to why from surveys or focus groups. But even in direct observation, with empirical evidence, why is sometimes difficult to ferret out. A lot of times the participants just say it. “That’s not what I was looking for.” “I didn’t expect it to work that way.” “I wouldn’t have approached it that way.” “That’s not where I’d start.” You get the idea.

But they don’t always tell you the right thing. You have to watch. Where did they start? What wrong turns did they take? Where did they stop? What happened in the 3 minutes before they succeeded or failed? What happened in the 3 minutes after?

It’s important to get judgments and guesses out into the fresh air and sunshine by brainstorming them within the team. When teams make guessing the why an explicit act that they do in a room together, they test the boundaries of their observations. It also becomes easy to see where different people on the team saw things similarly and where they saw them differently.

Weighing the evidence

And so we come to the crucial step, the one that most teams skip over, and why they end up in the “what ifs” and opinion wars: analysis. I’m not talking about group therapy, though some teams I’ve worked with could use some. Rather, the team now looks at the strength of the data to support design decisions. Without this step, it is far too easy to choose the wrong inference to direct the design decisions. You’re working from the gut, and the gut can be wrong.

Analysis doesn’t have to be difficult or time-consuming. It doesn’t even have to involve spreadsheets.* And it doesn’t have to be lonely. The team can do it together. The key is examining the weight of the evidence for the most likely inferences.

Take all those brainstormed inferences. Throw them into a hat. Draw one out and start looking at the data you have that supports it as the reason for the frustration or failure. Is there a lot? A little? Any? Everyone in the room should be poring through their notes. What happened in the sessions? How much? How many participants had a problem? What kinds of participants had the problem? What were they trying to do, and how did they describe it?

Answering questions like these, among the team, gets us to understanding how likely it is that this particular inference is the cause of the frustration. After a few minutes of this, it is not uncommon for the team to collectively have an “aha!” moment. Breakthrough comes as the team eliminates some inferences because they’re weak and keeps others because they are strong. Taking the strong inferences together, along with the data that shows what happened and why, snaps the design direction right into focus.
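
If your session notes are already in a spreadsheet or a text file, even a tiny script can make the weight of evidence visible. The sketch below is purely hypothetical: the inference labels, participant IDs, and note structure are invented for illustration, and sticky notes on a whiteboard work just as well.

```python
from collections import defaultdict

# Hypothetical example data: each note records which participant it came from
# and which brainstormed inference it seems to support. None of these labels
# come from a real study; they only illustrate the tally.
notes = [
    {"participant": "P1", "supports": "label wording is unclear"},
    {"participant": "P2", "supports": "label wording is unclear"},
    {"participant": "P3", "supports": "icon placement is unexpected"},
    {"participant": "P4", "supports": "label wording is unclear"},
    {"participant": "P5", "supports": "icon placement is unexpected"},
]

# Count how many distinct participants back each inference.
evidence = defaultdict(set)
for note in notes:
    evidence[note["supports"]].add(note["participant"])

# List inferences strongest-first, so the weak ones are easy to eliminate.
for inference, who in sorted(evidence.items(), key=lambda kv: len(kv[1]), reverse=True):
    print(f"{inference}: {len(who)} participant(s): {sorted(who)}")
```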

Eliminating frustration is a process of elimination

The team comes to the design direction meeting knowing what the priority issues were. Everyone has at least one explanation for the gap between what the design does and what the participant tried to do. Narrowing those guesses to the most likely root cause based on the weight of the evidence, in an explicit, open, and conscious act, takes the “what ifs” out of the next version of the design and shares the design decisions across the team.

* Though 95% of data analysis does. Sorry.
