This article was originally published on December 7, 2009.
What is data but observation? Observations are what was seen and what was heard. As teams work on early designs, the data is often about obvious design flaws and higher-order behaviors, not about tallying details. In this article, let’s talk about tools for working with observations made in exploratory or formative user research.
Many teams have a sort of intuitive approach to analyzing observations that relies on anecdote and aggression: whoever is loudest gets their version accepted by the group. Over the years, I’ve learned a few techniques for getting past that dynamic and on to informed inferences that lead to smart design direction and to solution theories that can then be tested.
Collaborative techniques give better designs
The idea is to collaborate. Let’s start with the assumption that the whole design team is involved in the planning and doing of whatever the user research project is.
Now, let’s talk about some ways to expedite analysis and consensus. Doing this has the side benefit of minimizing reporting – if everyone involved in the design direction decisions has been involved all along, what do you need reporting for? (See more about this in the last section of this article.)
Some collaborative analysis techniques I’ve seen work really well with teams are:
- Between-session debriefs
- Rolling issues lists
- K-J analysis
- Cross-matching rolling issues lists with K-Js
Between-session debriefs
Do you just grind through sessions until you’re through them all, only to end up in an excruciatingly long meeting with the team where you have to replay every session because no one was there but you?
Schedule extra time between sessions
Try this: Schedule more time than usual between sessions. If you usually schedule 15 minutes between usability test sessions, for example, then next time, schedule 30 minutes. Use the additional time to debrief with observers.
If the team sees that discussion between sessions will help move the design forward, they’re more engaged. If team members are already observing sessions, then this gives you a chance to manage the conversations that they’re already having.
Knowing that you’re going to debrief between sessions, the team is more likely to come to more sessions and to pay full attention. They’ll learn that if they’re at the sessions, they get more say in the design outcome, and the outcome will make more sense. If they don’t attend, they don’t get as much say, simply because they’ve observed less and have less evidence for their inferences.
All you have to do is get the team to talk about what they saw and what they heard, and what was most surprising about that. Save the design solutions for later, unless you’re doing rapid iterative testing.
Play ‘guess the reason’
To get teams in the practice of sticking to discussing observations rather than jumping to design conclusions, I’ve tried playing a game called “Guess the Reason” with them. It’s easy. Show a user interface – just one page or screen or panel – and describe the behavior observed. Then ask the team to guess why that happened. It’s a brainstorming activity. The first person to go to a design solution has to put money in the team’s drink fund. You can use the same system during your own debriefs, which can make it fun (and profitable).
Rolling issues lists
I’ve written about these before. Simply put, this technique gets the team further engaged in collecting observations and takes the burden off the moderator/researcher.
Gather whiteboard and markers
The idea is that the observations that come out in the debrief get written down on a whiteboard that all the observers can see. Each observation gets tracked to the participants who had the issue. As the moderator, you start the list, but as the sessions go on, you encourage the team to add their own.
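If you also want a digital copy alongside the whiteboard, here’s a minimal sketch of the tracking structure in Python. The observation wording and participant IDs are hypothetical examples, not prescriptions:

```python
# A minimal digital companion to the whiteboard rolling issues list.
# Each observation maps to the set of participants who hit it; the
# wording and participant IDs below are hypothetical examples.
from collections import defaultdict

issues = defaultdict(set)

def log_issue(observation, participant):
    """Record that a participant exhibited an observed issue."""
    issues[observation].add(participant)

# Observers add entries as the sessions go on.
log_issue("Missed the 'Save draft' link entirely", "P1")
log_issue("Missed the 'Save draft' link entirely", "P3")
log_issue("Expected search on the home page", "P2")

# At any point, see which issues recur across participants.
for observation, participants in sorted(
        issues.items(), key=lambda kv: len(kv[1]), reverse=True):
    print(f"{len(participants)} participant(s): {observation} "
          f"({', '.join(sorted(participants))})")
```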
Natural consensus through debrief discussion
As team members add, and the team talks about the observations that go onto the list, there’s a natural consensus building that goes on. Does everyone agree that this is something we want to track? Does everyone agree that this way of talking about it makes sense to everyone?
Draws out what is important to the team
When I moderate user research sessions, doing this often means that I don’t have to take notes at all because the team is recording what is important to them. As they’re doing that, I also get to see what is important to the different roles on the team.
K-J analysis
I admit that I stole this idea from User Interface Engineering (UIE). But it’s one of the most powerful tools in the collaboration toolbox. Jared Spool has an excellent article (that doubles as a script) about this technique.
When I do K-Js in workshops, everyone gets really excited. It’s an amazing tool that objectively, democratically identifies the high-priority items from subjective data.
The technique was invented by Jiro Kawakita to help his co-workers quickly come to consensus on priorities by getting them to discuss only what was really important to the whole team. There are 8 steps:
1. Create a focus question. For a usability test, it might be, “What are the most important changes to make to improve the user experience for this design?” In workshops, I often choose a more philosophical question, like, “What obstacles do teams face in implementing user experience design practices in their organizations?”
2. Get the group together. When I use this technique with teams at the end of a user research project, I invite only people who observed at least one session.
3. Put data or opinions on sticky notes. For the user research focus question, I ask for specific, one-sentence observations that are clear enough for other people to understand. (Team members often bring their computers with them to go through the notes they took during sessions.)
4. Put the sticky notes on the wall. Everyone puts their sticky notes up, in random order on one wall, while reviewing what other people are also putting on the wall. Allow no discussion.
5. Group similar items. This step is like affinity diagramming. Pick up a sticky; find something that seems related to it. Move to another wall, and put those two stickies up, one above the other, to form a column. All team members do this step together. Keep going until all the stickies have moved to the new wall and every one is in a column. No discussion.
6. Name each column. Using a different color of stickies now, everyone in the room writes a name for each group and puts it on the wall above the appropriate column. Everyone must name every column, unless someone else has already stuck up exactly the name they had written down. No discussion.
7. Vote for the most important columns. Everyone writes down what they think are the 3 most important columns. Next, they vote by marking 3 Xs on their most important column, 2 Xs on the second most important, and 1 X on the third. Again, no discussion.
8. Tally the votes, which ranks the columns. On a flip chart or a whiteboard, number a list from 20 to 1. Pull all the column-name stickies that have votes and stick them next to the number matching the votes marked on them. Now the facilitator can read off to the team which groups got the most votes and are thus the highest priority. This is the opportunity for discussion, as the team determines which stickies can be combined. The decision to combine stickies – and thus, what the most important topics are – must be unanimous.
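To make the arithmetic in steps 7 and 8 concrete, here’s a minimal sketch of the 3-2-1 tally in Python. The column names and ballots are hypothetical examples, not data from a real session:

```python
# A minimal sketch of the K-J 3-2-1 vote tally from steps 7 and 8.
# Each ballot lists one voter's top three columns in order; the
# column names and ballots are hypothetical examples.
from collections import Counter

WEIGHTS = (3, 2, 1)  # Xs for a voter's 1st, 2nd, and 3rd choices

ballots = [
    ["Checkout errors", "Navigation labels", "Search results"],
    ["Checkout errors", "Onboarding copy", "Navigation labels"],
    ["Navigation labels", "Checkout errors", "Search results"],
]

tally = Counter()
for ballot in ballots:
    for column, weight in zip(ballot, WEIGHTS):
        tally[column] += weight

# Read the ranking back to the team, highest priority first.
for column, votes in tally.most_common():
    print(f"{votes} votes: {column}")
```

With three voters, a column can earn at most 9 votes; the script just automates the count you’d otherwise do on the flip chart.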
You’re done. Very cool. Now the team knows exactly what to focus on to discuss, resolve, and remedy. And, if you’re doing a report, you now know what’s worth reporting on. (See what I meant about reporting?)
Cross-match the rolling issues with the K-J
If your team or your management is into validation, you can now go back to your desk and compare what came out of the rolling issues list with what the K-J generated. My experience so far has been that they match up. And it isn’t just because everyone at the K-J was primed by being at all the debriefs between sessions: people who observed remotely – and missed the debriefs – often contribute to the K-Js live, so you’d expect their data to change the K-J results. It doesn’t. Your mileage may vary, but so far, mine matches up.
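If you want to make that comparison explicit, a minimal sketch might look like this. Both lists below are hypothetical, and in practice the wording rarely matches exactly, so you’d still eyeball near-matches:

```python
# A quick sanity check comparing the rolling issues list with the
# K-J's top columns. Both lists are hypothetical examples; in practice
# the wording rarely matches exactly, so you'd eyeball near-misses too.
rolling_issues = {
    "Checkout errors",
    "Navigation labels",
    "Search results",
    "Slow page loads",
}
kj_top_columns = {
    "Checkout errors",
    "Navigation labels",
    "Onboarding copy",
}

print("In both lists:       ", sorted(rolling_issues & kj_top_columns))
print("Only on rolling list:", sorted(rolling_issues - kj_top_columns))
print("Only in K-J results: ", sorted(kj_top_columns - rolling_issues))
```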
Directed collaboration is fun and generates better design solutions
When you help the team review together what they saw and heard during user research sessions, there is more likely to be consensus, buy-in, and a shared vision of the design direction. In testing early designs especially, consensus, buy-in, and shared vision are crucial to ending up with great user experiences. Collaborative techniques for analyzing observations turn work into fun, and take the pressure off the researcher to generate results. Because everyone on the team was involved in generating observations and setting priorities, everyone can move on quickly to making informed decisions that lead to coordinated, smart designs.