Making sense of the data: Collaborative data analysis

I’ve often said that most of the value in doing user research is in spending time with users — observing them, listening to them. This act, especially if done by everyone on the design team, can be unexpectedly enlightening. Insights are abundant.

But it’s data, right? Now that the team has done this observing, what do you know? What are you going to do with what you know? How do you figure that out?

 

The old way: Tally the data, write a report, make recommendations

This is the *usual* sequence once the sessions with users are done: count incidents; tally the data; summarize it; if there’s enough data, run some statistical analysis; write a report listing all the issues; maybe apply severity ratings; present the report to the team; make recommendations for changes to the user interface; wait to see what happens.
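(For the data-minded, the counting-and-tallying step usually boils down to something like the small Python sketch below. The incident records, field names, and issues are invented for illustration.)

```python
from collections import Counter

# Hypothetical incident log: one row per incident observed in a session,
# tagged with the issue it relates to and a rough severity rating.
incidents = [
    {"participant": "P1", "issue": "couldn't find the compare button", "severity": 3},
    {"participant": "P2", "issue": "couldn't find the compare button", "severity": 3},
    {"participant": "P2", "issue": "misread the shipping estimate", "severity": 2},
    {"participant": "P4", "issue": "couldn't find the compare button", "severity": 3},
]

# Tally incidents per issue and list them most-frequent first for the report.
tally = Counter(row["issue"] for row in incidents)
for issue, count in tally.most_common():
    print(f"{issue}: {count} incidents")
```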

There are a few problems with this process, though. UXers feel pressure to analyze the data really quickly. They complain that no one reads the report. And if you’re an outside consultant, there’s often no good way of knowing whether your recommendations will be implemented.

And, the researcher owns the data. I say this like it’s a problem because it is. Although the team may have observed sessions and now have some image of the users in their heads, the researcher is responsible for representing what happened by reporting on the data and drawing conclusions. The users are re-objectified by this distance between the sessions and the design direction. And, the UXer is now in the position of *suggesting* to designers what to do. How well is that working for you? Teams I work with find it, well, difficult.

 

The better way: Tell stories, come to consensus on priority, discuss theories

Teams that consistently turn out great experiences do one cool thing with data from user research and usability testing. They Talk To One Another. A lot. That’s the process.

Okay, there’s a slightly more systematic way to approach collaborative analysis. I’ve happened on a combination of techniques that work really well, with a major hat tip to User Interface Engineering, which originated most of the techniques in this process. As I’ve tried these techniques with teams, I’ve monkeyed with them a bit, iterating improvements. So here’s my take:

– Tell stories
– Do a KJ analysis on the observations from the sessions
– Explore the priority observations
– Brainstorm inferences
– Examine the weight of evidence to form opinions
– Develop theories about the design

Tell stories to share experiences with users
This is the simplest thing, ever. These teams can’t wait to tell their teammates what happened in sessions. Some teams set up debrief scrums. Some teams send around emails. Some teams use wikis or blogs. The content? A 300-word description of the person in the session, what she did, what was surprising, and anything else interesting about what the observers heard and saw. (Ideally, the story is structured around the focus questions, or it answers the research questions the study was designed to address.)

Do a KJ analysis to come to consensus on priority issues
KJs, as they’re affectionately known by devotees, are powerful, short sessions in which teams democratically prioritize observations. There are two keys to this technique. First, there’s no point in being there unless you’ve observed sessions with users. Second, because there’s no discussion at all until the last step, every CxO in the company can be there and have no more influence on the design than anyone else in the room. This lack of discussion also means that the analysis happens super fast. (I’ve done this with as many as 45 people, who generated about a thousand observations, and we were done in 45 minutes.)
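If it helps to see the mechanics, here’s a rough Python sketch of the bookkeeping behind a KJ: observations get clustered into affinity groups (silently, on the wall, not in software), each observer spends a fixed number of votes, and the vote count sets the priority. The groups, observations, and voters below are made up for illustration.

```python
from collections import defaultdict

# Hypothetical affinity groups: labels the team wrote on the wall, each
# holding the raw observations that were clustered under it.
affinity_groups = {
    "Comparison feature is hard to discover": [
        "P1 never found the compare button",
        "P3 selected items but gave up on comparing",
    ],
    "Checkout wording is confusing": [
        "P2 misread the shipping estimate",
    ],
}

# Each observer silently spends three votes across the group labels.
votes = {
    "designer": ["Comparison feature is hard to discover"] * 2
                + ["Checkout wording is confusing"],
    "developer": ["Comparison feature is hard to discover"] * 3,
    "cxo": ["Checkout wording is confusing"] * 3,
}

# Priority comes from the vote count, not from who spoke loudest.
tally = defaultdict(int)
for ballot in votes.values():
    for group in ballot:
        tally[group] += 1

for group, count in sorted(tally.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{count} votes: {group} ({len(affinity_groups[group])} observations)")
```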

Explore the priority observations
What did the team see? What did the team hear? After the KJ bubbles the priorities up, either with the whole group of observers or with a subset, pull the key observations from the issues and drill in. Again, this is only what people heard and what they saw (no interpreting). The team usually will have different views on the same issue. That’s good. Getting the different perspectives of business people, technologists, and designers on what they observed will get things ready for the next step.

Brainstorm inferences
These are judgments and guesses about why the things the team observed were happening. The question the team should be asking themselves about each observation is: What’s the gap between the user’s behavior and what the user interface design supports?

So, say you saw users click choices in a list of items that they wanted to compare. But they never found the button to activate the comparison. When the team looks at the gap between what users did or what they were looking for and how the UI is designed, what can you infer from that? Don’t hold back. Put all the ideas out there. But remember, we’re not going to solutions, yet. We’re still guessing about *What happened*.
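One low-ceremony way to capture the brainstorm is to keep each priority observation next to its candidate inferences. A sketch of what that capture might look like (the observation and inferences here are invented, not findings from a real study):

```python
# Hypothetical capture of one brainstorm: the observation plus every guess
# about the gap between what users did and what the UI supports.
brainstorm = {
    "observation": "Users selected items to compare but never activated the comparison",
    "candidate_inferences": [
        "The compare button sits below the fold on common screen sizes",
        "The button label doesn't match the word users have in mind",
        "Selecting items gives no feedback that there is a next step",
    ],
}

for inference in brainstorm["candidate_inferences"]:
    print("-", inference)
```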

Examine the weight of evidence to form opinions
By now the team has pored over what they heard and what they saw. They’ve drawn inferences about what might be happening in the gap between behavior and UI. *Why* are these things happening?

A look at the data will tell you. Note that by now you’re analyzing a relatively small subset of the data from the study because, through this process, you’ve eliminated a lot of the noise. The team should now be asking: How many incidents of the issue were there? Which participants had the issue? Are there any patterns or trends that give more weight to some of the team’s inferences than to others?
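If you want to make the weighing concrete, a quick sketch like the one below can do it: for each priority issue, count the incidents and note which participants hit it, so the team can see whether an inference rests on a pattern or on one noisy session. Again, the data here is invented.

```python
from collections import defaultdict

# Hypothetical incident log carried forward from the sessions.
incidents = [
    {"participant": "P1", "issue": "comparison hard to discover"},
    {"participant": "P3", "issue": "comparison hard to discover"},
    {"participant": "P4", "issue": "comparison hard to discover"},
    {"participant": "P2", "issue": "checkout wording confusing"},
]

# For each issue, collect the distinct participants who ran into it.
evidence = defaultdict(set)
for row in incidents:
    evidence[row["issue"]].add(row["participant"])

# Issues backed by more participants carry more weight than one-offs.
for issue, people in sorted(evidence.items(), key=lambda kv: len(kv[1]), reverse=True):
    print(f"{issue}: {len(people)} participants ({', '.join(sorted(people))})")
```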

By collaborating on this examination of the weight of evidence, the team shares points of view, generates feasible solutions, and forms a group opinion on the diagnosis.

Develop theories about the design
By now the team should have inferences with heft. That is, the winning inferences have ample data to support them. Having examined that evidence, the team can easily form a consensus opinion about why the issue is an issue. It’s time to determine a design direction. What might solve the design problem?

In essence, this decision is a theory. The team has a new hypothesis — based on evidence from the study they’ve just done — about the remedies to issues. And it’s time to implement those theories and, guess what, test them with users.

Did you see the report?

Look, Ma, no report! The team has been involved in all of the data analysis. They’ve bought in, signed up, signed off. And, they’re moving on. All without a written report. All without recommendations from you.

 

What’s valuable is having the users in the designers’ heads

Getting the team to spend time with users is a first step. Observing users, listening to users will be enlightening. Keeping the users in the heads of designers through the process of design is more difficult. How do you do that? Collaborate on analyzing observations; explore inferences together; weigh the evidence as a group. From this, consensus on design direction reveals itself.

+ + + + + + + + + + + + + + + +

I stole all these ideas. Yep. User Interface Engineering gets all the credit for these awesome techniques. I just repackaged them. To see the original work, check out these links:
– Group Activities to Demonstrate Usability and Design
– The KJ-Technique: A Group Process for Establishing Priorities
– The Road to Recommendation

 

3 thoughts on “Making sense of the data: Collaborative data analysis”

  1. I love the idea of collaboration and having everyone involved in coming up with what happened, why it happened, and what we should do about it. However, what happens if you have no report? Is anything written down and compiled anywhere? How can you compare previous findings and usability issues in your next iteration if you didn't write anything down? As we all know, our memories are faulty. Additionally, what if someone new joins the team and needs to quickly understand what evaluations, findings, and recommendations happened prior to their joining the team? Where do they get this information?

  2. Hi Ashley,

    If your team is very structured and slower moving, rather than agile/Agile, then they might want and use reports. But I doubt it. In my experience, no one goes back to the reports later. Development teams are very much always-move-forward creatures.

    Also, it's rare that new team members spend time looking at reports. They get assignments and work with teammates to accomplish work. But it may be that the culture of your team is different.

    Whenever I've used these techniques with teams, we do have things written down. There are the notes that everyone took during the sessions. And often in these sessions, we're working from rolling issues lists that we've updated each day. If we're iterating design really fast, we take screenshots and post them on a wiki or team blog. (You might be interested in the techniques here: http://usabilitytestinghowto.blogspot.com/2009/12/what-to-do-with-data-moving-from.html)
