Usability testing is HOT

For many of us, usability testing is a necessary evil. For others, it’s too much work, or it’s too disruptive to the development process. As you might expect, I have issues with all that. It’s unfortunate that some teams don’t see the value in observing people use their designs. Done well, usability testing can be an amazing event in the life of a design. Even done very informally, it can still turn up useful insights that help a team make informed design decisions. But I probably don’t have to tell you that.

Usability testing can be enormously elevating for teams at all stages of UX maturity. In fact, there probably isn’t nearly enough of it being done. Even enlightened teams that know about and run usability tests probably aren’t doing them often enough. There seems to be a correlation between successful user experiences and how often and how long designers and developers spend observing users. (hat tip Jared Spool) Continue reading “Usability testing is HOT”

Involving older adults in design of the user experience: Inclusive design

Despite the reality of differences due to aging, research has also shown that in many cases, we do not need a separate design for people who are age 50+. We need better design for everyone.

Everyone performs better on web sites where the interaction matches users’ goals; where navigation and information are grouped well; where navigation elements are consistent and follow conventions; where writing is clear, straightforward, in the active voice, and so on. And, much of what makes up good design for younger people helps older adults as well.

For example, we know that most users, regardless of age, are more successful finding information in broad, shallow information architectures than they are with deep, narrow hierarchies. When web sites make their sites easier to use for older adults, all of their users perform better in usability studies. The key is involving older adults in user research and usability testing throughout design and development. Continue reading “Involving older adults in design of the user experience: Inclusive design”

Bonus research: Do the recruiting yourself

There are some brilliant questions on Quora. This morning, I was prompted to answer one about recruiting.

The asker wanted to know, How do I recruit prospective customers to shadow as a part of a user-centered design approach? The asker elaborated:

I’m interested in shadowing prospective customers in order to better understand how my tool can fit into their life and complement, supplement, or replace the existing tools that they use. How do I find prospective customers? How do I convince them to let me shadow them?

Seemed like a very thoughtful question. I have some experience with recruiting for field studies and other user research, so I thought I might share my lessons learned. Here’s my answer. Would love to hear yours. Continue reading “Bonus research: Do the recruiting yourself”

Usability testing is broken: Rethinking user research for social interaction design

How many of you have run usability tests that look like this: Individual, one-hour sessions, in which the participant performs one or more tasks from a scenario that you and your team have come up with, on a prototype, using bogus or imaginary data. It’s a hypothetical situation for the user; sometimes they’re even role-playing.

Anyone? That’s what I thought. Me too. I just did it a couple of weeks ago.

But that model of usability testing is broken. Why? Because one of the first things we found out was that the task we were asking people to do – doing some basic financial estimates based on goals for retirement – involved more than the person in the room with me.

For the husbands, the task involved their wives because the guys didn’t actually know what the numbers were for the household expenses. For the women, it was their children, because they wanted to talk to them about medical expenses and plans for assisted living. For younger people it was their parents or grandparents, because they wanted to learn from them how they’d managed to save enough to help them through school and retire, too. Continue reading “Usability testing is broken: Rethinking user research for social interaction design”

Researcher as director: scripts and stage direction

For most teams, the moderator of user research sessions is the main researcher. Depending on the comfort level of the team, the moderator might be a different person from session to session in the same study. (I often will moderate the first few sessions of a study and then hand the moderating over to the first person on the design team who feels ready to take over.)

To make that work, it’s a good practice to create some kind of checklist for the sessions, just to make sure that the team’s priorities are addressed. For a field study or a formative usability test, a checklist might be all a team needs. But if the team is working on sussing out nuanced behaviors or solving subtle problems, we might want a bit more structure. Continue reading “Researcher as director: scripts and stage direction”

Is your team stuck in a bubble?

This happens. The team is heads down, just trying to do work, to make things work, and then you realize it. Perspective is gone. Recently I gave a couple of talks about usability testing and collaboratively analyzing data. There was a guy in the first row who was super attentive as I showed screen shots of web sites and walked the attendees through tasks that regular people might try to do on the sites. Sweat beaded on his brow. His hands came up to his forehead the way they do when someone has a sudden realization. He put his hand over his mouth. I assumed he was simply passionate about web design and was feeling distressed about the crimes this web site committed against its users.

Turns out, he was the web site’s owner. Continue reading “Is your team stuck in a bubble?”

Usability isn’t just about eliminating frustration anymore

[This is an excerpt of an article published in UX Magazine on June 16, 2010.]

I’m a devotee of TED talks. I was once assigned to watch several TED talks to deconstruct what made each a good or a bad presentation. TED topics are wide-ranging, though they generally relate to the categories that make up the “TED” acronym: Technology, Entertainment, and Design. I tend to stick to the design topics, but during my research I came across a video of Martin Seligman talking about positive psychology.

Happiness is a topic I’ve been interested in for a while. According to Darrin McMahon, author of Happiness: A History, happiness is a relatively new construct in the history of humanness. It’s only been in the last 250 years or so in the West that we’ve been safe and healthy enough to think about how we feel emotionally. Continue reading “Usability isn’t just about eliminating frustration anymore”

Overcoming fear of moderating UX research sessions

It always happens: Someone asks me about screwing up as an amateur facilitator/moderator for user research and usability testing sessions. This time, I had just given a pep talk to a bunch of user experience professionals about sharing responsibility with the whole team for doing research. “But what if the (amateur) designer does a bad job of moderating the session?”

What not to do

There are numerous ways in which a moderator can foul things up. Here are just a few possibilities that might render the data gathered useless:

  • Leading the participant
  • Interrupting or intervening at the wrong time
  • Teaching or training rather than observing and listening
  • Not following a script or checklist
  • Arguing with the participant

Rolf Molich and Chauncey Wilson put together an extensive list of the many wrong things moderators could do. There are dozens of behaviors on the list. I have committed many of these sins myself at some point. It’s embarrassing, but it is not the end of the world. So, here, let’s talk about what to do to be the best possible moderator in your first session. Continue reading “Overcoming fear of moderating UX research sessions”

Making sense of the data: Collaborative data analysis

I’ve often said that most of the value in doing user research is in spending time with users — observing them, listening to them. This act, especially if done by everyone on the design team, can be unexpectedly enlightening. Insights are abundant.

But it’s data, right? Now that the team has done this observing, what do you know? What are you going to do with what you know? How do you figure that out?


The old way: Tally the data, write a report, make recommendations

This is the *usual* sequence of things after the sessions with users are done: finish the sessions; count incidents; tally data; summarize data; if there’s enough data, do some statistical analysis; write a report listing all the issues; maybe apply severity ratings; present the report to the team; make recommendations for changes to the user interface; wait to see what happens.
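The tallying step in that sequence is really just counting: each observed incident gets logged with an issue description and a severity, then rolled up into a prioritized list. A minimal sketch in Python, where the issue names and the 1–3 severity scale are made up for illustration:

```python
from collections import Counter

# Hypothetical incident log from a handful of usability sessions:
# (issue, severity), where severity 1 = cosmetic, 3 = task-blocking.
incidents = [
    ("missed the Save button", 3),
    ("missed the Save button", 3),
    ("confused by jargon in form labels", 2),
    ("missed the Save button", 3),
    ("confused by jargon in form labels", 2),
    ("didn't notice the confirmation message", 1),
]

# Count how often each issue came up across sessions.
tally = Counter(issue for issue, _ in incidents)

# Sort by frequency, then severity, to produce a prioritized issue list.
severity = {issue: sev for issue, sev in incidents}
report = sorted(tally.items(), key=lambda kv: (-kv[1], -severity[kv[0]]))

for issue, count in report:
    print(f"{count}x (sev {severity[issue]}): {issue}")
```

That prioritized list is essentially what ends up in the report, which is exactly the artifact this process tends to over-invest in.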

There are a couple of problems with this process, though. UXers feel pressure to analyze the data really quickly. They complain that no one reads the report. And if you’re an outside consultant, there’s often no good way of knowing whether your recommendations will be implemented. Continue reading “Making sense of the data: Collaborative data analysis”

You are not your user. No matter how good you think you are.


Listen up, people. This is why quantity is not quality, and why you are not your user.


The lesson for today on participant sampling is Google Buzz. Google has been working on Buzz for some time. And it’s a cool idea. Integrating the sharing of photos, status updates, conversations, and email is a thing a lot of us have been looking for. Buzz makes lots of automatic connections. That’s what integrating applications means.

BUT. One of the features of Buzz was that it would automatically connect you to people whom you have emailed in Gmail. On the surface, a great idea. A slick idea, which worked really well with 20,000 Google employees.

Large samples do not always generate quality data
Twenty thousand. Feedback from 20,000 people is a lot of data. How many of us would kill to have access to 20,000 people? So. How can such a large sample be bad? Large samples can definitely generate excellent data on which to base fine-grained design decisions. Amazon and Netflix use very large samples for very specialized tests. There’s discussion everywhere, including at the recent Interaction10 conference in Savannah, about cheap methods for doing remote, unmoderated usability testing with thousands of people. More data seems like a good idea.

If you have access to 20,000 people and you can handle the amount of data that could come out of well-designed research with that sample, go for it. But it has to be the right sample.

Look outside yourself (and your company)
Google employees are special. They’re very carefully selected by the company. They have skills, abilities, and lives that are very different from most people outside Google. So, there’s the bias of being selected to be a Googler. And then there’s indoctrination as you assimilate into the corporate culture. It’s a rarified environment.

But Google isn’t special in this way. Every organization selects its employees carefully. Every organization has a culture that new people must assimilate into, or they leave. In aggregate, the people in an organization begin to behave similarly and think similarly. They aspire to the same things, like wanting products to work.

But what about 37signals and Apple? They don’t do testing at all. (We don’t actually know this for sure; they may not call it testing.) They design for themselves, and their products are very successful in the marketplace. I think those companies do know a lot about their customers. They’ve observed. They’ve studied. And, over time, they do adjust their designs (look at how the iPod’s interaction design changed from its first release in 2001 to now). Apple has also had its failures (Newton, anyone?).

The control thing
By not using an outside sample, Google ran into a major interaction design problem. About as big as it gets. This is a control issue, not a privacy issue, though the complaints were about oversharing. One of the cardinal rules of interaction design is to always let the user feel she’s in control. By taking control of users’ data, Buzz invaded users’ privacy. That’s the unfortunate outcome in this case, and now users will trust Google less. It’s difficult to regain trust. But I digress.

The moral of today’s lesson: Real users always surprise us
Google miscalculated when it assumed that everyone you email is someone you want to share things with, and that you might want those people connected to one another. In a work setting, this might be true. In a closed community like a corporation, this might be true. But the outside world is much messier.

For example, I have an ex. He emails me. Sometimes, I even email him back. But I don’t want to share things with him anymore. We’re not really friends. I don’t want to connect him to my new family.

Even testing with friends and family might have exposed the problem. Google has a Trusted Tester program. Though there are probably some biases in that sample because of the association with Google employees, they are not Google employees. This makes friends and family who use Gmail one step closer to typical users. But Google didn’t use Trusted Testers for Buzz.

You get to choose your friends in real life. Google could have seen this usage pattern pretty quickly just by testing with a small sample of people who live beyond the Google garden walls.