The essence of usability testing, in your pocket

I’ve encountered a lot of user researchers and designers lately who say to me, “I can’t do all the testing there is to do. The developers are going to have to evaluate the usability of the design themselves. But they’re not trained! I’m worried about how to give them enough skills to get good data.”

What if you had a tiny guide that would give your team just the tips they need to guide them in creating and performing usability tests? It’s here!

 

Usability Testing Pocket Guide

This is a 32-page, 3.5 x 5-inch book that includes 11 simple steps along with a quick checklist at the end to help you know whether you’re ready to run your test.

The covers are printed on 100% recycled chipboard. The internal pages are printed with vegetable-based inks on 100% recycled paper. The Pocket Guides are printed by Scout Books and designed by Oxide Design Co.

You can order yours here.


Why are researchers afraid of developers?

 

The other evening I was at a party with a whole lot of UX-y people, some of them very accomplished and some of them new to the craft. I grabbed an egg nog (this is why I love this time of the year!) and stepped up to a cluster of people. I knew a couple of them, and as I entered the circle, I overheard one of them saying that he had attended a workshop at his place of work that day on how to talk to developers, and it had really helped.

 

“Helped with what?” I said. But what I’d thought was, Good lord, it’s not as if developers are a different species. What’s going on here? As I listened longer, I heard others in the circle sympathize. They were afraid of the developers they were supposed to be on the same team with.

 

Researchers are intimidated by developers because developers have two superpowers. They Make and they Ship. Researchers don’t. Researchers and the data they produce actually get in the way of making and shipping.

 

Developers are not rewarded for listening to researchers, and they’re generally not rewarded for implementing findings from research about users. Acting on research data takes time, so paying attention to it makes shipping harder and slower. (Let’s not even get into getting developers to participate in research.) Everything about application development methodology is optimized for shipping. It is not optimized for making something superb that will lead to an excellent user experience.

Continue reading Why are researchers afraid of developers?

Crowd-sourced research: trusting a network of co-researchers

In the fall of 2012, I seized the opportunity to do some research I’ve wanted to do for a long time. Millions of users would be available and motivated to take part. But I needed to figure out how to do a very large study in a short time. By large, I’m talking about reviewing hundreds of websites. How could we make that happen within a couple of months?

Do election officials and voters talk about elections the same way?

I had BIG questions. What were local governments offering on their websites, and how did they talk about it? And, what questions did voters have? Finally, if voters went to local government websites, were they able to find out what they needed to know? Continue reading Crowd-sourced research: trusting a network of co-researchers

Just follow the script: Working with pro and proto-pro co-researchers

She wrote to me to ask if she could give me some feedback about the protocol for a usability test. “Absolutely,” I emailed back, “I’d love that.”

By this point, we’d had 20 sessions with individual users, conducted by 5 different researchers. Contrary to what I’d said, I was not in love with the idea of getting feedback at that moment, but I decided I needed to be a grown-up about it. Maybe there really was something wrong and we’d need to start over.

That would have been pretty disappointing – starting over – because we had piloted the hell out of this protocol. Even my mother could do it and get us the data we needed. I was deeply curious about what the feedback would be, but it would be a couple of days before the concerned researcher and I could talk. Continue reading Just follow the script: Working with pro and proto-pro co-researchers

Coming soon…

There’s a usability testing revival going on. I don’t know if you know that.

This new testing is leaner, faster, smarter, more collaborative, and covers more ground in less time. How does that happen? Everyone on the team is empowered to go do usability testing themselves. This isn’t science, it’s sensible design research. At its essence, usability testing is a simple thing: something to test, somewhere that makes sense, with someone who would be a real user.

But not everyone has time to get a Ph.D. in Human Computer Interaction or cognitive or behavioral psychology. Most of the teams I work with don’t even have time to attend a 2-day workshop or read a 400-page manual. These people are brave and experimental, anyway. Why not give them a tiny, sweet tool to guide them, and just let them have at it? Let us not hold them back. Continue reading Coming soon…

Ending the opinion wars: fast, collaborative design direction

I’ve seen it dozens of times. The team meets after observing people use their design, and they’re excited and energized by what they saw and heard during the sessions. They’re all charged up about fixing the design. Everyone comes in with ideas, certain they have the right remedy for the frustrations users had. Then what happens?

On a super collaborative team, everyone is in the design together, just with different skills. Splendid! Everyone was involved in the design of the usability test, they all watched most of the sessions, and they participated in debriefs between sessions. They took detailed, copious notes. And now the “what ifs” begin:

What if we just changed the color of the icon? What if we made the type bigger? What if we moved the icon to the other side of the screen? Or a couple of pixels? What if? Continue reading Ending the opinion wars: fast, collaborative design direction

The importance of rehearsing

Sports teams drill endlessly. They walk through plays, they run plays, they practice plays in scrimmages. They tweak and prompt in between drills and practice. And when the game happens, the ball just knows where to go.

This seems like such an obvious thing, but we researchers often pooh-pooh dry runs and rehearsals. In big studies, it is common to run large pilot studies to get the kinks out of an experiment design before running the experiment with a large number of participants.

But I’ve been getting the feeling that we general research practitioners are afraid of rehearsals. One researcher I know told me that he doesn’t do dry runs or pilot sessions because he fears that makes it look to his team like he doesn’t know what he is doing. Well, guess what. The first “real” session ends up being your rehearsal, whether you like it or not. Because you actually don’t know exactly what you’re doing — yet. If it goes well, you were lucky and you have good, valid, reliable data. But if it didn’t go well, you just wasted a lot of time and probably some money. Continue reading The importance of rehearsing

Wilder than testing in the wild: usability testing by flash mob

It was a spectacularly beautiful Saturday in San Francisco. Exactly the perfect day to do some field usability testing. But this was no ordinary field usability test. Sure, there’d been plenty of planning and organizing ahead of time. And there would be data analysis afterward. What made this test different from most usability tests?

Are you testing for delight?

 

Maybe you just read Jared Spool’s article about deconstructing delight. And maybe you want to hear my take, since Jared did such a good job of shilling for my framework. 

Here’s a talk I first gave a couple of years ago and have been giving since. Have a listen. (The post below was originally published in May 2012.)

Everybody’s talking about designing for delight. Even me! Well, it does get a bit sad when you spend too much time finding bad things in design. So, I went positive. I looked at positive psychology, and behavioral economics, and the science of play, and hedonics, and a whole bunch of other things, and came away from all that with a framework in mind for what I call “happy design.” It comes in three flavors: pleasure, flow, and meaning.

I used to think of the framework as being in layers or levels. But it’s not like that when you start looking at great digital designs and the great experiences they are part of. Pleasure, flow and meaning end up commingled.

So, I think we need to deconstruct what we mean by “delight.” I’ve tried to do that in a talk that I’ve been giving. Here are the slides:

 

You can listen to audio of the talk from the IA Summit here.

The form that changed *everything*

There’s a lot of crap going on in the world right now: terrorism, two major wars, and worldwide economic collapse. Let’s not forget the lack of movement on climate change and serious unrest in the Middle East and other places.

People trust governments less than ever — perhaps because of the transparency that ambient technology brings — leading to more regulation of privacy and security, but also to protests. Protests that started in Egypt have rippled around the world.

This wave started with a butterfly. Not the butterfly of chaos theory, but there is a metaphor here that should not be missed: when a butterfly flaps its wings in the Amazon rainforest, there are ripple effects you might never anticipate. The butterfly I am talking about is the butterfly ballot used in Palm Beach County, Florida in the US 2000 presidential election.

Palm Beach County, Florida November 2000 ballot