Ditch the book – Come to a virtual seminar on “usability testing in the wild”

I’m excited about getting to do a virtual seminar with the folks at User Interface Engineering (www.uie.com) on Wednesday, October 22 at 1 pm Eastern Time. I’ll be talking about doing “minimalist” usability tests — boiling usability testing down to its essence and doing just what is necessary to gather data to inform design decisions.

If you use my promo code when you sign up for the session — DCWILD — you can get in for the low, low price of $99 ($30 off the regular price of $129). Listen and watch in a conference room with all your teammates and get the best deal ever.

For more about the virtual seminar, see the full description.

Usability testing in the wild – ballots

I’ve been busy the last few weeks doing some of the most challenging usability testing I’ve ever done. There were three locations where I did day-long test sessions. But that wasn’t the challenging part. The adventure came in testing ballots for the November election.

What was wild about it?
This series of tests came together through a project with the Brennan Center for Justice and the Usability Professionals’ Association. The Brennan Center released a report in July called Better Ballots, which reviewed ballot designs and instructions, finding that

  • hundreds of thousands of voters have been disenfranchised by ballot design problems
  • there has been little or no federal or state guidance on ballot design that might have been helpful to elections officials who define and design ballots at the local level
  • usability testing is the best way to ensure that voters can use ballots to vote as they intend

Also in the report, the Brennan Center strongly urged election officials to conduct usability tests on ballots. Recommending usability testing as part of the ballot design process is a major development in the election world. The UPA Voting and Usability Project has developed the LEO Usability Test Kit to help local elections officials do their own simple, quick usability tests of ballot designs.

But not all local elections officials were ready to do their own usability tests, and some wanted objective outsiders to help evaluate ballots for this particular, important upcoming election.

I did tests in three locations — Marin County, California; Los Angeles County, California; and Clark County, Nevada (home of Las Vegas) — with about 40 participants across the three locations. Several other UPA volunteers conducted tests and reviews in Florida, New Hampshire, and Ohio. In addition, UPAers trained local elections officials on usability testing and the LEO Test Kit in Ohio, Iowa, and a couple of other spots I can’t think of right now.

Pulling together a test in just a few days, including recruiting and scheduling participants
The Brennan Center report was released toward the end of July. Most ballots must be ready to print or roll out right now, in the middle of September. The Brennan Center sent the report to every election department in the US, and the response was great. Most requests came in during August, so the five or six of us available from the UPA Voting and Usability Project scrambled to cover the requests for tests.

We had the assistance of one of the Brennan Center staff to help coordinate recruiting, although it took some pretty serious networking to get people into sessions on short notice, often within a few days.

The Brennan Center covered the expenses, but the time and effort spent by the people who worked with local elections officials and conducted the sessions was purely pro bono.

Not knowing what I would be testing until I walked in the door
For two of the three tests, I didn’t see exactly what I was going to be testing until I walked in the door of the election department. (I got the other ballot two days before the test.) This happened for a couple of reasons. Sometimes the local election official didn’t have a lot of information about what could be evaluated and how that might happen. Sometimes the ballot wasn’t ready until the last minute because of final filing deadlines or other constraints. Sometimes it was all of the above.

Fortunately, the main task is pretty straightforward: Vote! Use the ballot as you normally would. But there are neat variations. Are write-ins possible? On an electronic voting machine, how do you change a vote? What if you’re mailing in a ballot – what’s different about that, and how do the design and instructions have to compensate for voters not having poll workers available to answer questions?

Giving immediate results and feedback
So, we’ve gotten copies of ballots, or something close to final on an electronic voting machine. We’ve met briefly with the local elections officials (and often with their advisory committees). We’ve recruited participants (sometimes off the street). We’ve conducted 8, 10, or 15 twenty-minute sessions in one day. Now it’s time to roll up what we saw in the sessions and talk with the person who owns the ballot about how the evaluations went.

Handling enthusiastic observers and activists
A lot of people are concerned with the usability, accessibility, and security of ballots and voting systems. You probably are. Some are more concerned than others. Those are the people who show up to observe sessions. They’re well informed, they’re enthusiastic, and they’re skeptical. The observers and activists (many of whom signed up to be test participants) were also keenly interested in understanding this activity. How was this different from focus groups or reviews by experts? How do we know that the problems we’ve witnessed are generalizable to other voters in the jurisdiction?

The good news: Mostly, the ballots worked pretty well. The local elections officials usually have the ability to make small changes at this stage and they were willing, especially to improve instructions to voters. By doing this testing, we were able to effect change and to make voting easier for many, many voters. (LA County alone has more than 3 million registered voters.)

Links:
Brennan Center for Justice report Better Ballots
http://www.brennancenter.org/content/resource/better_ballots/

UPA’s Voting and Usability Project
http://www.usabilityprofessionals.org/civiclife/voting/
voting@usabilityprofessionals.org

LEO Usability Test Kit
http://www.usabilityprofessionals.org/civiclife/voting/leo_testing.html

Ethics guidelines for usability and design professionals working in elections
http://www.usabilityprofessionals.org/civiclife/voting/ethics.html

Information about being a poll worker
http://www.eac.gov/voter/poll%20workers

EAC Effective Polling Place Designs
http://www.eac.gov/election/effective-polling-place-designs

EAC Election Management Guidelines
http://www.eac.gov/election/quick-start-management-guides

Are you doing “user testing” or “usability testing”?

Calling anything “user testing” just seems bad. Okay, contrary to the usual content on this blog – which I’ve tried to make about method and technique – this discussion is philosophical and political. If you feel it isn’t decent to talk about the politics of user research in public, then perhaps you should click away right now.

I know, talking about “users” opens up another whole discussion that we’re not going to have here, now. In this post, I want to focus on the difference between “usability testing” and “user testing” and why we should be specific.

When I say “usability test,” what I’m talking about is testing a design for how usable it is. Or rather, how unusable it is, because that’s what we can measure: how hard it is to use, how many errors people make, how frustrated people feel when using it. Usability testing is about finding the issues that leave a design lacking. By observing usability test sessions, a team can learn what the issues are, make inferences about why they’re happening, and then implement informed design solutions.

If someone says “user testing,” what does that mean? Let’s talk about the two words separately.

First, what’s a “user”? It is true that we ask people who use (or who might use) a design to take part in the study of how usable the design is, and some of us might refer to those people as “users” of the product.

Now, “testing” is about using some specified method for evaluating something. If you call it “user testing,” it sure sounds like you are evaluating users, even though what you probably mean to say is that you’re putting a design in front of users to see how they evaluate it. It’s shorthand, but I think it is the wrong shorthand.

If the point is to observe people interacting with a design to see where the flaws in the design are and why those elements aren’t successful, then you’re going beyond user testing. You’re at usability testing. That’s what I do as part of my user research practice. I try not to test the users in the process.