Another awesome event you should know about: Web App Summit

Made a resolution to improve your work skills? A really great place to do that is UIE’s Web App Summit.

The Summit is April 19-22, 2009, in Newport Beach, CA. There’s a ton of great information at http://webappsummit.com. But let me give you a little preview. Though there are killer shorter sessions, the highlight of the Summit is two days of intensive workshops with world-class, rockin’ speakers.

If you decide to register, let them know you heard about it from me! Sign up right away to get a discount and an iPod nano.

Disclosure: You’ll notice I’m not speaking at this. I’d go even if I didn’t have personal connections to UIE.

Events: Speaking about usability testing in 2009

I hate staying home. Here’s a list of the events I’m signed up to speak at in 2009. So far.

Feb 6 | Washington, DC | NASED | Usability Testing Ballots
April 1 | Seattle, WA | WritersUA | Conducting Usability Tests “in the Wild”
May 3-6 | Atlanta, GA | STC | Rewriting the Voting Experience On Election Day
June 3 | Online webinar | Web Manager University | Quick, Easy & Insightful: Conducting Usability Testing in the Wild
June 12 | Portland, OR | UPA | Improving the User Experience of Voting
July 7-11 | Spokane, WA | IACREOT | Improving the User Experience of Voting
July 19-24 | San Diego, CA | HCI International | User Experience in Elections: Poll Workers

Just vote.

Though many people who are eligible to vote were hindered in (but not prevented from) registering; though there are obstacles to getting to precincts, like having to work or not having transportation; though we have all read and heard the many stories about problems with voting machines — a vote has rarely counted for so much in the history of America.

Please vote today.

If you will vote on paper: fill in the bubble completely, or complete the arrow.

If you will vote on an electronic machine: check the review screen and the paper record if there is one.

And get your “I voted!” sticker.

Ditch the book – Come to a virtual seminar on “usability testing in the wild”

I’m excited about getting to do a virtual seminar with the folks at User Interface Engineering (www.uie.com) on Wednesday, October 22 at 1 pm Eastern Time. I’ll be talking about doing “minimalist” usability tests — boiling usability testing down to its essence and doing just what is necessary to gather data to inform design decisions.

If you use my promo code — DCWILD — when you sign up for the session, you can get in for the low, low price of $99 ($30 off the regular price of $129). Listen and watch in a conference room with all your teammates and get the best deal ever.

For more about the virtual seminar, see the full description.

Usability testing in the wild – ballots

I’ve been busy the last few weeks doing some of the most challenging usability testing I’ve ever done. There were three locations where I did day-long test sessions. But that wasn’t the challenging part. The adventure came in testing ballots for the November election.

What was wild about it?
This series of tests came together through a project with the Brennan Center for Justice and the Usability Professionals’ Association. The Brennan Center released a report in July called Better Ballots, which reviewed ballot designs and instructions, finding that

  • hundreds of thousands of voters have been disenfranchised by ballot design problems
  • there has been little or no federal or state guidance on ballot design that might have been helpful to elections officials who define and design ballots at the local level
  • usability testing is the best way to ensure that voters can use ballots to vote as they intend


Also in the report, the Brennan Center strongly urged election officials to conduct usability tests on ballots. The recommendation to include usability testing in the ballot design process is a major shift in the election world. The UPA Voting and Usability Project has developed the LEO Usability Test Kit to help local elections officials do their own simple, quick usability tests of ballot designs.

But not all local elections officials were ready to do their own usability tests, and some wanted objective outsiders to help evaluate ballots for this particular, important upcoming election.

I did tests in three locations — Marin County, California, Los Angeles County, California, and the home of Las Vegas in Clark County, Nevada — with about 40 participants across the three locations. Several other UPA volunteers conducted tests and reviews in Florida, New Hampshire, and Ohio. In addition, UPAers trained local elections officials on usability testing and the LEO Test Kit in Ohio, Iowa, and a couple of other spots I can’t think of right now.

Pulling together a test in just a few days, including recruiting and scheduling participants
The Brennan Center report was released toward the end of July, and most ballots must be ready to print or roll out right now, in the middle of September. The Brennan Center sent the report to every election department in the US, and the response was great. Most requests came in during August, so the five or six of us available from the UPA Voting and Usability Project scrambled to cover the requests for tests.

We had the assistance of one of the Brennan Center staff to help coordinate recruiting, although it took some pretty serious networking to get people into sessions on short notice, often within a few days.

The Brennan Center covered the expenses, but the time and effort spent by the people who worked with local elections officials and conducted the sessions was purely pro bono.

Not knowing what I would be testing until I walked onto the site
For two of the three tests, I didn’t see exactly what I was going to be testing until I walked in the door of the election department. (I got the other ballot two days before the test.) This happened for a couple of reasons. Sometimes the local elections official didn’t have a lot of information about what could be evaluated and how that might happen. Sometimes the ballot wasn’t ready until the last minute because of final filing deadlines or other constraints. Sometimes it was all of the above.

Fortunately, the main task is pretty straightforward: Vote! Use the ballot as you normally would. But there are neat variations. Are write-ins possible? On an electronic voting machine, how do you change a vote? What if you’re mailing in a ballot – what’s different about that, and how do the design and instructions have to compensate for not having poll workers available to answer questions?

Giving immediate results and feedback
So, we’ve gotten copies of the ballots, or something close to final on an electronic voting machine. We’ve met briefly with the local elections officials (and often with their advisory committees). We’ve recruited participants (sometimes off the street). We’ve conducted 8 or 10 or 15 twenty-minute sessions in one day. Now it’s time to roll up what we saw in the sessions and to talk with the person who owns the ballot about how the evaluations went.
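
(If you’re wondering how that many sessions fit into a day, here’s a quick back-of-the-envelope sketch in Python. Only the 20-minute session length comes from these tests; the start time, the gaps between sessions, and the lunch break are assumptions I’ve made up for illustration.)

```python
from datetime import datetime, timedelta

# Assumptions (mine, for illustration): a 9:00 start, 10 minutes between
# sessions to reset, and a 45-minute lunch after the midpoint session.
# The 20-minute session length is the real number from these tests.
SESSION_MIN = 20
GAP_MIN = 10
LUNCH_MIN = 45

def day_schedule(num_sessions, start="09:00"):
    """Return (session number, start time) pairs for one test day."""
    t = datetime.strptime(start, "%H:%M")
    slots = []
    for i in range(num_sessions):
        slots.append((i + 1, t.strftime("%H:%M")))
        t += timedelta(minutes=SESSION_MIN + GAP_MIN)
        if i + 1 == num_sessions // 2:  # break for lunch midway
            t += timedelta(minutes=LUNCH_MIN)
    return slots

for number, start_time in day_schedule(15):
    print(f"Session {number} starts at {start_time}")
# With these numbers, session 15 starts at 16:45: a full but doable day.
```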

Handling enthusiastic observers and activists
A lot of people are concerned with the usability, accessibility, and security of ballots and voting systems. You probably are. Some are more concerned about it than others. Those are the people who show up to observe sessions. They’re well informed, they’re enthusiastic, and they’re skeptical. The observers and activists (many signed up to be test participants) were also keenly interested in understanding this activity. How was this different from focus groups or reviews by experts? How do we know that the problems we’ve witnessed are generalizable to other voters in the jurisdiction?

The good news: Mostly, the ballots worked pretty well. The local elections officials usually have the ability to make small changes at this stage and they were willing, especially to improve instructions to voters. By doing this testing, we were able to effect change and to make voting easier for many, many voters. (LA County alone has more than 3 million registered voters.)

Links:
Brennan Center for Justice report Better Ballots
http://www.brennancenter.org/content/resource/better_ballots/

UPA’s Voting and Usability Project
http://www.usabilityprofessionals.org/civiclife/voting/
voting@usabilityprofessionals.org

LEO Usability Test Kit
http://www.usabilityprofessionals.org/civiclife/voting/leo_testing.html

Ethics guidelines for usability and design professionals working in elections
http://www.usabilityprofessionals.org/civiclife/voting/ethics.html

Information about being a poll worker
http://www.eac.gov/voter/poll%20workers

EAC Effective Polling Place Designs
http://www.eac.gov/election/effective-polling-place-designs

EAC Election Management Guidelines
http://www.eac.gov/election/quick-start-management-guides

Retrospective review and memory

One of my favorite radio programs (though I listen to it as a podcast) is Radiolab, “a show about science,” which is a production of WNYC, hosted by Robert Krulwich and Jad Abumrad, and distributed by NPR. The show contemplates lots of interesting things, from reason versus logic in decision making, to laughter, to lies and deception.

The show I listened to last night was about how memories are formed. Over time, several analogies have developed for human memory, and they seem to track the technology of the day. Robert said he thinks of his memory as a filing cabinet. But Jad, who is somewhat younger than Robert, described his mind as a computer hard disk. The neurologists and cognitive scientists they talked to, though, said no: memory isn’t like that at all. In fact, we don’t store memories. We recreate them every time we think of them.

Huh, I thought. Knowing this has implications for user research. For example, there are several points at which usability testing relies on memory: the memory of the participant, if we’re asking questions about past behavior; the memory of the facilitator, for taking notes, analyzing data, and drawing inferences; and the memories of observers, in discussions about what happened in sessions and what it means.

Using a think-aloud technique – getting participants to say what they’re thinking while working through a task – avoids some of this. You have a verbal protocol as “evidence.” If there’s disagreement about what happened among the team members, you can go back to the recording to review what the participant said as well as what they did.

But there are times when think-aloud is not the right technique, either because the participant cannot manage the divided attention of doing a task and talking about it at the same time, or because of other circumstances. In those situations, you might think about doing retrospective review, instead.

“Retrospective review” is just a fancy name for asking people to tell you what happened. If you have the tools and time available, you can go to a recording after a session, so the participant can see what she did and respond to that by giving you a play-by-play commentary.

As soon as participants start viewing or listening to the beginning of an episode – up to 48 hours after doing the task – they’ll remember having done it. They probably won’t be able to tell you how it ended. But they will be able to tell you what’s going to happen next.

And that’s the really useful thing about doing retrospective review. As the participant recreates the memory of the task, you can ask, “What happens next? What will you do next and why?” Pause. Listen. Take notes. And then start playing back the recording again. Sure enough, it’ll be like the participant said. Only now you know why.

Asking participants what happens next in their own stories also avoids most revisionist history. That is, if you ask participants to explain what happened after they view it, they may rationalize what they did. That isn’t the same as remembering it.
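
A practical note on note-taking: it helps to keep the participant’s prediction separate from what the playback actually shows, so you can tell recall from rationalization later. Here’s a minimal sketch of one way to structure those notes in Python. None of this is a standard tool; the names and fields are my own invention, and on paper the same three columns (prediction, reason, observed) do the same job.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class RetroNote:
    """One pause point in a retrospective review session."""
    timestamp: str                  # where you paused the recording, e.g. "04:32"
    prediction: str                 # participant's answer to "What happens next?"
    reason: str                     # participant's answer to "...and why?"
    observed: str = ""              # what the playback actually showed
    matched: Optional[bool] = None  # facilitator's judgment, filled in on playback

@dataclass
class RetroSession:
    participant: str
    notes: List[RetroNote] = field(default_factory=list)

    def pause(self, timestamp, prediction, reason):
        """Record a prediction at a pause point, before resuming playback."""
        self.notes.append(RetroNote(timestamp, prediction, reason))

    def resume(self, observed, matched):
        """After watching the next stretch, resolve the most recent prediction."""
        self.notes[-1].observed = observed
        self.notes[-1].matched = matched

# Example: participant P07 predicts, then the recording confirms it.
session = RetroSession("P07")
session.pause("04:32", "I'll click the Search tab", "the nav labels confused me")
session.resume("Clicked the Search tab, then backtracked", matched=True)
```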

Getting ready for sessions: Don’t forget…

There are a bunch of things to do to get ready for any test besides designing the test and recruiting participants. (If you like scripts, there’s a little code sketch of this checklist below.)

  • make sure you know the design well enough to know what should happen as the participant uses it
  • copy any materials you need for taking notes
  • copy all the forms and questionnaires for participants, including honorarium receipts
  • organize the forms in some way that makes sense for you. (I like a stand-up accordion file folder, with a set of forms for each participant in each slot. Unused sets stand up; once they’ve been filled out, they go back in on their sides.)
  • check in with Accounting (or whoever handles the money) about honoraria and goodies for giveaways
  • get a status report from the recruiter
  • double-check the participant mix
  • make sure you have contact information for each participant
  • check that you have all the equipment, software, and anything else you need for the participant to be able to do tasks
  • run through the test a couple of times yourself
  • double-check the equipment you’re going to use (I use a digital audio recorder, so I need memory sticks for that, along with rechargeable batteries)
  • charge all the batteries
  • double-check the location

Which gets us to where you’re going to do the sessions. But let’s talk about that later.
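
And here’s that little script version of the checklist I promised, a minimal sketch in Python. The items are abbreviated from the list above, and the rest is just one way you might track them.

```python
# A bare-bones pre-session checklist. Mark items done as you go;
# whatever is still unchecked gets printed before you head out the door.
CHECKLIST = [
    "Know the design well enough to predict what should happen",
    "Copy note-taking materials",
    "Copy forms and questionnaires, including honorarium receipts",
    "Organize forms, one set per participant",
    "Confirm honorarium money and giveaways with Accounting",
    "Get a status report from the recruiter",
    "Double-check the participant mix and contact information",
    "Verify equipment and software needed for the tasks",
    "Run through the test yourself, a couple of times",
    "Check the recorder, memory sticks, and batteries; charge everything",
    "Double-check the location",
]

done = set()

def check(index):
    """Mark one checklist item complete."""
    done.add(index)

def remaining():
    """Return whatever still needs doing."""
    return [item for i, item in enumerate(CHECKLIST) if i not in done]

check(0)
check(1)
for item in remaining():
    print("TODO:", item)
```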

Where usability testing fits into your research strategy

What, you don’t have a research strategy? Let’s think about the future here.

It’s not uncommon – and not bad – to be working in the present, reacting to the ever-growing demand for usability testing in your organization. “Ever-growing” is good. But when Jared Spool asked me to do a podcast with him recently to talk about what I think makes the difference between a good user experience team and a great user experience team, it got me thinking.

The recipe, based on my observations in dozens of corporations, comes down to these three main ingredients:

  • Vision
  • Strategy
  • Involvement

Vision is an overused word, but here I mean that you and your team have visualized the ideal customer experience — no limits, no constraints. Imagine the best possible interactions a customer could have with your organization at every touch point. Write it down.

Strategy means that you have a plan for reaching the vision. Over the long term, you can learn about and take into account customers’ contexts and goals while matching those up to the goals and objectives of the business.

Involvement calls all interested people in the business together (and that really should be everyone from management to design to development to support and anyone else in the organization) to embrace the vision and carry out the strategy across disciplines.

But I haven’t said much about usability testing yet. Where does it fit in? Everywhere. Part of my strategy would be to teach as many people in the organization as possible to do usability testing. You probably can’t do all the testing that is wanted (let alone needed). If you teach others to do it and coach them along the way, the customer ultimately benefits: the organization gains a closer, smarter understanding of the customer experience and can make evidence-based decisions about how to reach the ideal experience it envisions.