Every four years, I get a lot of requests from the UX and civic tech communities to talk about design in elections. You can watch the talk I gave at Midwest UX in 2018.
Today, January 31, 2020, is my last day as co-director of the Center for Civic Design. For 6 years, I’ve been co-leading CCD with Whitney Quesenbery. My next adventure is as a founder-partner in a new civic incubator at the National Conference on Citizenship, a federally chartered non-profit based in Washington, DC.
I started doing work in election design in the early 2000s, when Whitney and I first worked together with an awesome collection of other fun folks on volunteer projects through the then Usability Professionals Association (now the User Experience Professionals Association). Over the last two decades, I have become an expert on design in elections, advising and training election administrators all over the U.S. and Canada.
It was fun. And challenging. And it surprised me when I woke up one day in 2013 (after decades working in the private sector) as the co-head of a non-profit where I’d get to work on design to ensure voter intent every day.
Over my years working in election design, I designed and led research ranging from understanding poll workers’ attitudes about security in elections and their jobs in polling places, to mapping the gap between how local election officials think and the questions voters have. I led research on what became the Anywhere Ballot, on the usability of county and state election websites, on how voters find and use information about elections, and on where language access and acculturation are important for people with low English proficiency and low civics knowledge. I was the originator and the managing editor of the Field Guides To Ensuring Voter Intent.
I got to work with smart, mission-driven people who were excited about solving problems through design and curious about other human beings and their experiences. And I met and worked with thousands of election administrators, who are some of the hardest-working and most under-appreciated people in government. Together, over a long time, we incrementally made elections better.
I expect to continue talking about and working on understanding the journey of U.S. voters until I can’t talk anymore. Right now, I’m excited about moving to an adjacent space, one that I’ve been thinking about for a while: user-centered policy design.
Of course, CCD goes on, with Whitney at the helm along with a great team working on helping jurisdictions implement better vote-by-mail, modernizing voter registration, and getting ready for new language access determinations. And, of course, voting systems standards work!
A key element of designing for delight is understanding where your product is in its maturity. One way to look at that is through the lens of the Kano Model. You can learn about the Kano Model and our addition of pleasure, flow, and meaning through a couple of sources:
Read Jared’s article on understanding the Kano Model (8-minute read on uie.com)
Watch Jared talk about the Kano Model (45-minute video)
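If it helps to see the mechanics behind the model, here is a minimal sketch of how the classic Kano questionnaire is commonly scored: each participant answers a functional question (“How would you feel if the product had this?”) and a dysfunctional one (“How would you feel if it didn’t?”), and an evaluation table maps each answer pair to a category. The scale and table follow the standard treatment of the method, not Jared’s article specifically, and the data here is hypothetical.

```python
# A minimal sketch of classic Kano questionnaire scoring (hypothetical data).
from collections import Counter

# The standard 5-point answer scale.
SCALE = ["like", "expect", "neutral", "live_with", "dislike"]

# Rows: functional answer. Columns: dysfunctional answer (in SCALE order).
# A = attractive, O = one-dimensional, M = must-be, I = indifferent,
# R = reverse, Q = questionable.
EVAL_TABLE = {
    "like":      ["Q", "A", "A", "A", "O"],
    "expect":    ["R", "I", "I", "I", "M"],
    "neutral":   ["R", "I", "I", "I", "M"],
    "live_with": ["R", "I", "I", "I", "M"],
    "dislike":   ["R", "R", "R", "R", "Q"],
}

def kano_category(functional: str, dysfunctional: str) -> str:
    """Map one participant's answer pair to a Kano category."""
    return EVAL_TABLE[functional][SCALE.index(dysfunctional)]

def classify_feature(answer_pairs):
    """Tally categories across participants; the most common one wins."""
    tally = Counter(kano_category(f, d) for f, d in answer_pairs)
    return tally.most_common(1)[0][0], tally

# Hypothetical answer pairs for one feature: (functional, dysfunctional).
answers = [("like", "dislike"), ("like", "dislike"), ("expect", "dislike")]
category, tally = classify_feature(answers)
print(category, dict(tally))  # -> O {'O': 2, 'M': 1}
```

A feature that lands mostly in A is a candidate for delight; one that lands in M is table stakes. Where your features cluster says a lot about where your product is in its maturity.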
Dana and Jared have both written about different aspects of delight. It’s not just about dancing hamsters. Delight is much more nuanced than that. The three key elements are pleasure, flow, and meaning.
Read Jared’s overview of pleasure, flow, and meaning (10-minute read on uie.com)
Read Dana’s series at UX Magazine:
- “Beyond frustration: three levels of happy design” (9-minute read)
- “Pleasant things work better” (8-minute read)
- “Beyond task completion: flow in design” (6-minute read)
Design can be used for good or evil. Jared wrote about a technique that we use in our workshop that he calls “despicable design.” Going to the dark side can reveal a lot about how your team approaches designing its users’ experiences.
Read Jared’s article, “Despicable Design — When ‘going evil’ is the perfect technique” (12-minute read at uie.com)
In our workshop, we also use sentiment words to help teams narrow down how they want people to feel about or perceive a service. Here are the basics about sentiment analysis, along with a piece from NNG about using the Microsoft Desirability Toolkit, which is where our use of sentiment words comes from.
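To make the word exercise concrete, here is a minimal sketch, with hypothetical participants and words, of tallying which reaction words people chose most often. It’s in the spirit of the Desirability Toolkit rather than a procedure prescribed by either source.

```python
# A minimal sketch of tallying sentiment-word selections (hypothetical data).
from collections import Counter

# Each participant picks the handful of words that best describe the service.
selections = {
    "p1": ["fast", "confusing", "useful"],
    "p2": ["useful", "trustworthy", "fast"],
    "p3": ["confusing", "slow", "frustrating"],
}

tally = Counter(word for words in selections.values() for word in words)
for word, count in tally.most_common():
    print(f"{word}: {count}")
```

The point isn’t the counting; it’s that the ranked list gives the team a shared, concrete vocabulary for how they want people to feel.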
One of the tricks to making sure I’ve designed the right study, one that will teach me what I need to learn, is to tie everything together so that, from planning all the way through the results report, it’s clear why I’m doing the study and what it’s actually about. User research needs to be intentionally designed in exactly the same way that products and services must be intentionally designed.
What’s the customer problem?
It starts with identifying a problem that needs to be solved, and the contexts in which the problem is happening. This is a kind of meta research, I guess. From there, I can work with my team to understand deeply why we are doing the research at all, what the objective of the particular study is, and what we want to be different because we have done the research.
Why are you doing the study?
When the team shares an understanding of why you’re doing the study and what you want to get out of it, along with a vision of what will be different because you will have done it, forming solid research questions is a snap. You need research questions to set the boundaries of the study, to determine which behaviors you want to learn about from participants, and to decide what data you can reasonably collect, within your constraints, to answer those questions.
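One way to keep that chain visible, from problem to objective to questions to data, is to hold it all in a single structured plan. Here is a hypothetical sketch; the field names are mine, not a standard.

```python
# A hypothetical sketch of a study plan that keeps the whole chain,
# from customer problem through research questions to data, in one place.
from dataclasses import dataclass

@dataclass
class StudyPlan:
    customer_problem: str
    objective: str                  # why we're doing this study at all
    what_changes: str               # what will be different afterward
    research_questions: list[str]   # the boundaries of the study
    behaviors_to_observe: list[str]
    data_to_collect: list[str]      # bounded by time, budget, and access

plan = StudyPlan(
    customer_problem="Applicants abandon the renewal form partway through.",
    objective="Learn where and why people stall in the renewal flow.",
    what_changes="The team agrees on which steps to redesign first.",
    research_questions=[
        "At which step do people stop, and what are they trying to do there?",
    ],
    behaviors_to_observe=["navigation between steps", "re-reading instructions"],
    data_to_collect=["task outcomes", "observed stall points", "participants' words"],
)
```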
This article was originally published on December 7, 2009.
What is data but observation? Observations are what was seen and what was heard. As teams work on early designs, the data is often about obvious design flaws and higher-order behaviors, not detailed tallies. In this article, let’s talk about tools for working with observations made in exploratory or formative user research.
Many teams have a sort of intuitive approach to analyzing observations that relies on anecdote and aggression: whoever is loudest gets their version accepted by the group. Over the years, I’ve learned a few techniques for getting past that dynamic and on to informed inferences that lead to smart design direction and to solution theories that can then be tested.
Collaborative techniques give better designs
The idea is to collaborate. Let’s start with the assumption that the whole design team is involved in the planning and doing of whatever the user research project is.
Now, let’s talk about some ways to expedite analysis and consensus. Doing this has the side benefit of minimizing reporting – if everyone involved in the design direction decisions has been involved all along, what do you need reporting for? (See more about this in the last section of this article.)
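As one concrete illustration (with hypothetical notes), here is the shape of a simple pooling exercise: gather every observer’s notes, group them by theme, and look at how many different observers saw each theme before anyone argues about what it means.

```python
# A hypothetical sketch of pooling observers' notes by theme, so the
# evidence, not the loudest voice, drives the inference.
from collections import defaultdict

# (observer, theme, verbatim note) from a round of sessions.
notes = [
    ("ana", "navigation", "Scrolled past the Save button twice"),
    ("ben", "navigation", "Asked where to go next after step 2"),
    ("ana", "terminology", "Read 'remit' aloud and frowned"),
    ("cho", "navigation", "Used browser Back instead of in-app nav"),
]

themes = defaultdict(list)
for observer, theme, note in notes:
    themes[theme].append((observer, note))

# Rank themes by volume, and show how many independent observers saw each.
for theme, items in sorted(themes.items(), key=lambda kv: -len(kv[1])):
    observers = {obs for obs, _ in items}
    print(f"{theme}: {len(items)} notes from {len(observers)} observers")
```

A theme that three observers saw independently is much harder to shout down than an anecdote.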
(This article was originally published on May 30, 2008. This is a refresh.)
Research that you do alone ends up in only your head. No matter how good the report, slide deck, or highlights video, not all the knowledge gets transferred to your teammates. This isn’t your fault. It just is.
So what to do? Enlist as many people on your team as possible to help you by observing your usability testing sessions. You can even give your observers jobs, such as time-keeper if you’re measuring time on task. Or, if you are recording sessions, it could be an observer’s job to start and stop the recordings and to label and store them properly.
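If someone takes the time-keeper job, even a tiny logger beats a wristwatch and a notepad. A minimal sketch, assuming all you want is elapsed seconds per task:

```python
# A hypothetical sketch of a time-on-task log for an observer to keep.
import time

class TaskTimer:
    def __init__(self):
        self.log = {}       # task name -> elapsed seconds
        self._start = {}    # task name -> start timestamp

    def start(self, task: str):
        self._start[task] = time.monotonic()

    def stop(self, task: str):
        self.log[task] = round(time.monotonic() - self._start.pop(task), 1)

timer = TaskTimer()
timer.start("find the renewal form")
# ... the participant works on the task ...
timer.stop("find the renewal form")
print(timer.log)
```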
The key is to involve the other people on the team – even managers – so they can
- help you
- learn from participants
- share insights with you and other observers
- buy in
- reach consensus on what the issues are and how to solve them
Who should observe: Everyone
Ideally, everyone on the design and development team should observe sessions. Every designer, every programmer, every manager on the project should watch as real people use their designs. People on the wider team who are making design decisions should also observe sessions. I’m talking about QA testers, project managers, product managers, product owners, legal people, compliance people, operations people — everyone.
Usability testing is a fantastic source of data on which to base design decisions. You get to see firsthand what is frustrating to users and why. Of course you know this.
There are other sources of data that you should be paying attention to, too. For example, observing training can be very revealing. One of the richest sources of data about frustration is the call center. That is a place that hears a lot of pain.
Capturing frustration in real time
Often, the calls that people make to the call center surface issues that you’ll never hear about in usability testing. The context is different. When someone is in your usability study, you’ve given them the task, and there’s a scenario in which they’re working. This gives you control of the situation and helps you bound the possible issues you might see. But when someone calls the call center, it could be about anything from onboarding to offboarding, with everything in between as fair game for encountering frustration. The call center captures frustration in real time.
We could talk a lot about what it means that organizations have call centers, but let’s focus on what you can learn from the call center and how to do it.
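One concrete starting point, sketched here with hypothetical data: have someone tag each call with the reason recorded for it and where in the journey it happened, then count and rank to see where the pain clusters.

```python
# A hypothetical sketch of mining call logs for frustration hot spots.
from collections import Counter

calls = [
    {"reason": "password reset", "stage": "onboarding"},
    {"reason": "can't find invoice", "stage": "billing"},
    {"reason": "password reset", "stage": "onboarding"},
    {"reason": "form rejected", "stage": "renewal"},
]

by_reason = Counter(call["reason"] for call in calls)
by_stage = Counter(call["stage"] for call in calls)

print("Top reasons:", by_reason.most_common(3))
print("Where in the journey:", by_stage.most_common())
```

Even a rough ranking like this tells you where to go observe, or what to put in front of participants in your next usability test.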
I’ve been thinking a lot lately about where the teams I work with are on the scale from design literacy to fluency to mastery. I don’t mean whether they can talk design; I mean, how close are they to understanding the differences between good and bad design? What steps will it take to move these teams closer to mastering design so they can deliver great experiences?
One of the most effective tools for moving teams from literacy to fluency — getting them to the point where they can see the differences between design that will work for users and design that won’t — is usability testing.
Sometimes teams just need a tiny nudge to make better design decisions. To level up their understanding of their users. To do something about elements of a product or a service that are frustrating to users.
Sometimes, that nudge comes from a product manager or a sales ops person or a customer service rep. But this time, the nudge can come from you, with this tiny book of steps and tips that makes learning from users feel simple and obvious. Like anybody can do it.
In 11 steps, your teams can work with subject matter experts and other users to collect data on the usability of their work in every sprint and all along the development path.
This 32-page, 3.5 x 5-inch book includes a quick checklist to help you know whether what you’re doing will work.
The covers are printed on 100% recycled chipboard. The internal pages are printed with vegetable-based inks on 100% recycled paper. The Pocket Guides are printed by Scout Books and designed by Oxide Design Co.
Intercepting is an exercise in self-awareness. Who you choose and how you approach them exposes who you are and what you think. What your fears are. The inner voice is loud. As a practice, we worry about bias in user research. Let me tell you, there’s nothing like doing intercepts for recruiting that exposes bias in the researcher.
Why would you do recruiting by intercepting, anyway? Because our participants were hard to find.
Hard-to-find participants walk among us
Typically, we focus recruiting on behaviors. Do these people watch movies? Clip coupons? Ride bicycles? Shop online? Take medicine?
The people we wanted to talk to do not take part in a desired behavior. They don’t vote.
We did intercepts because we couldn’t figure out a way to find the people we wanted through any conventional recruiting method. How do you recruit on a negative behavior? Or rather, how do you find people who aren’t doing something, especially something they are likely to think they should be doing — so they might lie about it?
Maybe you just read Jared Spool’s article about deconstructing delight. And maybe you want to hear my take, since Jared did such a good job of shilling for my framework.
Here’s a talk I gave a couple of years ago, though I’d been giving versions of it for a while. Have a listen. (The post below was originally published in May 2012.)
Everybody’s talking about designing for delight. Even me! Well, it does get a bit sad when you spend too much time finding bad things in design. So, I went positive. I looked at positive psychology, and behavioral economics, and the science of play, and hedonics, and a whole bunch of other things, and came away from all that with a framework in mind for what I call “happy design.” It comes in three flavors: pleasure, flow, and meaning.
I used to think of the framework as being in layers or levels. But it’s not like that when you start looking at great digital designs and the great experiences they are part of. Pleasure, flow, and meaning end up commingled.
So, I think we need to deconstruct what we mean by “delight.” I’ve tried to do that in a talk that I’ve been giving. Here are the slides:
There are also a few articles about the delight framework.