Election-related research

After seeing people leave polling places in Florida on Election Day in the 2000 presidential election saying they weren’t sure they had voted for the person they intended to, Dana was inspired to learn more about how elections work.

She was fortunate to work with amazing research partners and to connect with brilliant election professionals all over the United States to design and build products that would help ensure that voters can vote the way they intend, and that their votes are counted as cast.

Below is a chronological list of Dana’s research and deliverables related to voting and elections.

Year | Topic | Description
2006 | Kit for testing usability of ballots | With others, developed a kit for local election officials to do their own usability testing
2008 | Language of instructions on ballots | With Ginny Redish, et al., studied the effect of plain language in ballot instructions
2009 | Style guide for voting system documentation | With Susan Becker, developed a set of style guidelines and checklists directed to voting system makers
2010 | First usability test of RCV | Field tested different formats of ranked choice voting
2012 | Field Guides To Ensuring Voter Intent | Published the first 4 Field Guides To Ensuring Voter Intent
2012 | County election websites | With Cyd Harrell, et al., cataloged county election sites and researched voters’ questions
2013 | Voter guide research | With Whitney Quesenbery, conducted field research on the information needs of low-propensity voters
2013 | Poll workers and election security | With Whitney Quesenbery, studied the practices and attitudes of poll workers around election integrity
2013 | Anywhere Ballot | With Kathryn Summers and Drew Davies, developed and tested the first prototype of a standards-compliant, accessible digital ballot interface available through a web browser, now the basis of commercially available digital ballot designs
2017 | Voter Journey | With Maggie Ollove, visualized the voter experience, tying together findings from earlier studies
2018 | Information ecosystem | With Christopher Patten, studied where voters get information and what questions they ask about voting and elections
2019 | New citizens & civic engagement | With Christopher Patten, studied barriers to voting for voters with low English proficiency and low civics literacy
2019 | State election websites | With Christopher Patten, repeated the 2012 county website study, this time on state websites for voters

Story-driven experience research on pandemic unemployment

The COVID-19 pandemic shutdowns and stay-at-home orders that began in March 2020 put millions of Americans out of work. On March 27, 2020, Congress passed the CARES Act, which, among other things, made billions of dollars available for new pandemic unemployment programs, but states struggled to implement and deliver these benefits. By the end of July, 30 million people had applied for unemployment assistance, many of them for the first time ever.

In my role with Project Redesign at NCoC.org, and in partnership with New America’s New Practice Lab, I led a team of researchers to interview people from across the U.S. in May and June to learn what it was like to apply for unemployment and other benefits during the pandemic.

Telling the story of living experts in near-real time

When Tara McGuinness contacted me about doing this project, she was ready to experiment with methods and techniques. She and her colleagues in the New Practice Lab at New America wanted to bring the experience people were having to life for lawmakers and advocates, who usually base their decisions and recommendations on what they see in spreadsheets. We wanted to do a few big things with our approach.

Amplify voices and elevate stories of real people. Those real people are the experts on the experience they’re having: living experts. To convey the experience people had applying for unemployment and other benefits during the pandemic, we documented “thick data” in 2- to 4-page stories. The purpose of those stories was to make the people real for readers, who we hoped would be lawmakers and their staff members.

Urgently learn about the lived experience. The urgency came from 2 factors. One was that the depth of the needs changed over just a few weeks. As time passed, the experience claimants had shifted from applying to waiting for funds to show up. In the meantime, bills needed to be paid, savings were depleted, jobs disappeared permanently. The other consideration was that Congress would be drafting the next stimulus and recovery bills in May and June. We had a chance to get the stories to people who influenced the design of those bills.

Conduct the research in the open. Show the work to invite everyone in. Those stories, along with a few slides that highlighted a story or two and key takeaways, went to partners on the project, community-based organizations, media, and legislative staff on Capitol Hill. We had to give up being precious about how polished the deliverables were. They needed to be just good enough.

We intended to invite partners, community-based organizations, and legislative staff to observe the interviews. But just scheduling and conducting the interviews within the time we had proved challenging enough, so we didn’t get to do that. We know how to do it now, though, and could probably pull it off on future research projects.

We did develop a research kit and a workbook for anyone who wants to do research like this. You can download the research plan, interview guide, and story template, along with a few other tools, for free from the project website.

Share what we learned publicly, as we learned it. Typically, a research team would do the interviews, transcribe them, code the transcriptions, and analyze the coding to identify findings. As you can imagine, this would take a while — for 33 interviews, it could take months. We didn’t have months. We didn’t even have weeks.

As we captured what was working for claimants, what wasn’t, who helped them, who needed help, what the passage of time felt like, and what else people had going on, we wrote each interview up as a story within a day of conducting it. Each week, we gathered the new stories and distributed them in collections of 5 to 15.

We were not focused on making policy recommendations. Instead, we documented the stories and drew observations and insights from what we heard.

Doing experience research during a pandemic

There were more than the usual constraints on “field” research in May and June 2020. For example, we couldn’t actually go into the field. It wasn’t safe for the research team or the participants because anyone could be carrying COVID-19.

We also needed to move quickly because the benefits designed to help people were scheduled to run out: Congress had set time limits on the first stimulus and recovery bills. Arranging home visits and traveling take a lot of time, which we didn’t have. We wanted to use what we learned to help the people designing the next set of stimulus and recovery bills make good decisions.

It’s not like we could really do ethnographic research: no access to living spaces, not enough time. We went into the study thinking like ethnographers, but we needed a few shortcuts. We borrowed an idea I learned from Kate Gomoll for the interview stories, along with some techniques from collective story harvesting. So we’re calling it experience research, and we relied on the living experts to help us begin to understand what they were going through. We also used approaches lovingly borrowed from equity-centered community design, developed by Creative Reaction Lab.

Delivering stories, briefs, and a workbook

The 2- to 4-page stories we wrote after each interview, each centered on a few focus questions, were our first deliverables, and we published them weekly as we completed interviews. Over 3 weeks, we conducted 33 interviews.

Next, using themes, or lenses, that we identified ahead of the interviews, we pulled observations and quotes from each interview and did some light synthesis. We used that synthesis to write a brief for each theme.
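
To make that light synthesis concrete, here is a minimal sketch in Python of the shape the work takes in data terms: observations tagged with themes, then grouped so each theme’s pile becomes the raw material for a brief. The theme names, participant IDs, and notes below are invented placeholders, not material from the study.

```python
from collections import defaultdict

# Hypothetical example: each observation is a note from one interview,
# tagged with the themes ("lenses") it speaks to. These names and notes
# are illustrative placeholders, not the study's actual data.
observations = [
    {"interview": "P03", "themes": ["applying"],
     "note": "Applied several times before the form went through."},
    {"interview": "P11", "themes": ["applying", "getting help"],
     "note": "A family member walked them through the application."},
    {"interview": "P17", "themes": ["waiting", "making ends meet"],
     "note": "Weeks of waiting while bills came due."},
]

# Light synthesis: group tagged observations by theme. Each group
# becomes the raw material for one themed brief.
by_theme = defaultdict(list)
for obs in observations:
    for theme in obs["themes"]:
        by_theme[theme].append((obs["interview"], obs["note"]))

for theme, notes in sorted(by_theme.items()):
    print(f"Brief: {theme} ({len(notes)} observations)")
    for interview, note in notes:
        print(f"  {interview}: {note}")
```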

We wanted not only to convey insights from these interviews, but also to make the methods and techniques available to others. We hope that researchers in the public and private sectors will pick up inspiration, at least, from how we did the study. We also want to make it easy for organizations that don’t have professional researchers to learn what is happening with their constituents. Our research workbook and kit are available for free to anyone who wants to use them.

The links below open up PDFs. (Sorry about that. Soon we’ll have a beautiful website with fully accessible content.)

  • Overview
  • Themed briefs
  • Doing research like this yourself
  • About the project
  • Full report: Stories, briefs, participants, methods, mechanics, team

Would I work this way again? Yes.

Leading a diverse team of researchers, most of them part-time contributors, to collect a clear and specific snapshot of a civic experience through stories was intense, gratifying, and humbling (in the best possible ways). I learned from everyone who contributed — both researchers and participants. Years later, I carry that experience with me along with the voices of many of the people we interviewed. This project both tested some ways of conducting research and capturing what we learned, and formed a kind of playbook for how I’ve since shaped the work and training of practitioners of human-centered research and design. Key takeaways on approach and practice:

  • Stories and names are powerful. Not everyone in the study allowed us to use their real names, but being able to tell the story of a person’s experience carries a grounded truth that is meaningful and compelling for readers. Attaching a name makes the story a human story.
  • Veracity in insights and findings is shaped by both the people who conduct the research and the participants. The lived experiences of both intersect in the research design, data gathering, discussion, analysis, and conclusions.
  • Breaking the reporting into thematic briefs worked well for reaching different audiences. The format of asserting the insight in the headings within the briefs was effective, especially for busy readers.
  • Opening up preliminary findings to subject matter experts and interested stakeholders each week was extremely challenging but was helpful in testing out what we thought we were learning from what we heard in the interviews.
  • I’m proud of building the research kit and methods workbook, and that teams working in state governments picked these artifacts up and ran their own studies. They reported that the methods were effective not only for understanding the qualitative experience their public was having, but also for bringing along partners and stakeholders so they could make better-informed policy implementation decisions.

Bonus research: Do the recruiting yourself

There are some brilliant questions on Quora. This morning, I was prompted to answer one about recruiting.

The asker wanted to know: How do I recruit prospective customers to shadow as part of a user-centered design approach? They elaborated:

I’m interested in shadowing prospective customers in order to better understand how my tool can fit into their life and complement, supplement, or replace the existing tools that they use. How do I find prospective customers? How do I convince them to let me shadow them?

Seemed like a very thoughtful question. I have some experience with recruiting for field studies and other user research, so I thought I might share my lessons learned. Here’s my answer. Would love to hear yours. Continue reading “Bonus research: Do the recruiting yourself”

Translating research questions to data

There’s an art to asking a question and then coming up with a way to answer it. I find myself asking, “What do you want to find out?” The next question is, “How do we know what the answer is?”

Maybe the easiest thing is to take you through an example.

Forming the right question

On a study I’m working on now, we have about 10 research questions, but the heart of the research is about this one:

Do people make more errors on one version of the system than the other?

Note that this is not a hypothesis, which would be worded something like, “We expect people to make more mistakes and to be more likely not to complete tasks on the B version of the system than on the A version.” (Some would argue that there are multiple hypotheses embedded in that statement.)

But in our study, we’re not out to prove or disprove anything. Rather, we just want to compare two versions to see what works well about each one and what doesn’t.
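
To make the distinction concrete, here is a minimal sketch in Python of a descriptive comparison rather than a hypothesis test: summarize errors on each version and lay the summaries side by side. The participant error counts are made up for illustration, not data from the study.

```python
# Illustrative sketch with made-up numbers: errors per participant on
# each version of the system, summarized descriptively with no
# significance testing.
errors_per_participant = {
    "Version A": [0, 2, 1, 0, 3, 1, 0, 2],
    "Version B": [1, 4, 2, 3, 1, 5, 2, 3],
}

for version, counts in errors_per_participant.items():
    mean = sum(counts) / len(counts)
    error_free = sum(1 for c in counts if c == 0)
    print(f"{version}: {sum(counts)} errors total, "
          f"{mean:.1f} per participant, "
          f"{error_free} of {len(counts)} participants error-free")
```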


Choosing data to answer the question

There are dozens of possible measures you can look at in a usability test. Here are just a few examples:

Continue reading “Translating research questions to data”