Talking to strangers in the street: Recruiting by intercepting people

 

Intercepting is an exercise in self-awareness. Who you choose and how you approach them exposes who you are and what you think. What your fears are. The inner voice is loud. As a practice, we worry about bias in user research. Let me tell you, there’s nothing like doing intercepts for recruiting that exposes bias in the researcher.

Why would you do recruiting by intercepting, anyway? Because our participants were hard to find.

Hard-to-find participants walk among us

Typically, we focus recruiting on behaviors. Do these people watch movies? Clip coupons? Ride bicycles? Shop online? Take medicine?

The people we wanted to talk to do not take part in a desired behavior. They don’t vote.

We did intercepts because we couldn’t figure out a way to find the people we wanted through any conventional recruiting method. How do you recruit on a negative behavior? Or rather, how do you find people who aren’t doing something, especially something they are likely to think they should be doing — so they might lie about it?


Four secrets of getting great participants who show up

What if you had a near-perfect participant show rate for all your studies? The first time it happens, it’s surprising. The next few times, it’s refreshing — a relief. Teams that do great user research start with the recruiting process, and they come to expect near-perfect attendance.

Secret 1: Participants are people, not data points
The people who opt in to a study have rich, complex lives that offer rich, complex experiences that a design may or may not fit into. People don’t always fit nicely into the boxes that screening questionnaires create.

Screeners can be constraining, and not in a good way. An agency that isn’t familiar with your design or your audience, and that may not be experienced with user research, may eliminate people who could be great in user research or usability testing. Teams we work with find that participants selected through open-ended, voice-to-voice interviews become engaged and invested in the study. The conversation shows the participant that they’re interesting to you, and that makes them feel wanted. And the team learns about variations in the user profile that they might want to design for.

The true costs of no-shows

One of the first things people say when they call up looking for help with recruiting is that they want to recruit “12 for 8” or “20 for 15”. They know what they want to end up with. They’ve got to get data. Managers are showing up to observe. They’ve gone through a lot to get a study to happen at all. They don’t want to risk putting a study together only to get less data than they need. So, compensating for a show rate of between 60% and 80% means over-recruiting.
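The arithmetic behind "12 for 8" is simple to sketch. Here's a minimal illustration (the function name and the exact show rates are mine, not from the post) of how a target session count and a historical show rate translate into the number of people to schedule:

```python
import math

def recruits_needed(target_sessions: int, show_rate: float) -> int:
    """How many participants to schedule so that, at the given show rate
    (the fraction of scheduled people who actually appear), you still end
    up with the target number of sessions."""
    return math.ceil(target_sessions / show_rate)

# "12 for 8" implies roughly a two-thirds show rate:
print(recruits_needed(8, 0.67))   # 12 scheduled to seat 8
# "20 for 15" implies a 75% show rate:
print(recruits_needed(15, 0.75))  # 20 scheduled to seat 15
# At a 90% show rate, 8 sessions need only 9 recruits:
print(recruits_needed(8, 0.90))   # 9
```

The higher the show rate, the less padding you pay for in recruiting fees, incentives, and dead lab time.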

Even though a recruiting agency probably won’t charge for no-shows, those no-shows can be costly in lots of ways.

Involving older adults in design of the user experience: Inclusive design

Despite the reality of differences due to aging, research has also shown that in many cases, we do not need a separate design for people who are age 50+. We need better design for everyone.

Everyone performs better on web sites where the interaction matches users’ goals; where navigation and information are grouped well; where navigation elements are consistent and follow conventions; where writing is clear, straightforward, in the active voice, and so on. And, much of what makes up good design for younger people helps older adults as well.

For example, we know that most users, regardless of age, are more successful finding information in broad, shallow information architectures than they are with deep, narrow hierarchies. When web sites make their sites easier to use for older adults, all of their users perform better in usability studies. The key is involving older adults in user research and usability testing throughout design and development.

Bonus research: Do the recruiting yourself

There are some brilliant questions on Quora. This morning, I was prompted to answer one about recruiting.

The asker wanted to know, “How do I recruit prospective customers to shadow as a part of a user-centered design approach?” and elaborated:

I’m interested in shadowing prospective customers in order to better understand how my tool can fit into their life and complement, supplement, or replace the existing tools that they use. How do I find prospective customers? How do I convince them to let me shadow them?

Seemed like a very thoughtful question. I have some experience with recruiting for field studies and other user research, so I thought I might share my lessons learned. Here’s my answer. Would love to hear yours.

You are not your user. No matter how good you think you are.

 

Listen up, people. This is why quantity is not quality, and why you are not your user.

 

The lesson for today on participant sampling is Google Buzz. Google has been working on Buzz for some time. And it’s a cool idea. Integrating the sharing of photos, status updates, conversations, and email is a thing a lot of us have been looking for. Buzz makes lots of automatic connections. That’s what integrating applications means.

BUT. One of the features of Buzz was that it would automatically connect you to people whom you have emailed in Gmail. On the surface, a great idea. A slick idea, which worked really well with 20,000 Google employees.

Large samples do not always generate quality data
Twenty thousand. Feedback from 20,000 people is a lot of data. How many of us would kill to have access to 20,000 people? So. How can such a large sample be bad? Large samples can definitely generate excellent data on which to make superfine design decisions. Amazon and Netflix use very large samples for very specialized tests. There’s discussion everywhere, including at the recent Interaction10 conference in Savannah, about cheap methods for doing remote, unmoderated usability testing with thousands of people. More data seems like a good idea.

If you have access to 20,000 people and you can handle the amount of data that could come out of well designed research from that sample, go for it. But it has to be the right sample.

Look outside yourself (and your company)
Google employees are special. They’re very carefully selected by the company. They have skills, abilities, and lives that are very different from most people outside Google. So, there’s the bias of being selected to be a Googler. And then there’s indoctrination as you assimilate into the corporate culture. It’s a rarified environment.

But Google isn’t special in this way. Every organization selects its employees carefully. Every organization has a culture that new people must be indoctrinated into and assimilate to, or they leave. In aggregate, the people in an organization begin to behave similarly and think similarly. They aspire to the same things, like wanting products to work.

But what about 37 Signals or Apple? They don’t do testing at all. (We don’t actually know this for sure; they may just not call it testing.) They design for themselves, and their products are very successful in the marketplace. I think those companies do know a lot about their customers. They’ve observed. They’ve studied. And over time, they do adjust their designs (look at the difference in the iPod’s interaction design from its first release in 2001 to now). Apple has also had its failures (Newton, anyone?).

The control thing
By not using an outside sample, Google ran into a major interaction design problem. About as big as it gets. This is a control issue, not a privacy issue, though the complaints were about oversharing. One of the cardinal rules of interaction design is to always let the user feel she’s in control. By taking control of users’ data, Buzz invaded users’ privacy. That’s the unfortunate outcome in this case, and now users will trust Google less. It’s difficult to regain trust. But I digress.

The moral of today’s lesson: Real users always surprise us
Google miscalculated when it assumed that everyone you email is someone you want to share things with, and that you might want those people connected to one another. In a work setting, this might be true. In a closed community like a corporation, this might be true. But the outside world is much messier.

For example, I have an ex. He emails me. Sometimes, I even email him back. But I don’t want to share things with him anymore. We’re not really friends. I don’t want to connect him to my new family.

Even testing with friends and family might have exposed the problem. Google has a Trusted Tester program. Though there are probably some biases in that sample because of the association with Google employees, they are not Google employees. This makes friends and family who use Gmail one step closer to typical users. But Google didn’t use Trusted Testers for Buzz.

You get to choose your friends in real life. Google could have seen this usage pattern pretty quickly just by testing with a small sample who live beyond the Google garden walls.

Yes or No: Make your recruiter smarter

In response to my last post about writing effective screeners, c_perfetti asks:

 

I agree open-ended questions in a screener are best.

But one reason some usability professionals use ‘yes/no’ questions is because they don’t have confidence that the external recruiters can effectively assess what an acceptable open ended answer would be.

In some cases, they may find that asking a ‘yes/no’ question is the safer approach.

How would you handle this concern?

You asked a great open-ended question! What you need is a smarter recruiter.

There are two things you can do to make your recruiter smarter: brief her on the study, and give her the answers.

Brief your recruiter

Basically, what we’re talking about is giving your recruiter enough literacy in your domain to screen intelligently rather than act as a human SurveyMonkey. You can make them work smarter for you by doing two things:

  • Spend 15 minutes before the recruit starts explaining to the recruiting agency the purpose and goals of the study, the format of the sessions, what you’re hoping to find out, and who the participant is. For this last, you should be able to give the agency a one- or two-sentence envisionment of the participant: “The participant has recently been diagnosed with high cholesterol or diabetes or both and has to make some decisions about what to do going forward. She hasn’t done much research yet, but maybe a little.” 
  • Insist that the agency work with you. Tell them to call you after the first two interviews they do and walk through how it went. Questions will come up. Encourage them to call you and ask questions rather than guessing or interpreting for themselves.

With this training done, you can trust your recruiting agency a bit more. If you continue to work with the agency, over time they’ll learn more about what you want, but you’ll also have a relationship that is more collaborative.

Tell the recruiter what the answers might be

Now, to your question about Yes/No.

Using Yes/No leads to one of two things: inviting the respondent to cheat by just saying “yes!” or scaring the respondent into giving the “wrong” answer because the “right” answer might be bad or embarrassing. In the screening interview, a question like “Do you have high cholesterol?” can feel scary or accusatory (and saying “no” would disqualify the respondent from the study). Or the question is so broad or ambiguous that it’s just too easy to say “yes.” “Do you download movies from the Web?” could be stretched to mean watching videos on YouTube, or torrenting adult entertainment, when what it really means is “Do you use a service that gives you on-demand or instant access to commercial, Hollywood movies, and then watch them?”

If it’s the main qualifier for the study (Do you do X?), you can head off the problem by putting out the call for participants the right way. Check the headlines on craigslist.org (usually in Jobs/ETC or in Volunteers), for example. There you’ll see pre-qualifying titles on the postings, and that’s the place to put the question: “Do you have high cholesterol?” or “Do you use a headphone with your mobile phone?” You still have to verify by asking open-ended questions.

If you find yourself wanting to ask a Yes/No question:

  • Craft an open-ended question and provide several possible right answers for the recruiters to use as a reference (but not something they should read to respondents). Here’s a possible alternative script for the recruiter: 

 

“Tell me about the last cholesterol test you had. What did the doctor say?”
[Recruiter: Listen for answers like these:
___ He said that I’m okay but I should probably watch what I eat and get more exercise. My total cholesterol was ___.
___ He said that if I didn’t make a change I’d have to start taking meds/a prescription, or give up my cheese. My total cholesterol was ___.
___ He said that I am at high risk for heart disease. I could have a heart attack. My total cholesterol was ___.]

  • Think of one key question that would call the respondent out on fibbing to get into the study. For a gaming company, we wanted people who had experience with a particular game. Anyone can look up the description of a game online and come up with plausible answers. We added in a question asking what the respondent’s favorite character was and why. Our client provided a list of possible answers: names and powers. The responses were fascinating and indicated deeper knowledge of the game than a cheater could get from the cover art or the YouTube trailer.

The short answer: You should still avoid Yes/No questions in screeners. First, think about what you’re really asking and what you want to find out by asking it. Is it really a yes/no question? Then train your recruiter a little bit beforehand, and anticipate what the answers to the open-ended questions might be.

Why your screener isn’t working

I get that not every researcher wants to or has time to do her own recruiting of participants. Recruiting always seems like an ideal thing to outsource to someone else. As the researcher, you want to spend your time designing, doing, and analyzing research.

So, you find an agency to do the recruiting. Some are very appealing: They’re cheap, they’re quick, and they have big databases of people. You send requirements, they send a list of people they’ve scheduled.

How do you get the most out of an agency doing the recruiting? Write a great screener — and test it. How do you get a great screener? Here are a few tips.

Seven screener best practices

  1. Focus questions on the behavior you want to see in the test. For example, for a hotel reservations website, you might want to know: Does the person book his own travel online? For a website for a hospital network, the behavior might be: Does the person have a condition we treat? Is the person looking for treatment?
  2. Limit the number of questions. If a question does not qualify or disqualify a respondent for the study, take it out. If you want to collect information beyond the selection criteria, develop a background questionnaire for the people selected for the study.
  3. Think about how you’re going to use the data collected from the screener. Are you going to compare user groups based on the answers to screener questions? For example, if you’re asking in your screener for people who are novices, intermediates, and experts with your product, will you actually have a large enough sample of participants to compare the data you collect in the usability test? If not, don’t put requirements in your screener for specific numbers of participants with those qualities. Instead, ask for a mix.
  4. Avoid Yes/No responses. This is difficult to do, but worthwhile. With Yes/No questions, it’s very easy for respondents to guess the “right” answer to get into the study. In combination, a series of gamed Yes/No responses can make a respondent look like he fits your profile when he really doesn’t.
  5. Ask open-ended questions if at all possible. This gets respondents to volunteer information in answer to a real question rather than picking the “right” choice from a list of options that the recruiter reads to them. You can give the recruiter the choices you think people will come up with as a pick list for noting the data, but the recruiter should not read the list to the respondent. For example, for the hospital website, you might ask, “Tell me about your health right now. What were the last three things you visited a doctor for?”
  6. Avoid using number of hours or frequency as a measure of or a proxy for expertise. I was looking for tech-savvy people for one study. One respondent told us she spent 60 hours a week on the Web. When she got into the lab, it was clear she didn’t know how to use a browser. When I asked her what she does on the Web, she said this computer didn’t look like hers at all, and that she starts in a place where she clicks on a picture and it brings up her favorite game. It turns out her son-in-law had set up a series of shortcuts on her desktop. She knew the games were on the Web, but that was all she knew about the Web.
  7. Watch the insider jargon. If you’re using industry or company terms for products or services that you want to test, you may prime respondents for what you’re looking for and lead them to the right answer. Again, open-ended questions can help here. This is where you start looking at your product from the user’s point of view.


Need help developing a screener? Need help with doing recruiting? Contact me about recruiting services my company offers. We’ve got a great process and a 90% show rate.