What if you had a near-perfect participant show rate for all your studies? The first time it happens, it’s surprising. The next few times, it’s refreshing, even a relief. Teams that do great user research start with the recruiting process, and they come to expect near-perfect attendance.
Secret 1: Participants are people, not data points
The people who opt in to a study lead rich, complex lives and bring rich, complex experiences that a design may or may not fit into. People don’t always fit neatly into the boxes that screening questionnaires create.
Screeners can be constraining, and not in a good way. A recruiting agency that isn’t familiar with your design, your audience, or both (and may not be experienced with user research) can screen out people who would be great participants in research or usability testing. Teams we work with find that participants selected through open-ended, voice-to-voice interviews become engaged and invested in the study. The conversation tells participants that they’re interesting to you, which makes them feel wanted, and the team learns about variations in the user profile that they might want to design for.
Secret 2: Participants are in the network
Let’s say the source is a panel or a database (versus a customer list). People who sign up for panels or recruiting databases tend to be people who take part in studies to make easy money. Many are the kind of people who fill out surveys to win prizes. These people might be good participants, or they might not.
Teams that recruit study participants through personal, professional, and community networks find that when the snowball of connections works, people respond because they’re interested and have something to offer (or a problem your design might solve for them).
They also come partially pre-screened. Generally, your friends of friends of friends don’t want to embarrass the people who referred them. If the call for participants is clear and compelling, the community coordinator at the church, school, club, union, or team will remember to mention the study as soon as they encounter someone they know who might fit. Don’t worry: the connections soon get far enough away from you and your direct network that your data will be just as objective and clean as can be.
Secret 3: Participants want to help you
They want to be picked for your team. They want to share their experiences and demonstrate their expertise. When teams are open to the wide range of participants’ experiences, they learn from participants during screening. Those selected become engaged in the research. These are the participants who call when they’re going to be late, or apologize for having to switch times. They want to work with you. One team we worked with had a participant call from the scene of a car accident before calling the police. (They rescheduled!)
Secret 4: Participants need attention
You know all the details that go into a study. Participants need confirmation and reminding. Teams that send detailed email confirmations get respectable show rates. Teams that send email confirmations and then email reminders just before the sessions get good show rates. Teams that send email confirmations, send email reminders, and then call participants to remind them in a friendly, inviting tone get stellar show rates.
Some teams use the call before the session to start the “official” research. Rather than having the recruiter make the final call, the researcher phones to explain the study and the roles, and to ask some of the warm-up questions that would normally open a regular session. These researchers establish a relationship with the participant. They also get a head start, leaving more time in the face-to-face session to observe behavior rather than interview.
Perfect attendance is worth the effort
When all the scheduled participants show up, the gold stars come not only from efficient use of lab time and from keeping clients’ and team members’ eyes and ears on users. The team also likely ends up with better, more appropriate, more informative participants overall. That means better, more reliable data to inform design decisions.