In the fall of 2012, I seized the opportunity to do some research I'd wanted to do for a long time. Millions of users would be available and motivated to take part. But I needed to figure out how to do a very large study in a short time. By large, I'm talking about reviewing hundreds of websites. How could we make that happen within a couple of months?
Do election officials and voters talk about elections the same way?
I had BIG questions. What were local governments offering on their websites, and how did they talk about it? And, what questions did voters have? Finally, if voters went to local government websites, were they able to find out what they needed to know?
Sports teams drill endlessly. They walk through plays, they run plays, they practice plays in scrimmages. They tweak and prompt in between drills and practice. And when the game happens, the ball just knows where to go.
This seems like such an obvious thing, but we researchers often pooh-pooh dry runs and rehearsals. In big studies, it is common to run pilot studies to get the kinks out of an experiment design before running the experiment with a large number of participants.
But I've been getting the feeling that we general research practitioners are afraid of rehearsals. One researcher I know told me that he doesn't do dry runs or pilot sessions because he fears it makes him look to his team like he doesn't know what he is doing. Well, guess what: the first "real" session ends up being your rehearsal, whether you like it or not, because you actually don't know exactly what you're doing yet. If it goes well, you were lucky and you have good, valid, reliable data. But if it didn't go well, you just wasted a lot of time and probably some money.
Despite the reality of differences due to aging, research has also shown that in many cases, we do not need a separate design for people who are age 50+. We need better design for everyone.
Everyone performs better on web sites where the interaction matches users’ goals; where navigation and information are grouped well; where navigation elements are consistent and follow conventions; where writing is clear, straightforward, in the active voice, and so on. And, much of what makes up good design for younger people helps older adults as well.
For example, we know that most users, regardless of age, are more successful finding information in broad, shallow information architectures than they are with deep, narrow hierarchies. When web sites make their sites easier to use for older adults, all of their users perform better in usability studies. The key is involving older adults in user research and usability testing throughout design and development.
For most teams, the moderator of user research sessions is the main researcher. Depending on the comfort level of the team, the moderator might be a different person from session to session in the same study. (I often will moderate the first few sessions of a study and then hand the moderating over to the first person on the design team who feels ready to take over.)
To make that work, it's a good practice to create some kind of checklist for the sessions, just to make sure that the team's priorities are addressed. For a field study or a formative usability test, a checklist might be all a team needs. But if the team is working on sussing out nuanced behaviors or solving subtle problems, we might want a bit more structure.
There are a bunch of things to do to get ready for any test besides designing the test and recruiting participants:
- make sure you know the design well enough to know what should happen as the participant uses it
- copy any materials you need for taking notes
- copy all the forms and questionnaires for participants, including honorarium receipts
- organize the forms in some way that makes sense for you. (I like a stand-up accordion file folder, in which I sort a set of forms for each participant into each slot. I stand up the unused sets and then when they’ve been filled out, they go back in on their sides.)
- check in with Accounting (or whoever handles the money) about honoraria or goodies for giveaways
- get a status report from the recruiter
- double-check the participant mix
- make sure you have contact information for each participant
- check that you have all the equipment, software, and anything else you need for the participant to be able to do tasks
- run through the test a couple of times yourself
- double-check the equipment you’re going to use (I use a digital audio recorder, so I need memory sticks for that, along with rechargeable batteries)
- charge all the batteries
- double-check the location
Which gets us to where you’re going to do the sessions. But let’s talk about that later.
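If it helps, a checklist like the one above can even live in a few lines of code so nothing gets skipped from study to study. Here's a minimal sketch in Python; the item names paraphrase my list, and the `remaining` helper is just my own illustration, not a standard tool:

```python
# Minimal pre-test checklist tracker -- a sketch, not a prescribed tool.
# The item names below paraphrase the preparation list above.

PREP_ITEMS = [
    "walk through the design",
    "copy note-taking materials",
    "copy participant forms and receipts",
    "organize forms per participant",
    "confirm honorarium money",
    "get recruiter status report",
    "double-check participant mix",
    "collect participant contact info",
    "check task equipment and software",
    "run through the test yourself",
    "double-check recording equipment",
    "charge batteries",
    "double-check the location",
]

def remaining(done):
    """Return the checklist items not yet completed."""
    return [item for item in PREP_ITEMS if item not in done]

done = {"charge batteries", "double-check the location"}
print(f"{len(remaining(done))} of {len(PREP_ITEMS)} items left")
```

Run it the day before each study and you have an instant status report.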
What, you don’t have a research strategy? Let’s think about the future here.
It’s not uncommon – and not bad – to be working in the present, reacting to the ever-growing demand for usability testing in your organization. “Ever-growing” is good. But when Jared Spool asked me to do a podcast with him recently to talk about what I think makes the difference between a good user experience team and a great user experience team, it got me thinking.
The recipe, based on my observations in dozens of corporations, comes down to these three main ingredients:
Vision is an overused word, but here I mean that you and your team have visualized the ideal customer experience — no limits, no constraints. Imagine the best possible interactions a customer could have with your organization at every touch point. Write it down.
Strategy means that you have a plan for reaching the vision. Over the long term, you can learn about and take into account customers’ contexts and goals while matching those up to the goals and objectives of the business.
Involvement calls all interested people in the business together (and that really should be everyone from management to design to development to support and anyone else in the organization) to embrace the vision and carry out the strategy across disciplines.
But I haven't said much about usability testing yet. Where does it fit in? Everywhere. Part of my strategy would be to teach as many people in the organization as possible to do usability testing. You probably can't do all the testing that is wanted (let alone needed). If you teach others to do it and coach them along the way, the customer ultimately benefits: the organization gains a closer, smarter understanding of the customer experience and can make evidence-based decisions about how to reach the ideal experience everyone has envisioned.
You have designed a study. Everyone seems to be buying in. Scheduling participants is working out and the mix looks good. What’s left to be done except just doing the sessions? Three things:
There are three rounds of practice that I do before the first "real" session. Jeez, I can hear you say: why would you, Dana, who have been doing usability testing for so many years, need to practice so much? I do it for a couple of reasons:
- It gives me multiple opportunities to clarify the intent of the test, the tasks, and the data measures.
- I can focus on observing the participant in each regular session because any kinks have been worked out.
Walk through the script and gather tools and materials
The first is to walk through my test plan and script. I read the script aloud even though I’m by myself. While I’m doing that, I do two things: adjust the wording to sound more natural, and gather tools and materials I’ll need to do the sessions.
Do a dress rehearsal or dry run
For the second round of practice, I do a dry run of the now refined script with someone I know filling the role of the participant. We do everything we would normally do in a session, from greeting and filling out forms, to doing tasks, to closing the session. I might occasionally stop the session to adjust the script or to make notes about what to do differently next time. I might even ask the participant (usually a friend, neighbor, or colleague) whether the test is making sense. It's a combination of dress rehearsal and "logic and accuracy" test: it gets the sequence down and makes sure I've got all the necessary pieces.
Pilot the protocol
Finally, there’s the pilot test session. In this pilot, I work with a “real” participant – someone who was screened and scheduled along with all of the other participants. I conduct the session in the same way I intend to conduct all of the following sessions. The twist this time is that observers from the design team should be present. At the end of the session, I debrief with them about the protocol.
Don’t waste good participant data
There have been times when I was rushed by a client, or was just too cavalier going into a usability test, and did not rehearse. I paid for it with rough sessions whose data I couldn't fully use. Every time, it's a reminder that preparation and practice are as important to getting good data as a good test design is.
I haven't been in a usability test lab for about a year. Ironically, while I was writing a book about usability testing, much of my work was field research to learn about particular audiences and their tasks.
And, though my usual position about labs is that exploratory usability testing is probably better done in the user’s environment, I’m excited about getting back into the lab.
Good reasons to test in a lab
I'm doing these upcoming tests in a lab facility because:
- The testing is quantitative and summative. That is, I’m doing very specific counts of errors and failures that are strictly defined, so I want to control other aspects of the test such as the computer setup.
- I don’t want to interact much with the participants. I only want to direct participants when to start their tasks. Otherwise, I will intervene in the session only at prescribed points, so I will direct the session from a different room from where the participants are working.
- I may have observers, but I won’t know until the last minute. Though I prefer it if observers arrive before the session starts and stay through a whole session, at a facility they can come and go because they can observe from a separate room.
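When the counts are this strictly defined, it's worth reporting rates with a confidence interval rather than a bare percentage, because summative usability tests usually have small samples. Here's a minimal sketch using the Wilson score interval, which behaves better than the naive plus-or-minus interval at small sample sizes; the counts in the example are hypothetical:

```python
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion.

    Better behaved than the naive interval at the small sample
    sizes typical of summative usability tests.
    """
    if n == 0:
        raise ValueError("need at least one trial")
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - margin, center + margin

# Hypothetical result: 9 of 12 participants completed the task without error.
lo, hi = wilson_interval(9, 12)
print(f"success rate {9/12:.0%}, 95% CI [{lo:.0%}, {hi:.0%}]")
```

A wide interval is itself a finding: it tells the team how much (or little) the sample size lets you claim.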
Good reasons to test in the field
I recently did a usability study in the field. Why?
- I wanted to learn about the user’s environment (rather than controlling it). In the exploratory study I’m thinking of, I got the best of both worlds: usability testing data in a realistic situation. I learned about lighting levels, surrounding noise, and what the participant’s desk setup was like. But I also got to observe relationships and interactions the participant had with others, typical interruptions (and recovery from those), and how the thing I was testing fit into the person’s work.
- It was convenient for the participants. They didn't have to travel to the testing site, and the interruption of their typical day was minimized.
- The sessions were informal enough that observers could be present in the room (after they had been properly trained). In fact, people from neighboring cubes often chimed in with comments or questions because they'd overheard what we were talking about. I took this to be a good thing: I learned about that communication dynamic, and those eavesdroppers often contributed information that was useful to my study.
In a future post, I’ll talk about what to look for in a lab facility if you’re renting one and how to find one.
I get a lot of clients who are in a hurry. They get to a point in their product cycle that they’re supposed to have done some usability activity to exit the development phase they are in and now find they have to scramble to pull it together. How long can it take to arrange and execute a discount usability test, anyway?
Well, to do a usability test right, it does take a few steps. How much time those steps take depends on your situation. Every step in the process is useful.
The steps of a usability test
Jeff Rubin and I think there are these steps to the process for conducting a usability test:
- Develop a test plan
- Set up the testing environment and plan logistics
- Find and select participants
- Prepare test materials
- Conduct the sessions
- Debrief participants and observers
- Analyze data and observations
- Create findings and recommendations
Notice that “develop a test plan” and “prepare test materials” are different steps.
It might seem like a shortcut to go directly to scripting the test session without designing the test. But the test plan is a necessary step.
Test plan or test design?
There's a planning aspect to this deliverable. Why are you testing? Where will you test? What are the basic characteristics of the participants? What's the timing for the test? For the tasks? What other logistics are involved in making this particular test happen? Do you need bogus data to play with, user IDs, or other props?
To some of us, a test design would be about experiment design. Will you test a hypothesis or is this an exploratory test? What are your research questions? What task scenarios will get you to the answers? Will you compare anything? If so, is it between subjects or within subjects? Will the moderator sit in the testing room or not? What data will you collect and what are you measuring?
It all goes together.
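To make one of those design decisions concrete: if you compare within subjects, you usually counterbalance task order so that order effects don't contaminate the comparison. Here's a minimal sketch with hypothetical task names; it builds a simple cyclic Latin square, where each task appears once in each position across participants (a Williams design would additionally balance carryover effects):

```python
def latin_square(tasks):
    """Cyclic Latin square of task orders.

    Each task appears exactly once per row and once per position,
    so order effects are spread across participants in a
    within-subjects comparison.
    """
    n = len(tasks)
    return [[tasks[(row + col) % n] for col in range(n)] for row in range(n)]

# Hypothetical tasks A-D; assign row i % 4 to participant i.
orders = latin_square(["A", "B", "C", "D"])
for row in orders:
    print(" -> ".join(row))
```

With 12 participants and 4 tasks, each of the four orders gets used three times, which is exactly the kind of detail a test plan should pin down before anyone writes a session script.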
Why not just script the session without writing a plan?
Having a plan that you've thought through is always useful. You can use the test plan to get buy-in from stakeholders, too. As a representation of what the study will be, it's like reviewing the blueprints and renderings before you give the building contractor approval to start building.
With a test plan, you also have a tool for documenting requirements (a frozen test environment, anyone?) for the test and a set of unambiguous details that define the scope of the test. Here, in a test plan, you define the approach to the research questions. In a session script, you operationalize the research questions. Writing a test plan helps you know what you’re going to collect data about and what you’re going to report on, as well as what the general content of the report will be.
Writing a test plan (or design, or whatever you want to call it) will give you a framework for the test in which a session script will fit. All the other deliverables of a usability test stem from the test plan. If you don’t have a plan, you risk using inappropriate participants and getting unreliable data.