Usability testing is a fantastic source of data for making design decisions. You get to see firsthand what frustrates users and why. Of course you know this.
There are other sources of data that you should be paying attention to, too. For example, observing training can be very revealing. One of the richest sources of data about frustration is the call center. That is a place that hears a lot of pain.
Capturing frustration in real time
Often, the calls that people make to the call center surface issues that you’ll never hear about in usability testing. The context is different. In a usability study, you give participants tasks and a scenario to work within. This gives you control of the situation and helps you bound the possible issues you might see. But when someone calls the call center, anything from onboarding to offboarding, and everything in between, is fair game for encountering frustration. The call center captures frustration in real time.
We could talk a lot about what it means that organizations have call centers, but let’s focus on what you can learn from the call center and how to do it.
There are two kinds of data that being friends with the call center can get you. First, nearly every call that comes in is sorted, routed, and documented in some way. (By the way, there’s a similar, parallel process for anything that comes in by chat or email.) All of these passive data collections can give you some insights about what people get frustrated enough about that they dig around to find the phone number and make time to call.
Second, there’s real-time use happening: basically, live usability testing, in context, in real time. Call center agents act as lifelines, getting callers through the tasks they’re trying to do to reach their own goals.
Call categories tell you how often people get desperate on which issues
Callers often are asked to self-sort as they go through the voice user interface on the phone: Press 1 for delayed delivery. Press 2 for missing license key. Press 3 for questions about the notice you received. And so on. You should care about this interface because it’s part of the caller’s experience with your organization and its products. But you can also learn a lot about issues that you can design for by paying attention to the categories of calls and the numbers of calls that come through that category. Does one category dominate? It’s probably a candidate for attention from your team.
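As a rough sketch of how you might check whether one category dominates, here is a tally over a hypothetical CSV export of the call log. The one-row-per-call format and the `category` column name are assumptions for illustration; real exports from your call center platform will look different.

```python
import csv
from collections import Counter

def category_counts(path):
    """Tally calls per phone-tree category from a CSV export.

    Assumes one row per call with a 'category' column (hypothetical
    format); adjust to match your call center's actual export.
    """
    with open(path, newline="") as f:
        counts = Counter(row["category"] for row in csv.DictReader(f))
    total = sum(counts.values())
    # Report each category's count and share of volume, largest first,
    # so a dominating category stands out immediately.
    return [(cat, n, n / total) for cat, n in counts.most_common()]
```

A category holding a large share of volume week after week is a strong signal that it deserves attention from your design team.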
There is often an option for the caller to bypass the phone tree and go directly to a human being. You definitely want to know how often this happens, because it can be an indicator of the level of freakout the caller is dealing with. Maybe the caller isn’t comfortable getting through the phone tree, or maybe the phone tree really doesn’t have an option that reflects what they’re calling about, or maybe they are just off-the-charts frustrated.
Queue loads over time show patterns in use
The selections callers make in the phone tree push their call through queues that show up in a dashboard on the agent’s desktop. Queues might also be managed for severity, which is an interesting thing to track.
But more typically, call center managers notice upticks or downturns at different times. For example, tax software for individual filers gets very little action until people start thinking about doing their tax returns, all the way through the filing date. The rest of the year is quiet. For government services, upticks might be driven by external events. For example, more Americans are born in September than in any other month. If you work in an agency that has age-related benefits, you might see more calls in August and September as a result of that demographic fact. Or the upticks might be driven by other factors, such as policy changes or world events. For example, US Citizenship and Immigration Services has noticed a pattern over decades that more people apply to become naturalized citizens in the year leading up to a presidential election.
All of those pattern heuristics are indicators for services, training, or features that your team might want to plan for.
If you’re friends with the call center, you can get regular reports about what’s happening, just like they do. You can then see trends over time, and anticipate what issues people might call about whenever something in your product or service changes.
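To see those trends yourself, you can aggregate call volume by month from the same kind of export. This is a minimal sketch, assuming a hypothetical CSV with an ISO-format `timestamp` column; the column name and date format are placeholders for whatever your call center system actually produces.

```python
import csv
from collections import Counter
from datetime import datetime

def monthly_volume(path):
    """Count calls per calendar month from a CSV export.

    Assumes a 'timestamp' column in ISO format, e.g.
    2023-09-14T10:05:00 (hypothetical; match your real export).
    """
    counts = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            ts = datetime.fromisoformat(row["timestamp"])
            counts[(ts.year, ts.month)] += 1
    # Sort chronologically so seasonal upticks are easy to spot.
    return dict(sorted(counts.items()))
```

Plotting or eyeballing the monthly counts over a year or two makes seasonal patterns, like a tax-season surge, hard to miss.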
This call may be recorded for quality and training purposes
Calls are recorded for a variety of reasons. One reason is legal liability documentation. But more often, organizations use recordings of calls as a way to review how agents are doing at their jobs. You may be able to get recordings of calls to listen to outside of the call center. As with watching videos of usability test sessions, you lose something by not being present in the moment. In the case of calls to a call center, you won’t get the chance to ask the agent any follow-up questions about why a call was handled the way it was. But as a way of getting some insight, listening to recordings can be a helpful supplement and a source of ongoing perspective.
(Note: There may be restrictions on how recordings are handled that you should be aware of and that you should respect. For example, your human resources department, or if the agents are unionized, the local, may be concerned about protecting the anonymity of the agents. Be clear about what you’re trying to learn from the calls when you ask for permission to listen to either recordings or live calls.)
Listen, and you shall hear
That’s all handy enough, but sitting in the call center, listening in on calls, can be incredibly enlightening. Yes! This is a thing you can do. The phones are already set up for it: most call center agents are trained first by listening in on calls that other agents handle, and then by having supervisors listen in on their calls as they interact with callers.
Here are some pro tips for listening in on calls:
- Get a briefing on the systems that agents have to use to document calls. These systems often drive what the interaction in the call is like. For example, is the call scripted? Or are agents allowed to go off script? Are there categories that the agents are supposed to use to classify calls? What are the resources that agents have for finding answers?
- Ask about agent incentives and autonomy. Get the basic facts about whether agents are rewarded for keeping calls short. If agents are encouraged to end calls quickly, you may hear different solutions than you would when agents have other incentives.
- Just listen to the first few calls. Don’t focus on any particular thing. Learn how the calls get done.
- Ask follow-up questions in the few seconds between calls. Never ever ever interrupt the call for any reason. Ever.
- Ask agents what the top issues are. The stats in the reports show one picture, but agents will also have a gauge on how callers are feeling about what’s happening.
- Take note of how callers ask their questions and describe their complaints. The words they use may not match what’s in your design. You’ll want to understand what’s happening there, and why. For example, are these callers who are new to the design? If that’s happening a lot, you may not be designing well for people to quickly learn when they first start interacting with your design. There may be jargon embedded that is confusing unless users have more domain knowledge than you realized when you made the design.
- Listen for where in the design callers are stuck or otherwise frustrated. Just like in a usability test, you’ll start to see patterns in where people encounter obstacles.
- Switch agents after a few calls. Though they all get similar training, their experience and approach may affect the outcomes of the calls. Or they may take all the same steps in a different order.
- Listen to enough calls. Just like doing enough user research sessions to complete your picture of what’s happening and why, listening to enough calls to reach the saturation point is going to help you round out the data you have about the issues people are calling about. But you don’t have to do a bajillion calls in one sitting. You should think about sitting in on calls regularly, and working with the call center on a periodic basis as a way to help fill gaps in what you and your team know about what’s happening with your design.
- Listen and watch for how agents resolve the issues people call with. Are they closing tickets with novel approaches that you didn’t expect? Maybe these are workarounds that you should design for, or signs that you should fix the underlying problem that makes the workaround necessary.
While usability testing probably is the heart of your evaluation toolbox, getting different perspectives on usability regularly can give you a fresh look at your work. Look around for other sources of data in your organization — they’re everywhere. Start by making friends with the call center team. They see and hear the pain every day.