“Older people can’t use technology”: Refuting assumptions through research 

In the 2000s, as the organization now known as AARP saw the coming “silver tsunami” of Baby Boomers entering old age, I was fortunate to do some research with Ginny Redish about the experience of older adults as they interact with the world. 

AARP wasn’t the only organization looking at this, though it makes sense that they would. AARP is a nonprofit that offers services to people over age 50 and lobbies Congress on issues related to Social Security, Medicare, and other policies that affect older people and their families. 

Fidelity, a wealth management company with a robust human-centered design practice, realized that many of the daily users of their website were in their 80s. It made sense to understand the experience older people had as they interacted with the information and available transactions online. 

(This was relatively early days for the internet. There were not a lot of transactions you could perform online. While stock trading came online fairly early in the 2000s, exchanging mutual funds – the backbone of most retirement accounts – was not available until 2003 or so.) 

Your perception of what older people are capable of is probably wrong

Going into the research, what we heard, mostly from people younger than 50, was that older people couldn’t cope with technology. That they would struggle with websites in ways that younger people would not. 

What we all realized pretty quickly was that there was a huge range of ability, attitudes, and aptitudes in this massive demographic. But if a website was not usable, it wasn’t because the user was 63 or 89. It was because the website wasn’t usable by lots of people, independent of age. 

We took apart what people meant when they assumed that older adults couldn’t deal with technology, and what we found was a medicalized mental model. The assumption was that the warranty runs out on what were presumed to be perfectly functioning body parts: as you start to experience accelerated aging, you become disabled. You might have poorer eyesight or hearing. Dexterity and mobility degrade. In addition, the mental model younger people hold is that older people are also feeble-minded, prone to short-term memory loss or just general degradation of cognition. 

As Amy Lee, then the head of customer experience for web at AARP, said in her foreword to our report: 

“The existing heuristics seemed to me to be focused on people’s disabilities rather than on people’s abilities. Not everyone over 50 has eyesight poor enough to require maximizing the size or contrast of text of a web page. Not every person over 50 has problems with motor control or significant short term memory loss. The diversity of this demographic group is stunning. Not everyone over 50 is new to the Web or afraid of their computer. Why are we trying to lump them all together like that?” 

I was in my early 40s when Ginny and I did this work. I am now in my early 60s. The empirical evidence stands up to what we learned then. I think I’m way smarter now, much quicker on analysis and critical thinking than I was then. This is wisdom (or it could be my own perception through some age-related degradation like early dementia – you tell me). My eyesight is actually better than it was then. It turns out that the shape of your eyes changes over time, and sometimes that is in your favor. Unfortunately, the genes I inherited from my wonderful parents included markers for arthritis. From both of them. I feel this every day. It does not impede my ability to use the web. 

In 2025, websites, apps, and other technologies are still pretty unusable by lots of people. This is largely because they are made by people who are not their users and because those well-intentioned designers and product people are not learning from people who have lived experiences. And, as technology gains more features and functionality, it comes with more complexity not more simplicity. 

Some takeaways and a model for designing for everyone, including older adults

I’m going to link to all the reports from the work that Ginny and I did, but that’s not the same as directly observing individuals interacting with a thing that someone has designed. 

Our heuristics were informed by also observing older adults interacting with AARP.org and other sites. Among the insights I gained that linger with me today are these: 

  • People perceive “old” as about 20 years older than they are. 
  • Age is a moving target. You don’t turn 50 and fall apart. Different things happen (or not) to people at a range of ages and not all of them are strictly age-related. 
  • If we are lucky, all of us will get to experience aging. 
  • People who are in their 80s and 90s now used computers in their professions during their working years. They may still be happily in their working years. Some of them invented the technology we use today. 

But the big aha that Ginny, Amy, and I had was that there were some simple factors to consider in the usability and accessibility of websites for older adults. At the time, we heuristically placed interactions on sites on scales that we used to try to capture the experience an older person might have. The factors were Age (because AARP), Ability, Aptitude, and Attitude. In our report, we described them this way: 

  • age: including chronological age, but taking into account life experiences 
  • ability: cognitive and physical
  • aptitude: expertise with the technology 
  • attitude: confidence levels and emotional state of mind 

Yes, chronological age is a kind of measure, but one 70-year-old might have amazing skin, excellent eyesight, and the capacity to run marathons because of a combination of genes, privilege, and other factors. Another might have been exposed to environmental, genetic, or other factors that restrict their mobility so much that they are homebound. 

People of all ages struggle with using technology

Later, around 2008, in some work I did for a company that was a pioneer in online learning, I applied this model to college students who were the audience for the startup’s prototype product. The participants ranged in age from 18 to 30 (so-called “adult learners,” who in some cases were returning to get or complete degrees). What we saw was that age was not a factor at all in the usability and accessibility of online tools and websites. 

We met 20-year-olds who were the perfect target audience for Facebook but didn’t know the first thing about how to interact with it, or why you would want to. When we put them in front of our prototype, we saw no effect for age. We did see effects for what we (and my friends at Fidelity) called “expertise.” Expertise came from a combination of ability, aptitude, and attitude. 
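The ability–aptitude–attitude combination can be sketched as a toy scoring model. This is a minimal illustration, not anything from the original reports: the numeric 0–10 scales, the field names, and the simple averaging are all my own assumptions for the sake of the example.

```python
from dataclasses import dataclass

@dataclass
class Participant:
    # Hypothetical 0-10 scales; the original heuristics were
    # qualitative, not numeric. These names are illustrative only.
    age: int        # chronological age, recorded for context only
    ability: float  # cognitive and physical ability
    aptitude: float # expertise with the technology
    attitude: float # confidence and emotional state of mind

def expertise(p: Participant) -> float:
    """Composite 'expertise' score: a simple average of ability,
    aptitude, and attitude. Age is deliberately excluded, matching
    the finding that age alone did not predict usability outcomes."""
    return (p.ability + p.aptitude + p.attitude) / 3

# An 82-year-old can score high and a 20-year-old can score low.
older = Participant(age=82, ability=8, aptitude=9, attitude=7)
younger = Participant(age=20, ability=7, aptitude=2, attitude=3)
print(expertise(older))   # → 8.0
print(expertise(younger)) # → 4.0
```

The point of the sketch is the shape of the model, not the numbers: age appears in the data but contributes nothing to the score.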

I have applied the model in formal and informal ways in studies since then and found the same thing: Age is not a factor in how well a design performs for older people. 

In the category of “everything old is new again,” I have had the delightful privilege of sharing my wisdom with lots of exceptional designers and product people over the last several years. One thing that is not exceptional about them is that they, too, assume that older adults struggle with technology. So, it’s time to revive this work and get it out in the world again. 

AARP Audience-Centered Heuristics: Older adults (pdf, 106KB)
Twenty heuristics, each with several questions, from the following two studies.

Chisnell, D. and Redish, J. C., 2005, Designing Web Sites for Older Adults: Expert Review of Usability for Older Adults at 50 Web Sites. (pdf, 1.8MB)

Chisnell, D. and Redish, J. C., 2004, Designing Web Sites for Older Adults: A Review of Recent Research. (pdf, 397KB)

You might also want to read our insights about recruiting and working with older participants in usability studies (pdf, 156KB)

Delight resources

A key element of designing for delight is understanding where your product is in its maturity. One way to look at that is through the lens of the Kano Model. You can learn about the Kano Model and our addition of pleasure, flow, and meaning through a couple of sources:

Read Jared’s article on understanding the Kano Model (8-minute read on uie.com)

Watch Jared talk about the Kano Model (45-minute video)

 

Dana and Jared have both written about different aspects of delight. It’s not just about dancing hamsters. Delight is much more nuanced than that. The three key elements are pleasure, flow, and meaning.

Read Jared’s overview of pleasure, flow, and meaning. (10-minute read on uie.com)

Read Dana’s series  at UX Magazine

 

Design can be used for good, or evil. Jared wrote about a technique that we use in our workshop that he calls “despicable design.” Going to the dark side can reveal a lot about how your team approaches designing its users’ experiences.

Read Jared’s article, “Despicable Design — When ‘going evil’ is the perfect technique” (12-minute read at uie.com)

 

In our workshop, we also use sentiment words to help teams narrow down how they want people to feel about or perceive a service. Here are the basics of sentiment analysis, and a piece from NNG about using the Microsoft Desirability Toolkit, from which our use of sentiment words comes.
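As a rough illustration of how sentiment-word selections get tallied, here is a minimal sketch in the spirit of the Desirability Toolkit’s reaction cards. The word lists and picks below are invented for the example; real studies use the toolkit’s full card set.

```python
from collections import Counter

# Hypothetical selections: each participant picks the words that best
# describe their experience with the product. All data here is made up.
selections = [
    ["trustworthy", "calm", "slow"],
    ["calm", "useful", "trustworthy"],
    ["confusing", "slow", "trustworthy"],
]

# Flatten the picks and count how often each word was chosen.
tally = Counter(word for picks in selections for word in picks)
for word, count in tally.most_common(3):
    print(f"{word}: {count}")
```

The most frequent words give the team a shared, concrete vocabulary for how the design currently lands, which they can compare against the sentiment words they chose as targets.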

 

Framework for research planning

One of the tricks to making sure that I’ve designed the right study to learn what I need to learn is to tie everything together so I can be clear from the planning all the way through to the results report why I’m doing the study and what it is actually about. User research needs to be intentionally designed in exactly the same way that products and services must be intentionally designed.

 

What’s the customer problem?

It starts with identifying a problem that needs to be solved, and the contexts in which the problem is happening. This is a kind of meta research, I guess. From there, I can work with my team to understand deeply why we are doing the research at all, what the objective of the particular study is, and what we want to be different because we have done the research.

 

Why are you doing the study?

When the team shares an understanding of why you’re doing the study and what you want to get out of it — along with envisioning what will be different because you will have done the study — forming solid research questions is a snap. You need research questions to set the boundaries of the study, to determine what behaviors you want to learn about from participants, and to decide what data you can reasonably collect, within your constraints, to answer those questions.

Continue reading “Framework for research planning”

What to do with the data: Moving from observations to design direction

 

This article was originally published on December 7, 2009. 

 

What is data but observation? Observations are what was seen and what was heard. As teams work on early designs, the data is often about obvious design flaws and higher order behaviors, and not necessarily tallying details. In this article, let’s talk about tools for working with observations made in exploratory or formative user research.

Many teams have a sort of intuitive approach to analyzing observations that relies on anecdote and aggression. Whoever is the loudest gets their version accepted by the group. Over the years, I’ve learned a few techniques for getting past that dynamic and on to informed inferences that lead to smart design direction and creating solution theories that can then be tested.

 

Collaborative techniques give better designs

The idea is to collaborate. Let’s start with the assumption that the whole design team is involved in the planning and doing of whatever the user research project is.

Now, let’s talk about some ways to expedite analysis and consensus. Doing this has the side benefit of minimizing reporting – if everyone involved in the design direction decisions has been involved all along, what do you need reporting for? (See more about this in the last section of this article.)

Continue reading “What to do with the data: Moving from observations to design direction”

Observers are your friends

 

(This article was originally published on May 30, 2008. This is a refresh.)

 

Research that you do alone ends up in only your head. No matter how good the report, slide deck, or highlights video, not all the knowledge gets transferred to your teammates. This isn’t your fault. It just is.

So what to do? Enlist as many people on your team as possible to help you by observing your usability testing sessions. You can even give your observers jobs, such as time-keeper if you’re measuring time on task. Or, if you are recording sessions, it could be an observer’s job to start and stop the recordings and to label and store them properly.

The key is to involve the other people on the team – even managers – so they can

  • help you
  • learn from participants
  • share insights with you and other observers
  • buy in
  • reach consensus on what the issues are and how to solve them

Who should observe: Everyone

Ideally, everyone on the design and development team should observe sessions. Every designer, every programmer, every manager on the project should watch as real people use their designs. People on the wider team who are making design decisions should also observe sessions. I’m talking about QA testers, project managers, product managers, product owners, legal people, compliance people, operations people — everyone.

Continue reading “Observers are your friends”

Call centers as a source of data

Usability testing is a fantastic source of data on which to make design decisions. You get to see what is frustrating to users and why, first hand. Of course you know this.

There are other sources of data that you should be paying attention to, too. For example, observing training can be very revealing. One of the richest sources of data about frustration is the call center. That is a place that hears a lot of pain. 

 

Capturing frustration in real time

Often, the calls that people make to the call center surface issues that you’ll never hear about in usability testing. The context is different. When someone is in your usability study, you’ve given them the task and there’s a scenario in which the participants are working. This gives you control of the situation, and helps you bound the possible issues you might see. But when someone calls the call center, it could be anything from onboarding to offboarding, with everything in between as fair game for encountering frustration. The call center captures frustration in real time.

We could talk a lot about what it means that organizations have call centers, but let’s focus on what you can learn from the call center and how to do it.

Continue reading “Call centers as a source of data”

Talking to strangers in the street: Recruiting by intercepting people

 

Intercepting is an exercise in self-awareness. Who you choose and how you approach them exposes who you are and what you think. What your fears are. The inner voice is loud. As a practice, we worry about bias in user research. Let me tell you, nothing exposes bias in the researcher like doing intercepts for recruiting.

Why would you do recruiting by intercepting, anyway? Because our participants were hard to find.

Hard-to-find participants walk among us

Typically, we focus recruiting on behaviors. Do these people watch movies? Clip coupons? Ride bicycles? Shop online? Take medicine?

The people we wanted to talk to do not take part in a desired behavior. They don’t vote.

We did intercepts because we couldn’t figure out a way to find the people we wanted through any conventional recruiting method. How do you recruit on a negative behavior? Or rather, how do you find people who aren’t doing something, especially something they are likely to think they should be doing — so they might lie about it?

Continue reading “Talking to strangers in the street: Recruiting by intercepting people”

Deconstructing delight

 

Maybe you just read Jared Spool’s article about deconstructing delight. And maybe you want to hear my take, since Jared did such a good job of shilling for my framework.

Here’s a talk I gave a couple of years ago, though I’d been giving it for a while. Have a listen. (The post below was originally published in May, 2012.) 

Everybody’s talking about designing for delight. Even me! Well, it does get a bit sad when you spend too much time finding bad things in design. So, I went positive. I looked at positive psychology, and behavioral economics, and the science of play, and hedonics, and a whole bunch of other things, and came away from all that with a framework in mind for what I call “happy design.” It comes in three flavors: pleasure, flow, and meaning.

I used to think of the framework as being in layers or levels. But it’s not like that when you start looking at great digital designs and the great experiences they are part of. Pleasure, flow and meaning end up commingled.

So, I think we need to deconstruct what we mean by “delight.” I’ve tried to do that in a talk that I’ve been giving. Here are the slides:

 

You can listen to audio of the talk from the IA Summit here.

There are also a few articles about the delight framework.

The essence of usability testing, in your pocket

I’ve encountered a lot of user researchers and designers lately who say to me, “I can’t do all the testing there is to do. The developers are going to have to evaluate usability of the design themselves. But they’re not trained! I’m worried about how to give them enough skills to get good data.”

What if you had a tiny guide that would give your team just the tips they need to guide them in creating and performing usability tests? It’s here!

 

Usability Testing Pocket Guide

This is a 32-page, 3.5 x 5-inch book that includes 11 simple steps along with a quick checklist at the end to help you know whether you’re ready to run your test.

The covers are printed on 100% recycled chipboard. The internal pages are printed with vegetable-based inks on 100% recycled paper. The Pocket Guides are printed by Scout Books and designed by Oxide Design Co.

You can order yours here.


Why are researchers afraid of developers?

 

The other evening I was at a party with a whole lot of UX-y people, some of them very accomplished and some of them new to the craft. I grabbed an egg nog (this is why I love this time of the year!) and stepped up to a cluster of people. I knew a couple of them, and as I entered the circle, I overheard one of them saying that he had attended a workshop at his place of work that day on how to talk to developers, and it had really helped.

 

“Helped what?” I said. But what I thought was, Good lord, it’s not as if developers are a different species. What’s going on here? As I listened longer, I heard others in the circle sympathize. They were afraid of the developers they were supposed to be on the same team with.

 

Researchers are intimidated by developers because developers have two superpowers. They Make and they Ship. Researchers don’t. Researchers and the data they produce actually get in the way of making and shipping.

 

Developers are not rewarded for listening to researchers. They’re generally not rewarded for implementing findings from research about users. Learning about research results means it takes more time to do the right thing based on the data. (Let’s not even get into getting developers to participate in research.) Paying attention to research data makes shipping harder and more time-consuming. Everything about application development methodology is optimized for shipping. Application development processes are not optimized for making something superb that will lead to an excellent user experience.

Continue reading “Why are researchers afraid of developers?”