Tag: collecting data

  • Consensus on observations in real time: Keeping a rolling list of issues

    Design teams often need results from usability studies yesterday. Teams I work with always want to start working on observations right away. How do you support them while getting good data and ensuring that the final findings are valid? Teams that are fully engaged in getting feedback from users – teams that share a vision…

  • Looking for love: Deciding what to observe for

    The team I was working with wanted to find out whether a prototype they had designed for a new intranet worked for users. Their new design was a radical change from the site that had been in place for five years and in use by 8,000 users. Going to this new design was a big…

  • Popping the big question(s): How well? How easily? How valuable?

    When teams decide to do usability testing on a design, it is often because there’s some design challenge to overcome. Something isn’t working. Or, there’s disagreement among team members about how to implement a feature or a function. Or, the team is trying something risky. Going to the users is a good answer. Otherwise, even…

  • Retrospective review and memory

    One of my favorite radio programs (though I listen to it as a podcast) is Radiolab, “a show about science,” which is a production of WNYC hosted by Robert Krulwich and Jad Abumrad and distributed by NPR. This show contemplates lots of interesting things, from reason versus logic in decision making to laughter to…

  • Making it easy to collect the data you want to collect

    As I have said before, taking notes is rife with danger. It’s so tempting to just write down everything that happens. But you probably can’t deal with all that data. First, it’s just too much. Second, it’s not organized. Let’s look at an example research question: Do people make more errors on one version of…

  • Translating research questions to data

    There’s an art to asking a question and then coming up with a way to answer it. I find myself asking, What do you want to find out? The next question is How do we know what the answer is? Maybe the easiest thing is to take you through an example. Forming the right question…

  • Data collecting: Tips and tricks for taking notes

    A common mistake people make when they’re new to conducting usability tests is taking verbatim notes. Note taking for summative tests can be pretty straightforward. For those you should have benchmark data that you’re comparing against or at least clear success criteria. In that case, data collecting could (and probably should) be done mostly by the…

  • Beware the Hawthorne Effect

    In a clear and thoughtful article in the May 2007 issue of the Journal of Usability Studies (JUS), published by the Usability Professionals’ Association, Rich Macefield debunks the popular myths around the legendary Hawthorne effect. He goes on to explain very specifically how no interpretation of the Hawthorne effect applies to usability testing. Popular myth –…

  • Should you record sessions on video/audio?

    Since the beginning of time, the accepted practice among professional usability practitioners has been to record sessions on video. It is something that we tend to do automatically. There aren’t many obstacles to recording sessions these days. It really only takes a web camera and some relatively inexpensive recording software on the testing PC. (Of…

  • Keeping a rolling list of issues throughout a study

    Design teams are often in a hurry to get results from usability studies. How do you support them while giving good data and ensuring that the final findings are valid? One thing I do is to start a list of observations or issues after the first two or three participants. I go over this list…