Why we say no to surveys and focus groups

Originally published on the DTA Blog.

Surveys and focus groups aren’t used much in our user-centred design process. These are the reasons why.

You can’t get authentic, actionable insights in a few clicks

Think about the last time you filled in a survey.

As you were filling in that survey, did you feel as though you were really, genuinely able to express to that organisation how you felt about the thing they were asking you? About the actual experiences you’ve had?

If the answer is no, you’re in good company. I ask this question a lot and the answer is always the same.

This is important to remember whenever you’re looking at research reports full of statistically significant graphs. Always make sure you are critically evaluating the quality of the research data you are looking at – no matter how large the sample size or whether it has been peer reviewed.

Also, when you are looking at research outcomes you should think about whether they help you understand what to do next. Surveys and analytics can be good at telling us what is happening, but they are less good at telling us why. Understanding the why is critical for service design.

Government services have to work for everyone

As researchers, we have a pretty diverse toolkit of research techniques and it is important that we choose the right tools for the job at hand.

Surveys and focus groups are research techniques widely used in market research, where the goal is to understand the size of a market and how to reach and attract its customers. But most of the time, designing government services is not like marketing.

Randomised controlled trials are widely used in behavioural economics to understand how best to influence behaviour in a desired direction. Most of the time, designing government services is not like behavioural economics either.

The job that multi-disciplinary teams have to do when designing government services is simple but difficult. We need to make sure that the service works for the widest possible audience. Everyone who wants to use that digital government service should be able to.

When we achieve this level of usability in a government service we are more likely to achieve:

  • desired policy outcomes
  • increased compliance
  • reduced error rates
  • a better user experience for end-users.

It’s not about preference

Government services work when people understand what government wants them to do. Success also means they’re able to use the service as quickly and easily as possible without making errors. These are the outcomes that the user researcher needs to prioritise.

To achieve this we use observational research techniques and iterative processes that predate both the internet and computers, having their foundations in ergonomics and later in human-computer interaction.

There are 3 important things our user researchers and their multi-disciplinary teams keep in mind as they work out whether services are usable and how to make them more usable:

  • We care more about what makes the service work better for more people than about what people (either users or stakeholders) tell us they prefer.
  • We take an evidence-based approach to evaluating whether our design is helping people use the service.
  • We know that the more opportunities we have to iterate (test and learn), the greater the chance we have of delivering a service that most people can understand and use.

Setting real-life tasks is more valuable than ‘tell us what you think’

We use task-based usability testing as one of our main research tools when we are evaluating the design of digital services and iterating to improve them in the Alpha, Beta and Live stages.

To do this we come up with examples of important tasks that people need to complete when using the service. For example, we might ask participants to register for a service and complete a registration form as if they were doing it for real.

When we are testing content, we might provide a real-life scenario that represents a question people should be able to answer quickly and easily. Using a real-life scenario makes it easier for us to be sure that users are getting the right answer. The worst-case scenario is when users think they have the right answer but are actually incorrect.

A scenario might be something like this:

Samantha is 41. She is a single mother of a 14-year-old boy.

The building company she worked for has recently gone out of business and she’s now working part-time at the local supermarket while looking for work.

How much can she earn each fortnight before her payment stops?

We can do task-based testing in a moderated environment. This is where the user researcher is in the room (or on a video conference) with the participant, asking them how they are interpreting the design and information as they move through the task. This helps us understand what people are thinking, why they make the decisions they do, and how we can improve the design to work better.

Task-based testing can also be done in an unmoderated environment. This is where the participant is left alone to do the tasks and we use software to measure how long each task takes to complete. We also measure the pathways the user takes, whether they complete the task accurately, and their perception of the effort involved. This helps us create a baseline for usability which we can then try to improve on.
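As a rough illustration, here is a minimal sketch in Python of how a team might turn unmoderated sessions into a usability baseline. The data shape and the metrics chosen (completion rate, median time on task, and a 1 to 7 perceived-effort score in the spirit of the Single Ease Question) are illustrative assumptions, not a description of any particular testing tool.

```python
# Minimal sketch: summarising unmoderated task-based testing sessions
# into a usability baseline. Data shape and metrics are assumptions.
from dataclasses import dataclass
from statistics import median, mean

@dataclass
class Session:
    completed: bool   # did the participant finish the task correctly?
    seconds: float    # time taken to complete (or abandon) the task
    effort: int       # self-reported effort, 1 (very easy) to 7 (very hard)

def usability_baseline(sessions: list[Session]) -> dict:
    """Summarise one round of unmoderated testing."""
    return {
        "completion_rate": sum(s.completed for s in sessions) / len(sessions),
        "median_seconds": median(s.seconds for s in sessions),
        "mean_effort": mean(s.effort for s in sessions),
    }

# Baseline round on a registration task; re-run the same tasks after
# each design iteration and compare the numbers to see what improved.
round_one = [
    Session(True, 312.0, 3), Session(False, 540.0, 6),
    Session(True, 401.5, 4), Session(True, 288.0, 2),
    Session(False, 610.0, 7),
]
print(usability_baseline(round_one))
```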

Both of these approaches give the team valuable insights into how well a service is performing. But critically we also learn what we can do to make the service work better for users.

Of course there are times to use surveys and randomised controlled trials – no research method is inherently bad. But if you’re in the business of designing government services and making them work better for users (which means better outcomes for government too), then you need to make sure you’re not automatically defaulting to research tools that don’t let you dig as deep as your users deserve.

‘I want a pony!’ or the critical difference between user research and market research

Originally published on the DTA Blog.

Research is not a new phenomenon in government. When you start a new project it is very possible that there is a wheelbarrow full of previous, relevant research for you to review. Most policy, for example, is evidence-based. Similarly, when it comes to service delivery there is usually no shortage of research, often in the form of market research.

Market research goes wide not deep

Market research, usually drawn from focus groups and surveys, is appealing to many large organisations including government. It lets an organisation gather opinions from a reasonably large, geographically and demographically diverse audience.

When we talk about Criteria 1 of the Digital Service Standard ‘Understand user needs, research to develop a deep knowledge of the users and their context for using the service’, we rarely recommend starting with large scale market research. Instead, we recommend that teams do user research (also known as design research).

What works is more important than what people prefer

When designing government services, we are not competing to win market share or even give people what they think they ‘want’ (ie ‘I want a pony’). Our main concern is to make sure that people know what they need to do and that they can do it as easily as possible. This is a win-win outcome. Increased digital uptake and reduced failure demand both mean less cost to deliver services, while better comprehension and fewer mistakes mean increased compliance and policy effectiveness. Better digital services are also more convenient and easy to use for the people who need to use them – a better user experience.

These priorities mean that usability (including accessibility) is our primary focus.

User research methods offer deeper insights

There is only one way to understand if a service is more or less usable and that is to observe someone attempting to use it – ideally to achieve a realistic outcome in a realistic context. For example, watching someone try to find out if they are eligible for a benefit or grant based on their own circumstances and using existing websites, rather than asking them how they’d like to do it in a focus group room.

There is plenty of evidence that usability testing requires only a small sample size to identify usability issues. This is why we recommend doing a series of small studies instead of investing in one large-scale survey or a series of focus groups.
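The most commonly cited model here comes from Nielsen and Landauer: if a usability problem affects a proportion p of users, the chance that at least one of n test participants encounters it is 1 - (1 - p)^n. A quick sketch in Python (the 31% average problem frequency is the figure from Nielsen's original studies, used here purely as an illustration):

```python
# Probability that at least one of n participants encounters a usability
# problem affecting a proportion p of users (Nielsen & Landauer model).
def detection_probability(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

# With the often-quoted average problem frequency of 31%, five
# participants surface roughly 85% of the problems in a service.
for n in (1, 3, 5, 8, 15):
    print(n, round(detection_probability(0.31, n), 2))
```

Beyond a handful of participants the curve flattens quickly, which is why several small rounds with fixes in between tend to find more problems than one large study.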

After each session we apply the insights we’ve gained, with the constant goal of improving the usability of the service before testing it again. Because we work in agile teams we try to do usability testing and subsequent improvements in every sprint.

By working in this iterative way we can make sure that each version of the service we deliver is more usable than the last.

Once we have achieved usability for the widest possible audience (including usability for people who have particular access needs) we can start to consider questions of preference.

People prefer government services that work

In market research it is tempting to put pictures of websites in front of people and ask them which they prefer – which one feels more trustworthy, more secure or more modern? In real life, it is not the picture of the website that people have to interact with – it is the actual service.

While the initial perception may have an impact for a second or two, the real impression comes from whether people can actually find, understand and undertake the task they need to do easily and successfully. People don’t choose to pick up the phone because they don’t like the look of a digital service. They call because it doesn’t let them get the job done.

Choose the right research tool for the research question at hand

It is important to recognise that we have a wide range of research methods available to us and that we should seek to use the right one for the job at hand. For example, small-scale usability studies won’t let you measure the prevalence of a particular trait across the population. But they are super effective for finding and fixing big usability issues.

Large-scale studies, including surveys, focus groups and randomised controlled trials (popular with behavioural insights experts), can help provide certainty at scale and are an important part of the mix of government research. But they are not appropriate as the primary tools for either discovery research or research to improve the usability of a digital service.

Both qualitative and quantitative research are important and necessary, but in service design we should always start with rich, qualitative insights.