Why we say no to surveys and focus groups

Originally published on the DTA Blog.

Surveys and focus groups aren’t used much in our user-centred design process. These are the reasons why.

You can’t get authentic, actionable insights in a few clicks

Think about the last time you filled in a survey.

As you were filling in that survey, did you feel as though you were really, genuinely able to express to that organisation how you felt about the thing they were asking you? About the actual experiences you’ve had?

If the answer is no, you’re in good company. I ask this question a lot and the answer is always the same.

This is important to remember whenever you’re looking at research reports full of statistically significant graphs. Always make sure you are critically evaluating the quality of the research data you are looking at – no matter how large the sample size or whether it has been peer reviewed.

Also, when you are looking at research outcomes you should think about whether they help you understand what to do next. Surveys and other analytics can be good at telling us what is happening, but less good at telling us why. Understanding the why is critical for service design.

Government services have to work for everyone

As researchers, we have a pretty diverse toolkit of research techniques and it is important that we choose the right tools for the job at hand.

Surveys and focus groups are research techniques widely used in market research, where we want to understand the size of a market and how to reach and attract customers. But most of the time, designing government services is not like marketing.

Randomised control trials are widely used in behavioural economics to understand how best to influence behaviour in a desired direction. Most of the time, designing government services is not like behavioural economics.

The job that multi-disciplinary teams have to do when designing government services is simple but difficult. We need to make sure that the service works for the widest possible audience. Everyone who wants to use that digital government service should be able to.

When we achieve this level of usability in a government service we are more likely to achieve:

  • desired policy outcomes
  • increased compliance
  • reduced error rates
  • a better user experience for end-users.

It’s not about preference

Government services work when people understand what government wants them to do. Success also means they’re able to use the service as quickly and easily as possible without making errors. These are the outcomes that the user researcher needs to prioritise.

To achieve this we use observational research techniques and iterative processes that predate both the internet and computers – having their foundations in ergonomics and later in human-computer interaction.

There are 3 important things our user researchers and their multi-disciplinary teams keep in mind as they do their work to understand whether services are usable and how the team might make them more usable:

  • We care more about what makes the service work better for more people than about what people (either users or stakeholders) tell us they prefer
  • We take an evidence-based approach to evaluating whether our design is working to help people use the service
  • We know that the more opportunities we have to iterate (test and learn), the greater the chance we have of delivering a service that most people can understand and use.

Setting real-life tasks is more valuable than ‘tell us what you think’

We use task-based usability testing as one of our main research tools when we are evaluating the design of digital services and iterating to improve them in the Alpha, Beta and Live stages.

To do this we come up with examples of important tasks that people need to do to complete that service. For example we might ask them to register for a service and complete a registration form as if they were doing it for real.

When we are testing content, we might provide a real-life scenario that represents a question that people should be able to quickly and easily answer. Using a real-life scenario makes it easier for us to be sure that users are getting the right answer. The worst case scenario is when users think they have the right answer but are actually incorrect.

A scenario might be something like this:

Samantha is 41. She is a single mother of a 14-year-old boy.

The building company she worked for has recently gone out of business and she’s now working part-time at the local supermarket while looking for work.

How much can she earn each fortnight before her payment stops?

We can do task-based testing in a moderated environment. This is where the user researcher is in the room (or on a video conference) with the participant, asking them how they are interpreting the design and information as they move through the task. This helps us understand what people are thinking and why they are making the decisions they do, and lets us understand how to improve the design to work better.

Task-based testing can also be done in an unmoderated environment. This is where the participant is left alone to do the tasks and we use software to measure how long it takes to complete. We also measure the pathways the user takes, whether they can accurately complete the task and their perception of the effort involved. This can help us to create a baseline for usability which we can try and improve upon.

Both of these approaches give the team valuable insights into how well a service is performing. But critically we also learn what we can do to make the service work better for users.
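As an illustration of the kind of baseline an unmoderated study can produce, here is a minimal sketch of turning session logs into the measures described above (completion, time on task, perceived effort). The field names and sample data are assumptions for illustration, not the output of any particular testing tool:

```python
# Minimal sketch: deriving a usability baseline from unmoderated session logs.
# Each session records whether the task was completed, time on task in
# seconds, and a 1-7 perceived-effort rating (lower = easier).
# All field names and values here are illustrative assumptions.

from statistics import mean, median

sessions = [
    {"completed": True,  "seconds": 142, "effort": 2},
    {"completed": True,  "seconds": 210, "effort": 4},
    {"completed": False, "seconds": 305, "effort": 6},
    {"completed": True,  "seconds": 128, "effort": 3},
]

# Share of participants who accurately completed the task.
completion_rate = mean(s["completed"] for s in sessions)

# Median time on task, counting only successful completions.
median_time = median(s["seconds"] for s in sessions if s["completed"])

# Average perceived effort across all participants.
avg_effort = mean(s["effort"] for s in sessions)

print(f"Completion rate: {completion_rate:.0%}")
print(f"Median time on task (completed): {median_time}s")
print(f"Average perceived effort: {avg_effort:.1f}/7")
```

Re-running the same tasks after each design iteration lets the team compare new results against this baseline and check whether a change genuinely improved usability.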

Of course there are times to use surveys and randomised control trials – no research method is in itself inherently bad. But if you’re in the business of designing government services and making them work better for users (which means better outcomes for government too) then you need to make sure you’re not automatically defaulting to research tools that don’t let you dig as deep as our users deserve.

‘I want a pony!’ or the critical difference between user research and market research

Originally published on the DTA Blog.

Research is not a new phenomenon in government. When you start a new project it is very possible that there is a wheelbarrow-full of previous, relevant research for you to review. Most policy, for example, is evidence based. Similarly when it comes to service delivery, there is often no shortage of research – often in the form of market research.

Market research goes wide not deep

Market research, usually drawn from focus groups and surveys, is appealing to many large organisations including government. It lets an organisation gather opinions from a reasonably large, geographically and demographically diverse audience.

When we talk about Criterion 1 of the Digital Service Standard – ‘Understand user needs, research to develop a deep knowledge of the users and their context for using the service’ – we rarely recommend starting with large scale market research. Instead, we recommend that teams do user research (also known as design research).

What works is more important than what people prefer

When designing government services, we are not competing to win market share or even give people what they think they ‘want’ (ie ‘I want a pony’). Our main concern is to make sure that people know what they need to do and that they can do it as easily as possible. This is a win-win outcome. Increased digital uptake and reduced failure demand both mean less cost to deliver services, while better comprehension and fewer mistakes mean increased compliance and policy effectiveness. Better digital services are also more convenient and easy to use for the people who need to use them – a better user experience.

These priorities mean that usability (including accessibility) is our primary focus.

User research methods offer deeper insights

There is only one way to understand if a service is more or less usable and that is to observe someone attempting to use it – ideally to achieve a realistic outcome in a realistic context. For example, watching someone try to find out if they are eligible for a benefit or grant based on their own circumstances and using existing websites, rather than asking them how they’d like to do it in a focus group room.

There is good evidence that usability testing requires only a small sample size to identify most usability issues. This is why we recommend doing a series of small studies instead of investing in one large scale survey or a series of focus groups.
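The classic model behind this recommendation comes from Nielsen and Landauer's research on problem discovery: if each participant uncovers a given proportion of the problems, the cumulative proportion found grows as 1 − (1 − p)^n. The sketch below uses their widely cited average per-participant rate of about 31% as an illustrative assumption – real rates vary by product and task:

```python
# Expected proportion of usability problems found after n participants,
# using the standard discovery model P(found) = 1 - (1 - p)^n.
# p = 0.31 is the average per-participant detection rate reported by
# Nielsen and Landauer; treat it as an illustrative assumption, not a law.

def problems_found(n: int, p: float = 0.31) -> float:
    """Cumulative proportion of usability issues uncovered by n participants."""
    return 1 - (1 - p) ** n

for n in (1, 3, 5, 10):
    print(f"{n:2d} participants -> {problems_found(n):.0%} of problems found")
```

Under these assumptions, around five participants surface most of the problems in a round of testing – which is why several small rounds, each followed by fixes, find more than one big study.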

After each session we are able to apply the insights we’ve gained with the constant goal of attempting to improve the usability of the service before testing it again. Because we work in agile teams we try to do usability testing and subsequent improvements in every sprint.

By working in this iterative way we can guarantee that the service we deliver will be more usable.

Once we have achieved usability for the widest possible audience (including usability for people who have particular access needs) we can start to consider questions of preference.

People prefer government services that work

In market research it is tempting to put pictures of websites in front of people and ask them which they prefer – which one feels more trustworthy or more secure or more modern? In real life, it is not the picture of the website that people have to interact with – it is the actual service.

While the initial perception may have an impact for a second or two, the real impression comes from whether people can actually find, understand and undertake the task they need to do easily and successfully. People don’t choose to pick up the phone because they don’t like the look of a digital service. They call because it doesn’t let them get the job done.

Choose the right research tool for the research question at hand

It is important to recognise that we have a wide range of research methods available to us and that we should seek to use the right one for the job at hand. For example, small-scale usability studies won’t let you measure the prevalence of a particular trait across the population. But they are super effective for finding and fixing big usability issues.

Large scale studies – including surveys, focus groups and randomised control trials (popular with behavioural insights experts) – can help provide certainty at scale and are an important part of the mix of government research. But they are not appropriate as the primary tools for either discovery research or research to improve the usability of a digital service.

Both qualitative and quantitative research is important and necessary, but in service design, we should always start with rich, qualitative insights.

Triple testing your survey

Sending a survey is a convenient way to gather data quickly. But, it’s very easy to inadvertently gather misleading and inaccurate data.

When was the last time you filled in a survey that let you actually express what you really thought about an organisation, experience or topic? Just because you have a reasonably large sample size and you can make graphs out of it doesn’t mean it is good data with which you could be making important decisions. Data quality matters.

A good way to make sure you’re getting reliable data (and making good use of your survey respondents’ time) is to do a triple test before you hit send.

Here’s what you do.

  1. Create your survey (this is actually not as simple as it may seem)
  2. Find someone who could be a potential respondent for your survey (matches the target audience – not people in your team or the people who sit closest to you)
  3. Ask them to complete the survey, watch them while they do it, and ask them questions to see whether they understand what each question means and whether the way you are collecting the answers allows them to give the answer they want to give
  4. Adjust the form based on what you have observed (there are always adjustments you will want to make)
  5. Repeat steps 2, 3 and 4 until you’ve seen at least three people complete the survey OR you’re certain there is no more you can do to adjust the survey so that people understand the questions and can provide meaningful (to them) responses.

I have never known someone who has tested their survey this way and who didn’t make changes that would result in a better experience for respondents and better quality data.

How might we improve the voting experience?

Originally published on the GDS User Research Blog

Once every 5 years, when a UK general election comes round, we’re given the opportunity to research the experience of voting. Although voting is not something we’re really working on, the recent general election offered an opportunity the GDS user research team found impossible to resist.

We conducted a small study

With some help from colleagues in the Home Office, we conducted a small study in the days before, during and after the general election. Our goal was to see if there were opportunities to make the voting experience better.

Our data comes from:

  • 30 phone interviews with young people, most of them voting for the first time, conducted the day after the election
  • a diary study using dScout with 44 participants from a varied demographic
  • social media data our social media team found using Brandwatch, which we used to triangulate the 2 studies above.

Recruitment constraints

We were recruiting during Purdah (the pre-election period) and without a budget, both of which were significant constraints. As a result, the participants in our study skew towards being more engaged in the democratic process than we’d expect to be typical. Still, we think many of the findings are widely applicable, although you’d probably unearth a lot more opportunities and insights if you did the same study with less democratically engaged participants.

Here’s what we found out.

Voting is emotional

Young people told us that casting their first vote felt like a rite of passage to adulthood. They felt compelled to vote because of the history of sacrifices made to be able to vote, and how hard-won democracy can be.

It is exciting that I could vote – afterwards I felt like an adult and I can get a mortgage next. – Ben

Many people talked about the act of voting as feeling historic, and an important moment for them and their participation in society.

Some people interpreted the paper and stubby pencils as signifying the ‘tradition’ of voting, but many others felt that the experience of voting was quite antiquated.

When leaving the polling station it is hard to believe that we are in 2015… I think next time I will wear a Victorian costume to fit the experience. – EV

The experience of voting can be a bit of a let down

I expected it to be exciting but there wasn’t much of an atmosphere – maybe because I went early. – Hatti

The general elections are a moment in time where people are more engaged in their community and in the democratic process than they might normally be. This may offer an opportunity.

People are looking for voting to be more of an experience, but the experience often turns out to be a bit of a non-event.

Many people are surprised at how little time it takes to vote. On the one hand, they feel this should be communicated more widely so that more people know that voting is not time consuming and difficult. On the other hand, they sometimes feel ‘rushed through’ and the importance of the act of voting is lost.

Several people felt confused and intimidated by people wearing rosettes and asking for their polling number. People taking exit polls don’t identify themselves, or what they’re doing, leading some people to mistake them as polling officials. For some people, this made the voting experience more stressful.

[I’m] At the school to vote. People outside asking for my polling card number. Don’t know why. They are wearing political party rosettes. Does this mean my details will be used by the party? I wouldn’t want that. How are you feeling now? Anxious. – DS/BK

Making sure you’re registered to vote is not always simple

For people who do want to vote, making sure you’re registered to vote is not always simple. Most people we spoke to were registered to vote, but they told us about friends and family who were not aware that they needed to register until it was too late.

The need to provide your National Insurance number when registering to vote is more difficult for young people – they don’t receive a physical card with their number on it, so it’s more difficult to provide that information.

Some people indicated that they weren’t sure whether they were successfully registered to vote, or where they were registered. This meant telephone calls to local authorities, which often went unanswered as the election drew closer, and people re-registering just to be sure.

I registered online but I didn’t feel confident that it had been done as I didn’t get a confirmation email, so I called the local council a few days before to confirm. – Elizabeth

Moving house or having two addresses (eg students) was particularly problematic. This meant that some people delayed registering to vote until the last moment because they weren’t sure where they were going to be at election time. Or, sometimes they’d discover that they were enrolled a long distance from their current address.

I registered on the last day. I wasn’t sure where I was going to be [on election day]. I moved placement during my degree and forgot where I had registered. – Scott

Polling cards are important but often fail

People use polling cards to confirm they are registered to vote, to know when and where to vote, and to find the polling station. Polling cards were a point of failure for many people, especially as they seemed to arrive at different times or not at all.

The polling card never arrived, so I’m not sure if I’m registered to vote. – Shad

They were often misplaced because they’d arrived very early, or caused stress because they arrived very late or not at all. Also, the polling station map on the cards seemed to be unreliable.

The map on the polling cards is to the wrong place – people in the queue are furious, they’ve been wandering about lost. One man asks them to put a poster up to direct people. They say they can’t – they only take notes and tell the council for next time. Bit awkward. – DS/RA

Working out who to vote for can be difficult

Most people we spoke to took the decision as to who they would vote for quite seriously. Most of them struggled to find information that was useful to help them make a decision. Many people told us that they used the online tools that anonymised the policies in the manifestos and then told them who to vote for. They found these tools useful.

People weren’t happy to discover new candidates in the polling booth at the moment of voting.

When I got to the sanctum of the booth I was amused to see that there were ten, TEN, candidates. So where were their leaflets? How am I supposed to consider voting for them if the first time I hear about them is in the bleeding voting booth? I mean, FFS! – DS/SW

Young people told us that they talked with each other openly about who they were going to vote for and who they did vote for – they’re aware of this being more of a taboo for their parents’ generation, but feel that talking openly was an important way to help them decide who to vote for.

People thought government could do a better job of helping them know when, where and how to vote

Young people in particular thought there was a lot that government could be doing to help more people feel more confident and knowledgeable about the process of voting. A lot of people felt they learned about voting in the polling station when it should have been taught at school.

They don’t teach you that at school. I know how to draw plant cells but not how to vote. – Yiannis

It would be nice to explain the details about voting – to say, this is your first election, when to register, you don’t need to bring ID etc. – Michael

People also thought that the government should be more proactive about messaging people to remind them that the election was coming up and when and where they should plan to vote. Because young people consume less TV and newspaper content, it’s easier for them not to notice that an election is coming up at all.

What would have been useful is an email a week before the election saying ‘you are registered, remember to vote, this is your polling station’. – Sophie

Voting for two elections at once can be confusing

A number of people were registered to vote in locations where both the general and local elections were held at the same time. This often caused confusion, for two main reasons.

First: the focus on the general election often meant that people were not aware that the local election was being held and had not considered who to vote for.

Went with my husband to vote. Surprised there were council elections as we haven’t heard anything about them. No idea who the candidates are. – DS/MX

Second: there are different methods of voting for each type of election. This caused some confusion and people felt they had to be careful about voting to ensure their vote was valid – some weren’t sure that they had in fact voted successfully.

All done. Always find the mixing of local and national elections tricky. Two crosses on one, definitely one cross on the other. – DS/JS

The ‘localness’ of voting can be frustrating

Voting is very local – you can only vote in one location where you live and you vote for local members who represent that area. Many people found this difficult to understand and frustrating.

The requirement to attend a single location to vote is a point of failure for people who intend to vote but aren’t able to be in the right place on the day. They often don’t find this out far enough in advance to arrange an alternative way of voting, or don’t know that other methods are available. To some people this seemed particularly unnecessary and archaic, and it is often an unexpected discovery for people who are voting in the UK for the first time.

I thought I could vote anywhere, my friend explained to me I had to vote where I was registered. I missed the deadline to change address. It is irritating that can’t go to any polling station, that it is linked geographically. – Shad

People were often very frustrated that the local representation limited their ability to vote for the party they wished to represent. Most people seemed to think about voting for a party rather than for a particular Member of Parliament – a mental model that’s perpetuated in the way the media talks about the election – then they arrive at the polling booth to discover they can’t vote for their party of choice. This is a frustrating and disenfranchising experience.

I live in Buckinghamshire so I was only offered Conservative, Green and UKIP. It was annoying not to have more choice. Instead of voting on my ballot I just wrote ‘none of the above’ because I was very annoyed at the lack of choice I had. – Joanna

Finally, people who lived in safe seats felt that their votes were much less valuable than those who lived in closely-contested seats.

Part of the reason [I didn’t vote] was that the Tory seat where I live isn’t going to change, so my vote felt a bit pointless anyway. – Liam

Plenty of opportunities for the future

So, it turns out there are lots of opportunities to make the voting experience better, which will in turn result in more people voting in a more informed way.

You can download the deck for more details. We hope you find it useful.