Five dysfunctions of ‘democratised’ research. Part 5 – Stunted capability

This is the fifth and final post in a series examining some of the most common and most problematic issues we need to consider when looking to scale research in organisations. You can start with the first post in this series here.

Here are five common dysfunctions that we are contending with.

  1. Teams are incentivised to move quickly and ship, and to care less about reliable and valid research
  2. Researching within our silos leads to false positives
  3. Research as a weapon (validate or die)
  4. Quantitative fallacies
  5. Stunted capability

In this post, we’re looking at what happens when the research practice in an organisation fails to mature.

A great first step

Testing one user is 100 percent better than testing none – Steve Krug, Don’t Make Me Think

Many organisations get started doing research with customers and users off the back of encouragement from people like Steve Krug and his classic book ‘Don’t Make Me Think’. In this and other books, Steve makes simple usability testing accessible and achievable for almost anyone.

Steve and others like him are evangelists reaching out to companies that are afraid to engage with their customers to understand opportunities to improve. This is important work. Their message is usually that talking to customers is not hard or scary, and that we’ll be better off doing a bit of it, even imperfectly, than not doing it at all.

The first step can be scary

And they are right. Having anyone in the company talking to just one user (and hopefully some more) is a fabulous first step. But it is intended to be just that – a first step. It is an encouragement to realise the benefits of involving people outside our offices in the process of designing and developing products and services, a help in overcoming the fear of engaging with customers and users, and an opportunity to experience how beneficial this can be.

For those of us who work with research participants on a regular basis, it may be hard to recall exactly how terrifying those first few research sessions felt. Even trained and experienced researchers continue to experience some background fear (or exhilaration?) about all the things that could go wrong in a research study – and there are plenty!

The thing about first steps, though, is that they are usually intended to be followed by second steps. Once we break through the fear (or in some cases, just lack of awareness), the idea is that we continue to increase the maturity of our practice.

And this is where many organisations seem to hit a roadblock. More and more people in the organisation might be out there eagerly involving customers in the process of shaping their products, but they often don’t invest in improving their own research skills or in hiring people who have training and experience in doing research.

Talking to users is not research

One important realisation we need to have on the path to maturity is recognising that ‘talking to customers’ is actually not the same thing as doing research. Talking to customers or watching customers use our products and services has many benefits – in particular, it can increase our empathy for our customers and users, it can expose us to scenarios of use that are dramatically different to our own and to what we would expect, and it can provide clues as to where the biggest problems may lie. All of these are good outcomes.

If we want to use research as evidence for decision making – either for product strategy or design decisions – then we need to be able to do more to ensure that the insights we are gleaning are sufficiently reliable and valid.

Research doesn’t need to be ‘perfect’, just valid and reliable.

‘I don’t need the research to be perfect, I just need enough to help me make a decision’.

Often this is said in response to the suggestion that the research we should be doing will take longer or be more difficult and expensive than our speaker would like. In this situation, there is often a pre-existing ‘hunch’, and they are looking to users for validation. Or perhaps they are stuck between two options and seek a tie-breaker.

Any specialist researcher has almost certainly had their recommended approach discredited as ‘too academic’, and sometimes that is true. Sometimes the research methodology is overdone for the question the business is seeking to answer. But what often follows is a race to the bottom, where considered sample design and appropriate methodology are quickly discarded in favour of whatever is fastest and easiest.

Without the right experience and training, all too often interviewers ‘cut to the chase’ and get more or less directly to the topic at hand. Somewhere in the world right now, a product manager under pressure to make a decision is asking questions like these in a customer interview:

‘Here’s what we’re thinking of making – what do you think about it?’

or, perhaps worse…

‘If we made this, would you pay for it?’

It can be easy and tempting – so much faster and often quantitative – to mistake the research question for the interview question.

Even with training, it seems that the urge to be able to say that 10 out of 12 people said they would pay for it is almost irresistible. ‘Beating around the bush’ to get the question answered seems like a waste of everyone’s time in an environment where a bias to action and the desire to ship at velocity are most valued.
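
To make that concrete, here is a minimal sketch in Python (the 10-out-of-12 figure is the hypothetical claim above, not real data) of how wide the uncertainty around such a small sample really is:

    from math import sqrt

    def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
        # Wilson score interval for a proportion; z = 1.96 gives ~95% confidence.
        p = successes / n
        centre = p + z**2 / (2 * n)
        spread = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
        denom = 1 + z**2 / n
        return (centre - spread) / denom, (centre + spread) / denom

    # ‘10 out of 12 people said they would pay for it’
    low, high = wilson_interval(10, 12)
    print(f"{low:.0%} to {high:.0%}")  # prints: 55% to 95%

In other words, the honest version of ‘10 out of 12 would pay’ is ‘somewhere between roughly half and nearly all of a tiny, possibly unrepresentative sample said they might’ – rarely the definitive answer the team believes it has.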

(It shouldn’t really be a surprise that lack of research capability maturity exposes us to the previous four dysfunctions).

Matching methodology to risk

Whilst we should have plenty of sympathy for this desire for lightweight research and simplicity, it is important to ensure that the methods employed are matched to the risk involved in the decision, rather than to the most compressed timeframe.

As our organisations grow, the decisions we take using evidence from our customers can become more and more substantial – the gains of getting it right are greater and the risks of getting it wrong get uglier.

In the same way, our research maturity needs to keep growing so that it continues to match the size of the risk of getting it wrong.

This is not to say that mature organisations only ever do serious, time consuming research. Rather, that we invest where the risk is highest.

Investment might look like hiring trained researchers who can design and recruit the right sample and conduct the research in a way that reduces bias. Or investment might look like iterative research with an ever increasing number of increasingly diverse participants, sprint after sprint – allowing the team to continue to learn. This can work beautifully when the team is able to be responsive to that learning over time.

Investing too much

Conversely, there are situations where the investment in research is far too high for the decision being made. This often happens where the organisation’s design process has broken down, or where designers have entirely lost confidence in their ability to make relatively conventional design decisions. In these situations, we design complex studies to ‘validate’ one micro design treatment over another. Here, the mismatch between risk and research investment can result in large quantities of what I would consider to be wasteful and often unreliable research.

Beware Dunning–Kruger

[Image: Dunning–Kruger graph of confidence vs expertise]

User research is particularly susceptible to the Dunning–Kruger effect, wherein a relatively small amount of knowledge can result in an excess of confidence. Many people claim a ‘background in research’ when they may mean that they watched someone else run a bunch of usability studies in their last job, or that they did a research-based degree at university.

Many designers and product managers are entirely happy with the outcomes they get from research and how it enables their practice – and often loudly object to the suggestion that anyone could get a better result from the research than they do.

Yet, at the same time, the harsh reality is that the work being done often produces misleading outcomes that can put their product and their organisation at risk.

It also undermines the reputation of research in the organisation when a ‘researched’ product goes into the world and doesn’t succeed as expected: ‘We did research before and it didn’t work’.

In the same way that design and product management capabilities often require an engineering-led organisation to move through the stages from unconscious incompetence to conscious competence, the very same is true for the research capability.

Achieving research maturity

And so, at the end of our five dysfunctions, what can be done to provoke an organisation not only to involve users in the process of creating products and services, but also to start and continue to grow its ability to do so – revealing important insights that are both reliable and valid?

Here are some things that have worked for me.

Improve business fluency: talk less about empathy and more about the risk to the business of getting it wrong. Talk less about customer obsession and more about the reliability and validity of the different types of evidence we can use to make decisions. And run an open research practice – get out of the black box, remove any mystery about our work, show our workings and involve others in the process.

Make use of existing momentum – bring new shape and substance to whatever your organisation already uses to direct its attention to its customers, whether it’s an NPS survey, a customer convention, a feedback form, or a guerrilla research practice – and start by shaping those existing connections into something more insightful, more reliable and more valid.

Be brave, but be patient and we’ll get there.

Five dysfunctions of ‘democratised’ research. Part 3 – Research as a weapon

This is the third in a series of posts examining some of the most common and most problematic issues we need to consider when looking to scale research in organisations. You can start with the first post in this series here.

Here are five common dysfunctions that we are contending with.

  1. Teams are incentivised to move quickly and ship, and to care less about reliable and valid research
  2. Researching within our silos leads to false positives
  3. Research as a weapon (validate or die)
  4. Quantitative fallacies
  5. Stunted capability

In this post, we’re looking at what happens when research is ‘weaponised’ in teams.

Dysfunction #3 – Research as a weapon (validate or die)

Over-reliance on research, without care for the quality of that research, can also be a symptom of another problem in our organisations – a lack of trust between disciplines in a cross-functional team.

In particular, the relationship between design and product management can have a substantial impact on the way that research is used in product teams. If the relationship is strong, aligned and productive, research is often used to support real learning in the team. But where the relationship is less healthy, it is not uncommon to see research emerge as a form of weaponry.

[Comic: a relationship has declined because one partner graphs everything – ©XKCD]

Winning wars with research

How does research become weaponry? When it is being used primarily for the purpose of winning the argument in the team.

Using research as evidence for decision making is good practice, but as we have observed in earlier dysfunctions, the framing of the research is crucial to ensuring that the evidence is reliable and valid. Research that is being done to ‘prove’ or ‘validate’ can often have the same risk of false positives that comes from the silo dysfunction.

This is because the research will often be too tightly focussed on the solution in question, and there is little or no interest from the team in the broader context. This lack of realistic context can result in teams believing that solutions are more successful than they will ultimately turn out to be in the realistic context of use.

Data as a crutch for design communications

Another reason research gets used as weaponry is to compensate for a lack of confidence or ability in discussing the design decisions that have been made. Jen Vandagriff, who I’m very fortunate to work with at Atlassian, refers to this as having a ‘Leaky Design Gut’.

Here we see research ‘data’ being used instead of (not as well as) the designer being able to explain why they have made the design decisions they have made. Much as I love research, it is foolish to believe that every design decision needs to be evidenced with primary research conducted specifically for that purpose. Much is already known, for example, about design decisions that can enhance or detract from the usability of a system.

In a team where the designer is able to articulate the rationale and objectives for their design decisions, and there is trust and respect amongst team members, the need to ‘test and prove’ every decision is reduced.

Validation can stunt learning

Feeling the need to ‘prove’ every design decision quickly leads to a validation mindset – thinking, ‘I must demonstrate that what I am proposing is the right thing, the best thing. I must win arguments in my team with data’.

Before going straight to ‘research as validation’, it is worth considering whether supporting designers to grow their ability to be more deliberate in how they make and communicate their design decisions could be a more efficient way to resolve this challenge.

Sometimes it is entirely the right thing to run research to help understand whether a proposed approach is successful or not. The challenge is to ensure that we avoid our other dysfunctions as we do this research. And to make sure that this doesn’t become the primary role of research in the team – to validate and settle arguments. Rather, it should be part of a ‘balanced diet’ of research in the team.

If we focus entirely on validation and ‘proof’, we risk moving away from a learning, discovery mindset. We prefer the leanest and most apparently definitive practices. A/B testing prototypes and the creation of scorecards are common outputs of this mindset. We’re incentivised to ignore any flaws in the validity of the method if we’re able to generate data that proves our point.
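
As one illustration of the flaws we are tempted to ignore, here is a back-of-the-envelope sketch in Python (the 10% baseline and 15% target conversion rates are assumed numbers, purely for illustration) of how many participants a genuinely definitive A/B result requires:

    from math import ceil

    def n_per_arm(p1: float, p2: float, z_alpha: float = 1.96, z_beta: float = 0.84) -> int:
        # Approximate sample size per variant for a two-proportion test,
        # using the normal approximation (95% confidence, 80% power).
        variance = p1 * (1 - p1) + p2 * (1 - p2)
        return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

    # Detecting a lift in conversion from 10% to 15%
    print(n_per_arm(0.10, 0.15))  # prints: 683 – hundreds of users per variant

A prototype ‘test’ with a handful of participants per variant simply cannot support the kind of ‘A beat B’ claim that a scorecard implies.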

Alignment over evidence

Often this behaviour comes from a good place – a place where teams are frustrated with constant wheel-spinning based on everyone having an opinion. Where the team is trying to move away from opinion-based decision making, where either the loudest voice always wins or the team feels frustrated by its inability to make decisions and move forward. Using research as a method to address these frustrations does make sense and should be encouraged.

Validation research can provide short-term results to help move teams forward, but it can reinforce a combative relationship between designers and product managers. Often this relationship comes from a lack of alignment around the real problems that the team is setting out to solve. Investing in more ‘discovery’ research, done collaboratively as a ‘team sport’, can be incredibly powerful in helping create a shared purpose across the team, promoting a more constructive and supportive teamwork environment.

Support from an experienced researcher with sufficient seniority can help the team avoid the common pitfall of seeking the fastest and most definitive ‘result’, and instead achieve a shared understanding of both the problem and the preferred solution. Here the practice of research, done collaboratively as a team, can help not only to inform the situation and achieve more confident decision making, but also to heal some tensions in the team by bringing it together around a shared purpose – solving real problems for their customers or users.

You can read about the fourth dysfunction here.

If I could tell you 3 things – notes from a brief career in the public service

Recently a colleague asked me what 3 things I would say, if I ever had an audience of Secretaries (very senior public servants), to help them make public services better for end users. This is (roughly) what I said:

  1. Your organisation will benefit more from you being user-centred than the users ever will.

    It is a common misconception that we do user-centred design because we want to deliver a delightful or engaging experience for our users. The truth is, in government, this is very rarely the case. Paying tax is not delightful, complying with regulation is not delightful, and discovering you have to repay a benefits debt is far from delightful. Let’s be realistic here – the job of user-centred design is to make things as painless and effortless as possible. It might not be delightful to discover that you’re not eligible for a benefit or a visa, but it is much better to find out as quickly and easily as possible, before investing a lot of effort in an application or making plans for the future.

    When we focus on making things usable rather than delightful or engaging we are focussed on making sure that:
    a) people know what you want them to do.
    b) they can do that thing as easily as possible and without accidentally making mistakes.

    I think it is fair to say that many government services still don’t meet that low bar. This is bad for users, but it is also bad for government. Poor usability impacts government’s ability to achieve policy outcomes, and it can lead to a decrease in compliance (because even the people who WANT to be compliant often can’t work out how to do so – or have to pay specialists to explain it to them). This failure also leads to more expensive service delivery, because people don’t stay in the cheaper digital channels. Instead it takes multiple encounters across multiple channels to complete a task, leading to a higher cost to serve.

    Even if you don’t care about the quality of the experience for users (and, honestly, every secretary and most public servants I’ve met have cared a lot), you should care about it for the effectiveness of your department and for the sake of your career. Services that people can use help agencies achieve organisational goals.

  2. Orient everything you can in your organisation around real user journeys 

    Some of our biggest organisational blind spots are caused by focussing on our own organisational structures at the expense of supporting and understanding real user journeys and the part our work plays in supporting those journeys. Like any large organisation, multiple agencies are often involved in the service experience that users have at key points in their life – when they lose their job, have a baby, start an education, start a business, or when a loved one dies. Even in services that exist within a single agency, we create false barriers between ‘authenticated’ and ‘unauthenticated’ experiences – often the only person who has a view of the end-to-end experience is the end user, while every single touchpoint is managed by a different senior manager, sometimes in entirely separate parts of the organisation.

    There are small things that you can do immediately and cheaply to try to address this. Stop naming your services after the government need (e.g. compliance) and start naming them after the thing that people need to do when they encounter the service (e.g. tell government when your rental situation changes). Words and what we call things can be powerful catalysts for cultural transformation.

    Make the real user experience visible to people across the organisation by making journey maps from the trigger to the outcome and make sure all the people who own parts of that journey know each other and have seen each other’s work. Put someone in charge of being the expert on that journey and informing all the parts.

    Challenge concepts like ‘authenticated’ and ‘unauthenticated’ which are meaningless to end users and often reinforce silos that amplify user experience problems in services.

    Make sure that the analytics you are capturing help you understand, across all the channels (digital, phone and shopfront), what is happening and what is and isn’t working. Create success criteria that are genuinely based on improving outcomes for users.

  3. Seek the truth, even if it’s ugly

    Large organisations like the public service can be pretty hierarchical. Something that can happen in hierarchical organisations is that bad news doesn’t travel up the line – people don’t speak truth to power. It can be a career-limiting move (CLM – an acronym I learned for the first time in the Australian Public Service). The reality is that if you’re a senior person in a large organisation, people are probably going out of their way to let you believe that everything is fine. Or as fine as it can be.

    As a leader you need to be aware of this and to do everything you can to break through this. The best thing to do is to see it for yourself.

    Research has shown that organisations where everyone, including management, sees real users using their services for just 2 hours every 6 weeks are more likely to deliver good services. In truly customer-centric organisations, the executive team routinely gets ‘behind the counter’ and sees for themselves both what it is like to be a customer and (equally importantly) what it is like to deliver service. Watch an episode or two of Undercover Boss, where CEOs go, in disguise, to work at the grass roots of service delivery in their organisation and discover that the reality is very different to what the reports say.

    Many customer-centric organisations require that everyone in the organisation spend time at the coalface of service delivery as a part of induction and leadership should be required to do this regularly. ServiceNSW CEO Rachna Gandhi is known for routinely working behind the counters of the ServiceNSW shopfronts – this not only demonstrates true executive commitment to high quality user experience but also means she has a direct view of the reality of what it is like to experience ServiceNSW services and to learn from the day-to-day experience of the people who work in service delivery.

    If this is a priority, you need to put time in your diary to make it happen. If you can’t escape endless meetings, then work with the user researchers in your delivery teams and ask them to show you the video footage of people talking about their experiences. What are they learning out in the field – the good and the bad?

    And don’t let your organisation become culturally afraid or disrespectful of your users. Don’t accept that if your users were less stupid or lazy or naughty everything would be better and there is nothing we can do.

    Don’t believe for a moment that your users are about to go running to their MP or the media the minute something goes wrong. Nothing could be further from the truth. The average person would have to be on the edge of desperation before they contemplated approaching a politician or journalist. Rather, most people want to spend as little time as possible thinking about government services. In my experience, they are only too happy to share their experiences and insights if they think their input will be used to make government services better for everyone. If you work in government, it’s your job to make sure they get heard.

Epilogue:

Yesterday was my last day at the Digital Transformation Agency (DTA) and in the Australian Public Service (APS), for now. This followed a few years working at the Government Digital Service (GDS) in the UK Civil Service (no acronym that I’m aware of).

It’s been a privilege to be a part of the movement towards better quality public services and in doing this I’ve been able to work with some of the best and most passionate technologists, designers, policy makers, administrators and more. Working in government is one of the most challenging yet rewarding working environments I’ve encountered.

Thanks for the opportunity. Stay in touch.

Related reading:

Research about management spending time seeing real users

Words and cultural change

Undercover Boss

Naming your service as a ‘doing thing’

How improving internal systems can improve customer experience

Why we should stop banging on about users

Triple testing your survey

Sending a survey is a convenient way to gather data quickly. But it’s also very easy to inadvertently gather misleading and inaccurate data.

When was the last time you filled in a survey that let you actually express what you really thought about an organisation, experience or topic? Just because you have a reasonably large sample size and you can make graphs out of it doesn’t mean it is good data with which to make important decisions. Data quality matters.

A good way to make sure you’re getting reliable data (and making good use of your survey respondents’ time) is to do a triple test before you hit send.

Here’s what you do.

  1. Create your survey (this is actually not as simple as it may seem)
  2. Find someone who could be a potential respondent for your survey (matches the target audience – not people in your team or the people who sit closest to you)
  3. Ask them to complete the survey, watch them while they do it, and ask them questions to see whether they understand what each question means and whether the way you are collecting the answers allows them to give the answer they want to give
  4. Adjust the survey based on what you have observed (there are always adjustments you will want to make)
  5. Repeat steps 2, 3 and 4 until you’ve seen at least three people complete the survey OR you’re certain there is no more you can do to adjust the survey so that people understand the questions and can provide meaningful (to them) responses.

I have never known anyone who tested their survey this way and didn’t make changes that resulted in a better experience for respondents and better quality data.