A template for more deliberate 1:1 meetings (v2)

I mentioned on Twitter recently my intention to run better 1:1 meetings with my direct reports and stakeholders in 2021, and promised to share how I was approaching it.

This is not the first time I’ve made this resolution – it all too quickly tails off into ad-hoc-ery – but I’m hoping that being intentional about why I am doing this will help it stick.

To that end there are a few things that I’m changing in how I do my 1:1s this year, including:

  1. Diarising time each Monday to prepare for my 1:1s. Here I need to admit that usually I’ve been collecting topics on post-it notes throughout the week and then winging it through the 1:1 meeting, triaging on the fly the most important topics that I or my team member bring in that day. This means that we tend to deal with urgent things, but not always important things.
  2. Creating a template for each of us to complete in preparation for the meeting, to help ensure that we regularly touch on important topics that might otherwise be crowded out by urgent ones. More on this below.
  3. Taking proper notes and actions in the meeting, and making sure the actions get acted on. Basic stuff really, but the former wasn’t really happening and the latter could be a little hit and miss. The template is also a way to build accountability for the actions.

With that in mind, here is the current iteration of the template I’m planning to use. I’ve used it only twice so far and have already iterated on it a little (hence the v2 in the heading), and I expect I will continue to iterate over time. So far it’s been pretty positively received, but it is still far, far from perfect. Feedback welcome.

The idea is that each week in preparation for the 1:1 my direct reports, who are mostly Research Managers, fill out this template and take the time to reflect on each of these items. I also have a section I need to fill in. We both contribute topics in addition to the standard items.

Template for my 1:1 Meetings

Date of the 1:1 Meeting

Actions from our last 1:1 – captured from the previous 1:1
– action 1
– action 2 etc.

Emoji of the week – meaning discussed in the meeting

My managers fill out the following sections BEFORE the 1:1 meeting:

Win – what wins did you have last week?
Frustration – what was your biggest frustration last week?
Focus – what is your focus this week? Pick one thing
Growth plan focus – what aspect of your growth plan are you currently working on?
Project Status – what are your top three (or fewer) projects right now and how are they tracking?
1.
2.
3.
Team Health – anything remarkable to report re: people doing really well or poorly?
Stakeholder Health – anything remarkable to report re: relationships going well or poorly?

Reflections and/or feedback from Leisa – this section is for ME to fill out each week. It needs to be personal feedback, not just feedback or opinions on work, ideas, and questions.

Items for this week:
– topic 1
– topic 2
– topic 3

Take tonnes of notes.

(My actual template is in Confluence and looks lots better, as it has lots of colourful emojis all over it. Annoyingly, WordPress won’t play nicely with emojis here, so I’ve prioritised making this accessible over giving you a screenshot of the pretty one instead.)

During the meeting I will then capture a tonne of notes (my background as a qual researcher has prepared me well for this!). These notes are shared on a Confluence page so that both of us can add more, annotate, etc.

As soon as the 1:1 meeting is completed, I update the template with next week’s items and capture the agreed actions.

I am a little concerned that the extensive ‘structured’ section might not leave enough time to focus on the more ‘urgent’ topics – especially given these currently tend to take up the entire time allowed. But that is also somewhat deliberate, so perhaps it’s not a bad thing.

The 4Ls – an alternative approach I might use on a monthly basis

When I was talking about this approach with my colleague Dom Price, he shared another 1:1 format that he likes to use. I’m not sure it would work quite so well for me on a weekly basis, but I might experiment with using it every 4-6 weeks, as I think it provides a really different perspective on how people are doing in their work and how you might be able to help them be more successful.

Dom’s approach asks people to reflect on these four categories:

  • Loved – what I loved doing this month
  • Longed for – what I longed to be doing but was unable to find the time/etc
  • Loathed – what I really did not enjoy doing this month
  • Learned – what I learned this month

The idea is to try to support the person you are managing to increase the loved and learned, enable the longed for and remove the loathed.

I’m sure there are many more great frameworks from people who have given this a good deal more thought than I have. I’d love to hear what’s worked well for you and what you’d recommend.

Ambient Reassurance

A long time ago, in 2007, I wrote about ambient intimacy, a name for a new kind of experience that came about as a result of the emergence of social media, in particular Twitter.

Over the last seven months I have been working from home, remote from my team. Only in the past couple of weeks have I found a way to describe a particular kind of lack that I’ve been feeling.

There are many things we lack (and gain) in working remotely, but this is one I’ve not considered before, and I don’t hear other people talking about it either.

I call it ambient reassurance. (Almost certainly the organisational psychologists have another term for it but I can’t find it!)

Ambient reassurance is the experience of small, unplanned moments of interaction with colleagues that provide reassurance that you’re on the right track. They provide encouragement, and they help us to maintain self-belief in those moments where we are liable to lapse into unproductive self-doubt or imposter syndrome.

In hindsight, I realise these moments flowed naturally in an office environment.

Sometimes we seek them out in an ad hoc way – a conversation in the hallway about the thing you’re working on right now, a request for someone to quickly look at something and give a tiny bit of feedback, a tiny moan about something you’re struggling with.

Sometimes they are completely unintended – someone looks over your shoulder at something you’re working on, or gives you a few encouraging words as you enter or leave a tough meeting, or just happens to comment positively on something they saw you do recently.

It is possible for these to happen when we are all remote, but it takes more effort and intentionality. As a result, I think we experience much less of this ambient reassurance when we work remotely.

Concerned about disrupting people’s flow with messaging, we’re much less likely to send that tiny message of encouragement or positivity. Without visibility of whether people are in focus mode or have a moment of availability for a small interaction, we keep things to ourselves. We only reach out and demand someone’s attention if it feels sufficiently important or well thought out.

So many of our interactions now are textual. More visible, auditable, traceable. Interactions that make us think twice. Far from a reassuring smile across the room or a secret thumbs up from the audience.

Getting the balance right is hard. Protecting our colleagues’ work-time flexibility and their focus time helps deliver some of the advantages of working remotely.

And yet, in the absence of these tiny, human interactions, we’re more dependent on our own, individual self-assurance. I never realised, until COVID and this long stretch of remote work, how dependent my self-assurance was on ambient reassurance from others. In its absence, the natural peaks and troughs we experience – from confidence in our abilities to despair that we will never be good enough – feel more frequent and more extreme.

So, knowing this, I experiment: with reaching out and sharing more than I might otherwise, both about what I’m working on and how I feel about it, and with micro-reassurance for others. I worry about the extra load I might be placing on others. And I ponder how our tools might take this new (to me) need into account as well.

Meanwhile, we muddle through. And I wonder if you’re experiencing this absence of ambient reassurance as well?

So, here’s a little reassurance for you right now: whatever you’re working on, you’re almost certainly doing better than you think you are. Don’t be so hard on yourself. Don’t be afraid to reach out for some feedback or just plain reassurance. Keep going, and stay safe!

If you found this interesting, you might also be interested in some research my team at Atlassian recently shared on what makes a difference to how people are experiencing remote work during the pandemic. Read more here.

The Benefits of an Open User Research Practice

[Poster: ‘User Research is a team sport’]

I never really loved mathematics. I am much more of a big-picture person than a tiny-detail person. But I usually did OK in maths tests because you got marks not just for the answer but for showing all the thinking you did to get there. I don’t always get the answer right, usually as a result of a simple mistake along the way, but you can see how I am thinking and where to intervene to correct me. We both learn.

I apply the same approach to research practice, especially when working with teams who may not have a particularly strong understanding of how and why we do things as we do. An open research practice has multiple benefits including:

  • learning how and why you make the decisions and take the actions you do at each stage
  • understanding what tradeoffs are being made and the impact they have (there are always tradeoffs)
  • understanding how we move from data to insight, and deeply understanding and trusting what we have learned.

Openness in research requires the willingness to adapt, to not always be right and perfect, and to go slower than you want to (and often far slower than people expect), and as a result it requires a decent amount of bravery.

There are three crucial stages for openness in research practice:

  • study design
  • fieldwork
  • analysis and synthesis

Openness in study design

Being open in the study design phase means bringing your team into the process of considering who we want to talk to (and, as a result, who we choose not to include) and what we want to talk with them about. In particular, the conversation about what kinds of differences matter in our audience base is an important one to have, and thinking about research recruitment can help to facilitate it.

Openness in fieldwork

Being open during fieldwork refers to a researcher’s willingness to have team mates observe the research as it happens. There are many different ways you can enable this, and different levels of interactivity your team might have with the participant during the study. Being comfortable with having your team observe as you conduct research can be really challenging for researchers at first. Once it becomes standard, though, it quickly becomes an essential part of our practice, and it helps us to demonstrate the difference between the research questions we want to answer and the questions we need to ask participants in order to answer them.

Many researchers are concerned that team mates will observe one session and run off to change the entire product based on a single data point. Although this is a commonly voiced concern, it is usually easily managed by clearly setting the expectation that everyone on the team observes at least two sessions before being allowed to participate in the analysis process (from which the findings emerge). We often use UIE’s Exposure Hours requirement of at least two hours of observation every six weeks as a metric that encourages team mates to experience more than a single session in any one research study.

Openness in analysis and synthesis

While giving team mates the opportunity to observe their customers and users first hand has obvious benefits, allowing them to participate in the analysis process is arguably even more important. This is where we truly pull back the curtain and show the hardest work of research: making sense of all the stories we have heard and the things we have observed.

Robust analysis and synthesis is probably one of the most overlooked aspects of the research process – all too often we see people observe a number of sessions, take a few bullet-point notes, et voilà – the findings immediately emerge.

If only it were really that simple. Analysis and synthesis is hard, time-consuming work when done properly. Doing it properly is essential if you want to rid yourselves of as many of those annoying cognitive biases as possible – in particular confirmation bias and the recency effect.

Allowing and encouraging team mates to participate in research analysis gives them an opportunity to get much closer to more of the data, but it also helps them to understand the way that we process that data in order to make sense of it and draw conclusions. It allows them to challenge the ways we are forming narratives about what we believe that data means and demonstrates the traceability of those claims back to the original source data.

This blog post and video describe how I’ve done collaborative analysis successfully with teams.

Open research is challenging but worthwhile

It is beyond dispute that working in an open way – research as a team sport – is slower and more painful for researchers than putting our heads down and getting through the work alone. It is, on the surface, less efficient and more annoying. Nonetheless, if you want to grow understanding of and respect for the research craft in your organisation, it is very much worth the overhead to open up your processes to your team and invite them in to participate actively. Over time, the increased Research IQ in your team will pay dividends, and your ability to have impact efficiently will increase.

It is worth remembering that the most important thing is not the time it takes to deliver the report, but the impact our research has on our team’s ability to make good decisions for our customers and our users. Let’s make sure we’re optimising for efficiency towards the right outcome.

Five dysfunctions of ‘democratised’ research. Part 5 – Stunted capability

This is the fifth and final post in a series examining some of the most common and most problematic issues we need to consider when looking to scale research in organisations. You can start with the first post in this series here.

Here are five common dysfunctions that we are contending with.

  1. Teams are incentivised to move quickly and ship, and to care less about reliable and valid research
  2. Researching within our silos leads to false positives
  3. Research as a weapon (validate or die)
  4. Quantitative fallacies
  5. Stunted capability

In this post, we’re looking at what happens when the research practice in an organisation fails to mature.

A great first step

‘Testing one user is 100 percent better than testing none.’ – Steve Krug, Don’t Make Me Think

Many organisations get started doing research with customers and users off the back of encouragement from people like Steve Krug and his classic book ‘Don’t Make Me Think’. In this and other books, Steve makes simple usability testing accessible and achievable for almost anyone.

Steve and others like him are evangelists, reaching out to companies that are afraid to engage with their customers to understand opportunities to improve. This is important work. Their message is usually that talking to customers is not hard or scary, and that we’ll be better off doing a bit of it, even imperfectly, than not doing it at all.

The first step can be scary

And they are right. Having anyone in the company talk to just one user (and hopefully some more) is a fabulous first step. But it is intended to be just that – a first step: an encouragement to realise the benefits of involving people outside our offices in the process of designing and developing products and services, help in overcoming the fear of engaging with customers and users, and an opportunity to experience how beneficial this can be.

For those of us who work with research participants on a regular basis, it may be hard to recall exactly how terrifying those first few research sessions felt. Even trained and experienced researchers continue to experience some background fear (or exhilaration?) of all the things that could go wrong in the research study – and there are plenty!

The thing about first steps, though, is that they are usually intended to be followed by second steps. Once we break through the fear (or in some cases, just lack of awareness), the idea is that we continue to increase the maturity of our practice.

And this is where many organisations seem to hit a roadblock. More and more people in the organisation might be out eagerly involving customers in the process of shaping their products, but they often don’t invest either in improving their own research skills or in hiring people who have training and experience doing research.

Talking to users is not research

One important realisation we need to have on the path to maturity is that ‘talking to customers’ is actually not the same thing as doing research. Talking to customers, or watching customers use our products and services, has many benefits: it can increase our empathy for our customers and users, it can expose us to scenarios of use that are dramatically different from our own and from what we would expect, and it can provide clues as to where the biggest problems may lie. All of these are good outcomes.

If we want to use research as evidence for decision making – either for product strategy or design decisions – then we need to be able to do more to ensure that the insights we are gleaning are sufficiently reliable and valid.

Research doesn’t need to be ‘perfect’, just valid and reliable

 ‘I don’t need the research to be perfect, I just need enough to help me make a decision’.

Often this is said in response to the suggestion that the research we should be doing will take longer, or be more difficult and expensive, than our speaker would like. In this situation there is often a pre-existing ‘hunch’, and they are looking to users for validation. Or perhaps they are stuck between two options and seek a tie-breaker.

Any specialist researcher has almost certainly had their recommended approach discredited as ‘too academic’, and sometimes it is true. Sometimes the research methodology is overdone for the question the business is seeking to answer. But what often follows is a bit of a race to the bottom where considered sample design and appropriate methodology are quickly discarded in favour of whatever is fastest and easiest.

Without the right experience and training, all too often interviewers ‘cut to the chase’, getting more or less directly to the topic at hand. Somewhere in the world right now, a product manager under pressure to make a decision is asking questions like these in a customer interview:

‘here’s what we’re thinking of making, what do you think about it?’

or, perhaps worse…

‘if we made this, would you pay for it?’

It can be easy and tempting – so much faster and often quantitative – to mistake the research question for the interview question.

Even with training, it seems that the urge to be able to say that 10 out of 12 people said they would pay for it is almost irresistible. ‘Beating around the bush’ to get the question answered seems like a waste of everyone’s time, in a period where bias to action and the desire to ship at velocity are most valued.

(It shouldn’t really be a surprise that lack of research capability maturity exposes us to the previous four dysfunctions).

Matching methodology to risk

Whilst we should have plenty of sympathy for this desire for lightweight research and simplicity, it is important to ensure that the methods employed are matched to the risk involved in the decision, rather than to the most compressed timeframe.

As our organisations grow, the decisions we take using evidence from our customers can become more and more substantial – the gains of getting it right are greater and the risks of getting it wrong get uglier.

In the same way, our research maturity needs to keep growing so that it can match the size of the risk of getting it wrong.

This is not to say that mature organisations only ever do serious, time consuming research. Rather, that we invest where the risk is highest.

Investment might look like hiring trained researchers who can design and recruit the right sample and conduct the research in a way that reduces bias. Or it might look like iterative research with an ever-increasing number of increasingly diverse participants, sprint after sprint, allowing the team to continue to learn. This can work beautifully when the team is able to be responsive to that learning over time.

Investing too much

Conversely, there are situations where the investment in research is far too high for the decision being made. This often happens where the organisation’s design process has broken down, or where designers have entirely lost confidence in their ability to make relatively conventional design decisions. In these situations we design complex studies to ‘validate’ one micro design treatment over another, and the mismatch of risk to research investment can result in large quantities of what I would consider to be wasteful and often unreliable research.

Beware Dunning-Kruger

[Image: Dunning-Kruger graph of confidence vs expertise]

User Research is particularly susceptible to the Dunning-Kruger effect, wherein a relatively small amount of knowledge can result in an excess of confidence. Many people claim a ‘background in research’ when they might mean that they watched someone else run a bunch of usability studies in their last job, or that they did a research-based degree at university.

Many designers and product managers are entirely happy with the outcomes they get from research and how it enables their practice – and often loudly object to the suggestion that anyone could get a better result from the research than they do.

Yet, at the same time, the harsh reality is that the work being done often produces misleading outcomes that can put their product and their organisation at risk.

It also undermines the reputation of research in the organisation when a ‘researched’ product goes into the world and doesn’t succeed as expected: ‘We did research before and it didn’t work’.

In the same way that design and product management capabilities often require an engineering-led organisation to move through the stages from unconscious incompetence to conscious competence, the very same is true for the research capability.

Achieving research maturity

And so, at the end of our five dysfunctions, what can be done to provoke an organisation not only to involve users in the process of creating products and services, but to start – and continue – to grow its ability to do so, revealing important insights that are both reliable and valid?

Here are some things that have worked for me.

Perhaps through improving business fluency: talking less about empathy and more about the risk to the business of getting it wrong, and talking less about customer obsession and more about the reliability and validity of the different types of evidence we can use to make decisions. And by running an open research practice – getting out of the black box, removing any mystery from our work, showing our workings, and involving others in the process.

Make use of existing momentum. Bring new shape and substance to whatever your organisation already uses to direct its attention to its customers – whether it’s an NPS survey, a customer convention, a feedback form, or a guerrilla research practice – and start by shaping those existing connections into something more insightful, more reliable, and more valid.

Be brave, but be patient – and we’ll get there.