This came out of an interesting exchange on Twitter the other day with a colleague who posted a tweet about job opportunities at his company, promoting the chance to work on big brands if you joined him.
He also has an awesome team working with him. I suggested that perhaps he should be promoting that as well, or instead.
I got to wondering (again) how other people saw the world – what was important to UXers when they were thinking about a new job, and what the process was like for finding, interviewing for and taking a new job.
Being a good UXer, it was only logical to take the next step and do some research.
I’ll collate the results and share them back in a few weeks.
I was in Chicago the other week and out with a friend who has multiple, severe dietary allergies. She can’t eat dairy, eggs, wheat, and a bunch of other stuff. Eating out is a bit of a pain for her but, if she doesn’t get it right, a whole lot more pain later.
We were in one of those posh grocery stores with its own little cafe and, after much deliberation, decided what to eat. The girl who was taking our order had a desk that was positioned in a way that made it easy for me to look over her shoulder at the interface she was using to take the order.
Taking a standard order was pretty easy – you just pressed the button that said ‘thai chicken salad’. Simple.
Then it came time to take my friend’s order. First she had to press the button that said ‘thai chicken salad’ and then my friend asked that she make a special note for the chef about her allergies. To do this, the girl had to press the notes button and then type the special request in. No assistance from the UI whatsoever. And that’s when the trouble struck. Spelling.
Without wanting to ridicule her – she failed to spell ‘dairy’ even in a way that you might guess what she intended. There was no way she was ever going to accurately convey my friend’s requirements to the chef. I watched, quietly, as she tried and failed to type the instructions and eventually sent the following note through to the chef:
Here’s the thing. Our order taker is far from an edge case. Jonathan Kozol has written extensively about illiteracy in the US (and there are similar problems in many parts of the world). He says:
Twenty-five million American adults cannot read the poison warnings on a can of pesticide, a letter from their child’s teacher, or the front page of a daily paper. An additional 35 million read only at a level which is less than equal to the full survival needs of our society.
Together, these 60 million people represent more than one third of the entire adult population.
The largest numbers of illiterate adults are white, native-born Americans. In proportion to population, however, the figures are higher for blacks and Hispanics than for whites. Sixteen percent of white adults, 44 percent of blacks, and 56 percent of Hispanic citizens are functional or marginal illiterates. Figures for the younger generation of black adults are increasing. Forty-seven percent of all black seventeen-year olds are functionally illiterate. That figure is expected to climb to 50 percent by 1990.
Fifteen percent of recent graduates of urban high schools read at less than sixth grade level. One million teenage children between twelve and seventeen cannot read above the third grade level. Eighty-five percent of juveniles who come before the courts are functionally illiterate. Half the heads of households classified below the poverty line by federal standards cannot read an eighth grade book. Over one third of mothers who receive support from welfare are functionally illiterate. Of 8 million unemployed adults, 4 to 6 million lack the skills to be retrained for hi-tech jobs. (more here)
This is a big problem. This is not an edge case. And, before you say it, the answer is not icons. (The number of times people have told me that the solution to designing for an illiterate audience is icons. Now make me an icon for ‘vegan’).
I don’t have the solution, but I do have a couple of guiding thoughts.
People are better at recognising words than they are at making them from scratch. My 3yr old can recognise words in books that he is familiar with but he can’t read (no matter what he tells you). This is true for all of us. Recognition is far easier than recall. Think about foreign languages – most of us can read a lot more than we can speak, right?
Think about mission critical tasks. Things that, if not done right, could hurt people or have significant negative impact on people or business. Don’t give people a blank box to fill in when you’re designing these tasks. Give options (in words, not icons). Let people recognise and select, don’t make them remember how to spell stuff.
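As a sketch of what this looks like in practice, imagine the cafe’s ordering system offering a fixed list of common allergens to tick, so the system owns the spelling rather than the order taker. (The allergen list and note format below are my own illustrative assumptions, not any real point-of-sale system.)

```python
# Illustrative sketch: recognition over recall for a mission-critical field.
# The allergen list and the note format are assumptions for illustration only.

ALLERGENS = ["dairy", "eggs", "wheat", "peanuts", "tree nuts", "soy", "fish", "shellfish"]

def allergy_note(selected_indices):
    """Build a correctly spelled note for the chef from selected options.

    The order taker recognises and selects allergens from a list on screen;
    she never has to recall how to spell them.
    """
    chosen = [ALLERGENS[i] for i in selected_indices]
    if not chosen:
        return "No allergies noted."
    return "ALLERGY ALERT - must not contain: " + ", ".join(chosen)

# The order taker ticks 'dairy', 'eggs' and 'wheat' on screen:
print(allergy_note([0, 1, 2]))
# -> ALLERGY ALERT - must not contain: dairy, eggs, wheat
```

A free-text notes field could still exist for the genuinely unusual case, but the common, dangerous cases never depend on spelling.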
Jan Chipchase has done a lot of design research work with Nokia in the area of device design for illiterate end users and supports the view that making the interface easy to ‘learn’ (which largely means ‘remember’ for people who are less literate), is the best approach – better than icons or audio menus or all other apparently obvious solutions.
This presentation is worth a flip through if you’re interested in his experience and outcomes.
None of this is new, granted. But it’s not something I hear us talking about anywhere near enough. Watching that poor girl struggle with that interface – and watching poor design put my friend’s health at risk – was a real wake-up call and a reminder to me of how widespread and significant this issue is.
I’m resolving to be more aware of this in the future and I hope you will too.
(And, if you’re in the UK, consider signing the Save Bookstart petition – this invaluable service puts books into the hands of young children – having books in the house in childhood is a key indicator of later literacy).
I happened across this Twitter exchange this morning. It’s not a surprising response to personas – I’ve shared this response at times myself and have empathy for both points of view.
Here’s the thing… You don’t really want to use personas, do you? They are really a pretty cumbersome way of maintaining your customer’s active presence in the design & product management process.
What you really want is a small, tight team who get who your users are, what they value about your product/service and what is behaviourally significant about them. And you want regular access to them (if you or your boss are representative of your target audience this helps enormously).
Enter reality – the majority of us are not working in this kind of environment. We are working for large organisations that focus more on themselves than on their users, with people who may not have seen or heard from a customer for years (if ever), whose attention is constantly directed at internal KPIs focussed on quantity not quality, and who resist making any decision at all rather than risk a sub-optimal decision that can be traced back to them.
Sure, the likelihood of incredible design flourishing in this environment is significantly reduced, but what do we do? Give up?
We can’t all do that, can we? And neither should we.
Many of us have experienced that moment when a team transforms – when they realise what it is like to be their customer and how easy it would be to make that experience better. This most often happens during usability testing (around the 3rd or 4th participant, when the team acknowledges that perhaps we haven’t recruited a bunch of stupid users and maybe we do need to change the design a little).
Well made and well used personas are less able to create this transformation (watching real users will always trump personas), but they can help maintain that transformation, act as a tool to evangelise customer focus throughout the organisation, create a common language around our users and – possibly my favourite thing – allow us to reduce usage of the term ‘user’ (so abstract, inhuman and elastic) and replace it with our personas’ names.
Yes, this does make you feel like a bit of a hippy. I agree. But it helps, a lot, to transform focus from internal processes and priorities to what people actually do, need, want.
You don’t *have* to use personas to do good design. If you make bad personas (made up rather than researched, focused on demographics rather than relevant behaviours and attitudes), and if you use them poorly (make them and forget about them, or keep them hidden within the UX team), then you might as well not use them.
But well made personas in day-to-day use throughout the organisation are incredibly useful when you need to gain and maintain focus on the (potential) customer.
Here’s the test:
do you have personas for your project/product?
are they made of data from real (potential) customers?
do they have real names not segment names?
do you have fewer than five personas?
can you remember all the names of your personas and describe them?
do you use them to guide, evaluate and/or explain design decisions?
can your boss name your personas?
can the developers on your team name your personas?
If you’re not answering yes to the majority of these, there are probably good reasons why personas aren’t really working out for you.
Don’t fret if you didn’t do so well here – most people don’t (out of a room of dozens of UXers last night, only one lonely hand remained in the air at the end of this line of questioning).
I reckon personas are the best known but most misunderstood and misused tool in the UX toolkit. Don’t throw your personas out necessarily but see how you can incrementally improve how they’re made and communicated.
And if you’re fortunate enough to work in a project team that doesn’t need personas, well, lucky you – just don’t be too successful or you may grow large enough that you’ll wind up needing personas after all! ;)
I’m working on the UX for an application for the new Android Honeycomb User Interface (UI). This is the first time I’ve designed for this operating system and we were fortunate enough to have Nick Butcher of Google UK take us for a walk through the new environment and introduce us to some of the UI conventions.
If you’ve not yet had the pleasure, you can take a look for yourself here:
There’s a lot to like about the Honeycomb OS in my opinion, especially once you get to know it a little and learn what the many icons that litter the interface actually refer to. However, there is one UI convention that really bothers me, and that we are going to largely ignore in this application: the action bar.
So, the action bar is application specific and context specific within an application. The idea is that you put any actions you may need in that part of the application into the action bar, where you can then ‘take action’: the more contextually relevant the action, the further to the left it sits; the less relevant and more global, the further to the right.
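To make the convention concrete, here is a toy model of that left-to-right ordering – my own illustration, not Android code – where each action gets a contextual-relevance score and the bar lays them out from most contextual to most global. The example actions and scores are assumptions:

```python
# Toy model of the Honeycomb action bar convention described above:
# actions are laid out left to right from most contextually relevant
# to most global. The action names and scores are illustrative assumptions.

def layout_action_bar(actions):
    """Return action names ordered left to right for the bar.

    `actions` maps an action name to a contextual-relevance score
    (higher = more specific to the current context, so further left).
    """
    return sorted(actions, key=lambda name: actions[name], reverse=True)

# In a hypothetical email app's message view, 'reply' is highly contextual
# while 'settings' is global to the whole application:
bar = layout_action_bar({"settings": 0.1, "search": 0.5, "reply": 0.9, "delete": 0.8})
print(bar)
# -> ['reply', 'delete', 'search', 'settings']
```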
Now, it is A Good Thing that there is a convention for this, especially in an open source environment where designers/developers play fast and loose with the interface and the overall experience of using the device running this platform suffers as a result (as, having played with a Xoom for several days now, I can report is definitely the case).
The implementation, however, I’m less thrilled with, particularly as I’m confident it will actually encourage designers to ignore or duplicate the action bar’s functionality within the interface… and here’s why.
Note that the red band labeled ‘reach’ (indicating that you have to reach to access this zone and it is not an easy or comfortable activity) corresponds almost exactly with the Action Bar.
Although the TechCrunch article suggests that you instinctively look to the action bar (which, given that it is visually quite pushed back, seems to me to be an easily learned behaviour rather than a natural one), it qualifies this immediately by saying that you do this only if you can’t locate the action in the applications main UI.
This means that, if you follow Honeycomb’s UI conventions (in the hope of making the ecosystem of applications a good user experience, with some potential hope of ever rivalling Apple’s iPad), you have three options: make the actions people need to perform ergonomically awkward (the Xoom, for example, is no lightweight and, unless you’re resting it on a table or your lap, requires some juggling in order to hit an item on the action bar – juggling that also makes the selection significantly less accurate); duplicate the actions in both the bar and the application UI; or ignore the convention and put the actions that are most contextually relevant in a place that falls within the ‘easy’ activity zones.
I doubt that the designers of Honeycomb intended the action bar to replace in-context calls to action, and neither should it, but it seems to me that the actual use of the action bar is now very ambiguous. The more I design for this environment, the more I’m finding that the action bar feels much more suitable for application-wide actions than contextual ones – not least because it is often visually quite removed from the content it is acting on.
This is a significant iteration on the previous Android software and, as our friends at TechCrunch say: ‘…at least people will actually be able to find these options, which is more than can be said about the options hidden behind the ‘Menu’ button on current versions of Android (which many people never hit).’
At the same time it feels like a rather awkward solution to this problem and certainly one that wasn’t really thinking about the mechanics of using a tablet in the wild.
As someone with an existing interest in design and UX for open source, I’d be really interested to hear what other designers who are encountering this convention are making of it. I’d also like to hear about whether the option of placing the action bar nearer to the ‘easy’ activity zones was considered and how that played out.
I’d really like to see Honeycomb evolve, and for good and clear UX conventions to emerge in this space, so that we have a really solid alternative to the closed and ever more controlled world of Apple.
(disclaimer: I own Apple gear. I know it’s great).