Here’s something I’ve been pondering a while, and I’d be interested to hear what you think.
We know that people anthropomorphise technology, that is to say, they relate to it as though it has human qualities. People talk to their computers, they talk about them as though they are capable of having human emotions or objectives. They have, for many years, had more or less a one-to-one relationship with their computer.
These days, if you’re a fan of social software, it seems as though your computer is now crawling with *real* people. Emails, twitter messages, incoming IM conversations, skype calls. Photographs of people you know, used as avatars, are constantly popping up on your screen, appearing in the web pages you’re browsing. Real human voices, voices of people you know, abound.
It seems to me an odd juxtaposition – anthropomorphism and the increasing *realness* of the voices of the people who now ‘populate’, so vibrantly, our computers. (Did we ever feel the need to ‘humanise’ our mobile phones? I mean, I hate mine passionately… but I’m pretty sure it’s not ‘personal’.)
I get a sense that, perhaps, our need to imagine our computers as human-like may be on the decline as they become more and more tools for transmitting the voices of people we know and love.
Anyone else getting the sense that anthropomorphism may be, slowly, on the way out?
18 thoughts on “Will anthropomorphism decline with the rise of social software?”
It sounds like you’re suggesting the medium loses its “personality” as the message becomes the medium. Except, there’s so many places where the medium is the message. I think we focus a little too much on computers-as-tools-for-communication, rather than as, you know, computers. Also: we have a very narrow idea of what computers are. Really, they’re everywhere. We just don’t think about the ones in our cars or our washing machines. Yet.
In the case where machines aren’t used for communication, I still think they might be losing their anthropomorphic personality… and being replaced with a generic personality we attach specifically to computers. They’re not “human”, but “other”. (“Cylon”, perhaps…)
I’ll have a better answer by the end of the week – I’m going to be doing some work around this for the next few days, mainly on how computers should be talking to other computers.
Gosh, there’s so much between those lines I have completely failed to expand upon. Apologies. I’ll make sure I frame myself a bit better in future. It’s all very clear in my head, for sure…
I don’t know – I think it will be a long time before I stop swearing at my computer as if it’s a person when it crashes.
“You bloody f**king motherf**king b*stard of a prick, you always do this to me. Well this is the last time!”
You know, that kind of thing. :)
It seems at the moment that more and more things are becoming anthropomorphised, in a really infantile way. Witness coffee cups saying ‘Watch out, I’m hot!’, smoothies telling you how pure they are, or railway stations announcing “I’m sorry…”
I started recording them here: http://www.flickr.com/groups/firstpersonthings/
p.s. sorry again about the phone… we (nokia design) are trying to change things, but…
I think anthropomorphism has little to do with the apparent social aspects of computers. Rather, anthropomorphism is actually an indication of a human-machine interface that calls attention to itself, usually by doing something that interrupts the user’s focus on the task. This may be a good thing, as in, “Wow, how helpful of you to do that for me,” but more often it seems to be closer to Cheryl’s experience of, “Why the hell won’t you let me do my work?” Such arbitrary and often incomprehensible disruptions make the user aware of the machine as a separate entity with apparent habits and motivations of its own, much like a person. In other words, the machine has a personality. A certain arbitrariness seems present in most cases of anthropomorphizing: vintage cars, sailboats, the weather (think weather gods).
Long-time users learn and adapt to such arbitrariness, developing a real or imagined ability to predict and manage the machine, and the arbitrariness can transform from “unreliable” to an endearing “quirky.” The sense of intimacy with the machine that comes from such hard-won learning encourages the user to form what might be considered an irrational loyalty and passion for the machine. The user may actually start enjoying its personality. In general, however, I regard anthropomorphism as a warning sign of poor design.
Matt’s comments remind me of a recurring theme in Philip K Dick where inanimate objects are forever offering helpful advice (doors suddenly breaking out of their programmed “Have a nice day” to offer relationship advice and so on). Perhaps the anthropomorphism reflected an underlying optimism that computing devices of any variety were evolving towards sentience (an unconscious inversion of so many Golem myths?). Or perhaps we’re finally learning that a computer is essentially a car and no more or less worthy of a similar level of ascription of personality? Either way, I still swear at my Windows machine but I’ve never raised a hand in anger to my powerbook.
I do not think anthropomorphism is specifically related to the increase in social features found on the net platform.
I do agree there is a current decline in anthropomorphism and it may be related to the uncanny valley phenomenon. That is, OS interfaces (inc. web) have progressed to the point where they are not so quirky anymore. Not so much like that bothersome but quaint MG Mini. They’re more functional; they’re starting to do what it says on the box.
Once we’re past the lull in the valley, when we’re running intelligent agents on our machines, then I think we’ll see a full-blown revival of anthropomorphism.
Sounds like “anthropomorphism” is the word of the day (and a great word it is too)! I counted it a lovely 4 times in one small post! Brilliant.
We’ve been using technology to talk with real people for a while, from telephones to the telegraph and even smoke signals. Even the TV could be considered in this way (albeit that it is a one-way communications device).
I am sure that people also give these objects names and anthropomorphise them in order to facilitate interaction with the device.
From a social psychological perspective, though, the rise of social computing tells us that, as a culture, we’ve been using devices to facilitate interaction for so long that we’re probably ready and comfortable enough with using technology for the device to start to communicate back to us.
…the Cylon comment made by Tom might not be too far from the truth of the matter. You could also look at it in terms of HAL from 2001: A Space Odyssey, or Rommie from the Andromeda series.
…Web 2.0 Homo Technologicus: fat, long fingers, slim legs, but a very interesting face for the MyblogLog.com avatar ;-)
There was a girl I was crazy about who had a way of keeping people at a distance.
She kept them away precisely by keeping in touch through technology only. IM and cell phone and all that stuff was a way of abstracting people, turning them into words and maybe the occasional picture. After I saw her do this to her family and friends, I stopped being crazy about her.
Then I noticed all of us doing this, romanticizing the technology more than the connections. We don’t need to anthropomorphize the computer nowadays, since it is a lot more to us than one person. It’s our whole social being, for better or worse.
Yes I think you’re right ~ We communicate with beings, not things. Anthropomorphism was inevitable when most or all interaction was between you and your computer, before the internet itself and when the net was mainly a database. Not sure about cells (mobile phones) but when the first phones made their way into peoples’ homes way back some may well have anthropomorphised them.:)
I think Michael may have pinned down the exact issue … when the tool functions properly, we tend to forget that it’s there. My computer as a tool is not something that I consider when I’m interacting with others online, but when it fails me and interrupts that communication, that is when I am once again aware of its existence and treat it like a person that has “failed” me.
It seems that we tend to anthropomorphize things most frequently when they are either extremely reliable (my trusty vehicle/phone/TiVo) or extremely fallible (my *?!%$ computer/mobile).
Until the time that we do see more intelligent agents that interact with us on an individual level directly, our tools will remain hidden unless “broken”.
Insightful, Todd; spot on. From my own experience owning a ’60s Mini when I was in the UK, I would refine your definition from extremities to ‘objects with a perceived character’, with that perception growing towards the extremes of reliability and failure.
Todd, Pauric, others; I think what you’re discussing leads nicely into this post from my friend Tom Carden, about the problems cognitive dissonance raises for design. It goes beyond the anthropomorphic question and towards a much simpler one, about how we lose focus and flow when our frame is broken. It’s good stuff.
I have been listening to people talk about the services they use for social web activities. People not only seem to be talking about their friends on the service as “facebook friends”, as that is how and where they connect and communicate, but they also talk about the services as if they were one of their friends: “I need to spend time with facebook today”, and other such phrases.
Services like Flickr have played into the anthropomorphism aspect of their service, but many do not. Yet, the services that become common conduits for interacting with a desired or “needed” part of people’s lives start getting terms and phrases normally used for people used in association with the service.
It seems the services that are most effortless and easy to use get these attributes applied to them, while older, cumbersome services and applications are the things that are still cursed as software, sites, and applications.