My mother’s family always named their cars. Gertrude, Lucy, Edie—these were the mechanical members of her family as she was growing up. So, I found it ironic that people of her generation are still giving names to devices. In particular, a New York Times article about how robots are being used to help care for elderly people, including those with dementia, caught my interest. A cuddly robot, designed to look somewhat like a baby seal, is making the rounds among patients who treat it like a pet. They talk to it and stroke its fur and call it by its name, Paro.
Some patients who’ve responded to little else come alive when Paro visits. It gets these elderly residents of nursing homes active and engaged. Doctors say they show both physical and psychological improvement. In a way, it is like long-used pet therapy, with all the benefits and no need for cleaning up or feeding. Paro just has to be recharged from time to time.
About the same time, I read the CNN article about humanizing the voices of GPS devices. One of the benefits here is that they become less annoying. The out-of-the-box versions, for most people, seem too bossy. An interesting item within the article is that people are particularly pleased with devices that use their own voices and match their tone.
I wrote last time about how technology becomes invisible. Surely, one of the most subtle ways this happens is when technology hides in plain sight by aligning itself with our emotions. We have a natural tendency to anthropomorphize our environment, and design engineers are working to take advantage of that.
If devices both detect and project emotions appropriately, people accept them more readily. The technology seems more natural, and some of the fear and hesitancy that people feel goes away. In fact, one use of this technology is to get people to disclose more. If you create a voice that is more humanlike, people will interact with it for longer periods of time and provide more information. When my wife has prescriptions filled on the phone, the automated system seems to be chatting, saying things like, “Let’s get started!”
The concerns with technology becoming invisible that I mentioned in my last entry begin to redline when devices blend in by manipulating our emotions. Think of it: the real target of all that data, the real entity with interest in what you say and how you behave, is not the one with which you are interacting. The agendas of those who control these devices, and who can benefit from them, are invisible to you. This is by intention. Emotional design is cooked into products because it focuses our attention on the task at hand, not on the medium.
This is not to say that there are not many benefits to users in having emotions embedded within devices. I would not take Paro away from the elderly patients any more than I would snatch a well-loved teddy bear from a child. In those cases, the pretend world created is enriching and probably does no harm. But this will not always be the case. And the danger is that we only ask the right questions at the times when the technology is still novel, annoying, and somewhat in the way. Once we become acclimated to devices as part of our world, we only challenge their uses and purposes when things go wrong, usually very wrong, as with the BP Deepwater Horizon oil well.
And emotional invisibility can be more dangerous because it hits us on a non-intellectual level. Manipulating emotions creates subtle ways in which people can be controlled and led to act against their own best interests. A lie is much easier to swallow if it has emotional content. Any propagandist knows this. And the fact that the message of emotional devices is embodied in something other than text does not lessen the concerns about its abuse.