I was reading about computer vision and face recognition in the New York Times, and I was reminded of a quote from one of my favorite movies, The Princess Bride.
Fezzik (the giant): Why do you wear a mask? Were you burned by acid, or something like that?
Man in Black (Westley): Oh no, it’s just that they’re terribly comfortable. I think everyone will be wearing them in the future.
As computer vision becomes more ubiquitous and as the software becomes better at recognizing who we are and analyzing our facial expressions, we may turn to masks in self-defense. Perhaps the most disturbing example given in the article is Affectiva’s detailed tracking of facial expressions as a volunteer watched a movie.
“To the human eye, Ms. Sonin appeared to be amused. The software agreed, said Dr. Kaliouby, though it used a finer-grained analysis, like recording that her smiles were symmetrical (signaling amusement, not embarrassment) and not smirks. The software, Dr. Kaliouby said, allows for continuous, objective measurement of viewers’ response to media, and in the future will do so in large numbers on the Web.”
Will this broad, detailed and quick feedback make movies and other products more satisfying? Given the impact of audience testing on films (not to mention on politics), I have my doubts.
I have three key concerns about detecting identity and mood, moment by moment. First, art and education often carry us through discomfort before the “reveal.” I remember reading a poem in college that was intended to create outrage until the last line, which brought delight (and relief). It seemed to achieve its purpose, but I wonder if marketers, producers and advocates for the politically correct would allow people to be uncomfortable along the journey. (This would be even more acute for those who plant the seeds for an aha moment that would occur much later. I can remember waking up the next day laughing at jokes my math teacher told.)
Second, I worry about the practiced use of distraction through sweetening. Nixon, about to be dropped by Eisenhower because of special favors he’d received from contributors, saved his career by appealing to the public for sympathy through a dog he had given his daughter (Checkers). This sweetening distracted people from their outrage, and he went on to be President. Politicians still do this, but the emerging technologies will make it much easier (and more ubiquitous).
Third, I have a suspicion that moods are more complex than our faces reveal. We say some experiences leave us with “mixed emotions,” and I don’t think this is far from the truth. Coding of emotions is likely to lead to distortion.
Of course, there are deep concerns about privacy and many forms of manipulation. When we talk about tools for controlling mood, we are talking about lots of power, often at work subliminally.
Are observant machines all bad? No. One use mentioned in the article is training people who are autistic to be more sensitive to the moods of others, and I believe many of us could benefit from such training. In fact, I can imagine better feedback on our own emotions could help us to overcome performance anxiety, warn us when we are in a state that would increase danger (such as driving while angry), and provide new tools for managing moods that affect health. You could even use mood detection to determine if drugs for treating mood disorders (depression, attention deficit, anxiety) are effective.
Tracking of mood, especially when it is done across a whole population, has the potential to create dramatic cultural and societal changes. These tools will certainly be developed and used on a large scale to sell products, but the implications are much deeper.
Now, where did I leave my mask?