In a town ten miles from my house, a GPS system directed a driver onto the railroad tracks. The driver escaped, but the car was demolished. The same thing happened a few weeks later to another driver. Do we trust computers? Yes. Even when we shouldn’t.
We are most likely to trust them with factual information. Watson buzzed in with correct Jeopardy answers faster than its human opponents. Siri gives people directions (and even tells a few jokes). I think that we trust more with use and with how natural the interaction is. If that’s true, we will see more people driving onto the railroad tracks of life as interfaces become more familiar and feel more natural. And as computers learn about us and adjust their styles to the ones that comfort us, we’ll get more use out of them, but we’ll be even more vulnerable to becoming too trusting. (I remember a time before Photoshop when people believed pictures couldn’t lie.)
Things will get even more complicated as the interfaces offer opinions (sometimes being the face for crowdsourced answers) and advice. We will need to penetrate another level to assess the credibility of the responses. This is already a problem where people (and search programs) have turned the Internet into an echo chamber of sometimes radical perspectives and where review sites have gotten clogged with biased views (sometimes by providers themselves).
Imagine how powerful computers as advisors will become as they watch our facial expressions and respond with their own faces in ways that inspire confidence. Take that a step further to body language. In theory, computers can become wonderful liars.
Can we turn this around? Can we use independent computer agents to assess the answers and provide warnings or ratings of credibility?
Perhaps we could go even further and create agents that will ask US questions and challenge our answers. Just as I make it a point to have people in my circle of friends and associates who hold divergent points of view, I would love to have an online Devil’s Advocate, challenging my opinions and “facts.” This seems doable (perhaps first as systems that emulate pundits), and it might help us avoid some disasters.