Can you automate trust? I think about that every time I come up against something that would be good to do in a virtual world, but is hampered or won’t work at all because trust is an issue.
- I can test people in online courses, but how do I know the person taking the test isn’t cheating, or even that they’re the same person I’m credentialing?
- I can collaborate on an invention or a book or a movie online, but how do I know all our work won’t be stolen? And, again, am I working with the person I think I am?
- Is that person providing counseling an online psychiatrist or a charlatan?
- Will the person I pay to create a web site take the money and run?
- Is that picture in the dating service a picture of the person I’m chatting with?
And so on. We have lots of fixes: assurances with credit cards, third-party bonding, the look and feel of a site, passwords, encryption, past behavior, peer ratings, and more. And, of course, we can be deceived in a face-to-face environment, too. But the physical world still has an edge on trust.
Part of this is related to scale. When the online world was very small, it was really a town full of sophisticated people with few degrees of separation and mostly good intentions, and trust was common. That was the world before viruses and firewalls. I am reminded of a quote from Heinlein:
> I learned centuries back that there is no privacy in any society crowded enough to need ID’s. A law guaranteeing privacy simply insures that bugs—microphones and lenses and so forth—are that much harder to spot.
The main point for me is that a society crowded enough to need ID’s works by different rules. That time, for the Internet, passed long ago. The closest we come now is walled-off communities, and I always become leery of those once they grow past a few hundred people.
But imagine if you could automate trust:
- If you could connect every statement and action to a specific person who could be held accountable.
- If credentials were completely verifiable.
- If reputation were measured (and visible) across competencies and intentions.
What might be possible then? How would it change the world?
Of course, the devil is in the details. How we would automate trust matters, because any such automation will inevitably carry costs, side effects, and unintended consequences. I’ll offer some thoughts on this next week, but I welcome ideas and comments in the meantime.