Andy Dufresne, the wrongly convicted protagonist of The Shawshank Redemption, provocatively asks a prison guard early in the film: “Do you trust your wife?” It’s a dead-serious question about avoiding taxes on a financial windfall that has just come the guard’s way, and it sets in motion the events that eventually win Andy his freedom. It’s also a dead-serious question being asked today with respect to AI.
Can we trust AI?
Trust. At this point we all recognize that successful deployment of AI is going to come down to something much more fundamental than the technical aspects of algorithms, neural networks and machine learning. It’s going to come down to trust.
Do we trust the black box calculations of AI? Do we trust it to drive our cars, diagnose our illnesses, and manage our finances?
We have the same issue of trust with physical objects and systems, but under a different set of circumstances. How do I trust that the sun will rise again tomorrow, in the east no less? How do I trust that bridge won’t collapse? How do I trust that this medicine is safe?
Just as the issues and objects of trust are along a continuum, so too are the mechanisms that build trust. Experience builds trust – the sun has risen in the east every day so far, and there’s even this crazy theory out there that the Earth is round and rotates on its axis. Ubiquity builds trust – everybody else is driving over that bridge. Time builds trust – that bridge has stood there for twenty-plus years now, good chance it will still be there tomorrow. Habits build trust.
Another key aspect of trust is the role of the intermediary: the lawyer, banker, trustee, the disinterested third-party umpire, or the regulator. You come to trust the products you buy because of their ratings from Consumer Reports, their certification from Underwriters Laboratories, or their FDA approval. AI can make use of such intermediaries as well, or it can remove the need for intermediaries through trusted technology systems, the most prominent of the lot being blockchain.
AI will undoubtedly employ a wide array of these approaches to trust. Trust in AI will come naturally as the technology becomes more widespread, accepted and even ordinary. Experience will show (or not) that the process works, that the bridge does indeed stay standing. Time will deliver the critical mass of billions of miles of accident-free driving by autonomous vehicles. Trusted intermediaries will vouch for the veracity of AI-generated output. Blockchain transactions will become more common, and trust in the technology will follow.
But if AI’s designers are savvy, they will not rely solely on such passive measures of trust. They will welcome and actively seek to establish the standards, testing and regulations that promote trust among the wider audience who cannot learn trust through direct experience. Institutions and organizations that design or utilize AI will go out of their way to demonstrate not just the trustworthiness of the AI they employ but also the trustworthiness of their organizational values and reputation. AI can earn your trust, and lose it, as readily as any airline, restaurant or tire manufacturer. When your business model comes to depend almost entirely on AI-driven processes, your corporate reputation will be all you have.
AI designers will most definitely work out a partnership with blockchain – it’s a natural fit for something so new, so risky and initially so untrustworthy. Not so sure you really trust that company after what they did last time? With blockchain in the middle you can still trust that you’ll get what’s been promised, and the company may have found a way to salvage its damaged reputation.
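That intuition is worth making concrete. Below is a minimal, purely illustrative Python sketch – not any real blockchain implementation, and the records in it are hypothetical – of the tamper-evident ledger idea at blockchain’s core: each entry commits to the one before it, so a promise recorded this way can be checked without trusting the record-keeper.

    import hashlib
    import json

    def make_block(data, prev_hash):
        # Each block commits to its payload and to the previous block's hash,
        # so altering any earlier record invalidates every link that follows.
        body = json.dumps({"data": data, "prev": prev_hash}, sort_keys=True)
        return {"data": data, "prev": prev_hash,
                "hash": hashlib.sha256(body.encode()).hexdigest()}

    def verify(chain):
        # Recompute every hash and re-check every link; True only if untampered.
        for i, block in enumerate(chain):
            body = json.dumps({"data": block["data"], "prev": block["prev"]},
                              sort_keys=True)
            if hashlib.sha256(body.encode()).hexdigest() != block["hash"]:
                return False
            if i > 0 and block["prev"] != chain[i - 1]["hash"]:
                return False
        return True

    # A tiny ledger of promises (the entries are purely illustrative).
    chain = [make_block("order placed", "genesis")]
    chain.append(make_block("order shipped", chain[-1]["hash"]))
    print(verify(chain))                  # True
    chain[0]["data"] = "order cancelled"  # someone quietly rewrites history
    print(verify(chain))                  # False: the ledger no longer checks out

The point of the sketch is simply that verification replaces reputation: you don’t have to believe the company’s word, because any rewrite of the record is detectable.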
Eventually, when AI reaches a level of capability where it interacts in a “personal” manner with humans, it will also develop the ability to undertake voluntary, unremunerated actions to build working social relationships. It will recognize the need for a newly encountered human to begin that relationship cautiously, taking small, trust-building steps until something more substantial is established. It will learn to be somewhat vulnerable, as we all are, and give you a glimpse into the motivations underlying its black box, transparency into its goal functions and action sets. I have no doubt that successful AI of the future will learn the value and importance of gift giving in building a relationship, and when its owners push back at the ostensibly negative ROI of those gifts, it will have the quantified justification ready to hand.
We humans will, as a consequence, learn to fully trust AI when it builds trust relationships in the same familiar manner as the biological social creatures around us. It will still be a black box, but no more than I already am to you and you already are to me. Of course, the closer AI comes to approximating human social behavior in its relationships with us, the more trust will be possible, and the more effective AI will be.
Watch the webinar: Implementing AI Systems with Interpretability, Transparency and Trust