Publication Date

2019

Abstract

As AI moves rapidly into the health care field, it promises to revolutionize and transform our approach to medical treatment. The black-box nature of AI, however, produces a shiver of discomfort for many people. How can we trust our health, let alone our very lives, to decisions whose pathways are unknown and impenetrable?

As challenging as these questions may be, they are not insurmountable. And, in fact, the health care field provides the perfect ground for finding our way through these challenges. How can that be? Why would we suggest that a circumstance in which we are putting our lives on the line is the perfect place to learn to trust AI? The answer is quite simple. Health care has always been a place where individuals must put their faith in that which they do not fully understand.

Consider the black-box nature of medicine itself. Although there is much we understand about the way in which a drug or a medical treatment works, there is much that we do not. In modern society, however, most people have little difficulty trusting their lives to incomprehensible treatments.

This article suggests that the pathways we use to place our trust in medicine provide useful models for learning to trust AI. As we stand on the brink of the AI revolution, our challenge is to create the structures and expertise that give all of society confidence in decision-making and information integrity.

Document Type

Article

Publication Title

Stanford Law & Policy Review
