The first ASHEcon in-person opening plenary in three years focused on AI in Health Care: Implications for Research, Practice, and Policy. The plenary was moderated by David Chan, Associate Professor of Health Policy at Stanford, and featured Leila Agha, Associate Professor of Economics at Dartmouth; Nathan Cortez, Professor of Law at Southern Methodist University; Tamra Moore, lawyer and partner at King & Spalding LLP; Ziad Obermeyer, Associate Professor of Health Policy and Management at UC Berkeley; and Christina Silcox, Research Director for Digital Health at the Duke-Margolis Center for Health Policy. The panelists discussed a variety of topics, including how to incorporate AI into medicine, optimal uses of AI, misconceptions and risks of AI, and where AI is likely to be used. Panelists’ thoughts on these topics are combined and summarized below.
How do we get physicians to accept algorithms when the algorithms are better?
AI is part of a long tradition of incorporating new technology into medicine. Physicians typically adopt a new technology once they observe the results of a randomized controlled trial (RCT) showing that it improves patient outcomes. An RCT of algorithms for heart attacks is currently underway, which bodes well for the future adoption of these algorithms.
What do people get wrong about AI?
One misconception concerns what AI means for the future of work: AI systems are complements to, not substitutes for, clinicians. Another misconception is that these technologies are neutral and 100% accurate, when in fact they can only use the data they are given. Imperfect data can produce disparate results across groups and unexpected or unintended output. More specifically, algorithmic predictions are learned from a training data set, so for people from regions, demographic groups, or time periods that the training data do not cover, those predictions may be inaccurate. A misconception health economists may hold is that AI is overhyped; in reality, it is already in the early stages of changing many aspects of health care. In terms of research opportunities, AI opens the door to many new topics of study, and it also provides new tools for studying old problems.
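The point about training data can be made concrete with a toy simulation (a hypothetical sketch, not an example discussed at the plenary; all variable names and numbers are invented): a crude decision rule fit on one simulated patient population loses its accuracy when applied to a second population where the biomarker-outcome relationship differs.

```python
# Hypothetical illustration of distribution shift, not from the panel:
# a rule trained on population A degrades on population B.
import numpy as np

rng = np.random.default_rng(0)

def make_population(n, risk_slope):
    """Simulate patients with one biomarker x; disease risk depends on x
    with a population-specific slope (a stand-in for regional or
    demographic differences in how the biomarker maps to outcomes)."""
    x = rng.normal(size=n)
    p = 1 / (1 + np.exp(-risk_slope * x))
    y = (rng.random(n) < p).astype(int)
    return x, y

def accuracy(x, y, thr):
    """Share of patients correctly classified by the rule 'flag if x > thr'."""
    return np.mean((x > thr).astype(int) == y)

# "Train" on population A, where a high biomarker signals disease.
x_a, y_a = make_population(5000, risk_slope=2.0)
threshold = np.median(x_a[y_a == 1])  # crude rule: flag high-biomarker patients

# Deploy on population B, where the biomarker-disease link is reversed.
x_b, y_b = make_population(5000, risk_slope=-2.0)

acc_a = accuracy(x_a, y_a, threshold)  # decent on the training population
acc_b = accuracy(x_b, y_b, threshold)  # worse than chance on population B
```

The rule is not wrong in any absolute sense; it simply encodes a relationship that held in the data it was given, which is exactly the panel's caution about applying predictions outside the training data's coverage.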
What’s the optimal use of AI?
First, AI should be used as a guide rather than a crutch. It is also important to recognize that different types of AI carry different levels of risk and should be regulated accordingly. The lion’s share of machine learning work is predictive rather than causal in nature, which suggests it may be easiest to apply to tasks like diagnostic decisions or detecting heterogeneous treatment effects.
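The predictive-versus-causal distinction can likewise be sketched with a hypothetical simulation (again, invented for illustration, not presented at the plenary): a naive comparison of treated and untreated patients is a fine predictor of outcomes by treatment status, yet gets the causal sign wrong when sicker patients are more likely to be treated.

```python
# Hypothetical confounding example, not from the panel: prediction
# (association) and causation diverge when treatment is targeted.
import numpy as np

rng = np.random.default_rng(1)
n = 20_000

# Sicker patients (higher severity) are more likely to be treated.
severity = rng.normal(size=n)
treated = (rng.random(n) < 1 / (1 + np.exp(-2 * severity))).astype(float)

# True causal effect of treatment: improves the outcome by +1.0 units.
outcome = -2 * severity + 1.0 * treated + rng.normal(size=n)

# Naive association: treated patients look WORSE, because they were sicker.
naive_effect = outcome[treated == 1].mean() - outcome[treated == 0].mean()

# Adjusting for the confounder (OLS on treatment and severity)
# recovers the true +1.0 causal effect.
X = np.column_stack([np.ones(n), treated, severity])
beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
adjusted_effect = beta[1]
```

A purely predictive model happily uses treatment status as a signal of poor outcomes; only the causal question of what treatment *does* requires the adjustment, which is why the two kinds of applications warrant different levels of caution.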
Where will AI be used?
AI is likely to be used in insurance, research, and consumer settings. On the insurance side, patients have historically known much more about their own health than insurers do; if insurers are able to harness the predictive power of AI, that may no longer be true. On the research side, AI opens the door to measuring many more health outcomes than was previously possible, by allowing researchers to use lab and test results directly as outcomes instead of relying on physicians to code outcomes in paperwork. On the consumer side, the predictive power of AI could let individuals enter health symptoms and personal characteristics into a web search and see potential illnesses. Such a tool could cut out a great deal of unnecessary spending if individuals can receive accurate likely diagnoses instead of going to urgent care or the emergency department.