Nikhil Agarwal, Alex Moehring, Pranav Rajpurkar, and Tobias Salz
May 2023
Although artificial intelligence (AI) algorithms have matched the performance of human experts on several predictive tasks, humans may have access to valuable contextual information that is not incorporated into AI predictions. Humans who combine AI predictions with their own information could therefore outperform both humans alone and AI alone. Using an experiment on professional radiologists that varies the availability of AI support and contextual information, the authors show that (i) providing AI predictions does not uniformly increase diagnostic quality, and (ii) providing contextual information does increase quality. The authors find that radiologists do not realize the potential gains from AI assistance because of large deviations from the benchmark Bayesian model with correct belief updating. Radiologists' errors in belief updating can be explained by a model in which they partially underweight the AI's information relative to their own and do not account for the correlation between their own information and the AI's. The authors then design a collaborative system between radiologists and AI. The results show that, unless the mistakes the authors document can be corrected, the optimal solution involves delegating cases either to humans or to AI, but rarely to a human assisted by AI.
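To make the two belief-updating deviations concrete, the following is a minimal, hypothetical sketch in a stylized Gaussian-signal setting. It is not the paper's actual model or data: the correlation, noise parameters, and the shrinkage factor are assumptions chosen purely for illustration of (a) ignoring the correlation between the human's and the AI's information and (b) underweighting the AI signal.

```python
import numpy as np

# Illustrative sketch only; all parameter values below are assumptions.
rho = 0.6        # assumed correlation between the human's and the AI's signals
sigma_h = 1.0    # assumed noise s.d. of the radiologist's own signal
sigma_ai = 0.8   # assumed noise s.d. of the AI's signal

def bayesian_posterior_mean(s_h, s_ai):
    """Benchmark Bayesian update: optimally combine two correlated, unbiased
    signals of the underlying case severity (diffuse prior, mean normalized to 0)."""
    cov = np.array([[sigma_h**2, rho * sigma_h * sigma_ai],
                    [rho * sigma_h * sigma_ai, sigma_ai**2]])
    w = np.linalg.solve(cov, np.ones(2))  # GLS-style optimal weights
    w = w / w.sum()
    return w @ np.array([s_h, s_ai])

def behavioral_posterior_mean(s_h, s_ai, shrink=0.5):
    """Stylized behavioral update in the spirit of the abstract: the correlation
    between signals is ignored, and the AI signal is further underweighted."""
    prec_h, prec_ai = 1 / sigma_h**2, 1 / sigma_ai**2
    w_ai = prec_ai / (prec_h + prec_ai)  # naive precision weighting, no correlation
    w_ai = shrink * w_ai                 # additional underweighting of the AI
    return (1 - w_ai) * s_h + w_ai * s_ai

# Example: the radiologist reads the case as mildly abnormal, the AI as clearly abnormal.
print(bayesian_posterior_mean(0.5, 1.5))    # benchmark combination
print(behavioral_posterior_mean(0.5, 1.5))  # tilted toward the human's own read
```

Under these assumed parameters, the behavioral update stays closer to the human's own signal than the Bayesian benchmark does, which is the qualitative pattern the abstract attributes to radiologists.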