The Prediction-Explanation Fallacy: A Pervasive Problem in Scientific Applications of Machine Learning

Authors

  • Marco Del Giudice

Abstract

I highlight a problem that has become ubiquitous in scientific applications of machine learning and can lead to seriously distorted inferences. I call it the Prediction-Explanation Fallacy. The fallacy occurs when researchers use prediction-optimized models for explanatory purposes without considering the relevant tradeoffs. This is a problem for at least two reasons. First, prediction-optimized models are often deliberately biased and unrealistic in order to prevent overfitting; in other cases, they have an exceedingly complex structure that is hard or impossible to interpret. Second, different predictive models trained on the same or similar data can be biased in different ways, so that they may predict equally well but suggest conflicting explanations. Here I introduce the tradeoffs between prediction and explanation in a non-technical fashion, present illustrative examples from neuroscience, and end by discussing mitigating factors and methods that can help limit the problem.
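
As a minimal, hypothetical sketch of the second point (not drawn from the article itself), the Python snippet below fits two regularized regression models, scikit-learn's Ridge and Lasso, to synthetic data with two highly correlated predictors of which only one truly drives the outcome. The variable names, regularization strengths, and data-generating setup are all illustrative assumptions; the point is only that the two models can predict about equally well while attributing importance to the predictors in conflicting ways.

```python
# Illustrative sketch: equally predictive models, conflicting "explanations".
import numpy as np
from sklearn.linear_model import Ridge, Lasso
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 200

# Two highly correlated predictors; only x1 truly drives the outcome.
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.1, size=n)      # near-duplicate of x1
y = 2.0 * x1 + rng.normal(scale=1.0, size=n)
X = np.column_stack([x1, x2])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Ridge tends to spread weight across correlated predictors;
# Lasso tends to zero one of them out. Both penalties bias the coefficients.
ridge = Ridge(alpha=10.0).fit(X_tr, y_tr)
lasso = Lasso(alpha=0.1).fit(X_tr, y_tr)

for name, model in [("ridge", ridge), ("lasso", lasso)]:
    print(name,
          "R2 =", round(r2_score(y_te, model.predict(X_te)), 3),
          "coefs =", np.round(model.coef_, 2))
# Out-of-sample accuracy is nearly identical, yet the coefficient patterns
# suggest different stories about which predictor "matters".
```

Read as an explanation of the system, either set of coefficients would be misleading on its own, which is precisely the tradeoff the abstract refers to.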