Classical Statistical (In-Sample) Intuitions Don't Generalize Well: A Note on Bias-Variance Tradeoffs, Overfitting and Moving from Fixed to Random Designs

Double descent, Bias-variance tradeoff, Overfitting

We highlight that statistics has historically focused on fixed-design settings (Rosset & Tibshirani, 2019), in which in-sample prediction error is the quantity of interest, whereas modern machine learning evaluates its predictions by their generalization error, i.e. out-of-sample prediction error – and this seemingly small change has surprisingly far-reaching consequences for textbook intuitions.
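To make the fixed-design versus random-design distinction concrete, the following is a minimal simulation sketch (an illustration under our own assumptions, not code from the paper): we fit ordinary least squares and compare in-sample prediction error, evaluated on the same design matrix with fresh noise, against generalization error, evaluated on freshly drawn covariates.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, sigma = 50, 10, 1.0            # sample size, dimension, noise level (illustrative choices)
X = rng.normal(size=(n, p))          # the fixed design matrix
beta = rng.normal(size=p)            # true coefficients

in_sample, out_sample = [], []
for _ in range(2000):
    # Fit OLS on a fresh noisy response over the fixed design
    y = X @ beta + sigma * rng.normal(size=n)
    bhat = np.linalg.lstsq(X, y, rcond=None)[0]

    # Fixed design: evaluate on the SAME X with fresh noise (in-sample prediction error)
    y_new = X @ beta + sigma * rng.normal(size=n)
    in_sample.append(np.mean((X @ bhat - y_new) ** 2))

    # Random design: evaluate on freshly drawn covariates (generalization error)
    X_new = rng.normal(size=(n, p))
    y_out = X_new @ beta + sigma * rng.normal(size=n)
    out_sample.append(np.mean((X_new @ bhat - y_out) ** 2))

print("fixed-design (in-sample) error:  ", np.mean(in_sample))
print("random-design (out-of-sample) error:", np.mean(out_sample))
```

In this setting, classical theory predicts a fixed-design error of roughly sigma^2 * (1 + p/n), while the random-design error is strictly larger because the fitted model must extrapolate to covariate values it did not train on; the simulation makes that gap visible.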