Loss, Risk and Posterior Distributions
For a while, I’ve been thinking about the deployment of predictive algorithms in clinical decision support. Specifically, about the gap between what we learn about a model’s performance from the publication describing it and how much less informative that can be once the model is deployed. In short: what is the value of knowing that a model has good balanced accuracy or a high area under the ROC curve when sitting with a patient and using the tool to make a clinical decision?
In this post, using R, I attempt a detailed walk-through of implementing common loss/risk functions over posterior distributions. It’s all textbook stuff, but hopefully useful.
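To fix ideas before the walk-through, here is a minimal sketch of the core move: given posterior draws for a patient-level quantity, the posterior expected loss (risk) of each possible decision is just the loss function averaged over those draws. The Beta posterior, the loss costs, and the "treat"/"no treat" actions below are all hypothetical choices for illustration, not taken from the post itself.

```r
set.seed(42)

# Hypothetical posterior for a patient's event probability:
# Beta(4, 16), i.e. roughly 20% with some uncertainty.
theta <- rbeta(10000, shape1 = 4, shape2 = 16)

# A simple (assumed) asymmetric loss: over-treating an event-free
# patient costs 1 unit; leaving an event untreated costs 5 units.
loss <- function(action, theta) {
  if (action == "treat") {
    1 * (1 - theta)  # cost incurred only when no event occurs
  } else {
    5 * theta        # cost incurred only when the event occurs
  }
}

# Posterior expected loss (risk) of each action: mean loss over draws.
risk_treat    <- mean(loss("treat", theta))
risk_no_treat <- mean(loss("no_treat", theta))

c(treat = risk_treat, no_treat = risk_no_treat)
# The decision rule then picks the action with the smaller risk.
```

Note that nothing here depends on summary metrics like balanced accuracy or AUC; the decision comes directly from the posterior and the costs we assign to each kind of error.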