The blogosphere is alive with the sound of Silver – Nate Silver, that is, the head of what should be called the Five Thirty Eight Modeling Agency. Silver constructs statistical models to calculate the probability of electoral outcomes. Though he hasn’t yet shared the details of his model, the election results fit its predictions very well.

Is that the point? Statistical models can be constructed to serve various ends, among which are predictive modeling (“Five Thirty Eight says Obama will win”) and descriptive modeling (“in elections like these, the incumbent wins 35% of the time”). While Silver protested that he wasn’t trying to predict, only to model [at a 538 post I can't find right now...can anyone oblige?], the two often overlap. Many disciplines care about prediction and de-emphasize explanation. (A nice article about the overlap between prediction and explanation is here.)

This difference gets to the heart of how impressive Silver’s feats of statistical strength really were. If he were just predicting, any pocket calculator would do: average the polls and you’d estimate that, in fact, Obama was a few points up. But a poll average sheds no light on how Obama won or on what future candidates are likely to accomplish.
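The "pocket calculator" version of that prediction really is just an average. A minimal sketch, using invented poll margins rather than real 2012 data:

```python
# Toy poll average: the "pocket calculator" version of election forecasting.
# The margins below are made up for illustration, not actual 2012 polls.
polls = [1.0, 2.5, 0.5, 3.0, 1.5]  # candidate's lead in points, one entry per poll

average_lead = sum(polls) / len(polls)
print(f"Average lead: {average_lead:.1f} points")  # prints "Average lead: 1.7 points"
```

Averaging like this predicts the topline outcome tolerably well, but it carries no explanation at all: it says nothing about turnout, demographics, or why the margin is what it is.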

The same difference is also relevant in medicine, and to how doctors explain things to patients. A patient asks me if she should take an aspirin to prevent heart disease. I trot out the often-used Framingham model. Like many models, it’s a little bit country and a little bit rock and roll: on the one hand, it assigns probabilities to certain events, spitting out a 10-year probability that a person will develop heart disease; on the other, it explains heart disease as largely dependent on a handful of underlying factors.
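The real Framingham equations use published, sex-specific coefficients over a larger set of factors; the sketch below is only a model of the same general shape – a logistic risk score with invented coefficients – to show how a few underlying factors get turned into a single 10-year probability:

```python
import math

# A Framingham-style risk score in miniature: a logistic model mapping a few
# underlying risk factors to a 10-year probability of heart disease.
# All coefficients here are hypothetical, chosen for illustration only;
# they are NOT the published Framingham values.
def ten_year_risk(age, systolic_bp, total_chol, smoker):
    score = (-9.0                     # hypothetical intercept
             + 0.06 * age             # hypothetical per-year-of-age weight
             + 0.015 * systolic_bp    # hypothetical blood-pressure weight
             + 0.005 * total_chol     # hypothetical cholesterol weight
             + 0.7 * (1 if smoker else 0))  # hypothetical smoking penalty
    return 1 / (1 + math.exp(-score))  # logistic link: score -> probability

risk = ten_year_risk(age=55, systolic_bp=140, total_chol=220, smoker=True)
print(f"Estimated 10-year risk: {risk:.0%}")
```

The dual character of such a model is visible right in the code: the final probability is the prediction, while the weighted factors inside `score` are the explanation.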

Unfortunately, these models aren’t very satisfying to us or our patients. They don’t predict very well, and their explanations, though sensible, don’t help much in telling patients what might happen to them. At the end of the day, we all have to decide about our health based on incomplete information. If the model says we are at 50% risk of developing heart disease in the next 10 years, we have to interpret this proportion not as an oracular judgment that will chase us like something out of a Greek tragedy, but as the best guess of a predictive model based on the history of populations. We as individuals are always different.

To put it another way: Nate Silver had a relatively easy time of it when you compare his task to the doctor’s. He had oceans of data to swim in to predict (or explain) an outcome that has occurred some 45 times. But for our patients, the most important outcomes happen approximately once, and missing them is deadly.