Daniel I. Scully


The Human Touch

One of the best presents I got at Christmas this year was “The Signal and the Noise: The Art and Science of Prediction” by Nate Silver. You may have come across Nate Silver through his successful predictions of recent US Presidential elections. The book is broadly about making predictions in various fields: finance, baseball, politics, and so on. There are many take-home messages on making good predictions, and on avoiding bad ones, but one theme in particular resonated with some of my own thinking.

That point is best made by one of the examples Nate Silver describes in the book: weather prediction.

The US meteorological office, like all weather-predicting agencies, runs very complex, sophisticated simulations of weather systems to make its predictions. These simulations are built using models based on our best understanding of how all the elements that produce the weather behave. They are fed with data about what happened in the past and what is happening now. And they run on some of the world’s largest and fastest supercomputers. Every year the accuracy of those models improves, and so, therefore, does their prediction of whether or not you should take an umbrella out.

But despite how sophisticated those simulations are, at the end of it all a person sifts through all the predictions and changes them.

Not by much, and not too often. But that person knows the weather the model has predicted in the past, and where it got it wrong. That person understands the eccentricities of the model, the biases of the model, the failings of the model, and makes the smallest of corrections to account for them, nudging the forecast towards the weather they know is more likely to be right.

I found that quite surprising, as do many of my physics colleagues when I tell them (though non-scientists have, on the whole, been less surprised). Because in physics there is often a drive to remove the human from the equation. We often hear scary tales of how, even unwittingly, people can bring biases to the analyses they’re doing. So to hear that a person would be let loose to change what the pure, unemotional computer has predicted is genuinely unexpected.

But the US meteorological office can measure the effect that their expert has, and they can demonstrate that the expert is, in fact, improving the accuracy of the predictions.
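How might one measure that? A minimal, purely illustrative sketch (all numbers invented, not real forecast data) is to score both the raw model forecasts and the hand-corrected ones against what actually happened, using something like mean absolute error:

```python
def mean_absolute_error(predictions, observations):
    """Average absolute difference between predicted and observed values."""
    return sum(abs(p - o) for p, o in zip(predictions, observations)) / len(predictions)

# Hypothetical daily rainfall figures (mm): the model's forecasts, the same
# forecasts after a forecaster's small hand corrections, and the outcomes.
model_forecast    = [5.0, 0.0, 12.0, 3.0, 8.0]
expert_forecast   = [4.0, 1.0, 10.0, 3.0, 7.0]   # small nudges only
observed_rainfall = [3.5, 1.5,  9.0, 3.0, 6.5]

model_error  = mean_absolute_error(model_forecast, observed_rainfall)   # 1.50 mm
expert_error = mean_absolute_error(expert_forecast, observed_rainfall)  # 0.50 mm

print(f"model MAE:  {model_error:.2f} mm")
print(f"expert MAE: {expert_error:.2f} mm")
```

In this toy example the expert's nudged forecasts score better than the raw model's; tracked over enough days, a comparison of this kind is what lets an agency demonstrate that the human touch is genuinely helping rather than hurting.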

I personally feel there’s an important lesson here. One about understanding what computers and simulations are good for and, at the same time, understanding what human beings are good for. Computer simulations are unimpeded by preconceptions, but they are also unaided by physical intuition. They don’t know a stupid answer when they calculate it.

In physics we know well what the computer models are good for, but I think we could do with being a little more aware of what the human is good for, and not try so hard to exclude human input from the process.