Imagine that two doctors in the same city give different diagnoses to identical patients, or that two judges in the same courthouse give different sentences to people who have committed the same crime.
Suppose that different food inspectors give different ratings to indistinguishable restaurants or that when a company is handling customer complaints, the resolution depends on who happens to be handling the particular complaint.
Now imagine that the same doctor, the same judge, the same inspector, or the same company official makes different decisions, depending on whether it is morning or afternoon, or Monday rather than Wednesday.
These are examples of noise: variability in judgments that should be identical. In Noise, Daniel Kahneman, Cass R. Sunstein, and Olivier Sibony show how noise contributes significantly to errors in all fields, including medicine, law, economic forecasting, police behavior, food safety, bail, airport security checks, strategy, and personnel selection.
Although noise can be found wherever people make judgments and decisions, individuals and organizations alike are commonly oblivious to the role of chance in their judgments and actions.
Although interesting, the authors clearly show their bias in “Noise”. It was a disappointing read after the incredibly interesting and applicable “Thinking, Fast and Slow”. My main concern is that they imply causation where statisticians would claim no more than correlation; implying causation is sloppy and bad statistical practice.
They are greatly concerned with the randomness of the impacts that judgments, insurance companies, and job interviews have on individual people. Although they state that the overall impact on individuals is fair on average (i.e., unbiased), the impact on any particular individual may have a large variance (be wildly different from the average), which is caused by system noise.
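This unbiased-on-average-but-noisy-for-individuals point can be made concrete with a small simulation. The scenario below is my own hypothetical illustration, not an example from the book: each "judge" has a personal sentencing tendency drawn around zero, so the system as a whole has no bias, yet the outcome of any single case swings widely depending on which judge it happens to draw.

```python
import random

random.seed(0)

# Hypothetical illustration: the "correct" sentence for identical cases.
TRUE_SENTENCE = 24  # months

# 100 judges whose personal tendencies vary around zero
# (between-judge noise, with no systematic bias).
judges = [random.gauss(0, 6) for _ in range(100)]

# Each of 10,000 identical cases is assigned a random judge.
outcomes = [TRUE_SENTENCE + random.choice(judges) for _ in range(10_000)]

mean = sum(outcomes) / len(outcomes)
variance = sum((x - mean) ** 2 for x in outcomes) / len(outcomes)

# The average lands near 24 months (unbiased), but the standard
# deviation of several months means individual defendants receive
# wildly different sentences for the same crime.
print(f"average sentence: {mean:.1f} months")
print(f"std deviation:    {variance ** 0.5:.1f} months")
```

The average being close to the target is exactly the authors' "fair on average" claim; the large spread is the system noise they argue individuals actually experience.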
They assume this is inherently bad and that we should systematically reduce this noise by introducing more algorithms and rules into all sorts of private and public institutions. My concern is: who determines the rules for those algorithms? Unbiased statisticians, or policymakers?
Essentially, this is a long discussion about statistical models that have larger variances than the authors would like (and larger variances than the general population would expect). They use the variability in human judgment to illustrate that humans are flawed.
Their solution is to use more models, yet they also point out that models can be flawed in similar ways. It’s a conflicted book: “don’t trust anyone’s judgment” and “don’t trust models”, but “do trust that unfair things are likely happening to individuals, even if there isn’t any bias in the system.”
It was unfortunate that they didn’t discuss what individuals can do to improve themselves by reducing their own biases and noise, rather than waiting for big institutions to reduce that noise for them.