Noise Audible Audiobook – Unabridged
THE INTERNATIONAL BESTSELLER
From the world leaders in strategic thinking and the multi-million-copy bestselling authors of Thinking, Fast and Slow and Nudge, the next big book to change the way you think.
We like to think we make decisions based on good reasoning, and that our doctors, judges, politicians, economic forecasters and employers do too. In this groundbreaking book, three world-leading behavioural scientists come together to assess the last great fault in our collective decision-making: noise.
We all make bad judgements more often than we think. Noise shows us what we can do to make better ones.
PLEASE NOTE: When you purchase this title, the accompanying PDF will be available in your Audible Library along with the audio.
- Get this audiobook free, then 1 credit each month, good for any title you like - yours to keep, even if you cancel
- Listen all you want to the Plus Catalogue—a selection of thousands of Audible Originals, audiobooks and podcasts, including exclusive series
- Exclusive member-only deals
- $16.45 a month after 30 days. Cancel anytime
Listening Length: 14 hours and 6 minutes
Author: Daniel Kahneman, Olivier Sibony, Cass R. Sunstein
Narrator: Daniel Kahneman, Olivier Sibony, Todd Ross
Whispersync for Voice: Ready
Audible.com.au Release Date: 18 May 2021
Best Sellers Rank: 885 in Audible Books & Originals
- 5 in Forecasting & Strategic Planning
- 6 in Business Systems & Planning (Books)
- 6 in Strategic Business Planning
Top reviews from Australia
As others have said, it's verbose. It starts by drawing a distinction between noise and bias, then continues with nearly 400 pages on noise. It would have been a better book if it included both noise and bias in that number of pages.
Top reviews from other countries
And so to Noise, a book that, we are told, is designed to offer suggestions for the improvement of human judgement. As for noise itself, we are told in the book that noise is about statistical thinking. We are also told that noise is a distinct source of error, that "the scatter in the forecasts is noise", and that whenever we observe noise we should work to reduce it. However, we are also told that noise is invisible and embarrassing.
Noise occurs because people are idiosyncratic: they inhabit different psychological spaces; their moods are triggered by unique sets of contexts; they see and respond to the evidence in different ways. Not to mention their unconscious responses to particular cues. (In many respects these are seemingly the same things that trigger biases, and we are told, rather confusingly, that "psychological biases create system noise when many people differ in their biases.") We enter a convoluted vortex: biases cause noise; where there is noise (invisible) there will surely also be more biases at work; the two, it seems, exist in a relationship characterised by their mutual and continuous interruption of each other. And no clear sense is given of how one should go about unpicking them.
Surprise, surprise, the authors pay passing homage to prediction markets, of which they say: "much of the time prediction markets have been found to do very well." Prediction markets in the wild (outside of organisations) have not actually performed very well at all, because they lack insiders and do nothing more than aggregate noise. Their record on political events over the past ten years has been terrible (in the recent Chesham and Amersham by-election in the UK, for example, the Tories were trading at 1.17 on the Betfair betting exchange as polls opened; they lost). A better example, in the context of noise, would have been horse-racing betting markets, which contain lots of noise and bias but display a consistent ability to be predictive, because of the presence of insiders, who cancel out the noise.
Sadly it seems that we have gone back twenty years, to the notion of the jar of sweets and the benefits of aggregating independent judgements. In a nutshell, this book is about 380 pages too long.
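The jar-of-sweets idea the reviewer alludes to is easy to check numerically. A minimal sketch, with entirely made-up numbers: independent, unbiased guesses are averaged, and the standard deviation of the averaged error shrinks roughly as 1/sqrt(n).

```python
import random
import statistics

random.seed(1)
TRUTH = 500  # hypothetical true number of sweets in the jar

def sd_of_averaged_error(n_judges: int, trials: int = 2000) -> float:
    """Std. dev. of the error of the mean of n independent guesses."""
    errors = []
    for _ in range(trials):
        guesses = [random.gauss(TRUTH, 100) for _ in range(n_judges)]
        errors.append(statistics.fmean(guesses) - TRUTH)
    return statistics.pstdev(errors)

for n in (1, 4, 16):
    # error sd should fall by about half each time n quadruples
    print(f"{n:2d} judges -> error sd ~ {sd_of_averaged_error(n):.1f}")
```

Crucially, this only works while the guesses stay independent, which is the reviewer's (and the book's) point about aggregation.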
At the same time, they bring into the discussion some serious tools you won’t even meet until you get to graduate school in statistics, like the “percentage concordant,” which is not some type of supersonic airplane, but a rank correlation type of measure, and even provide a mini-table to move you from percentage concordant (PC) to correlation. The table, by the way, is bogus in the absence of context, as percentage concordant is a construct that I’m willing to bet relies heavily on assumptions that go unmentioned here.
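For readers puzzled by "percentage concordant": under a bivariate-normal assumption there is a closed-form link to the correlation, PC = 1/2 + arcsin(r)/pi. Whether that is the assumption behind the book's mini-table is a guess on my part (the book does not say, which is exactly the reviewer's complaint), but it produces numbers in the right range:

```python
import math

def pc_from_r(r: float) -> float:
    # Percentage concordant assuming jointly normal judgments:
    # PC = 1/2 + arcsin(r)/pi, i.e. Kendall's tau = (2/pi) arcsin(r)
    # rescaled to a probability. The normality assumption is mine,
    # not stated in the book.
    return 0.5 + math.asin(r) / math.pi

for r in (0.0, 0.2, 0.6, 1.0):
    print(f"r = {r:.1f} -> PC = {pc_from_r(r):.0%}")
# r = 0.0 gives PC = 50% (pure chance); r = 1.0 gives PC = 100%
```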
The chapters end with summaries, which was OK for Thinking Fast and Slow, but a bit of an insult when the subject matter is so plain.
The style is pompous and paternalistic.
System 1 and System 2 are parachuted in, but (i) they're barely explained, (ii) that's a theory built to explain bias rather than noise, and (iii) their main role seems to be inviting a celebrity author to the proceedings.
Most annoyingly, terribly little ground is covered in this weighty tome. Gun to my head, I could probably get it all down to one page. Let me try:
1. Noise is just as bad as bias in terms of messing up your results
2. A good way to measure how bad your results are is the mean square error
3. Composition of Mean Square Error:
• Mean square error is made up of Bias and Noise
• Noise is made up of Level Noise and Pattern Noise
• Pattern Noise is made up of Stable Pattern Noise and Occasion Noise
• Level Noise is the kind of noise that comes from the fact that some judges are harsh and some are lenient, so two guys who committed the same crime could get very different punishments.
• Pattern Noise is the kind of noise that comes from the fact that a judge may have a daughter, making him less harsh on young women that remind him of his daughter. He could be a harsh judge who is less harsh on young women who remind him of his daughter; or he could be a lenient judge who is extra lenient on young women who remind him of his daughter.
• Occasion Noise is the kind of noise that comes from the fact that judges are harsher right before lunch. Same judge, same crime, same perpetrator, different outcome, because it was a different occasion.
4. If you ask people to measure something independently from one another, the more the merrier; but if they talk to each other first, then they will amplify errors for a variety of reasons that lead to groupthink
5. Machines beat people when it comes to cutting noise
6. In the quest to limit noise, people can fight back by sticking to simple rules
7. We humans like to build stories after the fact to explain what happened; they’re usually bogus: statistical explanations beat causal explanations
8. Bias can be the source of noise: inconsistency in bias is noise
9. Noise can arise when you’re told to rank things on a scale; to cut noise, it’s better to go ordinal than cardinal
10. To improve judgements you need (i) better judges (ii) a decision process that aggregates in a way that maintains independence among the judges (iii) guidelines (iv) relative rather than absolute judgements
11. There is a place for intuition: it’s got to be brought in at the very end, after all the mechanical work has finished
12. There actually is a place for noise: when people are bound to game the system
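The decomposition in point 3 can be verified in a few lines. A minimal sketch (the bias and noise levels are made-up numbers, not from the book): simulate biased, noisy judgments and check the error equation MSE = bias^2 + noise^2.

```python
import random
import statistics

random.seed(0)
TRUE_VALUE = 100.0
BIAS = 5.0        # hypothetical shared bias across judges
NOISE_SD = 10.0   # hypothetical judge-to-judge scatter

judgments = [TRUE_VALUE + BIAS + random.gauss(0, NOISE_SD)
             for _ in range(10_000)]
errors = [j - TRUE_VALUE for j in judgments]

mse = statistics.fmean(e * e for e in errors)
bias_est = statistics.fmean(errors)       # average error estimates the bias
noise_var = statistics.pvariance(errors)  # variance of errors = noise squared

# The book's error equation: MSE = bias^2 + noise^2
assert abs(mse - (bias_est**2 + noise_var)) < 1e-6
print(f"MSE = {mse:.1f} = {bias_est:.2f}^2 + {noise_var:.1f}")
```

Note that bias and noise contribute symmetrically to MSE, which is the basis for the book's claim (point 1) that noise matters just as much as bias.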
Read something else!
Consider that the following studies listed in the Notes to the Introduction all used p-values:
(2) Child Protection and Child Outcomes: Measuring the Effects of Foster Care
(4) Refugee Roulette: Disparities in Asylum Adjudication
In Chapter 1:
(14) A Survey(!!!) of 47 Judges (dated 1977) (a survey vs. a randomised controlled study)
(16) Extraneous Factors in Judicial Decisions cites a p-value <.0001 on page 5
... and similar p-value references associated with differences and variance in judges' sentencing: related to food breaks, a nearby NFL team winning recently, birthdays, outside air temperature. IMHO, the identification of these explanatory factors based on p-values is bogus and illustrative of John Ioannidis' 2005 paper, Why Most Published Research Findings Are False.
It is disconcerting that these scholar-authors utilise many questionable references to architect a thesis about what is more commonly known as variance. As the normal Gaussian distribution is ubiquitous, one should not be startled that selected ranges within it vary significantly.
Given the presence of uncertainty and the idiosyncrasy and variability of individual experience, human judgments will vary. Human judgment is noisy! DUH!!!
The authors have failed their scholarship and profession.
The basic premise seems to be that decisions have noise in them (duh) and it's important to understand that we should evaluate the decision-making process and not just the outcome. Accuracy, precision, and bias are terms familiar to anyone with a basic understanding of statistics; for others, a couple of early examples focusing on shooting targets easily illustrate the three terms and their differences. The authors keep restating the same concepts in a number of ways for the first 5-6 chapters. And very often, simple observations are turned into very dense phrases without serving any purpose other than trying to sound very academic or scholarly. (For example, "...what they are trying to achieve is, regardless of verifiability, the internal signal of completion provided by the coherence between the facts of the case and the judgement. And what they should be trying to achieve...is the judgement process that would provide the best judgement over an ensemble of similar cases".) Then the authors spend a chapter or two differentiating "predictive" and "evaluative" judgements only to conclude that the difference is "fuzzy" (genius observation) and that a decision will usually require both.
If you are able to grind your way through the first three parts (12 chapters), you will be able to pick up some new insights in Parts IV and V, which discuss how variability/noise occurs and its various sources. Conducting a "noise audit" and what constitutes decision "hygiene" are sections worth reading for those whose roles require constant synthesis of inputs from various experts, sources, stakeholders, etc.
Overall, the unnecessarily dense style that overcomplicates a simple message, the lack of a clear target audience, and a narrative arc that takes too long to deliver new insights or provocative thoughts make this a fairly dull read.