Wrong
David H. Freedman
Little, Brown, 295 pages, $31.99
Government runs on expertise. Government executives seek the best knowledge and information, from experts within and without their departments, before making decisions.
But what if some – or a lot – of that expertise were faulty? What if the recommendations of experts came with a margin of error, and not a small one like that of most opinion surveys, but a sky-high chance of being wrong?
Science and business journalist David Freedman thinks we should be alert to that possibility. In his new book, Wrong: Why Experts Keep Failing Us – And How To Know When Not To Trust Them, he begins with the work of a researcher who has found that the findings in the most influential medical journals prove to be wrong two out of three times. And the author extends that to experts in general: “The fact is, expert wisdom usually turns out to be at best highly contested and ephemeral and at worst flat-out wrong.”
He goes further, suggesting that while the mistakes usually aren’t deliberate, they aren’t wholly innocent either – indeed, we are all complicit. He notes that people often joke that meteorologists are the only people who get paid to be wrong. “I would argue that in that sense most of our experts are paid to be wrong and are probably wrong a much higher percentage of the time than are meteorologists,” he says. “Expert pronouncements are pushed toward wrongness so strongly that in the end it’s harder, I think, to explain why they’re sometimes right. But that doesn’t mean we’re hopelessly mired in the swamp of bad advice. With a decent compass, we can find our way out.”
Admittedly, his book is a journalist’s report, with broad brush strokes from a pastiche of examples and, yes, snippets of research. But then so was Malcolm Gladwell’s The Tipping Point, whose style this book resembles, and like that book it deserves some attention.
A lot of Freedman’s examples focus on expertise transmitted through the media by scientific studies and big-name consultants, often in a flurry of news bites.
That might be called “mass expertise,” but some of those people, and their work, make their way into the funnel for government decisions; health executives may have depended on some of that medical research which was found to be wrong 66.7 percent of the time. Some of that work was even funded by government – the scramble for funding, he shows, often increases the likelihood of going wrong. And if you have ever asked someone briefing you to make their points simpler, you may have contributed to the epidemic of wrong he is cataloguing.
Indeed, it starts with simplicity. We love advice that seems simple. He notes that when it comes to advice on improving our personal productivity, we’ll shun advice offering “the 138 things you might have to do to have some chance of partly achieving your goal, depending on which of these following 29 conditions best describe you and your situations.” Too many things to contemplate. “Instead, we look for the 12 steps, the seven habits, and, of course, the secret – that one-step recipe that enables any person to achieve any type of success under any conditions.”
Here are some other characteristics he notes we crave in expert advice:
- Clear-cut: We prefer to be told the right answer without ifs, ands or buts. “Qualifications that require matching different answers to different conditions, or that may render the advice entirely inappropriate to some situations, are an unwelcome complication and make the advice seem less fundamental,” he writes. We can thoughtlessly assume that an expert who hedges his bets really isn’t on top of the matter – if he were, he would be more clear-cut.
- Doubt-free: We expect experts to transmit their advice with full confidence. So doubts aren’t usually shared, important as they may be.
- Universal: This may be the age of customization, but we prefer one-size-fits-all advice. It is easier to apply and, ironically, has the ring of truth.
- Upbeat: We don’t want to hear that something can’t be fixed, or that the solutions are elusive, murky long shots, difficult compromises or unpleasant to implement. We prefer positive advice.
- Actionable: Expert findings must go beyond explaining things to helping us improve the situation. As Woody Allen said, “you want to feel you can control things to some degree, because if you can’t, life is scarier.”
- Palatable: We are loaded with biases and beliefs, and are likely to reject advice that challenges our ingrained beliefs, no matter how well grounded the advice may be.
He adds those all up into something he calls the certainty principle: “We’re heavily biased to advice that is simple, clear-cut, actionable, universal and palatable. If an expert can explain how any of us is sure to make things better via a few simple, pleasant steps, then plenty of people are going to listen.”
That applies most heavily, of course, to diet crazes or investment tips. But it also fits the various forms of advice government executives receive from experts – and the advice experts within government sense they must give to be heard. When the advice is simple, doubt-free, universal, upbeat, actionable and palatable – when it hews to the certainty principle – we are more likely to respond positively than when it doesn’t.
The book also sifts through a variety of reasons why medical and scientific studies go bad. Of importance to government executives is the fact that the fight for funding by researchers can lead them to twist their study – and even at times their data – to get the all-important grant upon which their careers depend. Inconvenient data gets tossed out. A variety of analytical techniques might be tried until one, no matter how exotic, declares the study valid and spits out some appealing results.
Since journals tend to publish only positive findings – studies whose results back the study’s hypothesis – those that convey negative results are shoved into an office drawer, even though they may be important for our broader understanding of an issue. Too many of those negative findings, of course, and the researcher can end up languishing in no-funding land, so the motivation to fudge the research grows.
Freedman suggests you be extra wary when advice coming to you displays the following characteristics:
- It’s simplistic, universal, and definitive: experts often operate at the very boundary of the unanalyzable. They aren’t tackling the easy stuff. Their advice, if decent, he says, “will be complex, it will come with many qualifications, and it will be highly dependent on conditions. Because of the ifs, ands, or buts, it will be difficult to act on. Because our beliefs tend to be simplistic and optimistic, it will probably be incompatible with them. In other words, good expert advice will be at odds with every aspect of the sort of advice that draws us to it.”
- It’s supported by only a single study, or many small or less careful ones, or animal studies: all those are common flaws with medical and scientific studies that are later proved wrong. Humans are not like rats or other animals, he says, dismissing those studies.
- It’s groundbreaking: most expert insights that seem novel and surprising are based on a small number of less rigorous studies, because usually the rigorous studies only come after the smaller ones pave the way.
- It’s pushed by people or organizations that stand to benefit from its acceptance. That’s something government executives know. Indeed, he declares that “government-sponsored research