
Dr Graham Desborough

Doctor, writer, mountaineer, photographer. Based in Auckland, New Zealand. My new book is 'How the Brain Thinks'.

How relevant is bias research?

Some of it, very; most of it, not a lot. That is the short answer. But there is money to be made, and power and influence to be gained. First, though, a little history about how it all began.

The popularity of the biases and heuristics approach is due to the usual combination of luck, style and substance. It began when the computer was seen both as an analogy for how the mind works and as a tool to run the complex statistical programmes then used to investigate the mind. The topics chosen for study were ‘designed as much like cocktail party anecdotes as traditional cognitive psychology studies.’ This made them ‘magnets for academic lecturers and textbook writers alike’, and they appealed to students at all levels.

The systematic empirical study of judgment and decision making (JDM), out of which the study of heuristics and biases emerged, developed in the 1960s, when the field of cognitive psychology shifted from the study of motivation to the study of mental activity. Freudian psychology and the behaviourism of B. F. Skinner had ‘lost credibility with almost everyone’, and the computer provided a credible metaphor for mental activity. As the psychologist George Miller said, ‘the mind came in on the back of the machine’. The computer became both an analogue for the way the mind worked and a tool enabling complicated mathematical models that attempted to simulate human information-processing.

Cognitive psychologists involved in JDM research quickly split into two groups. One group ‘took notice of the efforts of economists and statisticians’ and developed the field of preferential choice research. The central questions for these psychologists were: ‘How do people decide on a course of action? How do people choose what to do next, especially in the face of uncertain consequences and conflicting goals? Do people make these decisions rationally? If not, by what psychological processes do people make decisions, and can decision making be improved?’ It was from this group that the research into heuristics developed.

In 1944 von Neumann and Morgenstern published ‘Theory of Games and Economic Behavior’, and this precipitated the development of preferential choice research. A century after Bernoulli, Jeremy Bentham had reintroduced the concept of utility and its objective measurement, and von Neumann and Morgenstern made it ‘respectable again’. The theory appealed partly because it could be tested empirically. This ‘instigated a pattern of psychological experiments in which behavioural deviations from a presumed standard of rationality are considered the “interesting” phenomena to be explained.’ Paul Meehl (1954), whom Kahneman (2011) describes as ‘one of my heroes’, had already produced evidence showing that clinical predictions made by trained professionals were less accurate than statistical predictions, and that when experts examined their results they always thought they had performed better than they actually had. It was later shown that replacing a judge with a statistical model of that judge’s own decisions produced even more accurate results, a phenomenon that came to be called ‘bootstrapping’. (Elstein and Bordage 1998)

This combination of ‘modest performance and robust confidence’ led to ‘research on faulty processes of reasoning that yield compelling but mistaken inferences’. Ward Edwards (1954) introduced Bayesian statistical analysis to psychology: conditional probability provided a ‘normative standard’ against which simple, everyday decisions could be compared, and this led to an interest in the causes of ‘suboptimal performance’. The economist Herbert Simon (1955) introduced the term ‘bounded rationality’: people reason and choose rationally ‘the fast and frugal way’, but ‘only within the constraints imposed by their limited search and computational abilities’. He also introduced the term ‘satisficing’, a blend of satisfying and sufficing, and the concept of ‘simplifying heuristics’ that people could use ‘to cope effectively with their limitations’. (Gilovich and Griffin, 2002; Gigerenzer and Brighton, 2009; Goldstein and Hogarth, 1997)
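To make the idea of a Bayesian ‘normative standard’ concrete, here is a minimal sketch. The scenario and all of the numbers are invented for illustration: a screening test for a rare condition, where intuition typically overweights the test result and neglects the base rate.

```python
# A sketch of the Bayesian 'normative standard' Edwards introduced.
# The numbers are hypothetical: a condition with a 1% base rate,
# tested with 90% sensitivity and 90% specificity.

def posterior(prior, sensitivity, specificity):
    """P(condition | positive test) by Bayes' theorem."""
    p_pos_given_cond = sensitivity
    p_pos_given_healthy = 1 - specificity
    p_positive = prior * p_pos_given_cond + (1 - prior) * p_pos_given_healthy
    return prior * p_pos_given_cond / p_positive

p = posterior(prior=0.01, sensitivity=0.9, specificity=0.9)
print(round(p, 3))  # 0.083 -- far below the intuitive guess of ~0.9
```

The gap between the computed 8.3% and the intuitive ‘about 90%’ is exactly the kind of deviation from the normative standard that became the ‘interesting’ phenomenon to be explained.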

Following on from this, Kahneman and Tversky developed their own perspective out of a conviction that the processes of what they called ‘intuitive judgment’ were not simplified rational models but categorically different. ‘Much of this research has compared intuitive inferences and probability judgments to the rules of statistics and the laws of probability. The student of judgment uses the probability calculus as a standard of comparison much as a student of perception might compare the perceived size of objects to their physical sizes’.

‘Unlike the correct size of objects, however, the ‘correct’ probability of events is not easily defined. Because individuals who have different knowledge or hold different beliefs must be allowed to assign different probabilities to the same event, no single value can be correct for all people. Furthermore, a correct probability cannot always be determined, even for a single person. …probability theory does not determine the probability of uncertain events – it merely imposes constraints on the relations among them. For example, if A is more probable than B, then the complement of A must be less probable than the complement of B.’

‘…A probability measure is defined on a family of events, and each event is constructed as a set of possibilities, such as the three ways of getting a 10 on a throw of a pair of dice’, where ‘…the probability of an event equals the sum of the probabilities of its disjoint outcomes’. (Tversky and Kahneman, 1984)
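The dice example can be checked by direct enumeration, which also illustrates what ‘the sum of the probabilities of its disjoint outcomes’ means:

```python
from fractions import Fraction
from itertools import product

# Enumerate the 36 equally likely outcomes of throwing a pair of dice.
outcomes = list(product(range(1, 7), repeat=2))

# The event 'the total is 10' as a set of disjoint outcomes.
tens = [o for o in outcomes if sum(o) == 10]
print(tens)  # [(4, 6), (5, 5), (6, 4)] -- the three ways of getting a 10
print(Fraction(len(tens), len(outcomes)))  # 1/12
```

Each of the three outcomes has probability 1/36, and because they are disjoint the probability of the event is their sum, 3/36 = 1/12.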

Tversky and Kahneman (1974) initially described three general-purpose heuristics: availability, representativeness, and anchoring and adjustment. It was suggested that these used basic computations that the mind had evolved to make, and were not ‘quick and dirty’ solutions, as they used complex cognitive processes.

But according to the heuristics and biases approach, people using these heuristics made mistakes: they were fallible, and their mistakes arose because the use of heuristics led to certain biases. These biases were ‘departures from the normative rational theory that served as markers or signatures for underlying psychological processes’, or heuristics. In neuroscience, biases like visual illusions occur because of the fallibility of the systems we use when we attend to and perceive a cue or stimulus, and translate it using memory and emotion within the context we find ourselves in.

The field of social psychology also gave a great boost to the heuristics and biases approach. ‘…the social evil with the greatest fascination for social psychologists has always been the combination of stereotyping, prejudice and discrimination, topics to which the heuristics and biases agenda was seen as highly relevant.’ The approach of the school of Nisbett and Ross, termed the ‘errors and biases’ perspective in social psychology, was different in that it was concerned with the ‘causes and consequences of non-optimal reasoning in social life’, producing ‘the fundamental attribution error’, the ‘self-serving bias in attribution’ and the ‘confirmation bias in social interaction’. In this approach the errors and biases are central; they are not studied for clues to the underlying processes, as they are in the heuristics and biases approach. (Gilovich and Griffin, 2002)

The heuristics and biases model as an explanation of rational choice has become embedded in behavioural economics, and has had an impact on political science, law and sociology. Models of spending and investment behaviour developed by Richard Thaler and Cass Sunstein were set out in their book Nudge in 2008. Here observations in the classroom, battlefield and conference room are used to predict the behaviours of the markets in stocks, housing and employment.

The central insight in Nudge is that changing the way information is framed can have a huge impact on the choices people make. On page 190 of my book How the Brain Thinks, I discuss how framing affects perception, and how hard it was to separate framing effects in the early studies of risky choice. I write: ‘if you were told a drug has a 25-50% chance of causing a problem, would you take it? If you were told 50-75% of people who took the drug didn’t develop a problem, would you take it then? Same drug, remember.’
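The two framings in that (hypothetical) drug example describe the same underlying facts, which a quick check of the quoted range endpoints confirms:

```python
# The two framings of the hypothetical drug describe one distribution:
# 'a 25-50% chance of causing a problem' and '50-75% of people who took
# it didn't develop a problem' are complements of each other.
problem_range = (0.25, 0.50)
no_problem_range = (1 - problem_range[1], 1 - problem_range[0])
print(no_problem_range)  # (0.5, 0.75) -- exactly the second framing
```

The information is identical; only the presentation changes, yet the two versions reliably elicit different choices.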

Sonia Sodha, writing that ‘Nudge theory is a poor substitute for hard science in matters of life or death’ in the Guardian in April 2020, was not enthusiastic when she came across the concept at a think-tank seminar some 10 years ago. Her opinion hasn’t changed, although at the time ‘lots of far more eminent people disagreed with me.’

Nudge was released just as the financial crisis of 2008 hit. TALK ABOUT LUCK! Its release ‘was perfectly timed to achieve maximum traction by offering politicians the chance to reap savings through low-cost policy.’ Cass Sunstein was ‘quickly appointed to a senior job in the Obama administration’ and Richard Thaler was awarded the Nobel Prize in Economic Sciences in 2017.

In England, ‘David Cameron set up the behavioural insights team, dubbed the “nudge unit”, led by the psychologist’ David Halpern. This has had a mixed record, with some successes on pensions and tax payments, but has otherwise been ‘a damp squib’.

And then Halpern ‘popped up’ to talk about the UK Government’s pandemic strategy in early March 2020. It was he who ‘first publicly mentioned the idea of “herd immunity” as part of an effective response to Covid-19’. He also ‘favoured delaying a lockdown because of the risk of “behavioural fatigue”, the idea that people will stick with restrictions for only so long, making it better to save social distancing for when more people are infected’.

However, more than 600 behavioural scientists wrote a letter questioning the evidence. ‘A rapid evidence review of behavioural science as it relates to pandemics only fleetingly refers to evidence that extending a lockdown might increase non-compliance, but this turns out to be a study about extending deployment in the armed forces. “Behavioural fatigue is a nebulous concept,” the review’s authors later concluded in the Irish Times.’

A common critique of behavioural economics is that ‘some (not all) members of the discipline have a tendency to overclaim and overgeneralise, based on small studies carried out in a very different context, often on university students in academic settings.’ The point here is that opinion masquerading as research has influenced a very senior member of the UK government, namely Boris Johnson himself.

And then there is the true economics. ‘The Behavioural Insights Team is a multimillion-pound profitable company, which pays Halpern, who owns 7.5% of its shares, a bigger salary than the prime minister. Here lies the potential conflict of interest: someone who contributes to Sage also has a significant financial incentive to sell his wares. It perhaps explains BIT’s bombastic claims – “it’s no longer a matter of supposition… we can now say with a high degree of confidence these models give you best policy,” Halpern claimed in 2018. And: “We make much of the simplicity of our interventions… but if properly implemented, they can have a powerful impact on even our biggest societal challenges.”’

The problem with all forms of expertise in public policy is that it is often the most formidable salespeople, those who claim greater certainty than the evidence allows, who are invited to jet around the world advising governments. But it is the combination of luck, style and substance and the optimism bias of the behavioural tsars that has led them to place too much stock in their own judgment in a world of limited evidence.

‘But the irony for behavioural scientists is that this is a product of them trading on, and falling prey to, the very biases they have made their names calling out.’ One can imagine ‘how easy it might have been for Johnson to succumb to confirmation bias in looking for reasons to delay a lockdown: what prime minister wants to shut down the economy?’

If we start to look at the evidence, social science research has long been thought to be sketchy.

In Science magazine on 23 May 2014: ‘The output of the new batch of replications, published alongside the previous 13 this week in an issue of Social Psychology guest-edited by Nosek and Lakens, is less reassuring. All told, the researchers failed to confirm the results of 10 well-known studies, such as the social psychological effects of washing one’s hands, holding cups of warm or cold liquid, or writing down flattering things about oneself. In another five cases, the replications found a smaller effect than the original study did or encountered statistical complications it did not report. For embodied cognition and also for behavior priming—the study of how exposure to one stimulus, such as the word “dog,” changes one’s reaction to another, such as a photo of a cat—the results are particularly grim. Seven of the replications focused on experiments in these areas, and all but one failed.’

On 28 August 2015, Science magazine again carried some interesting articles and discussion points. I will leave it to the conclusion of the last paper to move us forward:

‘Reproducibility is not well understood because the incentives for individual scientists prioritize novelty over replication. Innovation is the engine of discovery and is vital for a productive, effective scientific enterprise. However, innovative ideas become old news fast. Journal reviewers and editors may dismiss a new test of a published idea as unoriginal. The claim that “we already know this” belies the uncertainty of scientific evidence. ‘

‘Innovation points out paths that are possible; replication points out paths that are likely; progress relies on both.’

‘Replication can increase certainty when findings are reproduced and promote innovation when they are not. This project provides accumulating evidence for many findings in psychological research and suggests that there is still more work to do to verify whether we know what we think we know.’

So, can we trust the science behind bias research? The studies were certainly innovative but there is little evidence of reproducibility, yet….

But I think there is another way to look at why we do things. The two main influences on decision making, well studied by Herbert Simon (1955), are time and context. When these are combined with our innate biology, we can more fully understand why we do the things we do. The first two concepts are covered in my next book, and the biology of the brain that enables us to do this is in my first book, How the Brain Thinks.

Take care out there. The only certainty is that all is not what it seems.

References

Edwards, Ward (1954). ‘The theory of decision making.’ Psychological Bulletin 51: 380-417

Elstein, Arthur S and Bordage, Georges (1998) ‘Psychology of clinical reasoning.’ In Professional Judgment: A Reader in Clinical Decision Making Edited by Jack Dowie and Arthur Elstein. Cambridge, U.K.: Cambridge University Press

Meehl, Paul E (1954). Clinical versus Statistical Prediction: A Theoretical Analysis and a Review of the Evidence Minneapolis: University of Minnesota Press

Gigerenzer, Gerd and Brighton, Henry (2009) ‘Homo Heuristicus: why biased minds make better inferences.’ Topics in Cognitive Science 1: 107–143

Gilovich, Thomas and Griffin, Dale (2002). ‘Introduction – heuristics and biases: then and now.’ In: Heuristics and Biases – The Psychology of Intuitive Judgment Eds Gilovich, Thomas, Griffin, Dale and Kahneman, Daniel. Cambridge: Cambridge University Press

Goldstein, William and Hogarth, Robin M (1997). (Eds) Research on Judgment and Decision Making - Currents, Connections and Controversies Cambridge: Cambridge University Press

Kahneman, Daniel (2011). Thinking, Fast and Slow London: Penguin Group

The quote from George Miller comes from Connolly, T, Arkes, H A, and Hammond, K R (2000) Introduction Judgment and Decision Making: An Interdisciplinary Reader (2nd ed) Cambridge: Cambridge University Press.

Simon, H A (1955). ‘A behavioural model of rational choice.’ Quarterly Journal of Economics 69: 99-118

Tversky, Amos and Kahneman, Daniel (1974). ‘Judgment under uncertainty: heuristics and biases.’ Science 185: 1124-1131. Reprinted in: Judgment and Decision Making - An Interdisciplinary Reader (pp 35-52), Eds: Terry Connolly, Hal R Arkes and Kenneth R Hammond (2000). Cambridge: Cambridge University Press

Tversky, Amos and Kahneman, Daniel (1984). ‘Extensional versus intuitive reasoning: the conjunction fallacy in probability judgment.’ Reprinted in Heuristics and Biases, ibid 
