Behavioural Science at its Worst
As Laura Dodsworth writes in a recent article, the claim by members of SPI-B (the U.K. Government’s Scientific Pandemic Insights Group on Behaviours) that they opposed the use of fear to control public behaviour is demonstrably false. It’s a minuted fact that they advised directly threatening a sizeable proportion of the U.K. population.
It’s disturbing enough that a group of senior academics see fit to deny historical fact in a major medical journal, yet Dodsworth’s other anxiety is even more troubling:
My second significant concern was the astonishing idea that the authors could “leave aside the ethical and political dimensions of this argument”. How can psychologists leave aside the ethical dimensions of using fear, whether for an article or advising Government and drafting the plans in the first place?
The psychologists base their claim that they wouldn’t have recommended threat (even though they did) on a selective reading of the literature:
The scientific literature tells a very different story. It shows that frightening people is generally an ineffective way of persuading them to engage in health protective behaviours.
However, basic facts, most scientific research in the field, and their previously published writings undermine their denial.
Ethics-free behavioural science is impossible if it is used to change behaviour
Professors Reicher, Drury, Michie and West were all members of SPI-B. There is no record of any of them objecting to the use of “hard-hitting emotional… threat”. Yet this is such an extreme recommendation that it’s surely reasonable to conclude that they would have opposed it had they disagreed with it.
The idea that you can separate behavioural science from politics and ethics shows a lack of knowledge of all three disciplines. It is possible to use scientific methods to try to understand human behaviour without making value judgements, but it’s impossible to endorse techniques to change human behaviour without being slap bang in the middle of the ethical arena.
And this, undeniably, is where they sit. They say, for example:
Information is important and must provide clear and specific guidance for exactly what behaviour individuals should adopt to implement social distancing. …
‘Protect each other’ messages should stress how desired behaviours benefit the group and protect its most vulnerable members, including those we love. …
Messages should give clear, specific and calm advice, helping households to plan together how to commit to social distancing. …
Messages should be communicated via professionally designed and appealing mass and social media campaigns.
These are imperative statements. They cannot possibly be ethically neutral. They are central to the authors’ entire position – lay them to one side and there is nothing left.
It is a defining feature of ethics that any intervention in the lives of other sentient beings requires justification. It’s a fundamental component of ethical deliberation. All ethical codes, principles and standards are premised on it.
At least some SPI-B members appear unaware of this elementary fact. Instead, they regard the application of methods of persuasion as an unproblematic technical exercise – a pragmatic approach to behaviour change which has no need to justify its goals, need take no account of the diversity of human values, can ignore established principles of applied ethics and – incomprehensibly – is able to overlook the complexities of human psychology. Like Dodsworth, I find this extraordinary.
A similar document by other behavioural scientists – MINDSPACE – published by the Cabinet Office, shows equally questionable assumptions. A wordsearch of the MINDSPACE document for ‘ethical’, ‘ethics’ and related terms returns no results. Reicher et al.’s work is equally free from mention of ethics. MINDSPACE does raise what it calls the “moral hazard problem”:
If we think the state is making decisions for us, we may absolve ourselves of the responsibility to take charge of our own behaviour.
This is a bit rich, since behavioural scientists of this ilk don’t trust us to take charge of our behaviour in any case. The entire point of MINDSPACE – indeed the entire point of applying behavioural science at all – is to make us think and act in ways we otherwise would not.
This is manifestly not an impartial endeavour, yet Susan Michie, one of the SPI-B group, explicitly believes it is. According to her, it is sufficient merely to provide the conditions most likely to achieve a “specified behavioural target”. She sees her job as nothing more than creating “reliable classifications” to help identify which “modes of delivery” (MoDs) will work best in which circumstances:
By providing greater clarity about how an intervention and its components are delivered, researchers can add to knowledge as to how MoDs influence intervention effectiveness, both directly and in interaction with other intervention-related entities. This will inform the selection of appropriate MoDs for interventions.
Value judgements, ethical deliberation and goal justification are simply not in the picture. All that matters is whether a strategy will change whatever behaviour happens to be in the scientists’ sights.
It’s difficult to understand how anyone can maintain this point of view. It contradicts a fundamental tenet of Western thought, namely that ethically aware human beings are bound to consider the moral calibre of both the goals and the methods of their actions.
It’s almost universally accepted that the first question that should be asked by anyone with moral conscience is “should this be done?”, not “does this work?” It is no small matter that a group of influential behavioural scientists do not appear to recognise this.
Other behavioural scientists are more perceptive, suggesting moral limits to behavioural science interventions. Dr. Helena Rubinstein writes:
In the past, there has been public unease about the use of the psychological sciences in the commercial sector. This produced a backlash against apparently subversive techniques, but also resulted in the setting up of a code of practice supported by the industry.
The same may be needed with respect to the use of behavioural science. As a starting point, we suggest the following guidelines:
1. Behavioural interventions built on untruths are unacceptable.
2. Nudges that make it difficult for people to choose otherwise are unethical: people must have the freedom to choose differently.
3. Behavioural interventions should be scrutinised for unintended, as well as intended, consequences.
4. Consent should not be hidden: interventions should be transparent wherever possible.
5. Practitioners should be comfortable to defend their approach, methods and motives in public.
Unlike Michie and her colleagues, Rubinstein understands that there is a profound ethical difference between exercising methods to change a person’s behaviour if he or she requests help, and exercising methods to change a person’s behaviour without being asked.
The scientific literature shows that using fear to change behaviours can be an effective strategy
Reicher et al. cite a single paper in support of their claim that frightening people is an ineffective means of persuasion. Unfortunately, they do not seem to have read it fully. Despite what they suppose it says, it affirms that threatening people does work so long as you enable them to act to mitigate the perceived threat:
Current evidence shows that information about the severity of possible negative consequences from risk behaviour may prompt defensive responses. These counterproductive responses may be avoided by providing instruction on how to successfully implement the recommended actions as well as convincing people that they are personally susceptible to the threat.
This is precisely the approach which underpins the fear tactics the group advised and now want to deny – threats do work given certain conditions; therefore, threatening people can be considered a legitimate behavioural science strategy. This conclusion is further supported by a massive meta-analysis which for some reason they overlooked:
Fear-based appeals appear to be effective at influencing attitudes and behaviors, especially among women, according to a comprehensive review of over 50 years of research on the topic, published by the American Psychological Association.
“These appeals are effective at changing attitudes, intentions and behaviours. There are very few circumstances under which they are not effective and there are no identifiable circumstances under which they backfire and lead to undesirable outcomes,” said Dolores Albarracin, PhD, Professor of Psychology at the University of Illinois at Urbana-Champaign.
The authors’ argument contradicts their other work
Reicher et al. maintain that it’s advisable to: “Avoid authoritarian messages: Messages based on coercion and authority can in some circumstances achieve large changes in the short term but can be hard to sustain in the longer term.”
Yet this is the polar opposite of what they say in other published work. Susan Michie, for example, has extensively promoted a “behaviour change wheel” (BCW) which she describes as “a new method for characterising and designing behaviour change interventions”.
At the hub of her wheel there are “three essential conditions: capability, opportunity, and motivation”, “nine intervention functions aimed at addressing deficits in one or more of these conditions” and “seven categories of policy that could enable those interventions to occur”.
These are summarised graphically in the behaviour change wheel diagram, with the intervention functions shown in red and the policy categories in grey.
The idea is that any behaviour requires capability, opportunity and motivation. If you want to change a behaviour you can use a range of interventions and broader policies. Taken in isolation this makes some sense, but when the BCW is examined in conjunction with the group’s advice on “harnessing behavioural science in public health campaigns” it becomes absurd.
Michie and her collaborators urge caution about implementing restrictive measures for epidemics. Yet this, unambiguously, is what her BCW advocates: “restrictions”, “persuasion”, “modelling”, “regulation”, “training” and “coercion” are all essential tools of this type of applied behavioural science.
In previous publications they consider coercion a useful manipulative tool; in their denials they don’t: “We recommend coercion. We don’t recommend coercion.”
This is classic doublethink – the acceptance of contrary opinions or beliefs at the same time. In case there’s any doubt, coercion (which they favour) involves threatening people (which they claim they didn’t want to do):
Coercion involves compelling a party to act in an involuntary manner by the use of threats, including threats to use force against that party. It involves a set of forceful actions which violate the free will of an individual in order to induce a desired response.
(Coercion is the) use of force or intimidation to obtain compliance.
(Coercion) occurs if one party intentionally and successfully influences another by presenting a credible threat of unwanted and avoidable harm so severe that the person is unable to resist acting to avoid it.
Apparently they have left memory, logic and reason to one side as well as ethics.
Their view of ‘empowerment’ is twisted
The scientists say they really wanted to ‘empower’ us:
This emphasis on empowerment is even clearer when one looks across the corpus of SPI-B reports. It reflected a conception of the public as an asset rather than an impediment in the pandemic.
The advice was to engage with the public and focus on supporting them in doing the right thing rather than assume they need frightening and coercing in order to stop them from doing the wrong thing. This is particularly clear in another report of April 3rd 2020 on “harnessing behavioural science to maintain social distancing” (subsequently published as a journal article).
Among the key principles set out in the paper were the need to avoid authoritarian messaging based on coercion, an emphasis on enabling behaviour rather than the use of punishment or castigation, and the need to engage with communities in order to co-design interventions with them as opposed to imposing interventions upon them.
SPI-B did indeed recommend giving people control, but not because they consider personal autonomy an intrinsic human good; rather, because limited freedom of choice is thought to increase compliance with ‘doing the right thing’. In normal parlance, if you empower individuals, you enable them to choose for themselves. In SPI-B speak, ‘empowerment’ means ‘enabling them to do the right thing’ as defined by ‘experts’ who self-evidently know best.
It’s very hard to see how fostering population-wide guilt, running nationwide mass media propaganda campaigns and targeting potentially recalcitrant groups of people can be described as empowering under any established use of the term.
They believe the only thing that matters is that a behavioural intervention works
Every moderately educated school student understands that ‘the right thing to do’ is usually open to interpretation. In a diverse society ‘doing the right thing’ can have a different meaning for different people.
‘Doing the right thing’ can mean protesting authoritarian restrictions on free movement and assembly, or writing articles critical of Government policy, for example. How can university professors fail to understand that, in a social context, ‘doing the right thing’ is a contested concept?
Michie laments that the U.K. Government failed to use key elements of the BCW (even though it did use them):
Just by identifying all the potential intervention functions and policy categories this framework could prevent policy makers and intervention designers from neglecting important options. For example, it has been used in U.K. parliamentary circles to demonstrate to Members of Parliament that the current U.K. Government is ignoring important evidence-based interventions to change behaviour in relation to public health. By focusing on environmental restructuring, some incentivisation and forms of subtle persuasion to influence behaviour, as advocated by the popular book ‘Nudge’, the U.K. Government eschews the use of coercion, persuasion or the other BCW intervention functions that one might use. (My italics)
In other words, government is too flaky. It should use whatever might work from the BCW smorgasbord without worrying about ethical niceties:
The BCW… forms the basis for a systematic analysis of how to make the selection of interventions and policies. Having selected the intervention function or functions most likely to be effective in changing a particular target behaviour, these can then be linked to more fine-grained specific behaviour change techniques (BCTs).
Any one intervention function is likely to comprise many individual BCTs, and the same BCT may serve different intervention functions… Thus, the BCW approach is based on a comprehensive causal analysis of behaviour and starts with the question: “What conditions internal to individuals and in their social and physical environment need to be in place for a specified behavioural target to be achieved?” (my italics)
As Laura Dodsworth notes, Michie and her colleagues do not exclude any interventions – if an intervention will change a behaviour they can see no reason not to implement it. If you’re oblivious to ethical considerations then what is to stop you advising the use of fear?
No-one asked them to manipulate our behaviours
The collaborators are proud of their controlling tactics:
As a group of behavioural and social scientists who have shared their advice with Government through the U.K.’s Government Office for Science, we have collaborated to develop a series of principles to inform interventions to promote whole population adherence to social distancing measures.
Their paternalistic assumptions are overwhelming. Perhaps the biggest is that the ‘whole population’ needed coercing to ‘adhere’ to rules we had no part in making, were not consulted about, and for which there was no clear evidence:
Airborne transmission of COVID-19 is highly random and suggests that the two-meter (6 foot) rule was a number chosen from a risk ‘continuum’, rather than any concrete measurement of safety.
Early social distancing is either doing nothing or making things worse. This is likely because the virus spreads mainly in hospitals, care homes and private homes rather than in the community, so social distancing of the wider population beyond a basic minimum (washing hands, self-isolating when ill, not getting too close, and so on) has little impact. The countries with the highest death tolls are often those which fail to protect their care homes adequately, with up to 82% of COVID-19 deaths occurring among care home residents.
SPI-B uncritically accepted the need for social distancing, made no recommendation to educate the public about the actual data, and had no interest in enabling people either to dissent or act on any alternative view. Informed consent was simply not a consideration.
What does this ‘behavioural science’ amount to?
Having read other publications by the main authors, and considered similar behavioural science writings, I conclude that the only possible answer is ‘not very much’. As far as I can see, at least in the hands of Michie and her colleagues, the primary function of applied behavioural science is to manipulate us into acting in ways we otherwise would not.
It assumes a crude distinction between ‘doing the right thing’ and ‘doing the wrong thing’ which the manipulators think requires neither explanation nor justification.
It also assumes that those of us not equipped or not willing to do ‘the right thing’ need to be persuaded to do it, using crude psychological devices. The use of threat, coupled with Orwellian ‘empowerment to comply’, is one such device.
I cannot find any more substance than this. If I’m mistaken, I’m open to a mutually informative discussion in public.
Finally, on a personal note: I published a frustrated analysis of Covid evidence and policy in September 2020, both mystified by and critical of the woeful thinking and insidious coercion we were all subject to then, and my advice to the behavioural scientists and the many others addicted to control remains the same four years on.
The public is neither an ‘asset’ nor an ‘impediment’. We are knowledgeable human beings able to make our own choices given the chance. We need unfiltered information, unbiased support to process it if necessary, and practical democratic mechanisms that allow us to respond – free from coercion – to policies that affect our daily lives.
See more here: Daily Sceptic
Comments

Wisenox: They aren’t concerned about using a fake virus to push enslavement, but they think we should “protect” the vulnerable? Notice that they didn’t care that the fake virus was asymptomatic and had a fake fatality rate less than 1%… they are pushing an agenda, not truth.

Cal Aylmer: There are two kinds of people in this world: those who want to be left alone, and those who cannot leave others alone.

Neurocrat: Had there been an actual, highly deadly disease, their behavioural science, meddling, pushing, threatening and fear mongering would have been unnecessary.