
Risk of bias in randomised controlled trials of health behaviour change interventions

Psychology, experiments, methods

Abstract

The major strength of a randomised controlled trial (RCT) is the degree to which it can establish a causal relationship between an experimental treatment and the outcome (i.e. internal validity), as randomisation should ensure that potential confounders are equally distributed over the treatment and control arms. Internal validity in RCTs can be threatened, however, by multiple sources of bias. For example, poor randomisation procedures (e.g. failure to conceal allocations until treatments have been assigned) can lead to selection bias because more high-risk individuals are selected to receive the experimental treatment. There is a range of potential sources of bias in RCTs, and several well-known tools for assessing the risk of bias provide useful overviews of these sources and of strategies for reducing them (e.g. Guyatt et al., 2011; Higgins, Altman, & Sterne, 2011). These tools are also influential: trials scoring high on risk-of-bias assessments should have a smaller chance of being published and – if published – of being included in ‘best evidence’ systematic reviews (Johnson, Low, & MacDonald, 2015), and thus of their interventions influencing policy and practice.
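To make the selection-bias mechanism concrete, the sketch below (not part of the original editorial; all names and parameter values are illustrative assumptions) simulates what can happen when allocation concealment fails: if higher-risk participants are steered into the experimental arm, the arms no longer have comparable baseline risk, and the naive difference in outcomes mixes the true treatment effect with that imbalance.

```python
# Illustrative sketch only: how broken allocation concealment can bias the
# estimated effect of an intervention. All numbers are made up for demonstration.
import numpy as np

rng = np.random.default_rng(42)
n = 2000                 # participants per simulated trial
true_effect = 0.30       # true benefit of the intervention (outcome units)

def run_trial(concealed: bool) -> float:
    """Return the naive estimate: treatment-arm mean minus control-arm mean."""
    baseline_risk = rng.normal(0.0, 1.0, n)   # prognostic factor; higher = worse expected outcome
    if concealed:
        # Proper randomisation: allocation is independent of baseline risk.
        treated = rng.random(n) < 0.5
    else:
        # Broken concealment: higher-risk individuals are more likely
        # to end up in the experimental arm (selection bias).
        treated = rng.random(n) < 1 / (1 + np.exp(-baseline_risk))
    outcome = true_effect * treated - 0.5 * baseline_risk + rng.normal(0.0, 1.0, n)
    return outcome[treated].mean() - outcome[~treated].mean()

print("concealed allocation:  ", round(np.mean([run_trial(True) for _ in range(200)]), 3))
print("unconcealed allocation:", round(np.mean([run_trial(False) for _ in range(200)]), 3))
# With concealment the average estimate sits near the true 0.30; without it,
# the estimate is pulled downwards because the treated arm starts off at higher risk.
```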

Systematic reviews suggest that many health behaviour change (HBC) trials suffer from a moderate to high risk of bias (e.g. Oberjé, de Kinderen, Evers, van Woerkum, & de Bruin, 2013; Poobalan, Aucott, Precious, Crombie, & Smith, 2009); more so than, for example, drug trials (Crocetti, Amin, & Scherer, 2010). Moreover, risk-of-bias scores have been found to explain heterogeneity in effect sizes, especially in trials with subjective outcome measures (Savovic et al., 2012; Wood et al., 2008). Since replication studies of HBC intervention evaluations are uncommon, invalid inferences due to bias may not be easily discovered and rectified. Despite these concerns, the behaviour change intervention literature has paid little attention to the sources and consequences of bias in trials, and to strategies that may be effective in reducing the risk of bias. This paucity of research on risk of bias in HBC trials is in turn reflected in widely used instruments for assessing the risk of bias, such as the Cochrane risk-of-bias tool (Higgins et al., 2011), which seems to be based mostly on evidence from non-behavioural trials.

The objective of this special issue is to reflect on the evidence, practices and challenges in relation to reducing the risk of bias in HBC trials. We hope that this special issue will both impact scientific practice (i.e. enhance the quality of HBC trial design, reporting and synthesis) and have an agenda-setting effect (i.e. inspire empirical research into which sources of bias actually affect HBC trials, which strategies are effective for mitigating them, and thus which criteria should be used for grading the quality of evidence from HBC trials).

View the full text on the journal website.

