Review Article
Many scenarios exist for selective inclusion and reporting of results in randomized trials and systematic reviews

https://doi.org/10.1016/j.jclinepi.2012.10.010

Abstract

Objective

To collate and categorize the ways in which selective inclusion and reporting can occur in randomized controlled trials (RCTs) and systematic reviews.

Study Design and Setting

Searches of the Cochrane Methodology Register, PubMed, and PsycINFO were conducted in April 2011. Methodological reports describing empirically investigated or hypothetical examples of selective inclusion or reporting were eligible for inclusion. Examples were extracted from the reports by one author and categorized by three authors independently. Discrepancies in categorization were resolved via discussion.

Results

Two hundred ninety reports were included. The majority were empirical method studies (45.5%) or commentaries (29.3%). Eight categories (30 examples) of selective reporting in RCTs, eight categories (27 examples) of selective inclusion in systematic reviews, and eight categories (33 examples) of selective reporting in systematic reviews were collated. Broadly, these describe scenarios in which multiple outcomes or multiple data for the same outcome are available, yet only a subset is included or reported; outcome data are reported with inadequate detail; or outcome data are given different prominence through their placement across or within reports.

Conclusion

An extensive list of examples of selective inclusion and reporting was collated. Increasing trialists’ and systematic reviewers’ awareness of these examples may minimize their occurrence.

Introduction

What is new?

Key findings

  1. An extensive list of categories and examples of selective inclusion and reporting in randomized controlled trials (RCTs) and systematic reviews of RCTs was collated. Few empirical studies investigating the extent of bias associated with selective inclusion or reporting in systematic reviews of RCTs exist.

What this adds to what was known?
  1. To our knowledge, this is the first systematic review of reports describing empirically investigated or hypothetical examples of selective inclusion and reporting in RCTs and systematic reviews of RCTs.

What is the implication and what should change now?
  1. Trialists and systematic reviewers need to be aware of the scenarios in which they may inadvertently introduce bias through selective inclusion or reporting of results.

  2. Increasing trialists' and systematic reviewers' awareness of these examples may minimize their occurrence.

  3. More methodological research is needed to investigate the magnitude of bias resulting from different examples of selective inclusion and reporting, particularly at the systematic review level.

Systematic reviews of randomized controlled trials (RCTs) of health care interventions are used by clinicians to inform their treatment decisions, by clinical practice guideline developers to formulate recommendations, and by funding bodies to determine whether further research is justified [1], [2], [3]. The success of these activities may be compromised when the methods used throughout the review process result in bias, defined as any systematic error that can over- or underestimate an intervention effect [4]. To inform systematic reviewers about methods that minimize bias in the context of systematic reviews, methodologists have developed lists of problematic practices, for example, searching only a single electronic bibliographic database or screening studies for eligibility by only a single reviewer [1], [4], [5], [6], [7], [8]. One such practice that has gained attention in recent years is selective reporting, defined as the selection of a subset of outcomes and analyses to report in a publication [9], [10], [11], [12].

Selective reporting can occur in various ways in both RCTs and systematic reviews of RCTs. In RCTs, examples include the nonreporting of outcomes that have been measured and analyzed or the partial reporting of results (e.g., reporting an effect estimate with no measure of variation when the result is nonsignificant) [10], [12], [13], [14], [15], [16], [17]. When the way in which outcomes and analyses are reported is based on the results (e.g., statistical significance, magnitude, or direction of effect), this is known as selective reporting bias [9], [10], [11]. In systematic reviews, when a multiplicity of outcome data is available in RCTs, systematic reviewers may choose to include only a subset of these data. For example, if data for the outcome depression are reported in a journal article based on two measurement scales, each at three time points, the systematic reviewers may choose to include the data from only one scale at one time point. This practice is not always problematic, such as when the choice of outcome data is prespecified [18], [19]. However, when the choice of outcome data to include is based on the results (which we refer to as “selective inclusion”), this can introduce bias. After inclusion of outcome data, outcomes and analyses may be selectively reported in systematic reviews in the same way as occurs in RCTs (e.g., selecting which outcomes and meta-analytic effect estimates to report based on the results) [20], [21]. Both selective inclusion and reporting may over- or underestimate meta-analytic results [9], [10], [11], [19], limit interpretation, and mislead users about the importance of particular outcomes [13], [21]. Fig. 1 illustrates the levels at which selective reporting in RCTs, selective inclusion in systematic reviews, and selective reporting in systematic reviews can occur. The example depicts a scenario in which multiple measurement instruments of depression are used, with different transformations of the outcomes (final and change-from-baseline values).
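The mechanism by which selective inclusion can bias meta-analytic results is easy to demonstrate numerically. The following simulation is a hypothetical sketch (not part of the original study, and all quantities — six estimates per trial, a null true effect, a standard error of 0.2 — are illustrative assumptions): when each trial yields several noisy estimates of the same effect (e.g., two depression scales at three time points) and the reviewer always includes the most favorable one, the pooled effect is inflated even though the true effect is zero.

```python
import random
import statistics

random.seed(42)

TRUE_EFFECT = 0.0   # the intervention truly has no effect
N_TRIALS = 1000     # trials contributing to a hypothetical meta-analysis
N_ESTIMATES = 6     # e.g., 2 measurement scales x 3 time points per trial

unbiased, selective = [], []
for _ in range(N_TRIALS):
    # six noisy effect estimates of the same (null) true effect
    estimates = [random.gauss(TRUE_EFFECT, 0.2) for _ in range(N_ESTIMATES)]
    # prespecified choice: always the first scale at the first time point
    unbiased.append(estimates[0])
    # result-based choice: always the most favorable estimate
    selective.append(max(estimates))

print(f"prespecified inclusion: mean effect = {statistics.mean(unbiased):.3f}")
print(f"selective inclusion:    mean effect = {statistics.mean(selective):.3f}")
```

Under these assumptions, the prespecified choice averages close to zero, whereas always taking the maximum of six estimates yields a pooled effect noticeably above zero — a spurious benefit produced entirely by the inclusion rule.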

There are many additional ways in which outcomes and analyses can be selectively included or reported [1], [7], [12], [16], [22], [23]. To date, there has been no review of the literature describing these practices. Collating such a list has multiple purposes: it increases trialists' and systematic reviewers' awareness of the possible types of selective reporting that may occur at the RCT level; it highlights how systematic reviewers may inadvertently introduce bias through the selective inclusion of results or misinform users of systematic reviews through selective reporting of results; it helps to identify where empirical research may be required to investigate the prevalence and impact of potential sources of bias; and it guides methodological advice regarding how to minimize these practices. The aims of this research were therefore to (1) collate and categorize the ways in which selective inclusion and reporting can occur in RCTs and systematic reviews and (2) identify the types of selective inclusion or reporting that have been researched in empirical studies investigating such bias. To meet these aims, we conducted a systematic review that included reports describing examples of selective inclusion or reporting. We then categorized the identified examples and made judgments about whether examples reported at one level (e.g., selective reporting in RCTs) could hypothetically apply to other levels (e.g., at the selective inclusion level, the selective reporting level in systematic reviews, or both). It was beyond the scope of this review to synthesize the results of empirical studies investigating selective inclusion or reporting—a systematic review of empirical studies investigating selective reporting in RCTs exists [23], and we are currently synthesizing the results of empirical studies investigating selective inclusion and reporting in systematic reviews [24]. This work will be reported elsewhere.


Eligibility criteria

The following inclusion criteria were used to select reports for the systematic review: (1) the report was (a) a report of an empirical study which investigated the prevalence or impact of a type of selective inclusion or reporting, or the extent of variation in how outcomes in a particular clinical area are measured, analyzed, and reported, in RCTs or systematic reviews of RCTs; (b) a review of such empirical studies; or (c) a statistical methods article or commentary focused on selective inclusion or reporting.

Results

The search retrieved a total of 3,476 citations. A flow diagram of the report selection process, with reasons for exclusion, is presented in Fig. 2. Two hundred ninety reports were included (bibliography available on request). Report types are listed in Supplementary Table 1 in Appendix B at www.jclinepi.com. The most common report type was an empirical study (i.e., cohort, cross-sectional, or case study, n = 132; 45.5%), followed by a commentary (n = 85; 29.3%). Fourteen (4.8%) systematic

Discussion

To our knowledge, this is the first systematic review of methodological reports describing examples of selective inclusion or reporting in RCTs and systematic reviews of RCTs. This work aggregates the ideas of many methodologists and commentators who have discussed these issues. Two hundred ninety reports that described a range of commonly and infrequently discussed examples of these practices were included. We found that the potential bias associated with many examples of selective inclusion or reporting has not been empirically investigated.

Acknowledgments

The authors thank Professor G. Peter Herbison (Dunedin School of Medicine, University of Otago, New Zealand) for participating in the assignment of the example labels to the draft categories and for his helpful discussion about the categories. They also thank Professor Sally Green (School of Public Health and Preventive Medicine, Monash University, Australia) for her helpful comments on the drafts of this article. They also thank the peer reviewers for their valuable comments.

References (100)

  • K.F. Schulz et al. Epidemiology 4—multiplicity in randomised trials I: endpoints and treatments. Lancet (2005)
  • O. Berwanger et al. The quality of reporting of trial abstracts is suboptimal: survey of major general medical journals. J Clin Epidemiol (2009)
  • P.C. Gotzsche. Methodology and overt and hidden bias in reports of 196 double-blind trials of nonsteroidal antiinflammatory drugs in rheumatoid arthritis. Controlled Clin Trials (1989)
  • N. Wiebe et al. A systematic review identifies a lack of standardization in methods for handling missing variance data. J Clin Epidemiol (2006)
  • Higgins JPT, Altman DG, Sterne JAC. Chapter 8: assessing risk of bias in included studies. In: Higgins JPT, Green S,...
  • M. Egger et al. Problems and limitations in conducting systematic reviews
  • F. Song et al. Publication and related biases. Health Technol Assess (2000)
  • F. Song et al. Dissemination and publication of research findings: an updated review of related biases. Health Technol Assess (2010)
  • J.L. Hutton et al. Bias in meta-analysis due to outcome variable selection within studies. Appl Stat (2000)
  • J.J. Kirkham et al. The impact of outcome reporting bias in randomised controlled trials on a cohort of systematic reviews. BMJ (2010)
  • P.R. Williamson et al. Identification and impact of outcome selection bias in meta-analysis. Stat Med (2005)
  • P.R. Williamson et al. Outcome selection bias in meta-analysis. Stat Methods Med Res (2005)
  • A.W. Chan et al. Empirical evidence for selective reporting of outcomes in randomized trials: comparison of protocols to published articles. JAMA (2004)
  • A.W. Chan et al. Outcome reporting bias in randomized trials funded by the Canadian Institutes of Health Research. CMAJ (2004)
  • A.W. Chan et al. Identifying outcome reporting bias in randomised trials on PubMed: review of publications and survey of authors. BMJ (2005)
  • K. Dwan et al. Systematic review of the empirical evidence of study publication bias and outcome reporting bias. PLoS ONE (2008)
  • N. McGauran et al. Reporting bias in medical research—a narrative review. Trials (2010)
  • B. Tendal et al. Disagreements in meta-analyses using outcomes measured on continuous or rating scales: observer agreement study. BMJ (2009)
  • B. Tendal et al. Multiplicity of data in trial reports and the reliability of meta-analyses: empirical study. BMJ (2011)
  • E.M. Beller et al. Reporting of effect direction and size in abstracts of systematic reviews. JAMA (2011)
  • J.J. Kirkham et al. Bias due to changes in specified outcomes during the systematic review process. PLoS ONE (2010)
  • K. Dwan et al. Comparison of protocols and registry entries to published reports for randomised controlled trials. Cochrane Database Syst Rev (2011)
  • M.J. Page et al. Bias due to selective inclusion and reporting of outcomes and analyses in systematic reviews of randomised trials of healthcare interventions (protocol). Cochrane Database Syst Rev (2012)
  • C.B. Begg et al. Publication bias: a problem in interpreting medical data. J Roy Stat Soc A (1988)
  • P.R. Williamson et al. Application and investigation of a bound for outcome reporting bias. Trials (2007)
  • J.M. Bjordal et al. Can Cochrane reviews in controversial areas be biased? A sensitivity analysis based on the protocol of a systematic Cochrane review on low-level laser therapy in osteoarthritis. Photomed Laser Surg (2005)
  • C.A. Silagy et al. Publishing protocols of systematic reviews. Comparing what was done to what was planned. JAMA (2002)
  • Hopewell S, Beller E. Is there any evidence of selective reporting of outcomes in abstracts of Cochrane reviews? Oral...
  • Parmelli E, Liberati A, D'Amico R. Reporting of outcomes in systematic reviews: comparison of protocols and published...
  • R.M. Smyth et al. Frequency and reasons for outcome reporting bias in clinical trials: interviews with trialists. BMJ (2011)
  • M. Koesters et al. Limits of meta-analysis: methylphenidate in the treatment of adult attention-deficit hyperactivity disorder. J Psychopharmacol (2009)
  • J.C. Nunnally et al. Psychometric theory (1994)
  • McKenzie JE. Methodological issues in meta-analysis of randomised controlled trials with continuous outcomes (PhD...
  • A.A. Bartolucci. Meta-analysis: some clinical and statistical contributions in several medical disciplines. Yonsei Med J (2007)
  • S. Gilbody et al. Randomized trials with concurrent economic evaluations reported unrepresentatively large clinical effect sizes. J Clin Epidemiol (2007)
  • K. Hauer et al. Systematic review of definitions and methods of measuring falls in randomised controlled fall prevention trials. Age Ageing (2006)
  • C.J. Punt et al. Endpoints in adjuvant treatment trials: a systematic review of the literature in colon cancer and proposed definitions for future trials. J Natl Cancer Inst (2007)
  • S. Ahmer et al. Do pharmaceutical companies selectively report clinical trial data? Pakistan J Med Sci (2006)
  • M. Marshall et al. Unpublished rating scales: a major source of bias in randomised controlled trials of treatments for schizophrenia. Br J Psychiatry (2000)
  • G. Cordoba et al. Definition, reporting, and interpretation of composite outcomes in clinical trials: systematic review. BMJ (2010)

    Funding: This work was conducted as part of a PhD undertaken by M.J.P., which is funded by an Australian Postgraduate Award administered through Monash University, Australia.

Declaration of interest: The authors declare that they have no conflicts of interest.
