Publicly available. Published by De Gruyter, April 1, 2013.

Transparency in the reporting of in vivo pre-clinical pain research: The relevance and implications of the ARRIVE (Animal Research: Reporting In Vivo Experiments) guidelines

  • Andrew S.C. Rice, Rosemary Morland, Wenlong Huang, Gillian L. Currie, Emily S. Sena and Malcolm R. Macleod

Abstract

Clear reporting of research is crucial to the scientific process. Poorly designed and reported studies are damaging not only to the efforts of individual researchers, but also to science as a whole. Standardised reporting methods, such as those already established for reporting randomised clinical trials, have led to improved study design and facilitated the processes of clinical systematic review and meta-analysis.

Such standards were lacking in the pre-clinical field until the development of the ARRIVE (Animal Research: Reporting In Vivo Experiments) guidelines. These were prompted following a survey which highlighted a widespread lack of robust and consistent reporting of pre-clinical in vivo research, with reports frequently omitting basic information required for study replication and quality assessment.

The resulting twenty item checklist in ARRIVE covers all aspects of experimental design with particular emphasis on bias reduction and methodological transparency. Influential publishers and research funders have already adopted ARRIVE. Further dissemination and acknowledgement of the importance of these guidelines is vital to their widespread implementation.

Conclusions and implications

Wide implementation of the ARRIVE guidelines for the reporting of in vivo pre-clinical research, especially pain research, is essential for a much-needed increase in the transparency and quality of such publications. ARRIVE will also positively influence improvements in experimental design and quality, assist the conduct of accurate replication studies of important new findings, and facilitate meta-analyses of pre-clinical research.

1 Introduction

Clear and complete reporting of all aspects of original research is a crucial aspect of evidence dissemination. An integral part of this process is a full and transparent declaration of the methods used. This permits the reader systematically to ascertain the methodological quality, and flaws, of the experimental design and conduct, and consequently the likelihood of experimental bias confounding the results. Furthermore, the ability critically to assess the methodological quality of primary research is a key enabling factor for systematic review and meta-analysis. Widely accepted requirements for the reporting of standard information sets are part of the publication culture for clinical research. This has been missing from the pre-clinical domain for studies involving the use of laboratory animals until the recent appearance of the ARRIVE (Animal Research: Reporting In Vivo Experiments; www.nc3rs.org.uk/ARRIVE) guidelines [1].

2 Impact of guidelines for reporting clinical trials, systematic reviews, interventional trials, microarray and proteomics experiments

The editors of scientific journals have a vital responsibility in ensuring the adoption of transparent reporting criteria by publishing the guidelines and then by formally adopting and enforcing such guidelines.

2.1 Consolidated Standards of Reporting Trials (CONSORT)

Probably the best known example of clinical reporting standards is the Consolidated Standards of Reporting Trials (CONSORT; www.consort-statement.org). CONSORT governs the style and minimum information sets for the reporting of randomised controlled clinical trials [2,3]. It provides a template for the reporting of clinical trials, consisting of a twenty-five item checklist covering the design, analysis and interpretation of the trial. CONSORT also includes an informative flow diagram which displays the path taken by all subjects enrolled in the trial through its four key stages (enrolment, intervention allocation, follow-up, and analysis). Over the past two decades CONSORT has had a major positive impact on the design, conduct and reporting of clinical trials, and on the ability to conduct robust systematic review and meta-analysis across all disciplines of medical research (www.consort-statement.org/about-consort/impact-of-consort).

CONSORT is also an example of the iterative nature of such standards, having been through a number of revisions since its inception in 1993. Quite properly, it should now be nearly impossible to conduct and publish a clinical trial without adhering to the CONSORT ethos.

2.2 Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA)

The example of CONSORT has been replicated in other clinical research disciplines: the reporting of systematic reviews and meta-analyses is governed by the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA, formerly QUOROM; www.prisma-statement.org) [4], and that of epidemiological studies by MOOSE: Meta-analyses of Observational Studies in Epidemiology (www.consort-statement.org/Initiatives/MOOSE).

2.3 SPIRIT, MIAME, MIAPE

Similarly, the SPIRIT 2013 Statement (Standard Protocol Items: Recommendations for Interventional Trials) sets out the standards for clinical trial protocols [5]. Minimum information sets have also become established in the culture of some areas of pre-clinical research, for example: Minimum Information About a Microarray Experiment (MIAME)(www.fged.org/projects/miame) [6] and Minimum Information About a Proteomics Experiment (MIAPE) (www.psidev.info/node/91) [7].

3 Animal Research: Reporting In Vivo Experiments (ARRIVE)

3.1 Responsibilities of editors and funders for transparent reporting of animal research

Scientific journals and their editors have a vital implementation duty in ensuring the adoption of transparent reporting criteria; initially by publishing the guidelines, often in co-ordination with other journals, and then by formally including, and enforcing, such guidelines in their instructions to authors. The Scandinavian Journal of Pain has recently taken a crucial step in this regard by becoming the first specialist pain journal to adopt the ARRIVE format [1] for the reporting of pre-clinical studies which use experimental animals (www.scandinavianjournalpain.com).

Since their original publication [1], the ARRIVE guidelines have been adopted by a steadily increasing number of journals, including the prestigious Nature and Public Library of Science (PLoS) families of journals (www.nc3rs.org.uk/ARRIVEjournals).

Funders of research also have a duty to ensure responsible and transparent dissemination of the research that they fund. In the United Kingdom, for example, public sector (e.g., Medical Research Council and the Biotechnology and Biological Sciences Research Council) and charity (e.g., The Wellcome Trust) funders of research are amongst those endorsing ARRIVE. In May 2012 the heads of these funding bodies instructed the leaders of universities and research institutes at which they fund research that compliance with ARRIVE is now a condition of their funding (www.nc3rs.org.uk/ARRIVEfunders). In the United States the National Institute of Neurological Disorders and Stroke has recently drawn attention to the widespread deficiencies in current reporting standards and pressed the case for transparency in the reporting of in vivo studies [8].

3.2 Reducing bias and improving quality of reporting animal research: need for ARRIVE

The need for robust reporting standards for pre-clinical pain research has been highlighted previously [9,10], although a suggested pro forma [9] was not adopted into the instructions for authors of pain journals. This pro forma has now been superseded by the publication, and wide adoption, of ARRIVE as a generic reporting guideline for animal studies across the spectrum of biomedical research. ARRIVE has its origins in the work of the United Kingdom National Centre for the Replacement, Refinement and Reduction of Animals in Research (www.nc3rs.org.uk). The stimulus for ARRIVE has its roots in a survey which revealed that most publications reporting animal research lacked key information on how the experiments were designed, conducted and analysed [11].

This survey revealed that essential information about bias reduction tactics was frequently omitted: 87% of publications did not state adequate information about random allocation of animals to groups, and 86% did not report details regarding observer blinding. Similar figures have been found for pain research [9,10,12]. Given that a great deal of in vivo animal experimental work essentially consists of “clinical trials” of development compounds in non-human species, it is unsurprising that aspects of ARRIVE bear resemblance to CONSORT. ARRIVE consists of a checklist of 20 essential items covering the title, abstract, introduction, methods, results and discussion. The methods section highlights details of bias reduction tactics such as sample size calculation, random allocation to groups and observer blinding.

It is important to note that the ARRIVE guidelines are intended as generic reporting guidelines, providing a standard format for authors to indicate to reviewers, editors and readers exactly what they did and how they did it. Whilst the aim of ARRIVE is to improve the reporting of animal research, it is explicitly not intended as a rigid diktat for experimental design. Nevertheless, the checklist does de facto serve as a useful aide-mémoire in the design of animal experiments, as highlighted by the UK funding agencies in the letter referred to above.

3.3 Impact of ARRIVE and “Good Laboratory Practice”

We cannot be certain of the impact of experimental bias on the results of the existing pre-clinical pain literature because of the poor standards of reporting [9,10,12]. Widespread adoption of ARRIVE by journals which publish pain research will, in time, permit meta-analyses to estimate the impact of bias in, for example, the overestimation of efficacy of drugs being developed.

In the meantime, we can learn and apply the lessons from other closely related fields such as stroke: Macleod, Sena and colleagues in the Collaborative Approach to Meta-Analysis and Review of Animal Data from Experimental Studies (CAMARADES) consortium (www.camarades.info) have conducted ground-breaking work in demonstrating the clear detrimental impact of experimental bias on the overestimation of efficacy in experimental stroke research [13,14,15,16,17,18,19,20]. Exploiting this information, they have drawn up a code of “Good Laboratory Practice” intended to reduce the impact of bias in the design, conduct, and reporting of animal experiments modelling human stroke [20], which could easily be adapted for pain research. The essential domains of “Good Laboratory Practice” relevant to pain research are set out in Table 1, although it is accepted that the term “Good Laboratory Practice” does have alternative meanings in other fields. Sena et al. have demonstrated the cumulative impact of inadequate methodological quality in each of these domains in overestimating the efficacy of experimental compounds [16].

Table 1

Core domains of “Good Laboratory Practice” for pre-clinical pain research studies that employ in vivo methods in experimental animals. Primarily adapted from [20], and also [9,10].

  • Information about animals (species, strain, gender, age, source, etc.) should be stated. For genetically modified animals, this should include details of how the animals were generated and the selection of controls. Details of the environmental conditions in which the animals were housed and experiments conducted should also be stated.

  • Sample size, details of how the size of the experiment was determined. Where a power calculation was conducted to determine sample size then this should be reported in detail, including the experimental assumptions used in the calculation (the expected difference between groups and the expected variance), the desired statistical power and the details of the calculation. The minimum sample size required to achieve the desired power must be stated, together with details of the eventual sample size in each experimental group for both “intent to treat” and “per protocol” populations.

  • Explicit inclusion and exclusion criteria, as determined in the protocol before commencement of the study, should be stated.

  • Randomisation: Clear details of the methods used to allocate animals to experimental groups must be given, both for injury or sham group allocation and for treatment group allocation. Where randomisation was used the details of the precise method of randomisation should be stated.

  • Allocation concealment: Details of how the allocation of animals to experimental groups was concealed from the investigator who was responsible for the induction of the pain state (e.g., surgeon performing nerve ligation or person who injects an inflammogen). Allocation concealment might be achieved by having the experimental intervention administered by an independent investigator, or by having an independent investigator prepare a solution individually and label it for each animal according to the randomisation schedule as outlined above. These considerations also apply to comparisons between groups of genetically modified animals, but if phenotypic differences (e.g., coat colouring) prevent allocation concealment then this should be stated.

  • Reporting of animals excluded from analysis: All randomised animals (both overall to injury or sham group and by treatment group allocation) should be accounted for in the data presented. Some animals may, for good reasons, be excluded from analysis, but the circumstances under which this exclusion will occur should be determined in advance, and any exclusion decision should be taken without knowledge of the experimental group to which the animal belongs. The criteria for exclusion and the number of animals excluded should be reported. The stage at which any animals were excluded and the reasons for that exclusion (e.g., the animal died) should be clearly stated. CONSORT-type flow charts are useful for this purpose.

  • Blinded measurement, assessment and analysis of outcome measures should be conducted and reported. The assessment of outcome is blinded if the investigator responsible for measuring any outcome measure has no knowledge of the experimental group to which an animal belongs. The methods used to blind investigators who perform and analyse the outcome measures should be explicitly stated. The point at which the blinding codes were broken should also be clearly stated. These considerations also apply to comparisons between groups of genetically modified animals, but if phenotypic differences (e.g., coat colouring) prevent blinded assessment of outcome then this should be stated. Occasionally, it might be necessary to verify the integrity of the blinding process and check that group allocation has not been inadvertently revealed by extrinsic factors, perhaps by asking investigators to state the groups to which they believe the animals were allocated.

  • All potential conflicts of interest and study funding should be stated. Any relationship which could be perceived to introduce a potential conflict of interest, or the absence of such a relationship, should be disclosed in an acknowledgments section, along with information on study funding and for instance supply of drugs or of equipment.
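Some of the items above, notably the sample size calculation and blinded random allocation, are mechanical enough to sketch in code. The following Python sketch is purely illustrative (the function names, default parameters and example numbers are ours, not part of ARRIVE or [20]); it uses a normal approximation for the two-group sample size rather than an exact t-distribution calculation, and a seeded shuffle for allocation to coded groups:

```python
import math
import random
from statistics import NormalDist

def n_per_group(delta, sd, alpha=0.05, power=0.9):
    # Normal-approximation sample size for a two-sided, two-group
    # comparison of means: delta is the expected difference between
    # groups, sd the expected common standard deviation.
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)
    z_power = z.inv_cdf(power)
    return math.ceil(2 * ((z_alpha + z_power) * sd / delta) ** 2)

def blinded_allocation(animal_ids, labels=("A", "B"), seed=1):
    # Randomly allocate animals to coded groups; only an independent
    # investigator holds the key mapping labels to treatments, so the
    # outcome assessor remains blinded.
    rng = random.Random(seed)
    ids = list(animal_ids)
    rng.shuffle(ids)
    half = len(ids) // len(labels)
    return {label: sorted(ids[i * half:(i + 1) * half])
            for i, label in enumerate(labels)}

# Hypothetical example: expect a 2-unit difference with SD 2.
n = n_per_group(delta=2.0, sd=2.0)          # 22 animals per group
schedule = blinded_allocation(range(2 * n))
```

In practice the allocation key (which coded label corresponds to which treatment) would be held by an independent investigator and broken only after analysis of the outcome measures, as described above.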

3.4 Other areas where the predictive validity of preclinical pain research can be improved

Although implementing robust experimental bias reduction methods, and transparently reporting them, is one of the easiest changes to make for in vivo pain studies, there are several other challenging aspects of animal models that need to be addressed in order to improve their overall relevance to human painful disease. Top of this list are establishing a portfolio of conditions in rodents which, to the degree that is possible, reflects the range of clinical states in which pain is a feature, and developing validated, ethologically relevant outcome measures which reflect human pain-related clinical signs [9,10,21,22,23,24,25,26].

There are also other aspects which will be even more challenging to address, such as replicating the temporal aspects of chronic disease, accounting for co-morbidity seen in patients, heterogeneity of clinical presentations, predicting adverse effects and pharmacokinetic variables of drugs and a better reflection of human “pain” outcome incidence for various diseases.

The usual choice of healthy, young, male, genetically similar rodents for modelling the complexities of human chronic pain is questionable.

3.5 Biased animal research: ethical and societal implications

Returning to the question of pre-clinical bias reduction methods, and the transparent reporting thereof: the impact of methodologically flawed animal studies is neither trivial nor an issue for mere academic cogitation and jousting; it has wide ethical and societal implications. Some examples: firstly, animal “pre-clinical trials” are often used to justify the conduct of early-stage human clinical trials; allowing inefficacious medications through this barrier exposes the humans participating in such trials to unnecessary risk. Indeed, such prediction of efficacy is a justification for the large-scale use of animals in drug development, and poorly designed animal studies are probably the cause of much wasted time and money in the pharmaceutical industry: only 37% of highly cited animal research is translated at the level of human randomised trials [27]. Secondly, the use of animals in studies which lack the methodological capability to adequately test the hypothesis is unethical. Thirdly, the use of scarce and valuable research funding for poorly designed studies is unjustifiable. Finally, inadequate reporting of trial methods prevents adequate systematic review and meta-analysis of the literature, which in the clinical domain has revolutionised how we critically examine evidence.

3.6 Caveats, reevaluating metrics of success and future directions

There are important caveats to unquestioningly accepting the premise of the universal adoption of robust bias reduction tools and reporting transparency in pre-clinical research. A potential consequence of this approach is that fewer “positive” or “breakthrough” pre-clinical papers will be published. Of course, this artificial division of the literature into “positive” and “negative” is a false dichotomy, and arguably a paper reporting robustly conducted studies in which the hypothesis was not proven is of greater worth to the evidence base than an inadequately designed and conducted “positive” experiment. Nevertheless, metrics of success would need to be re-examined: in academia, publication record, especially of “trophy” papers, is a major criterion for success in professional performance appraisal, resource allocation, personal remuneration, grant funding, job security, and promotion.

Indeed, this judgement extends beyond the single academic to the overall assessment of departments and universities. For instance, in the United Kingdom the Research Excellence Framework (www.ref.ac.uk) is the main instrument by which state university funding is determined. A subjective value judgement of the best four publications of individual academics in a specific time window accounts for 65% of the overall REF score.

At the current time, academic success is rarely assessed beyond the number of publications, the citation index, the perceived standing/impact factor of the journals concerned and the grant funding track record. Attention has been drawn to the dangers of overemphasising the importance of early, as yet unreplicated, pre-clinical studies with small sample sizes reported in high-impact-factor journals, despite the inevitable attention which they attract [28,34]. If methodologically flawed animal-based data are highly rated in this process, a systemic error is introduced into the assessment.

Similar perverse incentives exist in industry, extending beyond the performance appraisal of individual R&D workers, with an apparently promising pre-clinical pipeline contributing to the stock value and survival of companies. When ARRIVE is widely adopted, it is essential that academia and industry seize the opportunity that transparent reporting provides: namely, the ability to systematically appraise reports for the likelihood of methodological flaws and the potential for bias confounding the results of an experiment. There is also a need to revise professional performance criteria to include longer-term metrics, such as the eventual confirmation of pre-clinical discoveries in randomised controlled clinical trials and a demonstrable change in clinical practice. There are promising developments on the horizon: for the first time, the 2014 REF will assess, alongside publication output, the long-term impact of research, although this amounts to only 20% of the overall assessment. The dangers of not measuring the long-term impact of research have obvious parallels with the distortions inherent in the short-term bonus culture for performance reward in the financial and management sectors.

Transparency of reporting and elimination of bias are relatively easy to deal with compared with some other issues which are facing pre-clinical research and which the clinical evidence field is already addressing.

Firstly, direct replication of important pre-clinical discoveries is rarely undertaken, and attempts to document the reproducibility of high-profile papers have not been reassuring [29,30,34]. Transparency of reporting will facilitate replication, and funding agencies need to recognise the value of replication studies; encouragingly, replication is already becoming part of the culture in genetic research.

Secondly, holistic assessment by systematic review of an evidence base requires unrestricted access to all experimental data, published and unpublished, “positive” and “negative”. The confounding problem of publication bias is well recognised in clinical circles and increasingly so for the pre-clinical literature, not least to prevent unnecessary repetition of previous experiments [18,31,32,33]. Linked to this is the importance of publishing all data gathered in an experiment and the avoidance of selective publication of results. The clinical field is addressing these issues by prospective registration of clinical trial protocols in publicly accessible databases, linked to portals for publication of results that may not be attractive to mainstream peer-reviewed journals. However, the challenges of introducing and enforcing such an ethos in pre-clinical in vivo research are Herculean.

Highlights

  1. The CONSORT guidelines for clinical trials had a major impact on the reporting and quality of clinical research.

  2. The ARRIVE guidelines for in vivo preclinical research aim to do the same for laboratory research involving animals.

  3. ARRIVE emphasizes the need for transparent and standardised reports of preclinical studies.

  4. ARRIVE will have major implications for the quality of design and reporting of preclinical studies.

  5. Academics, industry, editors, peer-reviewers and funders have responsibility for implementation and enforcement of ARRIVE.


DOI of the article referred to: http://dx.doi.org/10.1016/j.sjpain.2013.02.004.



Department Surgery & Cancer, Imperial College London, Chelsea and Westminster Hospital Campus, 369 Fulham Road, London SW10 9NH, United Kingdom. Tel.: +44 203 315 8156

  1. Funding: WH is funded as part of the Europain Collaboration, which has received support from the Innovative Medicines Initiative Joint Undertaking, under grant agreement no 115007, resources of which are composed of financial contribution from the European Union’s Seventh Framework Programme (FP7/2007-2013) and EFPIA companies’ in kind contribution.

    WH and ASCR receive additional funding in the form of a collaborative project with The University of Edinburgh (GLC, ESS and MRM) funded by the National Centre for the Replacement, Refinement and Reduction of Animals in Research, entitled “Reduction & refinement in animal models of neuropathic pain: using systematic review & meta-analysis”.

    RM is funded by Industrial Collaborative Quota Studentships awarded to the London Pain Consortium by the Medical Research Council and Pfizer.

  2. Conflicts of interest: ASCR’s laboratory is/has recently been funded by agencies which have endorsed ARRIVE, including: The Wellcome Trust (London Pain Consortium), The Medical Research Council, The Biotechnology and Biological Sciences Research Council and The Dunhill Medical Trust. He also has a grant in collaboration with Edinburgh members of the CAMARADES Consortium (MRM, ESS and GLC) from the National Centre for the Replacement, Refinement and Reduction of Animals in Research (NC3Rs) to conduct a meta-analysis of animal model data pertaining to neuropathic pain. Industry and academic members of the IMI-JU EUROPAIN are participating in this exercise and have agreed to contribute unpublished data to the meta-analysis.

    ASCR is a member of the Editorial Boards of:

    • Public Library of Science – Medicine – Editorial Board

    • Pain – Associate Editor

    • The European Neurological Journal – Editorial Board

    • Pain Management – Senior Editor

    • Scandinavian Journal of Pain – Editorial Board

References

[1] Kilkenny C, Browne WJ, Cuthill IC, Emerson M, Altman DG. Improving bioscience research reporting: the ARRIVE guidelines for reporting animal research. PLoS Biol 2010;8:e1000412.

[2] Hopewell S, Clarke M, Moher D, Wager E, Middleton P, Altman DG, Schulz KF, CONSORT Group. CONSORT for reporting randomized controlled trials in journal and conference abstracts: explanation and elaboration. PLoS Med 2008;5:e20.

[3] Schulz KF, Altman DG, Moher D, for the CONSORT Group. CONSORT 2010 Statement: updated guidelines for reporting parallel group randomised trials. PLoS Med 2010;7:e1000251.

[4] Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gotzsche PC, Ioannidis JP, Clarke M, Devereaux PJ, Kleijnen J, Moher D. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. PLoS Med 2009;6:e1000100.

[5] Chan AW, Tetzlaff JM, Altman DG, Laupacis A, Gotzsche PC, Krleza-Jeric K, Hrobjartsson A, Mann H, Dickersin K, Berlin JA, Dore CJ, Parulekar WR, Summerskill WS, Groves T, Schulz KF, Sox HC, Rockhold FW, Rennie D, Moher D. SPIRIT 2013 Statement: defining standard protocol items for clinical trials. Ann Intern Med 2013;158:200–7.

[6] Brazma A, Hingamp P, Quackenbush J, Sherlock G, Spellman P, Stoeckert C, Aach J, Ansorge W, Ball CA, Causton HC, Gaasterland T, Glenisson P, Holstege FC, Kim IF, Markowitz V, Matese JC, Parkinson H, Robinson A, Sarkans U, Schulze-Kremer S, Stewart J, Taylor R, Vilo J, Vingron M. Minimum information about a microarray experiment (MIAME) – toward standards for microarray data. Nat Genet 2001;29:365–71.

[7] Taylor CF, Paton NW, Lilley KS, Binz PA, Julian Jr RK, Jones AR, Zhu W, Apweiler R, Aebersold R, Deutsch EW, Dunn MJ, Heck AJ, Leitner A, Macht M, Mann M, Martens L, Neubert TA, Patterson SD, Ping P, Seymour SL, Souda P, Tsugita A, Vandekerckhove J, Vondriska TM, Whitelegge JP, Wilkins MR, Xenarios I, Yates 3rd JR, Hermjakob H. The minimum information about a proteomics experiment (MIAPE). Nat Biotechnol 2007;25:887–93.

[8] Landis SC, Amara SG, Asadullah K, Austin CP, Blumenstein R, Bradley EW, Crystal RG, Darnell RB, Ferrante RJ, Fillit H, Finkelstein R, Fisher M, Gendelman HE, Golub RM, Goudreau JL, Gross RA, Gubitz AK, Hesterlee SE, Howells DW, Huguenard J, Kelner K, Koroshetz W, Krainc D, Lazic SE, Levine MS, Macleod MR, McCall JM, Moxley III RT, Narasimhan K, Noble LJ, Perrin S, Porter JD, Steward O, Unger E, Utz U, Silberberg SD. A call for transparent reporting to optimize the predictive value of preclinical research. Nature 2012;490:187–91.

[9] Rice ASC, Cimino-Brown D, Eisenach JC, Kontinen VK, Lacroix-Fralish ML, Machin I, Preclinical Pain Consortium, Mogil JS, Stöhr T. Animal models and the prediction of efficacy in clinical trials of analgesic drugs: a critical appraisal and call for uniform reporting standards. Pain 2008;139:243–7.

[10] Rice ASC. Predicting analgesic efficacy from animal models of peripheral neuropathy and nerve injury: a critical view from the clinic. In: Mogil JS, editor. Pain 2010 – an updated review: refresher course syllabus. Seattle: IASP Press; 2010. p. 415–26.

[11] Kilkenny C, Parsons N, Kadyszewski E, Festing MFW, Cuthill IC, Fry D, Hutton J, Altman DG. Survey of the quality of experimental design, statistical analysis and reporting of research using animals. PLoS ONE 2009;4:e7824.

[12] Kontinen VK, Meert TF. Predictive validity of neuropathic pain models in pharmacological studies with a behavioral outcome in the rat: a systematic review. In: Dostrovsky JO, Carr DB, Koltzenburg M, editors. Proceedings of 10th World Congress on Pain. Progress in Pain Research and Management. Seattle: IASP Press; 2003. p. 489–98.

[13] Crossley NA, Sena E, Goehler J, Horn J, van der Worp B, Bath PMW, Macleod M, Dirnagl U. Empirical evidence of bias in the design of experimental stroke studies: a metaepidemiologic approach. Stroke 2008;39:929–34.

[14] Macleod MR, van der Worp HB, Sena ES, Howells DW, Dirnagl U, Donnan GA. Evidence for the efficacy of NXY-059 in experimental focal cerebral ischaemia is confounded by study quality. Stroke 2008;39:2824–9.

[15] Perel P, Roberts I, Sena E, Wheble P, Briscoe C, Sandercock P, Macleod M, Mignini LE, Jayaram P, Khan KS. Comparison of treatment effects between animal experiments and clinical trials: systematic review. BMJ 2007;334:197.

[16] Sena E, van der Worp HB, Howells D, Macleod M. How can we improve the pre-clinical development of drugs for stroke? Trends Neurosci 2007;30:433–9.

[17] Sena E, Wheble P, Sandercock P, Macleod M. Systematic review and meta-analysis of the efficacy of tirilazad in experimental stroke. Stroke 2007;38:388–94.

[18] Sena ES, van der Worp HB, Bath PMW, Howells DW, Macleod MR. Publication bias in reports of animal stroke studies leads to major overstatement of efficacy. PLoS Biol 2010;8:e1000344.

[19] van der Worp HB, Howells DW, Sena ES, Porritt MJ, Rewell S, O’Collins V, Macleod MR. Can animal models of disease reliably inform human studies? PLoS Med 2010;7:e1000245.

[20] Macleod MM, Fisher M, O’Collins V, Sena ES, Dirnagl U, Bath PMW, Buchan A, van der Worp HB, Traystman R, Minematsu K, Donnan GA, Howells DW. Good laboratory practice. Preventing introduction of bias at the bench. Stroke 2009;40:e50–2.

[21] Andrews N, Legg E, Lisak D, Issop Y, Richardson D, Harper S, Huang W, Burgess G, Machin I, Rice AS. Spontaneous burrowing behaviour in the rat is reduced by peripheral nerve injury or inflammation associated pain. Eur J Pain 2011;16:485–95.

[22] Hasnie FS, Breuer J, Parker S, Wallace V, Blackbeard J, Lever I, Kinchington PR, Dickenson AH, Pheby T, Rice ASC. Further characterization of a rat model of varicella zoster virus-associated pain: relationship between mechanical hypersensitivity and anxiety-related behavior, and the influence of analgesic drugs. Neuroscience 2007;144:1495–508.

[23] Wallace VCJ, Blackbeard J, Segerdahl A, Hasnie FS, Pheby T, McMahon SB, Rice ASC. Characterisation of rodent models of HIV-gp120 and anti-retroviral associated neuropathic pain. Brain 2007;130:2688–702.

[24] Wallace VCJ, Blackbeard J, Pheby T, Segerdahl AR, Davies M, Hasnie F, Hall S, McMahon SB, Rice ASC. Pharmacological, behavioural and mechanistic analysis of HIV-1 gp120 induced painful neuropathy. Pain 2007;133:47–63.

[25] Mogil JS. Animal models of pain: progress and challenges. Nat Rev Neurosci 2009;10:283–94.

[26] Huang W, Calvo M, Karu K, Olausen H, Bathgate G, Okuse K, Bennett DLH, Rice ASC. A clinically relevant rodent model of HIV antiretroviral drug stavudine induced painful peripheral neuropathy. Pain 2013: in press. www.painjournalonline.com/article/S0304-3959(13)00005-5/fulltext

[27] Hackam DG, Redelmeier DA. Translation of research evidence from animals to humans. JAMA 2006;296:1731–2.

[28] Munafo MR, Stothart G, Flint J. Bias in genetic association studies and impact factor. Mol Psychiatry 2009;14:119–20.

[29] Prinz F, Schlange T, Asadullah K. Believe it or not: how much can we rely on published data on potential drug targets? Nat Rev Drug Discov 2011;10:712.

[30] Begley CG, Ellis LM. Drug development: raise standards for preclinical cancer research. Nature 2012;483:531–3.

[31] Rowbotham MC. The case for publishing ‘negative’ clinical trials. Pain 2009;146:225–6.

[32] Kontinen V, Kalso E. Why we are proud to publish well-performed negative clinical studies. Scand J Pain 2013;4:15–6.

[33] Breivik H, Stubhaug A, Hals EKB, Rosseland LA. Why we publish negative studies and prescriptions on how to do clinical pain trials well. Scand J Pain 2010;1:98–9.

[34] Ioannidis JPA. Why most published research findings are false. PLoS Med 2005;2(8):e124.

Received: 2013-02-18
Accepted: 2013-02-18
Published Online: 2013-04-01
Published in Print: 2013-04-01

© 2013 Scandinavian Association for the Study of Pain
