Understanding Veterinary Papers – A Data and Statistics Perspective – Part 1

Part 1 – From the Literature to Clinical Action


Welcome to this blog series on understanding veterinary papers from a data and statistics perspective. The aim is to help veterinary professionals assess a paper in terms of its relevance, trustworthiness and clinical application.

First, we’ll take a look at the ideas of evidence-based medicine and how it relies on the quantitative assessment of evidence. In the second blog post we’ll see some of the core statistical concepts that underpin many papers and which form the bedrock of many more complex statistical ideas. In part three we’ll see what can go wrong with the use of data and statistics, and in part four we’ll see how to critically assess a paper from a statistical point-of-view. Finally, there will be a ‘bonus’ blog post for those interested in delving deeper into modern data analysis, again highlighting what can go wrong behind the scenes of a scientific paper and why it’s so important to be wary of statistical claims.

Evidence-Based Thinking

According to one definition, evidence-based medicine (EBM) is “the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients” [1]. At its core, EBM is based upon the principle that “we should treat where there is evidence of benefit and not treat where there is evidence of no benefit (or harm)” [2]. Such ideas touch on every aspect of medicine, including diagnosis, therapy, prognosis and prevention, asking questions ranging from the appropriateness of different tests to the likelihood of side-effects.

EBM is an active area in veterinary medicine, with the RCVS providing various guides and resources on their website [3], a dedicated Centre at the University of Nottingham [4], and various papers [5] and blog articles [6] on the subject.

When I first heard the term, it seemed almost obvious. After all, how else could you practice modern medicine without evidence?

However, EBM is not about ‘guessing vs not guessing’; it’s a procedure for moving from a query to action, based upon the current scientific consensus. The overall process is outlined below [7]. The focus of this blog series isn’t on EBM as a whole, but on step 3 of this list. That step is, effectively, all about deriving meaning from a statistical and data perspective.

  1. Ask the right question
  2. Search for the evidence
  3. Appraise the evidence
  4. Act on the evidence

This four-step process is not meant to be a substitute for clinical experience or expertise, nor to dictate a set of strict rules that supersede clinical judgment or reduce the diagnosis of disease to a simple flow-chart exercise. EBM is about keeping up-to-date with the latest literature and distilling the science into something usable, practical and ultimately of benefit to the patient. It also gives a framework to a complex process which may otherwise prove too time-consuming in busy clinical practice.

This raises some important questions. How exactly do you keep up-to-date with the latest literature? The obvious way is to make the time to read scientific papers on a regular basis. But, perhaps more crucially, how exactly do you ask the right questions, search the literature efficiently, and then convert what you read into actionable insight? Even more fundamentally, how do you judge the veracity and validity of what you’re reading?

Of course, a large part of this is aligning personal and clinical expertise with the clinical content of the paper. EBM offers tools for aiding this process, such as the PICO system for phrasing questions [8]. Specifically,

  • Patient or problem: What is the patient or problem in question?
  • Intervention: What main treatment or intervention is being considered?
  • Comparison: If necessary, what is the main alternative to compare with the intervention?
  • Outcome: What do you hope to accomplish?

It also provides a hierarchy of evidence, ranking different types of evidence with the aim of guiding the reader to place more emphasis on things like meta-analyses and randomized controlled trials than expert opinions and case studies (although such concepts are not without criticism [9]).

In the vast majority of papers, you’ll quickly encounter evidence presented in quantitative forms: plots, tables, means, p-values, odds ratios. Not to mention a whole slew of statistical tests, with the authors confidently wielding such tools and techniques and forming conclusions on the basis of their output.
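To give a flavour of what such quantities actually are, here’s a minimal Python sketch that computes an odds ratio and a relative risk from a 2×2 table. The counts are entirely hypothetical, invented for illustration, and not drawn from any real study:

```python
# Hypothetical 2x2 table: disease outcomes for treated vs control animals.
# (Illustrative numbers only, not from any real study.)
treated_sick, treated_healthy = 10, 40
control_sick, control_healthy = 20, 30

# Odds of sickness in each group (sick : healthy)
odds_treated = treated_sick / treated_healthy    # 10/40 = 0.25
odds_control = control_sick / control_healthy    # 20/30 ~= 0.667

# Odds ratio: how the odds of sickness in the treated group
# compare with the odds in the control group
odds_ratio = odds_treated / odds_control         # 0.375

# Relative risk: ratio of the probabilities (risks) of sickness
risk_treated = treated_sick / (treated_sick + treated_healthy)   # 0.2
risk_control = control_sick / (control_sick + control_healthy)   # 0.4
relative_risk = risk_treated / risk_control      # 0.5

print(f"Odds ratio: {odds_ratio:.3f}")      # prints "Odds ratio: 0.375"
print(f"Relative risk: {relative_risk:.2f}")  # prints "Relative risk: 0.50"
```

Here an odds ratio below 1 suggests the treated group had lower odds of sickness; a real paper would accompany such a figure with a confidence interval and a significance test, ideas we’ll return to later in the series.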

This aspect of the literature can be daunting for anyone without a strong statistics background, and given the rapidly growing volume of data and the importance placed on it [10], the frequency and complexity of the statistics appearing in papers is only set to increase.

Quantifying the Evidence

This leaves the reader with a few ways forward. At one extreme, if you’re proficient in statistics, you can critique and evaluate the quantitative evidence thoroughly without issue. At the other, if your understanding of the relevant techniques is limited, you can still extract the clinical conclusions and find the paper useful.

However, in the latter case, it can be difficult to get a handle on the strength of the evidence, whether the design of a particular study lends itself to your particular clinical situation, or whether there are signs that the authors may have come to an incorrect conclusion. Knowing something about statistics is better than knowing nothing, especially if what you know is foundational. In other words, it’s better to understand the core ideas and to think statistically than to be able to unleash some complex statistical model on your data via fancy software.

There is another reason to be statistically and data literate in 2019. There is a growing realization in science that a large proportion of the literature is irreproducible, due to poor experimental design, haphazard data analysis, and p-values being treated as incontestable revelations. This is a big problem, and it is only set to get worse as datasets become larger and more complex.

Next time

In the next part of this blog series, we’ll look at some of the core statistical ideas that underpin many of the tools and techniques found in veterinary-related papers.


  1. Sackett, David L., et al. “Evidence based medicine: what it is and what it isn’t.” BMJ 312 (1996): 71-72.
  2. Belsey, J. What is evidence-based medicine? http://www.bandolier.org.uk/painres/download/whatis/ebm.pdf
  3. RCVS Evidence-Based Veterinary Medicine https://knowledge.rcvs.org.uk/evidence-based-veterinary-medicine/
  4. Centre for Evidence-based Veterinary Medicine https://www.nottingham.ac.uk/cevm/about-the-cevm/cevm-aims.aspx
  5. Holmes, Mark, and Peter Cockcroft. “Evidence-based veterinary medicine 1. Why is it important and what skills are needed?.” In practice 26.1 (2004): 28.
  6. The SkeptVet, Evidence-based Veterinary Medicine: What Is It & Why Does It Matter? http://skeptvet.com/Blog/2015/08/evidence-based-veterinary-medicine-what-is-it-why-does-it-matter-2/
  7. What are the key steps in Evidence-Based Medicine? https://www.students4bestevidence.net/start-here/what-are-the-key-steps-in-ebm/
  8. Petrie, Aviva, and Paul Watson. Statistics for veterinary and animal science. John Wiley & Sons, 2013.
  9. Stegenga, Jacob. “Down with the hierarchies.” Topoi 33.2 (2014): 313-322.
  10. How Data Analytics will enhance evidence-based healthcare https://healthcaredatainstitute.com/2018/04/24/data-analytics-will-enhance-evidence-based-healthcare/

Written by Rob Harrand – Technology & Data Science Lead

