# Grand Rounds: EMS Research Statistics for Non-Statisticians

*Grand Rounds is a new monthly blog series developed by EMS World and FlightBridgeED that will feature top EMS medical directors exploring the intricacies of critical care in EMS practice. In this installment FlightBridgeED Chief Medical Director Jeffrey Jarvis, MD, MS, EMT-P, explains some key statistical concepts.*

So there you are at the station, hoping to avoid a third call this shift to that nursing home around the corner. You happen upon a blog post about a new study claiming epinephrine isn’t the cardiac arrest wonder drug we all thought it was. What proof is offered for this blasphemy? Some silly concept called an “odds ratio.” What manner of statistical nonsense is this?

I’m glad you asked.

We often are told papers “prove” something or other, but unless we actually RDP (*read the damn paper*), we’re just taking someone’s word for it. That’s just downright un-American, as well as un-Canadian, un-British, un-Australian, and un-wherever-you-happen-to-be-from. And sadly, RDP-ing is only the first challenge. Next you have to have some understanding of the statistics in the paper. The good news is most aren’t all that hard to understand.

Let’s take my recent piece about the PARAMEDIC-2 trial as an example. There were several ways the authors chose to describe whether the outcomes between the epinephrine and placebo groups were statistically different. Why worry about this? Because with a large enough sample size, even seemingly minimal differences can be statistically significant. Whether they are clinically significant is a whole other point we’ll come back to later.

The authors used regression analysis to calculate adjusted odds ratios. I’m going to skip the actual math of this analysis because I have at best a passing understanding of it (I know how to program it into my statistics software and pull out the results). But importantly, you don’t need to know how to do these analyses to interpret them.

An odds ratio (OR) is a simple ratio of two odds. (Strictly speaking, the odds of a thing are the probability of it happening divided by the probability of it not happening; when the thing is rare, as cardiac arrest survival sadly is, the odds and the probability are nearly the same number, so we can keep the arithmetic simple.) As an example, in the PARAMEDIC-2 trial, 30-day survival was 3.2% in the epinephrine group and 2.4% in the placebo group. That’s an absolute difference of 0.8% (3.2 – 2.4). The OR is the odds of a thing happening in one group divided by the odds of the thing in the second group. In this case, the thing happening is 30-day survival. So, the OR is approximately 3.2 divided by 2.4, or 1.33. See? Simple.

What does that mean? In this case, it means the odds of the thing (survival) are higher in the epi group than in the placebo group. How much higher? Thirty-three percent higher. We get that number by subtracting the value for no difference, which is 1, from the OR: hence, 1.33 – 1 = 0.33. Why did we subtract 1? Because the only way you can get an OR of 1 is if the odds of the thing are the same in both groups.

The higher the OR, the higher the odds of the thing happening in one group compared to the other. An OR of 4.3 means that the odds are 330% higher (4.3 – 1 = 3.3, x 100 = 330%).

What if the odds were lower in the first group? The OR will be a number somewhere between 0 and 1. For example, say those 30-day survival values were 4.5% and 10.2%, respectively. Dividing the first by the second yields 0.44. If you subtract this from 1, you’ll get the percentage decrease in odds: 1 – 0.44 = 0.56, x 100 = 56% lower odds of the thing (survival) happening.
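The arithmetic above can be sketched in a few lines of Python. This is just an illustration using the percentages quoted here; it also shows the strict definition of odds, probability divided by one minus probability, alongside the shortcut of dividing the raw percentages, which works well because survival is rare:

```python
def odds(p):
    """Convert a probability (0-1) to odds: p / (1 - p)."""
    return p / (1 - p)

# 30-day survival: 3.2% with epinephrine, 2.4% with placebo
p_epi, p_placebo = 0.032, 0.024

or_exact = odds(p_epi) / odds(p_placebo)   # true ratio of odds
or_approx = p_epi / p_placebo              # shortcut: ratio of percentages

print(f"exact OR  = {or_exact:.2f}")   # 1.34
print(f"approx OR = {or_approx:.2f}")  # 1.33 -- nearly identical for rare outcomes

def percent_change(or_value):
    """Turn an OR into 'percent higher (or lower) odds'."""
    return (or_value - 1) * 100

print(f"{percent_change(1.33):.0f}% higher")   # 33% higher
print(f"{percent_change(4.3):.0f}% higher")    # 330% higher
print(f"{percent_change(0.44):.0f}%")          # -56%, i.e., 56% lower odds
```

Note how little daylight there is between the exact OR (1.34) and the percentage shortcut (1.33) when the outcome is uncommon.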

**Words of Caution**

Two important safety tips about interpreting ORs: First, you can’t have a negative OR. Why? Because if you divide two positive numbers, you can never get a negative one. So, even a very small OR will still be positive, just much lower than 1. Remember, 1 means no difference. The further from no difference you get, the more difference you have.

The second tip is that it is important to understand which group is being compared to which. Since we’re talking about a ratio, it can go either way. You’ll just have to read the chart or paper to see which way the author did the math. It’s usually pretty clear. It’ll say something like “compared to placebo, epinephrine had an OR of 1.33 for 30-day survival.”

So if you look at the article about PARAMEDIC-2, you’ll see they found an aOR of 1.47 for epi vs. placebo in 30-day survival. Why 1.47 when 3.2/2.4 = 1.33? Because the *a* stands for *adjusted*. They threw in all the other variables that might affect 30-day survival to try to control for them. For example, if there were more witnessed VF arrests with bystander CPR in the epinephrine group, we would expect there to be more survivors independent of any impact of epinephrine. The regression analysis allows the authors to account for the impact those variables had. For all our sakes, let’s just skip the mathematical proof of why this is the case.
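To get a feel for why adjustment shifts an OR, here’s a sketch using entirely made-up counts (not PARAMEDIC-2 data). The trial used logistic regression; the Mantel-Haenszel estimate below is a simpler cousin that "adjusts" for one confounder by splitting the data into strata (here, a hypothetical prognostic factor like witnessed VF arrest) and pooling the stratum-specific odds ratios:

```python
# Hypothetical data only. Each stratum is a 2x2 table:
# (survived_epi, died_epi, survived_placebo, died_placebo)
strata = {
    "witnessed VF arrest": (40, 160, 30, 170),   # better-prognosis stratum
    "unwitnessed/other":   (12, 788, 10, 790),   # worse-prognosis stratum
}

def crude_or(strata):
    """OR from the pooled 2x2 table, ignoring the confounder."""
    a = sum(s[0] for s in strata.values())
    b = sum(s[1] for s in strata.values())
    c = sum(s[2] for s in strata.values())
    d = sum(s[3] for s in strata.values())
    return (a * d) / (b * c)

def mantel_haenszel_or(strata):
    """Adjusted OR: weighted pooling of stratum-specific odds ratios."""
    num = den = 0.0
    for a, b, c, d in strata.values():
        n = a + b + c + d
        num += a * d / n
        den += b * c / n
    return num / den

print(f"crude OR:    {crude_or(strata):.2f}")
print(f"adjusted OR: {mantel_haenszel_or(strata):.2f}")
```

With these invented numbers the adjusted OR lands a bit higher than the crude one, which is the same flavor of shift as 1.33 becoming 1.47 in the trial (though the trial adjusted for many variables at once via regression, not just one).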

**Confidence Intervals**

Finally, we need a way to determine if an OR is significant or not. For this we look at the 95% confidence interval (95% CI). To understand what this means, remember the OR was calculated from a sample of the population and not the population itself. Why? Because we can’t put everyone into cardiac arrest and see what happens, now, can we? No, we have to look at a sample. If we repeat the study on a different population chosen at random, we will likely get a different answer by chance alone. But the answer isn’t likely to be much different. The 95% CI gives us an idea of the range within which we are 95% confident the true result will fall.

The 95% CI for the adjusted OR for 30-day survival with epi vs. placebo was 1.09 to 1.97. This means we’re 95% confident the real answer is between those two ORs. Is this significant? It is, because that range does not include 1. Remember that an OR of 1 means the two groups are the same. If we have a 95% CI from 0.97 to 2.01, for example, the true OR could be either less than 1 or more than 1. This is the case with neurologically intact survival at three months. Because the odds could have been higher in either group, we consider this a nonsignificant difference.
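For the curious, here’s how a 95% CI for an OR is commonly computed, using the standard log-scale (Woolf) method on a hypothetical 2x2 table (these counts are invented for illustration, not the trial’s actual data). The logic at the end is exactly the significance check described above: does the interval contain 1?

```python
import math

# Hypothetical counts: (survived, died) per group
a, b = 130, 3870   # epinephrine
c, d = 94, 3900    # placebo

or_hat = (a * d) / (b * c)                          # point estimate
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)        # SE on the log scale

lo = math.exp(math.log(or_hat) - 1.96 * se_log_or)  # lower 95% bound
hi = math.exp(math.log(or_hat) + 1.96 * se_log_or)  # upper 95% bound

significant = not (lo <= 1 <= hi)  # significant only if the CI excludes 1
print(f"OR {or_hat:.2f}, 95% CI {lo:.2f} to {hi:.2f}, significant: {significant}")
```

Notice the interval is built on the log scale and then exponentiated; that’s why ORs have asymmetric confidence intervals (the point estimate doesn’t sit in the middle of the range).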

The benefit of looking at the 95% CI for significance over p-values is that the 95% CI gives us an idea of the effect size. In other words, how big a difference is there? P-values don’t do this.^{1,2}

**References**

1. Aschwanden C. Statisticians Found One Thing They Can Agree On: It’s Time To Stop Misusing P-Values. FiveThirtyEight, https://fivethirtyeight.com/features/statisticians-found-one-thing-they-can-agree-on-its-time-to-stop-misusing-p-values/.

2. Bastian H. 5 Tips For Avoiding P-Value Potholes. PLOS Blogs, https://blogs.plos.org/absolutely-maybe/2016/04/25/5-tips-for-avoiding-p-value-potholes/.

*Jeffrey L. Jarvis, MD, MS, EMT-P, FACEP, FAEMS, is the chief medical director for FlightBridgeED, LLC. He also serves as EMS medical director for the Williamson County (Tex.) EMS system and Marble Falls Area EMS and an emergency physician at Baylor Scott & White Hospital in Round Rock, Tex. He is board-certified in emergency medicine and EMS. He began his career as a paramedic with Williamson County EMS in 1988 and continues to maintain his paramedic license.*