EMS agency decision-making is blind, but there is hope for a cure. Until recently, leaders were forced to use anecdotes and personal intuition to guide their decisions. This approach wasn’t perfect: It often led to poor outcomes and rarely led to progress.
This isn’t a dig at them; they really didn’t have many other options. Try looking at the performance of your system using only thin pieces of dead trees and see how far you get. Fortunately there are now better alternatives.
The widespread adoption of electronic patient care records (ePCRs) has created many possibilities for data-driven decision-making. It is now relatively easy to use data to better understand our systems’ performance. This understanding creates the foundation for making better decisions. With well-defined key performance indicators, a system can measure its performance, then craft small-scale changes to its operations and rapidly see their impact. In this way real progress is possible in ways that are clear and easily communicated.
In addition to agency-level data, we can now combine data from multiple agencies in an anonymous fashion to create massive data sets. When these data sets are made available to leaders and researchers (in a fashion blinded to system- or patient-identifying information), progress of a different sort is possible: We can create clinical and operational benchmarks to compare our progress to how the rest of the nation is doing.
We can also use this data to expand the knowledge that defines our practice. Formal randomized, controlled trials (RCTs) are the ultimate way to test new treatment hypotheses. These hypotheses have to come from somewhere, though. They also need to have some preliminary evaluation in a more accessible and affordable fashion than an RCT. That’s where the concepts of data exploration and hypothesis generation come in.
Hypothesis generation is the process by which results from preliminary data analysis suggest more specific clinical questions. For example, let’s say a large data set of cardiac arrest patients with shockable rhythms is analyzed to determine whether more patients were discharged from the hospital if they were given amiodarone compared to lidocaine. If this initial data analysis of existing records suggests there was no difference in outcome, we now have a very good (and clinically relevant) question to address with an RCT. As it turns out this was done, and the results of the RCT confirmed there is no difference in meaningful survival between these two drugs.1
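This kind of preliminary comparison boils down to a 2×2 table: drug given versus outcome. A minimal sketch of the analysis, using entirely hypothetical counts (not the actual ESO or trial figures) and a chi-square test computed from first principles, might look like this:

```python
import math

def chi_square_2x2(a, b, c, d):
    """Chi-square statistic and two-sided p-value (1 df) for a 2x2 table:

                  discharged  not discharged
    amiodarone        a             b
    lidocaine         c             d
    """
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    # For 1 degree of freedom, P(X >= chi2) = erfc(sqrt(chi2 / 2))
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

# Hypothetical counts for illustration only
chi2, p = chi_square_2x2(240, 760, 250, 750)
print(f"chi2 = {chi2:.2f}, p = {p:.2f}")  # a large p-value: no detectable difference
```

A result like this doesn’t prove the drugs are equivalent; it simply flags a clinically relevant question worth the expense of a proper RCT.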
ESO Solutions, a vendor in the ePCR market, made a copy of its 2017 data set available for research. This data set has no personal- or agency-identifying information in it but does contain blinded data on more than 5 million EMS calls from over 900 different EMS agencies in 48 states. Working with ESO, we used this data set to conduct preliminary exploration and analysis of several interesting topics.
Pediatric patients come in all shapes and sizes. Many medications given to children are weight-based. So how often do we document weights on our pediatric patients?
We identified all patients under 15 years old and calculated the proportion who had a documented weight. The results were a bit concerning: Out of more than 187,000 patients, only 57% had a weight documented. While we might think we’d be better at documenting weights on smaller, younger children, this wasn’t the case: Infants (under 2 years) were 13% less likely to have a documented weight compared with those between 2–10 years old.
There was some good news, however: We were more likely to document a weight for those patients to whom we gave a weight-based medication. Of these children, 78% had a weight documented. Still not optimal, though.
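The underlying query is simple: filter to pediatric patients, then count how many have a weight on the record. A minimal sketch, using hypothetical field names (the real ePCR schema will differ) and made-up sample records:

```python
# Hypothetical ePCR records; field names are illustrative, not the real schema
records = [
    {"age_years": 1,  "weight_kg": None, "weight_based_med": False},
    {"age_years": 4,  "weight_kg": 18.0, "weight_based_med": True},
    {"age_years": 9,  "weight_kg": 30.5, "weight_based_med": False},
    {"age_years": 14, "weight_kg": None, "weight_based_med": True},
    {"age_years": 40, "weight_kg": 82.0, "weight_based_med": False},  # adult: excluded
]

# Pediatric patients: under 15 years old, per the analysis above
peds = [r for r in records if r["age_years"] < 15]
documented = [r for r in peds if r["weight_kg"] is not None]
print(f"{len(documented)}/{len(peds)} pediatric patients had a documented weight")
```

The same filter-and-count pattern, applied to the subset who received a weight-based medication, yields the 78% figure.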
You may have noticed that from a data set of more than 5 million EMS calls, only 187,000 were children. It turns out that over 95% of all 9-1-1 calls in this data set were for adults.
Of the calls for pediatric patients, the largest share (41%) was for trauma. Other reasons included seizures (18%), no complaint at all (17%), some type of respiratory complaint (15%), and fever (9%). Together these complaints accounted for 56% of all pediatric calls.
Airway management is one of the core skills of EMS personnel. Advanced techniques, including endotracheal intubation, supraglottic airway insertion, and some type of front-of-neck access (FONA), are potentially lifesaving procedures when performed well and for the right patients.
There is also a growing body of evidence suggesting that, at least with ETI, first-pass success (FPS), or getting the tube in correctly on the first attempt, is important and associated with fewer adverse events. How are we doing when it comes to airway management?
There were just under 70,000 airway procedures represented in the data set. ETI attempts represented 47% of those, along with CPAP (34%), SGAs (17%), and some type of FONA (0.7%). Combined, these attempts were successful 74% of the time. The per-attempt success rate varied by method used, with SGA being the most successful at 88%, ETI at 69%, and FONA the least successful at 44%.
FONA attempts were surprisingly unsuccessful. Surgical cricothyrotomy had the highest success rate (77%), but across all FONA methods, fewer than half of attempts succeeded. The various commercial devices had very low success rates, ranging from 0%–8%. Even needle cricothyrotomy was successful just 18% of the time. Clearly this leaves room for improvement.
If we look only at first attempts, SGA was the most successful, at 92%. ETI was successful on the first attempt 78% of the time. The use of medications to facilitate intubation was associated with improved FPS.
Rapid-sequence intubation was successful 89% of the time on first attempt compared with non-RSI at 77%. Across all methods, the odds of success on the first attempt were more than fivefold higher than on all additional attempts combined (OR 5.63).
Age was also associated with intubation FPS. When compared with adults, FPS was 56% less likely in infants and 51% less likely in children aged 2–10 years. Teens had the same FPS rate as adults, and the odds of FPS in the elderly were 42% higher than for adults.
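An odds ratio like those above comes straight from a 2×2 table of attempts versus outcomes. A quick sketch, with hypothetical counts (not the actual ESO figures):

```python
def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table:

                     success  failure
    first attempt       a        b
    later attempts      c        d
    """
    return (a / b) / (c / d)  # equivalently (a * d) / (b * c)

# Hypothetical counts for illustration only
print(odds_ratio(780, 220, 390, 610))
```

Note that an odds ratio is not a ratio of success *rates*; because odds are p/(1−p), the OR exaggerates the difference relative to a simple risk ratio when success is common.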
Painful conditions are a common reason for calling 9-1-1. More than a third (34%) of all 9-1-1 patients had an initial pain score above 5. Unfortunately, 58% of patients did not have even one pain score documented.
Patients received a medication in roughly a third of 9-1-1 calls. Narcotics were the most common type of pain medication administered, with fentanyl representing over two-thirds of all narcotics given. Women, African-Americans, Hispanics, and infants were less likely to receive pain medication than men, whites, and adults.
While these findings are thought-provoking and can lead to further research, they should be interpreted within the context of several limitations. Large data sets like these can easily reveal statistically significant differences, but that doesn’t mean those differences are meaningful. We must remember that these are for generation of hypotheses, not confirmation.
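The large-sample caveat can be made concrete. A sketch of a standard two-proportion z-test (assuming equal group sizes, built only from the normal approximation) shows that a clinically trivial 0.1-percentage-point difference becomes "statistically significant" once each group has millions of records:

```python
import math

def two_proportion_z(p1, p2, n):
    """Two-sided p-value for a two-proportion z-test with equal group sizes n."""
    pooled = (p1 + p2) / 2
    se = math.sqrt(pooled * (1 - pooled) * 2 / n)
    z = abs(p1 - p2) / se
    return math.erfc(z / math.sqrt(2))

# Same tiny effect, two sample sizes (hypothetical numbers):
print(two_proportion_z(0.101, 0.100, 1_000))      # not significant
print(two_proportion_z(0.101, 0.100, 5_000_000))  # highly "significant"
```

The difference is identical in both cases; only the sample size changed. Statistical significance tells us an effect is unlikely to be chance, not that it matters clinically.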
Exploration of existing data sets can only show associations between two things—they cannot prove one thing caused the other. As is often said, correlation does not equal causation. The truth of this is best demonstrated with an example: Let’s say a study is conducted involving a widely available drug. Investigators discover that everyone taking it has diabetes. They conclude the drug caused the diabetes. The catch? The drug is insulin. Insulin doesn’t cause diabetes. Its use is, however, correlated with diabetes.
Finally we must also understand that these large data sets are only as good as the data put into them. The concept of GIGO (garbage in, garbage out) applies here. If the data being put in is not consistent, the conclusions drawn are less valid.
So, where do we go from here? What should we do with this data? Today’s data isn’t perfect, but that doesn’t mean it isn’t useful. Existing data sets are very good for hypothesis generation. We should use them for that. In the process we should identify areas and ways by which these data sets can be improved.
Here are some good starting places: As an industry, we need to agree on, adopt, and teach a common set of data definitions (e.g., what is an intubation attempt? How should a STEMI be documented?). We should build robust data-validation rules and fine-tune the work on performance measures begun by the EMS Compass initiative.
We also need to work hard to include medics at all levels of our organizations and in all phases of their careers. All of us need to understand the potential that lies in good data. We need everyone’s buy-in to achieve this potential.
Just because EMS decision-making was blind in the past doesn’t mean it must remain so in the future. We now have the tools available to promote data-driven decision-making. We should embrace this opportunity and run with it.
1. Kudenchuk PJ, Brown SP, Daya M, et al. Amiodarone, lidocaine, or placebo in out-of-hospital cardiac arrest. N Engl J Med. 2016;374:1711–22.
Jeffrey L. Jarvis, MD, MS, EMT-P, FACEP, FAEMS, is EMS medical director for the Williamson County EMS system and Marble Falls Area EMS and an emergency physician at Baylor Scott & White Hospital in Round Rock, Tex. He is board-certified in emergency medicine and EMS. He began his career as a paramedic with Williamson County EMS in 1988 and continues to maintain his paramedic license. Follow him on Twitter at @DrJeffJarvis.