
Mobile Integrated Healthcare Part 2: MIH-CP Outcome Measures Tool Project

Last month EMS World launched a yearlong series that provides readers with a road map for developing MIH-CP programs. This series will address the following topics:

  • Strategic planning for rapid MIH-CP implementation;
  • Collaborations with home healthcare;
  • Updates on CMS Innovation Grants;
  • Accreditation of MIH-CP programs;
  • Profile of the MIH Summit at EMS on the Hill Day;
  • Payer perspectives for MIH-CP services;
  • Choosing MIH-CP practitioner candidates;
  • Education and training of MIH-CP practitioners;
  • MIH-CP programs in rural settings;
  • International models of MIH-CP.

This month we look at data metrics and strategic goals.

An increasing number of agencies within the federal Department of Health and Human Services, including the Agency for Healthcare Research and Quality (AHRQ) and the Centers for Medicare and Medicaid Innovation (CMMI), are supportive of efforts to advance healthcare innovation and value-based purchasing.

During recent updates provided to these agencies, officials have recognized the promising early results from several MIH-CP programs around the U.S. However, in order to help make the case for payment policy changes to support MIH-CP programs, we need to demonstrate with thousands of patients that the EMS-based MIH-CP service delivery model:

  1. Achieves the Institute for Healthcare Improvement’s Triple Aim;1
  2. Is scalable and replicable across many different communities and systems with common measures to be able to compare results across the country;
  3. Is structured for program integrity to help reduce the possibility of fraud and abuse.

Armed with this counsel, in April 2014 we embarked on an ambitious project to develop outcome measures for MIH-CP that will help address these three recommendations.

With the Round One Healthcare Innovation Award grants one year from expiration, as well as several other grant-funded MIH-CP programs underway, we knew we had a short window of six months in which to develop and seek stakeholder consensus on measures that could prove value and help make programs sustainable beyond the grant periods.

Framework and Reference Sources

We started by framing out the project and articulating early goals. The team wanted to ensure a focus on the IHI’s improvement methodology and measurement strategy, and on measures consistent with the goals of the Triple Aim, since external stakeholders would be familiar with those goals.

It also became apparent that there are three basic types of measures: program structure (how the program is put together to meet the goals); process (the way the intervention is carried out); and outcomes (what the result is from the intervention). Program structure measures include components like executive sponsorship, community needs/gap assessment documentation, strategic plan and sustainability plan. Process measures would be things like time from referral to enrollment, patient to provider ratios and cost of the intervention. While we felt that process measures were important, given such a short timeframe to demonstrate the value of MIH-CP services, we decided to focus first on outcome measures. Outcome measures include changes in healthcare utilization, patient health status and patient experience measures.

Since many of those on the Outcome Measures Tool team have had the opportunity not only to meet extensively with external stakeholders but also to present at numerous national conferences, we are familiar with the key questions being asked, and we attempted to address them in the Tool:

  1. Are these programs safe for patients?
  2. Are these programs providing quality services as defined by external stakeholders?
  3. What has been the impact of these programs on the rest of the healthcare system’s providers, such as primary care, specialty care and behavioral health?
  4. Do patients like the programs?
  5. Do providers conducting the MIH-CP services like the program?

Based on questions like these, and learning from healthcare and payer partners about the outcomes they want to track, we developed five outcome measure domains:

  • Quality of Care and Patient Safety
  • Experience of Care
  • Utilization
  • Cost of Care/Expenditure Savings
  • Balancing Metrics

Because one of the principal audiences for the Outcome Measures Tool is CMS, we also wanted to ensure that the “Big Four” measures routinely used by CMS to measure innovation effectiveness were included as a mandatory reporting requirement. In evaluating the impact of changes to the healthcare delivery system, CMS places a significant focus on hospital ED visits, all-cause hospital admissions, unplanned 30-day hospital readmissions and the total cost of care. We also researched measures that agencies such as AHRQ,2 the National Quality Forum (NQF)3 and other resources had developed and felt we could not only incorporate much of their work (such as definitions and measurement calculations) into the Outcome Measures Tool, but also utilize a similar format, one that healthcare system stakeholders would be familiar with.

We also recognized that much work had already been done, through a grant from the Department of Health and Human Services’ Health Resources and Services Administration Office of Rural Health, in the development of the Community Paramedicine Evaluation Tool published in 2012,4 and we wanted to incorporate as much of that work as possible into the MIH Outcome Measures Tool.

Program Integrity

We wanted to include program structure measures that demonstrate the MIH-CP program is more than simply payment for treat and release.

EMS and the ambulance industry have recently been identified as one of the fastest-growing Part B Medicare expenditures, and the growth in this spending is inconsistent with changes in the Medicare beneficiary population.5 In fact, the industry has been criticized for fraudulent billing, primarily for non-emergency, repetitive patients.6,7 CMS has launched a demonstration project in Pennsylvania, New Jersey and South Carolina under which non-emergency, repetitive services require preauthorization by CMS before being eligible for payment.8 Needless to say, we are on CMS’ investigative radar screen.

There were two excellent consensus documents we added to the resource list to help with the program structure measures: the September 2012 White Paper Mobile Integrated Healthcare Practice: A Healthcare Delivery Strategy to Improve Access, Outcomes, and Value,9 and the MIH-CP Vision Statement jointly developed by NAEMT and 10 other EMS Associations.10 These two documents list several “pillars” that define the foundations MIH-CP programs should be built upon in order to be successful. You will see these principles used in the MIH Outcome Measures Tool to help establish that the program being measured is, in fact, a formally established MIH-CP program.

Which Intervention?

There may be numerous interventions—or components—to an MIH-CP strategy in a local community. These could include community paramedicine, 9-1-1 nurse triage, nurse help line, ambulance transport alternatives, transitional response vehicles staffed with a paramedic and a nurse practitioner, station-based clinics, house call physicians or any other intervention that a gap analysis reveals could be of value in the local community. Each of these interventions could and should have its own outcome measures.

Given the timeframe in which we had to develop the initial draft, and the preponderance of interventions being conducted in communities across the country, the development team decided to first focus on developing the outcome measures for the Community Paramedic intervention.

As the measurement tool evolves as a living document, measures will be developed that are specific to those additional interventions. Some of the measures, such as the “CMS Big Four,” will remain the same, but some will be different. For example, if you are doing an ambulance transport alternatives intervention (taking low-acuity patients who accessed the 9-1-1 system to a clinic or PCP as opposed to an ED), you should be tracking the repatriation frequency: the frequency with which a patient taken to the alternate destination by ambulance ends up needing an ambulance to take them from that destination to the ED.
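As a rough sketch of how such a measure might be computed (the function name and figures are ours, not from the Outcome Measures Tool), the repatriation frequency is simply the share of alternate-destination transports that later required an ambulance transfer to the ED:

```python
def repatriation_frequency(alt_destination_transports, subsequent_ed_transfers):
    """Share of patients transported to an alternate destination (clinic/PCP)
    who subsequently needed an ambulance transfer from there to the ED.
    Names and inputs are illustrative, not from the Outcome Measures Tool."""
    if alt_destination_transports == 0:
        return 0.0
    return subsequent_ed_transfers / alt_destination_transports

# Hypothetical quarter: 6 of 120 alternate-destination transports
# later required an ambulance transfer to the ED
rate = repatriation_frequency(120, 6)
print(f"Repatriation frequency: {rate:.1%}")  # 5.0%
```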

Calculation Basis and Methods

One of the most interesting parts of developing the MIH Outcome Measures Tool was the discussion regarding how the outcomes should be calculated.

We’ve all read the reports in the media or at conferences about MIH-CP programs that have reduced 9-1-1 call volume by x%, or saved the local healthcare system $x million. We need to be very specific about how those numbers are calculated, for two reasons. First, the results need to be verifiable by outside agencies and peer-reviewed journals, as well as comparable between programs. Second, the calculations need to reflect actual changes to important measures of healthcare delivery. One of the great development discussions was the issue of “cost.” Many programs use the avoidance of billed charges as the “cost savings.” The issue with this measure is that billed charges do not equal money paid or money saved. Similarly, not sending an ambulance to a call does not really save the EMS agency any money unless you reduced ambulance staffing and therefore reduced your expenditures. The Outcome Measures Tool helps provide clarity to the cost-savings dilemma by defining expenditures and referencing several sources for published data on things like ED and hospital admission expenditures per episode.
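To illustrate the billed-charges-versus-expenditures distinction, here is a minimal sketch with entirely hypothetical figures (none of these numbers come from the Tool or from published data):

```python
# Hypothetical figures for illustration only; a real analysis should use
# published per-episode expenditure data, as the Outcome Measures Tool advises.
ed_visits_avoided = 40
billed_charge_per_visit = 2500.0   # what the hospital bills per ED visit
expenditure_per_visit = 750.0      # what is actually paid per ED episode

charge_based_savings = ed_visits_avoided * billed_charge_per_visit  # overstates impact
expenditure_savings = ed_visits_avoided * expenditure_per_visit     # defensible figure

print(charge_based_savings, expenditure_savings)  # 100000.0 30000.0
```

The gap between the two numbers is why "savings" claims built on billed charges are hard to defend to payers and peer reviewers.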

Another great discussion was the calculation of changes in utilization. Should the measure be per capita (ambulance responses per capita this year vs. last year)? Or should it be an absolute number year to year (ED visits to Mercy Hospital this year vs. last year)? What if Mercy sees 450 patients a day in the ED but only enrolls 100 patients per year into the program? The MIH-CP program may have little impact on overall ED utilization, but for the 100 patients referred, there is a 75% reduction in ED use (more on that in the Strategic Goals section below). What if the population or demographics of the community are changing? How does that impact utilization? In fact, ED utilization in any given community could be affected by many factors, including MIH-CP programs and other factors outside the control of the EMS provider.

The MIH Outcome Measures Tool attempts to resolve some of these issues by referencing the changes in utilization, health status and patient experience scores in enrolled patients over time. While comparing the patients’ utilization before their enrollment to their utilization after their enrollment is not the most robust way to calculate the impact from a statistical perspective, the team felt this was the only measure that could be universally captured by EMS agencies offering a community paramedic intervention.
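A minimal sketch of the pre/post-enrollment comparison described above (the function and the cohort figures are hypothetical, chosen to mirror the 75% example earlier):

```python
def percent_reduction(pre_enrollment_count, post_enrollment_count):
    """Percent reduction in utilization for an enrolled cohort, comparing
    equal-length periods before and after enrollment. Illustrative only."""
    if pre_enrollment_count == 0:
        return 0.0
    return (pre_enrollment_count - post_enrollment_count) / pre_enrollment_count * 100

# Hypothetical cohort of 100 enrolled patients: 400 ED visits in the
# 12 months before enrollment vs. 100 in the 12 months after
print(percent_reduction(400, 100))  # 75.0
```

Because it is scoped to enrolled patients, this calculation sidesteps community-level confounders like population growth, though, as noted, it is not the most statistically robust design.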

Outcome Measures Based on Strategic Goals

The most important part of reporting outcomes for any program is a clear definition of the strategic goal of the program. In other words, what problem is the program trying to solve? What was the gap in the healthcare system that an EMS-based MIH-CP program is now filling, and what has been the outcome of filling that gap? How do the funders, or potential funders, define value? The Outcome Measures Tool has a Program Structure requirement of a strategic planning document, such as the driver diagram described in last month’s column. The specific strategic goals of the program are not as important as the fact that they have been identified and articulated, so that the success of the MIH-CP program can be measured against the goals for establishing it.

There may be significantly different strategic goals upon which to measure success. Consider these two scenarios, which have completely different strategic goals, but both of which are valuable to the stakeholders.

Scenario #1: Mercy Hospital is strapped with a 2% readmission penalty costing it $1.5 million in lost revenue this year. It wants to reduce its 30-day unplanned readmission rate from the current 23% to 15% next year. The hospital projects this change will reduce its penalty from 2% to 0.7% and increase its revenue by $750,000 next fiscal year. More importantly, it will move the hospital from a “red bar” in the Hospital Compare database to a “green bar,” and the C-suite perceives that public perception as valuable. The hospital funds your agency $250,000 to enroll 100 of its highest-risk readmission patients and offers a $100,000 bonus if you can reduce the projected 100% readmission rate for those patients to 50%.
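Using only the figures in the scenario above, the hospital's business case can be sketched as simple arithmetic:

```python
# Figures taken directly from Scenario #1
revenue_recovered = 750_000   # from cutting the penalty from 2% to 0.7%
program_funding = 250_000     # paid to the EMS agency to enroll 100 patients
performance_bonus = 100_000   # paid only if the cohort's readmission rate falls to 50%

# Hospital's net benefit even in the case where the bonus is paid out
net_benefit = revenue_recovered - (program_funding + performance_bonus)
print(net_benefit)  # 400000
```

Even after funding the program and paying the full bonus, the hospital comes out $400,000 ahead, before counting the Hospital Compare reputational gain.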

Scenario #2: The local EMS chief is under significant budget pressure, and the city manager is planning a ballot initiative next year establishing an EMS levy to fund EMS operations and avoid layoffs and service delivery challenges. Having read several articles this year on failed levies, the city manager wants to use this year to build the community’s perception of the EMS agency’s value and increase the chances that the levy will pass. The EMS agency trains its existing staff to help high utilizers navigate the complex healthcare system and find the most appropriate sources of care. The program has numerous high-profile successes: Patients are interviewed in the media, and the local newspaper chronicles how the agency has improved patient outcomes and reduced expenditures from the county’s indigent care fund for ED visits by $350,000 for the 35 patients enrolled in the program. The community’s trust in the EMS agency and the perceived value of the services it provides are greatly enhanced. When the levy appears on the ballot, voters in the booth recall all the cool and valuable things the EMS agency is doing in the community and approve it 55% to 45%. Jobs saved and service levels assured! Strategic goal accomplished—for this year!

Next Steps

Developing outcome and quality metrics is a continual process that requires the Tool to be a living, evolving document. So far the Tool has been reviewed by agencies currently operating MIH-CP programs, and several provided feedback using a formal feedback tracking tool. Two face-to-face meetings have been held in conjunction with national EMS conferences. The feedback from the face-to-face meetings and from the agencies that submitted structured feedback has been exceptionally valuable. One of the clear messages is that the Tool contains a lot of measures; although the 43 currently developed measures will have significant value, perhaps the project could start with the “Top 10” that most agencies feel are both valuable and feasible to report. We are now polling the agencies, asking them to identify their Top 10, to see if there is commonality among the responses that can help select the initial 10 measures to report.

Several of the agencies conducting MIH-CP programs have been asked to start inputting numbers from their programs into the Tool to determine: a) whether they can track this data; and b) whether the formulas make sense and yield the outcome measures we as an industry are seeking to demonstrate the value of these programs.

Numerous stakeholder associations have been formally invited to participate in the expanded development team to refine the measures already developed, and to assist with the development of outcome measures for additional MIH interventions, as well as the process measures for those interventions.

In the coming months, we will be holding additional face-to-face meetings to review the progress of the Tool and present to external stakeholder groups such as AHRQ, NCQA and the Joint Commission, as well as the national payers who have expressed interest in the outcome measures for these programs, like CMS, Cigna, Humana and Aetna. We also plan to include large healthcare systems like HCA, Tenet, Baptist and Adventist to help determine their definition of "value" and to foster the growth of these programs in local communities with the support of the national organizations.

Call to Action

Clearly this document will evolve over time, and it is crucial to uniformly demonstrating the value of these programs and helping establish their long-term economic sustainability. We invite agencies offering any component of an MIH-CP program in their communities to participate in creating similar evaluation tools for these interventions. We also invite those not currently providing a program to give feedback on the metrics as they are developed. If you would like more information on how to participate, contact any of the authors.

Next Month: Updates on CMS Innovation Grants.

References

1. http://www.ihi.org/Engage/Initiatives/TripleAim/pages/default.aspx

2. http://www.ahrq.gov/research/index.html

3. http://www.qualityforum.org/Home.aspx

4. http://www.hrsa.gov/ruralhealth/paramedicine.html

5. https://oig.hhs.gov/oei/reports/oei-09-12-00350.asp

6. http://www.nytimes.com/2013/12/05/health/think-the-er-was-expensive-look-at-the-ambulance-bill.html?_r=0

7. http://www.bloomberg.com/news/2014-04-24/medicare-s-5-billion-ambulance-tab-signals-area-of-abuse.html

8. http://www.cms.gov/Newsroom/MediaReleaseDatabase/Fact-sheets/2014-Fact-sheets-items/2014-05-22-3.html

9. http://info.modernhealthcare.com/rs/crain/images/Medtronic_Download_12-9.pdf

10. http://www.emsworld.com/news/11307570/ems-organizations-collaborate-on-new-vision-statement-for-mobile-integrated-healthcare-and-community-paramedicine

Matt Zavadsky, MS-HSA, EMT, is the public affairs director at MedStar Mobile Healthcare, the exclusive emergency and non-emergency EMS/MIH provider for Fort Worth and 14 other cities in North Texas. Matt has helped guide the implementation of several innovative programs with healthcare partners that have transformed MedStar fully as a Mobile Integrated Healthcare provider, including high utilizer, CHF readmission reduction, observational admission reduction, hospice revocation avoidance and 9-1-1 nurse triage programs.

Brenda Staffan is the project director for the $10 million CMS Health Care Innovation Award grant that was awarded to REMSA in Reno, NV. In the prior four years, she served as the executive director of the California Ambulance Association (CAA). She has served on the American Ambulance Association (AAA) Board of Directors and is a co-author of the AAA’s EMS Structured for Quality (2008) guide. Contact her at bstaffan@remsa-cf.com.

Dan Swayze, DrPH, MBA, MEMS, is the vice president for the Center for Emergency Medicine in Pittsburgh, PA.

 
