Monteiro S, Melvin L, Manolakos J, Patel A, Norman G. Evaluating the effect of instruction and practice schedule on the acquisition of ECG interpretation skills. Perspect Med Educ. 2017 Aug;6(4):237–245.
Evidence from the fields of cognitive psychology and education supports several teaching and studying strategies that promote effective learning. Several decades of research show that distributed instruction (presenting material in short segments over a period of time) results in deeper learning than massed instruction (presenting material in a single long session). More recent research suggests that testing students on multiple integrated topics, a practice known as interleaving, may be superior to the more common practice of blocking, or testing students by topic area, for long-term learning.
It is thought that blocking by subject area emphasizes similarities, while interleaving emphasizes differences, encourages comparison, and enhances higher-level critical thinking. Applications from diverse subject areas, such as music, math, and art, have demonstrated positive effects of distributed instruction and interleaved content. But these strategies, despite their potential benefit to student outcomes, are not widely adopted in medical education. Also, it is unknown if these strategies would be applicable to clinically complex skills such as ECG interpretation.
Researchers in Canada attempted to address this gap through an intervention study designed to examine the effect of instructional and practice methods on the acquisition of ECG interpretation skills by 80 first-year undergraduate medical students. Participants were randomly assigned to one of four groups: massed-blocked, massed-interleaved, distributed-blocked, and distributed-interleaved. The groups that received massed instruction attended a one-day, 3.5-hour workshop during which three modules on ECG interpretation were presented: 1.5 hours on STEMI, 1 hour on narrow-complex tachycardias, and 1 hour on heart blocks. The groups that received distributed instruction attended three separate sessions over a three-week period covering the same topic areas and using the same materials, presentations, and instructors. In this way, the researchers controlled for variability in the instructional content and varied only the instructional delivery (massed versus distributed).
Then each group was further divided into two practice groups. The blocked group practiced ECG interpretation skills immediately following each segment and only on the topic area covered in that segment (blocking). For example, at the conclusion of module 1 (STEMI), students practiced problems on a variety of STEMI ECGs. At the conclusion of module 2 (narrow-complex tachycardias), students practiced interpreting ECGs of the types covered in module 2, and so forth. The interleaving group practiced interpreting ECGs that contained all diagnoses covered in the three modules and only after completing the instruction on all three modules.
The primary outcome was performance on two separate post-tests designed to measure short-term learning (an immediate test) and retention (a delayed test two weeks later). Differences in test scores between groups, types of instruction (massed vs. distributed), and methods of practice (blocked vs. interleaved) were examined using statistical tests such as analysis of variance (ANOVA). Test item analysis was also performed to ensure validity of the summative tools.
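For readers unfamiliar with how a design like this is analyzed, the sketch below shows a balanced two-way (2×2) between-subjects ANOVA of the kind described, computed by hand. To be clear, this is not the authors' code or data: the scores are made up, the group sizes are hypothetical, and the point is only to illustrate how the two main effects (instruction, practice) and their interaction are separated.

```python
# Illustrative only: a balanced 2x2 between-subjects ANOVA on synthetic
# test scores. Factor A = instruction (massed vs. distributed),
# factor B = practice (blocked vs. interleaved). All numbers are made up.
import numpy as np

rng = np.random.default_rng(0)

n = 10  # hypothetical students per cell (the real study was unbalanced)
cells = {
    ("massed", "blocked"):          rng.normal(35, 8, n),
    ("massed", "interleaved"):      rng.normal(36, 8, n),
    ("distributed", "blocked"):     rng.normal(45, 8, n),
    ("distributed", "interleaved"): rng.normal(46, 8, n),
}

def two_way_anova(cells, n):
    """F statistics for the two main effects and the interaction
    in a balanced 2x2 design (each effect has 1 degree of freedom)."""
    grand = np.mean([s for scores in cells.values() for s in scores])
    a_levels = sorted({a for a, _ in cells})
    b_levels = sorted({b for _, b in cells})
    # Marginal means for each level of each factor
    a_means = {a: np.concatenate([cells[(a, b)] for b in b_levels]).mean()
               for a in a_levels}
    b_means = {b: np.concatenate([cells[(a, b)] for a in a_levels]).mean()
               for b in b_levels}
    cell_means = {k: v.mean() for k, v in cells.items()}
    # Sums of squares for main effects, interaction, and within-cell error
    ss_a = n * len(b_levels) * sum((m - grand) ** 2 for m in a_means.values())
    ss_b = n * len(a_levels) * sum((m - grand) ** 2 for m in b_means.values())
    ss_ab = n * sum((cell_means[(a, b)] - a_means[a] - b_means[b] + grand) ** 2
                    for a in a_levels for b in b_levels)
    ss_within = sum(((v - v.mean()) ** 2).sum() for v in cells.values())
    ms_within = ss_within / (len(cells) * (n - 1))
    return {"instruction": ss_a / ms_within,
            "practice": ss_b / ms_within,
            "interaction": ss_ab / ms_within}

print(two_way_anova(cells, n))
```

With synthetic data built to favor distributed instruction, the instruction F statistic tends to dominate, mirroring the pattern the study reports: a main effect of delivery schedule, little effect of practice schedule, and no interaction.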
Of the 80 recruited medical students, 10 dropped out of the study before completing the immediate test, and an additional 19 dropped out before the final retention test (n=51). This left an unequal distribution among the four groups for analysis. Overall, final test scores were low, with a mean delayed test score of 27% compared with immediate test scores of 38%, 51%, and 51% for the three modules. The analysis between groups showed a significant positive effect of distributed instruction over massed instruction, consistent with previous studies in a variety of subject areas over the past few decades. But the study failed to demonstrate a positive effect of interleaving practice problems and found no interactions within groups.
Of course, the small number of participants remaining at the time of the delayed test limits any conclusions about retention. Also, the authors note that in an attempt to avoid interference with study design parameters, they provided minimal feedback to students and did not encourage the use of outside resources—two educational practices important and relevant to student learning! The authors also acknowledge the overall low scores on both immediate and delayed tests but attribute them to the complexity of the subject matter rather than the deliberate removal of best teaching practices (feedback, multiple resources).
I think this study contributes to our knowledge in several ways. First, it’s a good basic template for further research that could be conducted within and across EMS programs to quantify the impact of defined instructional strategies. Second, ECG interpretation was an excellent choice of subject matter, since it’s typically structured in topical segments and has broad applications as an educational component of many health professions programs. Finally, in a misguided effort to hold study conditions steady, the researchers removed several vital best teaching practices. This reminds us that an educational research project must always maintain educational integrity first and foremost and not lose sight of the end goal of student learning just for the sake of the research.
Megan Corry, EdD, EMT-P, is the program director and full-time faculty for the City College of San Francisco paramedic program and on the board of advisors of the UCLA Prehospital Care Research Forum.