INTRODUCTION

Panel management (PM) is a proactive, team-based approach to population health (PH) that systematically addresses gaps in care1,2,3,4 and often utilizes electronic health record (EHR) technologies to provide performance feedback in the form of clinic measures or quality metrics.5 Internal medicine (IM) residents in graduate medical education (GME) are expected to become competent in utilizing EHR data and performing PM, and to gain skills that meet the core competency of practice-based learning and improvement (PBLI) set by the Accreditation Council for Graduate Medical Education (ACGME).6,7 The ACGME defines PBLI as “one of the defining characteristics of being a physician,” with emphasis on personal responsibility for lifelong learning and continuous self-evaluation to improve patient care.6 Self-directed learning (SDL) is therefore an integral skill for physicians to develop: it is the process by which individuals take initiative, with or without help, to identify learning needs, formulate goals, identify and utilize resources, apply learned knowledge, and evaluate learning outcomes.8,9,10,11,12

Previous PM curricula have been associated with modest improvements in quality metrics and confidence in PM skills13,14,15 and have highlighted the importance of self-reflection in addressing gaps in performance.16,17,18 Longitudinal EHR-driven feedback has been shown to help residents develop competency in accepting and applying performance feedback data, reflecting the growing role of technology in GME and patient care.19 Residents are often provided performance data through panel registries or perform chart audits that lack regular interval feedback.15,17,18,20,21,22,23 These curricula usually center on a set topic tied to a clinic performance measure, which may not address residents’ varied interests or needs across postgraduate years.13,14,15,18,24,25

We sought to create a PM curriculum that integrates SDL and interval performance feedback using EHR-based tools while also allowing learner autonomy to set learning and patient care goals to address concepts of PM and PH. The purpose of this paper is to describe the development, implementation, and evaluation of our curriculum.

SETTING AND PARTICIPANTS

The PM curriculum was a requirement of the University of Vermont Medical Center IM residency program at its mixed faculty-resident practice in Burlington, VT, and was implemented from July 2019 through mid-June 2020. The resident clinic consists of 35 residents (27 categorical and 8 primary care residents) divided into 5 cohorts that rotate through clinic every fifth week (4+1 block schedule). Each resident has their own panel of approximately 80–100 patients.

PROGRAM DESCRIPTION

We structured the curriculum on the theoretical model proposed by Sawatsky et al. for SDL in the GME setting, in which a “trigger” helps uncover a knowledge gap, after which learners form learning objectives, utilize resources to gain knowledge, apply that knowledge, and self-reflect on learning outcomes.9

Each clinic week, residents participated in a 30-min precepted session with faculty in which they received a care gap report that acted as the “trigger” to review and reflect on their performance and uncover knowledge gaps in learning and patient care. Care gap reports consisted of the percentage of “care gaps,” or overdue health maintenance items, that were closed during the previous continuity clinic week. Residents also had access to individualized dashboards, which displayed clinic performance measures for their patient panels. Residents formulated learning objectives by reviewing their data and setting goals for the week; they were given autonomy to choose a care gap goal, such as diabetes management (e.g., foot exams, A1c checks), and a learning goal (e.g., how to manage hyperglycemia). Residents had 1 half-day per clinic week to enact their plan of self-study, explore educational resources, identify patients overdue for care, and coordinate patient outreach with support staff.
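The report metric itself reduces to a simple proportion. As a minimal sketch (in Python; the column names and tabular export format are illustrative assumptions, since the paper does not describe the underlying EHR schema), the weekly figure could be computed as:

```python
import pandas as pd

# Hypothetical export of health maintenance items; column names are
# illustrative, not the actual EHR schema used in the curriculum.
items = pd.DataFrame({
    "resident":      ["A", "A", "A", "B", "B"],
    "clinic_week":   [1, 1, 1, 1, 1],
    "open_at_start": [True, True, True, True, True],   # gap open when the week began
    "closed":        [True, False, True, False, True], # completed during the week
})

# Percentage of open care gaps closed per resident per clinic week --
# the figure fed back to residents as the weekly "trigger."
report = (
    items[items["open_at_start"]]
    .groupby(["resident", "clinic_week"])["closed"]
    .mean()
    .mul(100)
    .rename("pct_care_gaps_closed")
    .reset_index()
)
print(report)
```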

PROGRAM EVALUATION

Program evaluation included pre- and post-curricular surveys, weekly clinic surveys during the curriculum, and post-curriculum focus groups. The surveys and interview guide were developed by the authors. Pre- and post-surveys, administered between July 2019 and June 2020, were adapted to assess ease of use of the dashboard and EHR tools26 and consisted of multiple-choice questions using a 5-point Likert scale (strongly agree to strongly disagree). The weekly clinic surveys, administered from August 2019 to early June 2020, consisted of multiple-choice questions and open-ended responses assessing care gap and learning goals, successes and barriers, educational tools utilized, and confidence in PM skills. All survey data were managed using REDCap electronic data capture tools hosted at the University of Vermont.27 Focus groups capturing resident experiences and perceptions were conducted during April and May 2020 and were moderated by an experienced facilitator (L.A.H., K.N.H., or A.G.K.) using an interview guide.

Care gap data were collected from the EHR; the primary outcome of interest was the rate of care gap closure, used to assess resident behavior change and curriculum outcomes. Given the substantial impact of the onset of the COVID-19 pandemic on preventive care, three time periods were evaluated: the pre-intervention period (July 2018–June 2019), the first intervention period (July 2019–March 2020), and the second intervention period (COVID-19 period; April through early June 2020).

Stata 16.1 (StataCorp, College Station, TX) was used for all quantitative analyses, with p<0.05 considered statistically significant. Agreement between pre- and post-survey responses was compared using McNemar’s test. Weekly clinic surveys were analyzed descriptively. The care gap closure rate was analyzed using an interrupted time series analysis across the three time periods. Focus group transcripts were evaluated using qualitative content analysis through an iterative process until major themes were identified.
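For illustration, the general form of both quantitative analyses can be sketched as follows (in Python with statsmodels; the study’s actual analyses were run in Stata, and the synthetic data, interruption cut points, and paired-response counts below are assumptions for demonstration only). The interrupted time series is expressed as a segmented regression with level and slope change terms at each interruption:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.contingency_tables import mcnemar

rng = np.random.default_rng(0)

# --- Interrupted time series as segmented regression (synthetic data) ---
# 24 months, Jul 2018-Jun 2020; interruptions at month 12 (curriculum start,
# Jul 2019) and month 21 (COVID-19 period, Apr 2020).
months = pd.date_range("2018-07-01", "2020-06-01", freq="MS")
df = pd.DataFrame({"time": np.arange(len(months))})
df["rate"] = rng.uniform(3, 21, len(months))     # placeholder closure %
df["post1"] = (df["time"] >= 12).astype(int)     # level change at intervention
df["post2"] = (df["time"] >= 21).astype(int)     # level change at COVID-19
df["t_since1"] = np.maximum(df["time"] - 12, 0)  # slope change after intervention
df["t_since2"] = np.maximum(df["time"] - 21, 0)  # slope change after COVID-19

its = smf.ols("rate ~ time + post1 + t_since1 + post2 + t_since2", data=df).fit()
print(its.summary())

# --- McNemar's test on paired pre/post responses (illustrative counts) ---
# Rows: pre agree/disagree; columns: post agree/disagree.
table = [[12, 3],
         [9, 8]]
print(mcnemar(table, exact=True).pvalue)
```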

According to the policy defining activities which constitute research at the University of Vermont and University of Vermont Medical Center, this work met criteria for improvement activities exempt from ethics review.

RESULTS

Of the 35 residents who participated in the curriculum, 91% (32) completed the pre- and post-surveys, 80% (28) completed at least six of the eight weekly clinic surveys, and 83% (29) participated in the focus groups. Data from weeks 7–8 were excluded due to disruption from the pandemic; residents who completed at least six surveys were therefore included, yielding 168 weekly surveys for analysis.

Pre- and post-survey responses showed that post-curriculum, residents were more likely to report that the dashboard was useful in their job, easy to use, clear and understandable, and likely benefited patients’ health overall (p<0.05). Weekly clinic surveys revealed common successes and barriers (Table 1). Across the 168 surveys, the most frequently reported care gap goal was social determinants of health screening (11%; 19), followed by behavioral health screening (8%; 14) and advance directive completion (8%; 14). Focus groups revealed that residents valued protected time, dashboard data, perceived high-yield resources, peer and faculty mentorship, autonomy to set learning and care gap goals, and an overall increased sense of panel ownership. However, residents reported that goal setting felt arbitrary when it was not pertinent to patients seen during a given clinic week, and they did not feel accountable for patients outside of their panel. Residents felt that lack of continuity with their patient panel was a barrier to PM and patient care. Many residents found that the care gap reports were difficult to interpret and did not represent their performance or efforts.

Table 1 Reported Successes and Barriers: Weekly Survey Data Averages (N=168)a

There were 13,313 patient visits available for analysis: 7395 in the pre-intervention period, 4697 in the first intervention period, and 1221 in the COVID-19 intervention period. The percentage of care gaps closed per month ranged from 3 to 21%. Overall, there was statistically significant variation in the care gap closure rate across the three time periods (p<0.001) (Fig. 1). There was no significant difference in the rate between the pre-intervention period and the first intervention period (p=0.44). The rate of care gap closure was significantly higher in the COVID-19 period than in the first intervention period (p<0.001). Across all time periods, fall risk screening and behavioral health screening were among the top five care gaps closed.

Figure 1 Interrupted time series analysis of monthly care gap closure trends during three time periods: prior to the educational intervention (12 months), during the intervention (9 months), and continuing the intervention during the COVID-19 pandemic (3 months).

DISCUSSION

We developed a longitudinal PM curriculum that incorporated EHR-driven performance feedback and SDL concepts to address the PBLI competency required by the ACGME. Survey and focus group data showed that EHR tools, individualized goal setting, protected time, and mentorship were well received and important for resident engagement and patient ownership, but that performance feedback must be accurate, timely, and easy to interpret to be acceptable to residents.

To our knowledge, ours is the only PM curriculum in the literature to give residents autonomy in choosing both care gap metrics and learning goals. Previous curricula have allowed autonomy in goal setting specific to performance or allowed residents to come to a consensus as a group, but the quality metrics were still assigned.19,28 Focus groups revealed that residents felt empowered by and appreciated this autonomy, but they found it problematic when goals were not relevant to patient visits. Previous work has suggested using resident-sensitive metrics, that is, metrics that are actionable and appropriate for resident panel populations and align with educational goals.19,29,30,31 Continuous review of whether metrics are appropriate for residents may be needed. It is also plausible that more time would increase the likelihood of pertinent patient visits, but future work is needed to address how to better align learner needs with patient care.

Most residents rejected the care gap reports as a helpful tool because they did not feel the reports represented their perceived performance or efforts. Residents elaborated that care gaps were not always completed for reasons outside of their control, such as EHR tools not capturing completion, continuity issues, competing patient priorities, and time constraints. The reports’ lack of context also proved problematic, since residents could not easily reflect on the data and attribute it to specific encounters. These limitations of EHR metrics and barriers to PM and patient care are not specific to GME and remain prevalent in real-world primary care. Residents may benefit from more guidance, since a previous curriculum found that faculty-guided interpretation helped residents gain competency in receiving, interpreting, and applying performance feedback by addressing accuracy concerns and preventing defensive reactions.19 A common challenge for educators is balancing the tension between assessment and feedback to promote learning and growth;32 future iterations of this curriculum may benefit from continuing to shift resident perceptions of EHR-driven performance feedback toward a formative tool rather than a summative assessment.

Despite an increased sense of patient ownership, multiple residents voiced the concern that performance metrics should not be based on patients outside of their panel. We would argue that in primary care, both residents and attendings may see colleagues’ patients from time to time, and the broader perspective of PH does not draw lines at individual panels. This suggests a need for more emphasis and education around concepts of PH and a team-based approach to patient care.

Resident behavior change and patient outcomes were assessed through the care gap closure rate, which trended toward improvement but did not reach statistical significance in the pre-pandemic period; the intervention period may have been too short to capture significant change. Unsurprisingly, care gap closure percentages fell at the onset of the pandemic. The subsequent rise in the care gap completion rate across the COVID-19 period was robust at a time when patient access was still limited for most preventive care and primarily telehealth based. This finding, together with behavioral health and social determinants of health ranking among the top care gaps completed during that period, may reflect residents’ awareness of and interest in continuing to address preventive health needs that were particularly pertinent and accessible amidst the challenges of the pandemic, as well as improved proficiency in utilizing the EHR to deliver care.

There are several limitations to note. The surveys and interview guide were not piloted or tested for validity evidence. Data were self-reported and therefore subject to recall bias. Like most studies in GME, ours had a limited assessment of behavior change and patient-level outcomes.33 This curriculum may only be feasible in programs with an X+Y model and protected time for PM. Finally, the findings of this single-institution evaluation may not be generalizable to other residencies.

More work is needed to investigate how PM curricula can not only prepare residents for future practice but also impact patient outcomes.