Providing mental health services in K-12 schools has become a vast and critical undertaking as research continues to demonstrate an alarming increase in the need for outpatient and emergency treatment for children and adolescents (Plemmons et al., 2018). The current 12-month prevalence rate of mental illness for adolescents is estimated to be 40.3% (Bagalman & Cornell, 2018). According to a 2019 report by the Substance Abuse and Mental Health Services Administration (2020), among the 3.8 million adolescents who reported a major depressive episode in the past year, nearly 60% did not receive any treatment. Of the adolescents who do receive mental health treatment, almost two-thirds receive it through school only (National Association of School Psychologists, 2021). Addressing mental health needs in schools makes students more likely to receive and continue services without the complications of traditional outpatient care, such as long waiting lists, a lack of after-school appointment times, and long drives from home to the provider’s office (Swick & Powers, 2018; Weist, 2005). Having providers on site to deliver mental health services throughout the school day, once a novel solution to student care, has over the last two decades come to be accepted “virtually everywhere” in the public school system (Shernoff et al., 2017). This expansion of mental health services has been termed “Expanded School Mental Health” (ESMH) and now has national organizations, multi-state coalitions, annual conferences, and academic journals devoted to its application.

Implementation of school mental health services through ESMH requires participation and cooperation from many parties (e.g., parents, school administration, universities, third-party reimbursement). However, the overwhelming responsibility of introducing such programs and conducting mental health service delivery falls on interdisciplinary teams that include school psychologists, school counselors, and social workers (McCance-Katz, 2019; Teich et al., 2008). These service providers have complex jobs with many, sometimes overlapping, areas of knowledge and skills. School psychologists, school counselors, and social workers may be employees of school districts or may be contracted from a district cooperative, mental health agency, or private practice to provide services. They often work directly with school administrators, community agencies, students, and families to help promote student success (American School Counselor Association, 2018; National Association of School Psychologists, 2017a; National Association of Social Workers, 2017). These mental health providers are available to schools in a variety of combinations depending on factors such as funding, administration preference, or availability (Locke et al., 2016; Markle et al., 2014; Teich et al., 2008). Some schools may have just one mental health provider, who could be a school psychologist, school counselor, or social worker. Other schools may have one of each provider, as well as additional contracted service providers who offer services not otherwise covered by school employees. Because of these myriad employment variations, much of the literature fails to stratify data by profession and instead lumps all providers together when researching mental health services in schools (Teich et al., 2008; Weist et al., 2019).

Although ESMH services may draw on skills and abilities from a variety of practitioners with different education and licensure backgrounds, each provider has a framework and guidelines to structure their practice, provided by their graduate training and leadership organizations, ASCA, NASP, and NASW, all of which encourage and promote the use of evidence-based practice (EBP; Mullen et al., 2008). EBP may be represented in the literature by other names, such as evidence-based interventions (EBIs), evidence-based practice in psychology (EBPP), and empirically supported treatments (ESTs), but for consistency throughout this study, the term EBP will be used. While EBIs and ESTs are specific psychological treatments that have been shown to be efficacious in controlled clinical trials, the complete scope of EBP includes a broader range of clinical activities, such as psychological assessment, case formulation, and building therapeutic relationships (APA, 2005). EBP is part of a national undertaking stemming from the medical, education, and prevention science fields, and it carries the expectation that mental health care should meet the same “best practices” standard as treatment received during an appointment with a physician (Institute of Medicine, 2001, p. 147). According to the American Psychological Association (APA, 2005), the purpose of EBP is to “promote effective psychological practice and enhance public health by applying empirically supported principles of psychological assessment, case formulation, therapeutic relationship, and intervention” (p. 5).

Meeting all the elements of EBP when working with students in school settings is a challenging and often daunting task. Best practice guidelines come from the US Federal Government and the US Department of Education; however, the relevant literature reports significant gaps in services and quality of interventions (Ennett et al., 2003; Fabiano & Pyle, 2019; Hicks et al., 2014; Lyon & Bruns, 2019; President's New Freedom Commission on Mental Health, 2003; Spiel et al., 2014). Even with guidelines from governing organizations, such as ASCA, NASP, and NASW, most providers are not using empirically supported treatments (McIntyre et al., 2007; Schaeffer et al., 2005) or are using outdated, eclectic, and reactive approaches with poor scientific support for efficacy (Evans & Weist, 2004; Hoover, 2018). Because of the diversity of school-based mental health providers, each person may look to their own licensing board or regulatory agency, rather than one authoritative body, for direction on what is considered “best practice.” In addition, each mental health profession has its own unique, and sometimes conflicting, history of adopting and mandating EBP.

Several attempts have been made to identify therapist characteristics associated with adoption of EBP and encourage the application of empirical knowledge to real-world practice (Addis & Krasnow, 2000; Baumann et al., 2006; Essock et al., 2003; Rubin & Parrish, 2007). Multiple studies have detailed the many barriers providers face when trying to implement EBP in school settings (Domitrovich et al., 2008; Eiraldi et al., 2015; Jensen & Foster, 2010; Owens et al., 2002; Patalay et al., 2016; Schaeffer et al., 2005). In addition, researchers have developed assessment tools to learn more about mental health provider attitudes towards the adoption of specific EBIs and collaboration between providers and team members (Aarons, 2004; Mellin et al., 2014). Aarons (2004) created and validated the Evidence-Based Practice Attitude Scale (EBPAS) using a sample of 322 public sector clinical service workers from 51 programs providing mental health services to children and adolescents and their families. The EBPAS identified four dimensions of attitudes towards adoption of EBP that can assist with dissemination and implementation into real-world settings: (1) intuitive Appeal of EBP, (2) likelihood of adopting EBP given Requirements to do so, (3) Openness to new practices, and (4) perceived Divergence of usual practice with research-based/academically developed interventions (emphasis original).

Mellin et al. (2014) created a three-scale instrument, the Expanded School Mental Health Collaboration Instrument (School Version; ESMHCI-SV), based on findings from focus group interviews and a review of the literature. This instrument was intended to assess ESMH collaboration from the perspective of school-employed professionals. However, even with the development of these supporting tools, a core element is missing: there is no measure of EBP use across disciplines. Measuring EBP use against a consistent understanding of what the term means should be the first step in determining whether a provider is or is not using EBP. Having a common language among mental health providers would be a natural beginning toward improving use of EBP and understanding more about barriers and facilitators to its use. The purpose of this research is to report the development and initial psychometrics of a new instrument that measures EBP use based on a shared understanding across school psychologists, school counselors, and social workers.

Challenges to Measuring EBP

Historically, measuring EBP has been a challenge. School psychology, school counseling, and social work have each formalized EBP conceptualization and training requirements, which shape accreditation standards, clinical competencies, and ethical codes (Bellamy et al., 2006; Drisko, 2014; Drisko & Grady, 2015; Mullen et al., 2019; Reddy et al., 2017). Because these three groups have different experiences and histories with EBP, there has been vacillation on what constitutes EBP implementation, including ongoing criticism of how evidence from clinical research trials is disseminated into real-world practice settings (Bellamy et al., 2006). In an updated perspective on applying EBP in social work, Drisko and Grady (2015) write, “The definition of EBP is actually very clear; it is just not effectively taught, nor well understood. It is also rarely practiced in full” (p. 275). Reasons for this include a general lack of awareness of available best practices, lack of fit to the patient population, and suspicion of a rapid push toward change. In addition, mental health providers have identified controversy in establishing a formal EBP definition, as the term has been adapted and implemented by state, federal, and health care funding entities to mandate and restrict service delivery (Jensen & Foster, 2010).

When conceptualizing how to measure EBP, much of the research has focused on EBIs specifically, including graduate school training, provider willingness to use them, and applicability to real-world settings (Beidas & Kendall, 2010; Karekla et al., 2004; Schaeffer et al., 2005; Weisz et al., 2005). Yet some of the broader clinical activities within EBP, such as the therapeutic relationship, have been shown to be reliable predictors of positive clinical outcomes, regardless of the psychotherapy approach or specific assessment measures used (Ardito & Rabellino, 2011). Too much focus on EBIs can also obscure the whole picture of EBP, with many practitioners conflating the two and believing that using an EBI, such as motivational interviewing (MI) or cognitive-behavioral therapy (CBT), is the only way to apply EBP (Drisko & Grady, 2015).

Despite these difficulties, many providers continue to press on with their own definition of EBP while others await further instruction. Parrish (2018) calls social workers' attention to the importance of a common definition, stating that “a broad sampling of the social work literature continues to reflect confusion with the term” (p. 407). Calling the process and definitions of EBP a “circular debate,” Parrish encourages use of a common definition of EBP in order to “engage in critical and reflective thinking, ethical practice rooted in client empowerment, and practice decisions that have the most promise for helping the clients they serve” (p. 408).

The Current Study

This study was designed to develop an instrument to understand and measure EBP use based on a shared understanding of what behaviors constitute best practice, applicable to school-based mental health providers regardless of their discipline. Creating a measure of EBP use allows mental health providers, employers, and professional groups to examine their own proficiency with the construct and establish a common understanding of the term. As part of this instrument development, common elements of EBP across school psychologists, school social workers, and school counselors were identified. While much has been published on barriers and facilitators to EBP in schools (Domitrovich et al., 2008; Eiraldi et al., 2015), the literature is sparse on the prerequisite work of measuring understanding and agreement about EBP use across providers. It is difficult to understand what is hindering EBP when it remains unclear whether mental health providers share the same understanding of what EBP means.

Framework of This Study

The framework for this study stems directly from the Evidence-Based Practice in Psychology (EBPP) definition from APA (2005), which states that EBPP is the “integration of the best available research with clinical expertise in the context of patient characteristics, culture, and preferences” (p. 6). APA offers examples of each aspect of this definition as part of its policy statement (see APA, 2005, updated 2021), and these were used to inspire survey questions for this instrument. While there is not one consistent definition of EBP used across all mental health professions, this definition was chosen as a starting point because APA has the largest membership base of mental health researchers, educators, providers, consultants, and students in the USA. Definitions published by APA are frequently referenced by leaders in other disciplines and by the supporting health care literature, creating a “trickle-down” effect as APA’s decisions are implemented in supporting professions, such as counseling and social work (Bacon, 2008; Drisko & Grady, 2015; Mullen et al., 2008). This definition also fits within the social ecological framework, which suggests a relationship between the actions of people (providers) and their environment (schools). The social ecological model (SEM) is a “theory-based framework for understanding the multifaceted and interactive effects of personal and environmental factors that determine behaviors, and can be used for identifying behavioral and organizational leverage points and intermediaries for health promotion within organizations” (Bronfenbrenner, 1979; Centers for Disease Control and Prevention, 2014, p. 1). This model is helpful when conceptualizing how mental health providers in schools have an interdependent relationship with research, administrators, teachers, students, students’ families, the school neighborhood, and more (APA, 2008). It is consistent with multiple policy statements released by APA (APA, 2005; APA, 2017) and helped to ground this new instrument in theory, as no similar measures exist for comparison.

Methods

Phase 1: Literature Review and Development of Items

A questionnaire was created for this study using the recommended steps for instrument development described by DeVellis (2012). Initial items developed for the questionnaire were based on three principal sources: (a) a review of the literature and related instruments; (b) examples of evidence-based practice in psychology (EBPP) as provided by APA’s policy statement (2005, updated 2021); and (c) the social ecological model (SEM). Questions were designed to ask participants about the professional practice elements used in their current school setting. Initially, participants were asked to respond “yes” or “no” to a series of items to indicate their implementation of various forms of EBP. The next set of questions asked to what extent each item contributes to or interferes with the participants’ use of professional practice. A 4-point Likert rating scale was used because it requires the participant to express an opinion or attitude, thus yielding a definitive response (DeVellis, 2012). At its early stages, the EBP instrument was conceptualized to measure EBP either as a single construct or as three constructs consistent with the APA definition: (1) best available research, (2) clinical expertise, and (3) patient characteristics, culture, and preferences.

Phase 2: Expert Review

An expert review was conducted to investigate the quality and clarity of the instrument (Ikart, 2019; Kelley et al., 2003). The first version of the instrument was shared with a convenience sample of school psychologists, school counselors, and social workers (N = 12) working in Central Indiana. The group consisted of professors, regional professional organization representatives, and long-term (10+ years) practitioners. The consensus across the expert review was that more emphasis should be placed on finding survey terms that are consistent across professions so that groups could be contrasted accurately. The original instrument was then deconstructed to identify a total of 13 items that school psychologists, school counselors, and social workers would be expected to perform as routine parts of their jobs. The 12 experts agreed, with no differences among school psychologists, school counselors, and social workers, that these 13 items had sufficient applicability to each profession and could be used as a measure of whether someone was using EBP. These 13 items were consistent with the examples of evidence-based practice in psychology (EBPP) provided by APA (2008) that were used in the first version of the instrument. The 12 experts were also asked to offer feedback on the overall directions, format, and length of the instrument, which was used to update the document prior to the pilot study.

Phase 3: Pilot Study

A pilot study was conducted to establish content validity and investigate the quality and clarity of the revised instrument (Hill, 1998; Isaac & Michael, 1995). A cover letter and questionnaire were distributed via an emailed link to a convenience sample of 56 school psychologists, school counselors, and social workers in Central Indiana (Connelly, 2008). Participants were recruited from alumni databases, social networking groups for mental health providers in schools, and recommendations from participants who had already completed the questionnaire (i.e., snowball sampling). A total of 20 of the 56 potential respondents completed the pilot study (school counselors = 4, school psychologists = 8, and social workers = 8). Respondents were asked to first read the cover letter and informed consent and then complete the questionnaire as if they were participating in the actual study. Participants were given the chance throughout the questionnaire to flag questions that seemed confusing or unclear and were given space on each page to leave comments, feedback, and suggestions. At the end of the questionnaire, participants were asked to evaluate the instrument for content and clarity, consider the relevance of each item, and offer useful information that may have been overlooked. Because of the small sample size of the pilot study, data from participants were reviewed qualitatively to look for outliers or unique responses. Feedback from the pilot testing was used to fine-tune the final version of the instrument for this study, which included 13 items with five response options per item (Table 5). Small changes recommended by respondents included word choices, formatting, and minor grammatical edits.
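To make the response format concrete, the following is a minimal sketch of one way responses to the 13-item instrument could be coded for analysis. The item labels, response wording, and numeric coding shown here are illustrative assumptions, not the study's actual variable names; the paper specifies only that the final instrument contains 13 items with five response options each.

```python
# Illustrative coding of the 13-item instrument (hypothetical labels/wording).
import pandas as pd

LIKERT_CODES = {
    "never": 1,                               # assumed anchor wording
    "rarely": 2,
    "often": 3,
    "always": 4,
    "does not apply to my position": None,    # handled separately (see Results)
}

ITEM_COLUMNS = [f"ebp_{i:02d}" for i in range(1, 14)]  # 13 items

def code_responses(raw: pd.DataFrame) -> pd.DataFrame:
    """Map raw text responses to numeric codes; 'does not apply' becomes NaN."""
    coded = raw[ITEM_COLUMNS].apply(
        lambda col: col.str.strip().str.lower().map(LIKERT_CODES)
    )
    return coded.astype("float")

# Example with two fabricated respondents:
raw = pd.DataFrame(
    {col: ["often", "does not apply to my position"] for col in ITEM_COLUMNS}
)
print(code_responses(raw).head())
```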

Phase 4: Study

Upon completion of the pilot study, a written request was made to the ASCA, NASP, and NASW Research Committees for access to their membership databases for participants for the current study. Following approval from the ASCA, NASP, and NASW Research Committees, participants were randomly selected by a marketing agency from a current computerized list of members who were listed as currently practicing in a school setting. A total of 1000 member names each were requested from NASP and NASW, as this was the maximum permitted purchase number, and 2000 member names were requested from ASCA (4000 names in total). The additional names were purchased from ASCA to learn whether increasing mailings would meaningfully increase the response rate and statistical power, in order to determine an appropriate request number for future studies; additional names would have been purchased from NASP and NASW had that option been available. Each organization provided participant contact information as mailing or email addresses: NASP and ASCA only allowed the purchase of physical mailing lists, while NASW allowed the purchase of email addresses. Each potential participant was mailed or emailed a cover letter explaining the purpose of the study and providing a link and quick response (QR) code to access the electronic questionnaire. The link and QR code led to the informed consent page and an initial screening question asking whether the participant was an active provider of mental health services in a school setting. Participants who consented could then complete the study. After completing the questionnaire, participants had the option of providing their contact information in a separate document to enter an incentive drawing for one of ten $25 Amazon gift cards. Two weeks after the initial mailing and email, a follow-up postcard and email reminder were sent with the same link and QR code to access the questionnaire. The survey was available for participants to access for one month. Four mailings were returned to sender as “undeliverable.” Qualtrics online software was used to collect data and was configured to keep collected data confidential and anonymous.

Sample

Using a recommendation for sample size adequacy for exploratory factor analysis, a sample size between 300 (good) and 500 (very good) was desired (Comrey & Lee, 1992). In total, 418 (10.5%) participants responded to the survey. The response rate was consistent with the rate suggested by NASP; ASCA and NASW did not provide estimated response rates. Of the 418 participants who responded, one did not provide consent and closed the survey, 29 provided no responses to any portion, and 85 provided responses but did not answer the demographic questions on the next page, which prevented classification or further assessment. Overall, 303 of the 418 participants (86.5% female, 12.9% male, 0.3% preferred not to identify gender) completed the consent form, the instrument, and the demographic information. Please see Table 1 for demographic information; demographics were consistent with the most recent representative sample of NASP members (NASP, 2018), while ASCA and NASW did not have readily available membership data. It should also be noted that cover letters were mailed toward the beginning of the global COVID-19 pandemic. Numerous schools closed throughout the duration of data collection, which likely impacted the response rate for this study.

Table 1 Participant demographics

Results

An exploratory factor analysis (EFA) was completed to determine the structure of the instrument. Responses of “does not apply to my position” were included in analyses; these items were filled with each participant’s average score on the remainder of the instrument so as not to shift the total score in a positive or negative direction. The EFA revealed three underlying factors with an eigenvalue greater than one, which was confirmed visually by scree plot. A three-factor solution was identified using image factoring extraction and Quartimax rotation with Kaiser normalization, with the three factors accounting for 57% of the variance. The Quartimax rotation was chosen to minimize the number of factors needed to explain each variable, as this study conceptualized EBP as either a single construct or three constructs, as parsed apart by the APA definition. The Kaiser-Meyer-Olkin measure of sampling adequacy was 0.85, above the recommended threshold of 0.6 (Dziuban & Shirkey, 1974), and Bartlett’s test of sphericity was significant (p < 0.01). Because five items cross-loaded (Table 2), additional item analysis was conducted and loading values were further examined. Each of the five cross-loading items loaded on its primary factor by a margin greater than 0.2, so the highest loading was retained; under this rule, the highest loadings for all items fell on a single factor. Further item examination using Pearson correlations between the variables showed that the three extracted factors did not correspond to the three components hypothesized from the APA definition, so the EFA was run a second time with all items forced to load onto one factor (Table 3). No items were deleted from the instrument, as all items loaded higher than the recommended 0.35 cutoff (Tabachnick & Fidell, 2013). A reliability test was conducted using all 13 items as one factor, which revealed a Cronbach’s alpha of 0.85. This suggests that EBP can be conceptualized and measured as a single construct. No subscales were identified throughout the analysis. See Table 4 for the distribution of question responses.
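For readers wishing to replicate this style of analysis on their own data, the sketch below outlines the reported steps (person-mean imputation of “does not apply” responses, sampling adequacy checks, a three-factor and then a one-factor EFA, and Cronbach’s alpha) using the open-source factor_analyzer package in Python. It is an approximation under stated assumptions: variable names are hypothetical, and image factoring extraction is not offered by this package, so minimum residual (minres) extraction stands in for it here.

```python
# Rough re-creation of the analysis pipeline; illustrative only.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity,
    calculate_kmo,
)

def person_mean_impute(items: pd.DataFrame) -> pd.DataFrame:
    """Fill 'does not apply' (NaN) cells with each respondent's mean on the
    remaining items, as described in the Results."""
    return items.apply(lambda row: row.fillna(row.mean()), axis=1)

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for the items treated as a single scale."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def run_efa(items: pd.DataFrame) -> None:
    items = person_mean_impute(items)

    chi2, p = calculate_bartlett_sphericity(items)   # sphericity test
    _, kmo_overall = calculate_kmo(items)            # sampling adequacy
    print(f"Bartlett chi2={chi2:.1f}, p={p:.4f}; KMO={kmo_overall:.2f}")

    # First pass: three factors with Quartimax rotation (as reported above);
    # minres extraction substitutes for image factoring.
    fa3 = FactorAnalyzer(n_factors=3, rotation="quartimax", method="minres")
    fa3.fit(items)
    print("3-factor loadings:\n", np.round(fa3.loadings_, 2))

    # Second pass: force a single factor after inspecting cross-loadings.
    fa1 = FactorAnalyzer(n_factors=1, rotation=None, method="minres")
    fa1.fit(items)
    print("1-factor loadings:\n", np.round(fa1.loadings_, 2))

    print(f"Cronbach's alpha (13 items, one scale): {cronbach_alpha(items):.2f}")
```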

Table 2 Item loadings on three factors
Table 3 Item loadings on one factor
Table 4 Distribution of question responses

After the EFA was completed, a one-way ANOVA was conducted to examine differences in EBP use by profession. Participants were classified into three groups: school psychologists (n = 139), school counselors (n = 98), and social workers (n = 57). Profession served as the independent variable, and the total score on the EBP instrument served as the dependent variable. There were two outliers in the social work group as assessed by boxplot. The Shapiro–Wilk test demonstrated normally distributed data (p > 0.05), and Levene’s test demonstrated homogeneity of variances (p = 0.67). As a result, the ANOVA was run without any transformations of the data. EBP score did not differ significantly among school psychologists (M = 3.45, SD = 0.33), school counselors (M = 3.45, SD = 0.35), and social workers (M = 3.54, SD = 0.34), F(2, 291) = 1.56, p = 0.21, η2 = 0.011. No further post hoc analyses were conducted. Of note, social workers received their recruitment letter by email, while school counselors and school psychologists received their letters by mail. These results suggest that there is no difference in EBP use based on profession or recruitment method.
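A minimal sketch of this group comparison, assuming a tidy data set with a placeholder “profession” column and a placeholder “total_ebp” score per respondent (neither is the study’s actual variable name), might look as follows using SciPy.

```python
# Illustrative one-way ANOVA with assumption checks (hypothetical column names).
import pandas as pd
from scipy import stats

def compare_professions(df: pd.DataFrame) -> None:
    groups = [
        df.loc[df["profession"] == g, "total_ebp"]
        for g in ("school_psychologist", "school_counselor", "social_worker")
    ]

    # Assumption checks: normality within groups, homogeneity of variances.
    for name, scores in zip(("psychologists", "counselors", "social workers"), groups):
        w, p = stats.shapiro(scores)
        print(f"Shapiro-Wilk ({name}): W={w:.3f}, p={p:.3f}")
    lev_stat, lev_p = stats.levene(*groups)
    print(f"Levene: W={lev_stat:.3f}, p={lev_p:.3f}")

    # One-way ANOVA on the untransformed total scores.
    f_stat, p_val = stats.f_oneway(*groups)
    print(f"ANOVA: F={f_stat:.2f}, p={p_val:.3f}")
```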

Discussion

The purpose of this study was to create a measure of EBP use based on a common understanding of the concept that could be understood and used across the three professions that provide the most mental health services in schools. The results of this study provide preliminary evidence for the validity and internal reliability of the single-construct, 13-item EBP instrument for measuring EBP use. Integrating three mental health professions into its development supports the instrument's utility across ESMH settings. The instrument can provide a quantitative understanding of EBP adherence and set a standard against which to measure improvement. While still in its early stages, the rigor of an expert review, pilot study, and national study shows strong potential for addressing the research gap by connecting APA’s formal definition of EBPP to daily, real-life work environments across the nation. In addition, the instrument is brief and easy to administer via paper or electronic delivery.

Limitations

This is the first instrument of its kind (to the knowledge of this author) measuring EBP, so there are no similar measures or external criteria against which to compare the validity of the results. As there is no existing “gold standard” for measuring EBP, no related construct was included when collecting data to examine the criterion-related validity of the instrument. The evidence-based practice movement has looked different over time across mental health professions and lacks any measurement tool that can be used across disciplines, hence the need for the type of exploratory work reflected in this study. However, within the current study the scale's structure was supported by factor analysis and strong internal reliability. It is important to note that other researchers may choose to handle the “does not apply to my position” responses differently, rather than using the person-mean imputation applied in this study. The decision was made, for the purposes of this study, that coding the “does not apply” responses as 0 or removing them altogether might disproportionately sway the overall results. However, different approaches to coding the data would be an interesting direction for future studies further validating this instrument.

Furthermore, as with all self-report studies, social desirability is a concern, with participants possibly wanting to paint a more positive picture of their daily practice; however, there is no specific reason why results from this study would be expected to differ from other self-report studies. Another consideration is the response rate. Out of 4000 potential respondents, 418 completed some part of the survey, but only 303 completed each section. It is unclear how much of this was related to the global COVID-19 pandemic or to other factors, such as survey fatigue or recruitment techniques. Postcard and email reminders were sent to mitigate this limitation. It is also unclear whether the 85 participants who did not provide any demographic information and were removed from analyses would have meaningfully contributed to the results. In addition, organizational limits on the number of member names available for purchase meant that more school counselors were recruited than school psychologists and social workers in order to achieve a sufficient sample size for the desired analyses; future validation of this instrument should include a more balanced sample of the three groups or avoid separation by profession, given that no differences were found based on profession.

An additional limitation is possible theory drift. While the social ecological model (SEM) was used to help conceptualize EBP at the origin of the instrument's creation, additional expert review is needed to examine whether the final product retains its theoretical base. Significant care was taken to ground the instrument development in both professional consensus, through the expert review, and known EBP elements, as defined through examples from APA, to minimize any drift from its theoretical basis. However, real-world responses from mental health providers who are actively working in schools may not align exactly with the SEM or even with the concrete examples of EBPP provided by APA. More research in this area will be needed.

Implications for Practice and Research

First, it will be imperative to further validate the instrument with a variety of settings and populations and to conduct additional psychometric work on the EBP instrument. Because EBP is used in mental health care outside of schools, there is potential for this scale to be used in other settings, such as community mental health and primary care. Additional psychometrics and expert review may also be needed to establish a cut-off score for determining whether someone is or is not using EBP in their practice; in the present study, participants were simply given a total EBP use score based on their responses. Furthermore, because EBP was conceptualized as a single construct that would apply across disciplines, the 13 items that make up the instrument are intentionally broad. Enhancing the instrument to include more specific behaviors and actions may be a future research direction after additional interdisciplinary review.

In the future, this scale could help clarify the distinction between EBP and EBIs and allow for a common language among mental health providers. It could also be used in connection with student outcomes when tracking the results of EBP use. EBP has been difficult to pin down and measure throughout its adoption into the mental health field (Bellamy et al., 2006; Drisko & Grady, 2015; Parrish, 2018). Defining and measuring EBP is one of the first steps for quality service improvement and could be a significant contribution to the mental health field.

Additionally, more research is needed on how mental health providers work together. School psychologists, school counselors, and social workers have different requirements for entering their respective fields, including differences in education, training, supervision, and licensure (American School Counselor Association, 2018; National Association of School Psychologists, 2017b; National Association of Social Workers, 2017). However, this study shows the fields may have more in common when it comes to EBP than they have differences. Because each school setting may have different employment arrangements for providing mental health care to students, it is important to examine what crossover information is relevant to the three fields in order to best provide EBP. Sharing resources across professions, such as supervision, research journals, and continuing education trainings, may be a useful way to send a consistent message and promote collaboration surrounding EBP (Mellin et al., 2014).

Results from this study showed no significant difference in EBP use based on profession. This is especially interesting, since school psychologists, school counselors, and social workers have updated their required EBP training at different times and sometimes with different results (Barrio Minton et al., 2014; Hicks et al., 2014; Mullen et al., 2019; Reddy et al., 2017; Wike et al., 2019). School psychologists, school counselors, and social workers are all equipped to utilize EBP, even though their respective fields have different histories with the adaptation and dissemination of EBP content. This is similar to a finding from Aarons (2004), who created the Evidence-Based Practice Attitude Scale (EBPAS) and examined the attitudes of 322 mental health providers about aspects of EBP; results from that study showed little difference between professional groups on the dimensions that influence the use of evidence-based interventions (EBIs). The current study implies that EBP can be conceptualized and measured with one instrument that applies to all three professional groups. Consistency across the literature will be crucial to ensuring providers are talking about the same thing when trying to make changes and improve the quality of mental health services (Anderson & Bronstein, 2012; Mellin et al., 2014).

Finally, this instrument can be used as a reference point when studying facilitators of and barriers to EBP. By providing a quantitative measure of EBP use, the instrument can help distinguish between providers’ perceptions of facilitators and barriers and the reality of actual practice. In addition, it can be used to help clarify the severity of facilitators’ and barriers’ effects (i.e., whether barriers can be ignored, overcome with ease, or are completely debilitating).

Conclusion

This study was conducted with the aim of improving mental health service delivery to one of our nation’s most vulnerable populations: our children. Global events that transpired throughout this study’s completion have only placed further emphasis on the growing need for quality mental health care in schools. School psychologists, school counselors, and social workers are currently on the front lines of helping children understand how issues like systemic racism and a global pandemic impact their daily lives (American School Counselor Association, 2020; National Association of School Psychologists, 2020; National Association of Social Workers, 2020). Using EBP is an important framework for ensuring a standard of care as these mental health services are provided. This research offers a starting point for conversations about commonalities among different groups of providers and about what it looks like to measure a set of practices consistent with a shared definition. Through a rigorous process of literature review, expert review, pilot study, and national study, this research has created a brief and convenient way to identify and measure EBP use across three different professions, with no significant differences found among groups, indicating an exciting new chapter in measuring best practice standards.