Introduction

Adolescents and young people with disabilities have the same sexual and reproductive health (SRH) needs as their peers without disabilities. However, they are also at increased risk of adverse SRH outcomes and rights violations, including a higher risk of HIV infection and of experiencing violence [1,2,3,4,5,6,7]. Recent evidence shows that this increased exposure to HIV and violence persists into adulthood and has a gendered dimension: women with disabilities are twice as likely to experience intimate partner violence (IPV) and HIV infection as their peers without disabilities [3,4,5,8,9,10]. The risk of IPV increases with the severity of the disability [9]. Exposure to violence also varies across types of disability, with people with mental illnesses and intellectual disabilities at particular risk [6].

Despite these heightened SRH vulnerabilities, adolescents and young people with disabilities lack access to sexual and reproductive health and rights (SRHR) services, including comprehensive sexuality education (CSE), in South Africa, where HIV and gender-based violence are endemic [10,11,12]. Worldwide, CSE is a cornerstone of efforts to equip adolescents and young people with the knowledge, attitudes, and skills to make informed decisions and practice safer sex [10, 13, 14]. Furthermore, program evaluations have shown that access to CSE effectively prevents risky sexual behavior and HIV infection, whereas abstinence-only programs are ineffective at preventing HIV [14, 15]. However, research shows that adolescents and young people with disabilities lack access to CSE and to accurate information about sexuality from their caregivers, parents, or peers [16,17,18,19,20,21].

Research with educators of learners with disabilities in Eastern and Southern Africa shows that these educators often feel ill-equipped to provide CSE in accessible formats to learners with disabilities. Educators may also hold negative attitudes about CSE or the intersection of disability and sexuality and may fear repercussions due to community norms around CSE, sexuality, and disability [22,23,24,25,26,27,28,29]. In response, the Breaking the Silence (BtS) approach to CSE was developed. BtS is aligned with the UN Technical Guidelines on sexuality education and focuses on educator training and development to change attitudes, improve self-efficacy, and develop skills to provide accessible CSE to learners with diverse disabilities [30,31,32]. The BtS approach offers a 3–4 day workshop-based training, in which participants are introduced to legal and epidemiological information, universal design for learning, accommodations for learners with disabilities, the development of CSE implementation guidelines, and practical skills for teaching CSE in formats accessible to learners with diverse disabilities. After the workshops, educators receive a Comprehensive Guide and 15 Lesson Plans with accessible resources and teaching tools for implementing CSE [33, 34].

The Teacher Sexuality Education Questionnaire (TSE-Q) was developed to assess the impact of the BtS educator training on participants [35, 36]. The TSE-Q was initially designed and validated in 2013 [36]. Some of the scales and sets of questions were culturally adapted from pre-existing scales, while others were developed specifically to assess CSE delivery to learners with disabilities in South Africa [36]. The questionnaire is based on an adapted version of the theory of planned behavior (TPB) (Fig. 1). It includes scales for CSE knowledge; attitudes and beliefs about disability, sexuality, and HIV; CSE teaching beliefs and behavior; perceived subjective norms; self-efficacy; and environmental conditions at the school (availability of materials to teach CSE and linkage to SRHR services) [36, 37].

The original TSE-Q was reviewed and culturally adapted with educators in South Africa through formative validation, including focus group discussions and written reviews [36]. The questionnaire’s formative validation and reliability testing revealed that the teaching beliefs, behavior, and self-efficacy scales were robust, whereas the sets of questions on environmental conditions and on knowledge, attitudes, and beliefs about disability, sexuality, and HIV were not [36]. Since then, the BtS team has adjusted the disability, sexuality, HIV, and environmental conditions sets of questions and developed new sets of questions on CSE policy knowledge and confidence in CSE teaching skills. In 2021, the team tested the adapted TSE-Q before implementing two BtS training workshops [35]. This paper presents the results of the reliability and validity testing of the adjusted TSE-Q in South Africa.

Methods

Study Design

We conducted a validation study of the adapted version of the TSE-Q. This study was embedded in a feasibility study assessing the implementation and effectiveness of the Breaking the Silence CSE workshop training with two purposively selected special schools in South Africa [35]. The adapted TPB guided the development of the TSE-Q and the validation study [36, 37]. The adapted TPB postulates that individual behavior (e.g., teaching behavior) is determined by the intention to perform the behavior, having the necessary skills, and environmental conditions to execute the behavior [37]. In addition, the intention to perform a behavior is determined by knowledge and attitudes, self-efficacy, and perceived subjective norms (see Fig. 1). Hence, when assessing educators’ likelihood of implementing CSE, we need to evaluate their CSE teaching intention, skills, and environmental conditions under which they are supposed to teach CSE. These factors are tested with the TSE-Q.

Fig. 1 Adapted Theory of Planned Behaviour

Sampling

In collaboration with the South African Department of Education, two special schools in eThekweni and Cape Town were purposively selected: one for learners with hearing impairments and the Deaf, and the other for learners with intellectual disabilities. Despite inclusive education policies, South Africa predominantly provides education to learners with disabilities in ‘special schools.’ In these schools, educators often work across subjects; hence, all teachers are important for implementing sexuality education. In addition, these schools have therapists, learning assistants, and NGO staff who support the learning process and student development. Many of them may be tasked with implementing elements of CSE in formal Life Orientation sessions (which include CSE), therapy sessions, or extramural activities. Many schools also have boarding establishments with house mothers who must address CSE issues. Therefore, all educational staff members, including teachers and support staff, were invited to participate in the BtS study and workshop training. Recruited participants were invited to fill in the TSE-Q before and after workshop exposure. During the pre-survey, participants were also asked to fill in the validation questionnaire of the TSE-Q. Overall, 50 staff members from the two schools volunteered to participate, as presented in Fig. 2.

Fig. 2 Sample of participants

Research Tools

To assess educators’ CSE knowledge, attitudes, perceived norms, skills, and confidence, and how these relate to implementing CSE (reported teaching practice), we adapted the BtS Teacher Sexuality Education Questionnaire (TSE-Q) [35]. The original TSE-Q was developed, piloted, and validated in KwaZulu-Natal (KZN) and included self-developed scales as well as culturally adapted scales from Howard-Barr, Rienzo, Morgan Pigg and James [38] and Mathews, Boon, Flisher and Schaalma [39] (see Table 3) [40,41,42,43]. For this study, we adapted the questionnaire further and included self-developed questions probing the educators’ knowledge about CSE in South Africa (Table 1). Example questions can be found in Online Resource 1.

Table 1 Overview of scales in adapted TSE-Q before the second validation

To test the face and content validity of the TSE-Q, we asked educators to evaluate it with an adapted version of Rowe, Oxman, and O’Brien’s validity questionnaire [44]. This validity questionnaire had previously been used and adjusted to the context of CSE in South Africa in our earlier study [13]. It assesses face validity, content validity, and ease of usage (see Table 2). In addition, it tests whether the questionnaire makes sense at a basic level, can be used by the target population, and is thus meaningful to respondents [45]. Additionally, once the questionnaires were submitted, participants could provide verbal feedback.

Table 2 Validity questionnaire structure

Procedure and Ethics

The baseline survey, comprising the TSE-Q and the validity questionnaire, was conducted before exposure to the BtS training. Both instruments were administered on paper. Questionnaires and data were anonymized using participant identifiers and entered into Excel. In addition, the experience of filling in the questionnaire was validated verbally with the participating educators.

Participation in this study was voluntary. All participants were informed about the study verbally and in writing before participating. Those who chose to participate signed an informed consent form. The study was approved by the Ethics Committee of the South African Medical Research Council (EC047-11-2020).

Analysis

The analysis for this paper consists of descriptive statistics for the sample’s demographic variables and the validity questions, as well as reliability testing. The reliability of each scale in the TSE-Q was estimated from the baseline questionnaire using McDonald’s omega and Cronbach’s alpha. Where alpha was lower than 0.8, items were sequentially eliminated if removing them increased the overall alpha. All analyses were performed in R 4.2.1; McDonald’s omega and Cronbach’s alpha were calculated for each scale using the psych package (version 2.2.5). As discussed in Watkins [49], alpha is generally inappropriate as a measure of reliability, but as it is so widely used, we have included it alongside the recommended omega [49].
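The sequential item-elimination step described above can be sketched as follows. This is a minimal Python illustration, not the study's actual analysis (which used the R psych package); `cronbach_alpha` and `prune_items` are hypothetical helper names introduced here for clarity.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)      # variance of sum score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def prune_items(items: np.ndarray, target: float = 0.8):
    """Sequentially drop the item whose removal most increases alpha,
    stopping once alpha reaches the target or no removal improves it."""
    keep = list(range(items.shape[1]))
    alpha = cronbach_alpha(items[:, keep])
    while alpha < target and len(keep) > 2:
        # Alpha obtained by dropping each remaining item in turn
        candidates = [
            (cronbach_alpha(items[:, [j for j in keep if j != i]]), i)
            for i in keep
        ]
        best_alpha, worst_item = max(candidates)
        if best_alpha <= alpha:        # no removal helps; stop
            break
        keep.remove(worst_item)
        alpha = best_alpha
    return keep, alpha
```

On a scale with one uncorrelated ("noisy") item, this procedure drops that item first, mirroring how poorly performing items were removed from the belief scales.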

Descriptive statistics are also presented for the validation questionnaire, noting frequencies for each category of responses in the content validity, face validity, and ease of usage domains. Poorly performing categories were flagged and raised in our validation discussions with educators from each school to understand what adaptations were needed for the TSE-Q.

Results

Table 3 presents the demographic variables collected in the study. Most participants were women (92%) and between 31 and 60 years old (86%), with only a few older or younger participants. Most participants were Christian (70%), 44% were support staff at the school, and the largest group of educators had at least four years of teaching experience (46%). Furthermore, 61% of the educators were teaching Life Orientation (LO) or sexuality education, but only 47% of these had formal LO training.

Table 3 Overview of demographic variables

Cronbach’s alphas and McDonald’s omegas for the sets of questions being tested are shown in Table 4. Most sets of questions performed well, with alphas and omegas above 0.8, qualifying them as validated scales. Several scales performed excellently, with alphas above 0.9, namely, the ‘Human development,’ ‘Personal skills,’ and ‘Sexual behavior’ scales of the CSE teaching beliefs section; the ‘Human development,’ ‘Relationships,’ ‘Personal skills,’ and ‘Sexual health’ scales of the CSE teaching practices section; and the self-efficacy scale. The ‘CSE policy knowledge,’ ‘CSE beliefs,’ ‘Disability and sexuality beliefs,’ and ‘Disability and HIV-risk beliefs’ sets of questions performed more poorly and were considered for adjustment. Removing poorly performing items improved the performance of the ‘CSE impact beliefs,’ ‘Disability and sexuality beliefs,’ and ‘Disability and HIV-risk beliefs’ sets of questions and enabled the team to construct validated scales.

Table 4 Reliability

The face validity, content validity, and ease of usage of the TSE-Q, based on the participants’ feedback in our validity questionnaire, are shown in Table 5. The face validity of the questionnaire was acceptable, with 60–80% of participants agreeing that they could understand and use the TSE-Q and its answer options. The content validity questions also revealed acceptable levels: most participants agreed that the TSE-Q captured the intended content and described their view of teaching CSE. They also felt that they could find their answer among the listed options, that the instrument was not missing any important items, and that the questions were not out of order. However, a slight majority of participants felt that some items were repetitive or redundant; the authors discovered that some questions had been duplicated in the physical copy of the TSE-Q provided to participants (a printing error).

Table 5 Validity

In terms of ease of usage, most participants found that answering the questionnaire helped them in some way, felt comfortable answering the questions, and considered the questionnaire useful for describing their experiences of teaching CSE. A slight majority felt that the questionnaire did not require too much effort to complete, while participants were evenly split on whether it made them think about things they preferred not to think about. Finally, most participants felt that the questionnaire was too long to complete, indicating that it needs dedicated time and effort to be filled in appropriately.

Discussion

The TSE-Q was designed to assess the needs and experiences of educators teaching CSE to learners with disabilities in South Africa. The initial study reported on the questionnaire’s development, cultural adaptation, and piloting [36]. This paper extends that work by refining existing TSE-Q questions and adding new items. The TSE-Q development was guided by the adapted TPB, which aided in identifying relevant scales from other surveys and in developing new items.

As a whole, the TSE-Q aims to measure the theoretically relevant predictors of CSE teaching behavior (CSE Teaching Practices scales) based on the TPB. It does this through scales that were identified or developed to measure attitudes (Disability and SRHR beliefs/attitudes and CSE teaching beliefs scales), norms (Perceived Subjective Norms scale), self-efficacy (Self-Efficacy and Confidence scale), skills (Teacher knowledge scale), and environmental constraints (Material and professional preparation scale). The TSE-Q does not measure behavioral intention directly, since in the TPB intention is a function of attitudes, norms, and self-efficacy, which are measured separately.

Most reliability tests show good reliability based on Cronbach’s alpha and McDonald’s omega. Using a cut-off of 0.7 for scale acceptability (ref), most sets of questions in the questionnaire, except the one on CSE knowledge, can be considered acceptably reliable and treated as validated scales. McDonald’s omega further confirms this finding. Like Cronbach’s alpha, McDonald’s omega is a measure of reliability and can be interpreted in the same way; however, it is considered to estimate reliability more accurately and requires fewer statistical assumptions to be met. The omega estimates are all above 0.7, showing that all sets of questions perform acceptably as validated scales. Hence, the TSE-Q scales are acceptably reliable, with the CSE knowledge set of questions being the only weak ‘scale.’
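The difference between the two coefficients comes from how they decompose score variance: omega total is (Σλ)² / ((Σλ)² + Σθ), where λ are factor loadings and θ item uniquenesses. The numpy-only sketch below approximates the loadings with the first principal component of the correlation matrix, purely to illustrate the formula; a proper estimate (as in the psych package used in this study) fits an actual factor model, and the function name is hypothetical.

```python
import numpy as np

def omega_total_approx(items: np.ndarray) -> float:
    """Rough omega-total for an (n_respondents, n_items) matrix.

    Loadings are approximated from the first eigenvector of the
    correlation matrix; uniquenesses are 1 - loading^2. Illustrative
    only; not a substitute for a fitted factor model.
    """
    R = np.corrcoef(items, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(R)                 # ascending order
    loadings = np.sqrt(eigvals[-1]) * np.abs(eigvecs[:, -1])
    uniquenesses = 1.0 - loadings**2
    common = loadings.sum() ** 2                          # (sum of loadings)^2
    return common / (common + uniquenesses.sum())
```

Because omega weights items by their loadings instead of assuming all items contribute equally (as alpha does), it tolerates scales whose items measure the construct with unequal precision.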

The content validity, face validity, and ease of usage components of the validity questionnaire helped to assess the measure’s validity and identify areas for improvement, for instance by flagging the questions duplicated in the physical copy of the TSE-Q. The validity questionnaire showed that the questions and instructions of the TSE-Q were clear to participants. However, filling in the questionnaire takes time and can be exhausting, both in length and in topic. Hence, the questionnaire should not be administered directly before and after a workshop but on a different day, when participants have enough time and energy to complete it.

The TSE-Q is now a finalized questionnaire that can be used to assess educators’ knowledge, beliefs, practices, self-efficacy, and preparedness to teach CSE to young people with disabilities, or to evaluate programs that aim to improve teachers’ ability to provide CSE in accessible formats. A potential next step is therefore to use the TSE-Q in larger studies assessing educators or evaluating programs focusing on CSE, including the Breaking the Silence intervention. In addition, the TSE-Q could be adapted and tested for other countries.

Limitations and Future Directions

This paper is based on a relatively small sample of 50 participants. While this is regarded as acceptable [50], future work should sample a larger number of participants (n > 200), as recommended by Frost et al. [51]. The study also covered only two schools, so future testing should include a wider range of disabilities and a greater number of schools to improve generalizability. Finally, while all teachers in this sample were fluent in English, it may be necessary to translate the tools so that teachers can complete the questionnaire in their preferred languages.

Conclusion

The TSE-Q is a robust survey tool with validated scales that can be utilized to assess educators’ beliefs, skills, self-confidence, and environmental conditions that enable them to provide CSE to learners with disabilities. It can be used as a cross-sectional survey tool or to evaluate CSE training workshop outcomes. It is validated and adapted for the South African context and might be suitable for other African contexts. The methods followed in this and the previous paper provide a good starting point for adapting the questionnaire for other contexts.