Introduction

Osteoarthritis (OA) is a chronic degenerative joint disease, and the lifetime risk of knee osteoarthritis (KOA) is approximately 46%. Globally, 85% of the burden of osteoarthritis is attributable to KOA [2], making it the 11th-highest contributor to global disability and the 38th highest in terms of disability-adjusted life years (DALYs) [3].

Osteoarthritis affects nearly every aspect of daily life. The associated deformity results in a rigid, unstable, and painful gait that reduces independent walking distance and is accompanied by weight gain, sleep problems, and depression [4].

KOA imposes substantial costs. With population ageing and the rising prevalence of obesity in many countries, the economic burden on healthcare systems is likely to grow further in the coming years [36]. Decision-makers therefore rely on economic evaluations to allocate resources optimally and to maximize the health benefits obtainable from fixed budgets.

In economic evaluations, generic questionnaires are preferable to disease-specific questionnaires because they allow the value of interventions to be compared across disease areas and programs. Most official national pharmacoeconomic evaluation guidelines mention the EQ-5D by name, either as a preferred instrument for determining health utilities or as an example of a suitable instrument. Of the guidelines that did not name a specific measure, the majority favored calculating utilities using national preference weights, which are generally derived from societal preferences for health states [29].

However, the Oxford Knee Score (OKS) was used in the majority of published studies evaluating interventions for KOA; this score has the drawbacks of being disease-specific and lacking a preference-based index value [26].

Mapping can offer a solution when EQ-5D scores are unavailable for the interventions of interest, because health-related utilities can be generated from another measure of health outcomes [5]. By mapping OKS scores to the EQ-5D, results from previous OKS-based studies can be used without re-running them with EQ-5D questionnaires.

Two strategies are used in mapping studies: direct utility mapping and indirect response mapping. Direct mapping predicts the EQ-5D index value (utility), whereas response mapping predicts the responses to the five EQ-5D domains. Although response mapping requires an additional step to estimate the expected index value using available EQ-5D tariffs, indirect mapping allows EQ-5D-5L utility values to be predicted for any country, whereas a direct mapping algorithm is applicable only to the country whose tariff was used to develop it [19].

Aim of the work

This study aims to develop indirect mapping algorithms that predict responses to the five domains of the EQ-5D from OKS values. Utility values can then be derived from the predicted responses, as a separate second step, using available EQ-5D tariffs.

Material and methods

Included patients

Adults aged over 18 years with KOA, diagnosed on clinical and radiographic grounds, with or without total knee arthroplasty (TKA), were included in the samples. Patients whose conditions prevented them from completing the questionnaires (e.g., severe organic or psychiatric diseases) were excluded. The Institutional Review Board of the Medical Research Institute granted ethical approval in accordance with U.S. Department of Health and Human Services guidelines (IORG 0008812) and other applicable regulations. The research adhered to the principles of the Declaration of Helsinki.

Two cross-sectional samples were collected: an estimation sample and an external validation sample. For the estimation sample, 456 patients (80% of the whole sample) were recruited between December 2020 and May 2021 and used to develop the model. The external validation sample (n = 115) was collected from September to October 2021 to assess the generalizability of the developed model. By recruiting patients at different times, we aimed to obtain structurally different samples [28]. Justice et al. suggested evaluating the generalizability of a model using data that were unavailable at the time of model development; when the external validation sample closely resembles the estimation sample, the evaluation addresses reproducibility rather than generalizability.

Using a self-administered questionnaire, the following data were gathered:

  1. Patient characteristics: sex, age, weight, height, duration of OA, presence of TKA, and co-morbidities.

  2. The OKS questionnaire (12 questions). Each question is scored from 0 to 4, giving a total score of 0 to 48, with 48 representing the best health [7, 8]. The questionnaire has been translated and validated in Egypt [9]. Scores were classified as very mild (40–48), mild (30–39), moderate (20–29), and severe (0–19) [6].

  3. The EQ-5D-5L questionnaire, which evaluates health status in five domains: mobility, self-care, usual activities, pain/discomfort, and anxiety/depression. Each domain has five response levels (1–5). In addition, the patient records their overall health status on a visual analogue scale (EQ-VAS). The responses to the descriptive system can be reported as a five-digit number, known as a profile score. There are 3,125 possible EQ-5D-5L profiles, ranging from 11111 (full health) to 55555 (worst health). A profile score can be converted into a utility index using a country-specific value set. Two types of value sets are available for many countries: valuation value sets, generated using a composite time trade-off (cTTO) valuation technique supplemented by a discrete choice experiment (DCE), and crosswalk value sets, generated by mapping between the EQ-5D-5L and EQ-5D-3L descriptive systems [21]. All countries' value sets were obtained from the EuroQol.org website [7].

Statistical analysis

Conceptual overlap

Spearman's rank correlation was used to assess the conceptual overlap between the domains of the EQ-5D-5L and the 12 questions of the OKS. The similarity between the two measures was examined using an exploratory ordinary least squares (OLS) model, in which the dependent variable was the EQ-5D-5L index and the regressor was the total OKS score.
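A minimal R sketch of these two checks is given below. It is illustrative only (not the study's code) and assumes a data frame dat with the 12 OKS items (oks_q1–oks_q12), the total OKS score (oks_total), the EQ-5D-5L index (eq5d_index), and the domain levels (mo, sc, ua, pd, ad).

```r
# Illustrative sketch of the conceptual-overlap analyses; variable names are assumptions.

# Spearman rank correlations between each OKS question and each EQ-5D-5L domain
cor_matrix <- cor(
  x      = dat[, paste0("oks_q", 1:12)],
  y      = dat[, c("mo", "sc", "ua", "pd", "ad")],
  method = "spearman",
  use    = "pairwise.complete.obs"
)
round(cor_matrix, 2)

# Exploratory OLS model: EQ-5D-5L index regressed on the total OKS score
ols_fit <- lm(eq5d_index ~ oks_total, data = dat)
summary(ols_fit)
```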

Method of model selection, building, and evaluation (Fig. 1).

Fig. 1 Summary of the methods used for model selection, building, and evaluation

Selection of the optimum model structure

Four classes of ordinal models were evaluated: two regression models (the cumulative probability model and penalized ordinal regression) and two tree-based models (ordinal classification and regression trees (O-CART) and ordinal forests (OF)). Binomial and multinomial models were ruled out because they disregard information about the ordering of the outcome. Each model class can take distinct structures with varying performance; the structure of a model is determined by its hyperparameters and their values. Box 1 describes the model classes and their hyperparameters in detail.

Which predictors are incorporated into the model, and in what form, has a substantial impact on its predictive performance. Consequently, each model structure was constructed using four distinct sets of predictors (all derived from OKS questions): (1) all predictors; (2) RFE-based significant predictors; (3) model-based significant predictors; and (4) principal components. Box 2 provides a summary of how these sets were identified.
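As an illustration of how an RFE-based predictor set can be obtained for a single domain, the sketch below uses caret's rfe function with random-forest importance; the variable names and settings are assumptions rather than the study's exact choices.

```r
library(caret)
set.seed(123)

# Recursive feature elimination for one EQ-5D-5L domain (here, mobility)
rfe_ctrl <- rfeControl(functions = rfFuncs, method = "repeatedcv",
                       number = 5, repeats = 3)

rfe_fit <- rfe(
  x          = dat[, paste0("oks_q", 1:12)],  # the 12 OKS items
  y          = factor(dat$mo, ordered = TRUE),
  sizes      = 1:12,                          # candidate subset sizes
  rfeControl = rfe_ctrl
)

predictors(rfe_fit)  # OKS questions retained for this domain
```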

We employed repeated cross-validation (five folds × three repetitions) to determine the optimal model structure. The estimation sample was divided into five non-overlapping folds, with each fold serving in turn as an internal validation set to assess the accuracy of the model developed on the other four folds. This was repeated three times for every model structure and predictor set. Each model's cross-validated accuracy was the mean of the 15 (5 × 3) fold-level accuracy rates, and the optimal model structure was the one with the highest cross-validated accuracy.
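This resampling scheme can be expressed in caret roughly as follows; the sketch uses "polr" (a cumulative proportional-odds model) as a stand-in for the four ordinal model classes that were actually tuned, and the variable names are assumptions.

```r
library(caret)
set.seed(123)

# Five folds, repeated three times, with accuracy as the selection metric
cv_ctrl <- trainControl(method = "repeatedcv", number = 5, repeats = 3)

fit <- train(
  x         = dat[, paste0("oks_q", 1:12)],
  y         = factor(dat$mo, ordered = TRUE),
  method    = "polr",       # cumulative (proportional-odds) regression
  trControl = cv_ctrl,
  metric    = "Accuracy"
)

fit$results  # cross-validated accuracy for each hyperparameter setting
```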

Model building

After identifying the optimum model structure and the best set of predictors in the previous step, the whole estimation sample was used to estimate the model parameters. Model parameters specify how the outcome is calculated from the predictors; they are estimated by optimizing the model's fit to the estimation sample (Table 1).

Table 1 Summary of the structure and number of models tried to build a mapping algorithm from the OKS to each of the five domains of the EQ-5D-5L

Model evaluation

Evaluation of the predictive performance of the top models was conducted as follows:

  1. Comparing the accuracy of no model (baseline accuracy) with the accuracy of the best model.

     The baseline accuracy for each domain is the proportion of the most prevalent level [8]. The crude accuracy attained by the final model for each domain is the proportion of accurate predictions made on the estimation and external validation samples [9].

  2. Estimating the performance of the models in terms of errors in the predicted utility values.

     The levels of the five domains were combined to determine both the actual and predicted profile scores. Actual and predicted utilities were estimated using the available tariffs (n = 39) and the eq5d R package [14]. For each value set, the mean absolute error (MAE) and mean squared error (MSE) of the differences between observed and predicted EQ-5D-5L index scores were calculated. The 95% confidence intervals for these measures were calculated using the boot package [15].
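The error measures for a single value set can be computed along the lines sketched below, assuming data frames obs and pred that hold the observed and predicted domain levels in columns MO, SC, UA, PD, and AD; the England 5L valuation set is used only as an example of the available tariffs.

```r
library(eq5d)  # converts EQ-5D-5L profiles into utilities for a given value set
library(boot)  # non-parametric bootstrap for confidence intervals

obs_util  <- eq5d(scores = obs,  country = "England", version = "5L", type = "VT")
pred_util <- eq5d(scores = pred, country = "England", version = "5L", type = "VT")

errors <- obs_util - pred_util
mae <- mean(abs(errors))  # mean absolute error
mse <- mean(errors^2)     # mean squared error

# Bootstrap 95% confidence interval for the MAE
mae_stat <- function(e, idx) mean(abs(e[idx]))
boot_out <- boot(errors, statistic = mae_stat, R = 2000)
boot.ci(boot_out, type = "perc")
```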

We also compared the MAE between utilities above and below the median estimated utility to assess the models' ability to fit patients with better and worse estimated utilities. We followed the Mapping onto Preference-based Measures Reporting Standards (MAPS) statement to improve the clarity, transparency, and completeness of the reporting of this mapping study [33].

Model building and evaluation were conducted using the caret R package [10].

Results

General characteristics

The estimation sample

The estimation sample had a mean age of 47.6 ± 13.3 years, and 321 patients (70.4%) were female. The mean duration of OA was 6.7 ± 6.5 years. About 26% of patients complained of back pain, and hypertension was the most common comorbidity (Additional file 1). Approximately 13.4% had undergone TKA, 24.1% were indicated for TKA, and 62.5% were not. Level 1 was the most frequently reported level for mobility (27%) and self-care (60%), level 2 for usual activities (25%), levels 2 and 3 for pain/discomfort (30%), and level 3 for anxiety/depression (30%). The average EQ-VAS was 61.2 ± 24.7.

The estimation sample expressed 206 of the 3,125 possible EQ-5D-5L health states, with utilities ranging from −0.964 to 1 (Additional file 1). The maximum and minimum utility indices were reported by 3.1% and 2.2% of patients, respectively (Table 2).

Table 2 Description of EQ-5D-5L domains and VAS as well as OKS in the Estimation sample

The external validation sample

The external validation sample had a mean age of 49 ± 14.1 years, and 78 (80.9%) were female. The mean duration of OA was 6.7 ± 6.5 years. About 28% of patients complained of back pain, and hypertension was the most common comorbidity (Additional file 1). Approximately 24.3% had undergone TKA, 18.3% were indicated for TKA, and 57.4% were not.

Level 1 was the most frequently reported level for mobility (39%), self-care (57%), usual activities (30%) and anxiety/depression (38.3%), and level 3 for pain/discomfort (33%). The average EQ-VAS was 69.3 ± 21.3.

The external validation sample expressed 67 different EQ-5D-5L health states, with utilities ranging from −0.732 to 1 (Additional file 1). The maximum and minimum utility indices were reported by 10.4% and 0.9% of patients, respectively (Table 3).

Table 3 Description of EQ-5D-5L domains and VAS as well as OKS in the external validation sample

Exploratory data analysis

Conceptual overlap

The significant correlations between the EQ-5D-5L domains and the OKS questions ranged from −0.79 to −0.28 (Additional file 1). The predominance of blue hues throughout the correlation plot indicates a strong first principal component, which accounted for 66.35% of the total variance.

Important questions as determined by recursive feature elimination (RFE)

RFE ranked the predictors according to their contribution to each domain (Additional file 1). All 12 questions contributed to mobility; eleven, eight, and seven questions contributed to usual activities, pain/discomfort, and self-care, respectively; and only three questions contributed to anxiety/depression.

"Walking time before severe pain" was the first contributing question in predicting mobility. “Troubles with washing and drying" topped the self-care list, and "Pain interferes with work" topped the lists for usual activities, pain/discomfort, and anxiety/depression.

Model building on the estimation sample

After constructing the models, we compared them and selected the most accurate model for each domain (Table 4; Additional file 1). Cross-validated accuracy was highest for self-care and lowest for anxiety/depression. Cross-validation yielded coefficients of variation of 5% for self-care, 6% for mobility and pain/discomfort, 7% for usual activities, and 9% for anxiety/depression.

Table 4 Measures of performance (accuracy) of the best models in the five domains on the estimation and external validation sample

Model evaluation on the external validation sample

In the external validation sample, the performance of the models predicting all domains yielded greater crude accuracy than the baseline accuracy (Table 4). The mobility domain's predictive accuracy increased from 26.5% (baseline accuracy) to 65.6% in the estimation sample and to 68.2% in the external validation sample. The models' accuracy was lowest for anxiety/depression and highest for mobility and usual activity.

The five EQ-5D-5L domains were predicted using the models with the highest accuracies: mobility by penalized ordinal regression with pre-processed predictors, usual activities by ordinal forest, pain/discomfort by a cumulative probability model with pre-processed predictors, self-care by ordinal forest with RFE-selected predictors, and anxiety/depression by O-CART with RFE-selected predictors.

Actual and predicted EQ-5D-5L utility values were estimated for all countries with available tariffs (either valuation technique (VT) or crosswalk (CW) tariffs), and the errors in the predicted utility values were calculated. The average MAE was 0.098 ± 0.022 (range 0.063 to 0.142), and the average MSE was 0.020 ± 0.008 (range 0.008 to 0.042) (Table 5).

Table 5 Error measurement for predicted utility values based on the OKS in the external validation sample using different countries' value sets
Box 1 Structures of the model classes used to derive the mapping algorithm for the EQ-5D-5L
Box 2 Structure of the different sets of OKS questions used to feed the models that derive the mapping algorithm for the EQ-5D-5L

The accuracy of the developed algorithms varied between countries and tariff types. Sweden and South Korea had the smallest MSEs for utilities estimated using valuation technique tariffs, while Ireland, Denmark, and Taiwan had the largest. For utilities estimated using crosswalk tariffs, Zimbabwe, Japan, and the United States exhibited the smallest MSEs, and Spain and the United Kingdom the largest.

We compared the MAE between utilities above and below the median estimated utility to assess the fit of models in patients with better and worse utilities (Table 5). The MAE was less than 0.20 in both groups.

Discussion

The majority of the literature evaluating osteoarthritis treatment technologies used the OKS. Mapping the OKS to the EQ-5D-5L allows this literature to be used in economic evaluations. Response mapping, in which we map onto the EQ-5D-5L domains rather than the utility index, offers an international advantage. Only one Spanish study [31] has mapped the OKS to the EQ-5D-5L; however, it did not use response mapping, so its algorithm can support economic evaluations only in the Spanish context.

Another study [20] developed a response mapping algorithm from the OKS with satisfactory prediction accuracy; however, it mapped to the three-level EQ-5D-3L rather than the five-level version.

Similar to others [20, 31], we found sufficient conceptual overlap between the EQ-5D-5L domains and the OKS questions. Like Dakin et al. (2013), we found that all OKS and EQ-5D-5L questions loaded onto a single principal component, although the variance explained by our component (66%) was higher than theirs (40% for the pre-operative sample and 54% for the post-operative sample).

We developed a mapping algorithm that predicts EQ-5D-5L utility from OKS responses; its performance was better than that of the model developed using the Spanish tariff [31], in which the lowest MAEs, obtained using the GLM and Breg models, were 0.1127 (0.1014–0.1239) and 0.1141 (0.1031–0.1251), respectively. Our MAE was 0.099 (0.091–0.102) using the Spanish VT value set and 0.110 (0.093–0.112) using the Spanish CW value set. Although prediction accuracy varied with the tariff, our algorithm gave accurate predictions of utilities in the external validation sample across the EQ-5D tariffs (maximum MSE = 0.042).

The models predicting mobility, self-care, usual activities, and pain/discomfort outperformed the model predicting anxiety/depression, because the OKS includes questions related to mobility, self-care, usual activities, and pain but none about psychological symptoms. Nonetheless, the OKS improved the accuracy of predicting anxiety/depression from 30% (baseline) to 43.5% in the estimation sample and 35.7% in the external validation sample, probably because pain and poor knee function contribute to some of the observed anxiety/depression.

Because our sample included patients with comorbidities and KOA patients whether or not they were indicated for TKA, the developed algorithm is likely to be applicable to a wide range of patients. However, its performance in dissimilar populations is unknown.

A response mapping model had the best accuracy in predicting EQ-5D response levels from OKS responses in the UK [20]; therefore, it was our target method. In addition to producing more accurate predictions in this study, response mapping models do not need to handle non-normal utility distributions. Furthermore, while direct mapping models must be developed for specific tariffs, response mapping algorithms can be applied to any available five-level EQ-5D tariff, now or in the future [24]. Response mapping also gives rich insights into the relationship between the two instruments, for instance by predicting the proportion of patients at each level of each domain.

Despite all the benefits of response mapping, the belief that it requires a large sample size has prevented it from being conducted on many occasions [1, 20, 25, 37]. A recent article [35] provided practical guidance for calculating the sample size required to develop prediction models with continuous, binary, and time-to-event outcomes. For ordinal outcomes, one might follow the method suggested for binary outcomes: calculate the minimum sample size required for each pair of adjacent outcome levels and use the largest of these. If any level is rare, the estimated sample size will be very high, and because patients at level 5 (L5) in any domain are usually rare, it has been assumed that response mapping needs a very large sample. However, we believe this reasoning applies only when the levels of the ordinal outcome are treated as separate categories, in which case the model predicts the probability of falling into one of two adjacent categories (e.g., L1 vs L2, L2 vs L3, and so on). If, instead, the ordinal outcome arises from categorizing a continuous latent variable, the model predicts cumulative probabilities (the probability of falling at or below a particular point), e.g., the probability of being at L3 or above versus being at a lower level (L1 or L2). The availability of two versions of the EQ-5D, in which each domain is categorized into three (3L) or five (5L) levels, is sufficient theoretical evidence that the ordinal domains are based on latent continuous variables. Empirical evidence from the current study is that the optimum model for pain/discomfort contained a single set of coefficients predicting all levels of the outcome (parallel curves); the proportional-odds formulation of this idea is sketched below.
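As a minimal illustration of this point (the notation is ours, not taken from the study), a cumulative proportional-odds model for an ordinal domain $Y$ with levels 1–5 and OKS-derived predictors $\mathbf{x}$ can be written as

$$P(Y \le j \mid \mathbf{x}) = \frac{1}{1 + \exp\!\left[-\left(\theta_j - \mathbf{x}^{\top}\boldsymbol{\beta}\right)\right]}, \qquad j = 1, \dots, 4,$$

$$P(Y = j \mid \mathbf{x}) = P(Y \le j \mid \mathbf{x}) - P(Y \le j - 1 \mid \mathbf{x}).$$

Because the ordered thresholds $\theta_1 < \dots < \theta_4$ share a single coefficient vector $\boldsymbol{\beta}$ (parallel curves), a rare level such as L5 borrows information from the adjacent levels instead of requiring enough events to estimate its own coefficients.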

Another argument against the need for a large sample size is the effect size: the larger the effect size, the smaller the required sample size [12]. In the current study, the correlation between the total OKS and the EQ-5D domains was high, indicating a large effect size.

While the impact of rare events on the estimated sample size is large, their impact on the overall accuracy of the developed model is small, for the following reasons:

First, because the events are rare, their contribution to the overall accuracy is small. Second, with cumulative ordinal regression, these events are usually predicted at an adjacent level, so their exact prediction, which would require a large sample size, adds little value.

For the aforementioned reasons, some data scientists argue that there is no shortcut to knowing whether the available data are sufficient; the only way is to try a sample size and build the models [13]. One indication of a sufficient sample size is agreement among the cross-validated model accuracies. In the current study, the coefficients of variation (CV) of the cross-validated model accuracies, used to judge this agreement, were all below 10%.

Another problem that might emerge with a small sample size is overfitting, a condition in which a statistical model captures the random error in the data in addition to the relationships between variables. As a consequence, the predictive performance and generalizability of the model are degraded [16, 34]. To avoid overfitting, the following approaches were taken:

  1. Selection of important predictors
  2. Cross-validation
  3. Penalization in penalized ordinal regression
  4. Pruning in O-CART
  5. Limiting the number of trees in the final OF

Strengths

The use of the caret package and cross-validation allowed us to try four classes of machine learning models for ordinal outcomes (the cumulative probability model for ordinal data, penalized ordinal regression, O-CART, and ordinal forests). Tuning the models' hyperparameters yielded 133 different model structures, and using four different sets of predictors for every model structure increased the number of models tried to 532. Machine learning was introduced to mapping by one study that used a deep neural network (DNN) to map from the MacNew Heart Disease Health-related Quality of Life questionnaire (MacNew) onto country-specific EQ-5D-5L utility scores [23]. While that study mapped to the utility index (direct mapping), our study introduces the use of machine learning in response mapping.

Another strength is that we assessed the uncertainty around the estimated MAE using bootstrapping, which does not depend on distributional assumptions.

Limitations

Although machine learning algorithms can produce accurate predictions with small sample sizes, they act as black boxes in which the prediction process is not as transparent as in a regression analysis with known coefficients.

Furthermore, mapping is not a substitute for including the EQ-5D in future studies and does not overcome the limitations of either instrument [30].

Conclusions

The current study derived mapping algorithms from the OKS onto the five domains of the EQ-5D-5L. With the available EQ-5D-5L value sets, utility scores can be calculated from the predicted responses, which in turn enables the estimation of QALYs in economic evaluations. A machine learning approach presents a promising alternative in the mapping literature that warrants further exploration.