Dear Editor,

We would like to make clear that none of the issues raised by Ian Jacob and his colleagues [1] at Health Economics and Outcomes Research Ltd and Alimera Sciences Ltd reflect misunderstandings or methodological errors in our publication “Fluocinolone Acetonide Intravitreal Implant for Treating Recurrent Non-infectious Uveitis: An Evidence Review Group Perspective of a NICE Single Technology Appraisal” by Pouwels et al., published in the November 2019 issue of PharmacoEconomics [2].

First, we would like to point out that our publication is a summary of the clinical and cost-effectiveness evidence submitted by the company, the Evidence Review Group’s (ERG’s) critique of the submitted evidence, and the guidance issued by the National Institute for Health and Care Excellence (NICE) Appraisal Committee (AC) [2]. The statement that “The place of FAc in the treatment pathway was unclear, which was problematic because the comparators of interest were dependent on the place of FAc in the treatment pathway” was a direct reflection of the company submission. This seems to be recognised by Jacob et al. [1], as they rightly state that the treatment pathway was unclear both when the initial company submission was sent to NICE (November 2018) and when the ERG report was written.

The second issue raised by Jacob et al. [1] was the choice of comparators by NICE in the scope [3], especially dexamethasone. As Jacob et al. [1] rightly say, NICE explicitly requested an informal analysis vs dexamethasone, first in the NICE scope and subsequently in the appraisal consultation document following the first NICE AC meeting. Therefore, there is no misunderstanding on the part of the ERG regarding the comparators considered relevant by the NICE AC.

We stated in our publication that the control arm in the PSV-FAI-001 trial [i.e. (limited) current practice] was not considered representative of UK clinical practice. Jacob et al. [1] disagree. In the ERG report, we explained this as follows: the control arm is a constrained version of current practice. For active unilateral disease, particularly if this included macular oedema, local treatment would be common practice. However, for bilateral disease, many clinicians would opt for systemic therapy (which was not allowed within the trial unless local treatment had failed). In addition, it is not clear from the submission or from the PSV-FAI-001 trial clinical study report which treatments patients in the sham arm of the trial received [4].

Next, Jacob et al. [1] state that the design of the PSV-FAI-001 trial must have been misinterpreted because we considered the trial results difficult to interpret and uncertain. Recurrence of uveitis in the treated eye, i.e. the primary outcome of the PSV-FAI-001 trial, was difficult to interpret and uncertain because most of the events were imputed during the PSV-FAI-001 trial [for the primary outcome (recurrence of uveitis at 6 months), 23/24 (95.8%) of the recurrences on FAc and 26/38 (68.4%) of the recurrences on (limited) current practice were imputed]. This probably led to an overestimation of the number of recurrences of disease and a biased estimate of the relative effectiveness of FAc vs (limited) current practice. The NICE AC agreed with this in its final appraisal document, stating: “the committee noted that recurrence was assumed for patients who had missing data for the required eye examinations, or who had local or systemic treatments that were prohibited as part of the trial. The trial did not record why these treatments were given, but the committee considered that they may have been used to treat the other eye or for an underlying condition (rather than for recurrent uveitis in the study eye). So, it agreed that the recurrence rates reported in the trial were likely overestimated” (final appraisal document, Sect. 3.4) [5]. We do not agree that the benefits of the FAc implant observed in the trial were independent of, and unbiased by, the imputation approach; rather, we consider the extent of the benefits of the FAc implant uncertain because of the imputation approach.

Regarding the cost-effectiveness evidence, Jacob et al. [1] mention that some issues concerning the model structure (visual acuity not being captured in the model, a single implant and single eye being modelled, the use of a “Remission” state) were caused by the lack of data collected in PSV-FAI-001. We recognise that a lack of data may complicate model development. However, according to the ISPOR-SMDM modelling good research practices, “The modelling team should consult widely with subject experts and stakeholders to assure that the model represents disease processes appropriately […]” and “it is important to have a complete picture of the problem, regardless of data availability […]” [6]. We believe that the model structure employed by the company did not adhere to these guidelines and, as a consequence, may misrepresent the disease trajectory of patients. For instance, patients are at risk of developing bilateral disease, which was not captured in the model, and patients in the “Remission” health state had a utility value equal to that of the general UK population, which is unrealistic [4].

Concerning the absence of a transition between the “on treatment” and “blindness” health states, Jacob et al. [1] emphasise that the “company approach was heavily based on available evidence, and only the data available can be modelled without making substantial assumptions”. As mentioned in the ERG report, this transition was included in the ERG analyses, as a matter of judgement, because the FAc implant may be administered to patients with lower visual acuity than the patients treated in the PSV-FAI-001 trial (based on the opinion of clinical experts consulted by the ERG) [4]. Hence, a decrease in visual acuity may lead to legal blindness. The fact that none of the study participants in the PSV-FAI-001 trial experienced blindness is not proof that blindness will not occur in patients receiving treatment in clinical practice. In addition, this analysis was considered informative by the AC [5], rather than “impeding decision making” as suggested by Jacob et al. [1].

Finally, Jacob et al. [1] argue that the ERG and company implementations of dexamethasone as a comparator cannot be considered informative and that “various scenarios and assumptions used in the ERG analysis were not considered appropriate or helpful to decision making”. Concerning the first argument, we disagree that these analyses were uninformative, because the final appraisal document states that “The ERG’s model using the dexamethasone implant as a comparator is preferred” [5]. This statement emphasises that these analyses were considered informative by the AC and contributed to a well-informed decision. Concerning the second argument, the ERG performed multiple scenario analyses to explore the impact of varying these assumptions on the results, as recommended by good modelling practice [7].

The “Bigger Picture”

While we agree with Jacob et al. [1] that pragmatic trials have a place in clinical research, we think interventions (both experimental and control) should be precisely described in the clinical study report. In addition, the use of a pragmatic trial does not prevent investigators from measuring and reporting the primary outcomes unambiguously to reduce the risk of bias. We strongly disagree that our critique of the company submission was based on misunderstandings or included any methodological errors.

To conclude, we would like to thank Jacob et al. for their reaction to our publication, and we welcome discussion of the arguments underlying our critique. We hope we have made clear that none of the issues raised by Jacob et al. concern misunderstandings or methodological errors in our publication. We would also like to emphasise that such correspondence would be more valuable if it raised genuine methodological errors and fostered debate on how to address them.