1 Introduction

Interior-point algorithms (IPAs) have been efficient tools for solving optimization problems ever since Karmarkar [26] published his algorithm. The most important results on IPAs for solving linear programming (LP) problems are summarized in the monographs written by Roos, Terlaky and Vial [43], Wright [49] and Ye [50]. IPAs for solving linear complementarity problems (LCPs) have also been introduced, see [12, 13, 21,22,23,24, 27, 28]. In general, LCPs belong to the class of NP-complete problems [8]. However, Kojima et al. [28] proved that if the problem’s matrix possesses the \(P_*(\kappa )\)-property, then IPAs for LCPs have polynomial iteration complexity in the size of the problem and in the special parameter called the handicap of the matrix. IPAs have been extended to more general problems, such as symmetric cone optimization (SCO) problems. Faraut and Korányi [18] summarized the most important results related to the theory of Jordan algebras and symmetric cones. Güler [20] noticed that the family of self-scaled cones, introduced by Nesterov and Todd [34, 35], is identical to the family of self-dual and homogeneous, i.e. symmetric, cones. IPAs have also been introduced for Cartesian symmetric cone linear complementarity problems [30, 31].

The Cartesian symmetric cone horizontal linear complementarity problem (SCHLCP) has recently been introduced by Asadi et al. [4]. Note that LP, LCP, SCO, second-order cone programming, semidefinite optimization and convex quadratic optimization problems can be solved by using the SCHLCP possessing the \(P_*(\kappa )\)-property. Several IPAs have been proposed for solving Cartesian SCHLCPs. Mohammadi et al. [33] presented an infeasible IPA taking full Nesterov–Todd steps for solving problems of this kind. Later on, Asadi et al. [3] proposed an IPA for solving the Cartesian SCHLCP based on the search directions introduced in [9]. In 2020, Asadi et al. [2] introduced a new IPA for solving the Cartesian SCHLCP based on a positive-asymptotic barrier function. Moreover, in [5] a predictor–corrector (PC) IPA for \(P_*(\kappa )\)-SCHLCPs was presented. Asadi et al. [6] also defined a feasible IPA for \(P_*(\kappa )\)-SCHLCPs using a wide neighbourhood of the central path.

There are several approaches for determining search directions that lead to different IPAs. Peng et al. [37] used self-regular barriers to introduce large-update IPAs for LP. Lesaja and Roos [29] provided a unified analysis of kernel-based IPAs for \(P_*(\kappa )\)-LCPs. Vieira [47] studied different IPAs for SCO problems that determine the search directions using kernel functions. In 1997, Tunçel and Todd [46] presented for the first time a reparametrization of the central path system. Later on, Karimi et al. [25] analysed entropy-based search directions for LP in a wide neighbourhood of the central path. In 2003, Darvay presented a new technique for finding search directions for LP problems [9], namely the algebraically equivalent transformation (AET) of the system defining the central path. The most widely used function in the AET technique is the identity map. Darvay [9] applied the function \(\varphi (t)=\sqrt{t}\) in the AET method in order to define an IPA for LP. Kheirfam and Haghighi [27] proposed an IPA for \(P_{*} (\kappa )\)-LCPs that is based on a search direction generated by considering the function \(\varphi (t)=\frac{\sqrt{t}}{2(1+\sqrt{t})}\) in the AET. In 2016, Darvay et al. [14] used the function \(\varphi (t)=t-\sqrt{t}\) for the first time in the complexity analysis of an IPA. Later on, this IPA was generalized to SCO [15]. It should be mentioned that if we use the function \(\varphi (t)=t-\sqrt{t}\) in the AET approach, then the corresponding kernel function is a positive-asymptotic kernel function. Hence, this search direction cannot be obtained by using the usual kernel function approach. An infeasible version of the proposed IPA was also introduced in [42]. Darvay et al. [12] considered the function \(\varphi (t)=t-\sqrt{t}\) in the AET method in order to define a primal-dual IPA for solving \(P_*(\kappa )\)-LCPs. In [40], different IPAs for LP, SCO and sufficient LCPs using the AET technique have been presented.

In this paper, we generalize the PC IPA proposed in [13] to Cartesian \(P_*(\kappa )\)-SCHLCPs. We also provide a general framework for determining search directions in the case of PC IPAs for Cartesian \(P_*(\kappa )\)-SCHLCPs, which is an extension of the approach given in [41]. We present the analysis of the introduced PC IPA, and we give some new technical lemmas that are necessary in the analysis. Furthermore, we prove that the PC IPA has the same complexity bound as the best known interior-point algorithms for solving these types of problems.

In general, the analysis of IPAs is carried out only for fixed values of the proximity and update parameters. There are only a few results where the well-definedness of the algorithms is proven under some given conditions related to the parameters. In this paper, we give a condition related to the proximity and update parameters for which the introduced PC IPA is well defined. In this way, a whole set of parameters is determined for which the polynomial complexity of the algorithm is guaranteed. Note that in [43] the authors gave conditions on the update parameters for which the proposed PC IPA for LP is well defined. Moreover, Darvay [10, 11] considered PC IPAs using the function \(\varphi (t)=\sqrt{t}\) in the AET technique and gave conditions on the proximity and update parameters for which the PC IPAs are well defined. It should be mentioned that in this paper we provide the first result related to IPAs using the function \(\varphi (t)=t-\sqrt{t}\) in the AET technique that is well defined for a set of parameters instead of a given value of the proximity and update parameters.

The paper is organized as follows. In the next section, we present the Cartesian \(P_*(\kappa )\)-SCHLCP. In Sect. 3, we present the generalization of the AET technique for determining search directions for the Cartesian \(P_*(\kappa )\)-SCHLCP. Subsection 3.1 is devoted to giving a general framework for defining search directions in the case of PC IPAs for the Cartesian \(P_*(\kappa )\)-SCHLCP. In Sect. 4, the new PC IPA for the Cartesian \(P_*(\kappa )\)-SCHLCP is introduced, which is based on a new search direction obtained by using the function \(\varphi (t)=t-\sqrt{t}\) in the AET technique. Section 5 contains the complexity analysis of the proposed PC IPA. In Sect. 6, some concluding remarks are given. In the “Appendix”, we present some well-known results related to the theory of Euclidean Jordan algebras and symmetric cones.

2 Cartesian \(P_*(\kappa )\)-Symmetric Cone Horizontal Linear Complementarity Problem

For a more detailed description of the theory of Euclidean Jordan algebras and symmetric cones, see “Appendix”. Let \(\mathcal {V}:=\mathcal {V}_1\times \mathcal {V}_2\times \cdots \times \mathcal {V}_m\) be the Cartesian product space with its cone of squares \(\mathcal {K}:=\mathcal {K}_1\times \mathcal {K}_2\times \cdots \times \mathcal {K}_m\), where each space \(\mathcal {V}_{i}\) is a Euclidean Jordan algebra and each \(\mathcal {K}_i\) is the corresponding cone of squares of \(\mathcal {V}_{i}\). For any \(x=\left(x^{(1)},\,x^{(2)},\,\ldots ,\,x^{(m)}\right)^T\in \mathcal {V}\) and \(s=\left(s^{(1)},\,s^{(2)},\,\ldots ,\,s^{(m)}\right)^T\in \mathcal {V}\) let

$$\begin{aligned} x\circ s=\left(x^{(1)}\circ s^{(1)},\,x^{(2)}\circ s^{(2)},\,\ldots ,\,x^{(m)}\circ s^{(m)}\right)^{T},~~ \langle x,\,s\rangle =\sum _{i=1}^{m}\langle x^{(i)},\,s^{(i)}\rangle . \end{aligned}$$

Besides this, for any \(z=\left(z^{(1)},\,z^{(2)},\,\ldots ,\,z^{(m)}\right)^T\in \mathcal {V}\), where \(z^{(i)}\in \mathcal {V}_i\), the trace, the determinant and the minimal and maximal eigenvalues of the element z are defined as follows:

$$\begin{aligned} \textrm{tr}(z)= & {} \sum _{i=1}^{m}\textrm{tr}(z^{{(i)}}),~~\det (z)=\prod _{i=1}^{m}\det (z^{{(i)}}),\\ \lambda _{\min }(z)= & {} \min _{1\le i \le m}\{\lambda _{\min }(z^{{(i)}})\},~~~\lambda _{\max }(z)=\max _{1\le i \le m}\{\lambda _{\max }(z^{{(i)}})\}.~~~ \end{aligned}$$

The Frobenius norm of x is defined as \( \left\Vert x \right\Vert _F=\left(\sum _{i=1}^{m}\left\Vert x^{(i)} \right\Vert ^2_F\right)^{1/2}\). Furthermore, consider the Lyapunov transformation and the quadratic representation of x:

$$\begin{aligned}{} & {} L(x)=\textrm{diag}\,\!\!\left(L(x^{(1)}),\,L(x^{(2)}),\,\ldots ,\,L(x^{(m)})\right),\\{} & {} \quad P(x)=\textrm{diag}\,\!\!\left(P(x^{(1)}),\,P(x^{(2)}),\,\ldots ,\,P(x^{(m)})\right). \end{aligned}$$
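These blockwise definitions can be illustrated numerically. A minimal sketch, assuming each block algebra \(\mathcal {V}_i\) is a space of real symmetric matrices with the Jordan product \(X\circ Y=\frac{1}{2}(XY+YX)\); the block sizes and entries below are arbitrary illustrative choices:

```python
import numpy as np

# Two symmetric-matrix blocks V_1 = S^2 and V_2 = S^3 (illustrative sizes).
z1 = np.array([[2.0, 1.0], [1.0, 3.0]])
z2 = np.diag([1.0, 4.0, 5.0])
blocks = [z1, z2]

# Blockwise quantities on the Cartesian product, as defined above:
# trace and determinant aggregate over blocks, eigenvalues take the
# extremes over all blocks, and the Frobenius norm sums squares.
tr_z    = sum(np.trace(b) for b in blocks)
det_z   = np.prod([np.linalg.det(b) for b in blocks])
lam_min = min(np.linalg.eigvalsh(b).min() for b in blocks)
lam_max = max(np.linalg.eigvalsh(b).max() for b in blocks)
frob    = np.sqrt(sum(np.linalg.norm(b, 'fro') ** 2 for b in blocks))
```

Here \(\textrm{tr}(z)=5+10=15\) and \(\det (z)=5\cdot 20=100\), while the spectral extremes come from different blocks.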

In the Cartesian SCHLCP, we should find a vector pair \((x,s) \in \mathcal {V}\times \mathcal {V}\) such that

$$\begin{aligned} Qx+Rs=q, \qquad x\circ s=0, \qquad x \succeq _{\mathcal {K}} 0, \qquad s \succeq _{\mathcal {K}} 0, \end{aligned}$$

where \(Q,R : \mathcal {V}\rightarrow \mathcal {V}\) are linear operators, \(q \in \mathcal {V}\) and \(\mathcal {K}\) is the symmetric cone of squares of the Cartesian product space \(\mathcal {V}\). Consider the constant \(\kappa \ge 0\). The pair \(\left(Q,\,R\right)\) has the \(P_*(\kappa )\) property if for all \((x,s)\in \mathcal {V}\times \mathcal {V}\)

$$\begin{aligned} Qx+Rs=0 ~~\text {implies}~~ (1+4\kappa )\sum _{i\in I_+}\langle x^{(i)},\,s^{(i)}\rangle +\sum _{i\in I_-}\langle x^{(i)},\,s^{(i)}\rangle \ge 0, \end{aligned}$$

where \(I_+=\{i~:~\langle x^{(i)},\,s^{(i)}\rangle >0\}\) and \(I_-=\{i~:~\langle x^{(i)},\,s^{(i)}\rangle <0\}\).
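The \(P_*(\kappa )\) property cannot be certified by sampling, but a sampling-based sanity check is easy to sketch. Assuming the componentwise case \(\mathcal {V}=\mathbb {R}^m\) (m one-dimensional blocks) and the classical monotone reformulation \(Q=-M\), \(R=I\) with a positive semidefinite matrix M, so that \(\kappa =0\), the implication above can be tested on random pairs:

```python
import numpy as np

rng = np.random.default_rng(0)

# Componentwise case V = R^m. With Q = -M for a positive semidefinite M and
# R = I, Qx + Rs = 0 forces s = Mx, hence sum_i x_i s_i = x^T M x >= 0,
# so (Q, R) has the P_*(0) (monotone) property.
M = np.array([[2.0, 1.0, 0.0], [1.0, 2.0, 1.0], [0.0, 1.0, 2.0]])
Q, R = -M, np.eye(3)
kappa = 0.0

def p_star_lhs(x, kappa):
    s = np.linalg.solve(R, -Q @ x)   # enforce Qx + Rs = 0
    prod = x * s                     # blockwise inner products <x_i, s_i>
    return (1 + 4 * kappa) * prod[prod > 0].sum() + prod[prod < 0].sum()

ok = all(p_star_lhs(rng.standard_normal(3), kappa) >= -1e-12
         for _ in range(1000))
```

A single violated sample would disprove the property; passing samples only fail to disprove it.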

We suppose that the interior-point condition (IPC) holds, which means that there exists \(( x^0, s^0)\) so that

$$\begin{aligned} Qx^0+Rs^0=q, \qquad x^0 \succ _{\mathcal {K}} 0, \qquad s^0 \succ _{\mathcal {K}} 0. \end{aligned}$$

For a detailed study on the initialization of the problem using self-dual embedding for conic convex programming and particular cases, see [16, 17, 32, 38, 39].

The system defining the central path is given as

$$\begin{aligned} \begin{aligned} Q x+ R s= & {} q,\qquad x \succeq _{\mathcal {K}} 0,\\ x \circ s= & {} \mu e,\qquad s \succeq _{\mathcal {K}} 0, \end{aligned} \end{aligned}$$
(1)

where \(\mu > 0\). The subclass of the Monteiro–Zhang family of search directions is characterized by

$$\begin{aligned} C(x,s)=\left\{ u|u \text { is invertible and } L(P(u)x) L(P(u)^{-1}s)=L(P(u)^{-1}s) L(P(u)x) \right\} . \end{aligned}$$

Lemma 2.1

(Lemma 28 in [44]) Let \(u \in \text {int}\,\mathcal {K}.\) Then,

$$\begin{aligned} x \circ s = \mu e \quad \Leftrightarrow \quad P(u)x \circ P(u)^{-1}s=\mu e. \end{aligned}$$

Considering \(u \in C(x,s)\), \(\widetilde{Q} = Q P(u)^{-1} \), \(\widetilde{R}=RP(u)\) and using Lemma 2.1, we can rewrite system (1) in the following way:

$$\begin{aligned} \begin{aligned} \widetilde{Q}P(u) x+\widetilde{R} P(u)^{-1}s= & {} q, \qquad \quad P(u)x \succeq _{\mathcal {K}} 0,\\ P(u)x \circ P(u)^{-1}s= & {} \mu e,\qquad P(u)^{-1}s \succeq _{\mathcal {K}} 0. \end{aligned} \end{aligned}$$
(2)

If the IPC holds, then system (2) has a unique solution for each \(\mu >0\), see [4] and [31].

3 Search Directions for IPAs for Cartesian \(P_{*} (\kappa )\)-Symmetric Cone Horizontal Linear Complementarity Problems

We present the generalization of the AET technique to the \(P_*(\kappa )\)-SCHLCP (cf. [2, 41]). Let us consider the vector-valued function \(\varphi \), which is induced by the real-valued univariate function \(\varphi : (\xi ^2,+\infty ) \rightarrow \mathbb {R}\), where \(0 \le \xi <1\) and \(\varphi '(t)>0\) for all \(t>\xi ^2\). We assume that \(\lambda _{\min }\left( \frac{x \circ s}{\mu }\right) >\xi ^2\). In this way, system (2) can be written as follows:

$$\begin{aligned} \begin{aligned} \widetilde{Q}P(u) x+\widetilde{R} P(u)^{-1}s= & {} q,\qquad \quad P(u)x \succeq _{\mathcal {K}} 0,\\ \varphi \left( \frac{P(u)x \circ P(u)^{-1}s}{\mu } \right)= & {} \varphi (e),\qquad P(u)^{-1}s \succeq _{\mathcal {K}} 0. \end{aligned} \end{aligned}$$

We define the search directions by using the technique considered in [41, 48]. For strictly feasible \(x \in \text {int}\, \mathcal {K}\) and \(s \in \text {int}\, \mathcal {K}\), we want to find the search directions \((\varDelta x, \varDelta s)\) so that

$$\begin{aligned} {} \begin{aligned} \widetilde{Q}P(u) \varDelta x+\widetilde{R} P(u)^{-1} \varDelta s= & {} 0,\qquad \quad P(u)x \succeq _{\mathcal {K}} 0,\\ P(u)x \circ P(u)^{-1} \varDelta s + P(u)^{-1} s \circ P(u) \varDelta x= & {} a_{\varphi },\qquad P(u)^{-1}s \succeq _{\mathcal {K}} 0, \end{aligned} \end{aligned}$$
(3)

where

$$\begin{aligned} a_{\varphi } = \mu \left( \varphi ' \left( \frac{P(u)x \circ P(u)^{-1}s}{\mu } \right) \right) ^{-1} \circ \left( \varphi (e)-\varphi \left( \frac{P(u)x \circ P(u)^{-1}s}{\mu } \right) \right) . \end{aligned}$$

In [40] an overview of different functions \(\varphi \) used in the literature in the AET technique is presented.

Throughout the paper, we use the NT-scaling scheme. Let \(u=w^{-\frac{1}{2}}\), where w is called the NT-scaling point of x and s and is defined as follows:

$$\begin{aligned} w=P(x)^{\frac{1}{2}}\left( P(x)^{\frac{1}{2}}s\right) ^{-\frac{1}{2}} =P(s)^{-\frac{1}{2}}\left( P(s)^{\frac{1}{2}}x \right) ^{\frac{1}{2}}. \end{aligned}$$
(4)

Let us introduce the following notations:

$$\begin{aligned} {} v:=\frac{P(w)^{-\frac{1}{2}}x}{\sqrt{\mu }} = \frac{P(w)^{\frac{1}{2}}s}{\sqrt{\mu }}, \quad d_x:=\frac{P(w)^{-\frac{1}{2}} \varDelta x}{\sqrt{\mu }}, \quad d_s:=\frac{P(w)^{\frac{1}{2}} \varDelta s}{\sqrt{\mu }}. \end{aligned}$$
(5)
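In the special case \(\mathcal {V}=\mathbb {R}^n\) with the componentwise Jordan product, \(P(x)=\textrm{diag}\,(x^2)\), so both expressions in (4) reduce to \(w=\sqrt{x/s}\) componentwise and both expressions for v in (5) give \(\sqrt{x \circ s/\mu }\). A minimal numerical sketch, with arbitrary sample data:

```python
import numpy as np

rng = np.random.default_rng(1)
x, s = rng.uniform(0.5, 2.0, 5), rng.uniform(0.5, 2.0, 5)
mu = 0.8  # illustrative value

# Componentwise algebra: P(x) = diag(x^2), so P(x)^{1/2} acts as
# multiplication by x.  Both expressions in (4) give w = sqrt(x/s):
w1 = x * (x * s) ** -0.5          # P(x)^{1/2} (P(x)^{1/2} s)^{-1/2}
w2 = (1.0 / s) * (s * x) ** 0.5   # P(s)^{-1/2} (P(s)^{1/2} x)^{1/2}

# The scaled point v of (5), computed from x and from s, coincides:
w = w1
v_from_x = (x / w) / np.sqrt(mu)  # P(w)^{-1/2} x / sqrt(mu)
v_from_s = (w * s) / np.sqrt(mu)  # P(w)^{1/2} s / sqrt(mu)
```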

From (5) we obtain the scaled system:

$$\begin{aligned} \begin{aligned}&\sqrt{\mu } Q P(w)^{\frac{1}{2}} d_x + \sqrt{\mu } R P(w)^{-\frac{1}{2}} d_s = 0,\\&d_x+d_s=p_v, \end{aligned} \end{aligned}$$
(6)

where

$$\begin{aligned} p_v = v^{-1} \circ (\varphi '(v \circ v))^{-1} \circ (\varphi (e)-\varphi (v \circ v)). \end{aligned}$$

In order to be able to define IPAs, we should define a proximity measure to the central path. For this, let

$$\begin{aligned} \delta (v)= \delta (x,s,\mu ) := \frac{\Vert p_v\Vert _F}{2} . \end{aligned}$$

Furthermore, we define the \(\tau \)-neighbourhood of a fixed point on the central path:

$$\begin{aligned} \mathcal {N}(\tau ,\mu ):= \{(x,s) \in \mathcal {V}\times \mathcal {V}: Qx+Rs = q, \; x \succeq _{\mathcal {K}} 0, \; s \succeq _{\mathcal {K}} 0, \; \delta (x,s,\mu ) \le \tau \}, \end{aligned}$$

where \(\tau \) is a threshold parameter and \(\mu >0\) is fixed.

As mentioned in Sect. 1, the search directions can also be determined by another approach, based on kernel functions, see [7, 37]. Achache [1] and Pan et al. [36] showed that one can associate corresponding kernel functions to several functions \(\varphi \) used in the AET technique. In [40], the author also presented the relationship between the approach based on kernel functions and the AET technique. However, it is interesting that if we apply the function \(\varphi (t)=t-\sqrt{t}\) in the AET, we cannot obtain a corresponding kernel function in the usual sense, because it is not defined on a neighbourhood of the origin, see [15]. Darvay and Rigó [15] introduced the notion of the positive-asymptotic kernel function. In this sense, the kernel function associated with the function \(\varphi (t)=t-\sqrt{t}\) used in this paper is the positive-asymptotic kernel function:

$$\begin{aligned} \bar{\psi } : \left( \frac{1}{2},\infty \right) \rightarrow [0,\infty ), \quad \bar{\psi }(t)=\frac{t^2}{2}-\frac{t}{2}-\frac{1}{4}\log (2t-1). \end{aligned}$$
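In the one-dimensional case, the connection between this kernel function and the AET search direction can be checked directly: for \(\varphi (t)=t-\sqrt{t}\), the scaled term is \(p_v(t)=2(t-t^2)/(2t-1)\) (see Sect. 4), which coincides with \(-\bar{\psi }'(t)\) on \(\left( \frac{1}{2},\infty \right) \). A numerical sketch:

```python
import numpy as np

# Grid inside the domain (1/2, infinity) of the positive-asymptotic kernel.
t = np.linspace(0.6, 3.0, 200)

psi_bar = t ** 2 / 2 - t / 2 - np.log(2 * t - 1) / 4
dpsi = t - 0.5 - 1.0 / (2 * (2 * t - 1))     # derivative of psi_bar

# Scaled AET term induced by phi(t) = t - sqrt(t) in one dimension:
p_v = 2 * (t - t ** 2) / (2 * t - 1)
```

On this grid \(p_v(t)=-\bar{\psi }'(t)\) holds to machine precision, and \(\bar{\psi }\) is nonnegative with its minimum at \(t=1\).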

For details, see [2, 15, 40]. In the following subsection, we generalize the method given in [41] for determining the scaled predictor and corrector systems in the case of PC IPAs.

3.1 Determining Search Directions in Case of Predictor–Corrector Interior-Point Algorithms

We generalize the method introduced by Darvay et al. [13] and presented in [41] to determine the search directions in the case of PC IPAs. In general, PC IPAs perform a predictor and several corrector steps in a main iteration. The predictor step is a greedy one, which aims to approach the optimal solution of the problem as quickly as possible. Hence, a certain amount of deviation from the central path is incurred after a predictor step. In the corrector step, the goal is to return to the \(\tau \)-neighbourhood of the central path.

First, we deal with the corrector step, which is a full-NT step. Hence, the scaled corrector system coincides with (6). We decompose \(a_{\varphi }\) given in system (3) in order to obtain the predictor system as follows:

$$\begin{aligned} a_{\varphi }=f(x,s,\mu ) + g(x,s), \end{aligned}$$
(7)

where f and g are vector-valued functions and \(f(x,s,0)=0\). We set \(\mu =0\) in this decomposition. In this way, we get

$$\begin{aligned} \begin{aligned} \widetilde{Q}P(u) \varDelta x+\widetilde{R} P(u)^{-1} \varDelta s= & {} 0,\quad \quad P(u)x \succeq _{\mathcal {K}} 0,\\ P(u)x \circ P(u)^{-1} \varDelta s + P(u)^{-1} s \circ P(u) \varDelta x= & {} g(x,s), \quad P(u)^{-1}s \succeq _{\mathcal {K}} 0. \end{aligned} \end{aligned}$$

Using \(u=w^{-\frac{1}{2}}\), the scaled predictor system is the following:

$$\begin{aligned} \begin{aligned} \sqrt{\mu } Q P(w)^{\frac{1}{2}} d_x + \sqrt{\mu } R P(w)^{-\frac{1}{2}} d_s= & {} 0,\qquad \qquad \qquad \\ d_x+d_s= & {} (\mu v)^{-1} \circ g(x,s). \end{aligned} \end{aligned}$$
(8)

Using this system, the predictor search directions can be easily obtained. In the next section, we introduce the new PC IPA for solving Cartesian \(P_{*} (\kappa )\)-SCHLCPs.

4 New Predictor–Corrector Interior-Point Algorithm

We deal with the case, when \(\varphi (t)=t-\sqrt{t}\). Similar to [13], the decomposition of \(a_{\varphi }\) given in (7) will be

$$\begin{aligned} a_{\varphi } = (\sqrt{\mu } \; x \circ s) \circ \left( 2 (x \circ s)^{\frac{1}{2}}- \sqrt{\mu } e\right) ^{-1} - x \circ s. \end{aligned}$$
(9)

By setting \(\mu =0\) in (9) and using (5), we get

$$\begin{aligned} {} g(x,s) = -x\circ s = - \mu \; v^2. \end{aligned}$$
(10)

Furthermore,

$$\begin{aligned} {} p_v = 2(v-v^2) \circ (2v-e)^{-1}. \end{aligned}$$
(11)
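In the componentwise case, the reduction of the general expression for \(p_v\) from Sect. 3 to (11) under \(\varphi (t)=t-\sqrt{t}\) can be verified numerically; the sample vector v is arbitrary, subject to \(\lambda _{\min }(v)>\frac{1}{2}\):

```python
import numpy as np

rng = np.random.default_rng(2)
v = rng.uniform(0.55, 2.0, 7)   # componentwise, all entries above 1/2

phi  = lambda t: t - np.sqrt(t)
dphi = lambda t: 1 - 1 / (2 * np.sqrt(t))

# General AET expression p_v = v^{-1} o (phi'(v o v))^{-1} o (phi(e) - phi(v o v))
p_general = (1 / v) * (1 / dphi(v * v)) * (phi(np.ones_like(v)) - phi(v * v))

# Closed form (11): p_v = 2 (v - v^2) o (2v - e)^{-1}
p_eq11 = 2 * (v - v * v) / (2 * v - 1)
```

Symbolically, \(\varphi '(v^2)=(2v-1)/(2v)\) and \(\varphi (v^2)=v^2-v\), which gives (11) after simplification.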

In the corrector step, we take a full-NT step. Hence, the scaled corrector system coincides with system (6) with \(p_v\) given in (11):

$$\begin{aligned} \begin{aligned} \sqrt{\mu } Q P(w)^{\frac{1}{2}} d_x^c + \sqrt{\mu } R P(w)^{-\frac{1}{2}} d_s^c&= 0,\qquad \qquad \qquad \qquad \quad \\ d_x^c+d_s^c&= 2(v-v^2) \circ (2v-e)^{-1}. \end{aligned} \end{aligned}$$
(12)

We can calculate the corrector search directions \(\varDelta ^c x\) and \(\varDelta ^c s\) from

$$\begin{aligned} {} \varDelta ^c x = \sqrt{\mu } P(w)^{\frac{1}{2}} d_x^c, \quad \varDelta ^c s = \sqrt{\mu } P(w)^{-\frac{1}{2}} d_s^c. \end{aligned}$$
(13)

Let \(x^{c}=x+ \varDelta ^c x\) and \(s^{c}=s + \varDelta ^c s\) be the point obtained after a corrector step. For the predictor step, we introduce the following notation:

$$\begin{aligned} v^c=\frac{P(w^c)^{-\frac{1}{2}}x^c}{\sqrt{\mu }} = \frac{P(w^c)^{\frac{1}{2}}s^c}{\sqrt{\mu }}, \end{aligned}$$

where \(w^c\) is the NT-scaling point of \(x^c\) and \(s^c\).

Using (8) and (10), the scaled predictor system in this case is

$$\begin{aligned} \begin{aligned} \sqrt{\mu } Q P(w^c)^{\frac{1}{2}} d_x^p + \sqrt{\mu } R P(w^c)^{-\frac{1}{2}} d_s^p= & {} 0, \\ d_x^p+d_s^p= & {} -v^c. \end{aligned} \end{aligned}$$
(14)

We obtain the predictor search directions \(\varDelta ^p x\) and \(\varDelta ^p s\) from

$$\begin{aligned} {} \varDelta ^p x = \sqrt{\mu } P(w^c)^{\frac{1}{2}} d_x^p, \quad \varDelta ^p s = \sqrt{\mu } P(w^c)^{-\frac{1}{2}} d_s^p. \end{aligned}$$
(15)

The point after a predictor step is \(x^{p}=x^c +\theta \varDelta ^p x\) and \(s^p=s^c + \theta \varDelta ^p s,\) with \(\mu ^p = (1-\theta )\mu ,\) where \(\theta \in (0,1)\) is the update parameter. The proximity measure in this case is

$$\begin{aligned} \delta (v):=\delta (x,s,\mu )=\frac{\Vert p_v\Vert _F}{2} = \left\| (v-v^2) \circ (2v-e)^{-1}\right\| _F. \end{aligned}$$
(16)

Our PC IPA starts with \((x^0,s^0) \in \mathcal {N}(\tau ,\mu )\). The algorithm performs corrector and predictor steps. The PC IPA is given in Algorithm 4.1.

[Algorithm 4.1: the predictor–corrector interior-point algorithm for Cartesian \(P_{*} (\kappa )\)-SCHLCPs]
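A minimal numerical sketch of the PC IPA in the special case \(\mathcal {V}=\mathbb {R}^n\) with the componentwise Jordan product, where the NT scaling reduces to \(w=\sqrt{x/s}\) and \(P(w)^{\pm \frac{1}{2}}\) to componentwise multiplication. The monotone test instance, the tolerance and the value of \(\theta \) are illustrative choices, and the sketch is not a substitute for the formal statement of Algorithm 4.1:

```python
import numpy as np

def solve_scaled(Q, R, w, rhs):
    """Solve Q diag(w) d_x + R diag(1/w) d_s = 0,  d_x + d_s = rhs."""
    n = len(w)
    A = np.block([[Q * w, R / w], [np.eye(n), np.eye(n)]])
    sol = np.linalg.solve(A, np.concatenate([np.zeros(n), rhs]))
    return sol[:n], sol[n:]

def pc_ipa(Q, R, q, x, s, mu, theta=0.3, eps=1e-8, max_iter=500):
    while x @ s > eps and max_iter > 0:
        max_iter -= 1
        # corrector: full NT step, d_x + d_s = p_v with phi(t) = t - sqrt(t)
        w, v = np.sqrt(x / s), np.sqrt(x * s / mu)
        dx, ds = solve_scaled(Q, R, w, 2 * (v - v**2) / (2 * v - 1))
        x, s = x + np.sqrt(mu) * w * dx, s + np.sqrt(mu) * ds / w
        # predictor: d_x + d_s = -v^c, damped by theta, then mu update
        w, v = np.sqrt(x / s), np.sqrt(x * s / mu)
        dx, ds = solve_scaled(Q, R, w, -v)
        x = x + theta * np.sqrt(mu) * w * dx
        s = s + theta * np.sqrt(mu) * ds / w
        mu *= 1 - theta
    return x, s, mu

# Monotone test instance (kappa = 0): the standard LCP s = Mx + q written as
# Qx + Rs = q with Q = -M, R = I; q is chosen so that x0 = s0 = e lies
# exactly on the central path with mu0 = 1.
M = np.array([[2.0, 1.0, 0.0], [1.0, 2.0, 1.0], [0.0, 1.0, 2.0]])
Q, R = -M, np.eye(3)
e = np.ones(3)
q = Q @ e + R @ e
x, s, mu = pc_ipa(Q, R, q, e.copy(), e.copy(), 1.0)
```

Both linear solves keep the iterates on the affine manifold \(Qx+Rs=q\), so only positivity and proximity have to be controlled, exactly as in the analysis below.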

5 Analysis of the Predictor–Corrector Interior-Point Algorithm

5.1 The Corrector Step

The corrector step is a full-NT step, hence the following lemmas can be easily obtained by using the results published in [2]. Consider

$$\begin{aligned} q_v=d_x^c-d_s^c, \end{aligned}$$
(17)

hence

$$\begin{aligned} d_x^c =\frac{p_v+q_v}{2},~~ d_s^c =\frac{p_v-q_v}{2},~~~d_x^c \circ d_s^c=\frac{p_v^2-q_v^2}{4}. \end{aligned}$$
(18)

Lemma 5.1 gives an upper bound for \(\left\Vert q_v \right\Vert _F\) in terms of \(\left\Vert p_v \right\Vert _F\).

Lemma 5.1

(Lemma 5.1 in [2]) We have \( ~\left\Vert q_v \right\Vert _F\le \sqrt{1+4\kappa }~\left\Vert p_v \right\Vert _F\).

Let \(x,\,s\in \text {int}\;\mathcal {K}\), \(\mu >0\) and w be the scaling point of x and s. We have

$$\begin{aligned} \begin{array}{ccc} x^c:=x+\varDelta x=\sqrt{\mu }P(w)^{1/2}(v+d_x^c),~\\ s^c:=s+\varDelta s=\sqrt{\mu }P(w)^{-1/2}(v+d_s^c). \end{array} \end{aligned}$$
(19)

It should be mentioned that \(x^c\) and \(s^c\) belong to \(\text {int}\;\mathcal {K}\) if and only if \(v+d_x^c\) and \(v+d_s^c\) belong to \(\text {int}\;\mathcal {K}\), because \(P(w)^{1/2}\) and its inverse, \(P(w)^{-1/2}\), are automorphisms of \(\text {int}\;\mathcal {K}\), cf. Proposition A.1 (ii) from “Appendix”. The next lemma proves the strict feasibility of the full-NT step. It should be mentioned that throughout the paper we use the proximity measure given in (16).

Lemma 5.2

(Lemma 5.3 in [2]) Let \(\delta :=\delta (x,s,\mu )<\frac{1}{\sqrt{1+4\kappa }}\). Then, \(\lambda _{\min }(v)>\frac{1}{2}\) and the full-NT step is strictly feasible, that is, \( x^c \succ _{\mathcal {K}} 0\) and \(s^c \succ _{\mathcal {K}} 0\).

Lemma 5.3 gives an upper bound for the proximity measure after a full-NT step.

Lemma 5.3

(Lemma 5.6 in [2]) Let \(\delta =\delta (x,s,\mu )<\frac{1}{2\sqrt{1+4 \kappa }}\) and \(\lambda _{\min }(v)>\frac{1}{2}\). Then, \(\lambda _{\min }(v^c)>\frac{1}{2}\) and

$$\begin{aligned} \delta (x^c,s^c,\mu ) \le \frac{\sqrt{1-(1+4 \kappa ) \delta ^2}}{2(1-(1+4 \kappa ) \delta ^2)+\sqrt{1-(1+4 \kappa )\delta ^2}-1}(3+4 \kappa ) \delta ^2. \end{aligned}$$

Furthermore,

$$\begin{aligned} \delta (x^c,s^c,\mu ) < \frac{3-\sqrt{3}}{2} (3+4\kappa )\delta ^2. \end{aligned}$$
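The second bound follows because the coefficient of \((3+4\kappa )\delta ^2\) in the first bound, viewed as a function of \(u=(1+4\kappa )\delta ^2 \in \left[0,\frac{1}{4}\right)\), increases from \(\frac{1}{2}\) towards \(\frac{3-\sqrt{3}}{2}\) as \(u \rightarrow \frac{1}{4}\). A numerical sketch of this monotonicity:

```python
import numpy as np

# Coefficient in Lemma 5.3 as a function of u = (1 + 4*kappa) * delta^2;
# the hypothesis delta < 1/(2*sqrt(1 + 4*kappa)) means u < 1/4.
u = np.linspace(0.0, 0.25, 10001)[:-1]          # grid on [0, 1/4)
h = np.sqrt(1 - u) / (2 * (1 - u) + np.sqrt(1 - u) - 1)

bound = (3 - np.sqrt(3)) / 2                    # constant in the second bound
```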

Lemma 5.4 provides an upper bound for the duality gap \(\langle x, s \rangle \) after a full-NT step.

Lemma 5.4

(cf. Lemma 5.8 in [2]) Let \(x^c\) and \(s^c\) be obtained after a full-NT step. Then,

$$\begin{aligned} \langle x^c,s^c \rangle \le \mu (r+2\delta ^2). \end{aligned}$$

Furthermore, if \(\delta <\frac{1}{2(1+4\kappa )}\), then

$$\begin{aligned} \langle x^c,s^c \rangle < \frac{3}{2} \mu r. \end{aligned}$$

In the following subsection, we present some technical lemmas.

5.2 Technical Lemmas

First, consider the following two results that will be used in the next part of the analysis.

Lemma 5.5

Let \(\bar{f}:{(\bar{d},+\infty )}\longrightarrow (0, +\infty )\) be a function such that \(\bar{f}(t)>\xi \) for all \(t > \bar{d}\), where \(\bar{d} > 0\) and \(\xi >0\). Assume that \(v\in \mathcal {V}\) and \(\lambda _{\min }(v) > \bar{d}\). Then,

$$\begin{aligned} \left\Vert \bar{f}(v)\circ \left(e-v\right) \right\Vert _F\ge \xi \left\Vert e-v \right\Vert _F {.} \end{aligned}$$

Proof

Using Theorem A.1 given in “Appendix”, we can write \(v=\sum _{i=1}^{r} \lambda _i(v) c_i\). Moreover, \(\bar{f}(v)=\sum _{i=1}^r \bar{f}(\lambda _i(v))c_i\). Then,

$$\begin{aligned} \left\Vert \bar{f}(v)\circ \left(e-v\right) \right\Vert _F = \sqrt{\sum _{i=1}^{r} \left( \bar{f}(\lambda _i(v))\right) ^2 \lambda _i^2(e-v)} \ge \xi \left\Vert e-v \right\Vert _F {.} \end{aligned}$$

\(\square \)

Lemma 5.6

(cf. Lemma 7.4 of [15]) Let \(\widetilde{f}:{(d,+\infty )}\longrightarrow (0, +\infty )\) be a decreasing function, where \(d > 0\). Furthermore, suppose that \(v\in \mathcal {V}\), \(\lambda _{\min }(v)>d\) and \(\eta >0\). Then,

$$\begin{aligned} \left\Vert \widetilde{f}(v)\circ \left(\eta ^2 e-v^2\right) \right\Vert _F\le \widetilde{f}\left(\lambda _{\min }(v)\right) \left\Vert \eta ^2 e-v^2 \right\Vert _F. \end{aligned}$$

In the following part, we consider some inequalities from [2] that will be used later in the analysis. Using the first equation of (6), we get

$$\begin{aligned} Q\left( P(w)^{1/2}d_x\right)+R\left(P(w)^{-1/2}d_s\right)=0. \end{aligned}$$

The definition of the \(P_*(\kappa )\) property results in

$$\begin{aligned} \left\langle \! P(w)^{1/2}d_x ,\,P(w)^{-1/2}d_s\!\right\rangle \!\ge \!\! -4\kappa \!\sum _{i\in I_+}\!\!\left\langle \! P\!\left(w^{(i)}\right)^{1/2}\!\!d_x^{(i)},\,P\!\left(w^{(i)}\right)^{-1/2}\!\!d_s^{(i)}\!\right\rangle \!,~~~~ \end{aligned}$$
(20)

where \(I_+=\left\{ i:\left\langle d_x^{(i)},\,d_s^{(i)}\right\rangle > 0\right\} \). Using that each \(P\left(w^{(i)}\right)^{1/2}\), \(1\le i \le m\), is self-adjoint, we get

$$\begin{aligned} \left\langle P\left(w^{(i)}\right)^{1/2}d_x^{(i)},\,P\left(w^{(i)}\right)^{-1/2}d_s^{(i)}\right\rangle =\left\langle d_x^{(i)},\,d_s^{(i)}\right\rangle . \end{aligned}$$

Substituting the last equation into (20), we obtain

$$\begin{aligned} \left\langle d_x,\,d_s\right\rangle \ge -4\kappa \sum _{i\in I_+}\left\langle d_x^{(i)},\,d_s^{(i)}\right\rangle . \end{aligned}$$
(21)

Furthermore, one has

$$\begin{aligned} \sum _{i\in I_+}\left\langle d_x^{(i)},\,d_s^{(i)}\right\rangle \le \frac{1}{4}\sum _{i\in I_+}\left\Vert d_x^{(i)}+d_s^{(i)} \right\Vert ^2_F\le \frac{1}{4}\left\Vert d_x+d_s \right\Vert ^2_F=\frac{\left\Vert p_v \right\Vert _F^2}{4}, \end{aligned}$$
(22)

and therefore \(\left\langle d_x,\,d_s\right\rangle \ge -\kappa \left\Vert p_v \right\Vert _F^2\). Thus,

$$\begin{aligned} \left\Vert q_v \right\Vert _F^2= & {} \left\Vert d_x-d_s \right\Vert _F^{2}=\left\Vert d_x+d_s \right\Vert _F^2-4\left\langle d_x,\,d_s\right\rangle \nonumber \\\le & {} \left\Vert p_v \right\Vert _F^2+4\kappa \left\Vert p_v \right\Vert ^2_F=(1+4\kappa )\left\Vert p_v \right\Vert _F^2. \end{aligned}$$
(23)
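The first equality in (23) is the identity \(\Vert a-b\Vert ^2=\Vert a+b\Vert ^2-4\langle a,b\rangle \), valid in any inner-product space; a quick numerical check on random vectors (the dimension is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
dx, ds = rng.standard_normal(6), rng.standard_normal(6)

# ||d_x - d_s||^2 = ||d_x + d_s||^2 - 4 <d_x, d_s>
lhs = np.linalg.norm(dx - ds) ** 2
rhs = np.linalg.norm(dx + ds) ** 2 - 4 * (dx @ ds)
```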

Note that (23) leads to the result given in Lemma 5.1. Now, we prove a lemma which is a generalization of Lemma 5.3 given in [13] to the Cartesian SCHLCP. We assume that \((Q,\,R)\) possesses the \(P_* (\kappa )\)-property.

Lemma 5.7

The following inequality holds:

$$\begin{aligned} \Vert d^p_{x} \circ d^p_{s} \Vert _F \le \frac{r(1+2\kappa )(1+2\delta ^c)^2}{2}, \end{aligned}$$

where \(\delta ^c=\delta (x^c,s^c,\mu )=\left\| (v^c-\left( v^c\right) {^2})\circ (2v^c-e)^{-1}\right\| _F.\)

Proof

Using (22) and the second equation of the scaled predictor system (14) we have

$$\begin{aligned} \sum _{i\in I_+}\left\langle {d^p_x}^{(i)},\,{d^p_s}^{(i)}\right\rangle \le \frac{1}{4}\left\Vert d^p_x+d^p_s \right\Vert ^2_F=\frac{\Vert v^c\Vert _F^2}{4}. \end{aligned}$$
(24)

From (21) and (24) we obtain

$$\begin{aligned} \Vert v^c\Vert _F^2= & {} \Vert d_x^p + d_s^p\Vert _F^2=\Vert d_x^p\Vert _F^2 + \Vert d_s^p\Vert _F^2+ 2 \langle d_x^p, d_s^p \rangle \\\ge & {} \Vert d_x^p\Vert _F^2 + \Vert d_s^p\Vert _F^2 - 8\kappa \sum _{i\in I_+}\langle {d_x^p}^{(i)}, {d_s^p}^{(i)} \rangle \ge \Vert d_x^p\Vert _F^2 + \Vert d_s^p\Vert _F^2 - 2\kappa \Vert v^c\Vert _F^2. \end{aligned}$$

Hence, \(\Vert d_x^p\Vert _F^2 + \Vert d_s^p\Vert _F^2 \le (1+2\kappa ) \Vert v^c\Vert _F^2.\) We give an upper bound for \(\Vert v^c\Vert _F\) depending on \(\delta ^c\) and r. For this, consider \(\sigma ^c=\Vert e-v^c\Vert _F\).

Then,

$$\begin{aligned} {} \Vert v^c\Vert _F\le & {} \Vert v^c - e\Vert _F + \Vert e\Vert _F = \sigma ^c + \sqrt{r} \le \sqrt{r}(\sigma ^c + 1). \end{aligned}$$
(25)

Consider \(\bar{f}(t) = \frac{t}{2t-1}\), which satisfies \(\bar{f}(t) > \frac{1}{2}\) for \(t > \frac{1}{2}\). Then, using Lemma 5.5 with \(\xi =\frac{1}{2}\) and \(\bar{d}=\frac{1}{2}\) we have

$$\begin{aligned} {} \delta ^c= & {} \left\| (v^c-\left( v^c\right) {^2})\circ (2v^c-e)^{-1}\right\| _F \nonumber \\= & {} \left\| v^c \circ (2v^c-e)^{-1} \circ (e-v^c) \right\| _F \ge \frac{1}{2} \Vert e-v^c\Vert _F = \frac{\sigma ^c}{2}, \end{aligned}$$
(26)

hence \(\sigma ^c\le 2 \delta ^c\).
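In the componentwise case, inequality (26) and the conclusion \(\sigma ^c\le 2\delta ^c\) can be checked numerically on random vectors with \(\lambda _{\min }(v^c)>\frac{1}{2}\); the sampling range below is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(4)
ok = True
for _ in range(1000):
    v = rng.uniform(0.51, 3.0, 5)       # all eigenvalues above 1/2
    # delta^c = ||(v - v^2) o (2v - e)^{-1}||_F >= ||e - v||_F / 2,
    # since f(t) = t/(2t - 1) > 1/2 on (1/2, infinity)
    delta_c = np.linalg.norm((v - v ** 2) / (2 * v - 1))
    ok = ok and delta_c >= 0.5 * np.linalg.norm(1 - v) - 1e-12
```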

From (25) and (26) we get

$$\begin{aligned}{} \Vert v^c\Vert _F\le & {} \sqrt{r}(1+2\delta ^c). \end{aligned}$$

In this way, we have

$$\begin{aligned} \Vert d_x^p \circ d_s^p\Vert _F\le & {} \Vert d_x^p\Vert _F \Vert d_s^p\Vert _F \le \frac{1}{2} \left( \Vert d_x^p\Vert _F^2 + \Vert d_s^p\Vert _F^2 \right) \\\le & {} \frac{1}{2} (1+2\kappa ) \left\| v^c\right\| _F^2 \le \frac{r(1+2\kappa )(1+2\delta ^c)^2}{2}, \end{aligned}$$

which proves the lemma. \(\square \)

5.3 The Predictor Step

The first lemma of this subsection is a generalization of Lemma 5.5 given in [13] to Cartesian SCHLCP. It provides a sufficient condition for the strict feasibility of the predictor step.

Lemma 5.8

Let \(x^c \succ _{\mathcal {K}} 0\) and \(s^c \succ _{\mathcal {K}} 0\), \(\mu > 0\) such that \(\delta ^c:=\delta (x^c,s^c,\mu ) < \frac{1}{2}\). Furthermore, let \(0<\theta <1\). Let \(x^{p}=x^c + \theta \varDelta ^p x, \quad s^p=s^c + \theta \varDelta ^p s\) be the iterates after a predictor step. Then, \(x^p \succ _{\mathcal {K}} 0\) and \(s^p \succ _{\mathcal {K}} 0\) if \(\bar{u}(\delta ^c,\theta ,r) > 0\), where

$$\begin{aligned} \bar{u}(\delta ^c, \theta , r) := (1-2\delta ^c)^2 - \frac{r(1+2\kappa )\theta ^2 (1+2\delta ^c)^2}{2(1-\theta )}. \end{aligned}$$

Proof

For \(\alpha \in [0,1]\), let \(x^p(\alpha ) =x^c + \alpha \theta \varDelta ^p x\) and \(s^p(\alpha )= s^c + \alpha \theta \varDelta ^p s\). Hence,

$$\begin{aligned} {} x^p(\alpha ) = \sqrt{\mu } P(w^c)^{\frac{1}{2}} (v^c+\alpha \theta d_x^p), \quad s^p(\alpha )=\sqrt{\mu } P(w^c)^{-\frac{1}{2}} (v^c+\alpha \theta d_s^p). \end{aligned}$$
(27)

From (27) and using that \( P(w^c)^{\frac{1}{2}}\) and \(P(w^c)^{-\frac{1}{2}}\) are automorphisms of \(\text {int}\;\mathcal {K}\) we obtain that \(x^p(\alpha ) \in \text {int}\;\mathcal {K}\) and \(s^p(\alpha ) \in \text {int}\;\mathcal {K}\) if and only if \(v^c+\alpha \theta d_x^p \in \text {int}\;\mathcal {K}\) and \(v^c+\alpha \theta d_s^p \in \text {int}\;\mathcal {K}\). Let \(v_x^p(\alpha ) = v^c+ \alpha \theta d_x^p\) and \(v_s^p(\alpha ) := v^c+\alpha \theta d_s^p,\) for \(0 \le \alpha \le 1\). From the second equation of system (14) we obtain

$$\begin{aligned} {} v_x^p(\alpha ) \circ v_s^p(\alpha )= & {} (v^c+ \alpha \theta d_x^p ) \circ (v^c+ \alpha \theta d_s^p ) \nonumber \\= & {} \left( \left( v^c\right) ^2 + \alpha \theta v^c \circ ( d_x^p + d_s^p) + \alpha ^2 \; \theta ^2 \; d_x^p \circ d_s^p \right) \nonumber \\= & {} \left( (1-\alpha \theta ) \left( v^c \right) ^2 + \alpha ^2 \; \theta ^2 \; d_x^p \circ d_s^p \right) . \end{aligned}$$
(28)

Using (28) and Lemma A.1 from “Appendix” we get

$$\begin{aligned} {} \lambda _{\min } \left( \frac{v_x^p(\alpha ) \circ v_s^p(\alpha )}{(1-\alpha \theta )}\right)= & {} \lambda _{\min } \left( \left( v^c \right) ^2 + \frac{\alpha ^2 \; \theta ^2}{(1-\alpha \theta ) }d_x^p \circ d_s^p \right) \nonumber \\\ge & {} \lambda _{\min } \left( v^c \right) ^2 - \frac{\alpha ^2 \theta ^2}{1-\alpha \theta } \Vert d_x^p \circ d_s^p \Vert _{F}. \end{aligned}$$
(29)

Using \(|\lambda _i(e-v^c)| \le \Vert e-v^c\Vert _F, \; \forall \; i=1,\ldots ,r,\) we have

$$\begin{aligned} 1-\sigma ^c \le \lambda _i(v^c) \le 1+\sigma ^c, \; \forall \; i=1,\ldots ,r. \end{aligned}$$
(30)

From (26), (30) and \(\delta ^c<\frac{1}{2}\) we have

$$\begin{aligned} {} \lambda _{\min } \left( v^c \right) ^2 \ge (1-\sigma ^c)^2 \ge (1-2\delta ^c)^2. \end{aligned}$$
(31)

Note that \(f(\alpha )=\frac{\alpha ^2 \theta ^2}{1-\alpha \theta }\) is strictly increasing for \(0 \le \alpha \le 1\) and each fixed \(0< \theta < 1\). Using this, (29), (31) and Lemma 5.7 we get

$$\begin{aligned} {} \lambda _{\min } \left( \frac{v_x^p(\alpha ) \circ v_s^p(\alpha )}{(1-\alpha \theta )}\right)\ge & {} (1-2\delta ^c)^2 - \frac{r(1+2\kappa )\theta ^2 (1+2\delta ^c)^2}{2(1-\theta )} \nonumber \\= & {} \bar{u}(\delta ^c,\theta ,r) > 0. \end{aligned}$$
(32)

This means that \(x^p(\alpha ) \circ s^p(\alpha ) \succ _{\mathcal {K}} 0\), for \(0\le \alpha \le 1\). Hence, \(x^p(\alpha )\) and \(s^p(\alpha )\) do not change sign on \(0\le \alpha \le 1\). From \(x^p(0) = x^c \succ _{\mathcal {K}} 0\) and \(s^p(0) = s^{c} \succ _{\mathcal {K}} 0\), we deduce that \( x^p(1)=x^p \succ _{\mathcal {K}} 0\) and \(s^p(1) = s^p \succ _{\mathcal {K}} 0,\) which yields the result. \(\square \)
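As a numerical illustration (not part of the proof), the lower bound \(\bar{u}(\delta ^c,\theta ,r)\) from (32) can be evaluated directly. The helper name `u_bar` and the sample values of \(\kappa \), \(r\), \(\bar{c}\), \(\bar{g}\) and \(\delta ^c\) below are ours; \(\theta \) is chosen according to the parameter setting used later in Sect. 5.4.

```python
# Sketch: evaluate the lower bound u_bar(delta_c, theta, r) from (32).
# kappa enters the bound through the term r*(1+2*kappa)*theta^2.
import math

def u_bar(delta_c, theta, r, kappa):
    """Right-hand side of (32): lower bound on lambda_min of the scaled product."""
    return (1 - 2 * delta_c) ** 2 \
        - r * (1 + 2 * kappa) * theta ** 2 * (1 + 2 * delta_c) ** 2 / (2 * (1 - theta))

# Sample data (ours): kappa = 0, rank r = 25, c_bar = 2, g_bar = 4.
kappa, r, c_bar, g_bar = 0.0, 25, 2.0, 4.0
theta = 2 / (g_bar * (3 + 4 * kappa) * math.sqrt(r))
delta_c = 0.05  # any value below 1/2

print(u_bar(delta_c, theta, r, kappa))  # positive, so the predictor step stays feasible
```

For this setting the bound is well above \(\frac{1}{4}\), which is the threshold required in Lemma 5.9.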

Consider the following notation

$$\begin{aligned} v^p = \frac{P(w^p)^{-\frac{1}{2}} x^p}{\sqrt{\mu ^p}} = \frac{P(w^p)^{\frac{1}{2}} s^p}{\sqrt{\mu ^p}}, \end{aligned}$$

where \(\mu ^p=(1-\theta )\mu \) and \(w^p\) is the NT-scaling point of \(x^p\) and \(s^p\). Substituting \(\alpha =1\) in (28) and (32) we get

$$\begin{aligned} \left( v^p\right) ^2 = \left( v^c\right) ^2 + \frac{\theta ^2}{1-\theta } (d_x^p \circ d_s^p), \end{aligned}$$
(33)
$$\begin{aligned} \lambda _{\min } \left( v^p \right) ^2 \ge \bar{u}(\delta ^c,\theta ,r) > 0. \end{aligned}$$
(34)

The next lemma investigates the effect of a predictor step and the update of \(\mu \) on the proximity measure. This is the generalization of Lemma 5.6 proposed in [13] to Cartesian SCHLCP.

Lemma 5.9

Consider \(\delta ^c:=\delta (x^c,s^c,\mu ) <\frac{1}{2}\), \(\delta :=\delta (x,s,\mu )\), \(\lambda _{\min }(v) > \frac{1}{2}\), \(\mu ^p=(1-\theta ) \mu \), where \(0<\theta <1\), \(\bar{u}(\delta ^c,\theta ,r) > \frac{1}{4}\) and let \(x^p\) and \(s^p\) be the iterates after a predictor step. Then, we have

$$\begin{aligned} \delta ^p:=\delta (x^p,s^p,\mu ^p) \le \frac{\sqrt{\bar{u}(\delta ^c,\theta ,r)}\left( (3+4\kappa )\delta ^2+(1-2\delta ^c)^2 - \bar{u}(\delta ^c,\theta ,r) \right) }{2\bar{u}(\delta ^c,\theta ,r) + \sqrt{\bar{u}(\delta ^c,\theta ,r)} -1}, \end{aligned}$$

and \(\lambda _{\min }(v^p) > \frac{1}{2}\).

Proof

From \(\bar{u}(\delta ^c,\theta ,r)> \frac{1}{4} > 0\), using Lemma 5.8 we obtain that \(x^p \succ _{\mathcal {K}} 0\) and \(s^p \succ _{\mathcal {K}} 0\). This means that the predictor step is strictly feasible. Furthermore, from (34) we get

$$\begin{aligned} \lambda _{\min } \left( v^p \right) \ge \sqrt{\bar{u}(\delta ^c,\theta ,r)} > \frac{1}{2}, \end{aligned}$$

hence the first part of the lemma is proved. Moreover,

$$\begin{aligned} \delta ^p &= \left\| \left( v^p - \left( v^p\right) ^2\right) \circ (2 v^p - e)^{-1} \right\| _F \nonumber \\ &= \left\| v^p \circ \left( e - (v^p)^2\right) \circ \left( (2 v^p - e)\circ (e+v^p)\right) ^{-1} \right\| _F. \end{aligned}$$
(35)

Consider \(\widetilde{f}:\left( \frac{1}{2},\infty \right) \rightarrow \mathbb {R}\), \(\widetilde{f}(t) = \frac{t}{(2t-1)(1+t)}\), which is decreasing with respect to t. Using this, (33), (34) and (35) and Lemma 5.6 we get

$$\begin{aligned} \delta ^p &= \left\| v^p \circ \left( e - (v^p)^2\right) \circ \left( (2 v^p - e)\circ (e+v^p)\right) ^{-1} \right\| _F \nonumber \\ &\le \frac{\lambda _{\min } \left( v^p\right) }{\left( 2 \lambda _{\min } \left( v^p\right) -1\right) \left( 1+\lambda _{\min } \left( v^p\right) \right) } \left\| e- \left( v^p\right) ^2\right\| _F \nonumber \\ &\le \frac{\sqrt{\bar{u}(\delta ^c,\theta ,r)} }{\left( 2 \sqrt{\bar{u}(\delta ^c,\theta ,r)} -1\right) \left( 1+\sqrt{\bar{u}(\delta ^c,\theta ,r)} \right) } \left\| e- \left( v^c\right) ^2 - \frac{\theta ^2}{1-\theta } (d_x^p \circ d_s^p)\right\| _F \\ &\le \frac{\sqrt{\bar{u}(\delta ^c,\theta ,r)} }{\left( 2 \sqrt{\bar{u}(\delta ^c,\theta ,r)} -1\right) \left( 1+\sqrt{\bar{u}(\delta ^c,\theta ,r)} \right) } \left( \left\| e- \left( v^c\right) ^2 \right\| _F + \frac{\theta ^2}{1-\theta } \left\| d_x^p \circ d_s^p\right\| _F \right) .\nonumber \end{aligned}$$
(36)

Using the definition of \(v^{c}\), (5), (17) and (18) we have

$$\begin{aligned} \left\| e- \left( v^c\right) ^2 \right\| _F &= \Vert (v+d_x^c) \circ (v+d_s^c)-e\Vert _F \nonumber \\ &= \Vert v^2 + v \circ \left( d_x^c + d_s^c\right) + d_x^c \circ d_s^c - e \Vert _F \nonumber \\ &\le \Vert v^2 + v \circ p_v - e\Vert _F + \left\| \frac{p_v^2 - q_v^2}{4}\right\| _F. \end{aligned}$$
(37)

Besides this, using \(\lambda _{\min }(v) > \frac{1}{2}\), which implies \(v^2 \circ (2v - e)^{-1}\succeq _{\mathcal {K}} 0\), we have

$$\begin{aligned} 0 \preceq _{\mathcal {K}} v^2 + v \circ p_v - e &= v^2 + 2v^2 \circ (e-v) \circ (2 v- e)^{-1} - e \nonumber \\ &= (v-e)^2\circ (2v-e)^{-1} \nonumber \\ &\preceq _{\mathcal {K}} \left( (v-e)^2 \circ v^2\right) \circ (2v-e)^{-2} = \frac{p_v^2}{4}. \end{aligned}$$
(38)

Using (37), (38) and Lemma 5.1 we obtain

$$\begin{aligned} \left\| e- \left( v^c\right) ^2 \right\| _F &\le \Vert v^2 + v \circ p_v - e\Vert _F + \left\| \frac{p_v^2 - q_v^2}{4}\right\| _F \nonumber \\ &\le \frac{\Vert p_v\Vert _F^2}{4}+ \frac{\Vert p_v\Vert _F^2}{4}+\frac{\Vert q_v\Vert _F^2}{4}\nonumber \\ &\le (3+4\kappa ) \delta ^2. \end{aligned}$$
(39)

From (36), (39) and Lemma 5.7 we obtain the second statement of the lemma. \(\square \)
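Two scalar ingredients of this proof can be checked numerically: the denominator of the bound in Lemma 5.9 factors as \((2\sqrt{u}-1)(1+\sqrt{u}) = 2u+\sqrt{u}-1\), and \(\widetilde{f}(t)=\frac{t}{(2t-1)(1+t)}\) is decreasing on \(\left( \frac{1}{2},\infty \right) \). The following sketch (helper names are ours) verifies both on a sample grid.

```python
# Sketch checking two scalar facts used in Lemma 5.9:
# (i) (2*sqrt(u) - 1)*(1 + sqrt(u)) = 2*u + sqrt(u) - 1,
# (ii) f_tilde(t) = t/((2t-1)*(1+t)) is decreasing for t > 1/2.
import math

def f_tilde(t):
    return t / ((2 * t - 1) * (1 + t))

# (i) the denominator identity, on a few sample values of u
for u in [0.3, 0.5, 0.79, 1.0]:
    a = math.sqrt(u)
    assert abs((2 * a - 1) * (1 + a) - (2 * u + a - 1)) < 1e-12

# (ii) strict monotonicity on a grid of (1/2, infinity)
ts = [0.51 + 0.01 * k for k in range(500)]
vals = [f_tilde(t) for t in ts]
assert all(x > y for x, y in zip(vals, vals[1:]))
print("identity and monotonicity hold on the sampled grid")
```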

The next lemma gives an upper bound for the complementarity gap after an iteration of the algorithm.

Lemma 5.10

Let \(x^c \succeq _{\mathcal {K}} 0\), \(s^c \succeq _{\mathcal {K}} 0\) and \(\mu >0\) such that \(\delta ^c:=\delta (x^c,s^c,\mu ) < \frac{1}{2}\) and \(0<\theta <1\). If \(\delta <\frac{1}{2(1+4\kappa )}\) and \(x^p\) and \(s^p\) are the iterates obtained after the predictor step of the algorithm, then

$$\begin{aligned} \langle x^p, s^p \rangle \le \left( 1-\theta +\frac{\theta ^2}{2} \right) \langle x^c, s^c \rangle \le \left( 1-\frac{\theta }{2}\right) \langle x^c , s^c \rangle <\frac{3r\mu ^p}{2(1-\theta )}. \end{aligned}$$

Proof

Using the definition of \(v^p\) and (33) we have

$$\begin{aligned} \langle x^p, s^p \rangle &= \mu ^p \langle e, (v^p)^2 \rangle = \mu \langle e, (1-\theta ) \left( v^c\right) ^2 + \theta ^2 d_x^p \circ d_s^p \rangle \nonumber \\ &= (1-\theta ) \langle x^c, s^c \rangle + \mu \theta ^2 \langle d_x^p, d_s^p \rangle . \end{aligned}$$
(40)

From the second equation of (14) we obtain

$$\begin{aligned} {} \langle d_x^p, d_s^p \rangle = \frac{\langle x^c, s^c \rangle }{2 \mu } - \frac{\Vert d_x^p\Vert _F^2+\Vert d_s^p\Vert _F^2}{2} \le \frac{\langle x^c, s^c \rangle }{2\mu } . \end{aligned}$$
(41)

Moreover, using (40) and (41) we get

$$\begin{aligned} \langle x^p, s^p \rangle \le \left( 1-\theta +\frac{\theta ^2}{2} \right) \langle x^c, s^c \rangle . \end{aligned}$$

Note that if \(0<\theta <1\), then we have

$$\begin{aligned} 1-\theta +\frac{\theta ^2}{2} < 1- \frac{\theta }{2}. \end{aligned}$$
(42)

From (42), \(\mu ^p = (1-\theta ) \mu \) and Lemma 5.4 we deduce

$$\begin{aligned} \langle x^p, s^p \rangle \le \left( 1-\theta +\frac{\theta ^2}{2} \right) \langle x^c, s^c \rangle< \left( 1-\frac{\theta }{2}\right) \langle x^c, s^c \rangle <\frac{3r\mu ^p}{2(1-\theta )}, \end{aligned}$$

which yields the result. \(\square \)
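The gap-reduction factor in Lemma 5.10 rests on the scalar inequality (42), \(1-\theta +\frac{\theta ^2}{2} < 1-\frac{\theta }{2}\) for \(0<\theta <1\), which is equivalent to \(\theta ^2 < \theta \). A minimal numerical sketch (ours) confirms it on a grid:

```python
# Sketch of the scalar inequality (42) behind Lemma 5.10: for 0 < theta < 1,
# 1 - theta + theta**2/2 < 1 - theta/2, so the complementarity gap shrinks
# by at least the factor (1 - theta/2) per predictor step.
thetas = [0.001 * k for k in range(1, 1000)]  # grid in (0, 1)
assert all(1 - t + t * t / 2 < 1 - t / 2 for t in thetas)
print("gap-reduction factor verified on the grid")
```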

5.4 Complexity Bound

Firstly, consider the following notation:

$$\begin{aligned} \varPsi (\tau ) = \frac{\left( 1- \frac{2}{3} \tau \right) ^2 - \frac{1}{2}}{\left( 1+\frac{2}{3}\tau \right) ^2}. \end{aligned}$$
(43)
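A small numerical sketch (our code) of the threshold function \(\varPsi \) from (43), together with a grid check that it is strictly decreasing on \([0,\frac{1}{2}]\), a property invoked later in the proof of Lemma 5.12:

```python
# Sketch: the threshold function Psi from (43), plus a grid check of its
# strict monotonicity on [0, 1/2].
def psi(tau):
    return ((1 - 2 * tau / 3) ** 2 - 0.5) / (1 + 2 * tau / 3) ** 2

taus = [0.5 * k / 1000 for k in range(1001)]
vals = [psi(t) for t in taus]
assert all(x > y for x, y in zip(vals, vals[1:]))  # strictly decreasing
assert psi(0.0) == 0.5                             # Psi(0) = (1 - 1/2)/1
print("Psi is strictly decreasing on the sampled grid")
```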

In the following lemma, we give a condition related to the proximity and update parameters \(\tau \) and \(\theta \) for which the PC IPA is well defined. This result is one of the novelties of the paper.

Lemma 5.11

Let \(\tau =\frac{1}{\bar{c} (3+4\kappa )}\), where \(\bar{c}\ge 2\) and \(0 < \theta \le \frac{2}{\bar{g} (3+4\kappa ) \sqrt{r}}\), where \(\bar{g}\ge 2\). If

  1. i.

    \(\bar{c} \le \frac{1}{2}\bar{g}\),

  2. ii.

    \(\frac{r(1+2\kappa )\theta ^2}{2(1-\theta )} < \varPsi (\tau )\),

then the PC IPA proposed in Algorithm 4.1 is well defined.

Proof

Let \((x, s)\) be the iterate at the start of an iteration with \(x \succeq _{\mathcal {K}} 0\) and \(s \succeq _{\mathcal {K}} 0\) such that \(\lambda _{\min }\left( \frac{x \circ s }{\mu }\right) > \frac{1}{4}\) and \(\delta (x,s,\mu ) \le \tau \). It should be mentioned that \(\tau =\frac{1}{\bar{c} (3+4\kappa )}< \frac{1}{2\sqrt{1+4\kappa }}\), where \(\bar{c} \ge 2\). After a corrector step, applying Lemma 5.3 we have

$$\begin{aligned} \delta ^c:=\delta (x^c, s^c,\mu ) < \frac{3-\sqrt{3}}{2} (3+4\kappa )\delta ^2. \end{aligned}$$

Since the right-hand side of the above inequality is monotonically increasing with respect to \(\delta \) and \(\delta \le \tau \), this implies

$$\begin{aligned} \delta ^c \le \frac{3-\sqrt{3}}{2} (3+4\kappa )\tau ^2=:\omega (\tau ). \end{aligned}$$
(44)

Substituting \(\tau = \frac{1}{\bar{c}(3+4\kappa )}\) in (44), using that \(\frac{3-\sqrt{3}}{2} < \frac{2}{3}\), \(\kappa \ge 0\) and \(\bar{c} \ge 2\) we obtain

$$\begin{aligned} \omega (\tau ) = \frac{(3-\sqrt{3})\tau }{2\bar{c}} < \frac{2\tau }{3\bar{c}} \le \frac{1}{3}\tau . \end{aligned}$$
(45)

Using \(\frac{r(1+2\kappa )\theta ^2}{2(1-\theta )} < \varPsi (\tau )\) and (45) we obtain

$$\begin{aligned} \frac{1}{4}< \frac{1}{2} &< \left( 1- \frac{2}{3}\tau \right) ^2 - \frac{r(1+2\kappa ) \theta ^2}{2(1-\theta )} \left( 1+\frac{2}{3}\tau \right) ^2 \nonumber \\ &< (1-2\omega (\tau ))^2 - \frac{r(1+2\kappa )\theta ^2 (1+2\omega (\tau ))^2}{2(1-\theta )} \nonumber \\ &\le (1-2\delta ^c)^2 - \frac{r(1+2\kappa )\theta ^2 (1+2\delta ^c)^2}{2(1-\theta )} = \bar{u}(\delta ^c,\theta ,r), \end{aligned}$$
(46)

hence condition \(\bar{u}(\delta ^c,\theta ,r) > \frac{1}{4}\) from Lemma 5.9 is satisfied. Furthermore, using \(\tau =\frac{1}{\bar{c}(3+4\kappa )}\), \(\bar{c} \ge 2\) and (45) we have \(\delta ^c \le \omega (\tau )< \frac{1}{18} < \frac{1}{2}\). Following a predictor step and a \(\mu \)-update, by Lemma 5.9 we have

$$\begin{aligned} \delta ^p \le \frac{\sqrt{\bar{u}(\delta ^c,\theta ,r)}\left( (3+4\kappa )\delta ^2+(1-2\delta ^c)^2 - \bar{u}(\delta ^c,\theta ,r) \right) }{2 \bar{u}(\delta ^c,\theta ,r) + \sqrt{\bar{u}(\delta ^c,\theta ,r)} -1}, \end{aligned}$$
(47)

where \(\delta :=\delta (x,s,\mu )\). It can be verified that \(\bar{u}(\delta ^c,\theta ,r)\) is decreasing with respect to \(\delta ^c\). In this way, we obtain \(\bar{u}(\delta ^c,\theta ,r) \ge \bar{u}(\omega (\tau ),\theta ,r)\). We have seen in Lemma 5.9 that the function \(\tilde{f}\) is decreasing with respect to t, therefore

$$\begin{aligned} \tilde{f}(\sqrt{ \bar{u}(\delta ^c,\theta ,r)}) \le \tilde{f}(\sqrt{\bar{u}(\omega (\tau ),\theta ,r)}). \end{aligned}$$
(48)

From (46) we have \(\sqrt{\bar{u}(\omega (\tau ),\theta ,r)} > \frac{\sqrt{2}}{2}\), hence using (48)

$$\begin{aligned} \tilde{f}(\sqrt{ \bar{u}(\delta ^c,\theta ,r)}) < \tilde{f}\left( \frac{\sqrt{2}}{2} \right) = 1. \end{aligned}$$
(49)

Using that \(\delta \le \tau \), \(\delta ^{c} \le \omega (\tau )\) and

$$\begin{aligned} (1-2\delta ^c)^2 - \bar{u}(\delta ^c,\theta ,r) = \frac{r(1+2\kappa )\; \theta ^2 \;(1+2\delta ^c)^2}{2(1-\theta )}, \end{aligned}$$
(50)

which is increasing with respect to \(\delta ^c\), we obtain

$$\begin{aligned} &\frac{\sqrt{\bar{u}(\delta ^c,\theta ,r)}\left( (3+4\kappa )\delta ^2+(1-2\delta ^c)^2 - \bar{u}(\delta ^c,\theta ,r) \right) }{2 \bar{u}(\delta ^c,\theta ,r) + \sqrt{\bar{u}(\delta ^c,\theta ,r)} -1} \\ &\quad \le \frac{\sqrt{\bar{u}(\omega (\tau ),\theta ,r)}\left( (3+4\kappa )\tau ^2+(1-2\omega (\tau ))^2 - \bar{u}(\omega (\tau ),\theta ,r) \right) }{2 \bar{u}(\omega (\tau ),\theta ,r) + \sqrt{\bar{u}(\omega (\tau ),\theta ,r)} -1}. \end{aligned}$$

From \(\tau = \frac{1}{\bar{c}(3+4\kappa )}\) and \(\delta \le \tau \) we have

$$\begin{aligned} (3+4\kappa ) \delta ^2 \le (3+4\kappa )\tau ^2 = \frac{1}{\bar{c}^2(3+4\kappa )} = \frac{1}{\bar{c}} \tau . \end{aligned}$$
(51)

Moreover, we use \(\theta \le \frac{2}{\bar{g}(3+4\kappa ) \sqrt{r}}\) and \(\kappa \ge 0\), hence we obtain

$$\begin{aligned} \frac{1}{1-\theta } \le \frac{3 \bar{g}}{3 \bar{g}-2}. \end{aligned}$$
(52)

We also consider

$$\begin{aligned} \theta \le \frac{2}{\bar{g}(3+4\kappa ) \sqrt{r}} \le \frac{1}{\bar{g} (1+2 \kappa )\sqrt{r}}. \end{aligned}$$
(53)

Using (52) and (53) we get

$$\begin{aligned} \frac{r(1+2\kappa )\, \theta ^2}{2(1-\theta )} &\le \frac{r(1+2\kappa )}{2} \cdot \frac{3 \bar{g}}{3 \bar{g}-2} \cdot \frac{2}{\bar{g}(3+4\kappa ) \sqrt{r}} \cdot \frac{1}{\bar{g} (1+2 \kappa )\sqrt{r}} \nonumber \\ &= \frac{3}{\bar{g}(3\bar{g}-2)} \cdot \frac{1}{3+4\kappa } = \frac{3\bar{c}}{\bar{g}(3\bar{g}-2)} \tau . \end{aligned}$$
(54)

Using (45) and \(3+4\kappa \ge 3\) we obtain

$$\begin{aligned} \omega (\tau ) < \frac{2}{3\bar{c}^2 (3+4\kappa )} \le \frac{2}{9 \bar{c}^2}. \end{aligned}$$
(55)

Furthermore, from (50), (54) and (55) we have

$$\begin{aligned} (1-2\omega (\tau ))^2 - \bar{u}(\omega (\tau ),\theta ,r) &= \frac{r(1+2\kappa )\, \theta ^2 \,(1+2\omega (\tau ))^2}{2(1-\theta )}\nonumber \\ &\le \frac{3\bar{c}}{\bar{g}(3\bar{g}-2)} \left( 1+\frac{4}{9\bar{c}^2}\right) ^2 \tau . \end{aligned}$$
(56)

Conditions \(\bar{c} \ge 2\), \(\bar{g} \ge 2\) and \(\bar{c} \le \frac{1}{2}\bar{g}\) yield

$$\begin{aligned} \frac{3\bar{c}}{\bar{g}(3 \bar{g} -2)} \left( 1+\frac{4}{9\bar{c}^2}\right) ^2 &= 3 \cdot \frac{\bar{c}}{\bar{g}} \cdot \frac{1}{3\bar{g}-2}\left( 1+\frac{4}{9\bar{c}^2}\right) ^2 \nonumber \\ &\le 3 \cdot \frac{1}{2} \cdot \frac{1}{4} \cdot \frac{100}{81} = \frac{25}{54} < \frac{1}{2}. \end{aligned}$$
(57)

From (51), (56) and (57), using \(\bar{c}\ge 2\) we get

$$\begin{aligned} (3+4\kappa )\tau ^2+(1-2\omega (\tau ))^2 - \bar{u}(\omega (\tau ),\theta ,r) &\le \left( \frac{1}{\bar{c}} + \frac{3\bar{c}}{\bar{g}(3\bar{g}-2)} \left( 1+\frac{4}{9\bar{c}^2}\right) ^2\right) \tau \nonumber \\ &< \left( \frac{1}{2} + \frac{1}{2}\right) \tau = \tau . \end{aligned}$$
(58)

Using (47), (49) and (58) we have

$$\begin{aligned} \delta ^p < \tau , \end{aligned}$$

hence the PC IPA is well defined. \(\square \)
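The chain of estimates in this proof can be traced numerically for one admissible parameter setting. The sketch below (helper names and the sample values \(\kappa =0\), \(r=25\), \(\bar{c}=2\), \(\bar{g}=4\) are ours) follows (44)–(46) and evaluates the bound (47) at the worst case \(\delta =\tau \), \(\delta ^c=\omega (\tau )\):

```python
# End-to-end numerical sanity check of the proof of Lemma 5.11 for one
# admissible setting (kappa = 0, r = 25, c_bar = 2, g_bar = 4).
import math

kappa, r, c_bar, g_bar = 0.0, 25, 2.0, 4.0
tau = 1 / (c_bar * (3 + 4 * kappa))
theta = 2 / (g_bar * (3 + 4 * kappa) * math.sqrt(r))

omega = (3 - math.sqrt(3)) / 2 * (3 + 4 * kappa) * tau ** 2   # (44)
assert omega < tau / 3                                        # (45)

def u_bar(dc):
    return (1 - 2 * dc) ** 2 \
        - r * (1 + 2 * kappa) * theta ** 2 * (1 + 2 * dc) ** 2 / (2 * (1 - theta))

u = u_bar(omega)
assert u > 0.5                                                # chain (46)

# Upper bound (47) at the worst case delta = tau, delta_c = omega(tau):
delta_p_bound = math.sqrt(u) * ((3 + 4 * kappa) * tau ** 2
                                + (1 - 2 * omega) ** 2 - u) \
    / (2 * u + math.sqrt(u) - 1)
assert delta_p_bound < tau  # the iterate stays in the tau-neighbourhood
print("Lemma 5.11 chain verified for this setting")
```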

The following lemma gives a sufficient condition for satisfying Condition ii. from Lemma 5.11.

Lemma 5.12

Let \(\tau =\frac{1}{\bar{c} (3+4\kappa )}\), where \(\bar{c}\ge 2\) and \(0 < \theta \le \frac{2}{\bar{g} (3+4\kappa ) \sqrt{r}}\), where \(\bar{g}\ge 2\). Consider \(\varPsi \) given in (43). If

$$\begin{aligned} \frac{1}{\bar{g}^2} < \varPsi \left( \frac{1}{3\bar{c}}\right) , \end{aligned}$$
(59)

then condition ii. of Lemma 5.11 is satisfied.

Proof

Using \(0 < \theta \le \frac{2}{\bar{g} (3+4\kappa ) \sqrt{r}}\) and \(\bar{g} \ge 2\) we have \(\theta \le \frac{1}{2}\). From this we obtain

$$\begin{aligned} \frac{1}{2(1-\theta )} \le 1. \end{aligned}$$
(60)

Furthermore, using (53) and \(\kappa \ge 0\) we get

$$\begin{aligned} r(1+2\kappa ) \theta ^2 \le r (1+2\kappa ) \frac{1}{\bar{g}^2(1+2\kappa )^2 r} = \frac{1}{\bar{g}^2 (1+2\kappa )} \le \frac{1}{\bar{g}^2}. \end{aligned}$$
(61)

Besides this, from \(\tau = \frac{1}{\bar{c} (3+4\kappa )}\) and \(\kappa \ge 0\) we obtain

$$\begin{aligned} \tau \le \frac{1}{3\bar{c}}. \end{aligned}$$
(62)

It should be mentioned that the function \(\varPsi (\tau )\) is strictly decreasing with respect to \(\tau \), hence using (62) we obtain

$$\begin{aligned} \varPsi (\tau ) \ge \varPsi \left( \frac{1}{3\bar{c}} \right) . \end{aligned}$$
(63)

In this way, using (59), (60), (61) and (63) we obtain

$$\begin{aligned} \frac{r(1+2\kappa )\theta ^2}{2(1-\theta )} \le \frac{1}{\bar{g}^2} < \varPsi \left( \frac{1}{3\bar{c}}\right) \le \varPsi (\tau ), \end{aligned}$$

which yields the result. \(\square \)

Lemma 5.13

Let \(\tau =\frac{1}{\bar{c} (3+4\kappa )}\), where \(\bar{c}\ge 2\) and \(0 < \theta \le \frac{2}{\bar{g} (3+4\kappa ) \sqrt{r}}\), where \(\bar{g}\ge 2\). If Condition i. from Lemma 5.11 is satisfied, then Condition ii. from Lemma 5.11 also holds.

Proof

Consider the following function

$$\begin{aligned} m(\bar{c}) = \frac{1}{2\bar{c}\; \sqrt{\frac{\left( 1-\frac{2}{9\bar{c}}\right) ^2-\frac{1}{2}}{\left( 1+\frac{2}{9\bar{c}}\right) ^2}}}, \end{aligned}$$

which is decreasing with respect to \(\bar{c}\). Thus, for \(\bar{c} \ge 2\) we have

$$\begin{aligned} m(\bar{c}) \le m(2) < 1. \end{aligned}$$
(64)

Using (64) and Condition i. of Lemma 5.11 we obtain that

$$\begin{aligned} \bar{g} \ge 2 \bar{c} > \frac{1}{\sqrt{\frac{\left( 1-\frac{2}{9\bar{c}}\right) ^2-\frac{1}{2}}{\left( 1+\frac{2}{9\bar{c}}\right) ^2}}} = \frac{1}{\sqrt{\varPsi \left( \frac{1}{3\bar{c}}\right) }}. \end{aligned}$$

Hence, if Condition i. of Lemma 5.11 holds, then (59) is satisfied. Using Lemma 5.12, we obtain that Condition ii. from Lemma 5.11 also holds. \(\square \)
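The function \(m(\bar{c})\) from the proof can be checked numerically; the sketch below (our code) verifies on a grid that \(m\) is decreasing for \(\bar{c} \ge 2\), that \(m(2)<1\) as in (64), and that the choice \(\bar{g}=2\bar{c}\) already satisfies condition (59) of Lemma 5.12:

```python
# Sketch for Lemma 5.13: m(c_bar) from the proof, a grid check of its
# monotonicity for c_bar >= 2, and the resulting condition (59).
import math

def psi(tau):
    return ((1 - 2 * tau / 3) ** 2 - 0.5) / (1 + 2 * tau / 3) ** 2

def m(c):
    return 1 / (2 * c * math.sqrt(psi(1 / (3 * c))))

cs = [2 + 0.01 * k for k in range(1000)]
vals = [m(c) for c in cs]
assert all(x > y for x, y in zip(vals, vals[1:]))  # decreasing on the grid
assert m(2) < 1                                    # (64)

for c in (2, 3, 5, 10):
    g = 2 * c                                      # condition i with equality
    assert 1 / g ** 2 < psi(1 / (3 * c))           # (59) holds
print("m is decreasing, m(2) < 1, and (59) holds for g_bar = 2*c_bar")
```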

Corollary 5.1

Let \(\tau =\frac{1}{\bar{c} (3+4\kappa )}\), where \(\bar{c}\ge 2\) and \(0 < \theta \le \frac{2}{\bar{g} (3+4\kappa ) \sqrt{r}}\). If \(\bar{g} \ge 2\bar{c}\), then the PC IPA proposed in Algorithm 4.1 is well defined.

Lemma 5.14

Let \(\tau =\frac{1}{\bar{c} (3+4\kappa )}\), where \(\bar{c}\ge 2\) and \(0 < \theta \le \frac{2}{\bar{g} (3+4\kappa ) \sqrt{r}}\), where \(\bar{g}\ge 2\bar{c}\). Moreover, let \(x^0\) and \(s^0\) be strictly feasible, \(\mu ^0 = \frac{\langle x^0, s^0 \rangle }{r}\) and \(\delta (x^0,s^0,\mu ^0 )\le \tau \). Let \(x^k\) and \(s^k\) be the iterates obtained after k iterations. Then, \(\langle x^k, s^k \rangle \le \epsilon \) for

$$\begin{aligned} k \ge 1+\left\lceil {\frac{1}{\theta } \log \frac{3 \langle x^0, s^0 \rangle }{2\epsilon }}\right\rceil . \end{aligned}$$

Proof

Using Lemma 5.10 we have

$$\begin{aligned} \langle x^k, s^k \rangle < \frac{3r\mu ^k}{2(1-\theta )} = \frac{3r(1-\theta )^{k-1}\mu ^0}{2} = \frac{3(1-\theta )^{k-1} \langle x^0, s^0 \rangle }{2}. \end{aligned}$$

Hence, if

$$\begin{aligned}\frac{3(1-\theta )^{k-1}\langle x^0, s^0 \rangle }{2}\le \epsilon , \end{aligned}$$

then the inequality \(\langle x^k, s^k \rangle \le \epsilon \) holds. We take logarithms, thus

$$\begin{aligned} (k-1) \log (1-\theta ) + \log \frac{3 \langle x^0, s^0 \rangle }{2} \le \log \epsilon . \end{aligned}$$
(65)

From \(\log (1-\theta ) \le -\theta \), which follows from \(\log (1+t) \le t\) for \(t > -1\), we deduce that (65) is satisfied if

$$\begin{aligned} -\theta (k-1)+\log \frac{3 \langle x^0, s^0 \rangle }{2} \le \log \epsilon , \end{aligned}$$

hence the lemma is proved. \(\square \)
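The stopping criterion of Lemma 5.14 can be illustrated numerically. The sketch below (helper names and the sample data \(\theta =\frac{1}{30}\), \(\langle x^0,s^0\rangle =25\), \(\epsilon =10^{-6}\) are ours) compares the closed-form bound with the smallest \(k\) actually satisfying \(\frac{3(1-\theta )^{k-1}\langle x^0,s^0\rangle }{2}\le \epsilon \):

```python
# Sketch of Lemma 5.14: closed-form iteration bound versus the smallest k
# for which 3*(1-theta)**(k-1)*gap0/2 <= eps.
import math

def iteration_bound(theta, gap0, eps):
    """k >= 1 + ceil(log(3*gap0/(2*eps)) / theta), as in the lemma."""
    return 1 + math.ceil(math.log(3 * gap0 / (2 * eps)) / theta)

def first_k_meeting_gap(theta, gap0, eps):
    k = 1
    while 3 * (1 - theta) ** (k - 1) * gap0 / 2 > eps:
        k += 1
    return k

theta, gap0, eps = 1 / 30, 25.0, 1e-6   # sample data (ours)
k_bound = iteration_bound(theta, gap0, eps)
assert first_k_meeting_gap(theta, gap0, eps) <= k_bound
print(k_bound)
```

The exact stopping iteration is somewhat smaller than the bound, as expected, since \(\log (1-\theta ) \le -\theta \) is not tight.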

Theorem 5.1

Let \(\tau =\frac{1}{\bar{c} (3+4\kappa )}\), where \(\bar{c}\ge 2\) and \(0 < \theta \le \frac{2}{\bar{g} (3+4\kappa ) \sqrt{r}}\), where \(\bar{g}\ge 2 \bar{c}\). Then, the PC IPA proposed in Algorithm 4.1 is well defined and it performs at most

$$\begin{aligned} \mathcal {O}\left( (3+4 \kappa ) \sqrt{r} \log \frac{3r \mu ^0}{2\epsilon } \right) \end{aligned}$$

iterations. The output is a pair \((x, s)\) satisfying \(\langle x, s \rangle \le \epsilon \).

Proof

The result follows from Corollary 5.1 and Lemma 5.14. \(\square \)

Corollary 5.2

Consider \(0 < \tau \le \frac{1}{6+8\kappa }\) and \(0<\theta \le \frac{1}{\sqrt{r}} \tau \). Then, the PC IPA proposed in Algorithm 4.1 is well defined and it performs at most

$$\begin{aligned} \mathcal {O}\left( (3+4 \kappa ) \sqrt{r} \log \frac{3r \mu ^0}{2\epsilon } \right) \end{aligned}$$

iterations.

Proof

If \(\tau \le \frac{1}{6 + 8\kappa }\), then we can find \(\bar{c} \ge 2\) such that

$$\begin{aligned} \tau = \frac{1}{\bar{c}(3+4\kappa )}. \end{aligned}$$
(66)

Using \(\theta \le \frac{1}{\sqrt{r}} \tau \) and \(\tau \le \frac{1}{6+8\kappa }\) we have

$$\begin{aligned} \theta \le \frac{1}{\sqrt{r}} \tau \le \frac{1}{(6+8\kappa )\sqrt{r}}. \end{aligned}$$

Hence, we can find \(\bar{g} \ge 4\) such that

$$\begin{aligned} \theta = \frac{2}{\bar{g}(3+4\kappa ) \sqrt{r}}. \end{aligned}$$
(67)

Moreover, from (66), (67) and \(\theta \le \frac{1}{\sqrt{r}} \tau \) we have

$$\begin{aligned} \theta = \frac{2}{\bar{g}(3+4\kappa ) \sqrt{r}} = \frac{2 \bar{c} }{\bar{g}\sqrt{r}} \tau \le \frac{1}{\sqrt{r}} \tau , \end{aligned}$$

hence \(\bar{g} \ge 2 \bar{c}\) holds. All conditions from Lemma 5.11 are satisfied, hence from Corollary 5.1 and Lemma 5.14 we obtain the desired result. \(\square \)
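The parameter mapping in this proof can be checked numerically: given admissible \((\tau ,\theta )\), recover \(\bar{c}\) and \(\bar{g}\) from (66) and (67) and confirm the hypotheses of Corollary 5.1. The sample values \(\kappa =1\), \(r=16\) in the sketch below are ours:

```python
# Sketch for Corollary 5.2: recover c_bar and g_bar from (tau, theta) via
# (66) and (67), then confirm c_bar >= 2 and g_bar >= 2*c_bar.
import math

kappa, r = 1.0, 16                     # sample data (ours)
tau = 1 / (6 + 8 * kappa)              # largest admissible tau
theta = tau / math.sqrt(r)             # largest admissible theta

c_bar = 1 / (tau * (3 + 4 * kappa))                   # from (66)
g_bar = 2 / (theta * (3 + 4 * kappa) * math.sqrt(r))  # from (67)

assert c_bar >= 2 and g_bar >= 4 and g_bar >= 2 * c_bar
print(c_bar, g_bar)
```

At the extreme choices \(\tau =\frac{1}{6+8\kappa }\) and \(\theta =\frac{\tau }{\sqrt{r}}\), both hypotheses hold with equality (\(\bar{c}=2\), \(\bar{g}=4\)).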

In the following section, we present concluding remarks.

6 Conclusions

In this paper, we extended the PC IPA proposed in [13] to \(P_*(\kappa )\)-SCHLCPs. For the determination of the search directions, we used the function \(\varphi (t)=t-\sqrt{t}\) in the AET technique proposed by Darvay [9]. We proved that the introduced PC IPA has an iteration bound that matches the best known one for interior-point algorithms of this type in the literature. We also provided a condition on the proximity and update parameters under which the PC IPA is well defined. As further research, it would be interesting to give a more general framework that could deal with problems where we do not assume the \(P_*(\kappa )\)-property of the pair \((Q,R)\). This approach could lead to the analysis of problems similar to the general LCPs studied in [23, 24].