Abstract
We propose a new predictor–corrector interior-point algorithm for solving Cartesian symmetric cone horizontal linear complementarity problems, which is not based on a usual barrier function. We generalize the predictor–corrector algorithm introduced in Darvay et al. (SIAM J Optim 30:2628–2658, 2020) to horizontal linear complementarity problems on a Cartesian product of symmetric cones. We apply the algebraically equivalent transformation technique proposed by Darvay (Adv Model Optim 5:51–92, 2003), and we use the difference of the identity and the square root function to determine the new search directions. In each iteration, the proposed algorithm performs one predictor and one corrector step. We prove that the predictor–corrector interior-point algorithm has the same complexity bound as the best known interior-point methods for solving these types of problems. Furthermore, we provide a condition related to the proximity and update parameters for which the introduced predictor–corrector algorithm is well defined.
1 Introduction
Interior-point algorithms (IPAs) have been efficient tools for solving optimization problems since Karmarkar [26] published his IPA. The most important results on IPAs for solving linear programming (LP) problems are summarized in the monographs written by Roos, Terlaky and Vial [43], Wright [49] and Ye [50]. IPAs for solving linear complementarity problems (LCPs) have also been introduced, see [12, 13, 21,22,23,24, 27, 28]. In general, LCPs belong to the class of NP-complete problems [8]. However, Kojima et al. [28] proved that if the problem’s matrix possesses the \(P_*(\kappa )\)-property, then IPAs for LCPs have iteration complexity that is polynomial in the size of the problem and in a special parameter called the handicap of the matrix. IPAs have been extended to more general problems, such as symmetric cone optimization (SCO) problems. Faraut and Korányi [18] summarized the most important results related to the theory of Jordan algebras and symmetric cones. Güler [20] noticed that the family of self-scaled cones, introduced by Nesterov and Todd [34, 35], is identical to the set of self-dual homogeneous cones, i.e. the symmetric cones. IPAs have also been introduced for Cartesian symmetric cone linear complementarity problems [30, 31].
The Cartesian symmetric cone horizontal linear complementarity problem (SCHLCP) has recently been introduced by Asadi et al. [4]. Note that LP, LCP, SCO, second-order cone programming, semidefinite optimization and convex quadratic optimization problems can all be solved as SCHLCPs possessing the \(P_*(\kappa )\)-property. Several IPAs have been proposed for solving Cartesian SCHLCPs. Mohammadi et al. [33] presented an infeasible IPA taking full Nesterov–Todd steps for solving this kind of problem. Later on, Asadi et al. [3] proposed an IPA for solving the Cartesian SCHLCP based on the search directions introduced in [9]. In 2020, Asadi et al. [2] introduced a new IPA for solving the Cartesian SCHLCP based on a positive-asymptotic barrier function. Moreover, in [5] a predictor–corrector (PC) IPA for \(P_*(\kappa )\)-SCHLCP was presented. Asadi et al. [6] also defined a feasible IPA for \(P_*(\kappa )\)-SCHLCP using a wide neighbourhood of the central path.
There are several approaches for determining search directions that lead to different IPAs. Peng et al. [37] used self-regular barriers to introduce large-update IPAs for LP. Lesaja and Roos [29] provided a unified analysis of kernel-based IPAs for \(P_*(\kappa )\)-LCPs. Vieira [47] studied IPAs for SCO problems that determine the search directions using kernel functions. In 1997, Tunçel and Todd [46] presented for the first time a reparametrization of the central path system. Later on, Karimi et al. [25] analysed entropy-based search directions for LP in a wide neighbourhood of the central path. In 2003, Darvay presented a new technique for finding search directions for LP problems [9], namely the algebraically equivalent transformation (AET) of the system defining the central path. The most widely used function in the AET technique is the identity map. Darvay [9] applied the function \(\varphi (t)=\sqrt{t}\) in the AET method in order to define an IPA for LP. Kheirfam and Haghighi [27] proposed an IPA for \(P_{*} (\kappa )\)-LCPs based on a search direction generated by considering the function \(\varphi (t)=\frac{\sqrt{t}}{2(1+\sqrt{t})}\) in the AET. In 2016, Darvay et al. [14] used the function \(\varphi (t)=t-\sqrt{t}\) for the first time in the complexity analysis of an IPA. Later on, this IPA was generalized to SCO [15]. It should be mentioned that if we use the function \(\varphi (t)=t-\sqrt{t}\) in the AET approach, then the corresponding kernel function is a positive-asymptotic kernel function. Hence, this search direction cannot be expressed by using the usual kernel-function approach. An infeasible version of the proposed IPA was also introduced in [42]. Darvay et al. [12] considered the function \(\varphi (t)=t-\sqrt{t}\) in the AET method in order to define a primal–dual IPA for solving \(P_*(\kappa )\)-LCPs. In [40], different IPAs for LP, SCO and sufficient LCPs using the AET technique have been presented.
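In the scalar case, the AET with \(\varphi (t)=t-\sqrt{t}\) replaces the centering condition \(x\circ s=\mu e\) by \(\varphi (x\circ s/\mu )=0\), using that \(\varphi (1)=0\) and \(\varphi '(t)=1-\frac{1}{2\sqrt{t}}>0\) exactly for \(t>\frac{1}{4}\). The following minimal numerical check (illustrative Python sketch, not from the paper) verifies these two properties:

```python
import math

def phi(t):
    """AET function phi(t) = t - sqrt(t); in this setting it is used on (1/4, +inf)."""
    return t - math.sqrt(t)

def dphi(t):
    """Derivative phi'(t) = 1 - 1/(2 sqrt(t))."""
    return 1.0 - 1.0 / (2.0 * math.sqrt(t))

# phi vanishes at the central-path value t = 1, so phi(x o s / mu) = 0
# recovers x o s = mu e
assert abs(phi(1.0)) < 1e-15

# phi is strictly increasing only for t > 1/4 ...
for t in (0.26, 0.5, 1.0, 4.0, 100.0):
    assert dphi(t) > 0.0
# ... and decreasing below, which is why a lower bound on the smallest
# eigenvalue of x o s / mu is required when this phi is used
assert dphi(0.2) < 0.0
```

This is also why the eigenvalue condition \(\lambda _{\min }\left( \frac{x\circ s}{\mu }\right) >\xi ^2\) appears later with \(\xi =\frac{1}{2}\) for this choice of \(\varphi \).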
In this paper, we generalize the PC IPA proposed in [13] to Cartesian \(P_*(\kappa )\)-SCHLCPs. We also provide a general framework for determining search directions for PC IPAs for Cartesian \(P_*(\kappa )\)-SCHLCPs, which extends the approach given in [41]. We present the analysis of the introduced PC IPA and give some new technical lemmas that are necessary in the analysis. Furthermore, we prove that the PC IPA has the same complexity bound as the best known interior-point algorithms for solving these types of problems.
In general, the analysis of IPAs is carried out only for fixed values of the proximity and update parameters. There are only a few results where the well-definedness of the algorithms is proven under given conditions on these parameters. In this paper, we give a condition related to the proximity and update parameters for which the introduced PC IPA is well defined. In this way, a whole set of parameters is determined for which the polynomial complexity of the algorithm is guaranteed. Note that in [43] the authors gave conditions on the update parameters for which the proposed PC IPA for LP is well defined. Moreover, Darvay [10, 11] considered PC IPAs using the function \(\varphi (t)=\sqrt{t}\) in the AET technique and gave conditions on the proximity and update parameters for which the PC IPAs are well defined. It should be mentioned that this paper provides the first IPA using the function \(\varphi (t)=t-\sqrt{t}\) in the AET technique that is well defined for a whole set of proximity and update parameters instead of a single given value.
The paper is organized as follows. In the next section, we present the Cartesian \(P_*(\kappa )\)-SCHLCP. In Sect. 3, we present the generalization of the AET technique for determining search directions for the Cartesian \(P_*(\kappa )\)-SCHLCP. Subsection 3.1 gives a general framework for defining search directions for PC IPAs for the Cartesian \(P_*(\kappa )\)-SCHLCP. In Sect. 4, the new PC IPA for the Cartesian \(P_*(\kappa )\)-SCHLCP is introduced; it is based on a new search direction obtained by using the function \(\varphi (t)=t-\sqrt{t}\) in the AET technique. Section 5 contains the complexity analysis of the proposed PC IPA. In Sect. 6, some concluding remarks are enumerated. In the “Appendix”, we present some well-known results related to the theory of Euclidean Jordan algebras and symmetric cones.
2 Cartesian \(P_*(\kappa )\)-Symmetric Cone Horizontal Linear Complementarity Problem
For a more detailed description of the theory of Euclidean Jordan algebras and symmetric cones, see “Appendix”. Let \(\mathcal {V}:=\mathcal {V}_1\times \mathcal {V}_2\times \cdots \times \mathcal {V}_m\) be the Cartesian product space with its cone of squares \(\mathcal {K}:=\mathcal {K}_1\times \mathcal {K}_2\times \cdots \times \mathcal {K}_m\), where each space \(\mathcal {V}_{i}\) is a Euclidean Jordan algebra and each \(\mathcal {K}_i\) is the corresponding cone of squares of \(\mathcal {V}_{i}\). For any \(x=\left(x^{(1)},\,x^{(2)},\,\ldots ,\,x^{(m)}\right)^T\in \mathcal {V}\) and \(s=\left(s^{(1)},\,s^{(2)},\,\ldots ,\,s^{(m)}\right)^T\in \mathcal {V}\) let

\(x\circ s:=\left( x^{(1)}\circ s^{(1)},\,x^{(2)}\circ s^{(2)},\,\ldots ,\,x^{(m)}\circ s^{(m)}\right) ^T \quad \text {and}\quad \langle x,\,s\rangle :=\sum _{i=1}^{m}\left\langle x^{(i)},\,s^{(i)}\right\rangle .\)
Besides this, for any \(z=\left(z^{(1)},\,z^{(2)},\,\ldots ,\,z^{(m)}\right)^T\in \mathcal {V}\), where \(z^{(i)}\in \mathcal {V}_i\), the trace, the determinant and the minimal and maximal eigenvalues of the element z are defined as follows:

\(\text {tr}(z):=\sum _{i=1}^{m}\text {tr}\left( z^{(i)}\right) , \quad \det (z):=\prod _{i=1}^{m}\det \left( z^{(i)}\right) , \quad \lambda _{\min }(z):=\min _{1\le i\le m}\lambda _{\min }\left( z^{(i)}\right) , \quad \lambda _{\max }(z):=\max _{1\le i\le m}\lambda _{\max }\left( z^{(i)}\right) .\)
The Frobenius norm of x is defined as \( \left\Vert x \right\Vert _F=\left(\sum _{i=1}^{m}\left\Vert x^{(i)} \right\Vert ^2_F\right)^{1/2}\). Furthermore, consider the Lyapunov transformation and the quadratic representation of x:

\(L(x)y:=x\circ y \quad \text {and}\quad P(x):=2L(x)^{2}-L\left( x^{2}\right) .\)
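In the simplest special case, where every block \(\mathcal {V}_i\) is \(\mathbb {R}^{n_i}\) with the componentwise Jordan product (so the eigenvalues of a block are just its components), these objects can be computed directly. The sketch below is illustrative only; it checks that in this commutative toy model the quadratic representation \(P(x)=2L(x)^2-L(x^2)\) reduces to multiplication by \(x^2\), and computes a blockwise Frobenius norm:

```python
import numpy as np

# Toy model: each block V_i = R^{n_i} with the componentwise Jordan product,
# so L(x)y = x * y and the eigenvalues of x are its components.
def L(x):
    return np.diag(x)                    # Lyapunov transformation as a matrix

def P(x):
    return 2 * L(x) @ L(x) - L(x * x)    # quadratic representation

def frobenius_norm(blocks):
    """Blockwise Frobenius norm of an element of the Cartesian product space."""
    return np.sqrt(sum(np.sum(b ** 2) for b in blocks))

x = np.array([2.0, 0.5, 3.0])
y = np.array([1.0, -1.0, 4.0])
# In this commutative case P(x)y reduces to x^2 * y
assert np.allclose(P(x) @ y, x * x * y)

blocks = [np.array([3.0, 4.0]), np.array([12.0])]
assert abs(frobenius_norm(blocks) - 13.0) < 1e-12   # sqrt(9 + 16 + 144) = 13
```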
In the Cartesian SCHLCP, we should find a vector pair \((x,s) \in \mathcal {V}\times \mathcal {V}\) such that
\(Qx+Rs=q, \quad x\circ s=0, \quad x\in \mathcal {K},\ s\in \mathcal {K},\)
where \(Q,R : \mathcal {V}\rightarrow \mathcal {V}\) are linear operators, \(q \in \mathcal {V}\) and \(\mathcal {K}\) is the symmetric cone of squares of the Cartesian product space \(\mathcal {V}\). Consider the constant \(\kappa \ge 0\). The pair \(\left(Q,\,R\right)\) has the \(P_*(\kappa )\)-property if for all \((x,s)\in \mathcal {V}\times \mathcal {V}\) with \(Qx+Rs=0\) we have

\((1+4\kappa )\sum _{i\in I_+}\left\langle x^{(i)},\,s^{(i)}\right\rangle +\sum _{i\in I_-}\left\langle x^{(i)},\,s^{(i)}\right\rangle \ge 0,\)
where \(I_+=\{i~:~\langle x^{(i)},\,s^{(i)}\rangle >0\}\) and \(I_-=\{i~:~\langle x^{(i)},\,s^{(i)}\rangle <0\}\).
We suppose that the interior-point condition (IPC) holds, which means that there exists \(( x^0, s^0)\) so that
\(Qx^0+Rs^0=q, \quad x^0\in \text {int}\,\mathcal {K},\ s^0\in \text {int}\,\mathcal {K}.\)
For a detailed study on the initialization of the problem using self-dual embedding for conic convex programming and particular cases, see [16, 17, 32, 38, 39].
The system defining the central path is given as

\(Qx+Rs=q, \quad x\circ s=\mu e, \quad x\in \text {int}\,\mathcal {K},\ s\in \text {int}\,\mathcal {K},\)
where \(\mu > 0\). The subclass of the Monteiro–Zhang family of search directions is characterized by
Lemma 2.1
(Lemma 28 in [44]) Let \(u \in \text {int}\,\mathcal {K}.\) Then,
Considering \(u \in C(x,s)\), \(\widetilde{Q} = Q P(u)^{-1} \), \(\widetilde{R}=RP(u)\) and using Lemma 2.1, we can rewrite system (1) in the following way:
If the IPC holds, then system (2) has a unique solution for each \(\mu >0\), see [4] and [31].
3 Search Directions for IPAs for Cartesian \(P_{*} (\kappa )\)-Symmetric Cone Horizontal Linear Complementarity Problems
We present the generalization of the AET technique to \(P_*(\kappa )\)-SCHLCPs (cf. [2, 41]). Let us consider the vector-valued function \(\varphi \) induced by the real-valued univariate function \(\varphi : (\xi ^2,+\infty ) \rightarrow \mathbb {R}\), where \(0 \le \xi <1\) and \(\varphi '(t)>0\) for all \(t>\xi ^2\). We assume that \(\lambda _{\min }\left( \frac{x \circ s}{\mu }\right) >\xi ^2\). In this way, system (2) can be written as follows:
We define the search directions by using the technique considered in [41, 48]. For strictly feasible \(x \in \text {int}\, \mathcal {K}\) and \(s \in \text {int}\, \mathcal {K}\), we want to find the search directions \((\varDelta x, \varDelta s)\) so that
where
In [40] an overview of different functions \(\varphi \) used in the literature in the AET technique is presented.
Throughout the paper, we use the NT-scaling scheme. Let \(u=w^{-\frac{1}{2}}\), where w is called the NT-scaling point of x and s and is defined as follows:

\(w:=P(x)^{\frac{1}{2}}\left( P(x)^{\frac{1}{2}}s\right) ^{-\frac{1}{2}}.\)
Let us introduce the following notations:

\(v:=\frac{P(w)^{-\frac{1}{2}}x}{\sqrt{\mu }}\left[ =\frac{P(w)^{\frac{1}{2}}s}{\sqrt{\mu }}\right] , \quad d_x:=\frac{P(w)^{-\frac{1}{2}}\varDelta x}{\sqrt{\mu }}, \quad d_s:=\frac{P(w)^{\frac{1}{2}}\varDelta s}{\sqrt{\mu }}.\)
From (5) we obtain the scaled system:
where
In order to be able to define IPAs, we should define a proximity measure to the central path. For this, let
Furthermore, we define the \(\tau \)-neighbourhood of a fixed point on the central path:

\(\mathcal {N}(\tau ,\mu ):=\left\{ (x,s)\in \text {int}\,\mathcal {K}\times \text {int}\,\mathcal {K}\,:\,Qx+Rs=q,\ \delta (x,s,\mu )\le \tau \right\} ,\)
where \(\tau \) is a threshold parameter and \(\mu >0\) is fixed.
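In the componentwise special case used in the earlier sketch, the NT-scaling point is \(w=\sqrt{x/s}\) and the scaled point is \(v=\sqrt{x\circ s/\mu }\), so \(v=e\) exactly on the central path, where the proximity measure vanishes. The following illustrative Python sketch assumes this orthant toy model and the concrete proximity \(\delta =\left\Vert (v-v^2)\circ (2v-e)^{-1}\right\Vert _F\) used later with \(\varphi (t)=t-\sqrt{t}\) (cf. Lemma 5.7):

```python
import numpy as np

MU = 0.5

# Orthant model: NT-scaling point w = sqrt(x/s), scaled point
# v = P(w)^{-1/2} x / sqrt(mu) = sqrt(x * s / mu) componentwise.
def nt_scaled_v(x, s, mu):
    return np.sqrt(x * s / mu)

def delta(x, s, mu):
    """Proximity ||(v - v^2) o (2v - e)^{-1}||_F used with phi(t) = t - sqrt(t);
    it needs lambda_min(v) > 1/2."""
    v = nt_scaled_v(x, s, mu)
    return np.linalg.norm((v - v ** 2) / (2.0 * v - 1.0))

x = np.array([1.0, 2.0, 0.5])
s = MU / x                      # central path: x * s = mu * e, hence v = e
assert np.allclose(nt_scaled_v(x, s, MU), 1.0)
assert delta(x, s, MU) < 1e-12

s_off = s * np.array([1.3, 0.8, 1.1])   # perturbation off the central path
assert delta(x, s_off, MU) > 0.0

# sanity check of the scaling: P(w)^{-1/2} x = P(w)^{1/2} s, i.e. x / w = w * s
w = np.sqrt(x / s_off)
assert np.allclose(x / w, w * s_off)
```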
As it was mentioned in Sect. 1, the search directions can be determined by using another approach based on kernel functions, see [7, 37]. Achache [1] and Pan et al. [36] showed that one can associate the corresponding kernel functions to several functions \(\varphi \) used in the AET technique. In [40], the author also presented the relationship between the approach based on kernel functions and the AET technique. However, it is interesting that if we apply the function \(\varphi (t)=t-\sqrt{t}\) in the AET, we cannot obtain a corresponding kernel function in the usual sense, because it is not defined on a neighbourhood of the origin, see [15]. Darvay and Rigó [15] introduced the notion of the positive-asymptotic kernel function. In this sense, the kernel function associated to the function \(\varphi (t)=t-\sqrt{t}\) used in this paper is a positive-asymptotic kernel function:
For details, see [2, 15, 40]. In the following subsection, we generalize the method given in [41] for determining the scaled predictor and corrector systems of PC IPAs.
3.1 Determining Search Directions for Predictor–Corrector Interior-Point Algorithms
We generalize the method introduced by Darvay et al. [13] and presented in [41] to determine the search directions of PC IPAs. In general, PC IPAs perform one predictor and several corrector steps in each main iteration. The predictor step is a greedy one, which aims to approach the optimal solution of the problem as fast as possible. Hence, a certain amount of deviation from the central path is allowed after a predictor step. In the corrector step, the goal is to return to the \(\tau \)-neighbourhood of the central path.
First, we deal with the corrector step, which is a full-NT step. Hence, the scaled corrector system coincides with (6). We should decompose \(a_{\varphi }\) given in system (3) in order to obtain the predictor system as follows:
where f and g are vector-valued functions and \(f(x,s,0)=0\). We set \(\mu =0\) in this decomposition. In this way, we get
Using \(u=w^{-\frac{1}{2}}\), the scaled predictor system is the following:
Using this system, the predictor search directions can be easily obtained. In the following section, we introduce the new PC IPA for solving Cartesian \(P_{*} (\kappa )\)-SCHLCPs.
4 New Predictor–Corrector Interior-Point Algorithm
We deal with the case when \(\varphi (t)=t-\sqrt{t}\). Similar to [13], the decomposition of \(a_{\varphi }\) given in (7) will be
By setting \(\mu =0\) in (9) and from (5) we get
in our case. Furthermore,
In the corrector step, we take a full-NT step. Hence, the scaled corrector system coincides with system (6) with \(p_v\) given in (11):
We can calculate the corrector search directions \(\varDelta ^c x\) and \(\varDelta ^c s\) from
Let \(x^{c}=x+ \varDelta ^c x\) and \(s^{c}=s + \varDelta ^c s\) be the point after a corrector step. In the predictor step, we define the following notations:
where \(w^c\) is the NT-scaling point of \(x^c\) and \(s^c\).
Using (8) and (10), the scaled predictor system in this case is
We obtain the predictor search directions \(\varDelta ^p x\) and \(\varDelta ^p s\) from
The point after a predictor step is \(x^{p}=x^c +\theta \varDelta ^p x\) and \(s^p=s^c + \theta \varDelta ^p s\), with \(\mu ^p = (1-\theta )\mu \), where \(\theta \in (0,1)\) is the update parameter. The proximity measure in this case is
Our PC IPA starts with \((x^0,s^0) \in \mathcal {N}(\tau ,\mu )\). In each iteration, the algorithm performs a corrector and a predictor step. The PC IPA is given in Algorithm 4.1.
![figure c](http://media.springernature.com/lw685/springer-static/image/art%3A10.1007%2Fs10957-022-02078-z/MediaObjects/10957_2022_2078_Figc_HTML.png)
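Abstracting away the corrector and predictor Newton solves, the outer loop of Algorithm 4.1 reduces to the barrier-parameter update \(\mu \leftarrow (1-\theta )\mu \) and a stopping test on the duality gap. The sketch below is illustrative only (the function name and the default \(\bar{g}=4\) are our choices, not from the paper); it shows how the parameter choice \(\theta \le \frac{2}{\bar{g}(3+4\kappa )\sqrt{r}}\) drives the iteration count:

```python
import math

def pc_outer_loop(r, kappa, mu0, epsilon, g_bar=4.0):
    """Skeleton of the PC IPA outer loop: only the update mu <- (1 - theta) mu
    and the stopping rule are modeled; the corrector (full NT step) and the
    predictor Newton systems are abstracted away."""
    theta = 2.0 / (g_bar * (3 + 4 * kappa) * math.sqrt(r))   # update parameter
    mu, iterations = mu0, 0
    while r * mu > epsilon:      # duality gap is of order r * mu near the path
        # corrector: full NT step (mu unchanged, iterate re-centered)
        # predictor: step of length theta, then the mu-update
        mu *= (1.0 - theta)
        iterations += 1
    return iterations

k = pc_outer_loop(r=25, kappa=0.0, mu0=1.0, epsilon=1e-6)
theta = 2.0 / (4.0 * 3.0 * math.sqrt(25))
# iteration count is of order (1/theta) * log(r * mu0 / epsilon)
assert 1 <= k <= math.ceil(math.log(25 * 1.0 / 1e-6) / theta) + 1
```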
5 Analysis of the Predictor–Corrector Interior-Point Algorithm
5.1 The Corrector Step
The corrector step is a full-NT step, hence the following lemmas can be easily obtained by using the results published in [2]. Consider
hence
Lemma 5.1 gives an upper bound for \(\left\Vert q_v \right\Vert _F\) in terms of \(\left\Vert p_v \right\Vert _F\).
Lemma 5.1
(Lemma 5.1 in [2]) We have \( ~\left\Vert q_v \right\Vert _F\le \sqrt{1+4\kappa }~\left\Vert p_v \right\Vert _F\).
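The bound in Lemma 5.1 follows from a parallelogram-type identity once \(p_v=d_x+d_s\) and \(q_v=d_x-d_s\) are fixed; this convention is standard in this line of work, but since the displayed definitions preceding the lemma are not reproduced here, we treat it as an assumption of the sketch. A quick numerical check (illustrative Python):

```python
import numpy as np

rng = np.random.default_rng(0)
dx = rng.standard_normal(8)
ds = rng.standard_normal(8)

# assumed standard convention: p_v = d_x + d_s, q_v = d_x - d_s
p_v = dx + ds
q_v = dx - ds

# identity behind Lemma 5.1: ||q_v||^2 = ||p_v||^2 - 4 <d_x, d_s>
lhs = np.dot(q_v, q_v)
rhs = np.dot(p_v, p_v) - 4 * np.dot(dx, ds)
assert abs(lhs - rhs) < 1e-10

# so <d_x, d_s> >= -kappa ||p_v||^2 yields ||q_v|| <= sqrt(1 + 4 kappa) ||p_v||
kappa = max(0.0, -np.dot(dx, ds) / np.dot(p_v, p_v))
assert np.dot(q_v, q_v) <= (1 + 4 * kappa) * np.dot(p_v, p_v) + 1e-10
```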
Let \(x,\,s\in \text {int}\;\mathcal {K}\), \(\mu >0\) and w be the scaling point of x and s. We have
It should be mentioned that \(x^c\) and \(s^c\) belong to \(\text {int}\;\mathcal {K}\) if and only if \(v+d_x^c\) and \(v+d_s^c\) belong to \(\text {int}\;\mathcal {K}\), because \(P(w)^{1/2}\) and its inverse, \(P(w)^{-1/2}\), are automorphisms of \(\text {int}\;\mathcal {K}\), cf. Proposition A.1 (ii) from “Appendix”. The next lemma proves the strict feasibility of the full-NT step. It should be mentioned that throughout the paper we use the proximity measure given in (16).
Lemma 5.2
(Lemma 5.3 in [2]) Let \(\delta :=\delta (x,s,\mu )<\frac{1}{\sqrt{1+4\kappa }}\). Then, \(\lambda _{\min }(v)>\frac{1}{2}\) and the full-NT step is strictly feasible, that is, \( x^c \succ _{\mathcal {K}} 0\) and \(s^c \succ _{\mathcal {K}} 0\).
Lemma 5.3 gives an upper bound for the proximity measure after a full-NT step.
Lemma 5.3
(Lemma 5.6 in [2]) Let \(\delta =\delta (x,s,\mu )<\frac{1}{2\sqrt{1+4 \kappa }}\) and \(\lambda _{\min }(v)>\frac{1}{2}\). Then, \(\lambda _{\min }(v^c)>\frac{1}{2}\) and
Furthermore,
Lemma 5.4 provides an upper bound for the duality gap \(\langle x, s \rangle \) after a full-NT step.
Lemma 5.4
(cf. Lemma 5.8 in [2]) Let \(x^c\) and \(s^c\) be obtained after a full-NT step. Then,
Furthermore, if \(\delta <\frac{1}{2(1+4\kappa )}\), then
In the following subsection, we present some technical lemmas.
5.2 Technical Lemmas
First, consider the following two results that will be used in the next part of the analysis.
Lemma 5.5
Let \(\bar{f}:{(\bar{d},+\infty )}\longrightarrow (0, +\infty )\) be a function, where \(\bar{d} > 0\) and \(\bar{f}(t)>\xi \), for \(t > \bar{d}\) where \(\xi >0\). Assume that \(v\in \mathcal {V}\) and \(\lambda _{\min }(v) > \bar{d}\). Then,
Proof
Using Theorem A.1 given in “Appendix”, we assume that \(v=\sum _{i=1}^{r} \lambda _i(v) c_i\). Moreover, \(\bar{f}(v)=\sum _{i=1}^r \bar{f}(\lambda _i(v))c_i\). Then,
\(\square \)
Lemma 5.6
(cf. Lemma 7.4 of [15]) Let \(\widetilde{f}:{(d,+\infty )}\longrightarrow (0, +\infty )\) be a decreasing function, where \(d > 0\). Furthermore, suppose that \(v\in \mathcal {V}\), \(\lambda _{\min }(v)>d\) and \(\eta >0\). Then,
In the following part, we consider some inequalities from [2] that will be used later in the analysis. Using the first equation of (6), we get
The definition of the \(P_*(\kappa )\) property results in
where \(I_+=\left\{ i:\left\langle d_x^{(i)},\,d_s^{(i)}\right\rangle > 0\right\} \). Using that each \(P\left(w^{(i)}\right)^{1/2}\), \(1\le i \le m\), is self-adjoint, we get
Substituting the last equation into (20) and using the fact that \(P(w^{\frac{1}{2}})\) is self-adjoint, we obtain
Furthermore, one has
and therefore \(\left\langle d_x,\,d_s\right\rangle \ge -\kappa \left\Vert p_v \right\Vert _F^2\). Thus,
Note that (23) leads to the result given in Lemma 5.1. Now, we prove a lemma which is a generalization of Lemma 5.3 given in [13] to Cartesian SCHLCP. We assume that (Q, R) possesses the \(P_* (\kappa )\)-property.
Lemma 5.7
The following inequality holds:
where \(\delta ^c=\delta (x^c,s^c,\mu )=\left\Vert \left( v^c-\left( v^c\right) ^{2}\right) \circ \left( 2v^c-e\right) ^{-1}\right\Vert _F.\)
Proof
Using (22) and the second equation of the scaled predictor system (14) we have
Hence, \(\Vert d_x^p\Vert _F^2 + \Vert d_s^p\Vert _F^2 \le (1+2\kappa ) \Vert v^c\Vert _F^2.\) We give an upper bound for \(\Vert v^c\Vert _F\) depending on \(\delta ^c\) and r. For this, consider \(\sigma ^c=\Vert e-v^c\Vert _F\).
Then,
Consider \(\bar{f}(t) = \frac{t}{2t-1} > \frac{1}{2}\). Then, using Lemma 5.5 with \(\xi =\frac{1}{2}\) and \(\bar{d}=\frac{1}{2}\) we have
hence \(\sigma ^c\le 2 \delta ^c\).
In this way, we have
which proves the lemma. \(\square \)
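The step \(\sigma ^c\le 2\delta ^c\) rests on the componentwise inequality \(|1-t|\le 2\,|t-t^2|/(2t-1)\) for \(t>\frac{1}{2}\), which holds because \((2t-1)/t<2\) there. A numerical spot-check (illustrative Python sketch):

```python
import numpy as np

# For every eigenvalue t > 1/2 of v^c: |1 - t| <= 2 |t - t^2| / (2t - 1),
# since |1 - t| = |t - t^2|/(2t-1) * (2t-1)/t and (2t-1)/t = 2 - 1/t < 2.
rng = np.random.default_rng(1)
for _ in range(1000):
    t = 0.5 + rng.uniform(1e-6, 5.0)
    lhs = abs(1.0 - t)
    rhs = 2.0 * abs(t - t * t) / (2.0 * t - 1.0)
    assert lhs <= rhs + 1e-12

# the same holds in norm for a vector of eigenvalues lam with min(lam) > 1/2:
# sigma = ||e - v|| <= 2 ||(v - v^2) o (2v - e)^{-1}|| = 2 delta
lam = np.array([0.6, 0.9, 1.0, 1.7])
sigma = np.linalg.norm(1.0 - lam)
delta = np.linalg.norm((lam - lam ** 2) / (2.0 * lam - 1.0))
assert sigma <= 2.0 * delta + 1e-12
```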
5.3 The Predictor Step
The first lemma of this subsection is a generalization of Lemma 5.5 given in [13] to Cartesian SCHLCP. It provides a sufficient condition for the strict feasibility of the predictor step.
Lemma 5.8
Let \(x^c \succ _{\mathcal {K}} 0\) and \(s^c \succ _{\mathcal {K}} 0\), \(\mu > 0\) such that \(\delta ^c:=\delta (x^c,s^c,\mu ) < \frac{1}{2}\). Furthermore, let \(0<\theta <1\). Let \(x^{p}=x^c + \theta \varDelta ^p x, \quad s^p=s^c + \theta \varDelta ^p s\) be the iterates after a predictor step. Then, \(x^p \succ _{\mathcal {K}} 0\) and \(s^p \succ _{\mathcal {K}} 0\) if \(\bar{u}(\delta ^c,\theta ,r) > 0\), where
Proof
Let \(\alpha \in [0,1]\) and \(x^p(\alpha ) =x^c + \alpha \theta \varDelta ^p x\) and \(s^p(\alpha )= s^c + \alpha \theta \varDelta ^p s\). Hence,
From (27) and using that \( P(w^c)^{\frac{1}{2}}\) and \(P(w^c)^{-\frac{1}{2}}\) are automorphisms of \(\text {int}\;\mathcal {K}\) we obtain that \(x^p(\alpha ) \in \text {int}\;\mathcal {K}\) and \(s^p(\alpha ) \in \text {int}\;\mathcal {K}\) if and only if \(v^c+\alpha \theta d_x^p \in \text {int}\;\mathcal {K}\) and \(v^c+\alpha \theta d_s^p \in \text {int}\;\mathcal {K}\). Let \(v_x^p(\alpha ) = v^c+ \alpha \theta d_x^p\) and \(v_s^p(\alpha ) := v^c+\alpha \theta d_s^p,\) for \(0 \le \alpha \le 1\). From the second equation of system (14) we obtain
Using (28) and Lemma A.1 from “Appendix” we get
Using \(\lambda _i(e-v^c) \le \Vert e-v^c\Vert _F, \; \forall \; i=1,\ldots ,r,\) we have
From (26), (30) and \(\delta ^c<\frac{1}{2}\) we have
Note that \(f(\alpha )=\frac{\alpha ^2 \theta ^2}{1-\alpha \theta }\) is strictly increasing for \(0 \le \alpha \le 1\) and each fixed \(0< \theta < 1\). Using this, (29), (31) and Lemma 5.7 we get
This means that \(x^p(\alpha ) \circ s^p(\alpha ) \succ _{\mathcal {K}} 0\), for \(0\le \alpha \le 1\). Hence, \(x^p(\alpha )\) and \(s^p(\alpha )\) do not change sign on \(0\le \alpha \le 1\). From \(x^p(0) = x^c \succ _{\mathcal {K}} 0\) and \(s^p(0) = s^{c} \succ _{\mathcal {K}} 0\), we deduce that \( x^p(1)=x^p \succ _{\mathcal {K}} 0\) and \(s^p(1) = s^p \succ _{\mathcal {K}} 0,\) which yields the result. \(\square \)
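The monotonicity of \(f(\alpha )=\frac{\alpha ^2\theta ^2}{1-\alpha \theta }\) invoked in the proof is easy to confirm numerically; the sketch below is illustrative only:

```python
# f(alpha) = alpha^2 theta^2 / (1 - alpha theta) is strictly increasing on
# [0, 1] for each fixed theta in (0, 1): the numerator increases with alpha
# while the denominator decreases.
def f(alpha, theta):
    return (alpha * theta) ** 2 / (1.0 - alpha * theta)

theta = 0.7
values = [f(a / 100.0, theta) for a in range(101)]
assert all(v1 < v2 for v1, v2 in zip(values, values[1:]))
```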
Consider the following notation:
where \(\mu ^p=(1-\theta )\mu \) and \(w^p\) is the NT-scaling point of \(x^p\) and \(s^p\). Substituting \(\alpha =1\) in (28) and (32) we get
The next lemma investigates the effect of a predictor step and the update of \(\mu \) on the proximity measure. This is the generalization of Lemma 5.6 proposed in [13] to Cartesian SCHLCP.
Lemma 5.9
Consider \(\delta ^c:=\delta (x^c,s^c,\mu ) <\frac{1}{2}\), \(\delta :=\delta (x,s,\mu )\), \(\lambda _{\min }(v) > \frac{1}{2}\), \(\mu ^p=(1-\theta ) \mu \), where \(0<\theta <1\), \(\bar{u}(\delta ^c,\theta ,r) > \frac{1}{4}\) and let \(x^p\) and \(s^p\) be the iterates after a predictor step. Then, we have
and \(\lambda _{\min }(v^p) > \frac{1}{2}\).
Proof
From \(\bar{u}(\delta ^c,\theta ,r)> \frac{1}{4} > 0\), using Lemma 5.8 we obtain that \(x^p \succ _{\mathcal {K}} 0\) and \(s^p \succ _{\mathcal {K}} 0\). This means that the predictor step is strictly feasible. Furthermore, from (34) we get
hence the first part of the lemma is proved. Moreover,
Consider \(\widetilde{f}:\left( \frac{1}{2},\infty \right) \rightarrow \mathbb {R}\), \(\widetilde{f}(t) = \frac{t}{(2t-1)(1+t)}\), which is decreasing with respect to t. Using this, (33), (34) and (35) and Lemma 5.6 we get
Using the definition of \(v^{c}\), (5), (17) and (18) we have
Besides this, using \(\lambda _{\min }(v) > \frac{1}{2}\), \(v^2 \circ (2v - e)^{-1}\succeq _{\mathcal {K}} 0\), we have
Using (37), (38) and Lemma 5.1 we obtain
From (36), (39) and Lemma 5.7 we obtain the second statement of the lemma. \(\square \)
The next lemma gives an upper bound for the complementarity gap after an iteration of the algorithm.
Lemma 5.10
Let \(x^c \succeq _{\mathcal {K}} 0\), \(s^c \succeq _{\mathcal {K}} 0\) and \(\mu >0\) such that \(\delta ^c:=\delta (x^c,s^c,\mu ) < \frac{1}{2}\) and \(0<\theta <1\). If \(\delta <\frac{1}{2(1+4\kappa )}\) and \(x^p\) and \(s^p\) are the iterates obtained after the predictor step of the algorithm, then
Proof
Using the definition of \(v^p\) and (33) we have
From the second equation of (14) we obtain
Moreover, using (40) and (41) we get
Note that if \(0<\theta <1\), then we have
From (42), \(\mu ^p = (1-\theta ) \mu \) and Lemma 5.4 we deduce
which yields the result. \(\square \)
5.4 Complexity Bound
First, consider the following notation:
In the following lemma, we give a condition related to the proximity and update parameters \(\tau \) and \(\theta \) for which the PC IPA is well defined. This result is one of the novelties of the paper.
Lemma 5.11
Let \(\tau =\frac{1}{\bar{c} (3+4\kappa )}\), where \(\bar{c}\ge 2\) and \(0 < \theta \le \frac{2}{\bar{g} (3+4\kappa ) \sqrt{r}}\), where \(\bar{g}\ge 2\). If
i. \(\bar{c} \le \frac{1}{2}\bar{g}\),

ii. \(\frac{r(1+2\kappa )\theta ^2}{2(1-\theta )} < \varPsi (\tau )\),
then the PC IPA proposed in Algorithm 4.1 is well defined.
Proof
Let (x, s) be the iterate at the start of an iteration with \(x \succeq _{\mathcal {K}} 0\) and \(s \succeq _{\mathcal {K}} 0\) such that \(\lambda _{\min }\left( \frac{x \circ s }{\mu }\right) > \frac{1}{4}\) and \(\delta (x,s,\mu ) \le \tau \). It should be mentioned that \(\tau =\frac{1}{\bar{c} (3+4\kappa )}< \frac{1}{2\sqrt{1+4\kappa }}\), where \(\bar{c} \ge 2\). After a corrector step, applying Lemma 5.3 we have
The right-hand side of the above inequality is monotonically increasing with respect to \(\delta \), which implies
Substituting \(\tau = \frac{1}{\bar{c}(3+4\kappa )}\) in (44), using that \(\frac{3-\sqrt{3}}{2} < \frac{2}{3}\), \(\kappa \ge 0\) and \(\bar{c} \ge 2\) we obtain
Using \(\frac{r(1+2\kappa )\theta ^2}{2(1-\theta )} < \varPsi (\tau )\) and (45) we obtain
hence condition \(\bar{u}(\delta ^c,\theta ,r) > \frac{1}{4}\) from Lemma 5.9 is satisfied. Furthermore, using \(\tau =\frac{1}{\bar{c}(3+4\kappa )}\), \(\bar{c} \ge 2\) and (45) we have \(\delta ^c \le \omega (\tau )< \frac{1}{18} < \frac{1}{2}\). Following a predictor step and a \(\mu \)-update, by Lemma 5.9 we have
where \(\delta :=\delta (x,s,\mu )\). It can be verified that \(\bar{u}(\delta ^c,\theta ,r)\) is decreasing with respect to \(\delta ^c\). In this way, we obtain \(\bar{u}(\delta ^c,\theta ,r) \ge \bar{u}(\omega (\tau ),\theta ,r)\). We have seen in Lemma 5.9 that the function \(\tilde{f}\) is decreasing with respect to t, therefore
From (46) we have \(\sqrt{\bar{u}(\omega (\tau ),\theta ,r)} > \frac{\sqrt{2}}{2}\), hence using (48)
Using that \(\delta \le \tau \), \(\delta ^{c} \le \omega (\tau )\) and
which is increasing with respect to \(\delta ^c\), we obtain
From \(\tau = \frac{1}{\bar{c}(3+4\kappa )}\) and \(\delta \le \tau \) we have
Moreover, we use \(\theta \le \frac{2}{\bar{g}(3+4\kappa ) \sqrt{r}}\) and \(\kappa \ge 0\), hence we obtain
We also consider
Using (45) and \(3+4\kappa \ge 3\) we obtain
Furthermore, from (50), (54) and (55) we have
Conditions \(\bar{c} \ge 2\), \(\bar{g} \ge 2\) and \(\bar{c} \le \frac{1}{2}\bar{g}\) yield
From (51), (56) and (57), using \(\bar{c}\ge 2\) we get
Using (47), (49) and (58) we have
hence the PC IPA is well defined. \(\square \)
The following lemma gives a sufficient condition for satisfying Condition ii. from Lemma 5.11.
Lemma 5.12
Let \(\tau =\frac{1}{\bar{c} (3+4\kappa )}\), where \(\bar{c}\ge 2\) and \(0 < \theta \le \frac{2}{\bar{g} (3+4\kappa ) \sqrt{r}}\), where \(\bar{g}\ge 2\). Consider \(\varPsi \) given in (43). If
then condition ii. of Lemma 5.11 is satisfied.
Proof
Using \(0 < \theta \le \frac{2}{\bar{g} (3+4\kappa ) \sqrt{r}}\) and \(\bar{g} \ge 2\) we have \(\theta \le \frac{1}{2}\). From this we obtain
Furthermore, using (53) and \(\kappa \ge 0\) we get
Besides this, from \(\tau = \frac{1}{\bar{c} (3+4\kappa )}\) and \(\kappa \ge 0\) we obtain
It should be mentioned that the function \(\varPsi (\tau )\) is strictly decreasing with respect to \(\tau \), hence using (62) we obtain
In this way, using (59), (60), (61) and (63) we obtain
which yields the result. \(\square \)
Lemma 5.13
Let \(\tau =\frac{1}{\bar{c} (3+4\kappa )}\), where \(\bar{c}\ge 2\) and \(0 < \theta \le \frac{2}{\bar{g} (3+4\kappa ) \sqrt{r}}\), where \(\bar{g}\ge 2\). If Condition i. from Lemma 5.11 is satisfied, then Condition ii. from Lemma 5.11 also holds.
Proof
Consider the following function
which is decreasing with respect to \(\bar{c}\). Thus, for \(\bar{c} \ge 2\) we have
Using (64) and Condition i. of Lemma 5.11 we obtain that
Hence, if Condition i. of Lemma 5.11 holds, then (59) is satisfied. Using Lemma 5.12, we obtain that Condition ii. from Lemma 5.11 also holds. \(\square \)
Corollary 5.1
Let \(\tau =\frac{1}{\bar{c} (3+4\kappa )}\), where \(\bar{c}\ge 2\) and \(0 < \theta \le \frac{2}{\bar{g} (3+4\kappa ) \sqrt{r}}\). If \(\bar{g} \ge 2\bar{c}\), then the PC IPA proposed in Algorithm 4.1 is well defined.
Lemma 5.14
Let \(\tau =\frac{1}{\bar{c} (3+4\kappa )}\), where \(\bar{c}\ge 2\) and \(0 < \theta \le \frac{2}{\bar{g} (3+4\kappa ) \sqrt{r}}\), where \(\bar{g}\ge 2\bar{c}\). Moreover, let \(x^0\) and \(s^0\) be strictly feasible, \(\mu ^0 = \frac{\langle x^0, s^0 \rangle }{r}\) and \(\delta (x^0,s^0,\mu ^0 )\le \tau \). Let \(x^k\) and \(s^k\) be the iterates obtained after k iterations. Then, \(\langle x^k, s^k \rangle \le \epsilon \) for
Proof
Using Lemma 5.10 we have
Hence, if
then the inequality \(\langle x^k, s^k \rangle \le \epsilon \) holds. We take logarithms, thus
From \(\log (1+\theta ) \le \theta \), \(\theta \ge -1\), we deduce that (65) is satisfied if
hence the lemma is proved. \(\square \)
Theorem 5.1
Let \(\tau =\frac{1}{\bar{c} (3+4\kappa )}\), where \(\bar{c}\ge 2\) and \(0 < \theta \le \frac{2}{\bar{g} (3+4\kappa ) \sqrt{r}}\), where \(\bar{g}\ge 2 \bar{c}\). Then, the PC IPA proposed in Algorithm 4.1 is well defined and it performs at most

\(\mathcal {O}\left( (3+4\kappa )\sqrt{r}\,\log \frac{\langle x^0, s^0 \rangle }{\epsilon }\right) \)
iterations. The output is a pair (x, s) satisfying \(\langle x, s \rangle \le \epsilon \).
Proof
The result follows from Corollary 5.1 and Lemma 5.14. \(\square \)
Corollary 5.2
Consider \(0 < \tau \le \frac{1}{6+8\kappa }\) and \(0<\theta \le \frac{1}{\sqrt{r}} \tau \). Then, the PC IPA proposed in Algorithm 4.1 is well defined and it performs at most
iterations.
Proof
If \(\tau \le \frac{1}{6 + 8\kappa }\), then we can find \(\bar{c} \ge 2\) such that
Using \(\theta \le \frac{1}{\sqrt{r}} \tau \) and \(\tau \le \frac{1}{6+8\kappa }\) we have
Hence, we can find \(\bar{g} \ge 4\) such that
Moreover, from (66), (67) and \(\theta \le \frac{1}{\sqrt{r}} \tau \) we have
hence \(\bar{g} \ge 2 \bar{c}\) holds. All conditions from Lemma 5.11 are satisfied, hence from Corollary 5.1 and Lemma 5.14 we obtain the desired result. \(\square \)
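The admissible parameter regime of Corollaries 5.1 and 5.2 can be instantiated concretely. The helper below (illustrative naming and defaults, not from the paper) picks \(\tau \) and \(\theta \) from \(\bar{c}\) and \(\bar{g}\) and checks the Corollary 5.2 regime:

```python
import math

def admissible_parameters(kappa, r, c_bar=2.0, g_bar=4.0):
    """Pick proximity/update parameters tau = 1/(c_bar (3 + 4 kappa)) and
    theta = 2/(g_bar (3 + 4 kappa) sqrt(r)) under the well-definedness
    conditions c_bar >= 2 and g_bar >= 2 c_bar (Corollary 5.1)."""
    assert c_bar >= 2.0 and g_bar >= 2.0 * c_bar
    tau = 1.0 / (c_bar * (3.0 + 4.0 * kappa))
    theta = 2.0 / (g_bar * (3.0 + 4.0 * kappa) * math.sqrt(r))
    return tau, theta

tau, theta = admissible_parameters(kappa=0.25, r=16)
# Corollary 5.2 regime: tau <= 1/(6 + 8 kappa) and theta <= tau / sqrt(r)
assert tau <= 1.0 / (6.0 + 8.0 * 0.25)
assert theta <= tau / math.sqrt(16)
```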
In the following section, we present concluding remarks.
6 Conclusions
In this paper, we extended the PC IPA proposed in [13] to \(P_*(\kappa )\)-SCHLCPs. For the determination of the search directions, we used the function \(\varphi (t)=t-\sqrt{t}\) in the AET technique proposed by Darvay [9]. We proved that the introduced PC IPA has an iteration bound that matches the best known iteration bound of interior-point algorithms for solving these types of problems. We also provided a condition related to the proximity and update parameters for which the PC IPA is well defined. As further research, it would be interesting to give a more general framework that could deal with problems where we do not assume the \(P_*(\kappa )\)-property of the pair (Q, R). This approach could lead to the analysis of problems similar to the general LCPs studied in [23, 24].
References
Achache, M.: A new primal-dual path-following method for convex quadratic programming. Comput. Appl. Math. 25, 97–110 (2006)
Asadi, S., Mahdavi-Amiri, N., Darvay, Z., Rigó, P.R.: Full-NT step feasible interior-point algorithm for symmetric cone horizontal linear complementarity problem based on a positive-asymptotic barrier function. Optim. Methods Softw. (2020). https://doi.org/10.1080/10556788.2020.1734803
Asadi, S., Mansouri, H., Darvay, Z.: An infeasible full-NT step IPM for \(P_*(\kappa )\) horizontal linear complementarity problem over Cartesian product of symmetric cones. Optimization 66, 225–250 (2017)
Asadi, S., Mansouri, H., Darvay, Z., Zangiabadi, M.: On the \(P_*(\kappa )\) horizontal linear complementarity problems over Cartesian product of symmetric cones. Optim. Methods Softw. 31, 233–257 (2016)
Asadi, S., Zangiabadi, M., Mansouri, H.: A predictor-corrector interior-point algorithm for \(P_*(\kappa )\)-HLCPs over Cartesian product of symmetric cones. Numer. Funct. Anal. Optim. 38, 20–38 (2017)
Asadi, S., Mansouri, H., Lesaja, G., Zangiabadi, M.: A long-step interior-point algorithm for symmetric cone Cartesian \(P_*(\kappa )\)-HLCP. Optimization 67, 2031–2060 (2018)
Bai, Y.Q., El Ghami, M., Roos, C.: A comparative study of kernel functions for primal-dual interior-point algorithms in linear optimization. SIAM J. Optim. 15, 101–128 (2004)
Chung, S.-J.: NP-completeness of the linear complementarity problem. J. Optim. Theory Appl. 60, 393–399 (1989)
Darvay, Z.: New interior-point algorithms in linear programming. Adv. Model. Optim. 5, 51–92 (2003)
Darvay, Z.: A new predictor–corrector algorithm for linear programming. Alk. Mat. Lapok 22, 135–161 (2005). (in Hungarian)
Darvay, Z.: A predictor–corrector algorithm for linearly constrained convex optimization. Stud. Univ. Babeş-Bolyai Ser. Inform. 54, 121–138 (2009)
Darvay, Z., Illés, T., Majoros, C.: Interior-point algorithm for sufficient LCPs based on the technique of algebraically equivalent transformation. Optim. Lett. 15, 357–376 (2021)
Darvay, Z., Illés, T., Povh, J., Rigó, P.R.: Predictor–corrector interior-point algorithm for sufficient linear complementarity problems based on a new search direction. SIAM J. Optim. 30, 2628–2658 (2020)
Darvay, Z., Papp, I.-M., Takács, P.-R.: Complexity analysis of a full-Newton step interior-point method for linear optimization. Period. Math. Hung. 73, 27–42 (2016)
Darvay, Z., Rigó, P.R.: New interior-point algorithm for symmetric optimization based on a positive-asymptotic barrier function. Numer. Funct. Anal. Optim. 39, 1705–1726 (2018)
de Klerk, E., Roos, C., Terlaky, T.: Initialization in semidefinite programming via a self-dual skew-symmetric embedding. Oper. Res. Lett. 20, 213–221 (1997)
de Klerk, E., Roos, C., Terlaky, T.: Self-dual embeddings. In: Wolkowicz, H., Saigal, R., Vandenberghe, L. (eds.) Handbook of Semidefinite Programming: Theory, Algorithms, and Applications, International Series in Operations Research & Management Science, vol. 27, pp. 111–138. Kluwer Academic Publishers, Dordrecht (2000)
Faraut, J., Korányi, A.: Analysis on Symmetric Cones, Oxford Mathematical Monographs. The Clarendon Press, New York (1994)
Faybusovich, L.: Linear systems in Jordan algebras and primal-dual interior-point algorithms. J. Comput. Appl. Math. 86, 149–175 (1997)
Güler, O.: Barrier functions in interior point methods. Math. Oper. Res. 21, 860–885 (1996)
Illés, T., Nagy, M., Terlaky, T.: A polynomial path-following interior point algorithm for general linear complementarity problems. J. Glob. Optim. 47, 329–342 (2010)
Illés, T., Nagy, M.: A Mizuno–Todd–Ye type predictor–corrector algorithm for sufficient linear complementarity problems. Eur. J. Oper. Res. 181, 1097–1111 (2007)
Illés, T., Nagy, M., Terlaky, T.: Polynomial interior point algorithms for general linear complementarity problems. Alg. Oper. Res. 5, 1–12 (2010)
Karimi, M., Luo, S., Tunçel, L.: Primal-dual entropy-based interior-point algorithms for linear optimization. RAIRO-Oper. Res. 51, 299–328 (2017)
Karmarkar, N.K.: A new polynomial-time algorithm for linear programming. Combinatorica 4, 373–395 (1984)
Kheirfam, B., Haghighi, M.: A full-Newton step feasible interior-point algorithm for \({P}_{*}(\kappa )\)-LCP based on a new search direction. Croat. Oper. Res. Rev. 7, 277–290 (2016)
Kojima, M., Megiddo, N., Noma, T., Yoshise, A.: A unified approach to interior point algorithms for linear complementarity problems. In: Lecture Notes in Computer Science, vol. 538. Springer, New York (1991)
Lesaja, G., Roos, C.: Unified analysis of kernel-based interior-point methods for \(P_*(\kappa )\)-LCP. SIAM J. Optim. 21, 2994–3013 (2011)
Lesaja, G., Wang, G.Q., Zhu, D.T.: Interior-point methods for Cartesian \(P_*(\kappa )\)-linear complementarity problems over symmetric cones based on the eligible kernel functions. Optim. Methods Softw. 27, 827–843 (2012)
Luo, Z.Q., **u, N.: Path-following interior point algorithms for the Cartesian \(P_{*}(\kappa )\)-LCP over symmetric cones. Sci. China Ser. A Math. 52, 1769–1784 (2009)
Luo, Z.Q., Sturm, J.F., Zhang, S.: Conic convex programming and self-dual embedding. Optim. Methods Softw. 14, 169–218 (2000)
Mohammadi, N., Mansouri, H., Zangiabadi, M., Asadi, S.: A full Nesterov–Todd step infeasible-interior-point algorithm for Cartesian \(P_*(\kappa )\) horizontal linear complementarity problem over symmetric cones. Optimization 65, 539–565 (2015)
Nesterov, Y.E., Todd, M.J.: Self-scaled barriers and interior-point methods for convex programming. Math. Oper. Res. 22, 1–42 (1997)
Nesterov, Y.E., Todd, M.J.: Primal-dual interior-point methods for self-scaled cones. SIAM J. Optim. 8, 324–364 (1998)
Pan, S., Li, X., He, S.: An infeasible primal-dual interior point algorithm for linear programs based on logarithmic equivalent transformation. J. Math. Anal. Appl. 314, 644–660 (2006)
Peng, J., Roos, C., Terlaky, T.: Self-Regularity, A New Paradigm for Primal-Dual Interior-Point Algorithms. Princeton University Press, Princeton (2002)
Permenter, F., Friberg, H.A., Andersen, E.D.: Solving conic optimization problems via self-dual embedding and facial reduction: a unified approach. SIAM J. Optim. 27, 1257–1282 (2017)
Potra, F.A., Sheng, R.: On homogeneous interior-point algorithms for semidefinite programming. Optim. Methods Softw. 9, 161–184 (1998)
Rigó, P.R.: New trends in algebraic equivalent transformation of the central path and its applications. Ph.D. thesis, Budapest University of Technology and Economics, Institute of Mathematics, Hungary (2020)
Rigó, P.R.: New trends in interior-point algorithms. In: Friedler, F. (ed.) Proceeding of the 8th VOCAL Optimization Conference: Advanced Algorithms (short papers), pp. 85–90. Hungary, Esztergom (2018)
Rigó, P.R., Darvay, Z.: Infeasible interior-point method for symmetric optimization using a positive-asymptotic barrier. Comput. Optim. Appl. 71, 483–508 (2018)
Roos, C., Terlaky, T., Vial, J.P.: Theory and Algorithms for Linear Optimization. Springer, New York (2005)
Schmieta, S.H., Alizadeh, F.: Extension of primal-dual interior point algorithms to symmetric cones. Math. Program. 96, 409–438 (2003)
Sturm, J.F.: Similarity and other spectral relations for symmetric cones. Linear Algebra Appl. 312, 135–154 (2000)
Tunçel, L., Todd, M.J.: On the interplay among entropy, variable metrics and potential functions in interior-point algorithms. Comput. Optim. Appl. 8, 5–19 (1997)
Vieira, M.V.C.: Jordan algebraic approach to symmetric optimization. Ph.D. thesis, Delft University of Technology, The Netherlands (2007)
Wang, G.Q., Bai, Y.Q.: A new full Nesterov–Todd step primal-dual path-following interior-point algorithm for symmetric optimization. J. Optim. Theory Appl. 154, 966–985 (2012)
Wright, S.: Primal-Dual Interior-Point Methods. SIAM, Philadelphia (1997)
Ye, Y.: Interior Point Algorithms, Theory and Analysis. Wiley, Chichester (1997)
Acknowledgements
The research of P.R. Rigó has been supported by the ÚNKP-22-4 New National Excellence Program of the Ministry for Innovation and Technology from the source of the National Research, Development and Innovation Fund. Zs. Darvay is grateful for the grant of Babes-Bolyai University with Number AGC32879/18.06.2021.
Funding
Open access funding provided by Corvinus University of Budapest.
Ethics declarations
Conflict of interest
The authors have no conflicts of interest to declare that are relevant to the content of this article.
Research Data Policy and Data Availability Statements
Data sharing is not applicable to this article as no datasets were generated or analysed during the current study.
Additional information
Communicated by Goran Lesaja.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendix
We present some results related to the theory of Euclidean Jordan algebras and symmetric cones [18, 19, 45, 47]. Let \(\mathcal {V}\) be an n-dimensional vector space over \(\mathbb {R}\) with a bilinear map \(\circ : (x, y)\rightarrow x \circ y \in \mathcal {V}\). We say that \(( \mathcal {V}, \circ )\) is a Jordan algebra if for all \(x, y \in \mathcal {V}\) we have \(x\circ y=y\circ x\) and \(x\circ (x^2\circ y)=x^2\circ (x\circ y)\), where \(x^2=x\circ x\). An element \(e\in \mathcal {V}\) is the identity element of \(\mathcal {V}\) if and only if \( e\circ x=x\circ e=x\) for all \(x\in \mathcal {V}\). An element x is called invertible if there exists a unique element \(\bar{x}\) such that \(x\circ \bar{x}=e\) and \(\bar{x}\) is a polynomial in x; the inverse of x is denoted by \(x^{-1}\). A Jordan algebra \(\mathcal {V}\) with an identity element is called a Euclidean Jordan algebra if there exists a symmetric positive definite quadratic form \(\bar{Q}\) on \(\mathcal {V}\) satisfying \(\bar{Q}( x\circ y,\,z )=\bar{Q}(x,\,y\circ z)\). For any \(x\in \mathcal {V}\), the Lyapunov transformation L(x) is defined as \( L(x)y:=x\circ y\) for all \(y\in \mathcal {V}\). The quadratic representation P(x) of x is given by \(P(x):= 2L(x)^2-L(x^2)\), where \(L(x)^2=L(x)L(x).\) The degree of an element x, denoted by \(\deg (x)\), is the smallest integer r such that the set \(\{ e,x,\ldots ,x^r\}\) is linearly dependent. The rank of \(\mathcal {V}\), denoted by \(\textrm{rank}\,(\mathcal {V})\), is the largest \(\deg (x)\) over all \(x\in \mathcal {V}\). A subset \(\{c_1, c_2, \ldots , c_r\}\) of \(\mathcal {V}\) is called a Jordan frame if it is a complete system of orthogonal primitive idempotents.
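A standard concrete instance of these definitions is the algebra of real symmetric matrices with the symmetrized product \(x\circ y=(xy+yx)/2\). The sketch below, assuming NumPy, checks the two Jordan axioms numerically and builds the quadratic representation, which in this algebra acts as \(P(x)y=xyx\).

```python
import numpy as np

def jordan(x, y):
    """Jordan product on symmetric matrices: x o y = (xy + yx)/2."""
    return (x @ y + y @ x) / 2

rng = np.random.default_rng(0)
a = rng.standard_normal((3, 3)); x = (a + a.T) / 2   # random symmetric x
b = rng.standard_normal((3, 3)); y = (b + b.T) / 2   # random symmetric y

# Commutativity: x o y = y o x.
assert np.allclose(jordan(x, y), jordan(y, x))

# Jordan identity: x o (x^2 o y) = x^2 o (x o y), with x^2 = x o x.
x2 = jordan(x, x)
assert np.allclose(jordan(x, jordan(x2, y)), jordan(x2, jordan(x, y)))

def P(x, y):
    """Quadratic representation applied to y:
    P(x)y = 2 L(x)^2 y - L(x^2) y = 2 x o (x o y) - x^2 o y."""
    return 2 * jordan(x, jordan(x, y)) - jordan(jordan(x, x), y)

# In this algebra, expanding the products shows P(x)y = x y x.
assert np.allclose(P(x, y), x @ y @ x)
```

Here the cone of squares \(\mathcal {K}(\mathcal {V})\) is the cone of positive semidefinite matrices, i.e. the setting of semidefinite optimization.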
Theorem A.1
(Theorem III.1.2 of [18]) Suppose \(\textrm{rank}\,(\mathcal {V})=r\). Then, for any x in \(\mathcal {V}\) there exists a Jordan frame \(c_1 ,\,\cdots , c_r\) and real numbers \(\lambda _1,\,\cdots , \lambda _r\) such that \( x=\sum _{i=1}^{r}\lambda _i c_i\).
Note that the numbers \(\lambda _i\) are called the eigenvalues of x. Let \( \textrm{tr}(x)=\sum _{i=1}^{r}\lambda _i\) and \(\det (x)=\prod _{i=1}^{r}\lambda _i.\) Consider a real-valued univariate function \(\varphi \) defined and differentiable on the interval \((\xi ^2,+\infty )\) such that \(\varphi '(t)>0\), \(\forall t>\xi ^2\). Let \(x = \sum _{i=1}^{r}\lambda _i(x)c_i\), where \(\{c_1,\ldots ,c_r\}\) is the corresponding Jordan frame. We define the vector-valued function \(\varphi \) in the following way:
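In the symmetric-matrix algebra, Theorem A.1 is the usual eigendecomposition, and the vector-valued function applies \(\varphi \) to the eigenvalues while keeping the Jordan frame. A short sketch, assuming NumPy and using \(\varphi (t)=t-\sqrt{t}\) as in the AET technique (the helper name `phi_map` is ours):

```python
import numpy as np

def phi_map(x):
    """Spectral application of phi(t) = t - sqrt(t):
    if x = sum_i lam_i c_i, then phi(x) = sum_i (lam_i - sqrt(lam_i)) c_i.
    Requires x positive semidefinite so sqrt is defined on its spectrum."""
    lam, q = np.linalg.eigh(x)                    # x = q diag(lam) q^T
    return q @ np.diag(lam - np.sqrt(lam)) @ q.T

x = np.diag([4.0, 9.0])
fx = phi_map(x)
# Eigenvalues 4 and 9 are mapped to 4 - 2 = 2 and 9 - 3 = 6.
assert np.allclose(fx, np.diag([2.0, 6.0]))

# trace and determinant are the sum and product of the eigenvalues:
lam = np.linalg.eigvalsh(x)
assert np.isclose(lam.sum(), np.trace(x))
assert np.isclose(lam.prod(), np.linalg.det(x))
```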
For any Euclidean Jordan algebra \(\mathcal {V}\), consider the corresponding cone of squares \(\mathcal {K}(\mathcal {V}):=\{x^2~:~x\in \mathcal {V}\}\). This cone is symmetric, i.e., it is self-dual and homogeneous, see [18]. We use the following notations:
and
The inner product is defined as \( \left\langle x,y\right\rangle =\textrm{tr}(x\circ y) \). The induced norm is the Frobenius norm, \(\Vert x\Vert _F=\sqrt{\langle x,x\rangle }=\sqrt{\sum _{i=1}^{r}\lambda _i^2(x)}\).
The following lemmas are used in the complexity analysis of the IPA.
Proposition A.1
The following statements hold:
(i) \(x\in \mathcal {V}\) is invertible if and only if P(x) is invertible, in which case \(P(x)^{-1}=P(x^{-1})\).
(ii) If \(x\in \mathcal {V}\) is invertible, then \(P(x)\mathcal {K}=\mathcal {K}\) and \(P(x)\text {int}\;\mathcal {K}=\text {int}\;\mathcal {K}\).
(iii) If \(x\in \mathcal {K}\), then \(P(x)^{1/2}=P(x^{1/2})\).
(iv) If \(x\in \mathcal {V}\), then \(x\in \mathcal {K}\) (\(x\in \text {int}\;\mathcal {K}\)) if and only if \(\lambda _i(x)\ge 0\) (\(\lambda _i(x)>0\)), for all \(i=1,\ldots ,r\).
Lemma A.1
(Lemma 14 of [44]) If \(x,s\in \mathcal {V}\), then
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Darvay, Z., Rigó, P.R. New Predictor–Corrector Algorithm for Symmetric Cone Horizontal Linear Complementarity Problems. J Optim Theory Appl (2022). https://doi.org/10.1007/s10957-022-02078-z