Abstract
In conic linear programming—in contrast to linear programming—the Lagrange dual is not an exact dual: it may not attain its optimal value, or there may be a positive duality gap. The corresponding Farkas’ lemma is also not exact (it does not always prove infeasibility). We describe exact duals, and certificates of infeasibility and weak infeasibility for conic LPs which are nearly as simple as the Lagrange dual, but do not rely on any constraint qualification. Some of our exact duals generalize the SDP duals of Ramana, and Klep and Schweighofer to the context of general conic LPs. Some of our infeasibility certificates generalize the row echelon form of a linear system of equations: they consist of a small, trivially infeasible subsystem obtained by elementary row operations. We prove analogous results for weakly infeasible systems. We obtain some fundamental geometric corollaries: an exact characterization of when the linear image of a closed convex cone is closed, and an exact characterization of nice cones. Our infeasibility certificates provide algorithms to generate all infeasible conic LPs over several important classes of cones; and all weakly infeasible SDPs in a natural class. Using these algorithms we generate a public domain library of infeasible and weakly infeasible SDPs. The status of our instances can be verified by inspection in exact arithmetic, but they turn out to be challenging for commercial and research codes.
Notes
Performing a sequence of operations of types (1) and (2) is the same as replacing A by AM and c by \(M^Tc\), where M is an \(m \times m\) invertible matrix.
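This change of variables can be sanity-checked numerically. The sketch below uses made-up data (A, c, M, and a point x are all placeholders): it verifies that a point x for the original data and the point \(M^{-1}x\) for the reformulated data give the same constraint and objective values.

```python
import numpy as np

# Hypothetical data: any A, c, and an invertible M will do.
rng = np.random.default_rng(0)
m = 3
A = rng.standard_normal((4, m))
c = rng.standard_normal(m)
M = rng.standard_normal((m, m))   # invertible with probability 1
x = rng.standard_normal(m)        # an arbitrary point

A2, c2 = A @ M, M.T @ c           # reformulated data
x2 = np.linalg.solve(M, x)        # the corresponding point: x = M x2

assert np.allclose(A @ x, A2 @ x2)   # constraint values are unchanged
assert np.isclose(c @ x, c2 @ x2)    # objective value is unchanged
```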
To outline a proof of (1.4), assume for simplicity \(\lambda =2. \,\) Then
$$\begin{aligned} y := \begin{pmatrix} \epsilon & 1 \\ 1 & 1/\epsilon \end{pmatrix} \succeq 0 \quad \text {for all } \epsilon > 0, \end{aligned}$$and \(A^*y = (\epsilon , 2)^T,\) so \((0,2)^T \in {\text {cl}}\,(A^* {{\mathcal {S}}_+^{2}});\) the rest of (1.4) is straightforward to verify.
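The footnote's computation is easy to check numerically. The sketch below assumes the adjoint acts as \(A^*y = (y_{11}, y_{12}+y_{21})^T\) (an assumption about A, chosen to be consistent with \(A^*y = (\epsilon , 2)^T\) for the y above); it verifies that y is psd for several values of \(\epsilon \) and that \(A^*y = (\epsilon , 2)^T \rightarrow (0,2)^T\).

```python
import numpy as np

def Astar(y):
    # assumed form of the adjoint, chosen to match A*y = (eps, 2)^T
    return np.array([y[0, 0], y[0, 1] + y[1, 0]])

for eps in [1.0, 1e-3, 1e-6]:
    y = np.array([[eps, 1.0], [1.0, 1.0 / eps]])
    # y has determinant 0 and positive trace, hence is psd
    assert np.min(np.linalg.eigvalsh(y)) >= -1e-7
    # A*y = (eps, 2)^T, which tends to (0, 2)^T as eps -> 0
    assert np.allclose(Astar(y), [eps, 2.0])
```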
References
Auslender, A.: Closedness criteria for the image of a closed set by a linear operator. Numer. Funct. Anal. Optim. 17, 503–515 (1996)
Barker, G.P., Carlson, D.: Cones of diagonally dominant matrices. Pac. J. Math. 57, 15–32 (1975)
Bauschke, H., Borwein, J.M.: Conical open mapping theorems and regularity. In: Proceedings of the Centre for Mathematics and its Applications 36, pp. 1–10. Australian National University (1999)
Berman, A.: Cones, Matrices and Mathematical Programming. Springer, Berlin (1973)
Bertsekas, D., Tseng, P.: Set intersection theorems and existence of optimal solutions. Math. Progr. 110, 287–314 (2007)
Blum, L., Cucker, F., Shub, M., Smale, S.: Complexity and Real Computation. Springer, Berlin (1998)
Bonnans, F.J., Shapiro, A.: Perturbation Analysis of Optimization Problems. Springer Series in Operations Research. Springer, Berlin (2000)
Borwein, J.M., Lewis, A.S.: Convex Analysis and Nonlinear Optimization: Theory and Examples. CMS Books in Mathematics. Springer, Berlin (2000)
Borwein, J.M., Moors, W.B.: Stability of closedness of convex cones under linear mappings. J. Convex Anal. 16(3–4), 699–705 (2009)
Borwein, J.M., Moors, W.B.: Stability of closedness of convex cones under linear mappings II. J. Nonlinear Anal. Optim. 1(1), 1–7 (2010)
Borwein, J.M., Wolkowicz, H.: Facial reduction for a cone-convex programming problem. J. Aust. Math. Soc. 30, 369–380 (1981)
Borwein, J.M., Wolkowicz, H.: Regularizing the abstract convex program. J. Math. Anal. App. 83, 495–530 (1981)
Cheung, V., Wolkowicz, H., Schurr, S.: Preprocessing and regularization for degenerate semidefinite programs. In: Bailey, D., Bauschke, H.H., Garvan, F., Théra, M., Vanderwerff, J.D., Wolkowicz, H. (eds.) Proceedings of Jonfest: A Conference in Honour of the 60th Birthday of Jon Borwein. Springer, Berlin (2013)
Chua, C.B., Tunçel, L.: Invariance and efficiency of convex representations. Math. Progr. B 111, 113–140 (2008)
Drusvyatskiy, D., Pataki, G., Wolkowicz, H.: Coordinate shadows of semidefinite and Euclidean distance matrices. SIAM J. Opt. 25(2), 1160–1178 (2015)
Güler, O.: Foundations of Optimization. Graduate Texts in Mathematics. Springer, Berlin (2010)
Glineur, F.: Proving strong duality for geometric optimization using a conic formulation. Ann. Oper. Res. 105(2), 155–184 (2001)
Gortler, S.J., Thurston, D.P.: Characterizing the universal rigidity of generic frameworks. Discrete Comput. Geom. 51(4), 1017–1036 (2014)
Klep, I., Schweighofer, M.: An exact duality theory for semidefinite programming based on sums of squares. Math. Oper. Res. 38(3), 569–590 (2013)
Krislock, N., Wolkowicz, H.: Explicit sensor network localization using semidefinite representations and facial reductions. SIAM J. Opt. 20, 2679–2708 (2010)
Liu, M., Pataki, G.: Exact duality in semidefinite programming based on elementary reformulations. SIAM J. Opt. 25(3), 1441–1454 (2015)
Lourenco, B., Muramatsu, M., Tsuchiya, T.: Facial reduction and partial polyhedrality. Optimization Online. http://www.optimization-online.org/DB_FILE/2015/11/5224.pdf (2015)
Lourenco, B., Muramatsu, M., Tsuchiya, T.: A structural geometrical analysis of weakly infeasible SDPs. J. Oper. Res. Soc. Jpn. 59(3), 241–257 (2015)
Pataki, G.: The geometry of semidefinite programming. In: Saigal, R., Vandenberghe, L., Wolkowicz, H. (eds.) Handbook of Semidefinite Programming. Kluwer Academic Publishers. Also available from www.unc.edu/~pataki (2000)
Pataki, G.: On the closedness of the linear image of a closed convex cone. Math. Oper. Res. 32(2), 395–412 (2007)
Pataki, G.: On the connection of facially exposed and nice cones. J. Math. Anal. App. 400, 211–221 (2013)
Pataki, G.: Strong duality in conic linear programming: facial reduction and extended duals. In: Bailey, D., Bauschke, H.H., Garvan, F., Théra, M., Vanderwerff, J.D., Wolkowicz, H. (eds.) Proceedings of Jonfest: A Conference in Honour of the 60th Birthday of Jon Borwein. Springer. Also available from http://arxiv.org/abs/1301.7717 (2013)
Pataki, G.: Bad semidefinite programs: they all look the same. SIAM J. Opt. 27(1), 146–172 (2017)
Permenter, F., Parrilo, P.: Partial facial reduction: simplified, equivalent SDPs via approximations of the PSD cone. Technical Report. http://arxiv.org/abs/1408.4685 (2014)
Pólik, I., Terlaky, T.: Exact duality for optimization over symmetric cones. Technical Report, Lehigh University, Bethlehem, PA, USA (2009)
Provan, J.S., Shier, D.R.: A paradigm for listing (s, t)-cuts in graphs. Algorithmica 15(4), 351–372 (1996)
Ramana, M.V.: An exact duality theory for semidefinite programming and its complexity implications. Math. Progr. Ser. B 77, 129–162 (1997)
Ramana, M.V., Freund, R.: On the ELSD duality theory for SDP. Technical Report. MIT (1996)
Ramana, M.V., Tunçel, L., Wolkowicz, H.: Strong duality for semidefinite programming. SIAM J. Opt. 7(3), 641–662 (1997)
Read, R., Tarjan, R.: Bounds on backtrack algorithms for listing cycles, paths, and spanning trees. Networks 5, 237–252 (1975)
Renegar, J.: A Mathematical View of Interior-Point Methods in Convex Optimization. MPS-SIAM Series on Optimization. SIAM, Philadelphia, USA (2001)
Rockafellar, T.R.: Convex Analysis. Princeton University Press, Princeton (1970)
Roshchina, V.: Facially exposed cones are not nice in general. SIAM J. Opt. 24, 257–268 (2014)
Waki, H.: How to generate weakly infeasible semidefinite programs via Lasserre’s relaxations for polynomial optimization. Optim. Lett. 6(8), 1883–1896 (2012)
Waki, H., Muramatsu, M.: Facial reduction algorithms for conic optimization problems. J. Optim. Theory Appl. 158(1), 188–215 (2013)
Acknowledgements
We are grateful to the referees, the Associate Editor, and Melody Zhu for their insightful comments, and to Imre Pólik for his help in our work with the SDP solvers.
Appendices
Appendix 1: A definition of certificates
In the Introduction we gave an informal definition of certificates of certain properties of conic linear programs, and the results of the paper can be understood relying only on this informal definition. In this section we give a more rigorous definition of certificates; a fully rigorous definition can be found in [6].
We define the set of primal instances as \({\text {Primal}} := \{\, (A, b) : A: \mathbb {R}^{m} \rightarrow Y \text { linear}, \; b \in Y \,\}.\) We assume that a map \(A: \mathbb {R}^{m} \rightarrow Y\) is represented by a suitable matrix.
Definition 9
Let \(C: {\text {Primal}} \rightarrow Y'\) be a function, with \(Y'\) a finite dimensional Euclidean space. We say that C provides an exact certificate of infeasibility of (P) if there is an algorithm, a "verifier", which
- (1) Takes as input (A, b) and C(A, b), where \((A,b) \in {\text {Primal}}\,;\)
- (2) Outputs "yes" exactly when (P) with data (A, b) is infeasible;
- (3) Takes a polynomial number of steps in the size of (A, b) and C(A, b).
A "step" means either a usual arithmetic operation, or checking membership in sets of the form \((K \cap L)^*\) and \((K^* \cap J)^*,\) where L and J are subspaces. By the "size" of \(a \in Y^\prime \) we mean the number of components of a.
We define an exact certificate of infeasibility of (D), and of other properties (say of weak infeasibility) of (P) and of (D) analogously.
The alert reader may notice that for our exact certificate of infeasibility given in part (3) of Theorem 4 it is enough to allow a verifier to check membership in a set of the form \((K \cap L)^*; \) and a similar restriction can be made for our exact certificate of infeasibility of (D).
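To make the notion of a verifier concrete in the simplest possible setting: for a plain linear system \(Ax = b,\) a Farkas-type vector y with \(A^Ty = 0\) and \(b^Ty = 1\) is an exact certificate of infeasibility, and the verifier only performs arithmetic. The Python sketch below is a toy illustration of this special case (the function name is ours), not of the conic certificates of the paper.

```python
import numpy as np

def verify_infeasible(A, b, y, tol=1e-9):
    """Toy verifier for the linear-equation case: y certifies that
    Ax = b has no solution when A^T y = 0 and b^T y = 1."""
    return np.linalg.norm(A.T @ y) <= tol and abs(b @ y - 1.0) <= tol

# x_1 = 0 and x_1 = 1 cannot both hold, and y exhibits this:
A = np.array([[1.0, 0.0], [1.0, 0.0]])
b = np.array([0.0, 1.0])
y = np.array([-1.0, 1.0])   # A^T y = 0,  b^T y = 1
assert verify_infeasible(A, b, y)
```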
Appendix 2: Proof of Lemmas 1 and 2
Proof of Lemma 1.
Proof of (1)
It is clear that \({\text {FR}}_k(K)\) contains all nonnegative multiples of its elements, so we only need to show that it is convex. To this end, we use the following Claim, whose proof is straightforward:
Claim
If C is a closed, convex cone and \(y, z \in C^*,\) then
\(\square \)
Let \((y_1, \dots , y_k), (z_1, \dots , z_k) \in {\text {FR}}_k(K).\) We will prove
which will clearly imply (1). To start, for brevity, for \(i=0, \dots , k\) we set
(with the understanding that all these cones equal K when \(i=0\)). We first prove the relations
for \(i=1, \dots , k.\) For \(i=1\) the first two hold by definition, and (10.62) follows from the Claim. Suppose now that \(i \ge 2\) and (10.60) through (10.62) hold with \(i-1\) in place of i. Then
where the first containment is by definition, the inclusion is trivial, and the equality is by using the induction hypothesis. This proves (10.60) and Eq. (10.61) holds analogously.
Hence
where the first equation is by definition. The second follows since by (10.60) and (10.61), and since \(K_{y+z,i-1}\) is a closed convex cone, we can use the Claim with \(C = K_{y+z,i-1}, \, y = y_i, \, z = z_i. \,\) The third is by the induction hypothesis, and the last is by definition. This completes the proof of (10.62).
Now by (10.60), (10.61) and since \(K_{y+z,i-1}^*\) is a convex cone, we deduce that
This proves (10.59), and completes the proof of (1). \(\square \)
Proof of (2)
Let \(L = K \cap -K,\) assume that K is not a subspace, i.e., \(K \ne L, \) and also assume \(k \ge 2.\) Let us choose a sequence \(\{ y_{1i} \} \subseteq {\text {ri}}\,K^*, \,\) s.t. \(y_{1i} \rightarrow 0.\) Then
Let \(y_2 \in L^\perp \setminus K^*.\) (Such a \(y_2\) exists, since \(K^* \ne L^\perp .\)) Then \((y_{1i}, y_2, 0, \dots , 0) \in {\text {FR}}_k(K), \,\) and it converges to \((0, y_2, 0, \dots , 0) \not \in {\text {FR}}_k(K). \,\)
Conversely, if K is a subspace, then an easy calculation shows that so is \({\text {FR}}_k(K), \,\) which is hence closed. \(\square \)
Proof of (3)
Let us fix \(T \in {\text {Aut}}(K)\) and let S be an arbitrary set. Then we claim that
hold. Statement (10.63) follows, since
and (10.64) follows analogously. Statement (10.65) follows by
where in the first equation we used (10.63) with \(T^{-*}\) in place of T and \(K \cap S^\perp \) in place of S. The second equation is trivial, and in the third we used \(T^{-1}K = K.\) In the last we used (10.64).
Now let \((y_1, \dots , y_k) \in {\text {FR}}_k(K),\) and \(S_i = \{y_1, \dots , y_{i-1}\}\) for \(i = 1, \dots , k.\) Then by definition we have \( y_i \, \in \, (K \cap S_i^\perp )^* \, \mathrm{for \, all \,} i. \) Hence for all i we have
where the equation follows from (10.65). Thus \( (T^*y_1, \dots , T^*y_k) \in {\text {FR}}_k(K), \) as required. \(\square \)
Proof of (4)
In this proof, for brevity, we will use the notation
for all \(i \ge 0\) (with the understanding that these sets equal \(K, C, \,\) and \(K \times C, \,\) respectively, when \(i=0\)).
We will prove the equivalence (1.9) \(\Leftrightarrow \) (1.10) together with the relation
Clearly, both hold for \(k=1, \,\) so let us assume \(k \ge 2\) and that we proved them with \(k-1\) in place of k. By definition (1.9) is equivalent to
and
By the induction hypothesis (10.67) is equivalent to \((y_1, \dots , y_{k-1}) \in {\text {FR}}_{k-1}(K)\) and \((z_1, \dots , z_{k-1}) \in {\text {FR}}_{k-1}(C).\) So the proof is complete, if we show
To prove (10.69) we see that
where the first equation is by definition, the second comes from the induction hypothesis, and the third follows from \(y_{k-1} \in K_{y,k-2}^*, \, z_{k-1} \in C_{z,k-2}^*.\) Thus the proof is complete. \(\square \)
Proof of Lemma 2
First let us note that \(T \in {\text {Aut}}({{\mathcal {S}}_+^{n}})\) (cf. Eq. 1.7) iff \(T(x) = t^T x t\) for some invertible matrix t.
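As a quick numerical check of this characterization (a sketch with random data, not part of the proof): the map \(x \mapsto t^Txt\) sends psd matrices to psd matrices, and its inverse is the map of the same form built from \(t^{-1},\) so it is indeed an automorphism of \({{\mathcal {S}}_+^{n}}.\)

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
t = rng.standard_normal((n, n))   # invertible with probability 1
g = rng.standard_normal((n, n))
x = g @ g.T                       # a random psd matrix
Tx = t.T @ x @ t                  # T(x) = t^T x t

# T(x) = (g^T t)^T (g^T t) is again psd
assert np.min(np.linalg.eigvalsh(Tx)) >= -1e-8
# the inverse map is built from t^{-1}, so T maps S_+^n onto itself
ti = np.linalg.inv(t)
assert np.allclose(ti.T @ Tx @ ti, x)
```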
We prove the lemma by induction. Suppose that \(\ell \ge 0\) is an integer, and that we have computed an invertible matrix t such that
and the block sizes in the latter sequence are \(p_1, \dots , p_{\ell }, \,\) respectively. Both of these statements hold with \(\ell =0.\) If \(\ell = k, \,\) we stop.
Otherwise, define \(p := p_1 + \dots + p_{\ell }\) and \(y_i^\prime := t^Ty_it\) for \(i=1, \dots , k.\) Let
Then \(y_{\ell +1}' \in K^*, \,\) and K and \(K^*\) are of the form
[displayed block forms of K and \(K^*\) omitted]
where, again, the symbol \(\oplus \) stands for a psd submatrix, and \(\times \) for a submatrix with arbitrary elements.
Let z be the lower \(n-p\) by \(n-p\) block of \(y_{\ell +1}^\prime .\) Since \(z \succeq 0,\) there is an invertible matrix q such that
where \(p_{\ell +1}\) is the rank of z.
Let \(v := I_p \oplus q\) and replace t by tv. Then by part (3) in Lemma 1 statement (10.70) still holds, and by the choice of v Eq. (10.71) now holds with \(\ell +1\) in place of \(\ell .\) This completes the proof. \(\square \)
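The matrix q used above can be computed from an eigendecomposition of z: scale each eigenvector with positive eigenvalue by \(\lambda ^{-1/2}\) and keep the rest. The sketch below (the function name is ours) carries out one such construction, under the assumption that the normal form wanted is \(q^Tzq = I_{p_{\ell +1}} \oplus 0.\)

```python
import numpy as np

def psd_normal_form(z, tol=1e-10):
    """Return an invertible q with q^T z q = I_r (+) 0, r = rank(z),
    for a psd matrix z. (Illustrative construction via eigendecomposition.)"""
    w, U = np.linalg.eigh(z)
    order = np.argsort(-w)                 # positive eigenvalues first
    w, U = w[order], U[:, order]
    scale = np.where(w > tol, 1.0 / np.sqrt(np.maximum(w, tol)), 1.0)
    q = U * scale                          # q = U @ diag(scale), invertible
    r = int(np.sum(w > tol))
    return q, r

g = np.random.default_rng(2).standard_normal((5, 2))
z = g @ g.T                                # psd of rank 2
q, r = psd_normal_form(z)
assert r == 2
assert np.allclose(q.T @ z @ q, np.diag([1.0] * r + [0.0] * (5 - r)), atol=1e-8)
```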
Appendix 3: Proof of Theorem 8
Proof of (1)
Let us assume that condition (5.36) is violated; we will construct \((a_1, a_2) \in {\text {FR}}(K^*)\) and \((y_1, \dots , y_{\ell +1}) \in {\text {FR}}(K)\) that satisfy (5.38) and (5.39) (with \(k=1, \,\) and \(\ell \) equal to the degree of singularity of \({\mathcal {R}}(A) \cap K\)).
First, we choose
and let F be the minimal cone of \({\mathcal {R}}(A) \cap K \,\) (i.e., the smallest face of K that contains \(a_1\)). Then
where the first equality comes from \(a_1 \in {\text {ri}}\,F\) and the second can be found e.g., in [24]. Hence
hold. We next choose the \(y_j.\) First we pick \(y_1, \dots , y_\ell \in {\mathcal {N}}(A^*)\) such that
and \((y_1, \dots , y_\ell ) \in {\text {FR}}(K).\) Since \(a_2 \not \in {\text {lin}}\,F \,\) (otherwise \(a_2\) would be in \({\text {dir}}\,(a_1,K)\)) we can then choose \(y_{\ell +1} \in F^\perp \) such that
hold. Thus \((a_1, a_2)\) and \((y_1, \dots , y_{\ell +1})\) are as required, and the proof is complete. \(\square \)
Proof of (2)
We fix F and G as stated. Since G is not exposed, and F is the smallest exposed face of K that contains G, we have
(see Eq. (5.40) and the discussion afterwards). For brevity, let us define \(F^\triangle = K^* \cap F^\perp , G^\triangle = K^* \cap G^\perp ,\) and for a face H of \(K^*\) we define \(H^\triangle = K \cap H^\perp .\) Thus, since F is an exposed face, we also have
We will choose \((a_1, a_2)\) and \((y_1, y_2)\) such that
(i.e., to satisfy (5.41) and (5.42) with \(k=1, \, \ell = 1\)).
We first choose \(y_1 \in {\text {ri}}\,F^\triangle . \,\) Next, since \(G \subsetneq F, \,\) we can choose \(y_2 \, \in \, (F^* \cap G^\perp ) \setminus F^\perp .\) Hence
where the first equation comes from \(y_1 \in {\text {ri}}\,F^\triangle . \,\) We thus satisfied (11.74) and (11.75).
Next we choose \(a_1\) and \(a_2:\) we choose \(a_1 \in {\text {ri}}\,G, \,\) and \(a_2 \in {\text {lin}}\,F\) to satisfy (11.77) (this can be done since \(y_2 \not \in F^\perp \)). Thus (11.73) and (11.76) also hold. We claim that (11.72) holds as well. To see this, we observe
where the first equation follows from \(a_1 \in {\text {ri}}\,G, \,\) and the second from \(F^\triangle = G^\triangle .\) Hence
so (11.72) follows, and this completes the proof. \(\square \)
Liu, M., Pataki, G. Exact duals and short certificates of infeasibility and weak infeasibility in conic linear programming. Math. Program. 167, 435–480 (2018). https://doi.org/10.1007/s10107-017-1136-5
Keywords
- Conic linear programming
- Semidefinite programming
- Facial reduction
- Exact duals
- Exact certificates of infeasibility and weak infeasibility
- Closedness of the linear image of a closed convex cone