Exact duals and short certificates of infeasibility and weak infeasibility in conic linear programming

Full Length Paper · Mathematical Programming, Series A

Abstract

In conic linear programming—in contrast to linear programming—the Lagrange dual is not an exact dual: it may not attain its optimal value, or there may be a positive duality gap. The corresponding Farkas’ lemma is also not exact (it does not always prove infeasibility). We describe exact duals, and certificates of infeasibility and weak infeasibility for conic LPs which are nearly as simple as the Lagrange dual, but do not rely on any constraint qualification. Some of our exact duals generalize the SDP duals of Ramana, and Klep and Schweighofer to the context of general conic LPs. Some of our infeasibility certificates generalize the row echelon form of a linear system of equations: they consist of a small, trivially infeasible subsystem obtained by elementary row operations. We prove analogous results for weakly infeasible systems. We obtain some fundamental geometric corollaries: an exact characterization of when the linear image of a closed convex cone is closed, and an exact characterization of nice cones. Our infeasibility certificates provide algorithms to generate all infeasible conic LPs over several important classes of cones; and all weakly infeasible SDPs in a natural class. Using these algorithms we generate a public domain library of infeasible and weakly infeasible SDPs. The status of our instances can be verified by inspection in exact arithmetic, but they turn out to be challenging for commercial and research codes.


Notes

  1. Performing a sequence of operations of type (1) and (2) is the same as replacing A by AM and c by \(M^Tc,\) where M is an \(m \times m\) invertible matrix.

  2. To outline a proof of (1.4), assume for simplicity \(\lambda =2. \,\) Then

    $$\begin{aligned} y := \begin{pmatrix} \epsilon &{} 1 \\ 1 &{} 1/\epsilon \end{pmatrix} \succeq 0 \; \mathrm{for \, all} \, \epsilon > 0, \end{aligned}$$

    and \(A^*y = (\epsilon , 2)^T, \,\) so \((0,2)^T \in {\text {cl}}\,(A^* {{\mathcal {S}}_+^{2}});\) the rest of (1.4) is straightforward to verify.
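
    A quick numerical check of this computation (only an illustration; we take \(A^*y = (y_{11}, \, 2y_{12})^T,\) which is one adjoint consistent with the values quoted above):

    ```python
    import numpy as np

    # y(eps) from the note: its eigenvalues are 0 and eps + 1/eps, so y(eps) is psd
    # for every eps > 0, while A^*y(eps) = (eps, 2)^T tends to (0, 2)^T as eps -> 0.
    for eps in [1.0, 1e-2, 1e-4, 1e-6]:
        y = np.array([[eps, 1.0], [1.0, 1.0 / eps]])
        min_eig = np.linalg.eigvalsh(y).min()          # ~0 up to roundoff
        print(f"eps={eps:g}  min eigenvalue={min_eig:.1e}  A*y=({y[0, 0]:g}, {2 * y[0, 1]:g})")
    # (0, 2)^T is approached but never attained: y_11 = 0 and y psd force y_12 = 0,
    # so (0, 2)^T lies in cl(A^* S_+^2) but not in A^* S_+^2.
    ```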

References

  1. Auslender, A.: Closedness criteria for the image of a closed set by a linear operator. Numer. Funct. Anal. Optim. 17, 503–515 (1996)

  2. Barker, G.P., Carlson, D.: Cones of diagonally dominant matrices. Pac. J. Math. 57, 15–32 (1975)

  3. Bauschke, H., Borwein, J.M.: Conical open mapping theorems and regularity. In: Proceedings of the Centre for Mathematics and its Applications 36, pp. 1–10. Australian National University (1999)

  4. Berman, A.: Cones, Matrices and Mathematical Programming. Springer, Berlin (1973)

  5. Bertsekas, D., Tseng, P.: Set intersection theorems and existence of optimal solutions. Math. Progr. 110, 287–314 (2007)

  6. Blum, L., Cucker, F., Shub, M., Smale, S.: Complexity and Real Computation. Springer, Berlin (1998)

  7. Bonnans, F.J., Shapiro, A.: Perturbation Analysis of Optimization Problems. Springer Series in Operations Research. Springer, Berlin (2000)

  8. Borwein, J.M., Lewis, A.S.: Convex Analysis and Nonlinear Optimization: Theory and Examples. CMS Books in Mathematics. Springer, Berlin (2000)

  9. Borwein, J.M., Moors, W.B.: Stability of closedness of convex cones under linear mappings. J. Convex Anal. 16(3–4), 699–705 (2009)

  10. Borwein, J.M., Moors, W.B.: Stability of closedness of convex cones under linear mappings II. J. Nonlinear Anal. Optim. 1(1), 1–7 (2010)

  11. Borwein, J.M., Wolkowicz, H.: Facial reduction for a cone-convex programming problem. J. Aust. Math. Soc. 30, 369–380 (1981)

  12. Borwein, J.M., Wolkowicz, H.: Regularizing the abstract convex program. J. Math. Anal. Appl. 83, 495–530 (1981)

  13. Cheung, V., Wolkowicz, H., Schurr, S.: Preprocessing and regularization for degenerate semidefinite programs. In: Bailey, D., Bauschke, H.H., Garvan, F., Théra, M., Vanderwerff, J.D., Wolkowicz, H. (eds.) Proceedings of Jonfest: A Conference in Honour of the 60th Birthday of Jon Borwein. Springer, Berlin (2013)

  14. Chua, C.B., Tunçel, L.: Invariance and efficiency of convex representations. Math. Progr. B 111, 113–140 (2008)

  15. Drusvyatskiy, D., Pataki, G., Wolkowicz, H.: Coordinate shadows of semidefinite and Euclidean distance matrices. SIAM J. Opt. 25(2), 1160–1178 (2015)

  16. Güler, O.: Foundations of Optimization. Graduate Texts in Mathematics. Springer, Berlin (2010)

  17. Glineur, F.: Proving strong duality for geometric optimization using a conic formulation. Ann. Oper. Res. 105(2), 155–184 (2001)

  18. Gortler, S.J., Thurston, D.P.: Characterizing the universal rigidity of generic frameworks. Discrete Comput. Geom. 51(4), 1017–1036 (2014)

  19. Klep, I., Schweighofer, M.: An exact duality theory for semidefinite programming based on sums of squares. Math. Oper. Res. 38(3), 569–590 (2013)

  20. Krislock, N., Wolkowicz, H.: Explicit sensor network localization using semidefinite representations and facial reductions. SIAM J. Opt. 20, 2679–2708 (2010)

  21. Liu, M., Pataki, G.: Exact duality in semidefinite programming based on elementary reformulations. SIAM J. Opt. 25(3), 1441–1454 (2015)

  22. Lourenco, B., Muramatsu, M., Tsuchiya, T.: Facial reduction and partial polyhedrality. Optimization Online. http://www.optimization-online.org/DB_FILE/2015/11/5224.pdf (2015)

  23. Lourenco, B., Muramatsu, M., Tsuchiya, T.: A structural geometrical analysis of weakly infeasible SDPs. J. Oper. Res. Soc. Jpn. 59(3), 241–257 (2015)

  24. Pataki, G.: The geometry of semidefinite programming. In: Saigal, R., Vandenberghe, L., Wolkowicz, H. (eds.) Handbook of Semidefinite Programming. Kluwer Academic Publishers. Also available from www.unc.edu/~pataki (2000)

  25. Pataki, G.: On the closedness of the linear image of a closed convex cone. Math. Oper. Res. 32(2), 395–412 (2007)

  26. Pataki, G.: On the connection of facially exposed and nice cones. J. Math. Anal. Appl. 400, 211–221 (2013)

  27. Pataki, G.: Strong duality in conic linear programming: facial reduction and extended duals. In: Bailey, D., Bauschke, H.H., Garvan, F., Théra, M., Vanderwerff, J.D., Wolkowicz, H. (eds.) Proceedings of Jonfest: A Conference in Honour of the 60th Birthday of Jon Borwein. Springer. Also available from http://arxiv.org/abs/1301.7717 (2013)

  28. Pataki, G.: Bad semidefinite programs: they all look the same. SIAM J. Opt. 27(1), 146–172 (2017)

  29. Permenter, F., Parrilo, P.: Partial facial reduction: simplified, equivalent SDPs via approximations of the PSD cone. Technical Report. http://arxiv.org/abs/1408.4685 (2014)

  30. Pólik, I., Terlaky, T.: Exact duality for optimization over symmetric cones. Technical Report, Lehigh University, Bethlehem, PA, USA (2009)

  31. Provan, J.S., Shier, D.R.: A paradigm for listing (s, t)-cuts in graphs. Algorithmica 15(4), 351–372 (1996)

  32. Ramana, M.V.: An exact duality theory for semidefinite programming and its complexity implications. Math. Progr. Ser. B 77, 129–162 (1997)

  33. Ramana, M.V., Freund, R.: On the ELSD duality theory for SDP. Technical Report, MIT (1996)

  34. Ramana, M.V., Tunçel, L., Wolkowicz, H.: Strong duality for semidefinite programming. SIAM J. Opt. 7(3), 641–662 (1997)

  35. Read, R., Tarjan, R.: Bounds on backtrack algorithms for listing cycles, paths, and spanning trees. Networks 5, 237–252 (1975)

  36. Renegar, J.: A Mathematical View of Interior-Point Methods in Convex Optimization. MPS-SIAM Series on Optimization. SIAM, Philadelphia (2001)

  37. Rockafellar, R.T.: Convex Analysis. Princeton University Press, Princeton (1970)

  38. Roshchina, V.: Facially exposed cones are not nice in general. SIAM J. Opt. 24, 257–268 (2014)

  39. Waki, H.: How to generate weakly infeasible semidefinite programs via Lasserre’s relaxations for polynomial optimization. Optim. Lett. 6(8), 1883–1896 (2012)

  40. Waki, H., Muramatsu, M.: Facial reduction algorithms for conic optimization problems. J. Optim. Theory Appl. 158(1), 188–215 (2013)


Acknowledgements

We are grateful to the referees, the Associate Editor, and Melody Zhu for their insightful comments, and to Imre Pólik for his help in our work with the SDP solvers.

Author information

Correspondence to Gábor Pataki.

Appendices

Appendix 1: A definition of certificates

In the Introduction we gave an informal definition of certificates of certain properties of conic linear programs, and the results of the paper can be understood relying only on this informal definition. In this section we give a more rigorous definition of certificates; a fully rigorous definition can be found in [6].

We define the set of primal instances as

$$\begin{aligned} {\text {Primal}}\,= & {} \{ \, (A,b) \, | \, A: \mathbb {R}^{m} \rightarrow Y, \, b \in Y \, \}. \end{aligned}$$
(9.58)

We assume that a map \(A: \mathbb {R}^{m} \rightarrow Y\) is represented by a suitable matrix.

Definition 9

Let

$$\begin{aligned} C: {\text {Primal}}\,\rightarrow Y' \end{aligned}$$

be a function, where \(Y'\) is a finite-dimensional Euclidean space. We say that C provides an exact certificate of infeasibility of (P) if there is an algorithm, a “verifier”, which

  1. Takes as input \((A,b)\) and \(C(A,b),\) where \((A,b) \in {\text {Primal}}\,;\)

  2. Outputs “yes” exactly when (P) with data \((A,b)\) is infeasible;

  3. Takes a polynomial number of steps in the size of \((A,b)\) and \(C(A,b)\).

A “step” means either a usual arithmetic operation, or checking membership in a set of the form \((K \cap L)^*\) or \((K^* \cap J)^*, \,\) where L and J are subspaces. By the “size” of \(a \in Y^\prime \) we mean the number of components of a.

We define an exact certificate of infeasibility of (D), and of other properties (say of weak infeasibility) of (P) and of (D) analogously.

The alert reader may notice that for our exact certificate of infeasibility given in part (3) of Theorem 4 it is enough to allow a verifier to check membership in a set of the form \((K \cap L)^*; \) and a similar restriction can be made for our exact certificate of infeasibility of (D).
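
To make the verifier pattern concrete in the simplest setting: when K is the nonnegative orthant, the classical Farkas certificate of infeasibility of \(Ax = b, \, x \ge 0\) is a vector y with \(A^Ty \ge 0\) and \(\langle b, y \rangle < 0,\) and the corresponding verifier needs only arithmetic, since membership in the orthant can be checked by inspection. The sketch below illustrates this LP special case only; it is not the conic certificate of Theorem 4, and the function name is ours.

```python
import numpy as np

def verify_lp_infeasibility(A, b, y):
    """Verifier for the classical Farkas certificate: y proves that
    {x : Ax = b, x >= 0} is empty whenever A^T y >= 0 and <b, y> < 0,
    since any feasible x would give 0 <= <A^T y, x> = <y, Ax> = <b, y> < 0.
    Runs in time polynomial in the sizes of (A, b) and y."""
    return bool(np.all(A.T @ y >= 0) and b @ y < 0)

# x1 + x2 = -1 has no solution with x >= 0; y = 1 certifies this.
A = np.array([[1.0, 1.0]])
b = np.array([-1.0])
print(verify_lp_infeasibility(A, b, np.array([1.0])))   # True
```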

Appendix 2: Proof of Lemmas 1 and 2

Proof of Lemma 1.

Proof of (1)

It is clear that \({\text {FR}}_k(K)\) contains all nonnegative multiples of its elements, so we only need to show that it is convex. To this end, we use the following Claim, whose proof is straightforward:

Claim

If C is a closed, convex cone and \(y, z \in C^*,\) then

$$\begin{aligned} C \cap (y+z)^\perp = C \cap y^\perp \cap z^\perp . \end{aligned}$$

\(\square \)
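
Spelled out, the argument behind the Claim is short: for \(x \in C\) both \(\langle x, y \rangle \) and \(\langle x, z \rangle \) are nonnegative, hence

$$\begin{aligned} x \in C \cap (y+z)^\perp \; \Leftrightarrow \; \langle x, y \rangle + \langle x, z \rangle = 0 \; \Leftrightarrow \; \langle x, y \rangle = \langle x, z \rangle = 0 \; \Leftrightarrow \; x \in C \cap y^\perp \cap z^\perp . \end{aligned}$$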

Let \((y_1, \dots , y_k), (z_1, \dots , z_k) \in {\text {FR}}_k(K).\) We will prove

$$\begin{aligned} (y_1 + z_1, \dots , y_k + z_k) \in {\text {FR}}_k(K), \end{aligned}$$
(10.59)

which will clearly imply (1). To start, for brevity, for \(i=0, \dots , k\) we set

$$\begin{aligned} K_{y,i}= & {} K \cap y_1^\perp \cap \dots \cap y_i^\perp , \\ K_{z,i}= & {} K \cap z_1^\perp \cap \dots \cap z_i^\perp , \\ K_{y+z,i}= & {} K \cap (y_1+z_1)^\perp \cap \dots \cap (y_i+z_i)^\perp \end{aligned}$$

(with the understanding that all these cones equal K when \(i=0\)). We first prove the relations

$$\begin{aligned} y_i\in & {} K_{y+z,i-1}^*, \end{aligned}$$
(10.60)
$$\begin{aligned} z_i\in & {} K_{y+z,i-1}^*, \end{aligned}$$
(10.61)
$$\begin{aligned} K_{y+z,i}= & {} K_{y,i} \cap K_{z,i} \end{aligned}$$
(10.62)

for \(i=1, \dots , k.\) For \(i=1\) the first two hold by definition, and (10.62) follows from the Claim. Suppose now that \(i \ge 2\) and (10.60) through (10.62) hold with \(i-1\) in place of i. Then

$$\begin{aligned} y_i \, \in \, K_{y,i-1}^* \, \subseteq \, (K_{y,i-1} \cap K_{z,i-1})^* \, = \, K_{y+z,i-1}^*, \end{aligned}$$

where the first containment is by definition, the inclusion is trivial (a smaller cone has a larger dual cone), and the equality follows from the induction hypothesis. This proves (10.60), and Eq. (10.61) follows analogously.

Hence

$$\begin{aligned} K_{y+z,i}= & {} K_{y+z,i-1} \cap (y_i + z_i)^\perp \\= & {} K_{y+z,i-1} \cap y_i^\perp \cap z_i^\perp \\= & {} K_{y,i-1} \cap K_{z,i-1} \cap y_i^\perp \cap z_i^\perp \\= & {} K_{y,i} \cap K_{z,i}, \end{aligned}$$

where the first equation is by definition. The second follows from (10.60) and (10.61): since \(K_{y+z,i-1}\) is a closed convex cone, we can apply the Claim with \(C = K_{y+z,i-1}, \, y = y_i, \, z = z_i. \,\) The third is by the induction hypothesis, and the last is by definition. This completes the proof of (10.62).

Now by (10.60), (10.61) and since \(K_{y+z,i-1}^*\) is a convex cone, we deduce that

$$\begin{aligned} y_i + z_i \in K_{y+z,i-1}^* \,\, \mathrm{holds \, for \,} i=1, \dots , k. \end{aligned}$$

This proves (10.59), and completes the proof of (1). \(\square \)

Proof of (2)

Let \(L = K \cap -K,\) assume that K is not a subspace, i.e., \(K \ne L, \) and also assume \(k \ge 2.\) Let us choose a sequence \(\{ y_{1i} \} \subseteq {\text {ri}}\,K^*, \,\) s.t. \(y_{1i} \rightarrow 0.\) Then

$$\begin{aligned} K \cap y_{1i}^\perp= & {} L \Rightarrow (K \cap y_{1i}^\perp )^* = L^\perp \,\, \mathrm{for \, all} \; i, \\ K \cap 0^\perp= & {} K \Rightarrow (K \cap 0^\perp )^* = K^*. \end{aligned}$$

Let \(y_2 \in L^\perp \setminus K^*.\) (Such a \(y_2\) exists, since \(K^* \ne L^\perp .\)) Then \((y_{1i}, y_2, 0, \dots , 0) \in {\text {FR}}_k(K), \,\) and this sequence converges to \((0, y_2, 0, \dots , 0) \not \in {\text {FR}}_k(K). \,\)

Conversely, if K is a subspace, then an easy calculation shows that so is \({\text {FR}}_k(K), \,\) which is hence closed. \(\square \)

Proof of (3)

Let us fix \(T \in {\text {Aut}}(K)\) and let S be an arbitrary set. Then we claim that

$$\begin{aligned} T^{-1}S^*= & {} (T^*S)^*, \end{aligned}$$
(10.63)
$$\begin{aligned} T^{-1}S^\perp= & {} (T^*S)^\perp , \end{aligned}$$
(10.64)
$$\begin{aligned} T^* (K \cap S^\perp )^*= & {} \left( K \cap (T^*S)^\perp \right) ^* \end{aligned}$$
(10.65)

hold. Statement (10.63) follows, since

$$\begin{aligned} y \in T^{-1} S^*\Leftrightarrow & {} Ty \in S^* \\\Leftrightarrow & {} \langle T y, x \rangle \ge 0 \quad \forall x \in S \, \\\Leftrightarrow & {} \langle y, T^*x \rangle \ge 0 \quad \forall x \in S \\\Leftrightarrow & {} y \in (T^*S)^*, \end{aligned}$$

and (10.64) follows analogously. Statement (10.65) follows by

$$\begin{aligned} T^*\left( K \cap S^\perp \right) ^*= & {} \left( T^{-1}(K \cap S^\perp )\right) ^* \\= & {} \left( T^{-1}K \cap T^{-1}S^\perp \right) ^* \\= & {} \left( K \cap T^{-1}S^\perp \right) ^* \\= & {} \left( K \cap (T^*S)^\perp \right) ^*, \end{aligned}$$

where in the first equation we used (10.63) with \(T^{-*}\) in place of T and \(K \cap S^\perp \) in place of S. The second equation is trivial, and in the third we used \(T^{-1}K = K.\) In the last we used (10.64).

Now let \((y_1, \dots , y_k) \in {\text {FR}}_k(K),\) and \(S_i = \{y_1, \dots , y_{i-1}\}\) for \(i = 1, \dots , k.\) Then by definition we have \( y_i \, \in \, (K \cap S_i^\perp )^* \, \mathrm{for \, all \,} i. \) Hence for all i we have

$$\begin{aligned} T^* y_i \in T^*\left( K \cap S_i^\perp \right) ^* =\left( K \cap \left( T^*S_i\right) ^\perp \right) ^*, \end{aligned}$$

where the equation follows from (10.65). Thus \( (T^*y_1, \dots , T^*y_k) \in {\text {FR}}_k(K), \) as required. \(\square \)

Proof of (4)

In this proof, for brevity, we will use the notation

$$\begin{aligned} K_{y,i}= & {} K \cap y_1^\perp \cap \dots \cap y_i^\perp , \\ C_{z,i}= & {} C \cap z_1^\perp \cap \dots \cap z_i^\perp , \\ (K \times C)_{y,z,i}= & {} (K \times C) \cap (y_1, z_1)^\perp \cap \dots \cap (y_{i}, z_{i})^\perp \end{aligned}$$

for all \(i \ge 0\) (with the understanding that these sets equal \(K, C, \,\) and \(K \times C, \,\) respectively, when \(i=0\)).

We will prove the equivalence (1.9) \(\Leftrightarrow \) (1.10) together with the relation

$$\begin{aligned} (K \times C)_{y,z,i} \, = \, K_{y,i} \times C_{z,i} \, \mathrm{for} \, i \le k-1. \end{aligned}$$
(10.66)

Clearly, both hold for \(k=1, \,\) so let us assume \(k \ge 2\) and that we proved them with \(k-1\) in place of k. By definition (1.9) is equivalent to

$$\begin{aligned} \bigl ( (y_1, z_1), \dots , (y_{k-1}, z_{k-1}) \bigr ) \in {\text {FR}}_{k-1}(K \times C) \end{aligned}$$
(10.67)

and

$$\begin{aligned} (y_k, z_k) \in (K \times C)_{y,z,k-1}^*. \end{aligned}$$
(10.68)

By the induction hypothesis (10.67) is equivalent to \((y_1, \dots , y_{k-1}) \in {\text {FR}}_{k-1}(K)\) and \((z_1, \dots , z_{k-1}) \in {\text {FR}}_{k-1}(C).\) Since the dual of a product cone is the product of the duals, the proof is complete if we show

$$\begin{aligned} (K \times C)_{y,z,k-1} \, = \, K_{y,k-1} \times C_{z,k-1}. \end{aligned}$$
(10.69)

To prove (10.69) we see that

$$\begin{aligned} (K \times C)_{y,z,k-1}= & {} (K \times C)_{y,z,k-2} \cap (y_{k-1}, z_{k-1})^\perp \\= & {} \left( K_{y, k-2} \times C_{z,k-2} \right) \cap (y_{k-1}, z_{k-1})^\perp \\= & {} \left( K_{y, k-2} \cap y_{k-1}^\perp \right) \times \left( C_{z, k-2} \cap z_{k-1}^\perp \right) \\= & {} K_{y,k-1} \times C_{z,k-1}, \end{aligned}$$

where the first equation is by definition, the second comes from the induction hypothesis, and the third follows from \(y_{k-1} \in K_{y,k-2}^*, \, z_{k-1} \in C_{z,k-2}^*.\) Thus the proof is complete. \(\square \)

Proof of Lemma 2

First let us note that \(T \in {\text {Aut}}({{\mathcal {S}}_+^{n}})\) (cf. Eq. 1.7) iff \(T(x) = t^T x t\) for some invertible matrix t.

We prove the lemma by induction. Suppose that \(\ell \ge 0\) is an integer, and that we have computed an invertible matrix t such that

$$\begin{aligned} \left( t^Ty_1t, \dots , t^Ty_{k}t\right)\in & {} {\text {FR}}\left( {{\mathcal {S}}_+^{n}}\right) , \end{aligned}$$
(10.70)
$$\begin{aligned} \left( t^Ty_1t, \dots , t^Ty_{\ell }t\right)\in & {} {\text {REGFR}}\left( {{\mathcal {S}}_+^{n}}\right) , \end{aligned}$$
(10.71)

and the block sizes in the latter sequence are \(p_1, \dots , p_{\ell }, \,\) respectively. Both of these statements hold with \(\ell =0.\) If \(\ell = k, \,\) we stop.

Otherwise, define \(p := p_1 + \dots + p_{\ell }\) and \(y_i^\prime := t^Ty_it\) for \(i=1, \dots , k.\) Let

$$\begin{aligned} K = {{\mathcal {S}}_+^{n}} \cap y_1^{\prime \perp } \cap \dots \cap y_{\ell }^{\prime \perp }. \end{aligned}$$

Then \(y_{\ell +1}' \in K^*, \,\) and K and \(K^*\) are of the form

$$\begin{aligned} K \, = \, \left\{ \begin{pmatrix} 0 &{}\quad 0 \\ 0 &{}\quad \oplus \end{pmatrix} \right\} , \quad K^* \, = \, \left\{ \begin{pmatrix} \times &{}\quad \times \\ \times &{}\quad \oplus \end{pmatrix} \right\} , \end{aligned}$$

where the diagonal blocks are of order p and \(n-p,\) and where, again, the symbol \(\oplus \) stands for a psd submatrix, and \(\times \) for a submatrix with arbitrary elements.

Let z be the lower \(n-p\) by \(n-p\) block of \(y_{\ell +1}^\prime .\) Since \(z \succeq 0,\) there is an invertible matrix q such that

$$\begin{aligned} q^T z q \, = \, \begin{pmatrix} I_{p_{\ell +1}} &{}\quad 0 \\ 0 &{} \quad 0 \end{pmatrix}, \end{aligned}$$

where \(p_{\ell +1}\) is the rank of z.

Let \(v := I_p \oplus q\) and replace t by tv. Then by part (3) in Lemma 1 statement (10.70) still holds, and by the choice of v Eq. (10.71) now holds with \(\ell +1\) in place of \(\ell .\) This completes the proof. \(\square \)
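
The proof of Lemma 2 is in effect an algorithm: at each step the trailing \((n-p) \times (n-p)\) block of the next matrix is brought to the form \(\mathrm{diag}(I_{p_{\ell +1}}, 0)\) by a congruence. A minimal numerical sketch of that single step, using an eigendecomposition (the function name and tolerance are ours, and this is only one way to compute such a q):

```python
import numpy as np

def reduce_step(z, tol=1e-10):
    """Given a psd matrix z, return an invertible q with q^T z q = diag(I_r, 0),
    where r is the (numerical) rank of z.  One possible route: eigendecomposition."""
    w, u = np.linalg.eigh(z)                  # z = u diag(w) u^T, w ascending
    order = np.argsort(w)[::-1]               # largest eigenvalues first
    w, u = w[order], u[:, order]
    r = int(np.sum(w > tol))
    scale = np.concatenate([1.0 / np.sqrt(w[:r]), np.ones(len(w) - r)])
    q = u @ np.diag(scale)                    # invertible: orthogonal times positive diagonal
    return q, r

# toy check on a rank-one psd matrix of order 3
z = np.outer([1.0, 2.0, 0.0], [1.0, 2.0, 0.0])
q, r = reduce_step(z)
print(r)                                      # 1
print(np.round(q.T @ z @ q, 8))               # diag(1, 0, 0) up to roundoff
```

In the proof the outer loop then replaces t by \(t(I_p \oplus q)\) and increases \(\ell \) by one.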

Appendix 3: Proof of Theorem 8

Proof of (1)

Let us assume that condition (5.36) is violated; we will construct \((a_1, a_2) \in {\text {FR}}(K^*)\) and \((y_1, \dots , y_{\ell +1}) \in {\text {FR}}(K)\) that satisfy (5.38) and (5.39) (with \(k=1, \,\) and \(\ell \) equal to the degree of singularity of \({\mathcal {R}}(A) \cap K\)).

First, we choose

$$\begin{aligned} a_1\in & {} {\text {ri}}\,({\mathcal {R}}(A) \cap K), \\ a_2\in & {} {\mathcal {R}}(A) \cap ({\text {cl}}\,{\text {dir}}\,(a_1, K) \setminus {\text {dir}}\,(a_1,K)), \end{aligned}$$

and let F be the minimal cone of \({\mathcal {R}}(A) \cap K \,\) (i.e., the smallest face of K that contains \(a_1\)). Then

$$\begin{aligned} \left( K^* \cap a_1^\perp \right) ^* \, = \, \left( K^* \cap F^\perp \right) ^* \, = \, {\text {cl}}\,{\text {dir}}\,(a_1, K), \end{aligned}$$

where the first equality comes from \(a_1 \in {\text {ri}}\,F\) and the second can be found e.g., in [24]. Hence

$$\begin{aligned} a_1, a_2\in & {} {\mathcal {R}}(A), \\ (a_1, a_2)\in & {} {\text {FR}}(K^*) \end{aligned}$$

hold. We next choose the \(y_j.\) First we pick \(y_1, \dots , y_\ell \in {\mathcal {N}}(A^*)\) such that

$$\begin{aligned} F = K \cap y_1^\perp \cap \dots \cap y_\ell ^\perp , \end{aligned}$$

and \((y_1, \dots , y_\ell ) \in {\text {FR}}(K).\) Since \(a_2 \not \in {\text {lin}}\,F \,\) (otherwise \(a_2\) would be in \({\text {dir}}\,(a_1,K)\)) we can then choose \(y_{\ell +1} \in F^\perp \) such that

$$\begin{aligned} \langle a_1, y_{\ell +1} \rangle= & {} 0, \\ \langle a_2, y_{\ell +1} \rangle= & {} -1 \end{aligned}$$

hold. Thus \((a_1, a_2)\) and \((y_1, \dots , y_{\ell +1})\) are as required, and the proof is complete. \(\square \)

Proof of (2)

We fix F and G as stated. Since G is not exposed, and F is the smallest exposed face of K that contains G,  we have

$$\begin{aligned} G \subsetneq F, \, K^* \cap G^\perp = K^* \cap F^\perp \end{aligned}$$

(see Eq. (5.40) and the discussion afterwards). For brevity, let us define \(F^\triangle = K^* \cap F^\perp , G^\triangle = K^* \cap G^\perp ,\) and for a face H of \(K^*\) we define \(H^\triangle = K \cap H^\perp .\) Thus, since F is an exposed face, we also have

$$\begin{aligned} F^{\triangle \triangle } = F. \end{aligned}$$

We will choose \((a_1, a_2)\) and \((y_1, y_2)\) such that

$$\begin{aligned} (a_1, a_2)\in & {} {\text {FR}}(K^*), \end{aligned}$$
(11.72)
$$\begin{aligned} a_1, a_2\in & {} {\text {lin}}\,F, \end{aligned}$$
(11.73)
$$\begin{aligned} (y_1, y_2)\in & {} {\text {FR}}(K), \end{aligned}$$
(11.74)
$$\begin{aligned} y_1\in & {} F^\perp , \end{aligned}$$
(11.75)
$$\begin{aligned} \langle a_1, y_2 \rangle= & {} 0, \end{aligned}$$
(11.76)
$$\begin{aligned} \langle a_2, y_2 \rangle= & {} -1 \end{aligned}$$
(11.77)

(i.e., to satisfy (5.41) and (5.42) with \(k=1, \, \ell = 1\)).

We first choose \(y_1 \in {\text {ri}}\,F^\triangle . \,\) Next, since \(G \subsetneq F, \,\) we can choose \(y_2 \, \in \, (F^* \cap G^\perp ) \setminus F^\perp .\) Hence

$$\begin{aligned} K \cap y_1^\perp = K \cap (F^\triangle )^\perp = F^{\triangle \triangle } = F, \end{aligned}$$

where the first equation comes from \(y_1 \in {\text {ri}}\,F^\triangle . \,\) Thus (11.74) and (11.75) are satisfied.

Next we choose \(a_1\) and \(a_2:\) we choose \(a_1 \in {\text {ri}}\,G, \,\) and \(a_2 \in {\text {lin}}\,F\) to satisfy (11.77) (this can be done since \(y_2 \not \in F^\perp \)). Thus (11.73) and (11.76) also hold. We claim that (11.72) holds as well. To see this, we observe

$$\begin{aligned} K^* \cap a_1^\perp \, = \, K^* \cap G^\perp \, = \, K^* \cap F^\perp , \end{aligned}$$

where the first equation follows from \(a_1 \in {\text {ri}}\,G, \,\) and the second from \(F^\triangle = G^\triangle .\) Hence

$$\begin{aligned} \left( K^* \cap a_1^\perp \right) ^* \, = \, \left( K^* \cap F^\perp \right) ^* \, \supseteq \, {\text {lin}}\,F \, \ni a_2, \end{aligned}$$

so (11.72) follows, and this completes the proof. \(\square \)

About this article

Cite this article

Liu, M., Pataki, G. Exact duals and short certificates of infeasibility and weak infeasibility in conic linear programming. Math. Program. 167, 435–480 (2018). https://doi.org/10.1007/s10107-017-1136-5

