
Prophet secretary through blind strategies

  • Full Length Paper
  • Series A
  • Mathematical Programming

Abstract

In the classic prophet inequality, a well-known problem in optimal stopping theory, samples from independent random variables (possibly differently distributed) arrive online. A gambler who knows the distributions, but cannot see the future, must decide at each point in time whether to stop and pick the current sample or to continue and lose that sample forever. The goal of the gambler is to maximize the expected value of what she picks, and the performance measure is the worst case ratio between the expected value the gambler gets and what a prophet who sees all the realizations in advance gets. In the late seventies, Krengel and Sucheston (Bull Am Math Soc 83(4):745–747, 1977) established that this worst case ratio is 0.5. A particularly interesting variant is the so-called prophet secretary problem, in which the only difference is that the samples arrive in a uniformly random order. For this variant several algorithms are known to achieve a constant of \(1-1/e \approx 0.632\), and very recently this barrier was slightly improved by Azar et al. (in: Proceedings of the ACM conference on economics and computation, EC, 2018). In this paper we introduce a new type of multi-threshold strategy, called blind strategy. Such a strategy sets a nonincreasing sequence of thresholds that depends only on the distribution of the maximum of the random variables, and the gambler stops the first time a sample surpasses the threshold of the stage. Our main result shows that these strategies can achieve a constant of 0.669 for the prophet secretary problem, improving upon the best known result of Azar et al. (in: Proceedings of the ACM conference on economics and computation, EC, 2018), and even that of Beyhaghi et al. (Improved approximations for posted price and second price mechanisms. CoRR arXiv:1807.03435, 2018). The crux of the analysis is a very precise study of the underlying stopping time distribution for the gambler's strategy, which is inspired by the theory of Schur-convex functions.
We further prove that our family of blind strategies cannot lead to a constant better than 0.675. Finally we prove that no algorithm for the gambler can achieve a constant better than \(\sqrt{3}-1 \approx 0.732\), which also improves upon a recent result of Azar et al. (in: Proceedings of the ACM conference on economics and computation, EC, 2018). This implies that the upper bound on what the gambler can get in the prophet secretary problem is strictly lower than what she can get in the i.i.d. case. This constitutes the first separation between the prophet secretary problem and the i.i.d. prophet inequality.
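As an illustration of the model (our sketch, not the paper's algorithm: the instance, the sample size n = 5, and the single-threshold rule are assumptions made for the example), the simplest blind rule uses one threshold that depends only on the distribution of the maximum. Below we set \(\tau \) so that \(P(\max _i X_i \le \tau ) = 1/e\) and estimate the gambler-to-prophet ratio for i.i.d. Exp(1) samples arriving in uniformly random order:

```python
# Monte Carlo sketch of a single-threshold "blind" rule for the prophet
# secretary problem. The threshold depends only on the law of the maximum:
# we pick tau with P(max <= tau) = 1/e. Instance: n i.i.d. Exp(1) samples.
import math
import random

random.seed(0)
n, trials = 5, 200_000

# (1 - e^{-tau})^n = 1/e  =>  tau = -ln(1 - e^{-1/n})
tau = -math.log(1.0 - math.exp(-1.0 / n))

gambler = prophet = 0.0
for _ in range(trials):
    xs = [random.expovariate(1.0) for _ in range(n)]
    random.shuffle(xs)                                # uniformly random arrival order
    prophet += max(xs)                                # prophet takes the best realization
    gambler += next((x for x in xs if x > tau), 0.0)  # stop at first sample above tau

ratio = gambler / prophet
print(f"tau = {tau:.3f}, empirical ratio = {ratio:.3f}")
```

On this particular instance the empirical ratio is well above the worst-case constant \(1-1/e\) discussed above; the paper's blind strategies use a whole nonincreasing sequence of such thresholds rather than a single one.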


Notes

  1. Here [n] denotes the set \(\{1,\ldots ,n\}\).

References

  1. Abolhassani, M., Ehsani, S., Esfandiari, H., Hajiaghayi, M.T., Kleinberg, R., Lucier, B.: Beating \(1-1/e\) for ordered prophets. In: Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing, STOC, pp. 61–71 (2017)

  2. Alaei, S., Hartline, J., Niazadeh, R., Pountourakis, E., Yuan, Y.: Optimal auctions vs. anonymous pricing. In: Proceedings of the 56th Annual Symposium on Foundations of Computer Science, FOCS, pp. 1446–1463 (2015)

  3. Azar, Y., Chiplunkar, A., Kaplan, H.: Prophet secretary: surpassing the \(1-1/e\) barrier. In: Proceedings of the ACM Conference on Economics and Computation, EC, pp. 303–318 (2018)

  4. Azar, Y., Kleinberg, R., Weinberg, M.: Prophet inequalities with limited information. In: Proceedings of the 25th Annual ACM-SIAM Symposium on Discrete algorithms, SODA, pp. 1358–1377 (2014)

  5. Beyhaghi, H., Golrezaei, N., Paes Leme, R., Pal, M., Sivan, B.: Improved approximations for posted price and second price mechanisms. CoRR arXiv:1807.03435 (2018)

  6. Correa, J., Duetting, P., Fischer, F., Schewior, K.: Prophet inequalities for iid random variables from an unknown distribution. In: Proceedings of the 20th ACM Conference on Economics and Computation, EC, pp. 3–17 (2019)

  7. Chawla, S., Hartline, J., Malec, D.L., Sivan, B.: Multi-parameter mechanism design and sequential posted pricing. In: Proceedings of the 42nd Annual ACM SIGACT Symposium on Theory of Computing, STOC, pp. 311–320 (2010)

  8. Correa, J., Foncea, P., Hoeksma, R., Oosterwijk, T., Vredeveld, T.: Posted price mechanisms for a random stream of customers. In: Proceedings of the ACM Conference on Economics and Computation, EC, pp. 169–186 (2017)

  9. Correa, J., Foncea, P., Pizarro, D., Verdugo, V.: From pricing to prophets and back! Oper. Res. Lett. 47(1), 25–29 (2019)


  10. Dütting, P., Kesselheim, T., Lucier, B.: An O(log log m) prophet inequality for subadditive combinatorial auctions. CoRR arXiv:2004.09784 (2020)

  11. Dütting, P., Feldman, M., Kesselheim, T., Lucier, B.: Prophet Inequalities made easy: stochastic optimization by pricing nonstochastic inputs. SIAM J. Comput. 49(3), 540–582 (2020)


  12. Einav, L., Farronato, C., Levin, J., Sundaresan, N.: Auctions versus posted prices in online markets. J. Polit. Econ. 126(1), 178–215 (2018)


  13. Esfandiari, H., Hajiaghayi, M., Liaghat, V., Monemizadeh, M.: Prophet secretary. In: Proceedings of the 23rd Annual European Symposium, ESA, pp. 496–508 (2015)

  14. Ehsani, S., Hajiaghayi, M., Kesselheim, T., Singla, S.: Prophet secretary for combinatorial auctions and matroids. In: Proceedings of the 29th Annual ACM-SIAM Symposium on Discrete Algorithms, SODA, pp. 700–714 (2018)

  15. Ehsani, S., Hajiaghayi, M., Kesselheim, T., Singla, S.: Combinatorial auctions via posted prices. In: Proceedings of the 26th Annual ACM-SIAM Symposium on Discrete Algorithms, SODA, pp. 123–135 (2014)

  16. Ezra, T., Feldman, M., Gravin, N., Tang, Z.: Online stochastic max-weight matching: prophet inequality for vertex and edge arrival models. In: Proceedings of the ACM Conference on Economics and Computation, EC, 2020, to appear

  17. Feldman, M., Gravin, N., Lucier, B.: Combinatorial auctions via posted prices. In: Proceedings of the 26th Annual ACM-SIAM Symposium on Discrete Algorithms, SODA, pp. 123–135 (2015)

  18. Gilbert, J.P., Mosteller, F.: Recognizing the maximum of a sequence. J. Am. Stat. Assoc. 61(313), 35–76 (1966)


  19. Gravin, N., Wang, H.: Prophet inequality for bipartite matching: merits of being simple and non adaptive. In: Proceedings of the ACM Conference on Economics and Computation, EC, pp. 93–109 (2019)

  20. Hajiaghayi, M., Kleinberg, R., Sandholm, T.: Automated online mechanism design and prophet inequalities. In: Proceedings of the 22nd AAAI Conference on Artificial Intelligence, AAAI, pp. 58–65 (2007)

  21. Hill, T.P., Kertz, R.P.: Comparisons of stop rule and supremum expectations of i.i.d. random variables. Ann. Probab. 10(2), 336–345 (1982)


  22. Kertz, R.P.: Stop rule and supremum expectations of i.i.d. random variables: a complete comparison by conjugate duality. J. Multivar. Anal. 19(1), 88–112 (1986)


  23. Kleinberg, R., Weinberg, S.M.: Matroid prophet inequalities. In: Proceedings of the 44th Annual ACM SIGACT Symposium on Theory of Computing, STOC, pp. 123–136 (2012)

  24. Kleinberg, R., Weinberg, S.M.: Matroid prophet inequalities and applications to multi-dimensional mechanism design. Games Econ. Behav. 113, 97–115 (2019)


  25. Krengel, U., Sucheston, L.: Semiamarts and finite values. Bull. Am. Math. Soc. 83(4), 745–747 (1977)


  26. Krengel, U., Sucheston, L.: On semiamarts, amarts, and processes with finite value. Adv. Probab. 4, 197–266 (1978)


  27. Myerson, R.B.: Optimal auction design. Math. Oper. Res. 6(1), 58–73 (1981)


  28. Pecaric, J., Proschan, F., Tong, Y.: Convex Functions, Partial Orderings, and Statistical Applications. Academic Press, Cambridge (1992)


  29. Rubinstein, A., Wang, J.Z., Weinberg, S.M.: The prophet inequality can be solved optimally with a single set of samples. In: Proceedings of the 11th Innovations in Theoretical Computer Science Conference, ITCS, pp. 60:1–60:10 (2020)

  30. Samuel-Cahn, E.: Comparisons of threshold stop rule and maximum for independent nonnegative random variables. Ann. Probab. 12(4), 1213–1216 (1983)



Author information


Corresponding author

Correspondence to Bruno Ziliotto.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Partially supported by CONICYT-Chile through grant FONDECYT 1190043, by ECOS-CONICYT through grant C15E03, and by a Google Research for Latin America Award. A preliminary version of this paper appeared in the Proceedings of the 30th ACM-SIAM Symposium on Discrete Algorithms (SODA 2019).

Appendix A Missing proofs in Lemma


Lemma A.1

Consider \(0 < \beta \le 1\), \(0 \le \gamma \le \beta \) and the optimization problem

$$\begin{aligned} (P) \left\{ \begin{array}{l c} \min _{x} &{} f_{ \lambda }( x ) := \frac{ 1 - x }{ 1 - \lambda + \lambda x } + \frac{ 1 - \frac{ \gamma }{x} }{ 1 - \lambda + \lambda \frac{ \beta }{x} } \\ s.t. &{} \beta \le x \le 1 \end{array} \right. . \end{aligned}$$

If \(\lambda \in [0, 1/2]\), then the value of (P) is \(f_{\lambda }(1)\).
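Lemma A.1 lends itself to a numerical sanity check (ours, not part of the proof; the grid sizes are arbitrary choices): evaluate \(f_{\lambda }\) on a grid over \((\lambda , \beta , \gamma , x)\) and verify that the grid minimum never falls below \(f_{\lambda }(1)\).

```python
# Grid check (not a proof) of Lemma A.1: for lambda in [0, 1/2],
# the minimum of f_lambda over [beta, 1] is attained at x = 1.
def f(x, lam, beta, gamma):
    return (1 - x) / (1 - lam + lam * x) + (1 - gamma / x) / (1 - lam + lam * beta / x)

ok = True
steps = 20
for li in range(steps + 1):              # lambda ranges over [0, 1/2]
    lam = 0.5 * li / steps
    for bi in range(1, steps + 1):       # beta ranges over (0, 1]
        beta = bi / steps
        for gi in range(bi + 1):         # gamma ranges over [0, beta]
            gamma = beta * gi / bi
            grid_min = min(f(beta + (1 - beta) * k / 200, lam, beta, gamma)
                           for k in range(201))
            if grid_min < f(1.0, lam, beta, gamma) - 1e-9:
                ok = False
print(ok)
```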

In order to prove Lemma A.1, we will prove three conditions that will imply the result, namely:

  1. \(f_{\lambda }( \beta ) \ge f_{\lambda }( 1 )\).

  2. \(x = 1\) is a local minimum.

  3. There exists at most one critical point in the interval \([\beta , 1]\).

Each of these conditions is formally stated in the next three lemmata.

Lemma A.2

Consider \(0 < \beta \le 1\), \(0 \le \gamma \le \beta \) and \(f_{ \lambda }( x ) := \frac{ 1 - x }{ 1 - \lambda + \lambda x } + \frac{ 1 - \frac{ \gamma }{x} }{ 1 - \lambda + \lambda \frac{ \beta }{x} }\). If \(0 \le \lambda < 1\), then

$$\begin{aligned} f_{\lambda }( \beta ) \ge f_{\lambda }( 1 ). \end{aligned}$$

Proof

By direct computation, we have that

$$\begin{aligned} f_{\lambda }( \beta )&\ge f_{\lambda }( 1 ) \\&\Leftrightarrow \frac{ 1 - \beta }{ 1 - \lambda + \lambda \beta } + \frac{ 1 - \gamma / \beta }{ 1 - \lambda + \lambda } \ge 0 + \frac{ 1 - \gamma }{ 1 - \lambda + \lambda \beta } \\&\Leftrightarrow \frac{ \beta - \gamma }{ \beta } \ge \frac{ \beta - \gamma }{ 1 - \lambda + \lambda \beta } \\&\Leftrightarrow ( \beta - \gamma )( 1 - \lambda )( 1 - \beta ) \ge 0, \end{aligned}$$

which is true by assumption. \(\square \)
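The inequality of Lemma A.2 can also be checked directly at random parameter values (our sanity check, not part of the proof; the sample count and tolerance are arbitrary):

```python
# Random check (not a proof) of Lemma A.2: f_lambda(beta) >= f_lambda(1)
# whenever 0 <= lambda < 1, 0 < beta <= 1, 0 <= gamma <= beta.
import random

def f(x, lam, beta, gamma):
    return (1 - x) / (1 - lam + lam * x) + (1 - gamma / x) / (1 - lam + lam * beta / x)

random.seed(2)
ok = True
for _ in range(10_000):
    lam = random.uniform(0.0, 0.999)
    beta = random.uniform(0.001, 1.0)
    gamma = random.uniform(0.0, beta)
    if f(beta, lam, beta, gamma) < f(1.0, lam, beta, gamma) - 1e-6:
        ok = False
print(ok)
```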

Lemma A.3

Consider \(0 < \beta \le 1\), \(0 \le \gamma \le \beta \) and \(f_{ \lambda }( x ) := \frac{ 1 - x }{ 1 - \lambda + \lambda x } + \frac{ 1 - \frac{ \gamma }{x} }{ 1 - \lambda + \lambda \frac{ \beta }{x} }\). If \(0 \le \lambda \le 1/2\), then

\(x = 1\) is a local minimum of \(f_{ \lambda }(\cdot )\) in \([\beta , 1]\).

Proof

Since the domain is \([\beta , 1]\) and \(x = 1\) is its right endpoint, it is sufficient to prove that

$$\begin{aligned} \frac{d}{dx} f_{\lambda }(1) \le 0. \end{aligned}$$

Since \(f_{\lambda }( x ) = \frac{ 1 - x }{ 1 - \lambda + \lambda x } + \frac{ x - \gamma }{ (1 - \lambda ) x + \lambda \beta }\), we have that

$$\begin{aligned} \frac{d}{dx} f_{\lambda }( x ) = \frac{ -1 }{ ( 1 - \lambda + \lambda x )^2 } + \frac{ \lambda \beta + (1 - \lambda ) \gamma }{ ( (1 - \lambda ) x + \lambda \beta )^2 }. \end{aligned}$$

Therefore,

$$\begin{aligned} \frac{d}{dx} f_{\lambda }( 1 )&= -1 + \frac{ \lambda \beta + (1 - \lambda ) \gamma }{ ( 1 - \lambda + \lambda \beta )^2 } \\&= \frac{ \lambda \beta + (1 - \lambda ) \gamma - ( 1 - \lambda + \lambda \beta )^2 }{ ( 1 - \lambda + \lambda \beta )^2 } \\&= - \frac{ \lambda ^2 [ \beta - 1 ]^2 + \lambda [ \gamma + \beta - 2] + [ 1 - \gamma ] }{ ( 1 - \lambda + \lambda \beta )^2 }. \end{aligned}$$

Then, \(\partial _{x} f_{\lambda }( 1 ) \le 0\) if and only if

$$\begin{aligned} g_{\beta , \gamma }( \lambda ) := \lambda ^2 [ \beta - 1 ]^2 + \lambda [ \gamma + \beta - 2] + [ 1 - \gamma ] \ge 0. \end{aligned}$$

The function \(g_{\beta , \gamma }( \cdot )\) is a convex quadratic function. Moreover, \(g_{\beta , \gamma }( 0 ) = 1 - \gamma \ge 0\) and \(g_{\beta , \gamma }( 1 ) = (\beta - 1) \beta \le 0\). There are some corner cases in which it is easy to conclude. Assume that \(\gamma = 1\); then \(\beta = 1\) and \(g_{\beta , \gamma }( \cdot ) \equiv 0\), so we can assume that \(\gamma < 1\). Consider the case \(\beta = 1\), in which \(g_{\beta , \gamma }( \cdot )\) is a linear function satisfying \(g_{\beta , \gamma }( 0 ) = 1 - \gamma \ge 0\) and \(g_{\beta , \gamma }( 1 ) = 0\). Therefore, we can assume \(\beta < 1\), i.e., \(g_{\beta , \gamma }( \cdot )\) is a strictly convex quadratic function such that \(g_{\beta , \gamma }( 0 ) > 0\) and \(g_{\beta , \gamma }( 1 ) < 0\). Moreover, if \(\beta = 0\), then \(g_{\beta , \gamma }( \lambda ) = (\lambda - 1)^2\), so we can also assume that \(\beta > 0\). Define

$$\begin{aligned} \lambda _m( \beta , \gamma ) := \inf \{ \lambda > 0 : g_{\beta , \gamma }( \lambda ) \le 0 \}, \end{aligned}$$

the smallest root of the polynomial \(g_{\beta , \gamma }( \cdot )\). We will prove that

$$\begin{aligned} \inf _{ \begin{array}{c} 0< \beta < 1 \\ 0 \le \gamma \le \beta \end{array}} \lambda _m ( \beta , \gamma ) = \frac{ 1 }{ 2 }. \end{aligned}$$
(A.1)

By solving the quadratic equation,

$$\begin{aligned} \lambda _m( \beta , \gamma ) = \frac{ 2 - \beta - \gamma - \sqrt{ (2 - \beta - \gamma )^2 - 4(1 - \gamma )(1 - \beta )^2 } }{2(1 - \beta )^2}. \end{aligned}$$

Note that \(\lambda _m ( \beta , \gamma ) > 0\), since \(0 \le \gamma \le \beta < 1\).

We will first show that for all \(0 \le \gamma \le \beta \), \(\partial _{\gamma } \lambda _m( \beta , \gamma ) \le 0\), i.e., \(\lambda _m ( \beta , \cdot )\) is decreasing, which allows us to consider only \(\lambda _m ( \beta , \beta )\) to prove (A.1). We will finish the proof by showing that \(\inf _{0< \beta < 1} \lambda _m( \beta , \beta ) = 1 / 2\).

To see that \(\partial _{\gamma } \lambda _m( \beta , \gamma ) \le 0\), we will prove that

$$\begin{aligned} \partial _{\gamma } \lambda _m( \beta , 0 ) \le 0 \quad \text {and} \quad \forall 0 \le \gamma \le \beta \quad \partial _{\gamma , \gamma } \lambda _m( \beta , \gamma ) \le 0. \end{aligned}$$

By direct computation,

$$\begin{aligned} \partial _{\gamma } \lambda _m( \beta , \gamma )&\le 0\\&\Leftrightarrow \partial _{\gamma } \left( \frac{ 2 - \beta - \gamma - \sqrt{ (2 - \beta - \gamma )^2 - 4(1 - \gamma )(1 - \beta )^2 } }{2(1 - \beta )^2} \right) \le 0 \\&\Leftrightarrow -1 - \partial _{\gamma } \left( \sqrt{ (2 - \beta - \gamma )^2 - 4(1 - \gamma )(1 - \beta )^2 } \right) \le 0 \\&\Leftrightarrow -1 - \frac{ -2( 2 - \beta - \gamma ) + 4( 1 - \beta )^2 }{ 2 \sqrt{ (2 - \beta - \gamma )^2 - 4(1 - \gamma )(1 - \beta )^2 } } \le 0 \\&\Leftrightarrow -1 + \frac{ 3\beta - \gamma - 2\beta ^2 }{ \sqrt{ (2 - \beta - \gamma )^2 - 4(1 - \gamma )(1 - \beta )^2 } } \le 0 \end{aligned}$$

Therefore,

$$\begin{aligned} \partial _{\gamma } \lambda _m( \beta , 0 )&\le 0 \\&\Leftrightarrow -1 + \frac{ 3\beta - 2\beta ^2 }{ \sqrt{ (2 - \beta )^2 - 4(1 - \beta )^2 } } \le 0 \\&\Leftrightarrow \frac{ \beta ( 3 - 2\beta ) }{ \sqrt{ \beta ( 4 - 3\beta ) } } \le 1 \\&\Leftrightarrow \sqrt{ \beta } ( 3 - 2\beta ) \le \sqrt{ ( 4 - 3\beta ) } \\&\Leftrightarrow 9\beta - 12\beta ^2 + 4\beta ^3 \le 4 - 3\beta \\&\Leftrightarrow \beta (3 - 2\beta )^2 \le 4 - 3\beta , \end{aligned}$$

which is true for all \(\beta \in (0, 1)\).

On the other hand,

$$\begin{aligned}&\partial _{\gamma , \gamma } \lambda _m( \beta , \gamma ) \le 0\\&\quad \Leftrightarrow \partial _{\gamma , \gamma } \left( \frac{ 2 - \beta - \gamma - \sqrt{ (2 - \beta - \gamma )^2 - 4(1 - \gamma )(1 - \beta )^2 } }{2(1 - \beta )^2} \right) \le 0 \\&\quad \Leftrightarrow \partial _{\gamma } \left( -1 + \frac{ 3\beta - \gamma - 2\beta ^2 }{ \sqrt{ (2 - \beta - \gamma )^2 - 4(1 - \gamma )(1 - \beta )^2 } } \right) \le 0 \\&\quad \Leftrightarrow - \sqrt{ (2 - \beta - \gamma )^2 - 4(1 - \gamma )(1 - \beta )^2 } + \frac{ ( 3\beta - \gamma - 2\beta ^2 )^2 }{ \sqrt{ (2 - \beta - \gamma )^2 - 4(1 - \gamma )(1 - \beta )^2 } } \le 0 \\&\quad \Leftrightarrow ( 3\beta - \gamma - 2\beta ^2 )^2 - (2 - \beta - \gamma )^2 + 4(1 - \gamma )(1 - \beta )^2 \le 0 \\&\quad \Leftrightarrow 9\beta ^2 + \gamma ^2 + 4\beta ^4 - 6\beta \gamma + 4\gamma \beta ^2 - 12\beta ^3 - 4 - \beta ^2 - \gamma ^2 \\&\qquad + 4\beta - 2\beta \gamma + 4\gamma + 4(1 - \gamma )(1 -2\beta + \beta ^2) \le 0 \\&\quad \Leftrightarrow 8\beta ^2 + 4\beta ^4 - 8\beta \gamma + 4\gamma \beta ^2 \\&\qquad - 12\beta ^3 - 4 + 4\beta + 4\gamma + 4(1 - \gamma )(1 -2\beta + \beta ^2) \le 0 \\&\quad \Leftrightarrow 8\beta ^2 + 4\beta ^4 - 8\beta \gamma + 4\gamma \beta ^2 - 12\beta ^3 - 4 + 4\beta + 4\gamma + 4 - 8\beta \\&\qquad + 4\beta ^2 - 4\gamma + 8\beta \gamma - 4\beta ^2\gamma \le 0 \\&\quad \Leftrightarrow 12\beta ^2 + 4\beta ^4 - 12\beta ^3 - 4\beta \le 0 \\&\quad \Leftrightarrow 3 - 3\beta + \beta ^2 \le \frac{ 1 }{ \beta }, \end{aligned}$$

which, again, is true for all \(\beta \in (0, 1)\).

We have proved that for all \(\beta \in (0, 1)\), \(\lambda _m ( \beta , \cdot )\) is decreasing. Therefore, we just need to prove that \(\inf _{0< \beta < 1} \lambda _m( \beta , \beta ) = 1 / 2\), which is true because:

$$\begin{aligned} \lambda _m ( \beta , \beta ) = \frac{ 2 - 2\beta - \sqrt{ (2 - 2\beta )^2 - 4(1 - \beta )^3 } }{2(1 - \beta )^2} = \frac{ 1 - \sqrt{ \beta } }{ 1 - \beta } = \frac{ 1 }{ 1 + \sqrt{ \beta } } \ge \frac{ 1 }{ 2 }. \end{aligned}$$

This implies that \(g_{\beta , \gamma }(\lambda ) \ge 0\), for all \(\lambda \in [0, 1/2]\), which in turn implies that \(x = 1\) is a local minimum of \(f_{\lambda }( \cdot )\) in \([\beta , 1]\). \(\square \)
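Two facts from the proof lend themselves to a quick numerical check (ours, not part of the proof; the grid resolutions are arbitrary): nonnegativity of \(g_{\beta , \gamma }\) on \([0, 1/2]\), and the closed form \(\lambda _m( \beta , \beta ) = 1 / (1 + \sqrt{\beta })\).

```python
# Numerical check (not a proof) of two facts from the proof of Lemma A.3.
import math

def g(lam, beta, gamma):
    # the convex quadratic g_{beta,gamma} from the proof
    return lam * lam * (beta - 1) ** 2 + lam * (gamma + beta - 2) + (1 - gamma)

def lam_m(beta, gamma):
    # smallest root of g_{beta,gamma}, as solved in the proof (needs beta < 1)
    disc = (2 - beta - gamma) ** 2 - 4 * (1 - gamma) * (1 - beta) ** 2
    return (2 - beta - gamma - math.sqrt(disc)) / (2 * (1 - beta) ** 2)

# g_{beta,gamma}(lambda) >= 0 for lambda in [0, 1/2], 0 <= gamma <= beta <= 1
ok_positive = all(
    g(0.5 * i / 100, bi / 100, (bi / 100) * gi / 10) >= -1e-12
    for i in range(101) for bi in range(101) for gi in range(11)
)

# lambda_m(beta, beta) = 1 / (1 + sqrt(beta))
ok_root = all(
    abs(lam_m(b / 100, b / 100) - 1 / (1 + math.sqrt(b / 100))) < 1e-9
    for b in range(1, 100)
)
print(ok_positive, ok_root)
```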

Lemma A.4

Consider \(0 < \beta \le 1\), \(0 \le \gamma \le \beta \) and \(f_{ \lambda }( x ) := \frac{ 1 - x }{ 1 - \lambda + \lambda x } + \frac{ 1 - \frac{ \gamma }{x} }{ 1 - \lambda + \lambda \frac{ \beta }{x} }\). If \(0 \le \lambda \le 1/2\), then

$$\begin{aligned} \left| \{ x \in (\beta , 1) : \frac{d}{dx} f_{\lambda }( x ) = 0 \} \right| \le 1. \end{aligned}$$

Proof

Notice that

$$\begin{aligned} f_{\lambda }( x )= & {} \frac{ 1 - x }{ 1 - \lambda + \lambda x } + \frac{ x - \gamma }{ (1 - \lambda ) x + \lambda \beta } \\= & {} \frac{ x^2[ 2\lambda - 1 ] + x[ 2 - 2\lambda - \lambda \beta - \lambda \gamma ] + [ \lambda \beta - \gamma (1 - \lambda ) ] }{ x^2[ \lambda (1 - \lambda ) ] + x[ (1 - \lambda )^2 + \lambda ^2 \beta ] + [ \lambda \beta (1 - \lambda ) ] }, \end{aligned}$$

which implies that \(f_{\lambda }( \cdot )\) has at most two extreme points. We will prove that one of them is always negative when \(\lambda \le 1 / 2\), which will conclude the proof. To do this, we compute \(\frac{d}{dx} f_{\lambda }( x )\) and notice that extreme points solve a quadratic equation. By analyzing the corresponding coefficients we will conclude that, if there is a real extreme point, then there must be a real negative extreme point.

By direct computation, notice that \(\frac{d}{dx} f_{\lambda }( x ) = 0\) if and only if

$$\begin{aligned}&( 2 x [ 2\lambda \!-\! 1 ] \!+\! [ 2 - 2\lambda - \lambda \beta - \lambda \gamma ] ) (x^2[ \lambda (1 \!-\! \lambda ) ] + x[ (1 - \lambda )^2 + \lambda ^2 \beta ] + [ \lambda \beta (1 - \lambda )) \\&\quad - (2 x [ \lambda (1 - \lambda ) ] + [ (1 - \lambda )^2 + \lambda ^2 \beta ] ) (x^2[ 2\lambda - 1 ] + x[ 2 - 2\lambda - \lambda \beta - \lambda \gamma ]\\&\quad + [ \lambda \beta - \gamma (1 - \lambda ) ] ) = 0. \end{aligned}$$

Define and compute relevant terms by

$$\begin{aligned} a&:= (2 - 2\lambda - \lambda \beta - \lambda \gamma ) \lambda (1 - \lambda ) + ((1 - \lambda )^2 + \lambda ^2 \beta ) 2 (2\lambda - 1) \\&\quad \cdots - (2\lambda - 1) ((1 - \lambda )^2 + \lambda ^2 \beta ) - (2 - 2\lambda - \lambda \beta - \lambda \gamma ) 2 \lambda (1 - \lambda ) \\&= ( 2 \lambda - \lambda ^2( 2 + \beta + \gamma ) ) (1 - \lambda ) + ( 1 - 2 \lambda + \lambda ^2 + \lambda ^2 \beta )( 4\lambda - 2 ) \\&\quad \cdots - ( 1 - 2 \lambda + \lambda ^2 + \lambda ^2 \beta )( 2\lambda - 1) - ( 2 \lambda - \lambda ^2( 2 + \beta + \gamma ) )( 2 - 2\lambda ) \\&= ( 2 \lambda - \lambda ^2( 2 + \beta + \gamma ) )( \lambda - 1 ) + ( 1 - 2 \lambda + \lambda ^2 + \lambda ^2 \beta )( 2\lambda - 1 ) \\&= \lambda ^3 [ \beta - \gamma ] + \lambda ^2 [ \gamma - 1 ] + \lambda [ 2 ] + [ -1 ], \\ b&:= ( 2 - 2\lambda - \lambda \beta - \lambda \gamma )( (1 - \lambda )^2 + \lambda ^2 \beta ) + 2 ( 2\lambda - 1 ) \lambda \beta (1 - \lambda ) \\&\quad \cdots - ( (1 - \lambda )^2 + \lambda ^2 \beta )( 2 - 2\lambda - \lambda \beta - \lambda \gamma ) - 2 \lambda (1 - \lambda ) ( \lambda \beta - \gamma (1 - \lambda ) ) \\&= 2 ( 2\lambda - 1 ) \lambda \beta (1 - \lambda ) - 2 \lambda (1 - \lambda ) ( \lambda \beta - \gamma (1 - \lambda ) ) \\&= 2 \lambda (1 - \lambda )^2 (\gamma - \beta ), \\ c&= ( 2 - 2\lambda - \lambda \beta - \lambda \gamma ) \lambda \beta (1 - \lambda ) - ( (1 - \lambda )^2 + \lambda ^2 \beta )( \lambda \beta - \gamma (1 - \lambda ) ). \end{aligned}$$

Then, we have that, if x is an extreme point of \(f_{\lambda }( \cdot )\), then x is a solution to \(ax^2 + bx + c = 0\).

Notice that \(b \le 0\), since \(\gamma \le \beta \). Then, one of the extreme points has the same sign as a (when it is a real number). Moreover,

$$\begin{aligned}&\forall \gamma \le \beta \in [0,1] \quad a = \lambda ^3 [ \beta - \gamma ] + \lambda ^2 [ \gamma - 1 ] + \lambda [ 2 ] + [ -1 ] \\&\quad = \gamma \lambda ^2 (1 - \lambda ) + \beta \lambda ^3 - \lambda ^2 + 2\lambda - 1 \le 0 \\&\quad \Leftrightarrow \forall \beta \in [0,1] ( \beta - 1 ) \lambda ^2 + 2\lambda - 1 \le 0 \\&\quad \Leftrightarrow 2\lambda - 1 \le 0. \end{aligned}$$

Then, for all \(\lambda \in [ 0, \frac{1}{2} ]\), \(a \le 0\), therefore one of the extreme points of \(f_{\lambda }( x )\), when real, is negative. \(\square \)
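Lemma A.4 can be spot-checked by counting sign changes of \(\frac{d}{dx} f_{\lambda }\) (as computed in the proof of Lemma A.3) on a fine grid for random parameters (our sketch, not part of the proof; grid resolution and sample count are arbitrary):

```python
# Random spot-check (not a proof) of Lemma A.4: for lambda in [0, 1/2],
# the derivative of f_lambda changes sign at most once on [beta, 1].
import random

def fprime(x, lam, beta, gamma):
    # derivative of f_lambda, as computed in the proof of Lemma A.3
    return (-1.0 / (1 - lam + lam * x) ** 2
            + (lam * beta + (1 - lam) * gamma) / ((1 - lam) * x + lam * beta) ** 2)

random.seed(1)
max_changes = 0
for _ in range(2000):
    lam = random.uniform(0.0, 0.5)
    beta = random.uniform(0.01, 1.0)
    gamma = random.uniform(0.0, beta)
    grid = [beta + (1 - beta) * k / 500 for k in range(501)]
    signs = [fprime(x, lam, beta, gamma) > 0 for x in grid]
    max_changes = max(max_changes,
                      sum(signs[k] != signs[k + 1] for k in range(500)))
print(max_changes)
```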

Proof of Lemma A.1

Recall the optimization problem (P) is given by

$$\begin{aligned} (P) \left\{ \begin{array}{l c} \min _{x} &{} f_{ \lambda }( x ) := \frac{ 1 - x }{ 1 - \lambda + \lambda x } + \frac{ 1 - \frac{ \gamma }{x} }{ 1 - \lambda + \lambda \frac{ \beta }{x} } \\ s.t. &{} \beta \le x \le 1 \end{array} \right. . \end{aligned}$$

Since \(f_{\lambda }(\cdot )\) is a continuous function, there exists \(x^* \in [\beta , 1]\) such that \(f_{ \lambda }( x^* ) = \min _{\beta \le x \le 1} f_{ \lambda }( x )\). By Lemma A.2, we can consider that \(x^* \in (\beta , 1]\).

Assume, for contradiction, that there is \(y \in (\beta , 1)\) such that \(f_{ \lambda }( y ) < f_{ \lambda }( 1 )\). Since we would then also have \(f_{ \lambda }( x^* ) < f_{ \lambda }( \beta )\), we conclude that there exists \(x^* \in (\beta , 1)\) that is a local minimum of \(f_{ \lambda }( \cdot )\). But, by Lemma A.3, \(f_{ \lambda }( \cdot )\) is decreasing close to 1, so there must exist \(y^* \in (x^*, 1)\) that is a local maximum of \(f_{ \lambda }( \cdot )\). Then,

$$\begin{aligned} \left| \{ x \in (\beta , 1) : \frac{d}{dx} f_{\lambda }( x ) = 0 \} \right| \ge 2, \end{aligned}$$

which contradicts Lemma A.4. Therefore, the value of (P) is \(f_{ \lambda }( 1 )\). \(\square \)


About this article


Cite this article

Correa, J., Saona, R. & Ziliotto, B. Prophet secretary through blind strategies. Math. Program. 190, 483–521 (2021). https://doi.org/10.1007/s10107-020-01544-8

