Coherent Choice Functions Without Archimedeanity

Chapter in: Reflections on the Foundations of Probability and Statistics

Part of the book series: Theory and Decision Library A (TDLA, volume 54)

Abstract

We study whether it is possible to generalise Seidenfeld et al.’s representation result for coherent choice functions in terms of sets of probability/utility pairs when we let go of Archimedeanity. We show that the convexity property is necessary but not sufficient for a choice function to be an infimum of a class of lexicographic ones. For the special case of two-dimensional option spaces, we determine the necessary and sufficient conditions by weakening the Archimedean axiom.


Notes

  1. A family of belief models is called a belief structure when it is a lattice with respect to some partial order ≼, it is closed under infima and it has no top. It is called a strong belief structure when in addition any belief model can be obtained as the infimum of the maximal models that dominate it.

  2. We want to caution the reader that there are other, non-equivalent definitions of weak Archimedeanity: for instance, the one given by Zaffalon and Miranda (2017, Definition 19) and Zaffalon and Miranda (2021) for binary comparisons of options differs from ours, even when we restrict our definition to binary option sets.

References

  • Aizerman, M.A. 1985. New problems in the general choice theory. Social Choice and Welfare 2:235–282.

  • Arrow, K.J. 1951. Social choice and individual values. Cowles Foundation Monographs Series. New Haven: Yale University Press.

  • Blume, L., Brandenburger, A., and E. Dekel. 1991. Lexicographic probabilities and choice under uncertainty. Econometrica 59(1):61–79.

  • De Bock, J., and G. de Cooman. 2018. A desirability-based axiomatisation for coherent choice functions. In Uncertainty modelling in data science, 78–86. Berlin: Springer.

  • De Bock, J., and G. de Cooman. 2019. Interpreting, axiomatising and representing coherent choice functions in terms of desirability. In Proceedings of Machine Learning Research, vol 103, 125–134.

  • De Cooman, G. 2005. Belief models: an order-theoretic investigation. Annals of Mathematics and Artificial Intelligence 45(1–2):5–34.

  • De Cooman, G., and E. Quaeghebeur. 2012. Exchangeability and sets of desirable gambles. International Journal of Approximate Reasoning 53(3):363–395.

  • Fishburn, P.C. 1982. The foundations of expected utility. Theory and Decision Library, vol 31. Dordrecht: Springer Netherlands.

  • He, J. 2012. A generalized unification theorem for choice theoretic foundations: avoiding the necessity of pairs and triplets. Economics Discussion Paper 2012-23, Kiel Institute for the World Economy.

  • Miranda, E., Van Camp, A., and G. de Cooman. 2018. Choice functions and rejection sets. In The mathematics of the uncertain: a tribute to Pedro Gil, eds. Gil, E., Gil, E., Gil, J., and M. Gil, 237–246. Berlin: Springer.

  • Quaeghebeur, E. 2014. Desirability. In Introduction to imprecise probabilities, chap 1, eds. Augustin, T., Coolen, F.P.A., De Cooman, G., and M.C.M. Troffaes, 1–27. Hoboken: John Wiley & Sons.

  • Rubin, H. 1987. A weak system of axioms for “rational” behavior and the nonseparability of utility from prior. Statistics & Risk Modeling 5(1–2):47–58.

  • Schwartz, T. 1972. Rationality and the myth of the maximum. Noûs 6(2):97–117.

  • Seidenfeld, T., Schervish, M.J., and J.B. Kadane. 1990. Decisions without ordering. In Acting and reflecting, ed. Sieg, W., Synthese Library, vol 211, 143–170. Dordrecht: Kluwer.

  • Seidenfeld, T., Schervish, M.J., and J.B. Kadane. 2010. Coherent choice functions under uncertainty. Synthese 172(1):157–176.

  • Sen, A. 1971. Choice functions and revealed preference. The Review of Economic Studies 38(3):307–317.

  • Sen, A. 1977. Social choice theory: a re-examination. Econometrica 45:53–89.

  • Troffaes, M.C.M. 2007. Decision making under uncertainty using imprecise probabilities. International Journal of Approximate Reasoning 45(1):17–29.

  • Uzawa, H. 1956. Note on preference and axioms of choice. Annals of the Institute of Statistical Mathematics 8:35–40.

  • Van Camp, A. 2018. Choice functions as a tool to model uncertainty. Ph.D. Thesis, Ghent University.

  • Van Camp, A., and G. de Cooman. 2018. Exchangeable choice functions. International Journal of Approximate Reasoning 100:85–104.

  • Van Camp, A., De Cooman, G., and E. Miranda. 2018a. Lexicographic choice functions. International Journal of Approximate Reasoning 92:97–119.

  • Van Camp, A., De Cooman, G., Miranda, E., and E. Quaeghebeur. 2018b. Coherent choice functions, desirability and indifference. Fuzzy Sets and Systems 341:1–36.

  • Van Camp, A., and E. Miranda. 2019. Irrelevant natural extension for choice functions. In Proceedings of Machine Learning Research, vol 103, 414–423.

  • Walley, P. 1991. Statistical reasoning with imprecise probabilities. London: Chapman and Hall.

  • Walley, P. 2000. Towards a unified theory of imprecise probability. International Journal of Approximate Reasoning 24(2–3):125–148.

  • Zaffalon, M., and E. Miranda. 2017. Axiomatisation of incomplete preferences through sets of desirable gambles. Journal of Artificial Intelligence Research 60:1057–1126.

  • Zaffalon, M., and E. Miranda. 2021. Desirability foundations of robust rational decision making. Synthese 198(Suppl. 27):6529–6570.


Acknowledgements

We would like to thank Jasper De Bock, Gert de Cooman and Teddy Seidenfeld for stimulating discussions, and two anonymous reviewers for their careful reading and detailed comments. This work has been partially funded by projects PGC2018-098623-B-I00 and GRUPIN/IDI/2018/000176, and by the project PreServe (ANR-18-CE23-0008) of the Agence Nationale de la Recherche (ANR).

Author information

Correspondence to Enrique Miranda.

Appendix: Proofs

Proof of Proposition 1

It suffices to prove the direct implication. To this end, consider any u 1, …, u n in \(\mathcal {V}\) and μ 1, …, μ n in \(\mathbb {R}_{>0}\), and assume that 0 ∈ R({0, u 1, …, u n}). Let , and ; we need to show that then 0 ∈ R(A). Using Axiom R4a we infer that 0 ∈ R(A′), and using Axiom R3a also that 0 ∈ R(A 1), with . Note that μ u k ∈CH({0, μ k u k}) for every k in {1, …, n}, whence A ⊆ A 1 ⊆CH(A). Therefore, by applying Axiom R5 we find indeed that 0 ∈ R(A). □
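For ease of reference, the scaling property just established (our reading of Condition (12.2), reconstructed from the way it is invoked in the proofs of Propositions 8 and 9 rather than quoted from the main text) can be written as

$$\displaystyle \begin{aligned} 0\in R(\{0,u_1,\dots,u_n\})\;\Longrightarrow\;0\in R(\{0,\mu_1u_1,\dots,\mu_nu_n\})\quad\text{for all }\mu_1,\dots,\mu_n\in\mathbb{R}_{>0}. \end{aligned} $$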

Proof of Proposition 2

We will first show that (i) implies (ii) and (iii). That (i) implies (ii) follows immediately from Axiom R3b [with , and ]. That (i) implies (iii) follows immediately from Axiom R3b [with , and ].

We will now assume that R satisfies Axiom R3a and show that then (ii) implies (i). To this end, consider any A, A 1 and A 2 in \(\mathcal {Q}\) and assume that A ⊆ A 1 ⊆ R(A 2). Then in particular u ∈ R(A 2), and therefore, using (ii), u ∈ R({u}∪ A 2 ∖ R(A 2)), for every u in A 1 ∖ A. Applying Axiom R3a, we infer that u ∈ R(A 2 ∖ A) for every u in A 1 ∖ A, whence indeed A 1 ∖ A ⊆ R(A 2 ∖ A).

To finish the proof, we will assume that R additionally satisfies Axiom R4b, and show that then (iii) implies (i). To this end, infer first that (iii) implies, using Axiom R4b, that

$$\displaystyle \begin{aligned} (\forall A\in\mathcal{Q})(\forall u\in R(A))(\forall{v}\in R(A)\setminus\{u\})u\in R(A\setminus\{{v}\}), \end{aligned} $$
(A.1)

which is easily seen once we realise that Axiom R4b implies that u ∈ R(A) is equivalent to 0 ∈ R(A −{u}), for any A in \(\mathcal {Q}\) and u in A. So assume that R satisfies Eq. (A.1); we will prove that it satisfies Axiom R3b. Let , and , where \(n\in \mathbb {N}\) and \(m,r\in \mathbb {Z}_{\geq 0}\), and assume that A 1 ⊆ R(A 2). Consider any j in {1, …, m}, then we have to prove that v j ∈ R({v 1, …, v m, w 1, …, w r}) = R(A 2 ∖{u 1, …, u n}). Since {u 1, u 2}⊆ R(A 2) and {v j, u 1}⊆ R(A 2), it follows from Eq. (A.1) that {u 2, v j}⊆ R(A 2 ∖{u 1}), whence, again using Eq. (A.1), v j ∈ R(A 2 ∖{u 1, u 2}). Also, {u 1, u 3}⊆ R(A 2), whence u 3 ∈ R(A 2 ∖{u 1}) using Eq. (A.1). Since we already know that also u 2 ∈ R(A 2 ∖{u 1}), we infer that u 3 ∈ R(A 2 ∖{u 1, u 2}), again using Eq. (A.1). In turn, this implies that v j ∈ R(A 2 ∖{u 1, u 2, u 3}). We can go on in this way until we reach the desired statement, that v j ∈ R(A 2 ∖{u 1, …, u n}), after a finite number of steps. □
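Spelling out the finite iteration at the end of this argument, repeated application of Eq. (A.1) yields the chain

$$\displaystyle \begin{aligned} v_j\in R(A_2\setminus\{u_1,u_2\}),\quad v_j\in R(A_2\setminus\{u_1,u_2,u_3\}),\quad\dots,\quad v_j\in R(A_2\setminus\{u_1,\dots,u_n\}), \end{aligned} $$

where before each step the next option \(u_{k+1}\) is first shown to remain rejected in R(A 2 ∖{u 1, …, u k}), exactly as done above for u 3.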

Proof of Corollary 1

That the first statement implies the second is immediate. To establish the converse, we will prove the contrapositive. Assume that R does not satisfy Axiom R1. Then we have that A = R(A) for some A in \(\mathcal {Q}\). Consider any u in A, then by Proposition 2 (ii) we find that u ∈ R({u}∪ A ∖ R(A)) = R({u}∪ A ∖ A) = R({u}). By Axiom R4b therefore indeed 0 ∈ R({0}). □

Proof of Lemma 1

If a = 0 then (0, a) = (0, 0)∉K by Property K2. Analogously, if a = 1 then (1 − a, 0) = (0, 0)∉K by Property K2. Assume therefore that a ∈ (0, 1), and assume ex absurdo that both (0, a) and (1 − a, 0) are elements of K. Use Property K3b to infer that (0, 0) ∈ K, which contradicts Property K2. □

Proof of Proposition 7

We first prove that K R satisfies Property K1. Consider any (k 1, k 2) in K R, and any \((k_1^{\prime },k_2^{\prime })\) in [0, 1)2 such that \(k_1^{\prime }\geq k_1\) and \(k_2^{\prime }\geq k_2\). Then (k 1, k 2) ∈ K R simply means that 0 ∈ R({(k 1 − 1, k 1), 0, (k 2, k 2 − 1)}), and \(k_1^{\prime }\geq k_1\) and \(k_2^{\prime }\geq k_2\) implies that \((k_1^{\prime }-1,k_1^{\prime })\geq (k_1-1,k_1)\) and \((k_2^{\prime }-1,k_2^{\prime })\geq (k_2-1,k_2)\). Van Camp et al. (2018a, Proposition 2) tells us that then \(0\in R(\{(k_1^{\prime }-1,k_1^{\prime }),0,(k_2^{\prime },k_2^{\prime }-1)\})\), whence indeed \((k_1^{\prime },k_2^{\prime })\in K_{R}\).

To prove that K R satisfies Property K2, assume ex absurdo that 0 ∈ K R, or equivalently, that 0 ∈ R({(−1, 0), 0, (0, −1)}). Since (−1, 0) < 0, we infer from Axiom R2 that (−1, 0) ∈ R({(−1, 0), 0}), and therefore also that (−1, 0) ∈ R({(−1, 0), 0, (0, −1)}), by Axiom R3a. A similar argument leads from (0, −1) < 0 to (0, −1) ∈ R({(−1, 0), 0, (0, −1)}). This implies that {(−1, 0), 0, (0, −1)} = R({(−1, 0), 0, (0, −1)}), which contradicts Axiom R1.

Next, assume that R satisfies Condition (12.2). To prove that K R then satisfies Property K3, we first prove that it satisfies Property K3a. Consider any a, b and c in [0, 1) and assume that c < a, a + b < 1, and that (b, a) and (1 − a, c) belong to K R. We are going to prove that (b, y) ∈ K R for every y in (c, 1); the proof that also (x, c) ∈ K R for every x in (b, 1) is similar. Consider any λ in \(\mathbb {R}_{>0}\), then Condition (12.2) guarantees that 0 ∈ R({(b − 1, b), 0, λ(a, a − 1)}) and 0 ∈ R({λ(−a, 1 − a), 0, (c, c − 1)}). By Axiom R4b, we then find that − λ(a, a − 1) ∈ R({(b − λa − 1, b − λa + λ), −λ(a, a − 1), 0}) and λ(a, a − 1) ∈ R({0, λ(a, a − 1), (c + λa, c + λa − λ − 1)}), and applying Axiom R3a then leads to {−λ(a, a−1), λ(a, a−1)}⊆ R({(b−λa−1, b−λa+λ), −λ(a, a−1), 0, λ(a, a−1), (c+λa, c+λa−λ−1)}). This, together with Axiom R3a, implies that {−λ(a, a−1), 0, λ(a, a−1)}⊆ R({(b−λa−1, b−λa+λ), −λ(a, a−1), 0, (c, c−1), λ(a, a−1), (c+λa, c+λa−λ−1)}). Applying Axiom R3b implies that − λ(a, a − 1) is included in R({(b−λa−1, b−λa+λ), −λ(a, a−1), (c, c−1), (c+λa, c+λa−λ−1)}) and by Axiom R4b this implies that 0 ∈ R({(b − 1, b), 0, (c + λa, c + λa − λ − 1), (c + 2λa, c + 2λa − 2λ − 1)}). Let us call and , and and ; these real numbers are both positive since 0 ≤ c < a and λ > 0. Then 0 ∈ R({(b − 1, b), 0, u, v}), and 0 ∈ R({(b − 1, b), 0, μ 1 u, μ 2 v}) by Condition (12.2). But μ 1 u < μ 2 v since \(\mu _1u=(1,\frac {c+\lambda a-\lambda -1}{c+\lambda a})\) and \(\mu _2{v}=(1,\frac {c+2\lambda a-2\lambda -1}{c+2\lambda a})\), and \(\frac {c+\lambda a-\lambda -1}{c+\lambda a}<\frac {c+2\lambda a-2\lambda -1}{c+2\lambda a}\) using the assumptions. Then μ 1 u ∈ R({μ 1 u, μ 2 v}) by Axiom R2, whence {0, μ 1 u}⊆ R({(b − 1, b), 0, μ 1 u, μ 2 v}) by Axiom R3a. Then 0 ∈ R({(b − 1, b), 0, μ 2 v}) by Axiom R3b, and 0 ∈ R({(b − 1, b), 0, μ 3 v}) by Condition (12.2) with \(\mu _3=\frac {1}{2\lambda +1}>0\), whence \(\left (b,\frac {c+2\lambda a}{1+2\lambda }\right )\in K_{R}\). Now, by varying λ in \(\mathbb {R}_{>0}\) the number \(\frac {c+2\lambda a}{1+2\lambda }\) can take any value in the interval (c, a). We conclude that (b, y) ∈ K R for every y ∈ (c, 1), after also recalling that we have already proved that K R satisfies Property K1.
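The inequality between the two fractions invoked above can be checked directly. Since c ≥ 0, a > 0 and λ > 0, both denominators are positive, and

$$\displaystyle \begin{aligned} \frac{c+\lambda a-\lambda-1}{c+\lambda a}=1-\frac{\lambda+1}{c+\lambda a},\qquad \frac{c+2\lambda a-2\lambda-1}{c+2\lambda a}=1-\frac{2\lambda+1}{c+2\lambda a}, \end{aligned} $$

so the stated inequality is equivalent to (λ + 1)(c + 2λa) > (2λ + 1)(c + λa), which reduces to λa > λc and therefore holds because c < a.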

To prove that K R satisfies Property K3b, assume that 0 ≤ c < a < 1, (0, a) ∈ K R and (1 − a, c) ∈ K R. Because K R already satisfies Property K3a [with in particular ], we know that (x, c) ∈ K R for every x in (0, 1) and (0, y) ∈ K R for every y in (c, 1). We have to show that (0, c) ∈ K R. To this end, consider the gambles and . Because in particular (x, c) ∈ K R for \(x=\frac {1-c}{2}\in (0,1)\), we have that 0 ∈ R({u, 0, v}). Similarly, because in particular (0, y) ∈ K R for \(y=\frac {1+c}{2}\in (c,1)\), we have that 0 ∈ R({(−1, 0), 0, −u}). Since also (−1, 0) ∈ R({(−1, 0), 0})—and therefore (−1, 0) ∈ R({(−1, 0), 0, −u}) by Axiom R3a—because (−1, 0) < 0 and by Axiom R2, this leads us to conclude that {(−1, 0), 0}⊆ R({(−1, 0), 0, −u}), and therefore also 0 ∈ R({0, −u}) by Axiom R3b. Hence, u ∈ R({u, 0}), by Axiom R4b, and therefore u ∈ R({u, 0, v}), by Axiom R3a. Hence {0, u}⊆ R({u, 0, v}), so Axiom R3b leads to 0 ∈ R({0, v}). Now Axiom R3a implies that indeed (0, c) ∈ K R, so Property K3b is satisfied. Property K3c can be shown to hold in a similar way.

To conclude, assume that R satisfies Axiom R5. Since this implies that Condition (12.2) holds by Proposition 1, we already know that Property K3 is satisfied, so it only remains to prove that K R satisfies Property K4. Consider any (k 1, k 2) in [0, 1)2 such that k 1 + k 2 > 1. Then \(\big (\frac {k_1+k_2-1}{2},\frac {k_1+k_2-1}{2}\big )>0\), whence \(0\in R(\{0,\big (\frac {k_1+k_2-1}{2},\frac {k_1+k_2-1}{2}\big )\})\) by Axiom R2. By Axiom R3a, we get \(0\in R(\{(k_1-1,k_1),0,(k_2,k_2-1),(\frac {k_1+k_2-1}{2},\frac {k_1+k_2-1}{2})\})\). Since \((\frac {k_1+k_2-1}{2},\frac {k_1+k_2-1}{2})\in \mathrm {CH}(\{(k_1-1,k_1),(k_2,k_2-1)\})\), Axiom R5 leads us to conclude that 0 ∈ R({(k 1 − 1, k 1), 0, (k 2, k 2 − 1)}), so indeed (k 1, k 2) ∈ K R. □
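The convex hull membership used in the final step of this proof amounts to the observation that the option in question is the midpoint of the two extreme options:

$$\displaystyle \begin{aligned} \tfrac{1}{2}(k_1-1,k_1)+\tfrac{1}{2}(k_2,k_2-1)=\Big(\tfrac{k_1+k_2-1}{2},\tfrac{k_1+k_2-1}{2}\Big). \end{aligned} $$

The same computation is used again in the proof of Proposition 10.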

Proof of Lemma 2

If Condition (12.7) holds, then {λ 1(k 1 − 1, k 1), (0, −1)}⊆ A ∪{(−1, 0), (0, −1)}, so Condition (12.11) holds with and . Similarly, if Condition (12.8) holds, then {(−1, 0), λ 2(k 2, k 2 − 1)}⊆ A ∪{(−1, 0), (0, −1)}, so Condition (12.11) holds with and . If Condition (12.9) holds, then Condition (12.11) holds trivially.

Conversely, assume that Condition (12.11) holds. If both k 1 ≠ 0 and k 2 ≠ 0, then Condition (12.9) holds trivially, so assume that either k 1 = 0 or k 2 = 0—they cannot both be zero, because 0∉K. So assume that k 1 = 0 and k 2 > 0, then we infer from the assumption that {λ 1(−1, 0), λ 2(k 2, k 2 − 1)}⊆ A ∪{(−1, 0), (0, −1)}. Since k 2 > 0 implies that λ 2(k 2, k 2 − 1) ≠ (−1, 0) and λ 2(k 2, k 2 − 1) ≠ (0, −1) for any choice of λ 2 > 0, it must be that λ 2(k 2, k 2 − 1) ∈ A, so Condition (12.8) holds. The case k 2 = 0 and k 1 > 0 is similar. □

Proof of Proposition 8

For Axiom R2, consider any f and g in \(\mathcal {L}\) such that f < g. Then 0 < g − f, so we infer from Condition (12.6) that 0 ∈ R K({0, g − f}), and then from Condition (12.10) that indeed f ∈ R K({f, g}).

For Axiom R3a, assume that A 1 ⊆ R K(A 2) and A 2 ⊆ A. Then we need to prove that A 1 ⊆ R K(A). Consider any f ∈ A 1, then also f ∈ A 2 and f ∈ A, so we can let and , where \(A^{\prime }_{2}\subseteq A'\). We then infer from Condition (12.10) that \(0\in R_{K}(A^{\prime }_{2})\), which means that at least one of the Conditions (12.6)–(12.9) holds. But any of these conditions implies that also 0 ∈ R K(A′). Condition (12.10) then guarantees that f ∈ R K(A) and therefore that, indeed, A 1 ⊆ R K(A).

That Axioms R4a and R4b are satisfied follows from Conditions (12.6)–(12.10).

For Condition (12.2), consider any option set \(A=\{f_{1},\dots ,{f_n}\}\in \mathcal {Q}\), where n is a natural number, and any positive real numbers μ 1, …, μ n. Assume that 0 ∈ R K({0}∪ A). First of all, if \({f_i}\in \mathcal {L}_{>0}\) for some i in {1, …, n}, then also \(\mu _i{f_i}\in \mathcal {L}_{>0}\) since μ i > 0, whence indeed 0 ∈ R K({0, μ 1 f 1, …, μ n f n}), by Condition (12.6). So assume that \({f_i}\notin \mathcal {L}_{>0}\) for all i in {1, …, n}. There are now only three possibilities. The first is that there are λ 1 in \(\mathbb {R}_{>0}\) and (k 1, 0) in K such that λ 1(k 1 − 1, k 1) = f i for some i in {1, …, n}. Then (λ 1 μ i)(k 1 − 1, k 1) = μ i f i ∈{μ 1 f 1, …, μ n f n}, and Condition (12.7) guarantees that indeed 0 ∈ R K({0, μ 1 f 1, …, μ n f n}). The second possibility is that there are λ 2 in \(\mathbb {R}_{>0}\) and (0, k 2) in K such that λ 2(k 2, k 2 − 1) = f j for some j in {1, …, n}. Then (λ 2 μ j)(k 2, k 2 − 1) = μ j f j ∈{μ 1 f 1, …, μ n f n}, and Condition (12.8) guarantees that indeed 0 ∈ R K({0, μ 1 f 1, …, μ n f n}). And the final possibility is that there are λ 1 and λ 2 in \(\mathbb {R}_{>0}\) and (k 1, k 2) in K ∩ (0, 1)2 such that λ 1(k 1 − 1, k 1) = f i and λ 2(k 2, k 2 − 1) = f j for some i and j in {1, …, n}. Then (λ 1 μ i)(k 1 − 1, k 1) = μ i f i and (λ 2 μ j)(k 2, k 2 − 1) = μ j f j, and Condition (12.9) guarantees that indeed 0 ∈ R K({0, μ 1 f 1, …, μ n f n}).

Assume now that K satisfies in addition Properties K1–K3. We begin by proving that R K satisfies Axiom R3b. Assume ex absurdo that it does not; then Proposition 2 guarantees that there are A in \(\mathcal {Q}\) and g in A ∖{0} such that {0, g}⊆ R K(A) and 0∉R K(A ∖{g}).

Because 0 ∈ R K(A), we infer from Definition 8 and Lemma 2 that there are two possibilities: (i) \(A\cap \mathcal {L}_{>0}\neq \emptyset \), or (ii) {λ 1(k 1 − 1, k 1), λ 2(k 2, k 2 − 1)}⊆ A ∪{(−1, 0), (0, −1)} for some λ 1 and λ 2 in \(\mathbb {R}_{>0}\) and some (k 1, k 2) in K.

We first deal with case (i). Here we can assume without loss of generality that \(A\cap \mathcal {L}_{>0}=\{g\}\) because, otherwise \(A\setminus \{g\}\cap \mathcal {L}_{>0}\neq \emptyset \) and we could apply Condition (12.6) to conclude that 0 ∈ R K(A ∖{g}), a contradiction. We will use the notation g = (x, y) > 0. Because also g ∈ R K(A), Condition (12.10) guarantees that 0 ∈ R K(A −{g}), and a similar argument as before shows that there are now two possibilities: (i.a) \((A-\{g\})\cap \mathcal {L}_{>0}\neq \emptyset \); and (i.b) {λ 3(k 3 − 1, k 3), λ 4(k 4, k 4 − 1)}⊆ (A −{g}) ∪{(−1, 0), (0, −1)} for some λ 3 and λ 4 in \(\mathbb {R}_{>0}\) and some (k 3, k 4) in K. But in fact (i.a) is impossible, because it would contradict our earlier conclusion that \(A\cap \mathcal {L}_{>0}=\{g\}\). So we can restrict our attention to case (i.b) with \((A-\{g\})\cap \mathcal {L}_{>0}=\emptyset \). There are now 3 possibilities: (i.b.1) k 3 ≠ 0 ≠ k 4 corresponding to Condition (12.9), (i.b.2) k 3 = 0 ≠ k 4 corresponding to Condition (12.8), and (i.b.3) k 3 ≠ 0 = k 4 corresponding to Condition (12.7)—k 3 = 0 = k 4 is impossible because 0∉K. It is possible to show that each of these three cases leads eventually to 0 ∈ R K(A ∖{g}), a contradiction.

We now turn to case (ii), where we assume that \(A\cap \mathcal {L}_{>0}=\emptyset \) and that there are λ 1 and λ 2 in \(\mathbb {R}_{>0}\) and (k 1, k 2) in K such that {λ 1(k 1 − 1, k 1), λ 2(k 2, k 2 − 1)}⊆ A ∪{(−1, 0), (0, −1)}. Here we distinguish between three possibilities: (ii.a) g∉{λ 1(k 1 − 1, k 1), λ 2(k 2, k 2 − 1)}, (ii.b) g = λ 1(k 1 − 1, k 1), and (ii.c) g = λ 2(k 2, k 2 − 1).

But we see at once that case (ii.a) is impossible, because it implies by Condition (12.11) that 0 ∈ R K(A ∖{g}), a contradiction. So we now concentrate on the cases (ii.b) and (ii.c), where it is by the way obvious that indeed \(A\cap \mathcal {L}_{>0}=\emptyset \).

We begin with the discussion of case (ii.b). We first of all claim that now k 1 > 0. Indeed, if k 1 = 0 then (k 1, k 2) = (0, k 2) ∈ K, and Property K2 implies that k 2 > 0. Since we know that in this case λ 2(k 2, k 2 − 1) ∈ A ∖{g} [since g = λ 1(k 1 − 1, k 1) ≠ λ 2(k 2, k 2 − 1)], Condition (12.8) guarantees that 0 ∈ R K(A ∖{g}), a contradiction.

So we may assume that k 1 > 0, and the assumption that g ∈ R K(A), or in other words, that 0 ∈ R K(A −{g}), leaves us with two possibilities: that (ii.b.1) \((A-\{g\})\cap \mathcal {L}_{>0}\neq \emptyset \), or that (ii.b.2) {λ 3(k 3 − 1, k 3), λ 4(k 4, k 4 − 1)}⊆ (A −{g}) ∪{(−1, 0), (0, −1)} for some λ 3 and λ 4 in \(\mathbb {R}_{>0}\) and (k 3, k 4) in K.

For case (ii.b.1), there is some such that . Since the second component λ 1 k 1 + y′ of f is positive and \(f\notin \mathcal {L}_{>0}\), we find that f must lie in the second quadrant, and therefore its first component λ 1 k 1 − λ 1 + x′ is negative: λ 1 k 1 < λ 1 − x′ and therefore . If we now let , then \(f=\lambda _3^*(k_3^*-1,k_3^*)\). Moreover, \(k_3^*<1\) because this is equivalent to λ 1 k 1 − λ 1 + x′ < 0, which we have already found to be true. Similarly, \(k_3^*\geq k_1\) because this is equivalent to x′k 1 + y′(1 − k 1) ≥ 0. Then \((k_3^*,k_2)\in K\) because (k 1, k 2) ∈ K and K is increasing [Property K1]. Since we now know that \(\{\lambda _3^*(k_3^*-1,k_3^*),\lambda _2(k_2,k_2-1)\}\subseteq A\setminus \{g\}\), Condition (12.9) guarantees that 0 ∈ R K(A ∖{g}), a contradiction.
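The definitions of \(\lambda _3^*\) and \(k_3^*\) are elided in this paragraph; a reconstruction that is consistent with the two equivalences invoked (offered here only as a plausible reading, not as the authors' literal choice) is to write f = g + (x′, y′) with (x′, y′) > 0 and to take

$$\displaystyle \begin{aligned} \lambda_3^*:=\lambda_1+y'-x',\qquad k_3^*:=\frac{\lambda_1k_1+y'}{\lambda_1+y'-x'}, \end{aligned} $$

so that \(\lambda _3^*>0\) because f lies in the second quadrant, \(f=\lambda _3^*(k_3^*-1,k_3^*)\), while \(k_3^*<1\) indeed reduces to λ 1 k 1 − λ 1 + x′ < 0 and \(k_3^*\geq k_1\) to x′k 1 + y′(1 − k 1) ≥ 0.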

For case (ii.b.2), {g + λ 3(k 3 − 1, k 3), g + λ 4(k 4, k 4 − 1)}⊆ A ∪{g + (−1, 0), g + (0, −1)}, or in other words, {(λ 1 k 1+λ 3 k 3−λ 1−λ 3, λ 1 k 1+λ 3 k 3), (λ 1 k 1+λ 4 k 4−λ 1, λ 1 k 1+λ 4 k 4−λ 4)}⊆ A∪{g+(−1, 0), g+(0, −1)}. We claim that here k 3 < k 1. To prove this, assume ex absurdo that k 3 ≥ k 1, then also . Moreover, \(k_3^*<1\) because it is a convex combination of k 1 < 1 and k 3 < 1, and therefore \((k_3^*,k_2)\in [0,1)^2\setminus \{0\}\) and \((k_3^*,k_2)\geq (k_1,k_2)\). Then \((k_3^*,k_2)\in K\) because (k 1, k 2) ∈ K and K is increasing [Property K1]. Moreover, if we also let , then \(\lambda _3^*(k_3^*-1,k_3^*) =g+\lambda _3(k_3-1,k_3) \in A\cup \{g+(-1,0),g+(0,-1)\}\), and since we know that λ 3(k 3 − 1, k 3)∉{(−1, 0), 0, (0, −1)} [because λ 3 > 0 and k 3 ≥ k 1 > 0], this leads us to conclude that \(\{\lambda _3^*(k_3^*-1,k_3^*),\lambda _2(k_2,k_2-1)\}\subseteq A\setminus \{g\}\), so Condition (12.9) together with \((k_3^*,k_2)\in K\) guarantees that 0 ∈ R K(A ∖{g}), a contradiction.
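Similarly, the pair \(\lambda _3^*\), \(k_3^*\) left implicit in this case (and reused, up to notation, in the subcases below, in particular in case (ii.b.2.3)) is presumably the reparametrisation of the sum of the two scaled vectors; a plausible reading is

$$\displaystyle \begin{aligned} \lambda_3^*:=\lambda_1+\lambda_3,\qquad k_3^*:=\frac{\lambda_1k_1+\lambda_3k_3}{\lambda_1+\lambda_3}, \end{aligned} $$

under which \(\lambda _3^*(k_3^*-1,k_3^*)=(\lambda _1k_1+\lambda _3k_3-\lambda _1-\lambda _3,\lambda _1k_1+\lambda _3k_3)=g+\lambda _3(k_3-1,k_3)\) and \(k_3^*\) is indeed a convex combination of k 1 and k 3.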

Since k 3 < k 1 rules out the possibility that k 1 = 0, we find that k 1 > 0 as an intermediate result. In the remainder of this case (ii.b), note that nothing depends on whether k 2 = 0 or k 2 > 0. We can now distinguish between three distinct possibilities: (ii.b.2.1) k 3 > 0 and k 4 > 0, (ii.b.2.2) k 3 = 0 and k 4 > 0, and (ii.b.2.3) k 3 > 0 and k 4 = 0, which correspond to Conditions (12.9), (12.8) and (12.7), respectively—k 3 = 0 = k 4 is impossible because 0∉K.

In case (ii.b.2.1) we see that {λ 3(k 3 − 1, k 3), λ 4(k 4, k 4 − 1)}∩{(−1, 0), 0, (0, −1)} = ∅, and therefore {(λ 1 k 1+λ 3 k 3λ 1λ 3, λ 1 k 1+λ 3 k 3), (λ 1 k 1+λ 4 k 4λ 1, λ 1 k 1+λ 4 k 4λ 4)}⊆ A∖{g}. We distinguish between two possibilities, which will determine in what quadrants these points lie: λ 4 ≤ λ 1 and λ 4 > λ 1.

If λ 4 ≤ λ 1, then we establish, reasoning ex absurdo, that k 4 ≤ 1 − k 1. Once we have this, because K is increasing [Property K1], we infer from (k 3, k 4) ∈ K that (k 3, 1 − k 1) ∈ K. We distinguish between two further possibilities: k 1 + k 2 < 1 and k 1 + k 2 ≥ 1.

If k 1 + k 2 < 1 then we can use Property K3a with a = 1 − k 1, b = k 3 and c = k 2. Observe that a + b = 1 − k 1 + k 3 < 1 because k 3 < k 1, that c = k 2 < 1 − k 1 = a by assumption, that (b, a) = (k 3, 1 − k 1) ∈ K has been proved above, and that (1 − a, c) = (k 1, k 2) ∈ K also by assumption, whence \((\forall k_3^{\prime }\in (k_3,1))(k_3^{\prime },k_2)\in K\). In particular, let . Then \(k_3^{\prime }>\min \{k_1,k_3\}=k_3>0\), where the first inequality follows from λ 1 > 0 and λ 3 > 0, and the equality from k 3 < k 1. Moreover, \(k_3^{\prime }<1\) because it is a convex combination of k 1 < 1 and k 3 < 1. Hence \(k_3^{\prime }\in (k_3,1)\) and therefore \((k_3^{\prime },k_2)\in K\). If we now let , then we see that \(\lambda _3^{\prime }(k_3^{\prime }-1,k_3^{\prime })=(\lambda _1k_1+\lambda _3k_3-\lambda _1-\lambda _3,\lambda _1k_1+\lambda _3k_3)\in A\setminus \{g\}\), whence also \(\{\lambda _3^{\prime }(k_3^{\prime }-1,k_3^{\prime }),\lambda _2(k_2,k_2-1)\}\subseteq A\setminus \{g\}\), and Condition (12.9) now guarantees that 0 ∈ R K(A ∖{g}), a contradiction.

If k 1 + k 2 ≥ 1 then we have that k 2 ≥ 1 − k 1 ≥ k 4. Also , where the first inequality follows from λ 1 > 0 and λ 3 > 0, and the equality from k 1 > k 3. Moreover, \(k_3^*<1\) because it is a convex combination of k 1 < 1 and k 3 < 1. This tells us that \((k_3^*,k_2)\in [0,1)^2\setminus \{0\}\) and \((k_3^*,k_2)>(k_3,1-k_1)\). We then find that \((k_3^*,k_2)\in K\) because (k 3, 1 − k 1) ∈ K and K is increasing [Property K1]. If we now let then we find that \(\lambda _3^*(k_3^*-1,k_3^*)=(\lambda _1k_1+\lambda _3k_3-\lambda _1-\lambda _3,\lambda _1k_1+\lambda _3k_3)\in A\setminus \{0\}\), and therefore also \(\{\lambda _3^*(k_3^*-1,k_3^*),\lambda _2(k_2,k_2-1)\}\subseteq A\setminus \{g\}\), and Condition (12.9) now guarantees that 0 ∈ R K(A ∖{g}), a contradiction.

If λ 4 > λ 1, then we establish, again reasoning ex absurdo, that k 4 ≤ 1 − k 1. Once we have this, using that K is increasing, we infer from (k 3, k 4) ∈ K that (k 3, 1 − k 1) ∈ K. We now have the same two possibilities k 1 + k 2 < 1 and k 1 + k 2 ≥ 1 as before, and for each of them, we can construct a contradiction in exactly the same way as for the case when λ 4 ≤ λ 1.

This shows that we always arrive at a contradiction in case (ii.b.2.1).

In case (ii.b.2.2) we see that λ 4(k 4, k 4 − 1)∉{(−1, 0), 0, (0, −1)}, and therefore (λ 1 k 1 + λ 4 k 4 − λ 1, λ 1 k 1 + λ 4 k 4 − λ 4) ∈ A ∖{g}. We distinguish between two possibilities, which will determine in what quadrant this point lies: λ 4 ≤ λ 1 or λ 4 > λ 1.

If λ 4 ≤ λ 1, then we claim that k 4 ≤ 1 − k 1. To prove this, assume ex absurdo that k 4 > 1 − k 1, so k 1 + k 4 − 1 > 0. If λ 1 = λ 4, then (λ 1 k 1 + λ 4 k 4 − λ 1, λ 1 k 1 + λ 4 k 4 − λ 4) = λ 1(k 1 + k 4 − 1, k 1 + k 4 − 1) > 0, a contradiction, so we may assume that λ 4 < λ 1. We now wonder in what quadrant the vector (λ 1 k 1 + λ 4 k 4 − λ 1, λ 1 k 1 + λ 4 k 4 − λ 4) ≠ 0 lies. We infer from k 1 > 0, λ 1 > λ 4 > 0 and k 1 + k 4 > 1 that λ 1 k 1 + λ 4 k 4 − λ 4 > λ 4(k 1 + k 4) − λ 4 > 0. Since \(A\cap \mathcal {L}_{>0}=\emptyset \), we find that (λ 1 k 1 + λ 4 k 4 − λ 1, λ 1 k 1 + λ 4 k 4 − λ 4) must lie in the second quadrant, and therefore its first component λ 1 k 1 + λ 4 k 4 − λ 1 must be negative: λ 1 k 1 + λ 4 k 4 < λ 1. This tells us that . Moreover, \(k_4^*>k_1\) because this is equivalent to k 4 > 1 − k 1. Hence \((k_4^*,k_2)\in [0,1)^2\setminus {0}\) and \((k_4^*,k_2)>(k_1,k_2)\). This tells us that \((k_4^*,k_2)\in K\) because (k 1, k 2) ∈ K and K is increasing [Property K1]. If we now let , then we see that \(\lambda _4^*(k_4^*-1,k_4^*)=(\lambda _1k_1+\lambda _4k_4-\lambda _1,\lambda _1k_1+\lambda _4k_4-\lambda _4)\in A\setminus \{g\}\). Hence also \(\{\lambda _4^*(k_4^*-1,k_4^*),\lambda _2(k_2,k_2-1)\}\subseteq A\setminus \{g\}\), and Condition (12.9) now guarantees that 0 ∈ R K(A ∖{g}), a contradiction.

So we see that 0 < k 4 ≤ 1 − k 1 < 1, so (0, 1 − k 1) ∈ [0, 1)2 ∖{0} and (0, 1 − k 1) ≥ (0, k 4) and hence, because K is increasing [Property K1], we infer from (0, k 4) = (k 3, k 4) ∈ K that also (0, 1 − k 1) ∈ K. We distinguish between two further possibilities: k 1 + k 2 < 1 and k 1 + k 2 ≥ 1.

If k 1 + k 2 < 1 then we can use Property K3b with a = 1 − k 1 and c = k 2. Observe that c = k 2 < 1 − k 1 = a by assumption, that (0, a) = (0, 1 − k 1) ∈ K was derived above, and that (1 − a, c) = (k 1, k 2) ∈ K also by assumption, and therefore we find that (0, k 2) ∈ K. Since λ 2(k 2, k 2 − 1) ∈ A ∖{g}, Condition (12.8) now guarantees that 0 ∈ R K(A ∖{g}), a contradiction.

If k 1 + k 2 ≥ 1 then we have that k 2 ≥ 1 − k 1 ≥ k 4. Then (0, k 2) ∈ K because (0, k 4) ∈ K and K is increasing [Property K1]. Since λ 2(k 2, k 2 − 1) ∈ A ∖{g}, Condition (12.8) now guarantees that 0 ∈ R K(A ∖{g}), a contradiction.

If λ 4 > λ 1, then we claim that, here too, k 4 ≤ 1 − k 1. To prove this, assume ex absurdo that k 4 > 1 − k 1. We wonder in what quadrant the vector (λ 1 k 1 + λ 4 k 4 − λ 1, λ 1 k 1 + λ 4 k 4 − λ 4) lies. Infer from 0 < 1 − k 1 < k 4 and 0 < λ 1 < λ 4 that λ 1 k 1 + λ 4 k 4 − λ 1 > λ 1(k 1 + k 4) − λ 1 > 0. Since \(A\cap \mathcal {L}_{>0}=\emptyset \), we find that the vector (λ 1 k 1 + λ 4 k 4 − λ 1, λ 1 k 1 + λ 4 k 4 − λ 4) must lie in the fourth quadrant, and therefore its second component λ 1 k 1 + λ 4 k 4 − λ 4 must be negative: λ 1 k 1 + λ 4 k 4 < λ 4. This tells us that . Moreover, \(k_4^*>k_4\) because this is equivalent to k 4 > 1 − k 1. Hence \((0,k_4^*)\in [0,1)^2\setminus \{0\}\) and \((0,k_4^*)>(0,k_4)\). This tells us that \((0,k_4^*)\in K\) because (0, k 4) ∈ K and K is increasing [Property K1]. If we now let , then we see that \(\lambda _4^*(k_4^*,k_4^*-1)=(\lambda _1k_1+\lambda _4k_4-\lambda _1,\lambda _1k_1+\lambda _4k_4-\lambda _4)\in A\setminus \{g\}\), and Condition (12.8) now guarantees that 0 ∈ R K(A ∖{g}), a contradiction.

So we see that 0 < k 4 ≤ 1 − k 1 < 1, and hence, because K is increasing, we infer from (k 3, k 4) ∈ K that (k 3, 1 − k 1) ∈ K. We now have the same two possibilities k 1 + k 2 < 1 and k 1 + k 2 ≥ 1 as before, and for each of them, we can construct a contradiction in exactly the same way as for the case when λ 4 ≤ λ 1.

We conclude that case (ii.b.2.2) always leads to a contradiction.

In case (ii.b.2.3) we see that λ 3(k 3 − 1, k 3)∉{(−1, 0), 0, (0, −1)}, and therefore (λ 1 k 1 + λ 3 k 3 − λ 1 − λ 3, λ 1 k 1 + λ 3 k 3) ∈ A ∖{g}, or if we let and , \(\lambda _3^*(k_3^*-1,k_3^*)\in A\setminus \{g\}\). Observe that also \(k_3^*<1\) because it is a convex combination of k 1 < 1 and k 3 < 1. This tells us that \((k_3^*,0)\in [0,1)^2\setminus \{0\}\). Moreover, we have that \(k_3^*>\min \{k_1,k_3\}=k_3>0\) [the strict inequality holds because λ 1 > 0 and λ 3 > 0, and the equality holds because k 1 > k 3]. Hence \((k_3^*,0)>(k_3,0)\) and therefore \((k_3^*,0)\in K\), because also (k 3, 0) ∈ K and K is increasing [Property K1]. Since \(\lambda _3^*(k_3^*-1,k_3^*)\in A\setminus \{g\}\), Condition (12.7) now guarantees that 0 ∈ R K(A ∖{g}), a contradiction.

We have now found a contradiction in cases (ii.b.2.1)–(ii.b.2.3), which tells us that case (ii.b.2) always leads to a contradiction. Since case (ii.b.1) also led to a contradiction, we may conclude that case (ii.b) always leads to a contradiction.

The discussion of the last remaining case (ii.c) is completely similar to that of case (ii.b): we can distinguish between similar cases, and in each of them we can construct a contradiction in the same manner, by exchanging the roles of k 1 and k 2, and of k 3 and k 4.

Since we have now arrived at a contradiction in all possible cases, we conclude that R K indeed satisfies Axiom R3b.

We finish the proof by establishing that R K also satisfies Axiom R1. Since we have already shown that R K satisfies Axiom R4b [see Proposition 8] and Axiom R3b [see the argumentation above], by Corollary 1 it suffices to show that 0∉R K({0}). By Condition (12.5), this is indeed the case. □

Proof of Lemma 3

We only prove the first equivalence; the proofs for the second and the third equivalences are analogous. It suffices to establish the direct implication, since the converse follows from Axiom R3a.

Call and for every k in {1, …, m}, and and for every k in {1, …, n}. Then \(0\in R(\{0,f_1,\dots ,f_m,g_1,\dots ,g_n\}) \Leftrightarrow 0\in R(\{0,(\ell _1-1,\ell _1),\dots ,(\ell _m-1,\ell _m),(\ell _1^{\prime },\ell _1^{\prime }-1),\dots ,(\ell _n^{\prime },\ell _n^{\prime }-1)\})\), using Condition (12.2). Let and . Then \((\ell _k-1,\ell _k)\in R(\{(\ell _i-1,\ell _i),(\ell _k-1,\ell _k)\})\) by Axiom R2, and then also \((\ell _k-1,\ell _k)\in R(\{0,(\ell _1-1,\ell _1),\dots ,(\ell _m-1,\ell _m),(\ell _1^{\prime },\ell _1^{\prime }-1),\dots ,(\ell _n^{\prime },\ell _n^{\prime }-1)\})\) by Axiom R3a, for all k in \(\{1,\dots ,m\}\setminus \mathcal {I}\). In a similar way, we find that \(\{0\}\cup \{{(\ell _k-1,\ell _k)}\colon {k\in \{1,\dots ,m\}\setminus \mathcal {I}}\}\cup \{{(\ell _{k^{\prime }}^{\prime },\ell _{k^{\prime }}^{\prime }-1)}\colon {k'\in \{1,\dots ,n\}\setminus \mathcal {J}}\} \subseteq R(\{0,(\ell _1-1,\ell _1),\dots ,(\ell _m-1,\ell _m),(\ell _1^{\prime },\ell _1^{\prime }-1),\dots ,(\ell _n^{\prime },\ell _n^{\prime }-1)\})\). Then Axiom R3b implies that \( 0 \in R(\{0\}\cup \{{(\ell _k-1,\ell _k)}\colon {k\in \mathcal {I}}\}\cup \{{(\ell _{k^{\prime }}^{\prime },\ell _{k^{\prime }}^{\prime }-1)}\colon {k'\in \mathcal {J}}\}) =R(\{0,(\ell _i-1,\ell _i),(\ell _j^{\prime },\ell _j^{\prime }-1)\})\), whence indeed 0 ∈ R({0, f i, g j}), by Condition (12.2). □

Proof of Proposition 9

For the first statement, assume that R is coherent and satisfies Condition (12.2). Then we infer from Proposition 7 that K R satisfies Properties K1–K3, and therefore Proposition 8 guarantees that \(R_{K_{R}}\) is coherent and satisfies Condition (12.2) as well. To prove that \(R=R_{K_{R}}\), we consider any A in \(\mathcal {Q}\) and f in A, and show that \(f\in R(A)\Leftrightarrow f\in R_{K_{R}}(A)\). Since both R and \(R_{K_{R}}\) satisfy Axiom R4b [Proposition 8], we can assume without loss of generality that f = 0.

For the direct implication, assume that 0 ∈ R(A). If \(A\cap \mathcal {L}_{>0}\neq \emptyset \) then \(0\in R_{K_{R}}(A)\) by Condition (12.6). If \(A\cap \mathcal {L}_{>0}=\emptyset \) then 0 ∈ R(A) implies that g(H) > 0 or g(T) > 0 for some g in A. If we use the notation \(\mathcal {V}_{\mathrm {II}}\cap A=\{{g_1},\dots ,{g_m}\}\) and \(\mathcal {V}_{\mathrm {IV}}\cap A=\{{g_1^{\prime }},\dots ,{g_n^{\prime }}\}\) with m and n in \(\mathbb {Z}_{\geq 0}\), this tells us that \(\max \{n,m\}>0\). Also, we may assume without loss of generality that \(A\cap \mathcal {L}_{<0}=\emptyset \). By Lemma 3 we infer that there are three possibilities:

  1. (i)

    \(0\in R(\{0,\tilde {g},\tilde {g}'\})\), and hence 0 ∈ R({0, h, h′});

  2. (ii)

    \(0\in R(\{0,\tilde {g}\})\), and hence 0 ∈ R({0, h});

  3. (iii)

    \(0\in R(\{0,\tilde {g}'\})\), and hence 0 ∈ R({0, h′});

where we let, to ease the notation, and . For each of these possible cases, we find respectively:

  1. (i)

    (h(T), h′(H)) ∈ K R, which tells us that \(0\in R_{K_{R}}(\{0,\tilde {g},\tilde {g}'\})\);

  2. (ii)

    (h(T), 0) ∈ K R, from which we infer that \(0\in R_{K_{R}}(\{0,\tilde {g}\})\) by Condition (12.8);

  3. (iii)

    (0, h′(H)) ∈ K R , from which we infer that \(0\in R_{K_{R}}(\{0,\tilde {g}'\})\) by Condition (12.7).

In all three cases we can now conclude that, indeed, \(0\in R_{K_{R}}(A)\), by Axiom R3a.

For the converse implication, assume that \(0\in R_{K_{R}}(A)\). If \(A\cap \mathcal {L}_{>0}\neq \emptyset \), then 0 ∈ R(A) by Axioms R2 and R3a, so assume that \(A\cap \mathcal {L}_{>0}=\emptyset \). If Condition (12.7) holds, then there is some k 1 in (0, 1) and some λ 1 in \(\mathbb {R}_{>0}\) such that (k 1, 0) ∈ K R and λ 1(k 1 − 1, k 1) ∈ A. The first statement means that 0 ∈ R({(k 1 − 1, k 1), 0, (0, −1)}), whence, after applying a familiar combination of Axioms R2, R3a and R3b, also 0 ∈ R({(k 1 − 1, k 1), 0}). Applying Condition (12.2), the second statement, and Axiom R3a now leads us to deduce that indeed 0 ∈ R(A).

The remaining possibility is that either Condition (12.8) or Condition (12.9) holds. The proof in this case is similar. This concludes the proof of the first statement.

For the second statement, assume that K satisfies Properties K1–K3, then we infer from Proposition 8 that R K is coherent and satisfies Condition (12.2). Proposition 7 then guarantees that \(K_{R_{K}}\) satisfies Properties K1–K3 as well. To show that \(K=K_{R_{K}}\), consider any (ℓ 1, ℓ 2) in [0, 1)2 ∖{0}. First assume that \((\ell _1,\ell _2)\in K_{R_{K}}\), meaning that 0 ∈ R K({(ℓ 1 − 1, ℓ 1), 0, (ℓ 2, ℓ 2 − 1)}), by the definition of the rejection set of a rejection function. We have to prove that this implies that (ℓ 1, ℓ 2) ∈ K. The definition of R K [Definition 8] now tells us that Condition (12.6), Condition (12.7), Condition (12.8), or Condition (12.9) must obtain, with . Since (ℓ 1, ℓ 2) ∈ [0, 1)2 ∖{0}, we infer that Condition (12.6) cannot be fulfilled, and we therefore have three possibilities remaining: (a) Condition (12.7), (b) Condition (12.8), or (c) Condition (12.9) is satisfied.

In case (a) there are λ 1 in \(\mathbb {R}_{>0}\) and (k 1, 0) in K such that λ 1(k 1 − 1, k 1) ∈ A. But, because A = {(ℓ 1 − 1, ℓ 1), (ℓ 2, ℓ 2 − 1)} with (ℓ 1, ℓ 2) ∈ [0, 1)2 ∖{0}, this implies that λ 1 = 1 and k 1 = ℓ 1. This guarantees that (ℓ 1, 0) ∈ K and, since K is increasing [Property K1], indeed also that (ℓ 1, ℓ 2) ∈ K. The proof in cases (b) and (c) is similar.

Conversely, assume that (ℓ 1, ℓ 2) ∈ K, then Condition (12.11) guarantees that in particular 0 ∈ R K({(ℓ 1 − 1, ℓ 1), 0, (ℓ 2, ℓ 2 − 1)}), which implies that \((\ell _1,\ell _2)\in K_{R_{K}}\). □

Proof of Lemma 4

Visual proof: see the three possible situations depicted below (Fig. 12.4). □

Fig. 12.4 Visual proof of Lemma 4

Proof of Proposition 10

We first prove that (i) implies (ii). Assume that R K satisfies Axiom R5, and consider any (k 1, k 2) in [0, 1)2 ∖{0} such that k 1 + k 2 > 1. It then follows that (k 1, k 2) ∈ (0, 1)2, and also that \(\big (\frac {k_1+k_2-1}{2},\frac {k_1+k_2-1}{2}\big )>0\), whence \(0\in R_{K}(\{0,\big (\frac {k_1+k_2-1}{2},\frac {k_1+k_2-1}{2}\big )\})\) by Condition (12.6). By Proposition 8, R K satisfies Axiom R3a, whence \(0\in R_{K}(\{(k_1-1,k_1),0,(k_2,k_2-1),(\frac {k_1+k_2-1}{2},\frac {k_1+k_2-1}{2})\})\). Also, \(\big (\frac {k_1+k_2-1}{2},\frac {k_1+k_2-1}{2}\big )\in \mathrm {CH}(\{(k_1-1,k_1),(k_2,k_2-1)\})\). But then Axiom R5 implies that 0 ∈ R K({(k 1 − 1, k 1), 0, (k 2, k 2 − 1)}), whence indeed (k 1, k 2) ∈ K.

Next, we prove that (ii) implies (i). Consider arbitrary A and A 1 in \(\mathcal {Q}\) such that A ⊆ A 1 ⊆CH(A), and let us show that R K(A 1) ∩ A ⊆ R K(A). Let and for some n and k in \(\mathbb {N}\). Assume that f i ∈ R K(A 1) for some i in {1, …, n}. We then have to prove that f i ∈ R K(A). We can assume without loss of generality that f i = 0, because also A −{f i}⊆ A 1 −{f i}⊆CH(A) −{f i} = CH(A −{f i}). To ease the notation along, let and for every k such that \({f_k}\in \mathcal {V}_{\mathrm {II}}\) [there might be no such k] and verify that λ k > 0 and f k = λ k(ℓ k − 1, ℓ k) for every gamble f k in \(A\cap \mathcal {V}_{\mathrm {II}}\). Similarly, for every k in {1, …, n} such that \({f_k}\in \mathcal {V}_{\mathrm {IV}}\) [there might be no such k], let and ; then λ k > 0 and f k = λ k(ℓ k, ℓ k − 1) for every gamble f k in \(A\cap \mathcal {V}_{\mathrm {IV}}\).
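The definitions of λ k and ℓ k are not displayed here; a reconstruction consistent with the maximisations over h(T)/(h(T) − h(H)) and h(H)/(h(H) − h(T)) that appear later in this proof, and which we offer only as a plausible reading, is

$$\displaystyle \begin{aligned} {f_k}\in\mathcal{V}_{\mathrm{II}}&:\quad \lambda_k:={f_k}(\mathrm{T})-{f_k}(\mathrm{H})>0,\qquad \ell_k:=\frac{{f_k}(\mathrm{T})}{{f_k}(\mathrm{T})-{f_k}(\mathrm{H})},\\ {f_k}\in\mathcal{V}_{\mathrm{IV}}&:\quad \lambda_k:={f_k}(\mathrm{H})-{f_k}(\mathrm{T})>0,\qquad \ell_k:=\frac{{f_k}(\mathrm{H})}{{f_k}(\mathrm{H})-{f_k}(\mathrm{T})}, \end{aligned} $$

which, writing options as pairs of values on H and T, indeed gives f k = λ k(ℓ k − 1, ℓ k) in the first case and f k = λ k(ℓ k, ℓ k − 1) in the second.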

First of all, we see that \(A\cap \mathcal {L}_{>0}\neq \emptyset \) implies that indeed 0 ∈ R K(A), by Condition (12.6). We may therefore in the remainder of this proof assume that \(A\cap \mathcal {L}_{>0}=\emptyset \). Next, we observe that \(\mathrm {CH}(A)\cap \mathcal {L}_{>0}\neq \emptyset \) also implies that 0 ∈ R K(A). This can be proven ex absurdo by observing that it implies that \(A\cap \mathcal {V}_{\mathrm {II}}\neq \emptyset \) and \(A\cap \mathcal {V}_{\mathrm {IV}}\neq \emptyset \) and by suitably applying condition (ii).

Now, since we have assumed that f i = 0 ∈ R K(A 1), Definition 8 tells us that there are four possibilities: one of the four Conditions (12.6)–(12.9) must hold for A 1.

Condition (12.6) for A 1 amounts to \(A_{1}\cap \mathcal {L}_{>0}\neq \emptyset \), contradicting our assumption that \(\mathrm {CH}(A)\cap \mathcal {L}_{>0}=\emptyset \), because A 1 ⊆CH(A).

If Condition (12.9) holds for A 1, then \(\{\lambda _1^*(k_1^*-1,k_1^*),\lambda _2^*(k_2^*,k_2^*-1)\}\subseteq A_{1}\) for some \(\lambda _1^*\) and \(\lambda _2^*\) in \(\mathbb {R}_{>0}\) and \((k_1^*,k_2^*)\) in K ∩ (0, 1)2. Let and . Then \(A\cap \mathcal {V}_{\mathrm {II}}\neq \emptyset \) and \(A\cap \mathcal {V}_{\mathrm {IV}}\neq \emptyset \) , so we may assume again without loss of generality that f 1 is a gamble in \( \operatorname *{\mbox{arg max}}\big \{{\frac {{h}(\mathrm {T})}{{h}(\mathrm {T})-{h}(\mathrm {H})}}\colon {{h}\in A\cap \mathcal {V}_{\mathrm {II}}}\big \}\) and that f 2 is a gamble in \( \operatorname *{\mbox{arg max}}\big \{{\frac {{h}(\mathrm {H})}{{h}(\mathrm {H})-{h}(\mathrm {T})}}\colon {{h}\in A\cap \mathcal {V}_{\mathrm {IV}}}\big \}\). Since we have assumed that \(\mathrm {CH}(A)\cap \mathcal {L}_{>0}=\emptyset \), we see that \(\mathrm {CH}(\{{h_1},0,{h_2}\})\cap \mathcal {L}_{>0}=\emptyset \)—and therefore also \(\mathrm {posi}(\{{h_1},0,{h_2}\})\cap \mathcal {L}_{>0}=\emptyset \)—whence, by Equation (12.12), \(k_1^*+k_2^*\leq 1\). If \((k_1^*,k_2^*)=(\ell _{k},\ell _{m})\) for some k and m in {1, …, n} such that \({f_k}\in \mathcal {V}_{\mathrm {II}}\) and \({f_m}\in \mathcal {V}_{\mathrm {IV}}\), then 0 ∈ R K(A) by Condition (12.9). If this is not the case, then we distinguish between three possibilities: (i) \(k_1^*\neq \ell _k\) for all k in {1, …, n} such that \({f_k}\in \mathcal {V}_{\mathrm {II}}\) and \(k_2^*=\ell _m\) for some m in {1, …, n} such that \({f_m}\in \mathcal {V}_{\mathrm {IV}}\), (ii) \(k_1^*=\ell _k\) for some k in {1, …, n} such that \({f_k}\in \mathcal {V}_{\mathrm {II}}\) and \(k_2^*\neq \ell _m\) for all m in {1, …, n} such that \({f_m}\in \mathcal {V}_{\mathrm {IV}}\), and (iii) \(k_1^*\neq \ell _k\) for all k in {1, …, n} such that \({f_k}\in \mathcal {V}_{\mathrm {II}}\) and \(k_2^*\neq \ell _m\) for all m in {1, …, n} such that \({f_m}\in \mathcal {V}_{\mathrm {IV}}\).

In case (i), we already find that \(\lambda (k_2^*,k_2^*-1)\in A\) for some λ in \(\mathbb {R}_{>0}\). If \(k_1^*\leq \ell _1\), then \((k_1^*,k_2^*)\in K\) implies that \((\ell _1,k_2^*)\in K\) because K is increasing. Since we know that f 1 = λ 1(ℓ 1 − 1, ℓ 1) ∈ A, this guarantees that 0 ∈ R K(A), by Condition (12.9). If \(k_1^*>\ell _1\), then we claim that necessarily also ℓ 1 + ℓ 2 > 1, and therefore (ℓ 1, ℓ 2) ∈ K by Property K4, so indeed 0 ∈ R K(A) by Condition (12.9). To see that ℓ 1 + ℓ 2 > 1, assume ex absurdo that (a) ℓ 1 + ℓ 2 < 1 or (b) ℓ 1 + ℓ 2 = 1; it is not difficult to show that both these cases lead to a contradiction.

In case (ii), a completely similar argument leads us to conclude that 0 ∈ R K(A) here as well.

In case (iii) there are, again, three possibilities: (α) \(k_1^*<\ell _1\) and \(k_2^*<\ell _2\), so (ℓ 1, ℓ 2) ∈ K because K is increasing, and therefore 0 ∈ R K(A) by Condition (12.9); (β) \(k_1^*>\ell _1\) and \(k_2^*<\ell _2\), and its symmetric counterpart \(k_1^*<\ell _1\) and \(k_2^*>\ell _2\); and (γ) \(k_1^*>\ell _1\) and \(k_2^*>\ell _2\), and therefore \(\ell _1+\ell _2<k_1^*+k_2^*\leq 1\), so ℓ 1 + ℓ 2 < 1 and Lemma 4 guarantees that h 1∉posi({f 1, 0, f 2}) = posi(A), and therefore a fortiori h 1∉CH(A), a contradiction. It therefore suffices to consider case (β), and show that \(k_1^*>\ell _1\) and \(k_2^*<\ell _2\) implies that 0 ∈ R K(A), since the case that \(k_1^*<\ell _1\) and \(k_2^*>\ell _2\) can be covered by a completely symmetrical argument. So assume that \(k_1^*>\ell _1\) and \(k_2^*<\ell _2\). Since h 1 ∈CH(A) ⊆posi(A), Lemma 4 and \(k_1^*>\ell _1\) guarantee that necessarily ℓ 1 + ℓ 2 > 1, so (ℓ 1, ℓ 2) ∈ K by Property K4, and therefore once again 0 ∈ R K(A), by Condition (12.9).

The proof when Conditions (12.8) or (12.7) hold is similar to that for Condition (12.9). □

Proof of Proposition 11

We will prove that π 1 is non-increasing; the proof that π 2 is non-increasing is completely analogous. Assume ex absurdo that π 1(z′) > π 1(z) for some z and z′ in [0, 1) such that z′ > z. Then, by the definition of π 1, we have (∀y ∈ (π 1(z), 1))(z, y) ∈ K. Because K is increasing, we find (∀y ∈ (π 1(z), 1))(z′, y) ∈ K, and hence in particular (∀y ∈ (π 1(z), π 1(z′)))(z′, y) ∈ K, a contradiction.

Consider now z ∈ (0, 1). Let us prove the first statement; the proof of the second one is completely analogous. Recall that (z, y) ∈ K for all y in (π 1(z), 1), by the definition of π 1. Call . Since K is increasing, we infer that for all 𝜖 in (0, δ), (z, 1 − z − 𝜖) ∈ K. On the other hand, by definition of π 1 it follows that (z + 𝜖, y′) ∈ K for all 𝜖 in (0, δ) and y′ in (π 1(z + 𝜖), 1). We call b = z, a = 1 − z − 𝜖 and c = y′. Note that a + b = 1 − z − 𝜖 + z < 1 and c = y′ < 1 − z − 𝜖 = a for any y′ in (π 1(z + 𝜖), 1 − z − 𝜖) ⊆ (π 1(z + 𝜖), 1). To see that π 1(z + 𝜖) < 1 − z − 𝜖, assume ex absurdo that π 1(z + 𝜖) ≥ 1 − z − 𝜖, then π 1(z) ≥ 1 − z − 𝜖 by the first statement, indeed a contradiction with the fact that 𝜖 < δ. We use Property K3 to infer that (z, y′) ∈ K for all y′ in (π 1(z + 𝜖), 1) and 𝜖 in (0, δ). Infer that π 1(z) ≤ π 1(z + 𝜖), and since π 1 is non-increasing by the first part, we conclude that π 1(z) = π 1(z + 𝜖), for all 𝜖 in (0, δ). Therefore, π 1(z) = π 1(t) for all t in (z, 1 − π 1(z)). □
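The quantity δ is elided above; a reading consistent with its use (in particular with the final contradiction that 𝜖 < δ) is

$$\displaystyle \begin{aligned} \delta:=1-z-\pi_1(z), \end{aligned} $$

which is positive exactly when the interval (z, 1 − π 1(z)) in the conclusion is non-empty, and under which 1 − z − 𝜖 lies in (π 1(z), 1) for every 𝜖 in (0, δ), while the bound π 1(z) ≥ 1 − z − 𝜖 derived in the ex absurdo step would force 𝜖 ≥ δ.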

Proof of Proposition 12

We first prove necessity. Assume that R is such that K R is weakly Archimedean, and consider any u in \(\mathcal {V}_{\mathrm {II}}\) and v in \(\mathcal {V}_{\mathrm {IV}}\) such that \(\mathrm {posi}(\{u,{v}\})\cap \mathcal {V}_{{\succeq }0}=\emptyset \), and 0 ∈ R({u + 𝜖, 0, v}) and 0 ∈ R({u, 0, v + 𝜖}) for all 𝜖 in \(\mathbb {R}_{>0}\). Then, due to Proposition 1, we find that \(\forall \epsilon \in \mathbb {R}_{>0}, 0\in R(\{(k_1-1,k_1)+\epsilon ,0,(k_2,k_2-1)\})\) and 0 ∈ R({(k 1 − 1, k 1), 0, (k 2, k 2 − 1) + 𝜖}) for and . In particular, we find that \(\forall k_1^{\prime }\in (k_1,1),k_2^{\prime }\in (k_2,1)\), \(0\in R(\{(k_1^{\prime }-1,k_1^{\prime }),0,(k_2,k_2-1)\})\) and \(0\in R(\{(k_1-1,k_1),0,(k_2^{\prime },k_2^{\prime }-1)\})\), whence \((k_1^{\prime },k_2)\in K_{R}\) and \((k_1,k_2^{\prime })\in K_{R}\) for all \(k_1^{\prime }\) in (k 1, 1) and \(k_2^{\prime }\) in (k 2, 1), by Definition 7. Also, it can be checked that k 1 + k 2 < 1. The weak Archimedeanity of K R implies that (k 1, k 2) ∈ K R by Definition 10, whence 0 ∈ R({(k 1 − 1, k 1), 0, (k 2, k 2 − 1)}). In turn, that implies by Proposition 1 that 0 ∈ R({u, 0, v}).

We now turn to sufficiency. Assume that R satisfies Equation (12.19) and consider any (k 1, k 2) in (0, 1)2 such that k 1 + k 2 < 1 and \((k_1^{\prime },k_2)\in K_{R}\) and \((k_1,k_2^{\prime })\in K_{R}\) for all \(k_1^{\prime }\) in (k 1, 1) and \(k_2^{\prime }\) in (k 2, 1). Then \(\forall k_1^{\prime }\in (k_1,1),k_2^{\prime }\in (k_2,1)\), \(0\in R(\{(k_1^{\prime }-1,k_1^{\prime }),0,(k_2,k_2-1)\})\) and \(0\in R(\{(k_1-1,k_1),0,(k_2^{\prime },k_2^{\prime }-1)\})\), whence \(\forall \epsilon \in \mathbb {R}_{>0}\), 0 ∈ R({(k 1 − 1, k 1) + 𝜖, 0, (k 2, k 2 − 1)}) and 0 ∈ R({(k 1 − 1, k 1), 0, (k 2, k 2 − 1) + 𝜖}) by Van Camp et al. (2018a, Proposition 2). Clearly, \((k_1-1,k_1)\in \mathcal {V}_{\mathrm {II}}\) and \((k_2,k_2-1)\in \mathcal {V}_{\mathrm {IV}}\). Due to Equation (12.12), \(\mathrm {posi}(\{(k_1-1,k_1),(k_2,k_2-1)\})\cap \mathcal {V}_{{\succeq }0}=\emptyset \). Then, using Equation (12.19), we find that 0 ∈ R({(k 1 − 1, k 1), 0, (k 2, k 2 − 1)}), or in other words, that (k 1, k 2) ∈ K R. □

Proof of Proposition 13

From the correspondence between weak Archimedeanity for rejection functions and rejection sets (Proposition 12), as well as Proposition 9, it suffices to establish the result for rejection sets. Recalling that in that case the infimum of a family of rejection sets corresponds to their intersection, we deduce from the definition that if K i is weakly Archimedean for every i in I, then so is \(\inf \{{K_i}\colon {i\in I}\}\). □
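Under the reading of Definition 10 suggested by its use in the proofs of Propositions 12 and 14, the closure argument runs as follows: write \(K:=\inf \{{K_i}\colon {i\in I}\}=\bigcap \{{K_i}\colon {i\in I}\}\) and consider any (k 1, k 2) in (0, 1)2 with k 1 + k 2 < 1 such that

$$\displaystyle \begin{aligned} (k_1^{\prime},k_2)\in K\subseteq K_i\quad\text{and}\quad(k_1,k_2^{\prime})\in K\subseteq K_i\qquad\text{for all }k_1^{\prime}\in(k_1,1),\ k_2^{\prime}\in(k_2,1)\text{ and all }i\in I; \end{aligned} $$

then the weak Archimedeanity of each K i yields (k 1, k 2) ∈ K i for every i in I, and hence (k 1, k 2) ∈ K.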

Proof of Corollary 2

Taking into account Proposition 13, it suffices to show that any lexicographic rejection function is weakly Archimedean. Assume ex absurdo that this is not the case for some rejection function R on \(\mathcal {V}\). By Proposition 12, this means that its associated rejection set K R is not weakly Archimedean. Thus, there are u in \(\mathcal {V}_{\mathrm {II}}\) and v in \(\mathcal {V}_{\mathrm {IV}}\) such that \(\mathrm {posi}(\{u,{v}\})\cap \mathcal {V}_{{\succeq }0}=\emptyset \) and \(\forall \epsilon \in \mathbb {R}_{>0}\), (0 ∈ R({u + 𝜖, 0, v}) ∩ R({u, 0, v + 𝜖})) while 0∉R({u, 0, v}). Let D R be the lexicographic set of desirable options associated with R. It follows that uD R and vD R, and as a consequence that u + 𝜖 ∈ D R and v + 𝜖 ∈ D R for every 𝜖 in \(\mathbb {R}_{>0}\). If we denote by \(P_{D_{R}}\) the linear prevision induced by D R, given by , it follows that \(P_{D_{R}}(u)=P_{D_{R}}(v)=0\). Since by assumption \(u\in \mathcal {V}_{\mathrm {II}}\) and \({v}\in \mathcal {V}_{\mathrm {IV}}\), it follows that there must be some α in (0, 1) such that αu + (1 − α)v = 0, a contradiction with the assumption \(\mathrm {posi}(\{u,{v}\})\cap \mathcal {V}_{{\succeq }0}=\emptyset \). □
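The explicit expression for \(P_{D_{R}}\) is elided above; the standard choice in this literature, which we assume is the one intended, is

$$\displaystyle \begin{aligned} P_{D_{R}}(f):=\sup\{{\mu\in\mathbb{R}}\colon{f-\mu\in D_{R}}\}, \end{aligned} $$

with μ identified with the constant option (μ, μ). Under this reading, u + 𝜖 ∈ D R for every 𝜖 in \(\mathbb {R}_{>0}\) gives \(P_{D_{R}}(u)\geq 0\), while, for a coherent D R, u∉D R gives \(P_{D_{R}}(u)\leq 0\); similarly for v.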

Proof of Proposition 14

Assume ex absurdo that π 1(z) ≠ 1 − z and π 2(1 − z) ≠ z, and hence π 1(z) < 1 − z and π 2(1 − z) < z, for all z in [k 1, 1 − k 2]. Then we use Proposition 11 to infer that in particular π 1(k 1) = π 1(t) for all t in (k 1, 1 − π 1(k 1)). There are two possibilities: (i) π 1(k 1) > k 2 or (ii) π 1(k 1) ≤ k 2.

If (i) π 1(k 1) > k 2, we look at π 1(1 − π 1(k 1)). By the definition of π 1, we find (1 − π 1(k 1), y) ∈ K for all y in (π 1(1 − π 1(k 1)), 1). Moreover, since π 1(k 1) ∈ [0, 1 − k 1] by the definition of π 1, we find that π 1(k 1) ∈ (k 2, 1 − k 1] and hence 1 − π 1(k 1) ∈ [k 1, 1 − k 2). By the assumption that π 1(z) < 1 − z for all z in [k 1, 1 − k 2], we find that π 1(1 − π 1(k 1)) < π 1(k 1). We also look at π 2(π 1(k 1)). By the definition of π 2, we find (x, π 1(k 1)) ∈ K for all x in (π 2(π 1(k 1)), 1). By the assumption that π 2(1 − z) < z for all z in [k 1, 1 − k 2], we find that π 2(π 1(k 1)) < 1 − π 1(k 1). Call a = π 1(k 1), b = x and c = y for x in (π 2(π 1(k 1)), 1) and y in (π 1(1 − π 1(k 1)), 1). Use Property K3 to infer that (x, y′) ∈ K and (x′, y) ∈ K for all x greater than but close enough to π 2(π 1(k 1)), y greater than but close enough to π 1(1 − π 1(k 1)), x′ in (x, 1) and y′ in (y, 1). Hence by weak Archimedeanity (x, y) ∈ K for all x in (π 2(π 1(k 1)), 1) and y in (π 1(1 − π 1(k 1)), 1). Now, take \(x=\frac {\pi _2(\pi _1(k_1))+1-\pi _1(k_1)}{2}\) and \(y=\frac {\pi _1(1-\pi _1(k_1))+\pi _1(k_1)}{2}\) to infer that \((\frac {\pi _2(\pi _1(k_1))+1-\pi _1(k_1)}{2},\frac {\pi _1(1-\pi _1(k_1))+\pi _1(k_1)}{2})\in K\), and take any t in \((\frac {\pi _2(\pi _1(k_1))+1-\pi _1(k_1)}{2},1-\pi _1(k_1))\) and infer that \(\pi _1(t) \leq \pi _1(\frac {\pi _2(\pi _1(k_1))+1-\pi _1(k_1)}{2}) <\pi _1(k_1)\). That is a contradiction with the assumption that π 1(t) = π 1(k 1) for all t in (k 1, 1 − π 1(k 1)).

So we may assume that (ii) π 1(k 1) ≤ k 2 is the case. Infer that then (k 1, y) ∈ K for all y in (k 2, 1) by the definition of π 1. Using a similar argument, we can infer that (x, k 2) ∈ K for all x in (k 1, 1). We use now the assumption that K is weakly Archimedean (Definition 10) to infer that (k 1, k 2) ∈ K, a contradiction. □

Proof of Theorem 2

We first show that K ⊆ K′. Consider any (k 1, k 2) in [0, 1)2 such that (k 1, k 2)∉K′. Then there must be some D′ in \(\mathcal {D}'\) such that \(0\notin R_{D'}(\{(k_1-1,k_1),0,(k_2,k_2-1)\})\). There are a number of possibilities:

  • If D′ = D x for some x in (0, 1), then (x, 1 − x)∉K, k 1 ≤ x and k 2 ≤ 1 − x by Equation (12.13), whence also (k 1, k 2)∉K, taking into account that K is increasing.

  • If \(D'=D_x^H\) for some x in (0, 1), then (x, 1 − x) ∈ K by Equation (12.14), \((\forall \epsilon \in \mathbb {R}_{>0})(x,1-x-\epsilon )\notin K\), k 1 ≤ x and k 2 < 1 − x. This means that there is some x in [k 1, 1 − k 2) such that (x, 1 − x) ∈ K and \((\forall \epsilon \in \mathbb {R}_{>0} )(x,1-x-\epsilon )\notin K\), whence \(((\exists x\in [k_1,1-k_2))((x,1-x-\frac {1-k_2-x}{2}\big )=(x,\frac {1-x+k_2}{2})\notin K)) \Rightarrow (k_1,k_2)\notin K\).

  • If \(D'=D_x^T\), we follow a similar reasoning to conclude that (k 1, k 2)∉K.

  • If \(D'=D_0^H\), then k 1 = 0, and \((\forall \epsilon \in \mathbb {R}_{>0})(0,1-\epsilon )\notin K\), and therefore (k 1, k 2) = (0, k 2)∉K.

  • Finally, if \(D'=D_1^T\), we follow a reasoning similar to that in the previous point and derive that (k 1, k 2) = (k 1, 0)∉K.

We now turn to showing K′⊆ K. Consider any (k 1, k 2) in [0, 1)2 such that (k 1, k 2)∉K. By Proposition 7, k 1 + k 2 ≤ 1. There are two possibilities: either (i) k 1 + k 2 = 1 or (ii) k 1 + k 2 < 1. If (i) k 1 + k 2 = 1 then k 1 lies in (0, 1) and hence \(D_{k_1}\in \mathcal {D}'\) because (k 1, 1 − k 1) = (k 1, k 2)∉K. Then infer \(0\notin R_{D_{k_1}}(\{(k_1-1,k_1),0,(k_2,k_2-1)\})\) by Equation (12.13), whence \(0\notin \bigcap _{D\in \mathcal {D}'}R_{D}(\{(k_1-1,k_1),0,(k_2,k_2-1)\})\) and hence (k 1, k 2)∉K′. So we may assume that (ii) k 1 + k 2 < 1. We now use Proposition 14 to infer that π 1(z) = 1 − z or π 2(1 − z) = z for some z in [k 1, 1 − k 2]. There are four possible cases: (a) π 1(k 1) = 1 − k 1; (b) π 2(k 2) = 1 − k 2; (c) π 1(z) = 1 − z for some z in (k 1, 1 − k 2); and (d) π 2(1 − z) = z for some z in (k 1, 1 − k 2). In any of them it is not difficult to prove that (k 1, k 2)∉K′. □

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

Cite this chapter

Miranda, E., Van Camp, A. (2022). Coherent Choice Functions Without Archimedeanity. In: Augustin, T., Cozman, F.G., Wheeler, G. (eds) Reflections on the Foundations of Probability and Statistics. Theory and Decision Library A, vol 54. Springer, Cham. https://doi.org/10.1007/978-3-031-15436-2_12