1 Introduction

Consider a sample \(\{X_1, X_2,\ldots , X_n\}\) of independent and identically distributed random variables having finite expected value, and denote with \(S=\sum _{i=1}^n X_i\) their sum. If one considers the expected value of any of the variables \(X_i\) given that \(S= s \in \mathbb {R}\), i.e., \(E[X_i \vert S=s]\), then it is easy to verify that \(E[X_i \vert S=s] = s/n\); thus, such a conditioned expected value of \(X_i\) increases in s. However, this property is no longer satisfied if a stronger stochastic comparison is considered, such as, for example, the usual stochastic order, as the following simple counterexample shows. To this aim, recall that, given the variables \(Y_1\) and \(Y_2\), then \(Y_1\) is said to be smaller than \(Y_2\) in the usual stochastic order (denoted by \(Y_1 \le _{ST} Y_2\)) if, and only if, \(E[\phi (Y_1)] \le E[\phi (Y_2)]\) for all non-decreasing functions \(\phi \) for which the expectations exist, or, equivalently, if \(P[Y_1> y] \le P[Y_2>y]\) for all \(y \in \mathbb {R}\) (see, e.g., Belzunce et al. 2015 or Shaked and Shanthikumar 2007 for details, properties and applications of the usual stochastic order and other stochastic comparisons).

Example 1.1

Let \(X_1\) and \(X_2\) be two independent discrete random variables that can assume values in \(\{0,1,2,3\}\) with probabilities \(\{1/6, 1/6, 1/6, 1/2\}\), respectively, and let \(S=X_1+X_2\). Then,

$$\begin{aligned} P[X_1>0 \vert S = 2] = 2/3 \ > \ P[X_1>0 \vert S = 3] = 5/8 \end{aligned}$$

and

$$\begin{aligned} P[X_1>1 \vert S = 2] = 1/3 \ \ < \ \ P[X_1>1 \vert S = 3] = 1/2, \end{aligned}$$

so that \([X_1 \vert S = 2]\) and \([X_1 \vert S = 3]\) are not comparable in the usual stochastic order, i.e., \([X_1 \vert S = s]\) is not stochastically increasing in s.
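
The conditional probabilities above can be verified by direct enumeration. The following minimal Python sketch (the distribution is the one stated in the example; the function name and enumeration are ours, for illustration only) reproduces them.

```python
# Minimal numerical check of Example 1.1.
from itertools import product

vals = [0, 1, 2, 3]
p = {0: 1/6, 1: 1/6, 2: 1/6, 3: 1/2}   # common law of X1 and X2

def cond_prob(event, s):
    """P[event(X1) | X1 + X2 = s] for independent X1, X2 with law p."""
    num = sum(p[x1] * p[x2] for x1, x2 in product(vals, vals)
              if x1 + x2 == s and event(x1))
    den = sum(p[x1] * p[x2] for x1, x2 in product(vals, vals) if x1 + x2 == s)
    return num / den

print(cond_prob(lambda x: x > 0, 2), cond_prob(lambda x: x > 0, 3))  # 2/3 > 5/8
print(cond_prob(lambda x: x > 1, 2), cond_prob(lambda x: x > 1, 3))  # 1/3 < 1/2
```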

The monotonicity of \(E[\phi (X_i) \vert S=s]\) in s for a non-decreasing function \(\phi \) can find a wide range of applications in different research contexts, for example, in statistical estimation and testing when one can just observe the sum of the sample and must make inferences on the distribution of the \(X_i\), or in applied probability modeling, where one can observe only the total number of individuals in a population but needs to take decisions based on the proportion of a specific sub-category of members.

For this reason, sufficient conditions for the expectation \(E[\phi (X_i) \vert S=s]\) to be increasing in s for any non-decreasing function \(\phi \) have been investigated, and were finally provided by Efron (1965), who proved the following statement. To state it, recall that an absolutely continuous random variable X is said to have a logconcave density \(f_X\) if it satisfies

$$\begin{aligned} \ln (f_X(\lambda x+(1-\lambda ) y))\ge \lambda \ln f_X(x)+(1-\lambda ) \ln f_X(y) \end{aligned}$$

for all \(\lambda \in (0,1)\) and all \(x,y\) in the support of X. Logconcavity of the density is a well-known property, which is satisfied by a large number of remarkable distributions, such as the normal or the exponential distributions, and has a straightforward analog definition for discrete random variables (see, e.g., Bagnoli and Bergstrom 2005 or Saumard and Wellner 2014 for two recent comprehensive surveys). Moreover, it must be pointed out that alternative nomenclatures are commonly used in the literature for this property, such as \(PF_2\) (Pólya frequency functions of order 2) or ILR (increasing likelihood ratio) densities.

Proposition 1.1

(Efron 1965) Let \(\{X_1, X_2,\ldots , X_n\}\) be a set of independent random variables having logconcave densities, let \(S=\sum _{i=1}^n X_i\) be their sum, and let \(\phi : \mathbb {R}^n \rightarrow \mathbb {R}\) be a real measurable function non-decreasing in each of its arguments. Then, \(E[\phi (X_1,X_2,\ldots ,X_n) \vert S=s]\) is a non-decreasing function of s.

Proposition 1.1 provides conditions for stochastic monotonicity in s of the whole random vector \((X_1,X_2,\ldots ,X_n)\) given \(S=s\), which, in general, is a stronger property than the stochastic monotonicity in s of \([X_i \vert S=s]\) for any \(i=1,\ldots ,n\). Also, the assumption of identical distribution for the \(X_i\) is not required. However, independence is still required.

The stochastic monotonicity property stated in Proposition 1.1 can be of interest in a variety of fields. It has been applied, for example, in queueing theory (see, e.g., Masuda 1995 and Shanthikumar and Yao 1987), in economic theory (Edered 2010; Wang 2012), in stochastic comparisons of order statistics (Boland et al. 1996; Zhuang et al. 2010), in dependence modeling (Block et al. 1985; Hu and Hu 1999) and in statistical testing, estimation and regression (Cohen and Sackrowitz 1987, 1990; Hwang and Stefanski 1994). An interesting and exhaustive list of references where the property has been applied can be found in Saumard and Wellner (2018). Moreover, alternative proofs or generalizations of this property have been provided in Daduna and Szekli (1996), where applications in queueing networks are considered, in Shanthikumar (1987), where a more general result, of which Proposition 1.1 is just a corollary, is proved, and in Liggett (2000), where a discrete version of the statement is obtained (with applications in modeling for interacting particle systems). A different interesting generalization is also described in the recent paper (Oudghiri 2021).

In particular, an important alternative result was proved one year later by Lehmann, in Example 12 in Lehmann (1966). There, he showed that, under the same assumptions on the variables \(X_i\) (with logconcavity required for all but one of them), the monotonicity property holds for a stronger stochastic order, namely the likelihood ratio order. Given the variables \(Y_1\) and \(Y_2\) having densities \(g_1\) and \(g_2\), \(Y_1\) is said to be smaller than \(Y_2\) in the likelihood ratio order (denoted by \(Y_1 \le _{LR} Y_2\)) if, and only if, the ratio \(g_1(y)/g_2(y)\) is non-increasing in y over the union of the supports of \(Y_1\) and \(Y_2\) (for details see, e.g., Belzunce et al. 2015 or Shaked and Shanthikumar 2007). It must be pointed out that the likelihood ratio order is stronger than the usual stochastic order, in the sense that if \(Y_1 \le _{LR} Y_2\), then \(Y_1 \le _{ST} Y_2\), but not vice versa.

Proposition 1.2

(Lehmann 1966) Let \(\{X_1, X_2,\ldots , X_n\}\) be a set of independent random variables having logconcave densities, except for \(X_1\), and let \(S=\sum _{i=1}^n X_i\) be their sum. Then, \([X_1 \vert S=s]\) is non-decreasing in the likelihood ratio order in s, i.e., \([X_1 \vert S=s_1] \le _{LR} [X_1 \vert S=s_2]\) whenever \(s_1 \le s_2\).

Note that, since the likelihood ratio order implies the usual stochastic order, under the same assumptions one has \([X_1 \vert S=s_1] \le _{ST} [X_1 \vert S=s_2]\) whenever \(s_1 \le s_2\). Also note that in this statement, as in the previous one, independence of the variables \(X_i\) is assumed.

Among the main reasons of interest in Proposition 1.2 is the fact that for many parametric families of distributions the likelihood ratio order coincides with the ordering between the parameters. This is the case, for example, of the exponential family, the Poisson family, and the normal family (with respect to the mean \(\mu \), for fixed variance \(\sigma ^2\)). Thus, for example, uniformly most powerful tests based on the value of the statistic S can be determined for composite hypotheses on the parameter, according to the Karlin–Rubin theorem (see, e.g., Brown et al. 1976).

In practical situations, however, the assumption of independence seems too restrictive. This is the case in many applied fields, such as reliability, where items subjected to common environments are usually considered, or actuarial science, where policyholders may have family relationships or share the same media channels. In these cases the independence assumption is not fulfilled, and the above monotonicity properties can fail even though the involved variables have logconcave densities, as shown in the following counterexample.

Example 1.2

Let \((X_1,X_2)\) be a random vector having the bivariate Gumbel exponential distribution, i.e., be such that its joint survival function is

$$\begin{aligned} {\bar{F}}(x_1,x_2) = P(X_1>x_1,X_2>x_2) =\exp [-(\alpha _1 x_1 + \alpha _2 x_2 + \gamma x_1 x_2)] \end{aligned}$$

for \(x_1,x_2 \ge 0\), \(\alpha _1, \alpha _2 > 0\) and \(\gamma \in [0, \alpha _1 \alpha _2] \subseteq \mathbb {R}^+\). Here, \(X_1\) and \(X_2\) have exponential distributions; thus, they have logconcave densities.

Observe that the density of \([X_1 \vert S=s]\) is

$$\begin{aligned} f_{[X_1 \vert S=s]}(x) = \left\{ \begin{array}{l@{\quad }l} \frac{f(x,s-x)}{f_{S}(s)} &{} \text {if} \ \ x \in [0,s]; \\ 0 &{} \text {if} \ \ x > s; \\ \end{array}\right. \end{aligned}$$

where f is the joint density of \((X_1,X_2)\), whose analytical expression, for \((x_1,x_2) \in [0,\infty )^2\), is

$$\begin{aligned} f(x_1,x_2) = (\alpha _1 \alpha _2 - \gamma + \alpha _1 \gamma x_1 + \alpha _2 \gamma x_2 + \gamma ^2 x_1 x_2) \exp [-(\alpha _1 x_1 + \alpha _2 x_2 + \gamma x_1 x_2)], \end{aligned}$$

being zero elsewhere. The ratio between the densities of \([X_1 \vert S=s_1]\) and \([X_1 \vert S=s_2]\), for \(s_1 \le s_2\), is then given by

$$\begin{aligned} \frac{f_{[X_1 \vert S=s_1]}(x)}{f_{[X_1 \vert S=s_2]}(x)} = \left\{ \begin{array}{l@{\quad }l} \frac{f(x,s_1-x)}{f(x,s_2-x)} \cdot \frac{f_{S}(s_2)}{f_{S}(s_1)} &{} \text {if} \ \ x \in [0,s_1]; \\ 0 &{} \text {if} \ \ x \in (s_1,s_2]; \\ \end{array}\right. \end{aligned}$$

which is defined in the union of the supports of \([X_1 \vert S=s_1]\) and \([X_1 \vert S=s_2]\) (that is, in \((0,s_2)\)).

With straightforward calculations, it is easy to verify that such a ratio is increasing for \(x \in [0,s_1]\), but then it collapses to zero in \((s_1,s_2]\). For example, for \(\alpha _1=\alpha _2=1\), \(\gamma =0.5\), \(s_1=1\) and \(s_2=2\), the ratio \(f(x,s_1-x)/f(x,s_2-x)\) (which coincides with the ratio above up to the positive constant \(f_{S}(s_2)/f_{S}(s_1)\), irrelevant for monotonicity) assumes values 1.81 for \(x=0\), 2.20 for \(x=0.5\), 2.56 for \(x=1^-\), while the ratio of the conditional densities is 0 for \(x \in (1,2)\). Thus, it does not satisfy monotonicity, and \([X_1 \vert S=s_1]\) and \([X_1 \vert S=s_2]\) are not comparable in the likelihood ratio order. Actually, they are also not comparable in the usual stochastic order, since the corresponding survival functions intersect.
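
The non-monotonicity just described can be checked numerically; the sketch below (a minimal illustration, not part of the original argument) evaluates the unnormalized ratio \(f(x,s_1-x)/f(x,s_2-x)\) with the parameters of the example.

```python
# Numerical sketch of Example 1.2: the ratio f(x, s1-x)/f(x, s2-x) for the
# bivariate Gumbel exponential density (normalizing constant omitted, since it
# does not affect monotonicity); parameters as in the text.
import numpy as np

a1, a2, g = 1.0, 1.0, 0.5          # alpha_1, alpha_2, gamma
s1, s2 = 1.0, 2.0

def f(x1, x2):
    return ((a1 * a2 - g + a1 * g * x1 + a2 * g * x2 + g**2 * x1 * x2)
            * np.exp(-(a1 * x1 + a2 * x2 + g * x1 * x2)))

for x in (0.0, 0.5, 0.999):
    print(round(x, 3), round(f(x, s1 - x) / f(x, s2 - x), 2))  # 1.81, 2.2, 2.56
# the ratio increases on [0, s1] but equals 0 on (s1, s2]: not monotone on (0, s2)
```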

Taking also into account the fact that there are few results where the distribution of the sum of dependent random variables is available in a closed form (see, e.g., Navarro and Sarabia 2020 for a detailed discussion on this topic), it becomes important to understand when the properties of monotonicity described above are satisfied also for dependent variables, even without explicitly knowing the distribution of their sum. To the best of our knowledge, generalizations to dependent variables of Proposition 1.1 have been provided only in the recent paper (Saumard and Wellner 2018), while no generalizations of Proposition 1.2 are available in the literature.

Therefore, the aim of this paper is to provide such generalizations of Proposition 1.2 and further generalizations of Proposition 1.1 in the case that the variables \(X_1, X_2, \ldots , X_n\) are not independent. The new extensions of Proposition 1.1 provided here describe conditions on the joint distribution of the \(X_i\) that seem easier to verify, and show that the class of bivariate distributions satisfying the property is wider than the one described in Saumard and Wellner (2018). Also, some generalizations in the case of random vectors having more than two components are provided here.

Together with this, monotonicity properties for \([S \vert X_1=x]\) in x, which follow easily from the main results, are presented as well.

The rest of the paper is organized as follows. Section 2 considers the case of bivariate vectors \((X_1,X_2)\), while the multivariate case, i.e., the case \((X_1,X_2,\ldots ,X_n)\) for \(n>2\), is considered in Sect. 3. Illustrative examples are provided in both sections. Finally, some conclusions are given in Sect. 4.

2 The bivariate case

First we consider the generalization of Proposition 1.2, for which, given an absolutely continuous random vector \((X_1,X_2)\), one can observe that the monotonicity of \([X_1 \vert S=s]\) in s in the likelihood ratio order is actually equivalent to a property of its joint density which is related to the notion of total positivity. To this aim, recall that a function \(\phi : \mathbb {R}^2 \rightarrow \mathbb {R}^+\) is said to be Totally Positive of order 2 (\(TP_2\), for short) in its arguments \((x_1,x_2)\) if, and only if, for any \(\mathbf {x},\mathbf {y}\) in \(\mathbb {R}^2\) it satisfies

$$\begin{aligned} \phi (\mathbf {x})\phi (\mathbf {y}) \le \phi (\mathbf {x} \wedge \mathbf {y})\phi (\mathbf {x} \vee \mathbf {y}), \end{aligned}$$

where the operators \(\wedge \) and \(\vee \) denote coordinatewise minimum and maximum, respectively.
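
Although not needed for the theory, the \(TP_2\) property of a given bivariate function can be probed numerically on a finite grid by checking the defining inequality on all ordered pairs of grid points. The following Python sketch (function and argument names are ours; the grids are user-supplied) can serve as a quick, necessary-only check of condition (a) in Proposition 2.1 below, e.g., with `phi = lambda x, s: f(x, s - x)` restricted to the relevant support.

```python
# Rough grid-based probe of the TP2 property (a necessary check only, since a
# finite grid cannot prove TP2); phi, xs and ss are supplied by the user.
def is_tp2_on_grid(phi, xs, ss, tol=1e-12):
    """Check phi(x1,s1)*phi(x2,s2) >= phi(x1,s2)*phi(x2,s1) for all ordered pairs."""
    for i, x1 in enumerate(xs):
        for x2 in xs[i:]:
            for j, s1 in enumerate(ss):
                for s2 in ss[j:]:
                    if phi(x1, s1) * phi(x2, s2) < phi(x1, s2) * phi(x2, s1) - tol:
                        return False
    return True
```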

Proposition 2.1

Let the vector \((X_1,X_2)\) have a joint density f. Then, the following conditions are equivalent:

(a) The function \(f(x,s-x)\) is \(TP_2\) in \((x,s)\);

(b) \([X_1 \vert S=s_1] \le _{LR} [X_1 \vert S=s_2]\) whenever \(s_1 \le s_2\);

(c) \([S \vert X_1=x_1] \le _{LR} [S \vert X_1=x_2]\) whenever \(x_1 \le x_2\).

Proof

For the equivalence between points (a) and (b) observe that, for any \(x,s \in \mathbb {R}\),

$$\begin{aligned} f_{[X_1 \vert S=s]}(x) =\frac{f(x,s-x)}{\int _{-\infty }^{+\infty } f(x,s-x) \mathrm{d}x} = \frac{f(x,s-x)}{A(s)}, \end{aligned}$$

where \(A(s)=\int _{-\infty }^{+\infty } f(x,s-x) \mathrm{d}x\). Taking into account that a factor depending on s alone does not affect the \(TP_2\) property, one can immediately observe that the ratio \(\frac{f_{[X_1 \vert S=s_1]}(x)}{f_{[X_1 \vert S=s_2]}(x)}\) is non-increasing in x, for every \(s_1 \le s_2\), if, and only if, \(f(x,s-x)\) is \(TP_2\) in \((x,s)\).

For the equivalence between points (a) and (c), one can reason as above, just observing that

$$\begin{aligned} f_{[S \vert X_1=x]}(s) = \frac{f(x,s-x)}{\int _{-\infty }^{+\infty } f(x,y-x)\mathrm{d}y}= \frac{f(x,s-x)}{B(x)}, \end{aligned}$$

where \(B(x)=\int _{-\infty }^{+\infty } f(x,y-x)\mathrm{d}y\). \(\square \)

Let us see some examples of bivariate random vectors that satisfy the conditions of Proposition 2.1.

Example 2.1

Let \((X_1,X_2)\) have a bivariate Gompertz distribution, i.e., be such that it has joint survival function

$$\begin{aligned} {\bar{F}}(x_1,x_2)= \exp [-\theta (e^{\alpha _1 x_1+\alpha _2 x_2}-1)] \end{aligned}$$

for \(x_1,x_2 \ge 0\), \(\alpha _1, \alpha _2 > 0\) and \(\theta \in [1, \infty )\). The corresponding joint density is

$$\begin{aligned} f(x_1,x_2)= & {} \alpha _1 \alpha _2 \theta (\theta e^{\alpha _1 x_1+\alpha _2 x_2}-1) \exp [-\theta (e^{\alpha _1 x_1+\alpha _2 x_2}-1)+\alpha _1 x_1+\alpha _2 x_2] \\= & {} h(\alpha _1 x_1+\alpha _2 x_2), \end{aligned}$$

where \(h(t)= \alpha _1 \alpha _2 \theta (\theta e^{t}-1) \exp [-\theta (e^{t}-1)+t], \ t \ge 0\).

With straightforward computations, one can verify that h(t) is logconcave, i.e., that the ratio \(h(t+s)/h(t)\) is decreasing in t for every \(s \ge 0\). Assume that \(\alpha _1 < \alpha _2\), and observe that in this case

$$\begin{aligned} f(x,s-x)=h((\alpha _1 -\alpha _2)x + \alpha _2 s)=h(\beta x+\alpha _2 s) \end{aligned}$$

for a negative \(\beta \) (and \(\beta x+\alpha _2 s \ge 0\)). Thus, for \(x_1 \le x_2\) and \(s_1 \le s_2\),

$$\begin{aligned} \frac{f(x_2, s_2-x_2)}{f(x_2, s_1-x_2)} = \frac{h(\beta x_2 + \alpha _2 s_2)}{h(\beta x_2 + \alpha _2 s_1)} \ge \frac{h(\beta x_1 + \alpha _2 s_2)}{h(\beta x_1 + \alpha _2 s_1)} = \frac{f(x_1, s_2-x_1)}{f(x_1, s_1-x_1)}, \end{aligned}$$

i.e., \(f(x, s-x)\) is \(TP_2\) in \((x,s)\).

Thus, for \(0<\alpha _1 < \alpha _2\), and any \(\theta \in [1, \infty )\), one can apply Proposition 2.1 obtaining that \([X_1 \vert S=s]\) is non-decreasing in the likelihood ratio order in s and that \([S \vert X_1=x]\) is non-decreasing in the likelihood ratio order in x.
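
As a complement to the analytical argument, logconcavity of h can be probed numerically: for a logconcave function the second differences of \(\log h\) on a uniform grid are non-positive. A minimal sketch follows, with parameter values chosen only for illustration (\(\theta \ge 1\)).

```python
# Numerical probe for Example 2.1: second differences of log h are non-positive.
import numpy as np

a1, a2, th = 0.5, 1.0, 1.5          # illustrative alpha_1, alpha_2, theta

def h(t):
    return a1 * a2 * th * (th * np.exp(t) - 1) * np.exp(-th * (np.exp(t) - 1) + t)

t = np.linspace(0.0, 3.0, 301)
print(np.all(np.diff(np.log(h(t)), n=2) <= 1e-10))   # True on this grid
```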

Example 2.2

Let \((X_1,X_2)\) have a bivariate Pareto distribution, i.e., be such that it has joint survival function

$$\begin{aligned} {\bar{F}}(x_1,x_2)= \big ( 1+\alpha _1 x_1+\alpha _2 x_2\big )^{-\frac{1}{\gamma }} \end{aligned}$$

for \(x_1,x_2 \ge 0\), \(\alpha _1, \alpha _2, \gamma > 0\). The corresponding density is

$$\begin{aligned} f(x_1,x_2)=\alpha _1 \alpha _2 \frac{1+\gamma }{\gamma ^2}\big ( 1+\alpha _1 x_1+\alpha _2 x_2\big )^{-\frac{2\gamma +1}{\gamma }}, \end{aligned}$$

so that

$$\begin{aligned} f(x,s-x)=\alpha _1 \alpha _2 \frac{1+\gamma }{\gamma ^2}\big ( 1+(\alpha _1-\alpha _2) x+\alpha _2 s\big )^{-\frac{2\gamma +1}{\gamma }}. \end{aligned}$$

It is easy to verify that the latter is \(TP_2\) in \((x,s)\) if, and only if, \(\alpha _1 \ge \alpha _2\). In this case, \([X_1 \vert S=s]\) is non-decreasing in the likelihood ratio order in s. On the contrary, if \(\alpha _1 \le \alpha _2\), then \([X_2 \vert S=s]\) is non-decreasing in the likelihood ratio order in s.

It is interesting to observe that \(X_1\) and \(X_2\), marginally, have Pareto distributions, i.e., they have densities

$$\begin{aligned} f_i(x)=\frac{\alpha _i}{\gamma } \big (1+\alpha _i x\big )^{-\frac{\gamma +1}{\gamma }}, \ \ \ x \ge 0, \end{aligned}$$

for \(i=1,2\), which are logconvex. Thus, this example shows that logconcavity of the density is not a necessary condition for the monotonicity of \([X_1 \vert S=s]\) in likelihood ratio order.
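
The claimed equivalence with the sign of \(\alpha _1-\alpha _2\) can be illustrated numerically: for this family the cross second differences of \(\log f(x,s-x)\) have constant sign over the support, determined by the sign of \(\alpha _1-\alpha _2\). A minimal sketch (points and parameters are arbitrary choices within the support):

```python
# Numerical illustration for Example 2.2: TP2 of f(x, s-x) holds iff alpha1 >= alpha2.
import numpy as np

def log_cross_difference(a1, a2, gam=1.0):
    # g(x, s) = f(x, s-x) up to a positive constant
    g = lambda x, s: (1 + (a1 - a2) * x + a2 * s) ** (-(2 * gam + 1) / gam)
    x1, x2, s1, s2 = 0.2, 0.8, 1.0, 2.0          # ordered points with x <= s
    return (np.log(g(x1, s1)) + np.log(g(x2, s2))
            - np.log(g(x1, s2)) - np.log(g(x2, s1)))

print(log_cross_difference(2.0, 1.0) >= 0)   # True:  alpha1 > alpha2
print(log_cross_difference(1.0, 2.0) >= 0)   # False: alpha1 < alpha2
```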

If the vector satisfies properties similar to those stated in Proposition 2.1, then also Proposition 1.1 in the bivariate case can be generalized to dependent variables. Since the comparison considered next is the usual stochastic order between random vectors, rather than between random variables, we recall here its definition. Given the random vectors \(\mathbf {Y}_1=(Y_{1,1}, Y_{1,2},\ldots , Y_{1,n})\) and \(\mathbf {Y}_2=(Y_{2,1}, Y_{2,2},\ldots , Y_{2,n})\), then \(\mathbf {Y}_1\) is said to be smaller than \(\mathbf {Y}_2\) in the usual stochastic order (denoted by \(\mathbf {Y}_1 \le _{ST} \mathbf {Y}_2\)) if, and only if, \(E[\phi (\mathbf {Y}_1)] \le E[\phi (\mathbf {Y}_2)]\) for all functions \(\phi :\mathbb {R}^n \rightarrow \mathbb {R}\) that are non-decreasing in each argument and for which the expectations exist. Equivalently, \(\mathbf {Y}_1 \le _{ST} \mathbf {Y}_2\) if \(P[\mathbf {Y}_1 \in \mathbf {U}] \le P[\mathbf {Y}_2 \in \mathbf {U}]\) for any upper set \(\mathbf {U} \subseteq \mathbb {R}^n\), i.e., a set such that \((y_{2,1}, y_{2,2},\ldots , y_{2,n}) \in \mathbf {U}\) whenever \(y_{1,i} \le y_{2,i}\) for all \(i=1,2,\ldots ,n\) and \((y_{1,1}, y_{1,2},\ldots , y_{1,n}) \in \mathbf {U}\) (see Shaked and Shanthikumar 2007 for details).

To prove such a generalization, which is an adaptation to the case of dependent variables of the proof given in Efron (1965) for Proposition 1.1, we need a preliminary statement.

Lemma 2.1

Let \(g:\mathbb {R}^2 \rightarrow \mathbb {R}^+\) be a function which is \(TP_2\) in its arguments and is defined on the whole \(\mathbb {R}^2\). If \(y_1 \le y_2\) and the equality

$$\begin{aligned} \frac{\int _{-\infty }^{x_1} g(z,y_1) \mathrm{d}z}{\int _{-\infty }^{+\infty } g(z,y_1) \mathrm{d}z} = \frac{\int _{-\infty }^{x_2} g(z,y_2) \mathrm{d}z}{\int _{-\infty }^{+\infty } g(z,y_2) \mathrm{d}z}, \end{aligned}$$
(1)

holds, then \(x_1 \le x_2\).

Proof

First observe that, by the well-known Basic Composition Formula (see, e.g., Karlin 1968), if \(g(z,y)\) is \(TP_2\) in \((z,y)\), then the integral

$$\begin{aligned} \int _{-\infty }^x g(z,y) \mathrm{d}z = \int _{-\infty }^{+\infty } \mathbf {1}_{(-\infty ,x]}(z) g(z,y) \mathrm{d}z \end{aligned}$$

is \(TP_2\) in \((x,y)\), since the indicator function \(\mathbf {1}_{(-\infty ,x]}(z)\) is a \(TP_2\) function in \((x,z)\).

It follows that, for \(x_2 < \infty \) and \(y_1 \le y_2\), one has

$$\begin{aligned} \frac{\int _{-\infty }^{x_2} g(z,y_1) \mathrm{d}z}{\int _{-\infty }^{+\infty } g(z,y_1) \mathrm{d}z} \ge \frac{\int _{-\infty }^{x_2} g(z,y_2) \mathrm{d}z}{\int _{-\infty }^{+\infty } g(z,y_2) \mathrm{d}z}. \end{aligned}$$
(2)

Since g assumes nonnegative values, the left-hand side of (2) is non-decreasing in the upper extreme of integration of its numerator; thus, the equality in (1) can be obtained only by reducing that upper extreme of integration, i.e., for \(x_1 \le x_2\). \(\square \)

We can now describe the conditions for a vector \((X_1,X_2)\) to satisfy the monotonicity in the usual stochastic order given the value of the sum \(S=X_1+X_2\).

Proposition 2.2

Let the vector \((X_1,X_2)\) have a joint density f. If \(f(x,s-x)\) and \(f(s-x,x)\) are both \(TP_2\) in \((x,s)\), then

$$\begin{aligned} {[}(X_1,X_2) \vert S=s_1] \le _{ST} [(X_1,X_2) \vert S=s_2] \end{aligned}$$

for any \(s_1 \le s_2\).

Proof

Observe that the cumulative distribution function of \(X_1\), conditional on \(S=s\), is

$$\begin{aligned} F_{X_1 \vert S=s}(x) =\frac{\int _{-\infty }^x f(z,s-z) \mathrm{d}z}{\int _{-\infty }^{+\infty } f(z,s-z) \mathrm{d}z}. \end{aligned}$$

Fix \(s_1, s_2 \in \mathbb {R}\) and, for any \(\alpha \in (0,1)\), let us denote with \(x_\alpha ^i\) the corresponding quantile with respect to the distribution of \([X_1 \vert S=s_i]\), i.e., let \(x_\alpha ^i\) be such that

$$\begin{aligned} F_{X_1 \vert S=s_i}(x_\alpha ^i) =\frac{\int _{-\infty }^{x_\alpha ^i} f(z,s_i-z) \mathrm{d}z}{\int _{-\infty }^{+\infty } f(z,s_i-z) \mathrm{d}z} = \alpha \end{aligned}$$

for \(i=1,2\). By Lemma 2.1 one has \(x_\alpha ^1 \le x_\alpha ^2\) whenever \(s_1 \le s_2\). By symmetry, again using Lemma 2.1 but switching the arguments, one also gets \(y_\alpha ^1 \le y_\alpha ^2\) whenever \(s_1 \le s_2\), where \(y_\alpha ^i\) is such that

$$\begin{aligned} F_{X_2 \vert S=s_i}(y_\alpha ^i) =\frac{\int _{-\infty }^{y_\alpha ^i} f(s_i-z,z) \mathrm{d}z}{\int _{-\infty }^{+\infty } f(s_i-z,z) \mathrm{d}z} = \alpha \end{aligned}$$

for \(i=1,2\). Let us now denote with \((x_\alpha ^i, y_{1-\alpha }^i)\) the point on the line \(x+y=s_i\) which represents the quantile of level \(\alpha \) for the conditional distribution of \([X_1 \vert S=s_i]\) (the zero quantile being the upper left point on the line). Because of the arguments above, one has, for \(s_1 \le s_2\), that the point \((x_\alpha ^2, y_{1-\alpha }^2)\) is located in the upper right orthant having vertex \((x_\alpha ^1, y_{1-\alpha }^1)\), i.e., \((x_\alpha ^1, y_{1-\alpha }^1) \le (x_\alpha ^2, y_{1-\alpha }^2)\).

Consider now any upper set \(\mathbf {U}\), and observe that, by convexity of upper sets, the intersection \(\mathbf {U} \ \bigcap \ \{(x,y) \in \mathbb {R}^2: x+y=s_1\}\) is a segment contained in the line having equation \(x+y=s_1\). Let us denote with \((x_{\alpha _1}^1, y_{1-\alpha _1}^1)\) and \((x_{\alpha _2}^1,y_{1-\alpha _2}^1)\) the coordinates of the two extremes of such a segment, so that

$$\begin{aligned} P[(X_1,X_2) \in \mathbf {U} \vert S= s_1] = F_{X_1 \vert S=s_1}(x_{\alpha _2}^1) - F_{X_1 \vert S=s_1}(x_{\alpha _1}^1) = \alpha _2-\alpha _1 \end{aligned}$$

for some \(0< \alpha _1 \le \alpha _2 < 1\).

Similarly, the intersection \(\mathbf {U} \ \bigcap \ \{(x,y) \in \mathbb {R}^2: x+y=s_2\}\) is a segment contained in the line \(x+y=s_2\), delimited by the points having coordinates \((x_{\widetilde{\alpha }_1}^2, y_{1-\widetilde{\alpha }_1}^2)\) and \((x_{\widetilde{\alpha }_2}^2,y_{1-\widetilde{\alpha }_2}^2)\), where \(0< \widetilde{\alpha }_1 \le \widetilde{\alpha }_2 < 1\) are such that

$$\begin{aligned} P[(X_1,X_2) \in \mathbf {U} \vert S= s_2] = F_{X_1 \vert S=s_2}(x_{\widetilde{\alpha }_2}^2) - F_{X_1 \vert S=s_2}(x_{\widetilde{\alpha }_1}^2) = \widetilde{\alpha }_2-\widetilde{\alpha }_1. \end{aligned}$$

Let us now consider the points \((x_{\alpha _1}^2, y_{1-\alpha _1}^2)\) and \((x_{\alpha _2}^2,y_{1-\alpha _2}^2)\) on the line \(x+y=s_2\) which correspond to the quantiles of levels \(\alpha _1\) and \(\alpha _2\) for the conditional distribution of \([X_1 \vert S=s_2]\). As seen before, \((x_{\alpha _1}^1, y_{1-\alpha _1}^1) \le (x_{\alpha _1}^2, y_{1-\alpha _1}^2)\) and \((x_{\alpha _2}^1, y_{1-\alpha _2}^1) \le (x_{\alpha _2}^2, y_{1-\alpha _2}^2)\); thus, \((x_{\alpha _1}^2, y_{1-\alpha _1}^2) \in \mathbf {U}\) and \((x_{\alpha _2}^2,y_{1-\alpha _2}^2) \in \mathbf {U}\) (by definition of upper sets). This means that \((x_{\alpha _1}^2, y_{1-\alpha _1}^2)\) and \((x_{\alpha _2}^2,y_{1-\alpha _2}^2)\) are in \(\mathbf {U} \ \bigcap \ \{(x,y); x+y=s_2\}\); thus, the segment that joins \((x_{\alpha _1}^2, y_{1-\alpha _1}^2)\) and \((x_{\alpha _2}^2,y_{1-\alpha _2}^2)\) is also a subset of the segment that joins \((x_{\widetilde{\alpha }_1}^2, y_{1-\widetilde{\alpha }_1}^2)\) and \((x_{\widetilde{\alpha }_2}^2,y_{1-\widetilde{\alpha }_2}^2)\). This implies that \(x_{\widetilde{\alpha }_1}^2 \le x_{\alpha _1}^2 \le x_{\alpha _2}^2 \le x_{\widetilde{\alpha }_2}^2\) and therefore

$$\begin{aligned} P[(X_1,X_2) \in \mathbf {U} \vert S= s_2]= & {} F_{X_1 \vert S=s_2}(x_{\widetilde{\alpha }_2}^2) - F_{X_1 \vert S=s_2}(x_{\widetilde{\alpha }_1}^2) \\\ge & {} F_{X_1 \vert S=s_2}(x_{\alpha _2}^2) - F_{X_1 \vert S=s_2}(x_{\alpha _1}^2) \\= & {} \alpha _2 -\alpha _1\\= & {} F_{X_1 \vert S=s_1}(x_{\alpha _2}^1) - F_{X_1 \vert S=s_1}(x_{\alpha _1}^1) \\= & {} P[(X_1,X_2) \in \mathbf {U} \vert S= s_1]. \end{aligned}$$

Then the assertion follows. \(\square \)

Note that the bivariate stochastic order implies the upper and lower orthant orders (see Shaked and Shanthikumar 2007, p. 308) and so, under the assumptions of the preceding proposition, we get

$$\begin{aligned} P(X_1\le x_1,X_2\le x_2 \vert S=s_1)\ge P(X_1\le x_1,X_2\le x_2 \vert S=s_2) \end{aligned}$$

and

$$\begin{aligned} P(X_1> x_1,X_2> x_2 \vert S=s_1)\le P(X_1> x_1,X_2> x_2 \vert S=s_2) \end{aligned}$$

for any \(x_1,x_2\) and any \(s_1 \le s_2\).

Also note that, under these assumptions, from Theorems 6.B.16 and 6.B.20 in Shaked and Shanthikumar (2007), p. 273 and 276, we also get

$$\begin{aligned} (\phi (X_1,X_2)\vert S=s_1)\le _{ST} (\phi (X_1,X_2)\vert S=s_2) \end{aligned}$$

and

$$\begin{aligned} (\phi _1(X_1)\vert S=s_1)+(\phi _2(X_2)\vert S=s_1)\le _{ST} (\phi _1(X_1)\vert S=s_2)+(\phi _2(X_2)\vert S=s_2) \end{aligned}$$

for any \(s_1\le s_2\) and any non-decreasing functions \(\phi \), \(\phi _1\) and \(\phi _2\).

The following example, showing a case where Proposition 2.2 can be applied, deals with frailty models. The frailty approach, introduced in Vaupel et al. (1979), provides a tool in survival analysis to model the dependence of lifetimes on common environmental conditions. According to this model, the frailty (an unobservable random variable that describes common risk factors) acts simultaneously on the hazard functions of the lifetimes. The vector \((X_{1}, X_{2})\) is said to be described by a bivariate frailty model if its joint survival function is defined as

$$\begin{aligned} \bar{F}(x_1,x_2) =E_{V} \left[ \prod _{i=1}^2 \bar{G}^{V}(x_i)\right] =\int _{\Omega } {\bar{G}}^{\omega }(x_1) {\bar{G}}^{\omega }(x_2) \mathrm{d} H(\omega ), \ x_1,x_2 \in \mathbb {R}^+, \end{aligned}$$
(3)

where V is a random variable taking values in \(\Omega \subseteq \mathbb {R}^+\) and having cumulative distribution H, while \(\bar{G}\) is any suitable survival function, commonly called the baseline survival function of the \(X_{i}\) (different from the common marginal survival function of \(X_{1}\) and \(X_2\) unless \(V=1\) a.s.). Note that this model is based on the assumption that the components in the vector are independent given the common frailty V. Further details on frailty models can be found in Navarro and Mulero (2020), where Time Transformed Exponential models (a generalization of frailty models) are considered.

In the particular case in which the baseline survival function is of exponential type, the vector satisfies, as shown below, the assumptions of both Proposition 2.1 and Proposition 2.2.

Example 2.3

Let \((X_{1}, X_{2})\) have a joint survival function defined as in (3), where \({\bar{G}}(x) = \exp (-\lambda x)\), with \(\lambda >0\), and where H is any cumulative distribution of a random environment taking values in \(\Omega \subseteq \mathbb {R}^+\). Then, its joint density function, for \(x_1,x_2 \in \mathbb {R}^+\), is

$$\begin{aligned} f(x_1,x_2)=\int _{\Omega } (\lambda \omega )^2 e^{-\lambda \omega {(x_1+x_2)}} \mathrm{d} H(\omega ), \end{aligned}$$

so that, for \(s \ge 0\) and \(x \in [0,s]\),

$$\begin{aligned} f(x,s-x)=\int _{\Omega } (\lambda \omega )^2 e^{-\lambda \omega s} \mathrm{d} H(\omega ). \end{aligned}$$

Being constant in x, the latter is \(TP_2\) in \((x,s)\). Similarly, also \(f(s-x,x)\) is \(TP_2\) in \((x,s)\). Thus, one can apply Proposition 2.1 obtaining that \([X_i \vert S=s_1] \le _{LR} [X_i \vert S=s_2]\), for any \(i=1,2\), whenever \(s_1 \le s_2\), and that \([S \vert X_1=x] \le _{LR} [S \vert X_1=y]\) whenever \(x \le y\). Also, one can apply Proposition 2.2 obtaining \([(X_1,X_2) \vert S=s_1] \le _{ST} [(X_1,X_2) \vert S=s_2]\) whenever \(s_1 \le s_2\).
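
The key step of the example, namely that \(f(x,s-x)\) does not depend on x when the baseline is exponential, can be seen directly with a toy discrete frailty distribution (the frailty law and parameter values below are arbitrary illustrative choices):

```python
# Sketch for Example 2.3: with an exponential baseline, f(x, s-x) is constant in x,
# so [X1 | S=s] is uniform on [0, s].
import numpy as np

lam = 1.0
omegas = np.array([0.5, 1.0, 2.0])     # support of the frailty V (toy choice)
probs = np.array([0.3, 0.5, 0.2])      # P(V = omega)

def f(x1, x2):
    return np.sum(probs * (lam * omegas) ** 2 * np.exp(-lam * omegas * (x1 + x2)))

s = 1.5
print([round(f(x, s - x), 6) for x in (0.1, 0.7, 1.4)])   # identical values
```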

Remark 2.1

It must be pointed out that the assumptions of Proposition 2.1 and of Proposition 2.2 are not satisfied by every frailty model, as the following example shows. Let \((X_{1}, X_{2})\) have joint survival function defined as in (3), where \({\bar{G}}(x) = 1-x\), with \(x \in [0,1]\), and where V has exponential distribution with hazard rate \(\lambda =1\). With straightforward calculations, one can get that its joint density is

$$\begin{aligned} f(x_1,x_2)= 2 [1-\ln ((1-x_1)(1-x_2))]^{-3} [(1-x_1)(1-x_2)]^{-1}, \ \ \ x_i \in [0,1], \end{aligned}$$

so that

$$\begin{aligned} f(x,s-x)= 2 [1-\ln ((1-x)(1-s+x))]^{-3} [(1-x)(1-s+x)]^{-1}, \ \ \ x \in [0,s], \end{aligned}$$

which is not \(TP_2\) in \((x,s)\). For example, for \(s_1=0.5, \ s_2=0.9, \ x_1=0.05\) and \(x_2=0.1\) one has \(0.47394 = f(x_1,s_1-x_1) f(x_2,s_2-x_2) < f(x_1,s_2-x_1) f(x_2,s_1-x_2)= 0.48041\).
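
The numerical values reported in the remark can be reproduced directly; a minimal check:

```python
# Check of the numerical counterexample in Remark 2.1.
import numpy as np

def f(x1, x2):
    # joint density of the frailty model with baseline 1 - x and Exp(1) frailty
    return 2 * (1 - np.log((1 - x1) * (1 - x2))) ** (-3) / ((1 - x1) * (1 - x2))

s1, s2, x1, x2 = 0.5, 0.9, 0.05, 0.1
lhs = f(x1, s1 - x1) * f(x2, s2 - x2)   # ~ 0.47394
rhs = f(x1, s2 - x1) * f(x2, s1 - x2)   # ~ 0.48041
print(round(lhs, 5), round(rhs, 5), lhs < rhs)   # the TP2 inequality fails
```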

Proposition 2.2 can be extended to \((\phi _1(X_1),\phi _2(X_2))\) given the value of the sum \(S^*=\phi _1(X_1)+\phi _2(X_2)\) for increasing functions \(\phi _1\) and \(\phi _2\) as follows. The proof, being easy, is omitted; on the other hand, the conditions described in the statement are quite strong.

Proposition 2.3

Let the vector \((X_1,X_2)\) have a joint density f and let \(S^*=\phi _1(X_1)+\phi _2(X_2)\) for two strictly increasing differentiable functions \(\phi _1\) and \(\phi _2\). Let \(\psi _1\) and \(\psi _2\) be the respective inverse functions. If \(f(\psi _1(x),\psi _2(s-x))\), \(f(\psi _1(s-x),\psi _2(x))\), \(\psi '_1(s-x)\) and \(\psi '_2(s-x)\) are all \(TP_2\) in \((x,s)\), then

$$\begin{aligned} {[}(\phi _1(X_1),\phi _2(X_2)) \vert S^*=s_1] \le _{ST} [(\phi _1(X_1),\phi _2(X_2)) \vert S^*=s_2] \end{aligned}$$

for any \(s_1 \le s_2\).

The following statements provide simple sufficient conditions for a joint bivariate density to satisfy the conditions of Propositions 2.1 and  2.2.

Proposition 2.4

Let the vector \((X_1,X_2)\) have a joint density f. If \(f(x_1,x_2)\) is \(TP_2\) in \((x_1,x_2)\) and logconcave in \(x_2\) for every \(x_1\), then \(f(x,s-x)\) is \(TP_2\) in \((x,s)\). Moreover, if \(f(x_1,x_2)\) is also logconcave in \(x_1\) for every \(x_2\), then also \(f(s-x,x)\) is \(TP_2\) in \((x,s)\).

Proof

Observe that \(f(x,s-x)\) is \(TP_2\) in \((x,s)\) if, and only if,

$$\begin{aligned} \frac{f(x_2,y+\epsilon _1)}{f(x_2,y)} \ge \frac{f(x_1,y+\epsilon _1+\epsilon _2)}{f(x_1,y+\epsilon _2)} \end{aligned}$$
(4)

for any \(y \in \mathbb {R}\), \(x_1 \le x_2\) and \(\epsilon _1,\epsilon _2 > 0\).

Note that if \(f(x_1,x_2)\) is \(TP_2\) in \((x_1,x_2)\) then

$$\begin{aligned} \frac{f(x_2,y+\epsilon _1)}{f(x_2,y)} \ge \frac{f(x_1,y+\epsilon _1)}{f(x_1,y)}, \end{aligned}$$
(5)

while from logconcavity of f when the first argument is fixed one has

$$\begin{aligned} \frac{f(x_1,y+\epsilon _1)}{f(x_1,y)} \ge \frac{f(x_1,y+\epsilon _1+\epsilon _2)}{f(x_1,y+\epsilon _2)}. \end{aligned}$$
(6)

From (5) and (6), inequality (4) follows, and thus the assertion. The \(TP_2\) property in \((x,s)\) of \(f(s-x,x)\) whenever \(f(x_1,x_2)\) is logconcave in \(x_1\) for every \(x_2\) can be proved in the same manner. \(\square \)

Proposition 2.4 can be applied, for example, when one knows the marginal distributions of \(X_1\) and \(X_2\) and the connecting copula, or the survival copula, of \((X_1,X_2)\) (see, e.g., Nelsen 2006 for the definition of the copula of a random vector).

Example 2.4

Let the vector \((X_1,X_2)\) have a survival copula \({\hat{C}}\) and marginal univariate survival functions \({\bar{F}}_1\) and \({\bar{F}}_2\), i.e., let

$$\begin{aligned} {\bar{F}}(x_1,x_2)= {\hat{C}}\big ({\bar{F}}_1 (x_1), {\bar{F}}_2 (x_2)\big ), \ \ \ (x_1,x_2) \in \mathbb {R}^2 \end{aligned}$$

be its joint survival function. Then, as one can easily verify, its joint density can be expressed as

$$\begin{aligned} f(x_1,x_2)= c\big ({\bar{F}}_1 (x_1), {\bar{F}}_2 (x_2)\big ) f_1(x_1) f_2(x_2) \end{aligned}$$
(7)

for all \((x_1,x_2)\) in the support of \((X_1,X_2)\), where c is the second mixed partial derivative of \({\hat{C}}\) while \(f_1\) and \(f_2\) are the marginal densities (assuming all of them exist). From (7) it immediately follows that \(f(x_1,x_2)\) is \(TP_2\) in \((x_1,x_2)\) if, and only if, \(c(u,v)\) is \(TP_2\) in \((u,v) \in (0,1)^2\). This latter property of copulas is satisfied by a number of well-known copulas, such as, for example, the Clayton copula, for which

$$\begin{aligned} c(u,v)=(1 + \theta )(uv)^{- \theta -1}(u^{- \theta }+v^{- \theta }-1)^{-\frac{1}{\theta }-2}, \ \ \ (u,v) \in (0,1)^2, \end{aligned}$$

for any value of its parameter \(\theta \in (0,\infty )\) (see, e.g., Tenzer and Elidan 2016, where a list of copulas having \(TP_2\) density is provided). Now note that logconcavity of \(f(x_1,x_2)\) in \(x_1\) for every \(x_2\) is satisfied if the ratio

$$\begin{aligned} \frac{c\big ({\bar{F}}_1(x_1+y), v\big )}{c\big ({\bar{F}}_1(x_1), v\big )} \ \frac{f_1(x_1+y)}{f_1(x_1)} \end{aligned}$$

is non-increasing in \(x_1\) for all \(y \ge 0\) and \(v \in [0,1]\). This monotonicity, in turn, is satisfied if \(X_1\) has a logconcave density, and if the copula and the marginal survival function \({\bar{F}}_1\) are such that

$$\begin{aligned} \frac{c\big ({\bar{F}}_1(x_1+y), v\big )}{c\big ({\bar{F}}_1(x_1), v\big )} \end{aligned}$$
(8)

is non-increasing in \(x_1\) for all \(y \ge 0\) and \(v \in (0,1)\). If, for example, \(X_1\) has an exponential distribution, then the ratio (8) decreases if, and only if, \(c(au,v)/c(u,v)\) increases in u for all \(a,v \in (0,1)\). It turns out that if \((X_1,X_2)\) has a Clayton survival copula and exponentially distributed margins, then both \([X_1 \vert S=s]\) and \([X_2 \vert S=s]\) are non-decreasing in the likelihood ratio order in s.
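
For the Clayton case mentioned above, the monotonicity of \(c(au,v)/c(u,v)\) in u can be probed numerically; the sketch below checks it on a grid (the values of \(a\), \(v\) and \(\theta \) are arbitrary illustrative choices).

```python
# Numerical probe for Example 2.4: for the Clayton copula density c, the ratio
# c(a*u, v)/c(u, v) is increasing in u.
import numpy as np

def clayton_density(u, v, theta=2.0):
    return ((1 + theta) * (u * v) ** (-theta - 1)
            * (u ** (-theta) + v ** (-theta) - 1) ** (-1 / theta - 2))

a, v = 0.6, 0.4
u = np.linspace(0.05, 0.95, 50)
ratio = clayton_density(a * u, v) / clayton_density(u, v)
print(np.all(np.diff(ratio) > 0))   # True: the ratio is increasing in u
```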

For the next statement recall that, as in the univariate case, a function \(f:\mathbb {R}^n \rightarrow \mathbb {R}\) is said to be logconcave if it satisfies

$$\begin{aligned} \ln (f(\lambda \mathbf {x}+(1-\lambda ) \mathbf {y}))\ge \lambda \ln f(\mathbf {x})+(1-\lambda ) \ln f(\mathbf {y}) \end{aligned}$$

for all \(\lambda \in (0,1)\) and all \(\mathbf {x},\mathbf {y}\) in \(\mathbb {R}^n\).

Proposition 2.5

Let the vector \((X_1,X_2)\) have a joint density \(f(x_1,x_2)\) which is logconcave and \(TP_2\) in \((x_1,x_2)\). Then:

(a) \([X_1 \vert S=s]\) and \([X_2 \vert S=s]\) are non-decreasing in the likelihood ratio order in s;

(b) \([S|X_1=x]\) and \([S|X_2=x]\) are non-decreasing in the likelihood ratio order in x;

(c) \([(X_1,X_2) \vert S=s_1] \le _{ST} [(X_1,X_2) \vert S=s_2]\) for any \(s_1 \le s_2\);

(d) \((\phi _1(X_1)\vert S=s_1)+(\phi _2(X_2)\vert S=s_1)\le _{ST} (\phi _1(X_1)\vert S=s_2)+(\phi _2(X_2)\vert S=s_2)\) for any \(s_1\le s_2\) and any non-decreasing functions \(\phi _1\) and \(\phi _2\).

Proof

For the proof, it is enough to observe that \(f_{[X_2 \vert X_1=x_1]}(x_2) = f(x_1,x_2)/f_{X_1}(x_1)\), so that

$$\begin{aligned} \log f_{[X_2 \vert X_1=x_1]}(x_2) = \log f(x_1,x_2) -\log f_{X_1}(x_1). \end{aligned}$$

For fixed \(x_1\) the term \(\log f_{X_1}(x_1)\) is constant, while \(\log f(x_1,x_2)\) is concave, by definition of logconcavity. Thus, \([X_2 \vert X_1=x_1]\) has a logconcave density. Similarly, one can prove that \([X_1 \vert X_2=x_2]\) has a logconcave density. Thus, one can apply Proposition 2.4, obtaining that both \(f(x,s-x)\) and \(f(s-x,x)\) are \(TP_2\) in \((x,s)\). The assertions (a) and (b) now follow from Proposition 2.1 and assertion (c) from Proposition 2.2. The proof of (d) is a consequence of (c) and Theorem 6.B.20 in Shaked and Shanthikumar (2007), p. 276. \(\square \)

The following is an example of application of Proposition 2.5.

Example 2.5

Let \((X_1,X_2)\) be an elliptical vector with scale function \(g: \mathbb {R}^+ \rightarrow \mathbb {R}^+\) and correlation matrix \(\varvec{\Sigma }\), i.e., let

$$\begin{aligned} f(x_1,x_2)= \vert \Sigma \vert ^{-1/2} g\left( (x_1,x_2)' \ \varvec{\Sigma }^{-1} \ (x_1,x_2) \right) , \ \ \ \ \ (x_1,x_2) \in \mathbb {R}^2. \end{aligned}$$

Let also \(\Sigma = \left( \begin{array}{cc} 1 &{} r \\ r &{} 1 \end{array} \right) \), where \(r \in (-1,1)\), and define \(\phi (t)= \log g(t)\).

As stated in Proposition 1.2 of Abdous et al. (2005), \(f(x_1,x_2)\) is \(TP_2\) in \((x_1,x_2)\) if, and only if,

$$\begin{aligned} -\frac{r}{1+r} \le \inf _{t \in T} \frac{t \phi ''(t)}{\phi '(t)} \le \sup _{t \in T} \frac{t \phi ''(t)}{\phi '(t)} \le \frac{r}{1-r}, \end{aligned}$$

where \(T=\{ t \in \mathbb {R}: \phi '(t) < 0\}\). This condition is actually satisfied for every \(r \ge 0\) when \(g(t) \propto \exp (-\beta t^\alpha )\) with \(\alpha \le (1-r)^{-1}\) and \(\beta > 0\).

Moreover, note that \(f(x_1,x_2)\) is logconcave if g(t) is logconcave, since \((x_1,x_2)' \ \varvec{\Sigma }^{-1} (x_1,x_2)\) is concave in \((x_1,x_2)\) for any \(\varvec{\Sigma }\) (see Fang et al. 1990 for details). When g is defined as above then \(\log g(t)= -\beta t^\alpha \), which is concave for any \(\alpha \ge 1\); thus, f satisfies the assumptions of Proposition 2.5 for that g when \(1 \le \alpha \le \, (1-r)^{-1}\) and \( \beta > 0\).

Note that, as a particular case for \(\alpha =1\) and \(\beta =1/2\), this example includes the bivariate normal distributions, whose density is always \(TP_2\) when the covariance between \(X_1\) and \(X_2\) is non-negative (see, e.g., Theorem 3.3 in Fang et al. (2002)).
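
In the normal special case just mentioned, the \(TP_2\) property can be probed numerically through the cross second differences of the log-density, which are nonnegative exactly when the density is \(TP_2\); the correlation and the evaluation points below are arbitrary illustrative choices.

```python
# Probe for Example 2.5: TP2 of a standardized bivariate normal density with
# correlation r >= 0, via the sign of a cross second difference of the log-density.
import numpy as np

r = 0.3
Sinv = np.linalg.inv(np.array([[1.0, r], [r, 1.0]]))
logf = lambda x1, x2: -0.5 * np.array([x1, x2]) @ Sinv @ np.array([x1, x2])

x1a, x1b, x2a, x2b = -0.7, 0.4, -0.2, 1.1     # arbitrary ordered points
d = logf(x1a, x2a) + logf(x1b, x2b) - logf(x1a, x2b) - logf(x1b, x2a)
print(d >= 0)   # True whenever r >= 0
```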

3 The multivariate case

Multivariate random vectors \((X_1, X_2,\ldots ,X_n)\), with \(n>2\), are considered in this section, and a few examples are provided where the monotonicity in s of \([X_1 \vert S=s]\) (in the likelihood ratio order) and the monotonicity in s of \([(X_1,\ldots ,X_n) \vert S=s]\) (in the usual stochastic order) hold, where \(S=\sum _{j=1}^n X_j\).

First observe that, from Proposition 2.5 (a) and (b), the following statement easily follows.

Proposition 3.1

Given the vector \((X_1, X_2,\ldots ,X_n)\), let \(Y_i=\sum _{j, j\ne i} X_j\). If for any i the vector \((X_i,{Y_i})\) has a joint density \(f(x,y)\) which is logconcave and \(TP_2\) in \((x,y)\), then \([X_i \vert S=s]\) is non-decreasing in the likelihood ratio order in s, and \([S |X_i=x]\) is non-decreasing in the likelihood ratio order in x.

Note that the likelihood ratio order implies the usual stochastic order and so, under the assumptions of the preceding proposition, we get

$$\begin{aligned} P(X_i>x_i \vert S=s_1)\le P(X_i>x_i \vert S=s_2) \end{aligned}$$

and

$$\begin{aligned} E(\phi (X_i) \vert S=s_1)\le E(\phi (X_i) \vert S=s_2) \end{aligned}$$

for any \(x_i\), any \(s_1\le s_2\) and any increasing function \(\phi \) such that these conditional expectations exist.

As an immediate example of application of this statement, one gets that the monotonicity in s of \([X_1 \vert S=s]\) in the likelihood ratio order can be satisfied for multivariate normal distributions, as stated in the following corollary.

Corollary 3.1

Let \((X_1, X_2,\ldots ,X_n)\) have a \(\mathcal {N}\big (\overline{\mu }, \varvec{\Sigma }\big )\) distribution. Then, for any fixed \(i=1,\ldots ,n\), defining \({Y_i}=\sum _{j, j\ne i} X_j\), by closure properties of normal distributions the vector \((X_i,{Y_i})\) has a bivariate normal distribution. Thus, it has a logconcave density. Moreover, by Theorem 3.3 in Fang et al. (2002) (see also the remark before Proposition 1.2 in Abdous et al. 2005), the density of \((X_i,{Y_i})\) satisfies the \(TP_2\) property if \(\sum _{j\ne i} \mathrm {Cov}(X_i, X_j) \ge 0\). Thus, from Proposition 3.1 one has that \([X_i \vert S=s]\) is non-decreasing in the likelihood ratio order in s.

Example 3.1

Let Y (having normal distribution) be a signal from an item, which describes its working state, and assume the item fails when \(Y < 0\). Assume also that Y cannot be read directly, since its reading is subject to a number n of noise terms, so that what one can actually read is the “proxy” variable \(S=Y+X_1+ \cdots + X_n\), where the \(X_i\) represent the noises. If the signal and the noises are described by a vector \((Y, X_1, \ldots , X_n)\) having a multivariate normal distribution with \(\sum _{i=1}^n \mathrm {Cov}(Y, X_i) \ge 0\), then by Corollary 3.1 one has that \(P[Y>t | S=s]\) is non-decreasing in s for all \(t \in \mathbb {R}\); thus, \(P[Y < 0 | S=s]\) is non-increasing in s. It follows that if the reading of the signal is positive, i.e., if \(s >0\), then the probability of failure of the item has the upper bound \(P[Y < 0 | S=0]\), which can be easily calculated given the parameters \(\big (\overline{\mu }, \varvec{\Sigma }\big )\) of the vector \((Y, X_1, \ldots , X_n)\).

Moreover, assume that \(T=\phi (Y)\), a non-decreasing function of the signal Y, represents a performance measure of the item. Corollary 3.1 also shows that the regression \(E[T | S=s]\) is monotone as well, when \(\sum _{i= 1}^n \mathrm {Cov}(Y, X_i) \ge 0\), so that the regression function with measurement error is a good proxy of the “true” regression function in the sense described in Hwang and Stefanski (1994), even if the noises are not independent of Y (as is assumed, on the contrary, in Hwang and Stefanski 1994).
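
A small numerical illustration of the bound discussed in this example is sketched below, assuming a jointly normal \((Y,X_1,X_2)\); the mean vector and covariance matrix are arbitrary choices satisfying the covariance condition, and the conditional law of Y given \(S=s\) is obtained from the bivariate normal \((Y,S)\).

```python
# Illustration of Example 3.1: P[Y < 0 | S = s] for a jointly normal signal Y
# and reading S = Y + X1 + X2 (illustrative parameters only).
import numpy as np
from scipy.stats import norm

mu = np.array([1.0, 0.0, 0.0])                 # means of (Y, X1, X2)
Sigma = np.array([[1.0, 0.2, 0.1],
                  [0.2, 0.5, 0.0],
                  [0.1, 0.0, 0.5]])            # covariance of (Y, X1, X2)

w = np.ones(3)                                  # S = Y + X1 + X2
mu_S, var_S = w @ mu, w @ Sigma @ w
cov_YS = Sigma[0] @ w                           # Cov(Y, S)

def p_fail_given_s(s):
    """P[Y < 0 | S = s], computed from the bivariate normal (Y, S)."""
    m = mu[0] + cov_YS / var_S * (s - mu_S)
    v = Sigma[0, 0] - cov_YS ** 2 / var_S
    return norm.cdf(0.0, loc=m, scale=np.sqrt(v))

print([round(p_fail_given_s(s), 4) for s in (-1.0, 0.0, 1.0, 2.0)])  # non-increasing
print("bound for s > 0:", round(p_fail_given_s(0.0), 4))
```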

Another interesting case where the monotonicity property in the likelihood ratio order is satisfied is the case of vectors having Schur-constant joint survival functions, whose definition is recalled here. A vector \((X_1, X_2,\ldots ,X_n)\) of random lifetimes (i.e., of non-negative random variables) is said to have a Schur-constant joint survival function if, for \(x_i \ge 0,\ i=1,2,\ldots ,n,\)

$$\begin{aligned} {\bar{F}}(x_1,x_2,\ldots ,x_n)=P[X_1> x_1,X_2> x_2,\ldots ,X_n > x_n] = {\bar{G}}\left( \sum _{i=1}^n x_i\right) , \ \ \ \ \ \end{aligned}$$
(9)

where \({\bar{G}}\) is a non-increasing function, continuous from the right, such that \({\bar{G}}(0)=1\), \(\lim _{t \rightarrow \infty } {\bar{G}}(t)=0\) and other conditions for which it defines a bona fide joint survival function (see Caramellino and Spizzichino 1994 for details). The family of Schur-constant survival functions is an important family that has been extensively considered in a variety of applicative fields such as reliability and insurance; we refer the reader to Caramellino and Spizzichino (1994) and references therein for applications in reliability, and to the recent paper (Genest and Kolev 2021) for applications in extensions of the law of uniform seniority for insurance contracts to the case of dependent lifetimes.

Proposition 3.2

Let the vector \((X_1, X_2,\ldots ,X_n)\) have a Schur-constant joint survival function. Then, for any \(i=1,2,\ldots ,n\), one has that \([X_i \vert S=s]\) is non-decreasing in s in the likelihood ratio order and that \([S \vert X_i=x]\) is non-decreasing in x in the likelihood ratio order.

Proof

As proved in Caramellino and Spizzichino (1994), Proposition 2.3, a vector \((X_1, X_2,\ldots ,X_n)\) has a Schur-constant joint survival function if, and only if, its conditional distribution given \(S=\sum _{j=1}^n X_j=s\) is the uniform distribution over the simplex \(\varphi _s = \{(x_1,\ldots ,x_n) \in (\mathbb {R}^+)^n: \sum _{j=1}^n x_j=s\}\). From this property, and from Equation (2.2) in the same paper, it follows that

$$\begin{aligned} {\bar{F}}_{[X_i \vert S=s]}(x) = P[X_i>x \vert S=s] = \left( 1-\frac{x}{s}\right) ^{n-1}, \ \ \ \ x \in [0,s] \subseteq \mathbb {R}^+, \end{aligned}$$

so that, for \(x \in [0,s]\),

$$\begin{aligned} f_{[X_i \vert S=s]}(x) = \frac{n-1}{s^{n-1}} (s-x)^{n-2}. \end{aligned}$$

Therefore,

$$\begin{aligned} \frac{f_{[X_i \vert S=s_1]}(x)}{f_{[X_i \vert S=s_2]}(x)} = \left\{ \begin{array}{ll} \left( \frac{s_2}{s_1} \right) ^{n-1} \cdot \left( \frac{s_1-x}{s_2-x} \right) ^{n-2} &{}\quad \text {if} \ \ x \in [0,s_1]; \\ 0 &{}\quad \text {if} \ \ x \in (s_1,s_2], \\ \end{array}\right. \end{aligned}$$

which is non-increasing in x for \(s_1 \le s_2\), and the first assertion follows.

The second assertion follows by considering the vector \((X_i,S)\) and observing that the monotonicity of \(f_{[X_i \vert S=s_1]}(x) / f_{[X_i \vert S=s_2]}(x)\) in x implies the monotonicity of \(f_{[S \vert X_i=x_1]}(s) / f_{[S \vert X_i=x_2]}(s)\) in s, as shown in the proof of Proposition 2.1. \(\square \)
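
The likelihood ratio monotonicity in Proposition 3.2 can also be checked numerically from the conditional density derived in the proof; a minimal sketch with arbitrarily chosen \(n\), \(s_1\), \(s_2\):

```python
# Sketch for Proposition 3.2: the ratio of the conditional densities derived
# in the proof is non-increasing in x on [0, s1].
import numpy as np

n, s1, s2 = 4, 1.0, 2.0
dens = lambda x, s: (n - 1) / s ** (n - 1) * (s - x) ** (n - 2)   # density of [Xi | S=s]

x = np.linspace(0.0, s1 - 1e-6, 200)
ratio = dens(x, s1) / dens(x, s2)
print(np.all(np.diff(ratio) <= 0))   # True: non-increasing on [0, s1]
```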

As far as the monotonicity in s of \([(X_1,\ldots ,X_n) \vert \sum _{j=1}^n X_j=s]\) in the usual stochastic order is concerned, for \(n>2\), we have the following statement.

Proposition 3.3

Let the vector \((X_1, \ldots ,X_n)\) have an absolutely continuous and Schur-constant joint survival function. Then, \(E[\phi (X_1,\ldots ,X_n) \vert S=s]\) is a non-decreasing function of s for any non-decreasing function \(\phi \), i.e., \([(X_1,\dots ,X_n) \vert S=s]\) is non-decreasing in s in the usual stochastic order.

Proof

We assume that the joint survival function of \((X_1,\dots ,X_n)\) can be written as in (9) for \(x_1,\dots ,x_n\ge 0\). Hence its joint density is

$$\begin{aligned} f(x_1,\dots ,x_n)=(-1)^n {\bar{G}}^{(n)}(x_1+\dots +x_n) \end{aligned}$$

for \(x_1,\dots ,x_n\ge 0\) (and zero elsewhere). Therefore, \((-1)^n{\bar{G}^{(n)}}(t)\ge 0\) for all \(t\ge 0\) and the joint density of \((X_1,\dots ,X_{n-1},S)\), with \(S=\sum _{j=1}^n X_j\), is

$$\begin{aligned} g(x_1,\dots ,x_{n-1},s)=f(x_1,\dots ,x_{n-1},s-x_1-\dots -x_{n-1})=(-1)^n {\bar{G}}^{(n)}(s) \end{aligned}$$

for \(x_1,\dots ,x_{n-1} \ge 0\) such that \(x_1+\ldots +x_{n-1} \le s.\) Therefore, the conditional density of \([(X_1,\dots ,X_{n-1})\vert S=s]\) is

$$\begin{aligned} g^*(x_1,\dots ,x_{n-1}\vert s)= \frac{(-1)^n {\bar{G}}^{(n)}(s)}{f_S(s)} \end{aligned}$$

where the density \(f_S\) of S was obtained in Caramellino and Spizzichino (1994) as

$$\begin{aligned} f_S(s)=(-1)^n {\bar{G}}^{(n)}(s)\frac{s^{n-1}}{(n-1)!} \end{aligned}$$

for \(s\ge 0\). Hence

$$\begin{aligned} g^*(x_1,\dots ,x_{n-1}\vert s)= \frac{(n-1)!}{s^{n-1}} \end{aligned}$$

for \(s>0\) and \(x_1,\dots ,x_{n-1}\ge 0\) such that \(x_1+\dots + x_{n-1}\le s\) (and zero elsewhere). Note that \(g^*\) is not defined for \(s \le 0\).

If \(\phi \) is non-decreasing and \(0<s_1\le s_2\), then we get

$$\begin{aligned} E[\phi (X_1,\dots ,X_n) \vert S=s_1]&= E[\phi (X_1,\dots ,s_1-X_1-\dots -X_{n-1}) \vert S=s_1]\\&=\int _{D_1} \phi (x_1,\dots ,x_{n-1},s_1-x_1-\dots -x_{n-1} ) \frac{(n-1)!}{s_1^{n-1}}\\&\quad \times dx_1\dots dx_{n-1} \end{aligned}$$

where \(D_1:=\{ (x_1,\dots ,x_{n-1}): x_1,\dots ,x_{n-1}\ge 0, x_1+\dots + x_{n-1}\le s_1 \}\). By making the change of variables \(u_i=x_i/s_1\) for \(i=1,\dots ,n-1\), we get

$$\begin{aligned} E[\phi (X_1,\dots ,X_n) \vert S=s_1]&= (n-1)!\int _{D} \phi (s_1u_1,\dots ,s_1u_{n-1},s_1(1-u_1\\&\quad -\dots -u_{n-1}) ) du_1\dots du_{n-1} \end{aligned}$$

where \(D:=\{ (u_1,\dots ,u_{n-1}): u_1,\dots ,u_{n-1}\ge 0, u_1+\dots + u_{n-1}\le 1 \}\). Analogously,

$$\begin{aligned} E[\phi (X_1,\dots ,X_n) \vert S=s_2]&= (n-1)!\int _{D} \phi (s_2u_1,\dots ,s_2u_{n-1},s_2(1-u_1\\&\quad -\dots -u_{n-1}) ) du_1\dots du_{n-1}. \end{aligned}$$

Hence, if \(\phi \) is non-decreasing and \(s_1\le s_2\), then

$$\begin{aligned} E[\phi (X_1,\dots ,X_n) \vert S=s_1]&= (n-1)!\int _{D} \phi (s_1u_1,\dots ,s_1u_{n-1},s_1(1-u_1\\&\quad -\dots -u_{n-1}) ) du_1\dots du_{n-1}\\&\le (n-1)!\int _{D} \phi (s_2u_1,\dots ,s_2u_{n-1},s_2(1-u_1\\&\quad -\dots -u_{n-1}) ) du_1\dots du_{n-1}\\&=E[\phi (X_1,\dots ,X_n) \vert S=s_2] \end{aligned}$$

which concludes the proof. \(\square \)

Note that from Theorem 6.B.16 in Shaked and Shanthikumar (2007), p. 273, the ST ordering obtained in the preceding proposition can be extended to \((\phi (X_1,\ldots ,X_n) \vert S=s)\) in s for any non-decreasing function \(\phi :\mathbb {R}^n\rightarrow \mathbb {R}^k\).

It must be observed that Example 2.3 is, actually, a corollary of both Propositions 3.2 and  3.3, since the frailty model with exponential baseline survival functions reduces to a Schur-constant model.

4 Conclusions

We have studied monotonicity properties of dependent random variables conditioned on their sum, and we have obtained several results that extend the classic results for independent random variables. We have considered both the likelihood ratio order and the usual stochastic order in its univariate and multivariate versions.

The main task for future research could be the extension of the result given in Proposition 2.2 to the multivariate case and/or to other (stronger) stochastic orders. Proposition 3.3 can be seen as a first step in that direction. Other tasks could be to find further models in which the conditions assumed here are satisfied, so that the monotonicity properties hold. Inference tools to check these conditions in practice should be investigated as well.