Risk Assessment and Measures

Decentralized Insurance

Part of the book series: Springer Actuarial (SPACT)


Abstract

This chapter offers an overview of fundamental tools for quantifying, measuring, and assessing risks. Univariate distributions are commonly used to describe the randomness of particular risks to be quantified and modeled. Risk measures are summary statistics that portray various aspects of risks. They are often used by financial institutions as quantitative bases to set risk management policies. Risk measures can be further used to establish the ordering of risks for the purpose of comparison. The ordering of risks is used in later chapters as a gauge for the effectiveness of risk reduction strategies. As risks are often interconnected, multivariate distributions are critical tools for measuring and understanding their relationships. Dependence measures are also introduced as summary statistics that characterize the strength of dependence between risks.


Author information

Correspondence to Runhuan Feng.

Appendices

Appendix 2.A Proof of Equivalent Inequalities (2.1)

We shall provide a proof by contrapositive. The contrapositive of the statement \(A \Rightarrow B\) is \(\lnot B \Rightarrow \lnot A\). If one can prove the contrapositive, then the original statement must also be true.

Let us first consider the “\(\Leftarrow \)” part. If \(p\le F(x)\), then \(x \in A:=\{x: F(x) \ge p\}\). It is clear from the definition of \(F^{-1}\) that \(F^{-1}(p)=\inf A \le x.\)

For the “\(\Rightarrow \)” part, we consider the contrapositive of this implication, namely \(p>F(x) \Rightarrow F^{-1}(p)>x.\) If \(p>F(x)\), then by the right-continuity of F we can find a number \(\epsilon >0\) such that \(p>F(x+\epsilon ).\) Then, by the supremum version of the definition of the quantile function, we see that \(F^{-1}(p)\ge x+\epsilon >x.\) This establishes the equivalence.
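As a quick numerical illustration (added here, not part of the text), the equivalence \(F^{-1}(p)\le x \Leftrightarrow p \le F(x)\) can be checked for a small discrete distribution, where F has both jumps and flat pieces and the generalized inverse \(F^{-1}(p)=\inf \{x: F(x) \ge p\}\) must be used; the distribution below is an arbitrary example:

```python
import bisect

# A discrete distribution on {0, 1, 2} with P(X=0)=0.3, P(X=1)=0.5, P(X=2)=0.2
# (arbitrary example values, chosen so that F has jumps and flat pieces).
support = [0, 1, 2]
cum = [0.3, 0.8, 1.0]  # cumulative probabilities F(0), F(1), F(2)

def F(x):
    """Right-continuous step CDF."""
    f = 0.0
    for s, c in zip(support, cum):
        if s <= x:
            f = c
    return f

def F_inv(p):
    """Generalized inverse F^{-1}(p) = inf{x : F(x) >= p} for 0 < p <= 1."""
    return support[bisect.bisect_left(cum, p)]

# The equivalence holds at every (p, x) pair, including p values sitting
# exactly on a jump of F.
for p in [0.1, 0.3, 0.31, 0.8, 0.95, 1.0]:
    for x in [-1, 0, 0.5, 1, 1.5, 2, 3]:
        assert (F_inv(p) <= x) == (p <= F(x))
```

Note that `bisect_left` returns the first index whose cumulative probability reaches p, which is exactly the infimum in the definition of the quantile function.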

Appendix 2.B VaR as a Solution to a Minimization Problem

We define the function g and observe that

$$\begin{aligned} g(c)&\;:=\mathbb {E}[(X-c)_+]+c \epsilon = \mathbb {E}\left[ \int ^X_c I(X>c) \,\textrm{d}x \right] + c \epsilon = \mathbb {E}\left[ \int ^\infty _c I(X>x) \,\textrm{d}x \right] + c \epsilon \nonumber \\&\;=\int ^\infty _c \mathbb {P}(X>x) \,\textrm{d}x + c \epsilon =\int ^\infty _c \overline{F}(x) \,\textrm{d}x+ c \epsilon , \end{aligned}$$
(2.17)

where \(\overline{F}\) is the survival function of X. Setting the derivative of g with respect to c to zero yields

$$\begin{aligned} g'(c)=\epsilon -\overline{F}(c)=0. \end{aligned}$$

Since \(g'(c)\) is an increasing function of c, the function g is convex, and the solution to \(\overline{F}(c)=\epsilon \), namely \(c^*=\textrm{VaR}_{1-\epsilon }[X]\), is the global minimizer of the function g. In other words, VaR\(_{1-\epsilon }\) is the minimizer of the optimal capital requirement problem (2.3).
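The minimization can also be checked by simulation. In the sketch below (an added illustration with assumed parameters: a standard normal X, \(\epsilon =0.05\), and a sample-based version of g), the minimizer of g over the sample points essentially coincides with the empirical \(\textrm{VaR}_{0.95}\):

```python
import random

random.seed(0)
eps = 0.05
xs = sorted(random.gauss(0.0, 1.0) for _ in range(2_000))

def g(c):
    """Sample version of g(c) = E[(X - c)_+] + c * eps."""
    return sum(max(x - c, 0.0) for x in xs) / len(xs) + c * eps

# g is piecewise linear with kinks at the sample points, so its minimum
# over the real line is attained at one of them.
c_star = min(xs, key=g)

var_95 = xs[int(0.95 * len(xs))]  # empirical VaR at level 1 - eps = 0.95
assert abs(c_star - var_95) < 0.1  # the two estimates are (nearly) identical
```

Both quantities estimate the true 0.95-quantile of the standard normal distribution, \(\Phi ^{-1}(0.95)\approx 1.645\).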

Appendix 2.C Jensen’s Inequality

Theorem 2.7

(Jensen’s inequality) If \(v''(x)>0\) (i.e. v is a convex function) and Y is a random variable, then

$$\begin{aligned} \mathbb {E}[v(Y)]\ge v(\mathbb {E}[Y]). \end{aligned}$$

Proof

Let \(\mu =\mathbb {E}[Y]\) and \(D=\mathbb {E}[v(Y)]-v(\mu ).\) We can rewrite D as

$$\begin{aligned} D= & {} \mathbb {E}\left[ \int ^Y_{-\infty } v'(y) \,\textrm{d}y -\int ^\mu _{-\infty } v'(y)\,\textrm{d}y \right] \\= & {} \mathbb {E}\left[ \int ^Y_\mu v'(y) \,\textrm{d}y\right] \\\ge & {} \mathbb {E}\left[ (Y-\mu ) v'(\mu ) \right] \\= & {} (\mu -\mu )v'(\mu )=0. \end{aligned}$$

The inequality holds because \(v'\) is increasing: for \(Y\ge \mu \) the integrand satisfies \(v'(y)\ge v'(\mu )\) on \([\mu , Y]\), while for \(Y<\mu \) we have \(\int ^Y_\mu v'(y)\,\textrm{d}y=-\int ^\mu _Y v'(y)\,\textrm{d}y \ge (Y-\mu )v'(\mu ).\) Hence \(D\ge 0\), which proves the claim.
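The inequality is easy to confirm by simulation. The snippet below (an added illustration; the convex function \(v(y)=y^2\) and the normal parameters are arbitrary choices) checks \(\mathbb {E}[v(Y)]\ge v(\mathbb {E}[Y])\); for this particular v the gap is exactly the variance of Y:

```python
import random

random.seed(1)
ys = [random.gauss(2.0, 3.0) for _ in range(100_000)]

def v(y):
    return y * y  # convex: v''(y) = 2 > 0

mean_y = sum(ys) / len(ys)
lhs = sum(v(y) for y in ys) / len(ys)  # E[v(Y)]
rhs = v(mean_y)                        # v(E[Y])

# Jensen: E[v(Y)] >= v(E[Y]).  For v(y) = y^2 the gap lhs - rhs is
# exactly the (biased) sample variance of Y, hence nonnegative.
assert lhs >= rhs
gap = lhs - rhs
sample_var = sum((y - mean_y) ** 2 for y in ys) / len(ys)
assert abs(gap - sample_var) < 1e-6
```

This also makes concrete why the gap is zero only for a degenerate (constant) Y when v is strictly convex.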

Problems

  1.

    Suppose that the random variables X and Y are jointly normally distributed. Show that for any \(p \in [1/2, 1]\),

    $$\begin{aligned} \textrm{VaR}_p[X+Y]\le \textrm{VaR}_p[X]+\textrm{VaR}_p[Y]. \end{aligned}$$
    (2.18)
  2.

    Let us consider pure endowment insurance, a life-contingent contract that pays a survival benefit if the policyholder survives to maturity. Suppose that the survival benefit is \(\$100\), the discount factor over the period to maturity is 0.9, and the probability of survival is 0.1. By the expected value premium principle, the pure premium is \(\$100\times 0.9 \times 0.1=\$9.\) The net liability from a single contract is then given for an arbitrary policyholder i by

    $$\begin{aligned} L_i={\left\{ \begin{array}{ll} -9, &{} \text{ with } \text{ probability } 0.9; \\ 81, &{} \text{ with } \text{ probability } 0.1. \end{array}\right. } \end{aligned}$$

    The first case is the profit from the premium of \(\$9\); the second is the loss when the benefit is paid, namely the present value of the benefit less the premium, \(\$100\times 0.9-\$9=\$81.\) Show that for some p,

    $$\begin{aligned} \textrm{VaR}_p\left[ \sum ^{100}_{i=1} L_i \right] \ge \sum ^{100}_{i=1}\textrm{VaR}_p\left[ L_i \right] . \end{aligned}$$

    In other words, this example shows that the VaR risk measure is in general not subadditive.

  3.

    For a normal random variable X with mean \(\mu \) and variance \(\sigma ^2\), show that

    $$\begin{aligned} \mathbb {E}[X|X>a]=\mu +\sigma \frac{\phi (\frac{a-\mu }{\sigma })}{1-\Phi (\frac{a-\mu }{\sigma })}, \end{aligned}$$

    and

    $$\begin{aligned} \mathbb {E}[(X-a)_+]=\left( \mu +\sigma \frac{\phi (\frac{a-\mu }{\sigma })}{1-\Phi (\frac{a-\mu }{\sigma })}-a\right) \left( 1-\Phi \left( \frac{a-\mu }{\sigma } \right) \right) . \end{aligned}$$
  4.

    The following expressions for the derivatives of risk measures are used in the derivation of capital allocation principles in Chap. 4.

    a.

      Show that for any continuous random variables X and S with joint density \(f(s,x)\),

      $$\begin{aligned} \frac{\partial }{\partial h} \textrm{VaR}_p[S+hX]=\mathbb {E}[X|S+hX=\textrm{VaR}_p[S+hX]]. \end{aligned}$$

      Hint: Consider the expression

      $$\begin{aligned} {\mathbb P}(S+hX> \textrm{VaR}_p[S+hX])=1-p, \end{aligned}$$

      which can be rewritten as

      $$\begin{aligned} \int ^\infty _{-\infty } \left[ \int ^\infty _{\textrm{VaR}_p[S+hX] -hx} f(s,x) \,\textrm{d}s\right] \,\textrm{d}x=1-p. \end{aligned}$$
    b.

      Show that for any continuous random variables X and S,

      $$\begin{aligned} \frac{\partial }{\partial h} \textrm{TVaR}_p[S+hX]=\mathbb {E}[X|S+hX \ge \textrm{VaR}_p[S+hX]]. \end{aligned}$$
  5.

    The entropic risk measure is defined through the exponential utility function, i.e.

    $$ \rho [X]=\frac{1}{\theta }\log \mathbb {E}[e^{-\theta X}]. $$

    Suppose that X follows a normal distribution with mean \(\mu \) and variance \(\sigma ^2\). Show the following results:

    a.

      The risk measure is given by \(\rho [X]=\mu +\frac{1}{2}\sigma ^2\theta .\)

    b.

      The entropic risk measure is not coherent.

  6.

    Given any two random variables X and Y, show the following results.

    a.

      The covariance of (X, Y) can be represented as

      $$\begin{aligned} \mathbb {C}(X,Y)=\int ^\infty _{-\infty }\int ^\infty _{-\infty } \left( {\mathbb P}(X\le x, Y\le y)-F_X(x) F_Y(y)\right) \,\textrm{d}x \,\textrm{d}y. \end{aligned}$$
    b.

      If X and Y are comonotonic, then \(\mathbb {C}(X,Y)\ge 0.\)

  7.

    Consider X, Y to be two arbitrary random variables.

    a.

      Show that the regression coefficients \(a^*\) and \(b^*\) that minimize the squared distance

      $$\begin{aligned} \mathbb {E}[(Y-(aX+b))^2] \end{aligned}$$

      are given by

      $$\begin{aligned} a^*= & {} \frac{\mathbb {C}[X,Y]}{\sigma ^2(X)}\\ b^*= & {} \mathbb {E}[Y]-a^*\mathbb {E}[X]. \end{aligned}$$
    b.

      Prove that

      $$\begin{aligned} \rho (X,Y)^2=\frac{\sigma ^2(Y)-\min _{a,b} \mathbb {E}[(Y-(aX+b))^2]}{\sigma ^2(Y)}. \end{aligned}$$
    c.

      Show that if X and Y are perfectly correlated, then (X, Y) must be comonotonic.
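Problem 2 above can be verified with an exact computation. In the sketch below (added for illustration; the parameters come from the problem statement, and \(p=0.9\) is one level at which the inequality is strict), the pooled liability is written as \(-900+90N\), where N is the binomial number of survivors:

```python
from math import comb

n, q = 100, 0.1  # number of contracts, survival probability (Problem 2)
p = 0.9          # confidence level used for the comparison

def binom_cdf(k):
    """P(N <= k) for N ~ Binomial(n, q), computed exactly."""
    return sum(comb(n, j) * q**j * (1 - q)**(n - j) for j in range(k + 1))

def quantile(prob):
    """Generalized inverse: smallest k with binom_cdf(k) >= prob."""
    k = 0
    while binom_cdf(k) < prob:
        k += 1
    return k

# Pooled liability: sum_i L_i = -9*(n - N) + 81*N = -900 + 90*N,
# where N counts the survivors among the n policyholders.
var_pool = -900 + 90 * quantile(p)

# Single contract: P(L_i <= -9) = 0.9 >= p, so VaR_p[L_i] = -9.
var_sum_of_singles = n * (-9)

assert var_pool > var_sum_of_singles  # VaR is not subadditive here
```

Since \(\mathbb {P}(N\le 0)\) is far below 0.9, the binomial quantile is strictly positive and the pooled VaR strictly exceeds \(-900\), the sum of the individual VaRs.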


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG


Cite this chapter

Feng, R. (2023). Risk Assessment and Measures. In: Decentralized Insurance. Springer Actuarial. Springer, Cham. https://doi.org/10.1007/978-3-031-29559-1_2
