
On Accelerating Monte Carlo Integration Using Orthogonal Projections

Methodology and Computing in Applied Probability

Abstract

Monte Carlo simulation is an indispensable tool for calculating high-dimensional integrals. Although Monte Carlo integration is notorious for its slow convergence, its efficiency can be improved by various variance reduction techniques. This paper applies orthogonal projections to study the amount of variance reduction and proposes a novel projection estimator associated with a group of symmetries of the probability measure. For a given space of functions, the average variance reduction can be derived; for a specific function, its variance reduction is analyzed as well. The well-known antithetic estimator is a special case of the projection estimator, and new results on its variance reduction and efficiency are provided. Various illustrations, including the pricing of Asian options, confirm our claims.
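As a hedged illustration of the antithetic special case mentioned above (our own sketch, not the authors' code; all Black-Scholes parameters below are assumed for illustration), the following Python snippet compares a crude Monte Carlo estimator of an arithmetic-average Asian call price with its antithetic counterpart, the simplest instance of the projection estimator:

```python
# Sketch: antithetic Monte Carlo for an arithmetic-average Asian call
# under Black-Scholes dynamics.  All parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0   # assumed market data
n_steps, n_paths = 12, 100_000
dt = T / n_steps

def discounted_payoff(Z):
    """Discounted arithmetic-average Asian call payoff from normal increments Z."""
    log_paths = np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * Z, axis=1)
    avg_price = S0 * np.exp(log_paths).mean(axis=1)
    return np.exp(-r * T) * np.maximum(avg_price - K, 0.0)

Z = rng.standard_normal((n_paths, n_steps))
crude = discounted_payoff(Z)                           # crude estimator f(X)
antithetic = 0.5 * (crude + discounted_payoff(-Z))     # projection estimator P(f)(X)
print(f"crude:      mean={crude.mean():.4f}  var={crude.var():.4f}")
print(f"antithetic: mean={antithetic.mean():.4f}  var={antithetic.var():.4f}")
```

Each antithetic sample costs two path evaluations, so the fair comparison is its variance against half the crude variance; on typical runs the antithetic estimator is still well ahead for this payoff.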


Data Availability

The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.


Author information


Corresponding author

Correspondence to Ming-Hsuan Kang.

Ethics declarations

Conflict of Interest

The authors declare that they have no conflict of interest.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

We benefited from the very helpful comments of the Editor and two anonymous referees. The first author was supported by the Ministry of Science and Technology of Taiwan, ROC, under Grant 108-2118-M-009-001-MY2, and the second author was supported under Grant 108-2115-M-009-007-MY2.

Appendices

Appendix A: Proof of Lemma 1

Proof

To prove that \(P_{\mathbb {E}}\) is a projection, we need to show that it satisfies conditions C1 and C2. For C1, note that for any \(f\in \mathcal {F}\),

$$ P^{2}_{\mathbb{E}}\big(f(X)\big) = P_{\mathbb{E}}(P_{\mathbb{E}}\big(f(X)\big)) = \langle P_{\mathbb{E}}\big(f(X)\big), \mathbf{1} \rangle = P_{\mathbb{E}}\big(f(X)\big). $$

Hence, we have \(P^{2}_{\mathbb {E}}=P_{\mathbb {E}}\). For C2, take any \(f,g\in \mathcal {F}\); we have

$$ \begin{array}{@{}rcl@{}} \langle P_{\mathbb{E}}\big(f(X)\big), g-P_{\mathbb{E}}[g]\rangle & =& \langle P_{\mathbb{E}}\big(f(X)\big),g\rangle - \langle P_{\mathbb{E}}\big(f(X)\big),P_{\mathbb{E}}[g]\rangle\\ & = & P_{\mathbb{E}}\big(f(X)\big) \langle\mathbf{1},g\rangle - P_{\mathbb{E}}\big(f(X)\big)P_{\mathbb{E}}[g] \\ &=& P_{\mathbb{E}}\big(f(X)\big)P_{\mathbb{E}}[g] - P_{\mathbb{E}}\big(f(X)\big)P_{\mathbb{E}}[g]=0. \end{array} $$

As a result, \(P_{\mathbb {E}}\) is an orthogonal projection. □
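A minimal numerical sketch of Lemma 1 (our illustration; the distribution of X and the test functions are assumptions): with \(X\sim N(0,1)\) and \(\langle u, v\rangle = \mathbb {E}[u(X)v(X)]\), the projection \(P_{\mathbb {E}}\) sends f to the constant \(\mathbb {E}[f(X)]\), and conditions C1 and C2 can be checked by Monte Carlo:

```python
# Sketch: check C1 (idempotence) and C2 (orthogonality) for the
# projection onto constants, P_E(f) = E[f(X)], with X ~ N(0, 1).
import numpy as np

rng = np.random.default_rng(1)
x_fit = rng.standard_normal(1_000_000)     # sample used to form the projections
x_new = rng.standard_normal(1_000_000)     # fresh sample for the inner product

f = lambda t: np.exp(t)                    # assumed test functions
g = lambda t: t**2 + t

Pf = f(x_fit).mean()                       # P_E(f(X)) = <f, 1>, a constant
Pg = g(x_fit).mean()                       # P_E(g(X))

# C1: applying P_E to the constant P_E(f) returns the same constant.
print("C1:", np.isclose(Pf, np.mean(Pf * np.ones_like(x_new))))
# C2: <P_E(f), g - P_E(g)> should vanish up to Monte Carlo error.
print("C2:", np.mean(Pf * (g(x_new) - Pg)))
```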

Appendix B: Proof of Theorem 1

Proof

Let V0 be the space of constant functions. Since \(P(\mathcal {F}) \supset V_{0}\), we have \( P(\mathcal {F})^{\perp } \subset V_{0}^{\perp }\). For all \(f \in \mathcal {F}\), because \( (f- P(f)) \in P(\mathcal {F})^{\perp }\), it is clear that

$$\mathbb{E}[f(X)-P(f)(X)] = \langle f-P(f) ,\mathbf{1} \rangle = 0.$$

As a result, the expectation of f(X) equals

$$ \mathbb{E}\big(f(X)\big) = \mathbb{E}\Big[P(f)(X) + \big(f(X)-P(f)(X)\big)\Big] = \mathbb{E}[P(f)(X)] + \mathbb{E}[f(X)-P(f)(X)] = \mathbb{E}[P(f)(X)]. $$

In addition, we obtain

$$ \begin{array}{@{}rcl@{}} \text{var}[P(f)] + \text{var}[f-P(f)] &=& \|P(f)\|^{2} - \mathbb{E}[P(f)(X)]^{2} + \|f-P(f)\|^{2} - \mathbb{E}[f(X)-P(f)(X)]^{2} \\ &=& \big(\|P(f)\|^{2}+ \|f-P(f)\|^{2}\big) - \mathbb{E}[P(f)(X)]^{2} - \mathbb{E}[f(X)-P(f)(X)]^{2} \\ &=& \|f\|^{2} - \mathbb{E}\big(f(X)\big)^{2} - 0 \\ &=& \text{var}\big(f(X)\big), \end{array} $$

where the last equality holds by Eq. 2. Therefore, we obtain

$$\text{var}\big(f(X)\big) =\text{var}[P(f)] + \text{var}[f-P(f)] \geq \text{var}[P(f)].$$

□
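The decomposition in Theorem 1 can be checked numerically. A minimal sketch (our illustration; the projection and the test function are assumptions): for \(X\sim N(0,1)\), the symmetrization \(P(f)(x) = (f(x)+f(-x))/2\) is an orthogonal projection because \(x\mapsto -x\) preserves the law of X, and the two variances on the right-hand side should sum to \(\text {var}(f(X))\) up to sampling error:

```python
# Sketch: verify var(f) = var(P f) + var(f - P f) for the
# symmetrization projection under X ~ N(0, 1).
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(2_000_000)         # samples of X ~ N(0, 1)

f = lambda t: np.exp(t) + t**3             # assumed test function
Pf = 0.5 * (f(x) + f(-x))                  # P(f)(X): symmetrization of f
resid = f(x) - Pf                          # (f - P(f))(X)

print(f"var(f)                = {f(x).var():.4f}")
print(f"var(P f) + var(f-P f) = {Pf.var() + resid.var():.4f}")  # equal up to MC error
print(f"var(P f)              = {Pf.var():.4f}  (never exceeds var(f))")
```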

Appendix C: Proof of Lemma 2

Proof

Since g is a symmetry of \(\mu_{X}\), we have

$$ \mathbb{E}\big(f_{g}(X)\big) = {\int}_{\mathbb{R}^{n}} f(gx) d \mu_{X}(x)= {\int}_{\mathbb{R}^{n}} f(y) d \mu_{X}(g^{-1}y)={\int}_{\mathbb{R}^{n}} f(y) d \mu_{X}(y)= \mathbb{E}\big(f(X)\big). $$

Hence, \(f_{g}(X)\) is an unbiased estimator. By the same token, we also have \(\mathbb {E}\big (f_{g}(X)^{2}\big ) = \mathbb {E}\big (f(X)^{2}\big )\). Now we have

$$ \text{var}\big(f_{g}(X)\big) = \mathbb{E}\big((f_{g}(X))^{2}\big) - \mathbb{E}\big(f_{g}(X)\big)^{2} = \mathbb{E}\big(f(X)^{2}\big) - \mathbb{E}\big(f(X)\big)^{2}= \text{var}\big(f(X)\big).$$

Because both \(\mathbb {E}\big (f_{g}(X)\big )\) and \(\text {var}\big (f_{g}(X)\big )\) are well-defined, \(f_{g}\) remains in \(\mathcal {F}\). □
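A minimal numerical sketch of Lemma 2 (our illustration; the symmetry g and the test function are assumptions): a rotation by 90 degrees preserves the standard bivariate normal law, so \(f_{g}(X) = f(gX)\) should match f(X) in both mean and variance up to Monte Carlo error:

```python
# Sketch: f_g(X) = f(gX) has the same mean and variance as f(X)
# when g is a symmetry of the law of X.
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((1_000_000, 2))    # samples of X ~ N(0, I_2)
g = np.array([[0.0, -1.0],
              [1.0,  0.0]])                # rotation by 90 degrees, a symmetry of N(0, I_2)

f = lambda x: np.exp(x[:, 0]) * np.maximum(x[:, 1], 0.0)   # assumed test function

fX, fgX = f(X), f(X @ g.T)                 # f(X) and f_g(X) = f(gX)
print(f"means: {fX.mean():.4f} vs {fgX.mean():.4f}")   # agree up to MC error
print(f"vars:  {fX.var():.4f} vs {fgX.var():.4f}")     # agree up to MC error
```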

Appendix D: Proof of Theorem 2

Proof

First, we show that \(P_{G}\) is a linear transformation. For \(f_{1}, f_{2}\in \mathcal {F}\) and \(\alpha \in \mathbb {R}\), it is clear that

$$ \begin{array}{@{}rcl@{}} P_{G}(f_{1}+\alpha f_{2})(x) &=& \frac{1}{|G|} \sum\limits_{g\in G} (f_{1}+\alpha f_{2})(gx) \\ & =&\frac{1}{|G|} \sum\limits_{g\in G} \left( f_{1}(gx) +\alpha f_{2} (gx)\right) \\ & =&\frac{1}{|G|} \sum\limits_{g\in G} f_{1}(gx) + \frac{1}{|G|} {\sum}_{g\in G} \alpha f_{2} (gx)\\ & =& P_{G}(f_{1})+\alpha P_{G}(f_{2}). \end{array} $$

Therefore, we conclude that \(P_{G}\) is a linear transformation on \(\mathcal {F}\).

Next, let us show that \(P_{G}={P_{G}^{2}}\). For all \(f \in \mathcal {F}\) and all \(g\in G\),

$$ P_{G}(f_{g}) = \frac{1}{|G|}\sum\limits_{g^{\prime}\in G} f(g g^{\prime}x) = \frac{1}{|G|}\sum\limits_{g^{\prime\prime}\in G} f(g^{\prime\prime}x) = P_{G}(f(x)). $$

Here we use the property that left multiplication by g merely permutes the elements of G and therefore does not change the sum. Now we have

$$ P_{G}(P_{G}(f(x))) = \frac{1}{|G|}\sum\limits_{g\in G} P_{G}(f(gx)) =\frac{1}{|G|}\sum\limits_{g\in G} P_{G}(f(x)) = P_{G}(f(x)). $$

Finally, let us show that for \(f_{1}, f_{2} \in \mathcal {F}\), \(\langle P_{G}(f_{1}), f_{2}-P_{G}(f_{2})\rangle = 0\), or equivalently, \(\langle P_{G}(f_{1}),P_{G}(f_{2})\rangle = \langle P_{G}(f_{1}),f_{2}\rangle \). Now we have

$$ \langle P_{G}(f_{1}),P_{G}(f_{2})\rangle = \frac{1}{|G|^{2}}\sum\limits_{g\in G}\left( \sum\limits_{g^{\prime} \in G} {\int}_{{{\varOmega}}} f_{1}(gx) f_{2}(g^{\prime}x)d\mu(x)\right) $$

Now change variables via \(y=g^{\prime }x\). Using the invariance \(d\mu (gx) = d\mu (x)\) and the fact that right multiplication by \(g^{\prime -1}\) permutes the elements of G, we can rewrite the above equation as

$$ \begin{array}{@{}rcl@{}} \langle P_{G}(f_{1}),P_{G}(f_{2})\rangle &=& \frac{1}{|G|^{2}}\sum\limits_{g^{\prime}\in G}\left( \sum\limits_{g \in G} {\int}_{{{\varOmega}}} f_{1}(gg^{\prime-1}y) f_{2}(y)d\mu(g^{\prime-1}y)\right) \\ &=&\frac{1}{|G|^{2}}\sum\limits_{g^{\prime}\in G}\left( \sum\limits_{g \in G} {\int}_{{{\varOmega}}} f_{1}(gg^{\prime-1}y) f_{2}(y)d\mu(y)\right) \\ &=&\frac{1}{|G|^{2}}\sum\limits_{g^{\prime}\in G}\left( \sum\limits_{g \in G} {\int}_{{{\varOmega}}} f_{1}(gy) f_{2}(y)d\mu(y)\right)\\ &=&\frac{1}{|G|}\left( \sum\limits_{g \in G} {\int}_{{{\varOmega}}} f_{1}(gy) f_{2}(y)d\mu(y)\right)= \langle P_{G}(f_{1}),f_{2}\rangle. \end{array} $$

We conclude that \(P_{G}\) is an orthogonal projection. For the last part of the theorem, it is clear that \(P_{G}(\mathbf {1}) = \mathbf {1}\), which implies that \(P_{G}(\mathcal {F}) \supset P_{G}(\mathbb {R}) = \mathbb {R}\). □
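A minimal sketch of the group-average projection (our illustration; the group and the test function are assumptions): for the two-element group \(G = \{I, -I\}\) acting on \(\mathbb {R}^{n}\) with X standard normal, \(P_{G}\) reduces to the antithetic estimator; idempotence holds exactly for this G, and \(\text {var}(P_{G}(f)(X))\) is never larger than \(\text {var}(f(X))\):

```python
# Sketch: group-average projection for G = {I, -I}, i.e. the
# antithetic estimator, with X ~ N(0, I_3).
import numpy as np

rng = np.random.default_rng(4)
X = rng.standard_normal((1_000_000, 3))        # samples of X ~ N(0, I_3)

f = lambda x: np.maximum(x.sum(axis=1), 0.0)   # assumed test function

def P_G(h):
    """Group average over G = {I, -I}: x -> (h(x) + h(-x)) / 2."""
    return lambda x: 0.5 * (h(x) + h(-x))

Pf = P_G(f)
print("idempotence:", np.allclose(Pf(X), P_G(Pf)(X)))   # P_G^2 = P_G, exact here
print(f"var(f)     = {f(X).var():.4f}")
print(f"var(P_G f) = {Pf(X).var():.4f}   # never larger")
```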

Appendix E: Proof of Proposition 1

Proof

Let \(a_{g}=\langle I_{gD_{0}}, f \rangle \) for short. (1) Consider the following equations.

$$ \mathbb{E}(f_{a}(X)) = \langle f_{a} , \mathbf{1} \rangle = |G| \sum\limits_{g \in G} a_{g} \langle I_{gD_{0}} , \mathbf{1} \rangle = {\sum}_{g \in G} a_{g} = \mathbb{E}\big(f(X)\big). $$

(2) Consider the following equations.

$$ \langle f_{a}, f \rangle = |G| \sum\limits_{g \in G} a_{g} \langle I_{gD_{0}} , f \rangle = |G| {\sum}_{g \in G} (a_{g})^{2} $$

On the other hand, we have

$$ \langle f_{a}, f_{a} \rangle = |G|^{2} \sum\limits_{g \in G} {\sum}_{h \in G} a_{g} a_{h} \langle I_{gD_{0}} , I_{hD_{0}} \rangle = |G| {\sum}_{g \in G} (a_{g})^{2} =\langle f_{a}, f \rangle $$

From the above result, we have

$$ \langle f_{a}, f_{d} \rangle = \langle f_{a}, f- f_{a} \rangle = \langle f_{a}, f \rangle - \langle f_{a}, f_{a} \rangle = 0, $$

which, since \(\mathbb {E}[f_{d}(X)] = \mathbb {E}[f(X)] - \mathbb {E}[f_{a}(X)] = 0\) by (1), implies

$$ \text{var}\big(f(X)\big) = \|f\|^{2} - \mathbb{E}\big(f(X)\big)^{2} = \|f_{a}\|^{2} + \|f_{d}\|^{2} - \mathbb{E}\big(f_{a}(X)\big)^{2} = \text{var}\big(f_{a}(X)\big) + \text{var}\big(f_{d}(X)\big). $$

(3) By definition, we have

$$ P_{G}(f_{a})(x) = \sum\limits_{h \in G} \sum\limits_{g\in G} a_{g} I_{gD_{0}}(hx). $$

Note that \(hx \in gD_{0}\) if and only if \(x \in h^{-1}gD_{0}\). We can rewrite the above equation as

$$ \begin{array}{@{}rcl@{}} P_{G}(f_{a})(x) &=& \sum\limits_{g\in G} a_{g} \left( \sum\limits_{h \in G} I_{h^{-1}gD_{0}}(x)\right)\\ &=&\left( \sum\limits_{g\in G} a_{g} \right)\left( \sum\limits_{h \in G} I_{h D_{0}}(x)\right)\\ &=& \mathbb{E}\big(f(X)\big)\left( \sum\limits_{h \in G} I_{h D_{0}}(x)\right). \end{array} $$

Here we use the fact that \(\{h^{-1}g : h\in G\}\) equals G as a set. Next, consider

$$ \begin{array}{@{}rcl@{}} P_{G}(f)_{a} &=& |G| \sum\limits_{g\in G} \langle I_{g D_{0}}, P_{G}(f) \rangle I_{gD_{0}}(x)\\ &=& \sum\limits_{g\in G} \left \langle I_{g D_{0}}(x), \sum\limits_{h \in G} f(hx) \right\rangle I_{gD_{0}}(x)\\ &=& \sum\limits_{g\in G} \sum\limits_{h \in G} \left \langle I_{h g D_{0}}(x), f(x) \right\rangle I_{gD_{0}}(x) \\ &=& \sum\limits_{g\in G} \left( \sum\limits_{h \in G} a_{hg}\right) I_{gD_{0}}(x) = \sum\limits_{g\in G} \left( \sum\limits_{h \in G} a_{h}\right) I_{gD_{0}}(x) \\ & =& \left( \sum\limits_{h \in G} a_{h}\right)\left( \sum\limits_{g\in G} I_{gD_{0}}(x) \right)= P_{G}(f_{a}). \end{array} $$

Combining the above two results, we have shown that

$$ P_{G}(f)_{a} = P_{G}(f_{a}) =\mathbb{E}\big(f(X)\big)\left( \sum\limits_{h \in G} I_{h D_{0}}(x)\right) $$

which is a constant function except on a set of measure zero. This implies that \(\text {var}\big (P_{G}(f)_{a}(X)\big ) = \text {var}\big (P_{G}(f_{a})(X)\big ) = 0\).

(4) Applying (3), we have

$$ P_{G}(f_{d}) = P_{G}(f- f_{a}) = P_{G}(f) - P_{G}(f_{a}) =P_{G}(f)- P_{G}(f)_{a} = P_{G}(f)_{d}. $$

Together with Corollary 1, we have

$$ \text{var}\big(P_{G}(f)_{d}(X)\big) = \text{var}\big(P_{G}(f_{d})(X)\big) \leq \text{var}\big(f_{d}(X)\big). $$

□
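A minimal numerical sketch of Proposition 1 (our illustration; the group, fundamental domain, and test function are assumptions): let \(G = \{+1,-1\}\) act on \(\mathbb {R}\) by multiplication, with \(D_{0} = (0,\infty )\) and \(X\sim N(0,1)\), so that \(\{D_{0}, -D_{0}\}\) partitions \(\mathbb {R}\) into cells of probability \(1/|G|\). Then \(f_{a}\) should be unbiased for \(\mathbb {E}[f(X)]\), orthogonal to \(f_{d} = f - f_{a}\), and the variance should split as in part (2):

```python
# Sketch: the piecewise-constant part f_a = |G| * sum_g a_g I_{g D_0}
# for G = {+1, -1}, D_0 = (0, inf), X ~ N(0, 1).
import numpy as np

rng = np.random.default_rng(5)
x = rng.standard_normal(2_000_000)          # samples of X ~ N(0, 1)

f = lambda t: np.exp(t)                     # assumed test function
pos, neg = (x > 0), (x < 0)                 # indicators of D_0 and -D_0

a_plus = np.mean(f(x) * pos)                # a_g = <I_{g D_0}, f> for g = +1
a_minus = np.mean(f(x) * neg)               # ... and for g = -1
f_a = 2.0 * (a_plus * pos + a_minus * neg)  # f_a = |G| * sum_g a_g I_{g D_0}
f_d = f(x) - f_a

print(f"(1) E[f_a] = {f_a.mean():.4f}  vs  E[f] = {f(x).mean():.4f}")
print(f"(2) <f_a, f_d> = {np.mean(f_a * f_d):.5f}   # ~ 0")
print(f"    var(f) = {f(x).var():.4f}  vs  var(f_a) + var(f_d) = {f_a.var() + f_d.var():.4f}")
```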

About this article

Cite this article

Teng, HW., Kang, MH. On Accelerating Monte Carlo Integration Using Orthogonal Projections. Methodol Comput Appl Probab 24, 1143–1168 (2022). https://doi.org/10.1007/s11009-021-09893-3

