Abstract
The adaptive Gaussian mixture filter (AGM) was introduced as a robust filter technique for large-scale applications and an alternative to the well-known ensemble Kalman filter (EnKF). It consists of two analysis steps: one linear update and one weighting/resampling step. The bias of AGM is determined by two parameters: an adaptive weight parameter (forcing the weights to be more uniform to avoid filter collapse) and a predetermined bandwidth parameter which determines the size of the linear update. It has been shown that the filter can achieve asymptotic optimality if, as the sample size increases, the adaptive parameter approaches one and the bandwidth parameter decreases toward zero. For large-scale applications with a limited sample size, the filter solution may be far from optimal, since the adaptive parameter gets close to zero depending on how well the samples from the prior distribution match the data. The bandwidth parameter must often be chosen well above zero in order to make linear updates large enough to match the data, at the expense of bias in the estimates. In the iterative AGM we introduce here, we take advantage of the fact that the history matching problem is usually one of estimating parameters and initial conditions. If the prior distribution of initial conditions and parameters is close to the posterior distribution, it is possible to match the historical data with a small bandwidth parameter and an adaptive weight parameter close to one. Hence, the bias of the filter solution is small. To obtain this scenario, we iteratively run the AGM through the data history with a very small bandwidth, creating a new prior distribution from the updated samples after each iteration. After a few iterations, nearly all samples from the previous iteration match the data, and the above scenario is achieved.
A simple toy problem shows that the iterative version of the AGM can reconstruct the true posterior distribution. A 2D synthetic reservoir model is then revisited to demonstrate the potential of the new method on large-scale problems.
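The scheme described above can be sketched in code. The following is a minimal, illustrative sketch, not the authors' implementation: it assumes a static parameter-estimation setting (the model forecast between data points is omitted), a linear observation operator `H`, and one common choice for the adaptive weight parameter `alpha` (the normalized effective sample size); all function and variable names are hypothetical.

```python
import numpy as np

def agm_analysis(X, y, H, R, h, rng):
    """One AGM analysis step on an ensemble X (N samples x d states):
    a Kalman-type linear update whose size is set by the bandwidth h,
    followed by adaptive weighting and resampling."""
    N, d = X.shape
    # Each sample carries a Gaussian kernel with covariance h^2 * C,
    # where C is the ensemble sample covariance (regularized for stability).
    C = np.cov(X, rowvar=False).reshape(d, d) + 1e-8 * np.eye(d)
    S = H @ (h**2 * C) @ H.T + R
    S_inv = np.linalg.inv(S)
    K = (h**2 * C) @ H.T @ S_inv
    innov = y - X @ H.T              # innovations, shape (N, m)
    Xa = X + innov @ K.T             # linear update; small when h is small
    # Importance weights from the Gaussian mixture likelihood of the data.
    logw = -0.5 * np.einsum('ij,jk,ik->i', innov, S_inv, innov)
    w = np.exp(logw - logw.max())
    w /= w.sum()
    # Adaptive damping toward uniform weights to avoid filter collapse;
    # here alpha is taken as the normalized effective sample size (an
    # assumption for illustration; other adaptive choices exist).
    alpha = 1.0 / (N * np.sum(w**2))         # in (0, 1]
    w_tilde = alpha * w + (1.0 - alpha) / N
    w_tilde /= w_tilde.sum()
    return Xa[rng.choice(N, size=N, p=w_tilde)]

def iterative_agm(X0, data, H, R, h, n_iter, rng):
    """Iterative AGM: run the filter through the whole data history with a
    small bandwidth, then reuse the final updated samples as the prior for
    the next iteration."""
    X = X0
    for _ in range(n_iter):
        for y in data:
            X = agm_analysis(X, y, H, R, h, rng)
    return X
```

With a small bandwidth `h`, each linear update is modest and the weighting step dominates; repeating the pass over the data history moves the sample cloud toward the posterior, so that in later iterations the weights stay close to uniform and `alpha` approaches one.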
Cite this article
Stordal, A.S., Lorentzen, R.J. An iterative version of the adaptive Gaussian mixture filter. Comput Geosci 18, 579–595 (2014). https://doi.org/10.1007/s10596-014-9402-6