
A quasi-Newton trust-region method for optimization under uncertainty using stochastic simplex approximate gradients

  • Original Paper
  • Published in Computational Geosciences

Abstract

The goal of field-development optimization is to maximize the expected value of an objective function, e.g., the net present value of a producing oil field or the amount of CO\(_2\) stored in a subsurface formation, over an ensemble of models that describes the uncertainty range. A single evaluation of the objective function requires solving a system of partial differential equations, which can be computationally costly. Hence, it is highly desirable for an optimization algorithm to reduce the number of objective-function evaluations while delivering a high convergence rate. Here, we develop a quasi-Newton method that builds on approximate evaluations of objective-function gradients and takes more effective iterative steps using a trust-region approach rather than a line search. We implement three gradient formulations: the ensemble-optimization (EnOpt) gradient and two variants of the stochastic simplex approximate gradient (StoSAG), all computed using perturbations around the point of interest. We modify the formulations to exploit the objective-function structure: instead of returning a single value for the gradient, the reformulation breaks the objective function into its sub-components and returns a set of sub-gradients. We can then incorporate prior problem-specific knowledge by passing a ‘weight’ matrix that acts on the sub-gradients. Two quasi-Newton updating algorithms are implemented: Broyden-Fletcher-Goldfarb-Shanno (BFGS) and symmetric rank-one (SR1). We first evaluate the variants of our method on challenging test functions (e.g., stochastic variants of the Rosenbrock and Chebyquad functions). Then, we present an application to a well-control optimization problem for a realistic synthetic case. Our results confirm that StoSAG gradients are significantly more effective than EnOpt gradients at accelerating convergence. An important challenge for stochastic gradients is determining a priori the adequate number of perturbations.
We report that the optimal number of perturbations depends on both the number of decision variables and the size of the uncertainty ensemble, and we provide practical guidelines for its selection. We show on the test functions that imposing prior knowledge of the problem structure can improve gradient quality and significantly accelerate convergence. In many instances, the quasi-Newton algorithms deliver superior performance compared to the steepest-descent algorithm, especially during the early iterations. Given the computational cost involved in typical applications, rapid and noteworthy improvements at early iterations are greatly desirable for accelerated project delivery. Furthermore, our method is robust, exploits parallel processing, and can be readily applied in a generic fashion to a variety of problems where the true gradient is difficult to compute or simply not available.
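As a rough illustration of the idea (not the paper's implementation), a StoSAG-type estimator can be sketched as a least-squares simplex gradient in which each random control perturbation is paired with one ensemble member. `J`, `u`, and `models` below are hypothetical placeholders for the objective function, the control vector, and the uncertainty ensemble.

```python
import numpy as np

def stosag_gradient(J, u, models, n_pert=20, sigma=0.1, rng=None):
    """Hypothetical sketch of a StoSAG-style simplex gradient.

    J(u, m): objective for controls u under model realization m.
    Each random perturbation of u is paired with one ensemble
    member; the gradient is the least-squares fit of the
    objective differences to the perturbations.
    """
    rng = np.random.default_rng(rng)
    n = u.size
    dU = sigma * rng.standard_normal((n_pert, n))       # perturbation matrix
    dJ = np.array([J(u + dU[k], models[k % len(models)])
                   - J(u, models[k % len(models)])      # paired differences
                   for k in range(n_pert)])
    g, *_ = np.linalg.lstsq(dU, dJ, rcond=None)         # solve dU @ g ~= dJ
    return g
```

Each of the `n_pert` perturbed evaluations is independent, which is what makes this class of gradients straightforward to parallelize.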



Acknowledgements

This research is supported by the Reservoir Simulation Joint Industry Project (RSJIP) in the Center for Subsurface Energy and the Environment at The University of Texas at Austin. The authors acknowledge Computer Modeling Group Ltd. for providing the CMG simulation package and making it available for academic use.

Author information


Corresponding author

Correspondence to Esmail Eltahan.

Ethics declarations

Conflict of interest

The authors have no conflicts of interest to declare. The data that support the findings of this study will be made available upon reasonable request.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

Fig. 22 Comparison of gradient-based methods on the stochastic Rosenbrock function with \(N=1000\)

Fig. 23 Comparison of gradient-based methods on the stochastic Chebyquad function with \(\sigma =10^{-1}\)

Fig. 24 Comparison of gradient-based methods on the stochastic Chebyquad function with \(\sigma =1\)

Fig. 25 Progression of the analyzed methods from the initial point (black circle) to their final convergence points

Fig. 26 Chebyquad function with \(\sigma =10^{-3}\). Progression of the analyzed methods from the initial point (black circle) to their final convergence points

Fig. 27 Chebyquad function with \(\sigma =1\). Progression of the analyzed methods from the initial point (black circle) to their final convergence points

In this section, we present supplementary plots that support the ideas developed in the main body of the manuscript. First, we discuss the effect of increasing the dimension of the optimization-parameter vector. As can be seen in Figs. 21 and 22, when N is large, the stochastic gradients (StoSAG and EnOpt) are more efficient than the FD gradient; for small-scale problems, however, the FD gradient is more efficient. The results for the stochastic Chebyquad function over a range of \(\sigma \) are plotted in Figs. 23 and 24. When the variance is large, the performance of the EnOpt gradient is comparable to that of the StoSAG gradients; when the variance is small, however, the EnOpt gradient leads to poor results.
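The cost scaling behind this comparison can be made concrete with a minimal sketch of the finite-difference (FD) baseline over the ensemble mean; `J` and `models` are hypothetical placeholders, not the paper's implementation.

```python
import numpy as np

def fd_gradient(J, u, models, h=1e-6):
    """Forward-difference gradient of the ensemble-mean objective.

    Cost: (len(u) + 1) * len(models) evaluations of J, i.e. linear
    in the number of decision variables N. A stochastic gradient
    needs only one evaluation per perturbation, which is why it
    becomes more efficient than FD once N is large.
    """
    base = np.mean([J(u, m) for m in models])
    g = np.empty(u.size)
    for i in range(u.size):
        up = u.copy()
        up[i] += h                                   # perturb one variable
        g[i] = (np.mean([J(up, m) for m in models]) - base) / h
    return g
```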

The progress of the iterations can be visualized for the 2-dimensional Rosenbrock problem in Fig. 25. In this plot, each arrow represents the step taken between iterations, while the color of the arrow designates the method and is the same as in the previous figures. Unlike SD, the Newton method exhibits quadratic convergence and takes far fewer iterations to converge. The Newton direction accounts for the curvature of the surface of F, resulting in smooth downhill navigation, as opposed to the zigzagging movement in the case of SD. BFGS exhibits behavior that is intermediate between SD and Newton. In Figs. 26 and 27, the response surface of the Chebyquad function is plotted in contours for different uncertainty distributions. It is clear from the figures that the EnOpt gradient becomes less accurate as the variance increases.
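The intermediate behavior of BFGS can be sketched with a toy trust-region quasi-Newton loop on the deterministic 2-D Rosenbrock function. This is an illustrative simplification, not the paper's algorithm: it uses exact gradients, clips the quasi-Newton step to the trust radius instead of solving a full trust-region subproblem, and adapts the radius from the ratio of actual to predicted decrease.

```python
import numpy as np

def rosenbrock(x):
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

def rosenbrock_grad(x):
    return np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                     200 * (x[1] - x[0]**2)])

def bfgs_trust_region(f, grad, x0, delta=1.0, max_iter=1000, tol=1e-6):
    """Toy BFGS iteration with a simple trust-region safeguard."""
    x, H = x0.copy(), np.eye(len(x0))        # H approximates the inverse Hessian
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        p = -H @ g                           # quasi-Newton direction
        if np.linalg.norm(p) > delta:        # clip step to trust radius
            p *= delta / np.linalg.norm(p)
        pred = -(g @ p)                      # first-order predicted decrease
        actual = f(x) - f(x + p)
        if actual > 0.75 * pred:
            delta = min(2 * delta, 10.0)     # good model: expand radius
        elif actual < 0.25 * pred:
            delta = max(0.25 * delta, 1e-6)  # poor model: shrink radius
        if actual > 0:                       # accept only descent steps
            s = p
            y = grad(x + p) - g
            if s @ y > 1e-12:                # curvature safeguard for BFGS
                rho = 1.0 / (s @ y)
                V = np.eye(len(x)) - rho * np.outer(s, y)
                H = V @ H @ V.T + rho * np.outer(s, s)
            x = x + p
    return x
```

Starting from the classic point \((-1.2, 1)\), the early steps are clipped to the trust radius and resemble steepest descent; as H accumulates curvature information, the iterates turn smoothly along the valley toward the minimizer at \((1, 1)\), mirroring the intermediate behavior seen in Fig. 25.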

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

Reprints and permissions

About this article


Cite this article

Eltahan, E., Alpak, F.O. & Sepehrnoori, K. A quasi-Newton trust-region method for optimization under uncertainty using stochastic simplex approximate gradients. Comput Geosci 27, 627–648 (2023). https://doi.org/10.1007/s10596-023-10218-1

