Abstract
This paper demonstrates the usefulness of the Perov contraction theorem, a generalization of the classical Banach contraction theorem, for solving Markov dynamic programming problems. When the reward function is unbounded, combining an appropriate weighted supremum norm with the Perov contraction theorem yields a unique fixed point of the Bellman operator under weaker conditions than those required by existing approaches. An application to the optimal savings problem shows that an average growth rate condition, derived from the spectral radius of a certain nonnegative matrix, is sufficient and almost necessary for obtaining a solution.
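As a minimal illustration of the mechanism the abstract describes (not the paper's actual model), the sketch below sets up a hypothetical two-state Markov problem with state-dependent discount factors. In Perov's theorem the contraction modulus is a nonnegative matrix B rather than a scalar, and the relevant condition is that its spectral radius satisfies ρ(B) < 1; this can hold even when one of the discount factors exceeds 1. All numbers here (beta, P, r) are made-up assumptions for illustration.

```python
import numpy as np

# Hypothetical two-state example (illustrative only, not from the paper):
# state-dependent discount factors, one of which exceeds 1.
beta = np.array([1.05, 0.5])          # discount factor in each Markov state
P = np.array([[0.5, 0.5],             # Markov transition probabilities
              [0.5, 0.5]])
r = np.array([1.0, 2.0])              # per-state reward

# Perov contraction modulus: the nonnegative matrix B = diag(beta) @ P.
B = beta[:, None] * P

# The key condition is rho(B) < 1, where rho is the spectral radius;
# Gelfand's formula characterizes it as lim ||B^n||^(1/n).
rho = max(abs(np.linalg.eigvals(B)))
assert rho < 1, "Perov contraction condition fails"

# The affine operator T v = r + B v is then a Perov contraction with
# modulus B, so successive approximation converges to the unique fixed
# point v* = (I - B)^{-1} r.
v = np.zeros(2)
for _ in range(500):
    v = r + B @ v

v_star = np.linalg.solve(np.eye(2) - B, r)
print(np.allclose(v, v_star))
```

Note that a scalar (Banach) contraction argument with modulus max(beta) would fail here, since max(beta) = 1.05 > 1; the matrix-valued modulus captures the averaging across states that makes the operator contractive overall.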
Data Availability
No data have been used for this research.
Cite this article
Toda, A.A. Unbounded Markov dynamic programming with weighted supremum norm Perov contractions. Econ Theory Bull (2024). https://doi.org/10.1007/s40505-024-00267-9
Keywords
- Dynamic programming
- Gelfand formula
- Optimal savings
- Perov contraction
- Spectral radius
- Weighted supremum norm