This Python-based code repository supplements the work of Philipp Renner and Simon Scheidegger, Machine Learning for Dynamic Incentive Problems (Renner and Scheidegger, 2018), which introduces a highly scalable computational technique for solving dynamic incentive problems (potentially with substantial heterogeneity).
However, the scope of the method is much broader: it is a generic framework for computing global solutions to dynamic (stochastic) models with many state variables, and thus applies to a wide range of dynamic models. The available solution algorithms are based on "value function iteration" as well as "time iteration".
- This repository aims to make our method easily accessible to the computational economics and finance community.
- The computational framework located here is extensively documented, leverages GPyTorch, and combines Gaussian Process regression with performance-boosting options such as Bayesian active learning, the active subspace method, deep Gaussian processes, and MPI-based parallelism.
- Replication codes for the dynamic incentive problems are provided.
- Furthermore, to demonstrate the broad applicability of the method, several additional examples are provided: a stochastic optimal growth model (solved with value function iteration) and an international real business cycle model (solved with time iteration).
- In addition, simple standalone code examples are provided here that introduce Gaussian Process regression and Bayesian active learning to new users.
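To give a flavor of the value function iteration mentioned above, the following is a minimal, dependency-free sketch: tabular value function iteration for a deterministic optimal growth model with log utility, Cobb-Douglas production, and full depreciation. It is an illustrative stand-in for the repository's solver (which replaces the fixed grid with a Gaussian Process surrogate); all parameter values and the model specification are assumptions for this example only.

```python
import math

# Assumed model parameters: capital share alpha, discount factor beta.
alpha, beta = 0.36, 0.95
grid = [0.05 + 0.95 * i / 79 for i in range(80)]  # capital grid on [0.05, 1]
V = [0.0] * len(grid)

def bellman_step(V):
    """One sweep of the Bellman operator over the capital grid."""
    V_new, policy = [], []
    for k in grid:
        best, best_kp = -float("inf"), grid[0]
        for j, kp in enumerate(grid):
            c = k ** alpha - kp          # consumption from the budget constraint
            if c <= 0:
                break                    # grid is sorted: later choices infeasible
            val = math.log(c) + beta * V[j]
            if val > best:
                best, best_kp = val, kp
        V_new.append(best)
        policy.append(best_kp)
    return V_new, policy

for _ in range(200):                     # iterate toward the fixed point
    V, policy = bellman_step(V)

# This model has the known closed-form policy k' = alpha * beta * k^alpha,
# which the grid-based policy should match up to grid resolution.
print(policy[40], alpha * beta * grid[40] ** alpha)
```

The GP-based algorithms in the repository follow the same logic, but fit a surrogate to `(k, V_new(k))` pairs at each iteration instead of storing values on a grid.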
- Philipp Renner (University of Lancaster, Department of Economics)
- Simon Scheidegger (University of Lausanne, Department of Economics)
Please cite "Machine Learning for Dynamic Incentive Problems" (P. Renner and S. Scheidegger, 2018) in your publications if it helps your research:
@article{rennerscheidegger_2018,
title={Machine learning for dynamic incentive problems},
author={Renner, Philipp and Scheidegger, Simon},
year={2018},
url = "https://ssrn.com/abstract=3282487",
journal={Available at SSRN 3282487}
}
Introductory example on Gaussian Process Regression: To illustrate how Gaussian Process regression can be applied to approximate functions, we provide a simple notebook.
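As a taste of what the notebook covers, here is a minimal, dependency-free sketch of Gaussian Process regression in one dimension, approximating f(x) = sin(x) from a handful of samples. The repository's notebook uses GPyTorch; the squared-exponential kernel, hand-rolled linear solver, and target function below are assumptions chosen to keep this example self-contained.

```python
import math

def kernel(a, b, length=1.0):
    """Squared-exponential (RBF) covariance."""
    return math.exp(-0.5 * ((a - b) / length) ** 2)

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            fac = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= fac * M[col][c]
        # back substitution happens after forward elimination
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

X = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0]   # training inputs
y = [math.sin(x) for x in X]              # noise-free observations
jitter = 1e-8                             # tiny nugget for numerical stability

K = [[kernel(a, b) + (jitter if i == j else 0.0) for j, b in enumerate(X)]
     for i, a in enumerate(X)]
alpha_w = solve(K, y)                     # weights: K^{-1} y

def predict(x_star):
    """GP posterior mean at a test point: k_*^T K^{-1} y."""
    return sum(kernel(x_star, xi) * w for xi, w in zip(X, alpha_w))

print(predict(2.5), math.sin(2.5))
```

The posterior mean interpolates the training points (up to the jitter) and smoothly fills in between them, which is exactly the property the solvers exploit when approximating value functions.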
Introductory examples on Bayesian Active Learning: To illustrate how Bayesian Active Learning in conjunction with Gaussian Process regression can be used to approximate functions, we provide a 1-dimensional example as well as a multi-dimensional example.
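The core loop of Bayesian active learning can be sketched in a few lines: fit a GP to the current data, query the candidate point where the posterior variance is largest, observe the function there, and repeat. The example below does this for f(x) = sin(x) in 1D; the notebooks in this repository use GPyTorch, whereas the kernel, solver, and target function here are dependency-free stand-ins assumed for illustration.

```python
import math

def f(x):
    """Target function to learn (assumed for illustration)."""
    return math.sin(x)

def kernel(a, b, length=1.0):
    """Squared-exponential (RBF) covariance."""
    return math.exp(-0.5 * ((a - b) / length) ** 2)

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            fac = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= fac * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

jitter = 1e-8
X, y = [0.0, 6.0], [f(0.0), f(6.0)]       # start from the domain endpoints
candidates = [6.0 * i / 60 for i in range(61)]

def gram(X):
    return [[kernel(a, b) + (jitter if i == j else 0.0)
             for j, b in enumerate(X)] for i, a in enumerate(X)]

for _ in range(8):                        # active-learning rounds
    K = gram(X)
    def post_var(x_star):
        """Posterior variance: k(x*,x*) - k_*^T K^{-1} k_*."""
        k_star = [kernel(x_star, xi) for xi in X]
        v = solve(K, k_star)
        return kernel(x_star, x_star) - sum(a * b for a, b in zip(k_star, v))
    x_next = max(candidates, key=post_var)  # most uncertain candidate
    X.append(x_next)
    y.append(f(x_next))

# Posterior mean with the actively chosen design
alpha_w = solve(gram(X), y)
def predict(x_star):
    return sum(kernel(x_star, xi) * w for xi, w in zip(X, alpha_w))

print("points:", len(X))
```

Because the variance collapses near already-sampled inputs, each round queries an as-yet poorly covered region, so the design spreads over the domain without any hand-tuned grid.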
Baseline model by Fernandes and Phelan (2000): We provide the code used to solve our baseline model (section 4.6), including the result files, and an explanation on how to run the code.
Heterogeneous agent model: We provide the code used to solve our adverse selection model with heterogeneous agents (section 5), including the result files, and an explanation on how to run the code.
All implementations use Python 3.
This work was generously supported by grants from the Swiss National Supercomputing Centre (CSCS) under project IDs s885 and s995, the Swiss Platform for Advanced Scientific Computing (PASC) under the project "Computing equilibria in heterogeneous agent macro models on contemporary HPC platforms", the Swiss National Science Foundation under the projects "New methods for asset pricing with frictions" and "Can economic policy mitigate climate change", and the Enterprise for Society (E4S).
