Researchers: Michael P. Evers, Markus Kontny
Topics & Objectives
Key elements of modern dynamic economic modeling are uncertainty and forward-looking behavior. While understanding the interaction of uncertainty and forward-looking behavior in nonlinear settings is of considerable interest, it confronts the non-trivial challenge of actually solving such models.
In this research project, we present a novel approach to the implementation of fixed-point iterations based on the Euler equation, also known as Policy Function Iteration. The standard implementation evaluates the conditional expectations using a discretization of the exogenous stochastic processes based on approximated Markov chains, e.g., Tauchen, Tauchen-Hussey, or Rouwenhorst. As these numerical routines are grid-based, they face the curse of dimensionality. We propose to circumvent this computationally expensive step by instead using a k-th order Taylor series approximation to the Euler equation. Expanding the Euler equation in the exogenous disturbances about the case where shocks are absent allows a direct representation of expectation formation in the Euler equation in terms of the first k moments of the distribution of exogenous shocks.
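The idea of replacing a grid-based conditional expectation by a moment-based Taylor expansion can be illustrated on a single expectation of marginal utility. The sketch below is not the paper's implementation; the CRRA utility, the unit income level, and the shock standard deviation are illustrative assumptions. It expands E[u'(1 + eps)] to second order about eps = 0, so that only the first two moments of eps enter, and compares the result against a Gauss-Hermite quadrature benchmark.

```python
import numpy as np

gamma, sigma = 2.0, 0.1  # illustrative risk aversion and shock std. dev. (assumed values)

def mu(c):
    # marginal utility u'(c) = c^{-gamma} under CRRA utility
    return c ** (-gamma)

# Second-order Taylor expansion of E[u'(1 + eps)] about eps = 0:
#   E[f(eps)] ~ f(0) + f'(0) E[eps] + 0.5 f''(0) E[eps^2]
f0 = mu(1.0)
f1 = -gamma                      # f'(0)  = -gamma * 1^{-gamma-1}
f2 = gamma * (gamma + 1)         # f''(0) =  gamma*(gamma+1) * 1^{-gamma-2}
m1, m2 = 0.0, sigma ** 2         # first two moments of eps ~ N(0, sigma^2)
taylor2 = f0 + f1 * m1 + 0.5 * f2 * m2

# Benchmark: Gauss-Hermite quadrature of the same expectation
nodes, weights = np.polynomial.hermite.hermgauss(21)
quad = (weights @ mu(1.0 + np.sqrt(2.0) * sigma * nodes)) / np.sqrt(np.pi)

print(taylor2, quad)
```

Note that the Taylor value exceeds u'(1) = 1: the second-order term captures the precautionary effect of the shock on expected marginal utility using only its variance, without any grid over the shock.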
In Evers and Kontny (“Policy Function Iteration using Approximate Equilibrium Systems”), we provide a general description of our approach and then apply it to a consumption-savings problem. Nonlinearities enter the model because the household is risk-averse and faces a non-negativity constraint on consumption, while both income and asset returns are subject to shocks. We implement the policy function iteration (PFI) on a second-order Taylor approximation to the Euler equation.
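To make the procedure concrete, a minimal PFI sketch for a stylized consumption-savings problem is given below. This is not the paper's code: it assumes a single additive income shock (no return shock), CRRA utility, illustrative parameter values, and a no-borrowing constraint c <= x in place of the model's exact constraint set. The conditional expectation in the Euler equation u'(c) = beta*R*E[u'(c')] is replaced by its second-order Taylor expansion in the shock, so only the shock variance enters.

```python
import numpy as np

# Illustrative parameters (assumed, not the paper's calibration)
beta, R, gamma, sigma, y = 0.96, 1.02, 2.0, 0.1, 1.0
grid = np.linspace(0.1, 10.0, 200)            # cash-on-hand grid x

def up(c):     return c ** (-gamma)           # u'(c) under CRRA utility
def up_inv(m): return m ** (-1.0 / gamma)     # inverse marginal utility

c = 0.9 * grid                                # initial guess for the policy c(x)
for _ in range(2000):
    # next-period cash on hand, evaluated at the zero-shock point eps = 0
    xn = R * (grid - c) + y
    cn = np.interp(xn, grid, c)               # c(x') at eps = 0
    # finite-difference derivatives of the policy, needed for the 2nd-order term
    dc  = np.interp(xn, grid, np.gradient(c, grid))
    d2c = np.interp(xn, grid, np.gradient(np.gradient(c, grid), grid))
    # E[u'(c(x'+eps))] ~ u'(c(x')) + 0.5*sigma^2 * d^2/deps^2 [u'(c(x'+eps))]|_{eps=0}
    g2 = gamma * (gamma + 1) * cn ** (-gamma - 2) * dc ** 2 \
         - gamma * cn ** (-gamma - 1) * d2c
    Emu = up(cn) + 0.5 * sigma ** 2 * g2
    # Euler equation u'(c) = beta*R*E[u'(c')], respecting the constraint c <= x
    c_new = np.minimum(up_inv(beta * R * Emu), grid)
    if np.max(np.abs(c_new - c)) < 1e-8:
        break
    c = 0.5 * c + 0.5 * c_new                 # damped update for stability
```

No discretization of the shock process appears anywhere in the loop: the expectation is a single evaluation per grid point, regardless of how finely a Markov-chain approximation would have resolved the shock.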
- Computational Speed: Our approach yields a substantial gain in computational speed over standard policy function iteration methods as the size of the exogenous state grid increases: for a standard parameterization with a grid size of ten nodes per exogenous state (income and asset returns), our approach is about 40 times faster than standard PFI.
- Accuracy: The Euler errors are of the same order of magnitude as those of standard PFI and correspond on average to a solution error of 1 euro in 10,000 euros. Our approach thus does not suffer a loss of accuracy.
- Curse of Dimensionality: The computational burden of PFI using a discrete conditional expectation operator increases quadratically in the grid size of exogenous state variables/shocks whereas it only increases linearly using the Taylor approximation to the Euler equation. Our approach thereby alleviates the challenge of the curse of dimensionality.
- We perform two different exercises: First, we compute the implied Euler error of the exact solution for different grid sizes N in the discretization of the Markov chain. This yields a statement about false rejection and thus about the accuracy of the Euler error assessment itself.
- Second, we compute a numerical global solution to the model (based on the time-iteration method) and then compare the implied Euler errors for different grid sizes N in the discretization of the Markov chain. This yields a statement about false acceptance of the numerical solutions.
- False Rejection: When assessing the Euler error of the true, exact solution to the model – which should be zero up to machine precision – the Euler errors computed from the discretized Markov chain can reach up to 10-20% for small grid sizes and high risk aversion, decreasing to 1% for larger grids. Such numbers would typically lead to a rejection of the computed solution as inaccurate.
- False Acceptance: Conversely, and trivially, when the Euler error assessment itself is based on the very same discretization of the Markov chain as the numerical routine used to compute the global solution, the Euler error must necessarily be “zero”. The Euler error then coincides with the stopping threshold of the numerical routine and would spuriously indicate an acceptable Euler error.
- The central message of the project is a reminder that, because global solution routines for structural models with forward-looking behavior and uncertainty rest on approximations to the expectations, they are prone to approximation errors.
- The curse of dimensionality imposes a trade-off between the accuracy of the approximation to the expectations, and hence of the computed solution, on the one hand, and the computational burden and feasibility on the other hand: circumventing the curse of dimensionality by reducing the grid size may lead to a substantial, if not unacceptable, inaccuracy of the numerical solution to the model and its predictions.
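The false-rejection mechanism can be illustrated on an expectation whose exact value is known in closed form. The sketch below is illustrative, not the paper's model: it evaluates E[exp(sigma*eps)] = exp(sigma^2/2) for a standard normal shock using a Tauchen-style discretization with N nodes (equally spaced grid, normal-CDF bin probabilities, tail mass lumped into the end nodes) and records the relative error for each N. Even though the "solution" being assessed is exact, a coarse discretization reports a percent-level Euler-error-like discrepancy that shrinks only as N grows.

```python
import numpy as np
from math import erf, sqrt, exp

def norm_cdf(x):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def discretize(N, m=3.0):
    # Tauchen-style discretization of eps ~ N(0,1):
    # equally spaced grid on [-m, m] with CDF-based bin probabilities
    z = np.linspace(-m, m, N)
    step = z[1] - z[0]
    p = np.array([norm_cdf(zi + step / 2) - norm_cdf(zi - step / 2) for zi in z])
    p[0]  = norm_cdf(z[0] + step / 2)          # lump the tails into the end nodes
    p[-1] = 1.0 - norm_cdf(z[-1] - step / 2)
    return z, p

sigma = 0.5
exact = exp(sigma ** 2 / 2)                    # E[exp(sigma*eps)] in closed form
errs = {}
for N in (3, 5, 10, 50):
    z, p = discretize(N)
    errs[N] = abs((p @ np.exp(sigma * z)) / exact - 1.0)
    print(N, errs[N])
```

The error at N = 3 is on the order of a few percent although the object being evaluated is exact; it is the expectation operator, not the solution, that is inaccurate. This is precisely the trade-off noted above: shrinking the grid to escape the curse of dimensionality degrades the approximation to the expectations themselves.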