Survey_SW_2006
Metadata
Abstract
We consider a Markov jump-linear-quadratic (MJLQ) model of an economy with forward-looking variables. The economy has a private sector and a policymaker. We let $X_t$ denote an $n_X$-vector of predetermined variables in period $t$, $x_t$ an $n_x$-vector of forward-looking variables, and $i_t$ an $n_i$-vector of (policymaker) instruments (control variables). The model takes the form of an MJLQ system extended to include forward-looking variables. In this setup, model uncertainty takes the form of different “modes” or regimes that follow a Markov process. The setup can be adapted to many different forms of model uncertainty, yet provides a relatively simple structure for analysis. Due to the curse of dimensionality, the Bayesian optimal policy (BOP) is feasible only in relatively small models. Confronted with these difficulties, we also consider adaptive optimal policy (AOP). In this case, the policymaker updates the probability distribution over the current mode in a Bayesian way each period, but the optimal policy is computed each period under the assumption that the policymaker will not learn from observations in the future. In our MJLQ setting, the AOP is significantly easier to compute, and in many cases it provides a good approximation to the BOP. Moreover, the AOP analysis is of interest in its own right, as it is closely related to specifications of adaptive learning that have been widely studied in macroeconomics (see [6] for an overview). Further, the AOP specification rules out the experimentation that some may view as objectionable in a policy context.
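For concreteness, the following is a minimal sketch of the kind of system the abstract describes, in the spirit of standard MJLQ formulations with forward-looking variables; the matrix names $A$, $B$, $C$, $H$, the mode index $j_t$, the transition matrix $P$, and the observation density $f$ are illustrative assumptions, not notation taken from the paper:

\[
\begin{aligned}
X_{t+1} &= A_{11,j_{t+1}} X_t + A_{12,j_{t+1}} x_t + B_{1,j_{t+1}} i_t + C_{j_{t+1}} \varepsilon_{t+1}, \\
\mathrm{E}_t \bigl[ H_{j_{t+1}} x_{t+1} \bigr] &= A_{21,j_t} X_t + A_{22,j_t} x_t + B_{2,j_t} i_t,
\end{aligned}
\]

where the mode $j_t \in \{1,\dots,n_j\}$ follows a Markov chain with transition matrix $P = [P_{jk}]$, $P_{jk} = \Pr(j_{t+1} = k \mid j_t = j)$. Under AOP, the mode beliefs $p_{t|t} = (p_{1,t|t},\dots,p_{n_j,t|t})$ would be updated each period by the standard prediction-and-Bayes step

\[
p_{t+1|t} = P^{\top} p_{t|t}, \qquad
p_{k,t+1|t+1} = \frac{ f\bigl(y_{t+1} \mid j_{t+1} = k,\, I_t\bigr)\, p_{k,t+1|t} }{ \sum_{l} f\bigl(y_{t+1} \mid j_{t+1} = l,\, I_t\bigr)\, p_{l,t+1|t} },
\]

where $y_{t+1}$ denotes the period-$(t+1)$ observables and $I_t$ the information set. Policy is then recomputed each period treating these beliefs as if they would never be revised again, which is what removes the experimentation motive present under BOP.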
Publication type
Research Data
Link to publication
Collections
- External Research Data