Week 8 – Dynamic Programming & Value Function Iteration

Learning Outcomes

By the end of this week, students will be able to:

  1. Understand the Bellman equation formulation for dynamic economic problems.
  2. Implement value function iteration (VFI) in MATLAB.
  3. Discretise the state space for computation.
  4. Simulate optimal decision rules from the computed policy function.
  5. Apply VFI to the deterministic neoclassical growth model.

Suggested Readings

In-Class Activities

  • Introduce the Bellman equation for the growth model:
    [ V(k) = \max_{k'} \left\{ u(f(k) - k') + \beta V(k') \right\} ]
  • Create a discrete grid for capital, kgrid.
  • Implement value function iteration (a sketch follows this list):
    • Initialise a value function guess.
    • Iterate the Bellman operator until convergence.
    • Extract the optimal policy function ( k'(k) ).
  • Plot the policy function and compare with the 45-degree line.
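
A minimal MATLAB sketch of the in-class steps above, assuming log utility ( u(c) = \log(c) ), Cobb-Douglas production ( f(k) = k^\alpha ) with full depreciation, and illustrative parameter values. The vectorised Bellman update relies on implicit expansion, so it requires MATLAB R2016b or later.

    % Value function iteration for the deterministic growth model (illustrative parameters)
    alpha = 0.3;          % capital share in f(k) = k^alpha
    beta  = 0.95;         % discount factor
    tol   = 1e-6;         % convergence tolerance
    maxit = 1000;         % maximum number of iterations

    % Discretise the state space: capital grid around the steady state
    kss   = (alpha*beta)^(1/(1-alpha));     % steady-state capital (full depreciation)
    n     = 200;
    kgrid = linspace(0.1*kss, 1.5*kss, n)';

    % Consumption implied by each (k, k') pair; infeasible choices get -Inf utility
    c = kgrid.^alpha - kgrid';              % n-by-n matrix: rows are k, columns are k'
    u = log(max(c, 1e-12));
    u(c <= 0) = -Inf;

    % Value function iteration: initialise a guess, apply the Bellman operator until convergence
    V = zeros(n, 1);
    for it = 1:maxit
        [TV, idx] = max(u + beta*V', [], 2);   % row-wise maximisation over k'
        if max(abs(TV - V)) < tol
            V = TV;
            break
        end
        V = TV;
    end

    kpol = kgrid(idx);                      % optimal policy function k'(k)

    % Plot the policy function against the 45-degree line
    plot(kgrid, kpol, 'b-', kgrid, kgrid, 'k--');
    xlabel('k'); ylabel('k''(k)'); legend('policy', '45-degree line');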

Homework / Practice

  • Extend the VFI code to store and plot the value function at each iteration (to visualise convergence).
  • Simulate a time path for capital starting from ( k_0 ) using the policy function (see the sketch after this list).
  • Experiment with different values of ( \beta ) and ( \alpha ) and analyse changes in the policy rule.
  • Write a short comment (in script) on how the discount factor affects savings behaviour.
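
For the simulation exercise, one possible sketch iterates the policy index forward from the grid point nearest ( k_0 ). It reuses kgrid, kss and idx from the VFI sketch above; the horizon T and starting point k0 are illustrative choices, not prescribed values.

    % Simulate a capital path from k0 using the computed policy indices
    T  = 50;
    k0 = 0.2*kss;                         % illustrative starting capital
    kpath = zeros(T, 1);
    [~, j] = min(abs(kgrid - k0));        % start at the grid point closest to k0
    kpath(1) = kgrid(j);
    for t = 2:T
        j = idx(j);                       % index of the optimal k' given current k
        kpath(t) = kgrid(j);
    end
    plot(1:T, kpath); xlabel('t'); ylabel('k_t');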

Files

Homework submission

  • Submit your homework here
  • Please upload your homework as a single zip file. Access is now open with any email address, but remember to name your file with your full name and/or student ID.
  • The submission should include the .m file used to produce the results.
  • You can modify your submission until the beginning of Week 9 at 9am.