Mini-course «Monte Carlo methods for optimal stochastic control»

From 26 November to 30 November, Jan Palczewski, Associate Professor at the University of Leeds (UK), delivered the mini-course «Monte Carlo methods for optimal stochastic control».

Description:
In recent years, numerical methods for stochastic control problems have seen a number of developments beyond classical finite-difference approximations to the Hamilton-Jacobi-Bellman (HJB) equation and finite Markov chain approximations. In this course, I want to explore two strands of research: approximations of solutions of discrete-time stochastic control problems, and probabilistic representations of solutions to HJB equations (leading to Monte Carlo schemes). In the first part, the emphasis will be put on dual formulations (for optimal stopping and control problems) and on approximations of the dynamic programming equation. I will also sketch methods for proving convergence (including a glimpse at the theory surrounding the uniform law of large numbers). In the second part, I will talk about probabilistic representations of solutions to HJB equations, linking them to backward stochastic differential equations (BSDEs). I will show relations between numerical methods for the solution of BSDEs and the results from the first part of the course.
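To give a flavour of the kind of Monte Carlo approximation of the dynamic programming equation mentioned above, here is a minimal sketch of a regression-based scheme for an optimal stopping problem (in the spirit of Longstaff-Schwartz least-squares Monte Carlo). It is an illustration only, not material from the course itself; the model (a Bermudan put under geometric Brownian motion), all parameter values, and the polynomial regression basis are assumptions chosen for the example.

```python
# Sketch of regression-based Monte Carlo for optimal stopping:
# the continuation value in the dynamic programming equation is replaced
# by a least-squares regression on simulated paths.
import numpy as np

rng = np.random.default_rng(0)

# Illustrative model parameters (assumptions, not from the course).
s0, strike, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
n_steps, n_paths = 50, 100_000
dt = T / n_steps
disc = np.exp(-r * dt)

# Simulate geometric Brownian motion paths.
z = rng.standard_normal((n_paths, n_steps))
log_inc = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
s = s0 * np.exp(np.cumsum(log_inc, axis=1))
s = np.hstack([np.full((n_paths, 1), s0), s])  # prepend the time-0 value

payoff = lambda x: np.maximum(strike - x, 0.0)  # Bermudan put payoff

# Backward induction: start from the terminal payoff and, at each earlier
# exercise date, approximate the conditional expectation (continuation
# value) by regression on in-the-money paths.
value = payoff(s[:, -1])
for t in range(n_steps - 1, 0, -1):
    value *= disc  # discount the path value back one step
    itm = payoff(s[:, t]) > 0
    if not itm.any():
        continue
    # Quadratic polynomial basis; richer bases give better approximations.
    coeffs = np.polyfit(s[itm, t], value[itm], deg=2)
    continuation = np.polyval(coeffs, s[itm, t])
    exercise = payoff(s[itm, t])
    # Exercise where the immediate payoff beats the estimated continuation.
    value[itm] = np.where(exercise > continuation, exercise, value[itm])

price = disc * value.mean()
print(f"Estimated value of the stopping problem: {price:.4f}")
```

The same backward-induction structure underlies the discrete-time approximations discussed in the first part of the course; the dual formulations mentioned there provide complementary upper bounds on the value obtained from such a primal regression scheme.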