Markovsche Entscheidungsprozesse / Markov Decision Processes (Summer Semester 2012)

There will be no exercise on Friday, the 13th of July. The exercise on Friday, the 20th of July will be held as normal and the corresponding exercise sheet can be downloaded from the "Studierendenportal".

Lecture: Wednesday 8:00-9:30 Z 1
Problem class: Friday 11:30-13:00 Z 1
Lecturer: Prof. Dr. Nicole Bäuerle
Office hours: by appointment.
Room 2.016 Kollegiengebäude Mathematik (20.30)
Email: nicole.baeuerle@kit.edu


Consider a system that can be controlled by sequential decisions. The state transitions are random, and we assume that the state process is Markovian, i.e. previous states have no influence on future states. Given the current state of the system (for example, the wealth of an investor), the controller or decision maker has to choose an admissible action (for example, a possible investment). Once an action is chosen, the system makes a random transition according to a stochastic law (for example, a change in the asset value), which leads to a new state. The theory of Markov Decision Processes deals with controlling the process in such a way that the expected discounted reward of the system is maximized.
We will consider problems with finite and infinite horizon, stopping problems, and problems with partial observation. Applications in portfolio optimization, queueing, and operations research are considered.
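The finite-horizon problem described above is typically solved by backward induction on the Bellman equation. The following is a minimal Python sketch of this idea on a toy two-state MDP; the states, actions, rewards, and transition probabilities are invented for illustration and do not come from the lecture.

```python
def value_iteration(states, actions, P, r, beta, N):
    """Backward induction for a finite-horizon discounted MDP.

    P[s][a] : dict mapping next state -> transition probability
    r[s][a] : one-stage reward for choosing action a in state s
    beta    : discount factor
    N       : horizon (number of decision epochs)
    Returns the value function at time 0 and a decision rule for each epoch.
    """
    V = {s: 0.0 for s in states}          # terminal value: V_N = 0
    policy = []
    for _ in range(N):                    # iterate backwards in time
        V_new, d = {}, {}
        for s in states:
            # Bellman equation: maximize reward plus discounted expectation
            best_a, best_q = None, float("-inf")
            for a in actions(s):
                q = r[s][a] + beta * sum(p * V[t] for t, p in P[s][a].items())
                if q > best_q:
                    best_a, best_q = a, q
            V_new[s], d[s] = best_q, best_a
        V = V_new
        policy.append(d)
    policy.reverse()                      # policy[n] is the rule at epoch n
    return V, policy

# Toy example (invented): in state "high" the action "stay" earns 2 per
# period; "switch" costs 1 and moves to the other state with prob. 0.9.
states = ["low", "high"]
rewards = {"low":  {"stay": 0.0, "switch": -1.0},
           "high": {"stay": 2.0, "switch": -1.0}}
P = {"low":  {"stay":   {"low": 1.0},
              "switch": {"high": 0.9, "low": 0.1}},
     "high": {"stay":   {"high": 1.0},
              "switch": {"low": 0.9, "high": 0.1}}}
V, policy = value_iteration(states, lambda s: ["stay", "switch"], P, rewards,
                            beta=0.9, N=5)
```

With five periods to go, the optimal rule pays the switching cost in state "low" in order to collect the reward in state "high" later, which is exactly the trade-off between immediate reward and discounted future value that the Bellman equation captures.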


Excellent knowledge of probability theory is required. Basic knowledge of Markov chains is helpful.


Bäuerle, N. and Rieder, U. (2011) Markov Decision Processes with applications to finance. Springer-Verlag.

Bertsekas, D. P. (2001) Dynamic programming and optimal control. Vol.II. Athena Scientific.

Bertsekas, D. P. (2005) Dynamic programming and optimal control. Vol.I. Athena Scientific.

Hernández-Lerma, O. and Lasserre, J. B. (1996) Discrete-time Markov control processes. Springer-Verlag.

Hernández-Lerma, O. and Lasserre, J. B. (1999) Further topics on discrete-time Markov control processes. Springer-Verlag.

Puterman, M. L. (1994) Markov decision processes: discrete stochastic dynamic programming. John Wiley & Sons.

Ross, S. (1983) Introduction to stochastic dynamic programming. Academic Press.

Schäl, M. (1990) Markoffsche Entscheidungsprozesse. B. G. Teubner.