AG Stochastik (winter semester 2015/16)
- Lecturers: Prof. Dr. Nicole Bäuerle, Prof. Dr. Vicky Fasen-Hartmann, Prof. i. R. Dr. Norbert Henze, Prof. Dr. Daniel Hug, Prof. Dr. Günter Last
- Course: Seminar (0127200)
- Weekly hours: 2
Schedule | ||
---|---|---|
Seminar | Tuesday, 15:45-17:15 | SR 2.59 |
Organizers | ||
---|---|---|
Seminar organizer | Prof. Dr. Nicole Bäuerle | Office hours: by appointment. Room 2.016, Kollegiengebäude Mathematik (20.30). Email: nicole.baeuerle@kit.edu |
Seminar organizer | Prof. Dr. Vicky Fasen-Hartmann | Office hours: by appointment. Room 2.053, Kollegiengebäude Mathematik (20.30). Email: vicky.fasen@kit.edu |
Seminar organizer | Prof. i. R. Dr. Norbert Henze | Office hours: by appointment. Room 2.020 (secretariat 2.002), Kollegiengebäude Mathematik (20.30). Email: henze@kit.edu |
Seminar organizer | Prof. Dr. Daniel Hug | Office hours: by appointment. Room 2.051, Kollegiengebäude Mathematik (20.30). Email: daniel.hug@kit.edu |
Seminar organizer | Prof. Dr. Günter Last | Office hours: by appointment. Room 2.001 (secretariat 2.056), Kollegiengebäude Mathematik (20.30). Email: guenter.last@kit.edu |
Talks
Unless explicitly stated otherwise, talks take place in SR 2.059 (Bldg. 20.30).
Tuesday, 09.02.2016
15:45 Dipl.-Math. Dirk Lange (Institut für Stochastik, KIT):
Optimal Control of Piecewise Deterministic Markov Processes under Partial Observation
Abstract: This work deals with the optimal control problem for Piecewise Deterministic Markov Processes (PDMP) under Partial Observation (PO). The total expected discounted cost over lifetime shall be minimized while neither the states of the PDMP nor the current or cumulated cost are observable. Only noisy measurements (with known noise distribution) of the post-jump states are observable. The cost function, however, depends on the trajectory of the unobservable PDMP as well as on the observable noisy measurements of the post-jump states.
Admissible control strategies are history-dependent relaxed piecewise open-loop strategies: for each point in time, and depending on the observable history up to this time, a probability distribution on the action space is selected. This probability distribution defines an expected control action on the jump rate, the drift and the transition kernel at jump times of the PDMP.
We first transform the initial continuous-time optimization problem under PO into an equivalent discrete-time optimization problem under PO. For the latter, we obtain a recursive formulation for the filter: the probability distribution of the unobservable post-jump state of the PDMP given the observable history. This leads to an equivalent fully observable optimization problem in discrete time. Classical approaches of stochastic dynamic programming, combined with results on the measurable selection of optimizers, are then applied to prove the existence of optimal control strategies. We derive sufficient conditions for the existence of optimal control strategies for lower semi-continuous cost functions and in the case of finite-dimensional filters, i.e., when the set of possible post-jump states of the PDMP is finite.
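The filter recursion at the heart of this discrete-time reformulation can be illustrated by a generic Bayes filter on a finite state space. The following sketch is illustrative only; the two-state model, transition matrix and observation likelihoods are invented, not taken from the talk:

```python
def filter_update(prior, transition, likelihood, obs):
    """One prediction + correction step of a discrete Bayes filter."""
    n = len(prior)
    # Prediction: propagate the prior through the transition kernel.
    predicted = [sum(prior[i] * transition[i][j] for i in range(n))
                 for j in range(n)]
    # Correction: reweight by the likelihood of the noisy observation.
    unnorm = [predicted[j] * likelihood[j][obs] for j in range(n)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

# Hypothetical 2-state example with a noisy binary measurement.
prior = [0.5, 0.5]
transition = [[0.9, 0.1], [0.2, 0.8]]
likelihood = [[0.8, 0.2],   # P(obs | state 0)
              [0.3, 0.7]]   # P(obs | state 1)
posterior = filter_update(prior, transition, likelihood, obs=0)
```

The posterior over the hidden post-jump state then serves as the state of the equivalent fully observable problem.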
Tuesday, 02.02.2016
15:45 Dipl.-Math. oec. Markus Scholz (Institut für Stochastik, KIT):
Estimation of Cointegrated MCARMA Processes
Abstract: In this talk we extend the cointegrated discrete-time model introduced by Engle and Granger to a continuous-time setting using cointegrated multivariate continuous-time autoregressive moving average (MCARMA) processes. The concept of cointegration describes the phenomenon that two or more non-stationary, integrated processes can have stationary linear combinations. Cointegration therefore models common stochastic trends of some or all of the variables.
A natural question in this framework is how to estimate the parameters of the cointegrated MCARMA model from discrete-time observations. Since the necessary uniform convergence results do not hold for the sum-of-squares function, we use a stepwise approach. We separate the parameter vector into two vectors, where the parameters in the first vector model the cointegration space and the parameters in the second vector model the stationary part. First, we show super-consistency for the estimator of the cointegration parameters. In the next step, given the cointegration parameters, we deduce consistency for the estimator of the stationary parameters. Finally, we derive the limiting distributions of the estimators.
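As a rough discrete-time caricature of the stepwise idea, here is an Engle-Granger-style two-step estimation on simulated data. This is not the MCARMA estimator from the talk; the data-generating process and all numbers are illustrative:

```python
import random

random.seed(1)
n = 500
x = [0.0]
for _ in range(n - 1):                      # random walk (integrated regressor)
    x.append(x[-1] + random.gauss(0, 1))
beta_true = 2.0
y = [beta_true * xi + random.gauss(0, 0.5) for xi in x]   # cointegrated with x

# Step 1: OLS of y on x estimates the cointegration parameter
# (super-consistent: the error shrinks at rate 1/n, not 1/sqrt(n)).
beta_hat = sum(a * b for a, b in zip(x, y)) / sum(a * a for a in x)

# Step 2: treat the residuals as the stationary part and fit an AR(1).
u = [yi - beta_hat * xi for xi, yi in zip(x, y)]
phi_hat = (sum(u[t] * u[t - 1] for t in range(1, n))
           / sum(u[t - 1] ** 2 for t in range(1, n)))
```

Here the residuals are i.i.d. by construction, so `phi_hat` should be close to zero, while `beta_hat` sits very near the true value 2.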
Tuesday, 26.01.2016
15:45 Prof. Dr. Wolfgang Stummer (Friedrich-Alexander-Universität Erlangen-Nürnberg):
Robust Statistics by means of Scaled Bregman Distances
Abstract: We show how scaled Bregman distances can be used for the goal-oriented design of new outlier- and inlier-robust statistical inference tools. These extend several known distance-based robustness (respectively, stability) methods at once. Numerous special cases are illustrated, including 3D computer-graphical comparison methods. For the discrete case, some universally applicable results on the asymptotics of the underlying scaled-Bregman-distance test statistics are derived as well.
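For orientation, a scaled Bregman distance between discrete distributions p and q with scaling m can be computed directly from the defining formula. The generator and scaling below are illustrative choices, not those used in the talk:

```python
def scaled_bregman(p, q, m, phi, dphi):
    """B_phi(p, q | m) = sum_x m_x * [phi(p_x/m_x) - phi(q_x/m_x)
                                      - phi'(q_x/m_x) * (p_x/m_x - q_x/m_x)]."""
    total = 0.0
    for px, qx, mx in zip(p, q, m):
        a, b = px / mx, qx / mx
        total += mx * (phi(a) - phi(b) - dphi(b) * (a - b))
    return total

def phi(t):      # generator; with phi(t) = t^2 the distance is sum (p-q)^2 / m
    return t * t

def dphi(t):     # its derivative
    return 2 * t

p = [0.2, 0.3, 0.5]
q = [0.25, 0.25, 0.5]
m = [0.5, 0.25, 0.25]
d = scaled_bregman(p, q, m, phi, dphi)
```

Other generators (e.g. t log t with m = q, which recovers the Kullback-Leibler divergence) fit the same template.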
Tuesday, 19.01.2016
15:45 Prof. Dr. Ralf Wunderlich (Brandenburgische Technische Universität Cottbus - Senftenberg):
Partially Observable Stochastic Optimal Control Problems for an Energy Storage
Abstract: We address the valuation of an energy storage facility in the presence of stochastic energy prices, as it arises in the case of a hydro-electric pump station. The valuation problem is related to the problem of determining the optimal charging/discharging strategy that maximizes the expected value of the resulting discounted cash flows over the lifetime of the storage. We use a regime switching model for the energy price which allows for a changing economic environment described by a non-observable Markov chain.
The valuation problem is formulated as a stochastic control problem under partial information in continuous time. Applying filtering theory, we find an alternative state process containing the filter of the Markov chain, which is adapted to the observable filtration. For this alternative control problem we derive the associated Hamilton-Jacobi-Bellman (HJB) equation, which is not strictly elliptic; we therefore study it using regularization arguments. We use numerical methods to compute approximations of the value function and the optimal strategy.
Finally, we present some numerical results.
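A discrete caricature of the underlying control problem is a small finite-horizon dynamic program for a storage whose price follows a two-regime Markov chain (fully observed here for simplicity, unlike in the talk; all numbers are invented):

```python
def storage_value(T, levels, prices, P, beta=0.95):
    """V[l][r]: value with storage level l in price regime r, T periods to go."""
    R = len(prices)
    V = [[0.0] * R for _ in levels]                 # terminal value = 0
    for _ in range(T):
        newV = [[0.0] * R for _ in levels]
        for l in levels:
            for r in range(R):
                best = float("-inf")
                for a in (-1, 0, 1):                # charge / hold / discharge
                    ln = l - a                      # discharging lowers the level
                    if ln < levels[0] or ln > levels[-1]:
                        continue                    # infeasible storage level
                    cash = a * prices[r]            # sell when discharging
                    cont = sum(P[r][s] * V[ln][s] for s in range(R))
                    best = max(best, cash + beta * cont)
                newV[l][r] = best
        V = newV
    return V

levels = [0, 1, 2]                                  # storage levels
prices = [10.0, 30.0]                               # low / high price regime
P = [[0.9, 0.1], [0.3, 0.7]]                        # regime transition matrix
V = storage_value(T=5, levels=levels, prices=prices, P=P)
```

In the partially observed setting of the talk, the discrete regime index would be replaced by the filter (a probability vector over regimes), which makes the state space continuous and leads to the HJB equation discussed above.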
Tuesday, 12.01.2016
15:45 Dr. Michal Warchol (Université catholique de Louvain):
Modelling serial extremal dependence with tail processes
Abstract: Classical time series techniques are ill-suited to modelling the serial dependence of extremes. The framework of regular variation makes it possible to model time series extremes for non-Gaussian data. Drees, Segers and Warchol (2015) introduced estimators of the spectral tail process as a flexible nonparametric tool for modelling the serial extremal dependence of Markov chains independently of the magnitude of the extreme values. This theory is extended to the more general setting of stationary and jointly regularly varying time series. The estimators are applied to stock price data to study the persistence of positive and negative shocks.
Reference: Drees, H., Segers, J. and Warchol, M. (2015). Statistics for tail processes of Markov chains. Extremes 18, 369-402.
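The basic nonparametric idea can be sketched as follows: record the rescaled lag-h values X_{t+h}/X_t at times where X_t exceeds a high threshold; their empirical law estimates the tail process at lag h. The toy example uses a Gaussian AR(1) series, which is not regularly varying, purely for illustration:

```python
import random

def tail_ratios(x, h, u):
    """Rescaled lag-h values recorded at exceedance times of threshold u."""
    return [x[t + h] / x[t] for t in range(len(x) - h) if x[t] > u]

random.seed(0)
phi = 0.7
x = [0.0]
for _ in range(20000):                       # AR(1): extremes persist
    x.append(phi * x[-1] + random.gauss(0, 1))
u = sorted(x)[int(0.99 * len(x))]            # 99% empirical quantile
ratios = tail_ratios(x, h=1, u=u)
mean_ratio = sum(ratios) / len(ratios)       # concentrates near phi here
```

The key feature is that the ratios are scale-free: they describe how an extreme propagates, not how large it is.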
Tuesday, 24.11.2015
15:45 Dr. Wayan Somayasa (Haluoleo University, Kendari, Indonesia):
Assessing the Appropriateness of Multivariate Spatial Regressions in High Dimensional Space Using Set-Indexed Partial Sums of the Least Squares Residuals
Abstract: In this talk we discuss an asymptotic method for checking the appropriateness of an assumed multivariate spatial regression with correlated responses by considering the partial sums (CUSUM) process of the least squares residuals indexed by a family of convex sets. In the first part of the talk we establish a functional central limit theorem for the sequence of partial sums processes by applying the multivariate analog of Prohorov's theorem. Next we propose test procedures based on the Kolmogorov-Smirnov (KS) and Cramér-von Mises (CvM) functionals of the process. In the second part we present a simulation study to demonstrate the finite-sample behavior of the KS and CvM tests in comparison with the classical likelihood-ratio test (LR test). In the last part of the talk we present an application of the proposed method to mining data.
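A one-dimensional caricature of the residual partial-sum approach (not the set-indexed multivariate construction from the talk): fit a constant mean by least squares, form the normalized CUSUM process of the residuals, and take its KS functional. Under a correct model the statistic stays small; a trend inflates it:

```python
import math
import random

def ks_cusum(y):
    """KS functional of the normalized CUSUM process of centred residuals."""
    n = len(y)
    ybar = sum(y) / n
    res = [v - ybar for v in y]                 # least-squares residuals
    sd = math.sqrt(sum(r * r for r in res) / n)
    s, peak = 0.0, 0.0
    for r in res:                               # running partial sums
        s += r
        peak = max(peak, abs(s))
    return peak / (sd * math.sqrt(n))

random.seed(0)
flat = [random.gauss(0, 1) for _ in range(400)]               # model correct
trend = [0.01 * t + random.gauss(0, 1) for t in range(400)]   # misspecified
```

Under the null the statistic behaves like the supremum of a Brownian bridge, which is what the functional central limit theorem in the talk generalizes to set-indexed processes.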
Tuesday, 17.11.2015
15:45 Prof. Yogeshwaran Dhandapani (Indian Statistical Institute, Bangalore):
Topology of dynamic random clique complexes
Abstract: We consider a simple dynamical version of the classical Erdős-Rényi random graphs. We look at the process of Betti numbers of the associated clique complexes and show that, under suitable normalization, the process converges to the Ornstein-Uhlenbeck process. In the first part of the talk we survey the recent literature on random complexes, and in the second part we focus on the dynamic model and its connections to zigzag persistence. The relevant notions from algebraic topology will be introduced, or at least sketched heuristically, in the talk. This is joint work with Gugan Thoppe and Robert J. Adler.
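The 0-th Betti number of a clique complex equals the number of connected components of the underlying graph, so the simplest instance of the objects above can be computed with union-find on an Erdos-Renyi graph (higher Betti numbers require simplicial homology and are beyond this sketch):

```python
import random

def betti0(n, edges):
    """Number of connected components (= 0-th Betti number) via union-find."""
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a
    comps = n
    for a, b in edges:
        ra, rb = find(a), find(b)
        if ra != rb:                        # merging two components
            parent[ra] = rb
            comps -= 1
    return comps

def erdos_renyi_edges(n, p, rng):
    """Each of the n*(n-1)/2 possible edges is present with probability p."""
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if rng.random() < p]

rng = random.Random(42)
sparse = betti0(50, erdos_renyi_edges(50, 0.01, rng))   # many components
dense = betti0(50, erdos_renyi_edges(50, 0.3, rng))     # essentially connected
```

The dynamic model in the talk resamples edges over time, turning such Betti numbers into a stochastic process.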
Tuesday, 27.10.2015
15:45 Sebastian Lerch (HITS/Institut für Stochastik, KIT):
Forecaster's Dilemma: Extreme Events and Forecast Evaluation
Abstract: In public discussions of the quality of forecasts, attention typically focuses on the predictive performance in cases of extreme events. However, the restriction of conventional forecast evaluation methods to subsets of extreme observations has unexpected and undesired effects and is bound to discredit even the most skillful forecast available. Conditioning on outcomes is incompatible with the theoretical assumptions of established forecast evaluation methods, thereby confronting forecasters with what we refer to as the forecaster's dilemma. For probabilistic forecasts, suitably weighted proper scoring rules provide decision-theoretically justifiable alternatives for forecast evaluation with an emphasis on extreme events. Using theoretical arguments, simulation experiments, and a real data study on probabilistic forecasts of U.S. inflation and gross domestic product (GDP) growth, we illustrate and discuss the forecaster's dilemma along with potential remedies.
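One example of a weighted proper scoring rule is a threshold-weighted CRPS for an ensemble forecast, which can be approximated by a Riemann sum; the weight function, grid and ensemble below are illustrative choices:

```python
def empirical_cdf(ensemble, z):
    """Empirical CDF of the ensemble members at point z."""
    return sum(1 for e in ensemble if e <= z) / len(ensemble)

def tw_crps(ensemble, y, w, zmin=-10.0, zmax=10.0, steps=2000):
    """Riemann-sum approximation of integral w(z) * (F(z) - 1{y <= z})^2 dz."""
    dz = (zmax - zmin) / steps
    total = 0.0
    for k in range(steps):
        z = zmin + (k + 0.5) * dz
        err = empirical_cdf(ensemble, z) - (1.0 if y <= z else 0.0)
        total += w(z) * err * err * dz
    return total

def unweighted(z):
    return 1.0                       # ordinary CRPS

def upper_tail(z):
    return 1.0 if z >= 1.0 else 0.0  # emphasize extreme (large) outcomes

ensemble = [-0.5, 0.0, 0.3, 0.8, 1.2]
score_all = tw_crps(ensemble, y=0.2, w=unweighted)
score_tail = tw_crps(ensemble, y=0.2, w=upper_tail)
```

Unlike restricting evaluation to extreme observations, the weight acts inside the score, so propriety is preserved and the forecaster's dilemma is avoided.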