
Students and guests are always welcome. Unless explicitly stated otherwise below, all talks take place in seminar room 2.59 in the mathematics building (building 20.30).

Dates
Seminar: Tuesday 15:45-17:15, seminar room 2.59
Lecturers
Seminar coordinator Prof. Dr. Nicole Bäuerle
Office hours: by appointment
Room 2.016, Kollegiengebäude Mathematik (20.30)
Email: nicole.baeuerle@kit.edu
Seminar coordinator Prof. Dr. Vicky Fasen-Hartmann
Office hours: by appointment
Room 2.053, Kollegiengebäude Mathematik (20.30)
Email: vicky.fasen@kit.edu
Seminar coordinator Prof. Dr. Tilmann Gneiting
Office hours: by appointment
Room 2.019, Kollegiengebäude Mathematik (20.30)
Email: tilmann.gneiting@kit.edu
Seminar coordinator Prof. i. R. Dr. Norbert Henze
Office hours: by appointment
Room 2.020, secretariat 2.002, Kollegiengebäude Mathematik (20.30)
Email: henze@kit.edu
Seminar coordinator Prof. Dr. Daniel Hug
Office hours: by appointment
Room 2.051, Kollegiengebäude Mathematik (20.30)
Email: daniel.hug@kit.edu
Seminar coordinator Prof. Dr. Günter Last
Office hours: by appointment
Room 2.001, secretariat 2.056, Kollegiengebäude Mathematik (20.30)
Email: guenter.last@kit.edu

Tuesday, 04.02.2020

15:45 M.Sc. Moritz Otto (Institut für Stochastik, KIT)



Tuesday, 21.01.2020

15:45 Prof. Dr. Nadja Klein (Humboldt-Universität zu Berlin)

Marginally calibrated deep distributional regressions

Abstract: Deep neural network (DNN) regression models are widely used in applications requiring state-of-the-art predictive accuracy. However, until recently there has been little work on accurate uncertainty quantification for predictions from such models. We add to this literature by outlining an approach to constructing predictive distributions that are 'marginally calibrated': the long-run average of the predictive distributions of the response variable matches the observed empirical margin. Our approach considers a DNN regression with a conditionally Gaussian prior for the final-layer weights, from which an implicit copula process on the feature space is extracted. This copula process is combined with a non-parametrically estimated marginal distribution for the response. The end result is a scalable distributional DNN regression method with marginally calibrated predictions, and our work complements existing methods for probability calibration. The approach is first illustrated using two applications of dense-layer feed-forward neural networks. However, our main motivating applications are in likelihood-free inference, where distributional deep regression is used to estimate marginal posterior distributions. In two complex ecological time series examples we employ the implicit copulas of convolutional networks and show that marginal calibration results in improved uncertainty quantification. Our approach also avoids the need for manual specification of summary statistics, a requirement that is burdensome for users and typical of competing likelihood-free inference methods.
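
To make the mechanism above concrete, here is a minimal sketch of how an implicit Gaussian copula on a latent scale can be combined with an empirical marginal. The function and argument names are illustrative assumptions, not the authors' implementation, and the standardization of the latent scale is simplified.

```python
import numpy as np
from scipy.stats import norm

def predictive_samples(latent_mean, latent_sd, y_train, n_draws=1000, rng=None):
    """Sketch of a marginally calibrated predictive distribution.

    latent_mean, latent_sd: conditional mean and sd of the latent Gaussian value
        z(x) implied by the DNN features and the Gaussian final-layer weights,
        assumed rescaled so that z is standard normal unconditionally.
    y_train: observed responses defining the nonparametric (empirical) marginal.
    """
    rng = np.random.default_rng() if rng is None else rng
    z = rng.normal(latent_mean, latent_sd, size=n_draws)  # Gaussian copula scale
    u = norm.cdf(z)                                       # uniform scale
    return np.quantile(y_train, u)                        # back to the response scale

# Toy usage: a skewed empirical marginal combined with a feature-dependent latent location.
rng = np.random.default_rng(0)
y_train = rng.gamma(shape=2.0, scale=1.0, size=5000)
draws = predictive_samples(latent_mean=0.8, latent_sd=0.6, y_train=y_train, rng=rng)
print(draws.mean(), np.quantile(draws, [0.05, 0.95]))
```

Averaging such predictive distributions over the feature distribution reproduces the empirical margin of the response, which is the calibration property referred to in the abstract.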



Tuesday, 14.01.2020

15:45 M.Sc. Judith Schilling (Institut für Stochastik, KIT)

Investigations of asymptotics and the expected value in the generalized coupon collector problem



Tuesday, 07.01.2020

15:45 Prof. Dr. Alfred Müller (Universität Siegen)

Dependence uncertainty bounds for the energy score

Abstract: There has been increasing interest in recent years in methods for assessing the quality of probabilistic forecasts by so-called scoring rules. For forecasting general multivariate distributions, however, only very few scoring rules have been considered in the literature. In their fundamental paper, Gneiting and Raftery (2007) considered the so-called energy score as an example of a scoring rule that is strictly proper for arbitrary multivariate distributions. Pinson and Tastu (2013) started a debate on the discrimination ability of this scoring rule with respect to the dependence structure.
In this paper we contribute to this discussion by deriving dependence uncertainty bounds for the energy score and the related multivariate Gini mean difference. That is, we derive bounds for the score under the assumption that we only know the marginals of the distributions but know nothing about the dependence structure, i.e., the copula. Using methods from stochastic orderings, we derive analytical bounds that are sharp in some cases. In other cases we derive interesting numerical bounds by using a variant of a swapping algorithm. It turns out that some of these bounds are attained for non-standard copulas that are of interest in their own right.
The talk is based on joint work with Carole Bernard (Grenoble) and Marco Oesting (Siegen).
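
For orientation, the energy score of a forecast F for an observation y is ES(F, y) = E‖X − y‖ − ½ E‖X − X′‖, with X, X′ independent draws from F (Gneiting and Raftery, 2007). Below is a minimal Monte Carlo estimator from ensemble draws, purely as an illustration of the quantity being bounded, not code from the talk.

```python
import numpy as np

def energy_score(forecast_draws, obs):
    """Monte Carlo estimate of the (negatively oriented) energy score.

    forecast_draws: (m, d) array of independent draws from the forecast F.
    obs: observed d-dimensional vector y.  Lower scores are better.
    """
    x = np.asarray(forecast_draws, dtype=float)
    y = np.asarray(obs, dtype=float)
    term1 = np.mean(np.linalg.norm(x - y, axis=1))
    term2 = np.mean(np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1))
    return term1 - 0.5 * term2

# Two forecasts with identical standard normal margins but different dependence.
rng = np.random.default_rng(0)
obs = np.array([0.3, -0.2])
independent = rng.standard_normal((1000, 2))
comonotone = np.repeat(rng.standard_normal((1000, 1)), 2, axis=1)
print(energy_score(independent, obs), energy_score(comonotone, obs))
```

Because only the dependence structure differs between the two ensembles, the range of such scores over all possible copulas is exactly what the dependence uncertainty bounds in the talk quantify.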



Tuesday, 17.12.2019

15:45 M.Sc. Gregor Leimcke (Institut für Stochastik, KIT)

Bayesian optimal investment and reinsurance for maximizing the exponential terminal utility of the reserve process

Abstract: We consider the reserve process of an insurance company with several lines of business. The claim arrival times of the lines are described by a multivariate point process with dependencies between the marginal processes, where the dependence modelling reduces to the choice of thinning probabilities. The insurer's objective is to maximize the exponential utility of the terminal value of the reserve process through investment and reinsurance decisions. The resulting stochastic control problem is studied under the assumption of unknown claim arrival intensities. This uncertainty is addressed by a Bayesian approach, which leads to a reduced stochastic control problem for which we characterize the value function and the optimal strategy by means of the generalized Hamilton-Jacobi-Bellman equation. Furthermore, we analyze the influence of the unknown claim arrival intensities on the optimal reinsurance strategy via a comparison result, which is illustrated with a numerical example.
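
In generic notation (a sketch of the standard criterion, not necessarily the speaker's exact formulation), the objective behind such problems is expected exponential utility of the terminal reserve: with controlled reserve process $X^{u}$ and strategy $u$ collecting the investment and reinsurance decisions,

```latex
V(t,x) \;=\; \sup_{u}\, \mathbb{E}\!\left[\, -\exp\!\bigl(-\alpha X^{u}_{T}\bigr) \,\middle|\, X^{u}_{t}=x \right], \qquad \alpha > 0,
```

and after filtering out the unknown claim arrival intensities with a Bayesian prior, the reduced problem is characterized through a generalized Hamilton-Jacobi-Bellman equation, as described in the abstract.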


Tuesday, 10.12.2019

15:45 M.Sc. Celeste Mayer (Institut für Stochastik, KIT)

Whittle estimation for Lévy-driven multivariate CARMA processes

Abstract: In many technical, physical, and financial applications one is confronted with data that are, in theory, generated by continuous-time processes but, for various reasons, can only be observed at discrete time points. In this talk we consider the Whittle estimator for Lévy-driven multivariate CARMA processes sampled on an equidistant grid. Our focus is on processes whose second moments exist. We discuss a suitable model specification and, under the resulting assumptions, investigate the asymptotic properties of the estimator. We then present an adjusted estimator, which, however, applies only in the univariate case. In a simulation study we compare both estimators with the widely used quasi-maximum-likelihood estimator.
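
For reference, the Whittle estimator replaces the Gaussian likelihood by a frequency-domain contrast. In a generic multivariate form (a sketch under standard notation, not necessarily the exact objective of the talk), with periodogram $I_n(\omega_j)$ of the equidistantly sampled process and model spectral density $f_{\theta}$,

```latex
\hat{\theta}_n \;=\; \operatorname*{arg\,min}_{\theta}\; \frac{1}{n}\sum_{j=1}^{n}
\Bigl( \log\det f_{\theta}(\omega_j)
      + \operatorname{tr}\bigl( f_{\theta}(\omega_j)^{-1} I_n(\omega_j) \bigr) \Bigr),
\qquad \omega_j = \tfrac{2\pi j}{n}.
```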


Tuesday, 03.12.2019

15:45 Dr. Jose Ameijeiras-Alonso (Katholieke Universiteit Leuven)

A brief history of nonparametric multimodal tests

Abstract: The identification of peaks or maxima in probability densities, by mode testing or bump hunting, has become an important problem in applied fields. For real (univariate) random variables, this task has been approached in the statistical literature from different perspectives. The objective of this talk is to present different exploratory and testing nonparametric approaches for determining the number of modes (and their estimated locations). The main focus is on the testing perspective, where different methods, based on kernel density estimators or the quantification of excess mass, will be reviewed. Since none of the existing proposals for determining the general number of modes provides satisfactory performance in practice, a new method showing superior behavior (with good calibration and power results) will be presented. Finally, the extension of these techniques to the multivariate setting will be discussed.
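
One classical kernel-based approach in this family is Silverman's critical-bandwidth test: the smallest bandwidth at which a Gaussian kernel density estimate has at most k modes serves as the test statistic and is calibrated by bootstrap. Below is a minimal sketch of the mode-counting and critical-bandwidth computation; it is illustrative only, and the methods discussed in the talk are more refined.

```python
import numpy as np
from scipy.stats import gaussian_kde

def count_modes(sample, bandwidth, grid_size=512):
    """Number of local maxima of a Gaussian KDE with the given bandwidth."""
    kde = gaussian_kde(sample, bw_method=bandwidth / np.std(sample, ddof=1))
    grid = np.linspace(sample.min() - 3 * bandwidth, sample.max() + 3 * bandwidth, grid_size)
    dens = kde(grid)
    return int(np.sum((dens[1:-1] > dens[:-2]) & (dens[1:-1] > dens[2:])))

def critical_bandwidth(sample, k=1, lo=1e-3, hi=None, tol=1e-4):
    """Smallest bandwidth at which the KDE has at most k modes (bisection search;
    valid because the number of modes is non-increasing in the bandwidth for the
    Gaussian kernel)."""
    hi = 2.0 * np.std(sample, ddof=1) if hi is None else hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if count_modes(sample, mid) <= k:
            hi = mid
        else:
            lo = mid
    return hi

# A clearly bimodal sample: a large critical bandwidth for k=1 hints at more than one mode.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2, 1, 200), rng.normal(2, 1, 200)])
print(critical_bandwidth(x, k=1))
```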


Tuesday, 19.11.2019

15:45 Johannes Bracher

Forecasting infectious disease incidence based on routine surveillance data

Abstract: Public health institutions like the Robert Koch Institut or the US Centers for Disease Control and Prevention routinely monitor a broad range of infectious diseases. In recent years, there has been growing interest in forecasting future incidence based on such data. I will give an overview of infectious disease surveillance, modelling and forecasting, emphasizing the particular challenges and some recent developments. Subsequently I will zoom in on the endemic-epidemic framework, a model class developed specifically for multivariate time series of disease surveillance counts. In two case studies I will show how probabilistic forecasts for different prediction horizons are obtained and evaluated. In the last part of the talk I will share some thoughts on a particular metric for forecast evaluation, the multibin logarithmic score. Following its use in a series of forecasting competitions, this score has become widely used in infectious disease epidemiology. However, as will be shown, it favours too sharp predictive distributions and thus creates incentives for dishonest forecasting. Most presented topics are joint work with Leonhard Held.
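
The multibin logarithmic score mentioned at the end scores a binned forecast by the log of the total probability within a tolerance window around the observed bin. The small sketch below (window size and bins are illustrative) shows why it rewards reporting a sharper distribution than one's honest belief.

```python
import numpy as np

def multibin_log_score(bin_probs, obs_bin, tolerance=1):
    """Log of the predictive mass within +/- `tolerance` bins of the observed bin."""
    p = np.asarray(bin_probs, dtype=float)
    lo, hi = max(obs_bin - tolerance, 0), min(obs_bin + tolerance + 1, len(p))
    return np.log(p[lo:hi].sum())

def expected_score(reported, belief, tolerance=1):
    """Expected multibin log score of `reported` when outcomes follow `belief`."""
    return sum(b * multibin_log_score(reported, j, tolerance)
               for j, b in enumerate(belief))

belief  = np.full(5, 0.2)                      # forecaster's honest predictive distribution
sharper = np.array([0.0, 1/3, 1/3, 1/3, 0.0])  # artificially concentrated report
print(expected_score(belief, belief))          # about -0.67
print(expected_score(sharper, belief))         # about -0.60: the sharper report scores better
```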


Tuesday, 12.11.2019

15:45 Prof. Dr. Donald Richards (Pennsylvania State University)

Integral Transform Methods in Goodness-of-Fit Testing for the Wishart Distributions

Abstract: In recent years, random data consisting of positive definite (symmetric or Hermitian) matrices have appeared in several areas of applied research, e.g., diffusion tensor imaging, wireless communication systems, synthetic aperture radar, and volatility models in finance. Given a random sample of such matrices, we wish to test whether the data are drawn from a given distribution. In this talk, we apply the Hankel transform of matrix argument to develop goodness-of-fit tests for the Wishart distributions. The asymptotic distribution of the test statistic is derived in terms of the integrated square of a Gaussian random field, and an explicit formula is obtained for the corresponding covariance operator. The eigenfunctions of the covariance operator are determined explicitly, and the eigenvalues are shown to satisfy certain interlacing properties. Throughout this work, the Bessel functions of matrix argument of Herz (1955) and the zonal polynomials of James (1964) play a crucial role, and the results obtained raise the issue of developing goodness-of-fit tests for matrix data analogous to the Laplace and Mellin transform-based tests developed by Henze and his co-authors.
(This talk is based on joint work with Elena Hadjicosta.)
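
Schematically, transform-based goodness-of-fit tests of this kind compare an empirical integral transform of the sample with its counterpart under the hypothesized model in a weighted L2 distance. In generic notation (a sketch of the general construction, not the exact statistic of the talk), with empirical transform $\mathcal{H}_n$ and its Wishart counterpart $\mathcal{H}_0$,

```latex
T_n \;=\; n \int \bigl| \mathcal{H}_n(t) - \mathcal{H}_0(t) \bigr|^{2}\, w(t)\, \mathrm{d}t ,
```

whose null limit is the integrated square of a centred Gaussian random field, which is where the covariance operator and its eigenvalues from the abstract enter.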


Tuesday, 22.10.2019

15:45 Prof. Dr. Nicholas G. Reich (University of Massachusetts Amherst)

Statistical considerations for probabilistic ensemble forecasts of infectious disease outbreaks

Abstract: Seasonal influenza outbreaks cause substantial annual morbidity and mortality worldwide. Accurate forecasts of key features of influenza epidemics, such as the timing and severity of the peak incidence in a given season, can inform public health response to outbreaks. Our team has built a collaborative multi-model probabilistic ensemble forecast of influenza outbreaks in the US, which has been deployed in real time since 2017. This ensemble model has been optimized to achieve a high log score, a measure of probabilistic forecast accuracy. In this talk, I will describe the gradual evolution of the model-averaging methods used to build this probabilistic ensemble. We improved upon a simple equal-weighted average of models by estimating model- and target-specific weights using a simple Expectation-Maximization algorithm. In further iterations, we have explored having weights adapt in real time as a function of recent, in-season performance. Weights for this adaptive approach are estimated using a variational inference algorithm of which the EM algorithm is a special case. Finally, we are currently exploring methods for transforming individual component forecast models in an attempt to provide better overall calibration and probabilistic accuracy. Our models have consistently achieved near-top rankings in forecasting challenges run by the US Centers for Disease Control and Prevention (CDC) and are used by the CDC to improve situational awareness of governmental health officials and the general public during the influenza season.
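
As a hedged sketch of the weight-estimation step described above: for a linear pool of component forecast densities, the EM update for the weights is the average of the models' responsibilities for the realized outcomes. Variable names here are illustrative, not the team's code.

```python
import numpy as np

def em_ensemble_weights(component_densities, n_iter=500, tol=1e-10):
    """EM estimation of mixture weights for a linear pool of forecasts.

    component_densities: (n_obs, n_models) array; entry (i, k) is the predictive
        density (or probability) that model k assigned to the realized outcome i.
    Returns weights maximizing the average log score of the weighted mixture.
    """
    dens = np.asarray(component_densities, dtype=float)
    w = np.full(dens.shape[1], 1.0 / dens.shape[1])
    for _ in range(n_iter):
        weighted = dens * w                                    # E-step:
        resp = weighted / weighted.sum(axis=1, keepdims=True)  # responsibilities
        w_new = resp.mean(axis=0)                              # M-step: average responsibility
        if np.max(np.abs(w_new - w)) < tol:
            return w_new
        w = w_new
    return w

# Toy example: model 0 tends to assign higher density to what actually happened.
dens = np.array([[0.40, 0.10], [0.30, 0.20], [0.50, 0.05], [0.25, 0.25]])
print(em_ensemble_weights(dens))   # most weight should go to model 0
```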