
Students and guests are always warmly welcome. Unless explicitly stated otherwise below, all talks take place in person in room 2.059. To be added to the e-mail list for the invitations, please contact Tatjana Dominic (tatjana.dominic@kit.edu).

Dates
Seminar: Tuesdays, 15:45-17:15, Building 20.30, SR 2.059
Lecturers
Seminar organiser: Prof. Dr. Nicole Bäuerle
Office hours: by appointment.
Room 2.016, Kollegiengebäude Mathematik (20.30)
Email: nicole.baeuerle@kit.edu
Seminar organiser: Prof. Dr. Vicky Fasen-Hartmann
Office hours: by appointment.
Room 2.053, Kollegiengebäude Mathematik (20.30)
Email: vicky.fasen@kit.edu
Seminar organiser: Prof. Dr. Tilmann Gneiting
Office hours: by appointment.
Room 2.019, Kollegiengebäude Mathematik (20.30)
Email: tilmann.gneiting@kit.edu
Seminar organiser: Prof. Dr. Daniel Hug
Office hours: by appointment.
Room 2.051, Kollegiengebäude Mathematik (20.30)
Email: daniel.hug@kit.edu
Seminar organiser: Prof. Dr. Günter Last
Office hours: by appointment.
Room 2.001 (secretariat: Room 2.056), Kollegiengebäude Mathematik (20.30)
Email: guenter.last@kit.edu
Seminar organiser: Prof. Dr. Mathias Trabs
Office hours: by appointment.
Room 2.020, Kollegiengebäude Mathematik (20.30)
Email: trabs@kit.edu

Tuesday, 13.02.2024, 15:45, Location: SR 1.059
Prof. Dr. Sophie Langer (University of Twente)
The Role of Statistical Theory in Understanding Deep Learning
Abstract: In recent years, there has been a surge of interest across different research areas in improving the theoretical understanding of deep learning. A very promising approach is the statistical one, which interprets deep learning as a nonlinear or nonparametric generalization of existing statistical models. For instance, a simple fully connected neural network is equivalent to a recursive generalized linear model with a hierarchical structure. Given this connection, many papers in recent years have derived convergence rates for neural networks in a nonparametric regression or classification setting. Nevertheless, phenomena like overparameterization seem to contradict the statistical principle of the bias-variance trade-off. Therefore, deep learning cannot be explained by existing techniques of mathematical statistics alone but also requires a radical rethinking. In this talk we will explore both the importance of statistics for the understanding of deep learning and its limitations, i.e., the necessity to connect with other research areas.
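To make the statistical view from the abstract concrete: a fully connected network of depth L with activation function σ can be written as a composition of layers, each of which has the form of a (vector-valued) generalized linear model applied to the output of the previous layer. The notation below is generic and meant only as an illustrative sketch, not taken from the talk.
\[
  f(x) \;=\; W_L\, h_{L-1}(x) + b_L,
  \qquad
  h_\ell(x) \;=\; \sigma\bigl(W_\ell\, h_{\ell-1}(x) + b_\ell\bigr),
  \qquad h_0(x) = x,
\]
with weight matrices W_\ell and bias vectors b_\ell; reading σ as an inverse link function makes each layer a generalized linear model in the features produced by the layer before it, which is the recursive structure referred to in the abstract.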



Tuesday, 16.01.2024, 15:45, Location: SR 1.059
Maximilian Steffen (Institut für Stochastik, KIT)
Multivariate estimation in nonparametric models: Stochastic neural networks and Lévy processes
Abstract: Nowadays, statistical problems characterized by large sample sizes and large parameter spaces are ubiquitous. Moreover, many training methods, while strong in practice, cannot statistically guarantee their performance in terms of risk bounds. As a consequence, the design of cutting-edge methods is characterized by a tension between numerically feasible and efficient algorithms and approaches which also satisfy theoretically justified statistical properties. In this talk, we consider two fairly disjoint problems showcasing the wide spectrum of fields where nonparametric statistics can provide answers to the challenges presented by modern applications while also admitting statistical guarantees.
First, we approach a classical nonparametric regression problem with a stochastic neural network whose weights are drawn from the Gibbs posterior. To save computational costs when sampling from the Gibbs posterior, a naive stochastic Metropolis-Hastings approach can be used but leads to less accurate estimates. However, we demonstrate that this drawback can be avoided with a simple correction term. We prove PAC-Bayes oracle inequalities for the invariant distribution of the resulting algorithm. Further, we investigate the size and coverage of credible sets constructed from this invariant distribution. We validate the theoretical merits of our method with a simulation study.
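For orientation, the Gibbs posterior referred to above is typically of the following form; the notation (empirical risk R_n, inverse temperature λ, prior π) is generic and meant as a hedged sketch of the setting rather than the talk's exact construction.
\[
  \pi_\lambda(\mathrm{d}\theta \mid \mathcal{D}_n) \;\propto\; \exp\bigl(-\lambda\, n\, R_n(\theta)\bigr)\,\pi(\mathrm{d}\theta),
  \qquad
  R_n(\theta) \;=\; \frac{1}{n}\sum_{i=1}^{n} \bigl(Y_i - f_\theta(X_i)\bigr)^2,
\]
where f_θ is the neural network with weight vector θ and D_n = {(X_i, Y_i)} is the training sample. Sampling from π_λ, for instance by a Metropolis-Hastings scheme, then plays the role of training the stochastic network.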
Second, we estimate the jump density of a multivariate Lévy process based on time-discrete observations using the spectral approach. We present uniform risk bounds for our estimator over fully nonparametric classes of Lévy processes under mild assumptions and illustrate the results with a simulation example.
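As background on the spectral approach mentioned above (a hedged sketch in generic notation, not necessarily the estimator from the talk): the characteristic function of an increment at sampling distance Δ determines the characteristic exponent via the Lévy-Khintchine formula,
\[
  \mathbb{E}\, e^{i\langle u, X_\Delta\rangle} = e^{\Delta\,\psi(u)},
  \qquad
  \psi(u) = i\langle b, u\rangle - \tfrac12 \langle u, \Sigma u\rangle
  + \int_{\mathbb{R}^d} \bigl(e^{i\langle u, x\rangle} - 1 - i\langle u, x\rangle \mathbf{1}_{\{|x|\le 1\}}\bigr)\, \nu(\mathrm{d}x),
\]
so that the empirical characteristic function of the observed increments yields an estimate of ψ, from which the jump density (the density of ν) can be recovered by Fourier inversion combined with a spectral cut-off.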
Parts of this talk are based on joint work with Sebastian Bieringer, Gregor Kasieczka and Mathias Trabs.



Tuesday, 09.01.2024, 15:45, Location: SR 1.059
Lea Kunkel (Institut für Stochastik, KIT)
A Wasserstein perspective of Vanilla GANs
Abstract: The empirical success of Generative Adversarial Networks (GANs) has caused increasing interest in theoretical research. The statistical literature is mainly focused on Wasserstein GANs and generalizations thereof, which especially allow for good dimension reduction properties. Statistical results for Vanilla GANs, the original optimization problem, are still rather limited and require assumptions such as smooth activation functions and equal dimensions of the latent space and the ambient space. To bridge this gap, we draw a connection from Vanilla GANs to the Wasserstein distance. By doing so, existing results for Wasserstein GANs can be extended to Vanilla GANs. In particular, we obtain an oracle inequality for Vanilla GANs in Wasserstein distance. The assumptions of this oracle inequality are designed to be satisfied by network architectures commonly used in practice, such as feedforward ReLU networks. Using Hölder-continuous ReLU networks, we conclude a rate of convergence for estimating an unknown probability distribution.
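For context, the Vanilla GAN objective and the Wasserstein-1 distance that the talk connects it to read, in generic textbook notation (not specific to the talk), as
\[
  \min_{G}\,\max_{D}\;
  \mathbb{E}_{X\sim P}\bigl[\log D(X)\bigr]
  + \mathbb{E}_{Z\sim P_Z}\bigl[\log\bigl(1 - D(G(Z))\bigr)\bigr],
  \qquad
  W_1(P, Q) \;=\; \sup_{\|f\|_{\mathrm{Lip}} \le 1}\; \mathbb{E}_P[f] - \mathbb{E}_Q[f],
\]
where P is the data distribution, P_Z the latent distribution, and G and D range over generator and discriminator networks; the oracle inequality mentioned in the abstract is stated in terms of W_1.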



Tuesday, 12.12.2023, 15:45, Location: SR 1.059
Prof. Dr. Marko Obradović (University of Belgrade)
Some Equidistribution-type Characterizations of the Exponential Distribution Based on Order Statistics
Abstract: The exponential distribution has the largest number of characterization theorems of any distribution, thanks to both its applicability and its simplicity. Several characterization theorems will be presented, all of which are based on equality in distribution of random variables involving order statistics from an iid sample. The method of proof is based on Maclaurin series expansions and some identities involving Stirling numbers of the second kind. Some applications of these characterizations in goodness-of-fit testing will also be mentioned.
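One classical equidistribution-type characterization of this flavour (a standard textbook example given here only for illustration; the talk's characterizations are more involved): for an iid sample X_1, ..., X_n from a non-degenerate distribution on (0, ∞),
\[
  n\, X_{(1)} \;\overset{d}{=}\; X_1 \quad \text{for all } n \ge 2
  \qquad \Longleftrightarrow \qquad
  X_1 \sim \operatorname{Exp}(\lambda) \text{ for some } \lambda > 0,
\]
where X_{(1)} = \min(X_1, \dots, X_n); the "if" part follows since the minimum of n independent Exp(λ) variables is Exp(nλ).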



Tuesday, 28.11.2023, 15:45, Location: SR 1.059
Prof. Dr. Siegfried Hörmann (TU Graz)
Measuring dependence between a scalar response and a functional covariate
Abstract: We extend the scope of a recently introduced dependence coefficient between scalar responses and multivariate covariates to the case of functional covariates. While formally the extension is straightforward, the limiting behaviour of the sample version of the coefficient is delicate. It crucially depends on the nearest-neighbour structure of the covariate sample. Essentially, one needs an upper bound for the maximal number of points which share the same nearest neighbour. While a deterministic bound exists for multivariate data, this is no longer the case in infinite-dimensional spaces. To our surprise, very little seems to be known about properties of the nearest-neighbour graph in a high-dimensional or even functional random sample, and hence we suggest a way to overcome this problem. An important application of our theoretical results is a test for independence between scalar responses and functional covariates.
The talk is based on joint work with Daniel Strenger.
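To illustrate the quantity driving the limiting behaviour described above, the following small Python sketch (our own illustration; the function name and setup are not from the talk) draws a sample, determines each point's nearest neighbour, and reports the maximal number of sample points that share the same nearest neighbour.

import numpy as np
from scipy.spatial import cKDTree

def max_shared_nearest_neighbour(sample: np.ndarray) -> int:
    """Maximal number of points having the same nearest neighbour (illustrative)."""
    tree = cKDTree(sample)
    # k=2: the first hit is the point itself (distance 0), the second is its nearest neighbour
    _, idx = tree.query(sample, k=2)
    nearest = idx[:, 1]
    counts = np.bincount(nearest, minlength=len(sample))
    return int(counts.max())

rng = np.random.default_rng(0)
for d in (2, 10, 100):                      # increasing dimension of the covariate space
    x = rng.standard_normal((1000, d))
    print(d, max_shared_nearest_neighbour(x))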



Miniworkshop on Percolation and related areas

Monday, 06.11.2023, 10:00
Mathew Penrose (University of Bath)
Random Euclidean coverage and connectivity problems

Monday, 06.11.2023, 11:00
Hermann Thorisson (University of Iceland)
Shift-Coupling and Maximality

Tuesday, 07.11.2023, 15:45
Takis Konstantopoulos (University of Liverpool)
Barak-Erdős graphs and last passage percolation
Abstract: I will present new and older work on directed random graphs, focusing mostly on the behavior of the longest path. Its growth rate is a function of the parameters of the graph (connectivity probability, weights on edges). This function (which can be called the last passage percolation constant) has interesting properties. For example, if we assign weight 1 to every existing edge and weight x to every non-existing one, then, as a function of x, we obtain a convex function whose derivative fails to exist when x is a nonpositive rational number or x = 2, 3, ... or x = 1/2, 1/3, ... Depending on the order structure of the set of vertices, we can obtain functional central limit theorems that range from Brownian motions to Brownian percolation processes whose distribution is related to the largest eigenvalue of a GUE random matrix. There are also relations with branching processes and the PWIT (Poisson weighted infinite tree), which appear in a sparse regime.
The talk will be based on work that has been done with various collaborators over the years: D Denisov, S Foss, B Mallein, A Pyatkin, S Ramassamy.
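As a small illustration of the model in the abstract (our own sketch, not code from the talk): a Barak-Erdős graph on vertices 1, ..., n has an edge from i to j for every i < j independently with probability p, and the length of the longest path can be computed by dynamic programming along the vertex order; the ratio L_n/n then approximates the last passage percolation constant for that p.

import numpy as np

def longest_path_barak_erdos(n: int, p: float, rng) -> int:
    """Length (number of edges) of the longest path in a Barak-Erdős graph on n vertices."""
    longest = np.zeros(n, dtype=int)        # longest[j] = longest path ending at vertex j
    for j in range(1, n):
        # edges i -> j exist independently with probability p for all i < j
        preds = np.nonzero(rng.random(j) < p)[0]
        if preds.size:
            longest[j] = longest[preds].max() + 1
    return int(longest.max())

rng = np.random.default_rng(1)
n = 2000
for p in (0.1, 0.5, 0.9):
    print(p, longest_path_barak_erdos(n, p, rng) / n)   # rough estimate of the constant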



Friday, 03.11.2023, 14:00 - 18:30
Festive colloquium on the occasion of the 65th birthday of Prof. Dr. Günter Last
Programme of the colloquium



Tuesday, 31.10.2023, 15:45, Location: SR 1.059
Tamara Göll (Institut für Stochastik, KIT)
Expected terminal utility maximization for competitive investors
Abstract: Classical portfolio optimization problems consider a single investor who, starting from a fixed initial capital, wants to optimize her wealth at a fixed terminal time. A classical optimization criterion is expected utility. Motivated by the competitive behaviour of hedge fund managers, competitive portfolio optimization problems are increasingly being studied as well. Here, a finite number of players is considered who pursue the goal of maximizing expected terminal utility but, in addition, take into account their performance relative to their competitors.
In the talk we first motivate the problems described above and then explain one way of incorporating the competitive component into classical optimization problems. Using the additive approach to construct the competitive utility function, we can represent the unique Nash equilibrium explicitly in terms of solutions of classical portfolio optimization problems. Neither the financial market nor the utility function needs to be specified in detail for this. The presented method is not restricted to problems of expected terminal utility maximization. We discuss possible generalizations of the method and apply it to several examples, which in particular allows us to discuss properties of the Nash equilibrium. At the end of the talk we give an outlook on further competitive portfolio optimization problems that we consider.
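A common way of writing the additive approach mentioned above (a hedged sketch in generic notation; the exact weighting used in the talk may differ): investor i evaluates her own terminal wealth reduced by a weighted average of the competitors' terminal wealths,
\[
  \sup_{\pi^i}\; \mathbb{E}\Bigl[ U_i\Bigl( X_T^{\pi^i} \;-\; \frac{\theta_i}{n-1} \sum_{j \ne i} X_T^{\pi^j} \Bigr) \Bigr],
  \qquad \theta_i \in [0, 1],
\]
where X_T^{\pi^i} is the terminal wealth under strategy \pi^i and θ_i measures how strongly investor i weights the competition; a Nash equilibrium is a profile of strategies from which no investor can improve her objective by deviating unilaterally.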



Tuesday, 24.10.2023, 15:45, Location: SR 1.059
Eva-Maria Walz (Institut für Stochastik, KIT)
Easy Uncertainty Quantification (EasyUQ)
Abstract: How can we quantify uncertainty if our favorite computational tool - be it a numerical, a statistical, or a machine learning approach, or just any computer model - provides single-valued output only? In this talk, we introduce Easy Uncertainty Quantification (EasyUQ) technique, which transforms real-valued model output into calibrated statistical distributions, based solely on training data of model output–outcome pairs, without any need to access model input. In its basic form, EasyUQ is a special case of the recently introduced Isotonic Distributional Regression (IDR) technique that leverages the pool-adjacent-violators algorithm for nonparametric isotonic regression. EasyUQ yields discrete predictive distributions that are calibrated and optimal in finite samples, subject to stochastic monotonicity. The workflow is fully automated, without any need for tuning. The Smooth EasyUQ approach supplements IDR with kernel smoothing, to yield continuous predictive distributions that preserve key properties of the basic form, including both, stochastic monotonicity with respect to the original model output, and asymptotic consistency. For the selection of kernel parameters, we introduce multiple one-fit grid search, a computationally much less demanding approximation to leave-one-out cross-validation. In a study of benchmark problems from machine learning, we show how EasyUQ and Smooth EasyUQ can be integrated into the workflow of neural network learning and hyperparameter tuning and find EasyUQ to be competitive with state-of-the-art approaches.