Institut für Stochastik

Secretariat
Kollegiengebäude Mathematik (20.30)
Rooms 2.056 and 2.002

Address
Street address:
Karlsruher Institut für Technologie (KIT)
Institut für Stochastik
Englerstr. 2
D-76131 Karlsruhe

Postal address:
Karlsruher Institut für Technologie (KIT)
Institut für Stochastik
Postfach 6980
D-76049 Karlsruhe

Office hours:
Mon-Fri 10:00-12:00

Tel.: 0721 608 43270/43265

Fax: 0721 608 46066

AG Stochastik (winter semester 2016/17)

Lecturers: Prof. Dr. Vicky Fasen-Hartmann, Prof. Dr. Daniel Hug, Prof. Dr. Nicole Bäuerle, Prof. Dr. Günter Last, Prof. Dr. Norbert Henze
Course: Seminar (0127200)
Weekly hours: 2


Students and guests are always welcome.
Unless explicitly stated otherwise below, all talks take place in seminar room 2.59 in the mathematics building (building 20.30).

Schedule
Seminar: Tuesday, 15:45-17:15, SR 2.59
Lecturers
Seminar organizer Prof. Dr. Vicky Fasen-Hartmann
Office hours: On research sabbatical in the winter semester 2017/18.
Room 2.053, Kollegiengebäude Mathematik (20.30)
Email: vicky.fasen@kit.edu
Seminar organizer Prof. Dr. Daniel Hug
Office hours: By appointment.
Room 2.051, Kollegiengebäude Mathematik (20.30)
Email: daniel.hug@kit.edu
Seminar organizer Prof. Dr. Nicole Bäuerle
Office hours: Wednesday, 11:00-12:00, and by appointment
Room 2.016, Kollegiengebäude Mathematik (20.30)
Email: nicole.baeuerle@kit.edu
Seminar organizer Prof. Dr. Günter Last
Office hours: By appointment.
Room 2.001, secretariat 2.056, Kollegiengebäude Mathematik (20.30)
Email: guenter.last@kit.edu
Seminar organizer Prof. Dr. Norbert Henze
Office hours: By appointment.
Room 2.020, secretariat 2.002, Kollegiengebäude Mathematik (20.30)
Email: henze@kit.edu

Talks

Tuesday, 07.02.2017

15:45 Prof. Dr. Ralf Korn (TU Kaiserslautern):

Chance-Risk Classification of Retirement Provision Products: Financial-Mathematical Aspects and Problems

Abstract: Since 1 January 2017, subsidized retirement provision products in Germany must be assigned to one of five so-called chance-risk classes by the Produktinformationsstelle Altersvorsorge in Kaiserslautern in order to inform consumers. This classification is based on stochastic simulation of the terminal wealth achieved by each product.
A number of theoretical and conceptual questions had to be settled beforehand, for example:

  • the choice of a capital-market model (which interest-rate model, which stock model?)
  • the choice of chance and risk measures
  • the simulation of retirement provision products (detailed vs. standardized)
  • the calibration of the models used and of the chance-risk classes

The talk addresses practical and theoretical aspects of the models, the retirement provision products, and the calibration. Along the way, both familiar and surprising properties arise in the individual areas.
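The basic pipeline of such a classification, simulating a product's terminal wealth many times and then summarising the distribution by a chance and a risk measure, can be sketched in a few lines. The toy product, all parameter values, and the choice of mean and 20%-quantile as chance and risk measures below are illustrative assumptions, not the regulated model:

```python
import math
import random
import statistics

random.seed(1)

def terminal_wealth(premium=100.0, years=30, equity_share=0.5,
                    mu=0.05, sigma=0.2, r=0.01, n_paths=10_000):
    """Simulate terminal wealth of a stylized savings product that splits
    each annual premium between a Black-Scholes stock and a riskless bond.
    (Illustrative model only; the official classification uses a regulated
    capital-market model and product-specific cash flows.)"""
    results = []
    for _ in range(n_paths):
        wealth_stock, wealth_bond = 0.0, 0.0
        for _ in range(years):
            wealth_stock += equity_share * premium
            wealth_bond += (1 - equity_share) * premium
            z = random.gauss(0.0, 1.0)
            wealth_stock *= math.exp(mu - 0.5 * sigma ** 2 + sigma * z)
            wealth_bond *= math.exp(r)
        results.append(wealth_stock + wealth_bond)
    return results

paths = terminal_wealth()
chance = statistics.fmean(paths)       # chance measure: mean terminal wealth
risk = sorted(paths)[len(paths) // 5]  # risk measure: 20%-quantile
print(f"mean terminal wealth: {chance:.0f}")
print(f"20%-quantile:         {risk:.0f}")
```

Products would then be binned into the five chance-risk classes by comparing such (chance, risk) pairs against reference products.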

Tuesday, 31.01.2017

15:45 Dipl.-Math. M.Eng. Dirk Lange (Institut für Stochastik, KIT):

Cost Optimal Control of Piecewise Deterministic Markov Processes under Partial Observation

Abstract: We develop a general model for a controlled Piecewise Deterministic Markov Process (PDMP) under partial observation: an unobservable, underlying PDMP is assumed, and partial observation is modeled by noisy measurements of its post-jump states. Inter-jump times are assumed to be perfectly observable. Admissible control policies are then history-dependent relaxed piecewise open-loop policies.
We derive sufficient conditions for the existence of optimal policies for the total discounted cost problem in this model. To do so, we reformulate the initial continuous-time optimization problem under partial observation into an equivalent discrete-time optimization problem under complete observation. The key step along the way is the development of an adequate filter: a recursive calculation of the conditional distribution of the unobservable post-jump state given the observed history.
We further study a second model of partial observation in which, in addition to the assumptions of the first model, the inter-jump times are unobservable. For this model we derive the existence of optimal policies for an even broader class of processes.
Finally, we apply our results to an example with a convex cost function, where we characterize an optimal policy as being of "bang-bang" type and determine its concrete decision rules.

Key facts on Piecewise Deterministic Markov Processes:
Piecewise Deterministic Markov Processes (PDMPs) were introduced by Davis as a "general class of non-diffusion models". This statement is to be understood in the context of the following result of Cinlar and Jacod from 1981: "Every strong Markov process with values in $\mathbb{R}^d$ and continuous paths of locally bounded total variation is deterministic". Hence, if one wants a non-trivial stochastic process with paths of locally bounded total variation, one has to allow for jumps. A PDMP is essentially described by its three characteristics: the deterministic drift, the jump intensity, and the jump transition kernel. A PDMP starts in its initial state and follows the path described by its deterministic drift. At a random point in time, the path jumps. The jump intensity characterizes the probability distribution of the jump time; the jump transition kernel describes the transition from the pre-jump state to the post-jump state of the process. After a jump, the PDMP again follows the path of its deterministic drift up to the next random jump time, and so on.
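A minimal simulation built from exactly these three characteristics might look as follows; the exponential-decay drift, the constant jump intensity, and the additive Exp(1) jump kernel are arbitrary choices made for illustration:

```python
import math
import random

random.seed(0)

def simulate_pdmp(x0=1.0, lam=2.0, horizon=10.0):
    """Simulate a toy PDMP via its three characteristics:
    - deterministic drift: exponential decay, x(t) = x * exp(-t) between jumps,
    - jump intensity: constant lam (so inter-jump times are Exp(lam)),
    - jump transition kernel: add an independent Exp(1) amount to the
      pre-jump state.
    Returns the list of (jump_time, post_jump_state) pairs."""
    t, x, jumps = 0.0, x0, []
    while True:
        wait = random.expovariate(lam)    # next inter-jump time
        if t + wait > horizon:
            break
        t += wait
        x = x * math.exp(-wait)           # follow the drift up to the jump
        x = x + random.expovariate(1.0)   # apply the jump transition kernel
        jumps.append((t, x))
    return jumps

jumps = simulate_pdmp()
print(f"{len(jumps)} jumps before t=10, last post-jump state {jumps[-1][1]:.3f}")
```

For a state-dependent jump intensity one would replace the Exp(lam) draw by a thinning step; the constant-intensity case keeps the sketch short.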


Tuesday, 17.01.2017

15:45 Dr. Dennis Dobler (Universität Ulm):

Resampling-based inference for the Wilcoxon-Mann-Whitney effect in survival analysis for possibly tied data

Abstract: In a two-sample survival setting with independent survival variables T1 and T2 and independent right-censoring, the Wilcoxon-Mann-Whitney effect
p = P(T1 > T2) + 1/2 P(T1 = T2)
is an intuitive measure for discriminating between two survival distributions. When comparing two treatments, the case p > 1/2 suggests the superiority of the first over the second. Nonparametric maximum likelihood estimators based on normalized Kaplan-Meier estimators naturally handle tied data, which are omnipresent in practical applications. Studentizations allow for asymptotically accurate inference for p. For small samples, however, the coverage probabilities of confidence intervals are considerably enhanced by means of bootstrap and permutation techniques. The latter even yields finitely exact procedures in the situation of exchangeable data. Simulation results support all theoretical properties under various censoring and distribution setups.
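For fully observed data without censoring, the plug-in estimate of the effect reduces to a normalized pair count. The sketch below shows only this uncensored special case; the estimator in the talk replaces the empirical distribution functions by normalized Kaplan-Meier estimators:

```python
import random

random.seed(42)

def wmw_effect(sample1, sample2):
    """Plug-in estimate of p = P(T1 > T2) + 0.5 * P(T1 = T2) for fully
    observed (uncensored) samples: count winning pairs, ties count half."""
    score = 0.0
    for t1 in sample1:
        for t2 in sample2:
            if t1 > t2:
                score += 1.0
            elif t1 == t2:
                score += 0.5
    return score / (len(sample1) * len(sample2))

# Discrete survival times produce ties with positive probability.
group1 = [random.randint(1, 12) for _ in range(200)]  # stochastically larger
group2 = [random.randint(1, 8) for _ in range(200)]
p_hat = wmw_effect(group1, group2)
print(f"estimated effect p = {p_hat:.3f}")  # p > 1/2: group 1 superior
```

Bootstrap or permutation inference would then resample the two groups and recompute this statistic.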


Tuesday, 10.01.2017

15:45 Prof. Dr. Pierre Calka (Université de Rouen):

The typical Poisson-Voronoi cell around an isolated nucleus

Abstract: We construct the planar Voronoi tessellation generated by the union of the origin and a homogeneous Poisson point process. We are interested in the cell associated with the origin and conditioned on containing a fixed convex body K. When the intensity of the Poisson point process goes to infinity, we obtain explicit asymptotics for the mean characteristics of this random polygon. As in Rényi and Sulanke's seminal papers on random convex hulls, the regularity of the boundary of the convex body K is of crucial importance. We then describe the asymptotic shape of two other random polygons: first the cell containing K when the point process is conditioned on the event that K is included in one of the cells and secondly the cell associated with the origin when the point process is conditioned on not intersecting a fixed deterministic set around the origin. This is joint work with Yann Demichel and Nathanaël Enriquez (Université Paris Ouest, France).
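As a small numerical warm-up to the unconditioned model: a point x belongs to the cell of the origin exactly when the disc of radius |x| around x contains no Poisson point, so the mean area of that cell is the integral of exp(-lambda * pi * |x|^2), which equals 1/lambda in the plane. A Monte Carlo check of this classical identity (intensity and sample sizes are arbitrary choices):

```python
import math
import random

random.seed(7)

lam = 2.0            # intensity of the homogeneous Poisson point process
n_samples = 200_000
R = 3.0              # integration box [-R, R]^2; the integrand is
                     # exp(-lam*pi*r^2), negligible beyond this range

# E[area of the origin's cell] = integral of P(x in cell of the origin) dx
# = integral of exp(-lam * pi * |x|^2) dx, estimated by uniform sampling.
total = 0.0
for _ in range(n_samples):
    x = random.uniform(-R, R)
    y = random.uniform(-R, R)
    total += math.exp(-lam * math.pi * (x * x + y * y))
estimate = (2 * R) ** 2 * total / n_samples

print(f"Monte Carlo mean area: {estimate:.4f}  (exact value 1/lambda = {1 / lam})")
```

The talk's results concern the much finer question of the cell's shape after conditioning on containing the convex body K.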


Tuesday, 13.12.2016

15:45 Prof. Dr. Mathew Penrose (University of Bath, UK):

Optimal cuts of random geometric graphs

Abstract: Given a `cloud' of n points sampled independently uniformly at random from a Euclidean domain D, one may form a geometric graph by connecting nearby points using a distance parameter r(n). We consider the problem of partitioning the cloud into two pieces to minimise the number of `cut edges' of this graph, subject to a penalty for an unbalanced partition. The optimal score is known as the Cheeger constant of the graph. We discuss convergence of the Cheeger constant (suitably rescaled) for large n with suitably chosen r(n), towards an analogous quantity defined for the original domain D.
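The objects in this abstract are easy to set up numerically. The sketch below builds a random geometric graph and evaluates the balanced-cut score of one candidate partition; the Cheeger constant is the minimum of this score over all partitions, which the sketch makes no attempt to compute (n, r, and the vertical-split partition are illustrative choices):

```python
import random

random.seed(3)

n, r = 400, 0.08
pts = [(random.random(), random.random()) for _ in range(n)]

# Edges of the random geometric graph: connect points at distance <= r.
edges = [(i, j) for i in range(n) for j in range(i + 1, n)
         if (pts[i][0] - pts[j][0]) ** 2 + (pts[i][1] - pts[j][1]) ** 2 <= r * r]

def cut_score(in_part):
    """Cheeger-type score of a bipartition: number of cut edges divided by
    the size of the smaller side (the penalty for an unbalanced partition)."""
    cut = sum(1 for i, j in edges if in_part[i] != in_part[j])
    size = sum(in_part)
    return cut / min(size, n - size)

# One candidate partition: split the unit square at x = 1/2.
left = [x < 0.5 for (x, _) in pts]
print(f"{len(edges)} edges, score of the vertical split: {cut_score(left):.3f}")
```

The convergence result in the talk says that, after rescaling, the minimum of such scores approaches a continuum Cheeger-type constant of the domain D.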


Tuesday, 06.12.2016

15:45 Prof. Dr. Gerd Schröder-Turk (Murdoch University, Perth):

Hyperuniformisation by Lloyd’s algorithm for Centroidal Voronoi diagrams (?)

Abstract: Gerd Schröder-Turk (Murdoch University, Perth) & Jakov Lovric (Ruđer Bošković Institute, Zagreb), gratefully acknowledging advice, algorithms, help and discussion by Sebastian Kapfer, Fabian Schaller (both FAU Erlangen), Michael Klatt (Karlsruhe Institute of Technology), Bruce Gardiner (Murdoch University), Ana Smith (FAU Erlangen) and others.

In this talk, I will give an account of preliminary numerical work and present incomplete data to support my hypotheses. The talk will be a discussion of the subject matter rather than a comprehensive presentation of a fully conclusive research finding.

Hyperuniformity is a concept developed by Salvatore Torquato over the last decade to single out spatial point processes with particularly uniform distributions of points. Given a realisation of a point process, one analyses the number of points N within a spherical window of observation of radius R, and the variations of N as one moves the window of observation across the sample. A point process (or a realisation of a point process) is called hyperuniform if the variations of N are proportional to the surface area of the window of observation rather than to its volume (this is to be analysed in the limit of large R). The Poisson point process is obviously not hyperuniform. All crystal lattices are hyperuniform, but it turns out that there are also disordered spatial patterns that are hyperuniform. Amongst these are especially those disordered configurations that are singled out in physics as 'special', such as the structure of hard-core assemblies of equal-sized spheres at the so-called 'random close packing limit' near packing fraction 64%. Torquato has described hyperuniform structures as "the fourth state of matter".

Centroidal Voronoi diagrams (CVD) are Voronoi diagrams of special configurations of points where each generating point coincides with the center of mass of its Voronoi cell. Interestingly, these Voronoi diagrams are those where the cell shapes minimise a functional that is very closely related to a Minkowski tensor. There is an algorithm, called Lloyd's algorithm, that converts a set of points (and its Voronoi diagram) into a centroidal Voronoi diagram and that is based on the minimisation of this functional. Several aspects of centroidal Voronoi diagrams have been extensively studied, partially in the context of meshing in computer graphics.

Here we discuss an application of centroidal Voronoi diagrams that seems to have been overlooked. The simple idea is that the evolution of points under Lloyd's algorithm leads, in a loose sense, to more uniform structures. We therefore investigated, and found initial positive evidence, that Lloyd's algorithm can be used to achieve hyperuniform structures when starting from several types of disordered point patterns. We have not found a point pattern for which Lloyd's algorithm does not lead to a hyperuniform structure. A related, and important, question is whether Lloyd's algorithm induces crystallisation (i.e. ordering into at least locally crystalline structures) of the point configuration; one might have thought this possible, given that the body-centred cubic Bravais lattice is the configuration with the lowest energy with respect to the functional used in Lloyd's algorithm. Preliminary data indicate that we see no crystallisation, although this requires more rigorous analysis.
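Lloyd's iteration itself is simple to sketch. The version below approximates each Voronoi cell by the points of a dense uniform sample of the unit square assigned to their nearest generator, a Monte Carlo stand-in for exact cell centroids; the point counts and iteration number are arbitrary choices:

```python
import random

random.seed(5)

def lloyd_step(gens, samples):
    """One (Monte Carlo) Lloyd iteration: approximate each Voronoi cell by
    the sample points nearest to its generator, then move the generator to
    the centroid of that approximate cell. Iterating drives the
    configuration toward a centroidal Voronoi diagram of the unit square."""
    sums = [[0.0, 0.0, 0] for _ in gens]
    for (sx, sy) in samples:
        k = min(range(len(gens)),
                key=lambda i: (gens[i][0] - sx) ** 2 + (gens[i][1] - sy) ** 2)
        sums[k][0] += sx
        sums[k][1] += sy
        sums[k][2] += 1
    # Empty cells (possible with few samples) leave the generator in place.
    return [(s[0] / s[2], s[1] / s[2]) if s[2] else g
            for s, g in zip(sums, gens)]

gens = [(random.random(), random.random()) for _ in range(10)]
samples = [(random.random(), random.random()) for _ in range(5_000)]
for _ in range(15):
    gens = lloyd_step(gens, samples)
print("first three generators:",
      [(round(x, 3), round(y, 3)) for x, y in gens[:3]])
```

On a fixed sample this is exactly Lloyd's k-means iteration; production CVD codes instead compute the cell centroids from the exact Voronoi geometry.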


Tuesday, 22.11.2016

15:45 Viet Son Pham (TU München):

Volterra-type Ornstein-Uhlenbeck processes in space and time

Abstract: We propose a novel class of tempo-spatial Ornstein-Uhlenbeck processes as solutions to Lévy-driven Volterra equations with additive noise and multiplicative drift. After formulating conditions for the existence and uniqueness of solutions, we derive an explicit solution formula and discuss distributional properties such as stationarity, second-order structure and short versus long memory. Furthermore, we analyze in detail the path properties of the solution process. In particular, we introduce different notions of càdlàg paths in space and time and establish conditions for the existence of versions with these regularity properties. The theoretical results are accompanied by illustrative examples.
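The simplest member of this family, a purely temporal Brownian-driven Ornstein-Uhlenbeck process, can be simulated with a standard Euler-Maruyama scheme. This sketch only illustrates the mean-reverting dynamics, not the tempo-spatial Volterra setting of the talk, and all parameters are arbitrary:

```python
import math
import random

random.seed(11)

def simulate_ou(x0=0.0, lam=1.5, sigma=0.8, T=200.0, n_steps=20_000):
    """Euler-Maruyama scheme for the classical Ornstein-Uhlenbeck SDE
    dX_t = -lam * X_t dt + sigma dW_t, the simplest (purely temporal,
    Brownian-driven) relative of the processes in the talk.
    Its stationary distribution is N(0, sigma^2 / (2 * lam))."""
    dt = T / n_steps
    x, path = x0, [x0]
    for _ in range(n_steps):
        x += -lam * x * dt + sigma * math.sqrt(dt) * random.gauss(0.0, 1.0)
        path.append(x)
    return path

path = simulate_ou()
second_half = path[len(path) // 2:]          # discard the burn-in
emp_var = sum(v * v for v in second_half) / len(second_half)
print(f"empirical variance {emp_var:.3f} "
      f"vs stationary {0.8 ** 2 / (2 * 1.5):.3f}")
```

In the Volterra setting, the exponential kernel implicit in this recursion is replaced by more general (tempo-spatial) kernels driven by a Lévy basis.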


Tuesday, 18.10.2016

15:45 Dr. Michael Pokojovy (Institut für Analysis, KIT):

The Taut String Estimator: Weak Convergence and Confidence Bands

Abstract: Davies and Kovac (2001) proposed their taut string estimator to estimate the conditional mean from a set of process data with iid errors within the framework of nonparametric regression. We prove the convergence of the taut string estimator in negative Sobolev spaces at the optimal rate of n^{-1/2} as the sample size n goes to infinity and derive confidence bands for the (unknown) conditional expectation, which is only assumed to be Hölder-continuous. Further, under an additional regularity assumption, the explicit form of the leading error term is derived. As an application, we show how the taut string estimator can be used to solve inverse problems with noise. An illustration based on real data is given and a numerical study on the robustness of our approach is presented.


Tuesday, 05.10.2016

15:45 Gregor Leimcke:

Stochastic Filtering Theory in Property and Casualty Insurance

Abstract: We consider the modeling of claim arrival times by point processes with stochastic intensities, where the information available to the insurance company is restricted. We assume that the insurer can observe the number of claims, but not the intensity of the underlying point process. This leads to filtering problems in which the observation is a point process. To solve such filtering problems, we introduce the innovations approach for point-process observations. We then apply the innovations approach to the filtering problem in which the observations are claim arrival times. The arrival times are modeled by a point process with a self-exciting property whose intensity is unobservable. For this filtering problem we compute the Kushner-Stratonovich equation for the conditional distribution of the intensity of the point process given the past claim arrival times.
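A standard example of such a self-exciting claim-arrival model is a Hawkes process with an exponentially decaying intensity kernel; in the filtering problem the intensity below is exactly what the insurer cannot observe. A simulation sketch via Ogata's thinning, with illustrative parameters:

```python
import math
import random

random.seed(13)

def simulate_hawkes(mu=0.5, alpha=0.8, beta=1.2, horizon=50.0):
    """Ogata thinning for a self-exciting (Hawkes) claim-arrival process
    with intensity lambda(t) = mu + sum_i alpha * exp(-beta * (t - t_i)).
    Between arrivals the intensity only decays, so its current value is a
    valid thinning bound. Stability requires alpha < beta."""
    t, events = 0.0, []

    def intensity(s):
        return mu + sum(alpha * math.exp(-beta * (s - ti)) for ti in events)

    while t < horizon:
        lam_bar = intensity(t)             # upper bound on [t, next candidate]
        t += random.expovariate(lam_bar)   # candidate arrival time
        if t < horizon and random.random() <= intensity(t) / lam_bar:
            events.append(t)               # accept: a claim occurs
    return events

claims = simulate_hawkes()
print(f"{len(claims)} claims in [0, 50]; "
      f"long-run rate mu / (1 - alpha/beta) = {0.5 / (1 - 0.8 / 1.2):.2f}")
```

The filtering task is then to recover the conditional law of this intensity from the arrival times alone.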