
Modellansatz: Modell086 - Complex Geometries

modellansatz.de/complex-geometries

On closer inspection, we find science and especially mathematics throughout our everyday life, from the tap to automatic speed regulation on motorways, in medical technology or on our mobile phone. What the researchers, graduates and academic teachers in Karlsruhe puzzle about, you experience firsthand in our Modellansatz Podcast: "The modeling approach".

Modellansatz: Complex Geometries, Geometry of low swirl burner developed at LBNL, Visualization: Sandra May

Sandra May works at the Seminar for Applied Mathematics at ETH Zurich and visited Karlsruhe for a talk at the CRC Wave Phenomena. Her research is in numerical analysis, more specifically in numerical methods for solving PDEs. The focus is on hyperbolic PDEs and systems of conservation laws. She is interested in both theoretical aspects (such as proving stability of a certain method) and practical aspects (such as working on high-performance implementations of algorithms). Sandra May graduated with a PhD in Mathematics from the Courant Institute of Mathematical Sciences (part of New York University) under the supervision of Marsha Berger. She likes to look back on the multicultural working and learning experience there.

We talked about the numerical treatment of complex geometries. The main problem is that it is difficult to automatically generate grids for computations on the computer if the shape of the boundary is complex. Examples of such problems are the simulation of airflow around airplanes, trucks or racing cars. Typically, the approach for these flow simulations is to place the object in the middle of the grid. On the outer boundary, appropriate far-field boundary conditions account for the fact that the finite computational domain is cut out of an (in the model) infinite surrounding domain. In such simulations one is typically mainly interested in quantities close to the boundary of the object.

Instead of using an unstructured or body-fitted grid, Sandra May uses a Cartesian embedded boundary approach for the grid generation: the object with its complex geometry is cut out of a Cartesian background grid, resulting in so-called cut cells where the grid intersects the object and regular Cartesian cells otherwise. This approach is fairly straightforward and fully automatic, even for very complex geometries. The price to pay comes in the shape of the cut cells, which need special treatment. One particular challenge is that the cut cells can become arbitrarily small, since a priori their size is not bounded from below. Trying to eliminate cut cells that are too small leads to additional problems which conflict with the goal of fully automatic grid generation in 3d, which is why Sandra May keeps these potentially very small cells and instead develops strategies to deal with them.
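As a toy illustration of the embedded boundary idea (not the actual grid generator discussed in the episode), the following sketch cuts a disc out of a uniform Cartesian background grid and estimates, by sampling points inside each cell, which fraction of the cell lies in the flow domain. Cells with a fraction strictly between 0 and 1 are the cut cells, and nothing bounds that fraction away from zero:

```python
import numpy as np

# Toy embedded-boundary setup: a disc of radius R centred at (cx, cy) is cut
# out of a uniform N x N Cartesian grid on [0, 1] x [0, 1].
N, R, cx, cy = 32, 0.3, 0.5, 0.5
h = 1.0 / N

def fluid_fraction(i, j, samples=20):
    # Estimate the fraction of cell (i, j) lying OUTSIDE the disc
    # by testing a regular sub-grid of sample points inside the cell.
    s = (np.arange(samples) + 0.5) / samples
    xs = i * h + s * h
    ys = j * h + s * h
    X, Y = np.meshgrid(xs, ys)
    return np.mean((X - cx) ** 2 + (Y - cy) ** 2 > R ** 2)

frac = np.array([[fluid_fraction(i, j) for j in range(N)] for i in range(N)])
cut = (frac > 0.0) & (frac < 1.0)          # cells intersected by the boundary
print("number of cut cells:", int(cut.sum()))
print("smallest fluid fraction on a cut cell:", frac[cut].min())
```

Refining the grid or shifting the disc slightly will typically produce cut cells with very small fluid fractions, which is exactly the situation the methods discussed below have to cope with.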

The biggest challenge caused by the small cut cells is the small cell problem: easy-to-implement (and therefore standard) explicit time stepping schemes are only stable if a CFL condition is satisfied; this condition essentially couples the time step length to the spatial size of the cell. Therefore, for the very small cut cells one would need to choose tiny time steps, which is computationally infeasible. Instead, one would like to choose a time step appropriate for the regular Cartesian cells and use this same time step on the cut cells as well.
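For reference, a generic CFL condition for an explicit scheme has the form below (the exact constant depends on the scheme and the spatial dimension):

```latex
\Delta t \;\le\; C_{\mathrm{CFL}}\,\frac{h_i}{\lambda_{\max}}, \qquad 0 < C_{\mathrm{CFL}} \le 1,
```

where h_i is the size of cell i and λ_max is the largest wave speed. As h_i goes to zero on a cut cell, the admissible time step Δt shrinks with it.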

Sandra May and her co-workers have developed a mixed explicit-implicit scheme for this purpose: to guarantee stability on cut cells, an implicit time stepping method is used there. This idea is similar to the approach of using implicit time stepping schemes for solving stiff systems of ODEs. As implicit methods are computationally more expensive than explicit methods, the implicit scheme is only used where needed, namely on cut cells and their direct neighbors. In the remaining part of the grid (the vast majority of the grid cells), a standard explicit scheme is used. Of course, when using different schemes on different cells, one needs to think about a suitable way of coupling them.

The mixed explicit-implicit scheme has been developed in the context of finite volume methods. The coupling has been designed with the goals of mass conservation and stability and is based on using fluxes to couple the explicit and the implicit scheme. This way, mass conservation is guaranteed by construction (no mass is lost). In terms of stability, it can be shown that coupling a second-order explicit scheme to a first-order implicit scheme by flux bounding results in a TVD (total variation diminishing) stable method. Numerical results for coupling a second-order explicit scheme to a second-order implicit scheme show second-order convergence in the L^1 norm and between first- and second-order convergence in the maximum norm along the surface of the object, in both two and three dimensions.
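To make the flux coupling idea concrete, here is a minimal one-dimensional sketch (a toy example under simplifying assumptions, not the scheme from the episode or the underlying papers): first-order upwind finite volumes for linear advection on a periodic grid with one very small cell, where the interface flux is evaluated explicitly on regular cells and implicitly wherever the upwind donor cell belongs to the implicit region. Because each interface flux is shared by the two adjacent cells, total mass is conserved no matter which time level the flux uses.

```python
import numpy as np

# Toy mixed explicit-implicit update for u_t + a u_x = 0 (a > 0) with
# first-order upwind fluxes on a periodic grid containing one tiny "cut" cell.
a = 1.0
N = 100
h = np.full(N, 1.0 / N)                  # regular Cartesian cell sizes
m = N // 2
h[m] = 1e-4 / N                          # one arbitrarily small cut cell
x = np.cumsum(h) - 0.5 * h               # cell centres
u = np.exp(-200.0 * (x - 0.3) ** 2)      # smooth initial data

implicit = np.zeros(N, dtype=bool)
implicit[m - 1 : m + 2] = True           # cut cell and its direct neighbours

dt = 0.5 * (1.0 / N) / a                 # CFL time step based on the REGULAR cell size

def step(u):
    # Assemble A u^{n+1} = b from the finite volume update
    # u_i^{n+1} = u_i^n - dt/h_i * (F_{i+1/2} - F_{i-1/2}).
    A = np.eye(N)
    b = u.copy()
    for j in range(N):                   # interface j: left cell j-1, right cell j
        donor = (j - 1) % N              # upwind donor cell for a > 0
        for i, sign in ((j, +1.0), (donor, -1.0)):  # flux enters cell j, leaves the donor
            c = sign * dt * a / h[i]
            if implicit[donor]:
                A[i, donor] -= c         # implicit flux: uses the unknown u_donor^{n+1}
            else:
                b[i] += c * u[donor]     # explicit flux: uses the known u_donor^n
    return np.linalg.solve(A, b)

mass0 = np.sum(h * u)
for _ in range(200):
    u = step(u)
print("mass defect:", np.sum(h * u) - mass0)   # conserved up to machine precision
```

With this time step a fully explicit update on the tiny cell would be wildly unstable, while the mixed update stays stable and, because every interface flux is added to one cell and subtracted from its neighbor, conserves mass by construction.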

We also talked about the general issue of handling shocks properly in numerical simulations: in general, solutions to nonlinear hyperbolic systems of conservation laws such as the Euler equations contain shocks and contact discontinuities, which in one dimension express themselves as jumps in the solution. For a second-order finite volume method, one typically reconstructs slopes on each cell. If one reconstructed these slopes close to shocks using, e.g., central difference quotients in one dimension, this would result in oscillations and/or unphysical results (like negative density). To avoid this, so-called slope limiters are typically used. There are two main ingredients to a good slope limiter (which is applied after an initial polynomial based on interpolation has been generated): first, the algorithm (the slope limiter) needs to detect whether the solution in this cell is close to a shock or whether the solution is smooth in the neighborhood of this cell. If the algorithm concludes that the solution is close to a shock, it reacts and adjusts the reconstructed polynomial appropriately; otherwise, it sticks with the polynomial based on interpolation. One commonly used way in one dimension to identify whether one is close to a shock is to compare the values of a right-sided and a left-sided difference quotient: if they differ too much, the solution is (probably) not smooth there. Good, reliable limiters are really difficult to find.
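To illustrate the comparison of one-sided difference quotients, here is a small sketch of the classical minmod limiter in one dimension; this is a standard textbook limiter used purely as an example, not necessarily the limiter used in Sandra May's work:

```python
import numpy as np

def minmod(a, b):
    # Returns the argument of smaller magnitude if both have the same sign,
    # and zero otherwise (near an extremum or a jump).
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def limited_slopes(u, h):
    # One-sided difference quotients on a uniform 1D grid with spacing h.
    left  = (u[1:-1] - u[:-2]) / h   # backward differences
    right = (u[2:]   - u[1:-1]) / h  # forward differences
    # If the two quotients disagree strongly (near a shock), the limiter
    # reduces the reconstructed slope, in the extreme case to zero.
    return minmod(left, right)

h = 0.1
x = np.arange(0.0, 1.0 + h, h)
# Smooth data: limited slopes stay close to the one-sided difference quotients.
print(limited_slopes(np.sin(2 * np.pi * x), h))
# Data with a jump: the slope at the jump is clipped to zero.
print(limited_slopes(np.where(x < 0.5, 0.0, 1.0), h))
```

At the jump the limited slope is zero, so the reconstruction locally falls back to a piecewise constant, first-order approximation, which is exactly the defensive behavior described above.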


