Efficient resolution of turbulent compressible flows: a balance between consolidation and innovation

Matteo Zancanaro
SISSA mathLab
Nov 15, 2021

Curiosity about the laws of fluid motion dates back to the Ancient Greeks: questions about the rules describing fluid flow behaviour arose both from a pure interest in understanding the phenomena and from practical purposes.

Nowadays a mathematical description for these kinds of problems is pretty well established: the equations quantifying fluid dynamics are the very well known Navier-Stokes equations. This set of equations can be tuned to describe very different regimes, ranging from laminar flows to very complicated phenomena: the most accurate, and thus most intricate, case is the one represented by the complete compressible turbulent Navier-Stokes equations.

Problems of this kind are generally solved by a Direct Numerical Simulation (DNS) approach when very accurate and detailed solutions are required.

Do we really need such complexity in real life?

Most of the time, when dealing with Computational Fluid Dynamics (CFD), what we are interested in are forces or, more generally, macro scale phenomena. Small-scale behaviours, for this reason, do not play a valuable role in the observable events. A classical example of this idea is turbulence. Turbulence is a general name for a huge number of different micro scale dynamics, which are responsible for the viscous dissipation of energy in the system at the end of the so called “energy cascade”. Since we are dealing with discrete solutions of the fluid flow, a DNS would require, on the one hand, a subdivision of the domain of interest into an enormous number of tiny cells, so that the small eddies representing the turbulence effects do not “drop through the holes of the net”. On the other hand, those eddies do not contribute to the observable aspects of the problem except through the energy dissipation.

This duality is, in general, treated by splitting every variable involved in the problem of interest into a main part and a fluctuating one:

v = ṽ + v″

where the tilde indicates the main component of the variable v, the part we are actually interested in, and v″ is the fluctuation. By applying classical averaging techniques (Reynolds, Favre) it is possible to recombine the Navier-Stokes equations in such a way that the whole problem is described in terms of these main variables alone. The only additional quantity to evaluate, in this case, is the amount of energy to be dissipated because of the presence of turbulence: the eddy viscosity.
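As a minimal sketch of this splitting (assuming, for illustration, a simple one-dimensional velocity signal sampled in time), the main part can be taken as the time average and the fluctuation as whatever is left over:

```python
import numpy as np

# Hypothetical velocity signal: a mean profile plus turbulent-like noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 2000)
u = 5.0 + 0.5 * np.sin(2 * np.pi * t) + 0.1 * rng.standard_normal(t.size)

# Reynolds-style splitting: main (averaged) part plus fluctuation.
u_main = u.mean()
u_fluct = u - u_main

# By construction, the fluctuating part averages to zero and the two
# components recombine exactly into the original signal.
```

The Favre (density-weighted) average used for compressible flows follows the same pattern, with the mean weighted by the density field.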

What is a parametric problem?

In many cases the problem depends not only on its variables: it may also be influenced by some fixed parameters which change, e.g., the physical properties of the fluid, the shape of the domain of interest, or more than one of these at the same time. When this is the case, many real applications require the evaluation of a huge number of different configurations.

Consider that the more accurate a solution we need, the larger the number of cells we must employ to approximate the domain, and therefore the higher the computational cost of a single evaluation. If we combine this aspect with the need for a wide range of parametric solutions, such an architecture quickly becomes unfeasible.

Is it possible to exploit what we “learnt” by computing some parametric solutions to evaluate additional ones more cheaply?

Overcoming this obstacle is the main goal of a big variety of mathematical methods, collectively known as Reduced Order Models. In our case, we use a technique called Proper Orthogonal Decomposition (POD), capable of extracting the necessary information from a set of accurate solutions in order to evaluate new realisations at a much lower computational cost.

To better understand how it works, let us imagine that our solutions are audio tracks.

Each track can be decomposed into frequency bands, so that every band contains some “hidden dynamics”. The original solution can then be reconstructed by putting together a certain number of the collected bands: the higher the number of retained frequencies, the more accurate the reconstructed solution.

It is pretty clear that, if too few dynamics are retained, the reconstructed solution has no significant meaning. On the contrary, it is not mandatory to use all the bands obtained from the solution to reconstruct the main content: high frequencies are usually more related to noise, and their presence does not improve the real meaning of the reconstructed solution. An example is provided in the two movies reported here.
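The band-by-band reconstruction described above can be sketched in a few lines. The signal, its noise level, and the numbers of retained bands below are all made up for illustration:

```python
import numpy as np

# Hypothetical "track": low-frequency content plus high-frequency noise.
rng = np.random.default_rng(1)
n = 1024
t = np.arange(n) / n
clean = np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 7 * t)
noisy = clean + 0.2 * rng.standard_normal(n)

def keep_bands(x, n_bands):
    """Reconstruct x from its n_bands lowest-frequency components only."""
    spectrum = np.fft.rfft(x)
    truncated = np.zeros_like(spectrum)
    truncated[:n_bands] = spectrum[:n_bands]
    return np.fft.irfft(truncated, n=x.size)

# Too few bands miss the real content entirely; a modest number of bands
# recovers it, while also filtering out most of the high-frequency noise.
err_few = np.linalg.norm(clean - keep_bands(noisy, 2))
err_more = np.linalg.norm(clean - keep_bands(noisy, 10))
```

Note that retaining 10 bands already captures both sine components of the clean signal, so adding further bands would mostly reintroduce noise.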

A POD approach simply collects these “hidden dynamics”, called modal basis functions, from a certain number of parametric solutions, so that an additional solution, for a new value of the parameter on which the problem depends, can be evaluated much more efficiently as just a combination of these functions. This process is based on the idea of projection: a discretized problem characterized by a huge dimension gets projected onto a reduced space, ending up with a much lower number of unknowns.

Projection of the high fidelity discretized problem over the reduced space generated as a combination of the modal basis functions
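A minimal POD sketch, assuming a made-up parametric family of one-dimensional solutions that happens to span a low-dimensional space: the modal basis functions are the left singular vectors of the snapshot matrix, and a new solution is approximated with just a handful of reduced unknowns:

```python
import numpy as np

# Hypothetical snapshot set: full order solutions u(x; mu) computed for
# several training values of the parameter mu.
x = np.linspace(0.0, 1.0, 500)           # "mesh" of the full order model
mus = np.linspace(0.5, 2.0, 20)          # training parameter values

def full_order(mu):
    # Made-up parametric solution standing in for an expensive CFD solve.
    return np.sin(np.pi * x) + mu * np.sin(2 * np.pi * x) + mu**2 * x**2

snapshots = np.column_stack([full_order(mu) for mu in mus])  # (500, 20)

# POD: the left singular vectors of the snapshot matrix are the modal basis
# functions; the singular values tell how many modes actually matter.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
r = 3                                     # retained modes
basis = U[:, :r]                          # shape (500, 3)

# A solution for an unseen parameter is approximated by projection:
# 3 reduced unknowns instead of 500 cell values.
u_new = full_order(1.3)                   # mu = 1.3 is not a training value
coeffs = basis.T @ u_new                  # reduced coordinates
u_rom = basis @ coeffs                    # reconstructed solution
rel_err = np.linalg.norm(u_new - u_rom) / np.linalg.norm(u_new)
```

In a real reduced order model the governing equations themselves are projected onto `basis`, so the reduced coefficients are obtained by solving a tiny system rather than by projecting a known full order solution; the sketch above only illustrates the compression step.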

Is the eddy viscosity no longer needed?

Clearly, we still have to evaluate the viscous energy dissipation due to turbulence. To obtain an efficient procedure, we rely on the training, and subsequent evaluation, of a Neural Network (NN), constructed to analyze a certain number of accurate discrete solutions (full order solutions) and to provide as an output new solutions at a reduced cost (reduced order solutions).

Eddy viscosity evaluation
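As a hedged sketch of this last ingredient: the network below is a tiny one-hidden-layer model, trained with plain gradient descent on made-up data, standing in for the map from reduced velocity coefficients to eddy viscosity coefficients. The sizes, data, and architecture are illustrative assumptions, not the ones used in the actual work:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical training data: reduced velocity coefficients (inputs) and
# corresponding eddy viscosity coefficients (targets), here generated by a
# made-up smooth map standing in for the full order computation.
X = rng.uniform(-1.0, 1.0, size=(200, 4))        # 4 velocity modes
Y = np.tanh(X @ rng.standard_normal((4, 2)))     # 2 eddy viscosity modes

# One-hidden-layer network with tanh activation.
W1 = 0.5 * rng.standard_normal((4, 16)); b1 = np.zeros(16)
W2 = 0.5 * rng.standard_normal((16, 2)); b2 = np.zeros(2)

def forward(X):
    H = np.tanh(X @ W1 + b1)                     # hidden activations
    return H, H @ W2 + b2                        # predictions

_, P0 = forward(X)
loss0 = np.mean((P0 - Y) ** 2)                   # loss before training

# Full-batch gradient descent on the mean squared error.
lr = 0.05
for _ in range(3000):
    H, P = forward(X)
    G = 2.0 * (P - Y) / len(X)                   # dLoss/dP
    gW2 = H.T @ G; gb2 = G.sum(axis=0)
    GH = (G @ W2.T) * (1.0 - H**2)               # backprop through tanh
    gW1 = X.T @ GH; gb1 = GH.sum(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

_, P = forward(X)
loss = np.mean((P - Y) ** 2)                     # loss after training
```

Once trained, evaluating the network for a new parametric configuration costs a couple of small matrix products, which is what makes it compatible with the efficiency goals of the reduced order model.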

What are these techniques, in practice, used for?

This architecture is often employed for problems characterised by high subsonic velocities, where a simpler approach would not be able to capture the right features of the phenomena involved.

Reduced velocity and pressure for a geometry variation test case of an aerofoil.

A classical example is given by shape optimisation problems: when a certain performance of an object immersed in a fluid has to be enhanced, many different configurations have to be tested, and the application of such a scheme can become very efficient.

The techniques we developed can be very useful for a wide range of applications, from automotive to aerospace engineering, passing through nautical studies and civil engineering problems. This brief introduction is just a collection of the perspectives and difficulties of this world, but the journey towards a general, efficient and handy architecture is anything but over…

References

[1] Matteo Zancanaro, Markus Mrosek, Giovanni Stabile, Carsten Othmer, and Gianluigi Rozza. “Hybrid neural network reduced order modelling for turbulent flows with geometric parameters”. In: MDPI Fluids 6.8 (2021): 296.

[2] Giovanni Stabile, Matteo Zancanaro, and Gianluigi Rozza. “Efficient geometrical parametrisation for finite-volume-based reduced order methods”. In: International Journal for Numerical Methods in Engineering 121.12 (2020), pp. 2655–2682.

[3] Giovanni Stabile and Gianluigi Rozza. ITHACA-FV — In real Time Highly Advanced Computational Applications for Finite Volumes. Accessed: 08/11/2021. URL: http://www.mathlab.sissa.it/ithaca-fv.
