How will Deep Learning be used to speed up Physical Simulations?

Dario Coscia
SISSA mathLab
Published Aug 7, 2023 · 5 min read

Progress in Artificial Intelligence for the Natural Sciences

Photo by Google DeepMind on Unsplash

This is the first article of the series Deep Learning 4 Natural Sciences. In the series, we will dive into deep learning methods for solving differential equations, present open source software for the problem, and report the latest advancements in the field. We will cover various models, such as Physics Informed Neural Networks, Neural Operator Learning, and recent reduced order models based on generative modelling [10, 11].

Deep Learning 4 Natural Sciences series

How will deep learning be used to speed up physical simulations?

Blending Neural Networks with Physics: the Physics Informed Neural Network

Neural Operators and Where to Find Them

PINA, a Python Software for Scientific Machine Learning

Autoregression is all you need: Autoregressive Neural Operators

Generative Models for Physical Simulations

Are we already there? Latest Advancements and Challenges in Deep Learning for Natural Sciences

Over the last decades, deep learning has received increasing attention for solving complex tasks in both academia and industry. Computer vision, speech recognition, autonomous driving, and voice assistance are just a few of its impressive applications. Alongside these advancements, a pressing question has emerged: can we use deep learning for natural science problems?

Natural phenomena can be described by differential equations. These mathematical objects are so powerful that they can model very complex and diverse systems, from predicting the weather, to understanding heart dynamics, to describing the motion of molecules. Differential equations are fundamental tools for understanding the world around us and for making predictions. In fields like sports car design or tumor expansion analysis, differential equations are crucial for making accurate predictions and developing efficient solutions. However, solving these equations by hand is extremely challenging, and expensive numerical solvers are needed to obtain an accurate solution. Beyond this computational complexity, numerical solvers can sometimes be so slow that computing a solution becomes unfeasible, which makes their use in engineering and industry challenging.

Different problems can be solved by differential equations. The images show potential applications in academia and industry. Image sources from left to right in [1][2][3][4].
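To make the cost of numerical solvers concrete, here is a minimal, purely illustrative sketch of the simplest numerical scheme, explicit Euler, applied to the toy equation u'(t) = -u(t) with u(0) = 1, whose exact solution is exp(-t):

```python
import numpy as np

# Explicit Euler for the toy differential equation u'(t) = -u(t), u(0) = 1.
# The exact solution is u(t) = exp(-t); the numerical answer is only
# approximate, and its accuracy is paid for with many small time steps.

dt, steps = 0.001, 1000          # step size and number of steps (t in [0, 1])
u = 1.0
for _ in range(steps):
    u = u + dt * (-u)            # Euler update: u_{n+1} = u_n + dt * f(u_n)

print(u, np.exp(-1.0))           # numerical vs exact value at t = 1
```

A toy problem needs only a thousand scalar updates; real engineering simulations discretise space and time into millions of unknowns and use far more elaborate schemes, which is where the computational cost explodes.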

To address these challenges, Reduced Order Models (ROMs) have emerged as a promising field in computational science, offering efficient tools for real-time simulations. The underlying “mantra” of ROMs is to make differential equation simulations efficient and fast, at the cost of small errors compared to numerical solvers.

In recent years, deep learning techniques have played a pivotal role in advancing efficient ROM methods with exceptional accuracy and reduced computational costs, as evidenced by the vast literature on the topic. Similarly to the human learning process, where knowledge is assimilated through experience gained from different examples, deep learning reduced order models are trained on a small amount of data, from which the model generalises. These data come from differential equation solutions obtained with different system parameters (e.g. time evolution, the shape of the geometry, or other free variables of the equation). Nowadays, two main frameworks are driving the deep learning community for learning differential equations: Physics Informed Neural Networks and Neural Operator Learning. Both of them can be considered reduced order models enhanced by deep learning.

Reduced order models are smaller, more efficient, and faster than the full, complex model. Starting from a few full order model solutions, a reduced model is constructed that approximates the full order solutions.
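As a concrete (non-deep-learning) illustration of the reduced order idea, the sketch below builds a classical ROM ingredient, a Proper Orthogonal Decomposition basis, from a handful of parametric "snapshot" solutions; the solution family exp(-μx) is a toy assumption standing in for expensive simulations:

```python
import numpy as np

# Sketch of a classical reduced order model ingredient: Proper Orthogonal
# Decomposition (POD). The solution family u(x; mu) = exp(-mu * x) is an
# illustrative toy, standing in for expensive parametric simulations.

x = np.linspace(0.0, 1.0, 100)
mus = np.linspace(1.0, 3.0, 20)                     # parameter samples
snapshots = np.stack([np.exp(-mu * x) for mu in mus], axis=1)  # (100, 20)

# SVD extracts an orthonormal basis; a few modes capture almost everything.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
r = 6                                               # reduced dimension
basis = U[:, :r]                                    # (100, 6)

# A solution for a new parameter is represented by r coefficients
# instead of 100 grid values -- the "reduction".
u_new = np.exp(-2.3 * x)
coeffs = basis.T @ u_new                            # reduce: 100 -> 6 numbers
u_rec = basis @ coeffs                              # reconstruct
rel_err = np.linalg.norm(u_new - u_rec) / np.linalg.norm(u_new)
print(rel_err)                                      # small reconstruction error
```

Deep learning ROMs follow the same philosophy, but replace the linear basis, or the mapping from parameters to coefficients, with neural networks.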

Physics Informed Neural Networks (PINNs) [5] were first introduced to include physical properties in deep learning models. Specifically, in addition to the little data coming from expensive simulations, the differential equation itself, which describes the phenomenon, can be inserted into the learning paradigm. With Physics Informed Neural Networks only one specific differential equation is learned, i.e. the model cannot generalise across multiple system configurations. More generally, Neural Operator Learning [6, 7, 8] has been introduced to generalise across different instances of the same kind of differential equation. Indeed, as the name suggests, Neural Operator Learning approximates the differential operator that defines the equation. Once the operator is learned by the model, it can generalise across different system configurations.
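To make the physics-informed idea concrete: the training loss penalises the residual of the differential equation at a set of collocation points. The sketch below illustrates only this loss, using a deliberately trivial one-parameter ansatz in place of a neural network (and a grid search in place of gradient descent), for the toy equation u' = -u with u(0) = 1:

```python
import numpy as np

# Toy illustration of the physics-informed loss: pick the parameter of the
# ansatz u_a(x) = exp(a * x) by minimising the squared residual of the
# equation u'(x) + u(x) = 0 at collocation points. A real PINN replaces the
# ansatz with a neural network, obtains derivatives via automatic
# differentiation, and minimises the loss by gradient descent.

x = np.linspace(0.0, 1.0, 50)           # collocation points

def pde_residual(a):
    u = np.exp(a * x)                    # ansatz evaluated at the points
    du = a * np.exp(a * x)               # its exact derivative
    return np.mean((du + u) ** 2)        # mean squared residual of u' + u = 0

candidates = np.linspace(-2.0, 0.0, 401)
losses = [pde_residual(a) for a in candidates]
a_best = candidates[np.argmin(losses)]
print(a_best)                            # close to -1: recovers u(x) = exp(-x)
```

The boundary condition u(0) = 1 is satisfied by construction here; in a real PINN it enters the loss as an additional penalty term alongside the residual and any available simulation data.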

Symmetry in Nature. The butterfly is symmetric with respect to the axis passing through its center (grey in the picture). Image source adapted from [9].

Nowadays, there are many challenges related to solving differential equations with the help of artificial intelligence. First, there are no mathematical guarantees about the errors the model will make; quantifying the uncertainty of deep learning predictions is very important to make the methodology reliable and trustworthy, allowing safe use in industry. Second, symmetries should be included in deep learning models. Symmetries can be thought of as repetitions of the same pattern: e.g. a butterfly is symmetric with respect to its center, see the picture above, so only half of the butterfly is needed to reconstruct the whole. Intuitively, the same idea extends to differential equations: symmetries reduce the space of possible solutions. Including them is fundamental to speeding up model training and obtaining solutions with guaranteed mathematical properties. Furthermore, symmetries can be used to augment data and obtain a richer dataset. Finally, interpreting and understanding the learned models is essential for responsible and trustworthy applications in industry: when making predictions about a system, the ability to interpret the learned model fosters trust and confidence in the technology.
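As a minimal sketch of symmetry-based data augmentation (the setup is illustrative and assumes the problem is invariant under the reflection x → -x), every solution snapshot then yields a second valid training sample for free:

```python
import numpy as np

# Illustrative symmetry-based data augmentation: assuming the underlying
# problem is invariant under reflection x -> -x, each solution snapshot u(x)
# yields a second valid snapshot u(-x), doubling the training set.

x = np.linspace(-1.0, 1.0, 101)                      # symmetric grid
snapshots = np.stack([np.cos(k * np.pi * x) + 0.1 * np.sin(k * np.pi * x)
                      for k in range(1, 6)])         # 5 example solutions

# On a symmetric grid, u(-x) is simply the array reversed along x.
augmented = np.concatenate([snapshots, snapshots[:, ::-1]], axis=0)
print(augmented.shape)                               # dataset size doubled
```

The same recipe applies to rotations, translations, or scalings when the equation admits them; alternatively, the symmetry can be built directly into the network architecture rather than the data.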

Deep learning will play a fundamental role in speeding up simulations, thanks to the exceptional accuracy and reduced computational cost of its algorithms. Physics Informed Neural Networks and Neural Operators are the first steps taken to tackle this challenging problem. However, there is still a long way to go and many challenges to face: uncertainty quantification, symmetry inclusion, interpretability, and many more.

References

  1. Image source website: https://xenonhealth.com/nonlinear-dynamics-of-heart-rate-variability/
  2. Image source website: https://redirect.cs.umbc.edu/2021/02/talk-modeling-and-simulation-for-reducing-risks-associated-with-extreme-weather-11-12-2-10/
  3. Image source website: https://motorsportengineer.net/jobs-in-f1-how-to-become-a-formula-1-aerodynamicist/
  4. Image source website: https://en.wikipedia.org/wiki/Atomic_orbital
  5. Raissi, Maziar, Paris Perdikaris, and George E. Karniadakis. “Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations.” Journal of Computational physics 378 (2019): 686–707.
  6. Lu, Lu, Pengzhan Jin, and George Em Karniadakis. “Deeponet: Learning nonlinear operators for identifying differential equations based on the universal approximation theorem of operators.” arXiv preprint arXiv:1910.03193 (2019).
  7. Li, Zongyi, et al. “Fourier neural operator for parametric partial differential equations.” arXiv preprint arXiv:2010.08895 (2020).
  8. Brandstetter, Johannes, Daniel Worrall, and Max Welling. “Message passing neural PDE solvers.” arXiv preprint arXiv:2202.03376 (2022).
  9. Image source website: https://easydrawingguides.com/how-to-draw-a-monarch-butterfly/
  10. Vinuesa, Ricardo, et al. “β-Variational autoencoders and transformers for reduced-order modelling of fluid flows.” (2023).
  11. Coscia, Dario, Nicola Demo, and Gianluigi Rozza. “Generative Adversarial Reduced Order Modelling.” arXiv preprint arXiv:2305.15881 (2023).


Dario Coscia
SISSA mathLab

PhD student in the MathLab group at the International School for Advanced Studies and at the University of Amsterdam studying Deep Learning methods for PDEs