Blending Neural Networks with Physics: the Physics-Informed Neural Network

Dario Coscia
SISSA mathLab
Feb 18, 2024 · 6 min read

Artificial Intelligence for the Natural Sciences progress

This is the second article of the series Deep Learning 4 Natural Sciences. In this article, we present physics-informed neural networks for solving differential equations, and show how inductive bias, i.e. extra knowledge about the system, can be used to regularize and speed up the training.

Deep Learning 4 Natural Sciences series

How will deep learning be used to speed up physical simulations?

Blending Neural Networks with Physics: the Physics Informed Neural Network

Neural Operators and Where to Find Them

PINA, a Python Software for Scientific Machine Learning

Autoregression is all you need: Autoregressive Neural Operators

Generative Models for Physical Simulations

Are we already there? Latest Advancements and Challenges in Deep Learning for Natural Sciences

The Basics of Physics Informed Machine Learning

In the first article of the series, we saw how deep learning plays a pivotal role in accelerating numerical simulations. However, the amount of data available for complex simulations is often insufficient to make AI predictions reliable and robust. Physics-informed neural networks (PINNs) [1] have been formulated to overcome the issue of missing data by incorporating physical knowledge into the neural network training. PINNs aim to approximate the solution of a differential equation with a neural network. The network is trained by solving a minimization problem in a supervised learning setting, where the physical constraints are used to define the loss function.

Consider the simple example of solving an ordinary differential equation:

d𝓊(x)/dx − 𝓊(x) = 0, with 𝓊(0) = 1,

with the analytical solution 𝓊(x) being the exponential function. Suppose we have a neural network, depending on some parameters θ, which approximates the solution; let's call it 𝓊(x; θ). If we plug the neural network into the left-hand side of the above equation, we obtain zero everywhere, while matching the initial condition, only if 𝓊(x; θ) = 𝓊(x). That's intuitive! We can therefore use the left-hand side (the residual), together with the initial condition, as a loss function to train the model:

ℒ(θ) = 1/N Σᵢ |d𝓊(xᵢ; θ)/dx − 𝓊(xᵢ; θ)|² + |𝓊(0; θ) − 1|²,

where the xᵢ are N collocation points sampled inside the domain; the loss is zero when the network learns the solution. Implementing the PINN framework for this simple problem is easy thanks to automatic differentiation, which computes the differential operator exactly. In the picture below you can see the result of training a PINN on this very simple problem, optimizing a shallow neural network with only 30 parameters using the Adam optimizer.

Physics Informed Neural Network solution compared to the real solution and its absolute error on a simple ordinary differential equation.
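A minimal PyTorch sketch of this training loop is reported below. The network size, learning rate, number of collocation points, and the interval (0, 1) are illustrative choices, not necessarily the exact settings used for the figure above.

import torch

# Minimal PINN sketch for du/dx - u = 0 with u(0) = 1 (exact solution: exp(x)).
# All hyperparameters are illustrative.
torch.manual_seed(0)
model = torch.nn.Sequential(
    torch.nn.Linear(1, 10), torch.nn.Tanh(), torch.nn.Linear(10, 1)
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(5000):
    optimizer.zero_grad()
    x = torch.rand(100, 1, requires_grad=True)  # collocation points in (0, 1)
    u = model(x)
    # du/dx computed exactly by automatic differentiation
    du_dx = torch.autograd.grad(
        u, x, grad_outputs=torch.ones_like(u), create_graph=True
    )[0]
    residual_loss = ((du_dx - u) ** 2).mean()                 # ODE residual
    ic_loss = ((model(torch.zeros(1, 1)) - 1.0) ** 2).mean()  # initial condition u(0) = 1
    loss = residual_loss + ic_loss
    loss.backward()
    optimizer.step()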

Inductive Bias in Physics Informed Machine Learning

Since the original formulation of Physics-Informed Neural Networks (PINNs) by Raissi and colleagues [1], researchers have made several advancements to enhance the performance and versatility of PINNs. Indeed, it is quite common to have some external knowledge about the solution, such as symmetry properties, periodicity, or boundedness, which can be added to the PINN methodology. Introducing inductive bias, i.e. external knowledge of the system properties, into the learning strategy eases the training and delivers better results.

As before, to convey the idea of inductive bias we consider a specific problem, namely the one-dimensional Helmholtz equation, which can be written in the form

d²u(x)/dx² + k²u(x) = f(x), x ∈ (0, 2),

where k is a given wavenumber and the forcing term f is chosen so that the analytical solution is u(x) = sin(πx)cos(3πx). The equation is supplemented with C∞ periodic boundary conditions, which force the solution to be infinitely differentiable and periodic when extended to the whole real line x ∈ (−∞, ∞). Due to the infinite differentiability condition on the boundaries, we cannot rewrite the loss function as before, since we would need infinitely many terms… A possible solution, diverging from the original PINN formulation, is to use coordinate augmentation. In coordinate augmentation, a coordinate transformation x → Φ(x) is used to ensure the infinite differentiability condition. It turns out (see [2]) that this can be easily achieved by writing Φ(x) = [1, cos(πx), sin(πx)] and training with these extra augmented coordinates for the interior points only. Below we can see the plot of the Helmholtz problem, comparing the PINN prediction with the analytical solution.

Physics Informed Neural Network solution compared to the real solution and its absolute error on the Helmholtz equation.
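A possible PyTorch sketch of the coordinate augmentation is the following: the input x is first mapped to Φ(x) = [1, cos(πx), sin(πx)] and then fed to the network, so the output is periodic with period 2 by construction. The layer sizes and the check at the end are illustrative assumptions, not the exact setup used for the figure.

import torch

class PeriodicAugmentation(torch.nn.Module):
    # Coordinate augmentation x -> [1, cos(pi x), sin(pi x)], as in [2].
    def forward(self, x):
        return torch.cat(
            [torch.ones_like(x), torch.cos(torch.pi * x), torch.sin(torch.pi * x)],
            dim=-1,
        )

# The PINN now takes the three augmented coordinates as input (sizes are illustrative).
model = torch.nn.Sequential(
    PeriodicAugmentation(),
    torch.nn.Linear(3, 20), torch.nn.Tanh(),
    torch.nn.Linear(20, 1),
)

# Shifting the input by the period (2) leaves the prediction unchanged by construction.
x = torch.rand(5, 1)
assert torch.allclose(model(x), model(x + 2.0), atol=1e-4)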

A nice property of this technique is that it generalizes by construction to the whole real line, since the learned function is periodic by design. Indeed, if we test the model trained on x ∈ (0, 2) on a bigger interval, for example x ∈ (−4, 4), we can expect no loss in performance.

Introducing the coordinate augmentation, which forces the periodicity of the function, ensures that the PINN solution is periodic on the whole real line.

From the picture it is evident that not only the neural network solution but also its associated error is periodic. There are other ways to perform coordinate augmentation, such as Fourier features [3] for multi-scale differential equations, or learnable features [4] for augmenting the original coordinates.
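A rough sketch of the Fourier-feature idea is given below: the input is embedded with random, fixed frequencies before entering the network. The class name, the number of features, and the frequency scale are illustrative assumptions; see [3] for the precise formulation.

import torch

class FourierFeatures(torch.nn.Module):
    # Random Fourier embedding x -> [cos(2*pi*B x), sin(2*pi*B x)] with fixed frequencies B.
    def __init__(self, in_dim=1, n_features=32, scale=5.0):
        super().__init__()
        # frequencies are sampled once and kept fixed (not trained)
        self.register_buffer("B", scale * torch.randn(in_dim, n_features))

    def forward(self, x):
        proj = 2 * torch.pi * x @ self.B
        return torch.cat([torch.cos(proj), torch.sin(proj)], dim=-1)

# the network now acts on the 64 Fourier features instead of the raw coordinate
model = torch.nn.Sequential(
    FourierFeatures(), torch.nn.Linear(64, 20), torch.nn.Tanh(), torch.nn.Linear(20, 1)
)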

Inductive bias can be exploited not only through coordinate augmentation but also in other ways. For example, in [5, 6] the authors developed different ways to ensure that the neural network solution satisfies specific boundary terms exactly (hard constraints); a minimal sketch of this idea is given after this paragraph. Another possibility is to incorporate extra physical constraints in the loss, such as orthonormality [7]. It is also possible [8] to add the underlying PDEs' symmetries to the PINN's training objective, which can drastically reduce the number of scattered domain data and speed up the training. Interestingly, all of these techniques, which might look different at first glance, can be combined in a unique operator preconditioning framework [9]. Indeed, adding inductive bias eases the training, and the authors in [9] show that this is because the condition number of a specific differential operator is reduced when the above techniques are applied!
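As an example of a hard constraint, the initial condition of the first problem, 𝓊(0) = 1, can be imposed exactly by construction through a simple ansatz, so that no penalty term is needed in the loss. The ansatz below is one common choice, not necessarily the exact formulation used in [5, 6].

import torch

net = torch.nn.Sequential(
    torch.nn.Linear(1, 20), torch.nn.Tanh(), torch.nn.Linear(20, 1)
)

def u_hard(x):
    # Ansatz u(x) = 1 + x * N(x): satisfies u(0) = 1 for any network weights,
    # so only the ODE residual has to be minimized during training.
    return 1.0 + x * net(x)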

In conclusion, we have seen that blending physics, and more generally inductive bias about the underlying differential equation, into neural network training allows us to solve the equation efficiently. The source code associated with this blog post can be found in the GitHub repository of the series. The code, based on PyTorch, is a very simple starting point for PINNs; for more complex tasks we suggest the software PINA [10], an open-source Python library providing an intuitive interface for solving differential equations with neural networks.

References

  1. Raissi, Maziar, Paris Perdikaris, and George E. Karniadakis. “Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations.” Journal of Computational physics 378 (2019): 686–707.
  2. Dong, Suchuan, and Naxian Ni. “A method for representing periodic functions and enforcing exactly periodic boundary conditions with deep neural networks.” Journal of Computational Physics 435 (2021): 110242.
  3. Wang, Sifan, Hanwen Wang, and Paris Perdikaris. “On the eigenvector bias of Fourier feature networks: From regression to solving multi-scale PDEs with physics-informed neural networks.” Computer Methods in Applied Mechanics and Engineering 384 (2021): 113938.
  4. Demo, Nicola, Maria Strazzullo, and Gianluigi Rozza. “An extended physics informed neural network for preliminary analysis of parametric optimal control problems.” Computers & Mathematics with Applications 143 (2023): 383–396.
  5. Moseley, Ben, Andrew Markham, and Tarje Nissen-Meyer. “Finite Basis Physics-Informed Neural Networks (FBPINNs): a scalable domain decomposition approach for solving differential equations.” Advances in Computational Mathematics 49.4 (2023): 62.
  6. Lu, Lu, et al. “Physics-informed neural networks with hard constraints for inverse design.” SIAM Journal on Scientific Computing 43.6 (2021): B1105-B1132.
  7. Kim, T., and S.-Y. Yun. “Revisiting orthogonality regularization: A study for convolutional neural networks in image classification.” IEEE Access 10 (2022): 69741–69749.
  8. Akhound-Sadegh, T., L. Perreault-Levasseur, J. Brandstetter, M. Welling, and S. Ravanbakhsh. “Lie Point Symmetry and Physics Informed Networks.” Advances in Neural Information Processing Systems 36 (2023).
  9. De Ryck, Tim, et al. “An operator preconditioning perspective on training in physics-informed machine learning.” arXiv preprint arXiv:2310.05801 (2023).
  10. Coscia, Dario, et al. “Physics-Informed Neural networks for Advanced modeling.” Journal of Open Source Software 8.87 (2023): 5352.


Dario Coscia
SISSA mathLab

PhD student in the MathLab group at the International School for Advanced Studies and at the University of Amsterdam studying Deep Learning methods for PDEs