Learned Turbulence Modelling with Differentiable Fluid Solvers: Physics-based Loss-functions and Optimisation Horizons
We train turbulence models for unsteady simulations with convolutional neural networks, which improve under-resolved solutions to the incompressible Navier-Stokes equations at simulation time. We develop a differentiable numerical solver that propagates optimisation gradients through solver steps. This property proves crucial: models that unroll more solver steps during training exhibit superior stability and accuracy. However, back-propagating gradients through many solver steps can lead to numerical instabilities. We apply a gradient stopping technique that mitigates these instabilities, similar to the gradient truncation known from recurrent neural networks. Furthermore, we introduce loss functions based on turbulence physics that improve model accuracy. Our approach is applied to three two-dimensional flows. Compared to no-model simulations, our models improve long-term a-posteriori statistics while also yielding performance gains at inference time.
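To illustrate the gradient stopping idea, the following is a minimal sketch (an assumption for exposition, not the authors' code): a scalar decay step stands in for the fluid solver, the coefficient `c` stands in for the learned correction, and forward-mode derivatives are accumulated by hand so no autodiff library is needed. Resetting the derivative every few steps mimics truncating the back-propagation chain through an unrolled solver.

```python
def unroll(c, u0, dt, n_steps, truncate_every=None):
    """Unroll n_steps of u <- u + dt*c*u, tracking du/dc by hand.

    If truncate_every is set, the accumulated derivative is reset every
    that many steps, mimicking gradient stopping/truncation: earlier
    states are treated as constants, so gradients never flow through
    arbitrarily long chains of solver steps.
    """
    u, du_dc = u0, 0.0
    for step in range(1, n_steps + 1):
        # product rule: d(u + dt*c*u)/dc = du_dc*(1 + dt*c) + dt*u
        du_dc = du_dc * (1.0 + dt * c) + dt * u
        u = u * (1.0 + dt * c)
        if truncate_every is not None and step % truncate_every == 0 and step < n_steps:
            du_dc = 0.0  # stop-gradient: cut the chain here
    return u, du_dc

# Full unroll vs. unroll with truncation every 4 steps (toy values):
u_full, g_full = unroll(c=-0.5, u0=1.0, dt=0.1, n_steps=8)
u_trunc, g_trunc = unroll(c=-0.5, u0=1.0, dt=0.1, n_steps=8, truncate_every=4)
```

Note that truncation leaves the forward simulation unchanged (`u_full == u_trunc`) but shortens the gradient chain (`|g_trunc| < |g_full|`), which is the mechanism that tames the instabilities of long unrollments.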