CFC2023

Student

Model reduction with latent variables for discretely filtered equations

  • Rosenberger, Henrik (Centrum Wiskunde & Informatica)
  • Sanderse, Benjamin (Centrum Wiskunde & Informatica)


A paramount challenge in the simulation of fluid flows is their multiscale nature: all length and time scales interact with each other. As a consequence, we cannot neglect the smallest scales even if we are only interested in large-scale dynamics. Because resolving those smallest scales exactly is unfeasibly expensive for many applications, the effect of the small scales on the large-scale dynamics is approximated by so-called closure models. Physics-based closure models involving model constants have been derived [1]; these constants can be determined, for example, via machine learning [2]. However, these approaches often generalize poorly beyond the training data and can be ill-posed and unstable [3]. Instead, we present a new approach with two distinctive ingredients: discrete filtering and latent variables. The first ingredient means that instead of filtering at the continuous (PDE) level, formulating a continuous closure model, and then discretizing, we first discretize the PDE and then filter at the discrete level. This helps circumvent issues of instability and ill-posedness and simultaneously narrows the closure problem. The second ingredient pertains to how we devise the discrete closure term. Most existing closure terms are only a function of the resolved scales. However, the Mori-Zwanzig formalism states that these resolved scales are in general not sufficient to describe their own dynamics [4]. This has led researchers to consider a memory term that comprises the history of the resolved scales [5]. In contrast, we augment the filtered model by a system of equations for latent variables that account for the effects of the unresolved scales on the resolved scales. The dimension of the latent-variable system is of the order of that of the resolved scales, and thus much smaller than the dimension of the unresolved scales.
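The "discretize first, then filter" idea can be illustrated with a minimal sketch (our own illustrative example, not the authors' actual setup): take a fine-grid semi-discretization of 1D periodic diffusion, apply a local-averaging filter matrix W, and compare the exactly filtered dynamics with a coarse-grid model. The mismatch between the two is precisely the discrete closure term. All grid sizes and the choice of averaging filter here are assumptions.

```python
import numpy as np

# Sketch of discrete filtering on 1D periodic diffusion (illustrative only;
# the grid sizes and the averaging filter are assumptions, not the paper's setup).
N, M = 64, 16                 # fine and coarse grid sizes
dx, dX = 1.0 / N, 1.0 / M

def diffusion_matrix(n, h):
    """Second-order finite-difference diffusion operator on a periodic grid."""
    D = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
    D[0, -1] = D[-1, 0] = 1.0  # periodic boundary conditions
    return D / h**2

D_fine = diffusion_matrix(N, dx)   # fine-grid semi-discretization: du/dt = D_fine u
D_bar = diffusion_matrix(M, dX)    # coarse model for the filtered variable

# Discrete filter W: average r neighbouring fine cells onto each coarse cell
r = N // M
W = np.kron(np.eye(M), np.full((1, r), 1.0 / r))

# Exactly filtered dynamics W (D_fine u) vs. coarse model D_bar (W u):
# their difference is the discrete closure term that must be modelled.
x = np.arange(N) * dx
u = np.sin(2 * np.pi * x) + 0.1 * np.cos(6 * np.pi * x)
closure = W @ D_fine @ u - D_bar @ (W @ u)
```

Because the closure term is defined entirely at the discrete level, there is no commutation error between filtering and discretization to account for, which is one way to see how this formulation narrows the closure problem.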
The two main advantages of this formulation are that (i) explicitly modelling memory terms as in [5] is no longer necessary, and (ii) we can enforce energy conservation (or dissipation) over the sum of the resolved and unresolved scales, and hence obtain energy stability. For modelling these latent variables, we consider both equation-driven and data-driven approaches, such as POD-Galerkin model reduction and machine learning.
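One way such energy stability can be obtained is sketched below, under the strong simplifying assumption of linear dynamics: couple the resolved variables and the latent variables through a skew-symmetric off-diagonal block, so that the coupling terms cancel exactly in the total energy balance and only the (conservative or dissipative) diagonal blocks remain. The block structure, dimensions, and operators are our illustrative assumptions, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_res, n_lat = 8, 8           # resolved / latent dimensions (assumed comparable)

# Block operator with skew-symmetric coupling: the B and -B^T blocks cancel in
# d/dt (|u|^2 + |s|^2), so only A and C can change the total energy.
A0 = rng.standard_normal((n_res, n_res))
A = 0.5 * (A0 - A0.T)         # skew-symmetric: energy-conserving resolved part
B = rng.standard_normal((n_res, n_lat))   # coupling to the latent variables
C = -0.5 * np.eye(n_lat)      # dissipative latent dynamics

L = np.block([[A, B], [-B.T, C]])

def rhs(x):
    return L @ x

def rk4_step(x, dt):
    """Classical fourth-order Runge-Kutta step."""
    k1 = rhs(x)
    k2 = rhs(x + dt / 2 * k1)
    k3 = rhs(x + dt / 2 * k2)
    k4 = rhs(x + dt * k3)
    return x + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

x = rng.standard_normal(n_res + n_lat)
E0 = x @ x                    # total energy over resolved + latent variables
for _ in range(200):
    x = rk4_step(x, 1e-2)
E_final = x @ x               # bounded by E0: energy is dissipated, never created
```

The energy bound here follows from the structure of L alone, independent of how A, B, and C are fitted, which is what makes the stability guarantee compatible with data-driven modelling of the latent dynamics.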