CFC2023

Towards model-based deep reinforcement learning for accelerated learning from simulations

  • Weiner, Andre (TU Braunschweig)
  • Geise, Janis (TU Braunschweig)


Deep reinforcement learning (DRL) algorithms achieved their first major breakthrough in the field of computer gaming, but in recent years, the same algorithms have also become popular in the fluid mechanics community, for example, to solve flow control problems [1] or to improve turbulence modeling [2]; refer also to the references in these articles. One example demonstrating the huge potential and flexibility of DRL is its recent application to algorithmic discovery [3]. In the field of fluid mechanics, DRL has mostly been combined with computational fluid dynamics (CFD). However, due to the necessity of repeatedly running time-resolved simulations, only relatively simple test cases that can be solved in a matter of minutes or a few hours have been considered so far. On the path to applying DRL to more complex scenarios, the sample efficiency of the available algorithms must be improved. One approach to dealing with computationally expensive environments is model-based DRL, in which the environment/simulation is partially replaced with an additional model. However, model bias can quickly lead to a failure of policy optimization [4]. One way to deal with model uncertainty is to learn from model ensembles [5]. In our contribution, we discuss potential modeling approaches for CFD environments and present results for vanilla and ensemble model-based DRL. We believe that model-based DRL will be a crucial enabler for the success of DRL in technically relevant applications of flow control and turbulence modeling.
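
To make the ensemble idea concrete, the following minimal sketch illustrates one common variant of ensemble-based environment modeling for model-based DRL; it is not the authors' implementation. PyTorch is assumed as the framework, and all names, network sizes, the random placeholder policy, and the disagreement measure are illustrative assumptions.

# Minimal sketch of ensemble-based environment modeling for model-based DRL.
# All names and network sizes below are illustrative assumptions.
import torch
import torch.nn as nn

class DynamicsModel(nn.Module):
    """Simple MLP predicting the next state from (state, action)."""
    def __init__(self, state_dim: int, action_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, state: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([state, action], dim=-1))

@torch.no_grad()
def ensemble_rollout(models, state, policy, n_steps):
    """Roll out a policy in the model ensemble instead of the simulation;
    at each step, one ensemble member is sampled to propagate the state,
    and the spread across members serves as an uncertainty proxy."""
    trajectory, disagreement = [], []
    for _ in range(n_steps):
        action = policy(state)
        preds = torch.stack([m(state, action) for m in models])  # (E, B, S)
        disagreement.append(preds.std(dim=0).mean().item())
        idx = torch.randint(len(models), (1,)).item()  # sample a member
        state = preds[idx]
        trajectory.append((state, action))
    return trajectory, disagreement

if __name__ == "__main__":
    state_dim, action_dim, n_models = 8, 2, 5
    models = [DynamicsModel(state_dim, action_dim) for _ in range(n_models)]
    # Placeholder policy; in practice, this would be the trained DRL agent.
    policy = lambda s: torch.tanh(torch.randn(s.shape[0], action_dim))
    s0 = torch.randn(4, state_dim)  # batch of 4 initial states
    traj, sigma = ensemble_rollout(models, s0, policy, n_steps=10)
    print(f"steps: {len(traj)}, mean disagreement: {sum(sigma)/len(sigma):.3f}")

In such a setup, rollouts with large ensemble disagreement can be truncated or down-weighted during policy optimization, which is one way an ensemble mitigates the model bias mentioned above.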