The complexity of today's models and simulations in computational science and engineering has increased dramatically, often outpacing the growth in computing power. The situation becomes even worse when many simulation runs for different parameter configurations are required, as in optimization, uncertainty quantification, and statistical inverse problems. To cope with this increase in complexity, model order reduction methods construct low-cost surrogates of large-scale simulations by incorporating additional knowledge of the problem at hand. The problem is then solved not in a general, high-dimensional solution space but in a problem-dependent, low-dimensional subspace that, at least approximately, contains the solution. The challenge is to exploit this additional knowledge of the problem efficiently in order to find the most appropriate subspace.
Since in many cases this additional knowledge is given in the form of data, we propose to make use of machine learning methods. In particular, we employ unsupervised learning methods (clustering and feature extraction) to detect characteristic system behaviors of the large-scale simulation, for which we then construct multiple local subspaces. As the computation proceeds, we classify the current state of the system into one of the learned regimes (a supervised learning problem) and use the corresponding local subspace for the approximation. The dimensions of the local subspaces, and thus the computational costs, remain low even though the system may exhibit a wide range of different behaviors. We demonstrate our approach on a simulation of a jet engine described by a nonlinear reacting flow of an H2-air flame.
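The workflow described above can be illustrated with a minimal sketch: cluster snapshot data to detect regimes, build a local subspace (here a truncated POD/SVD basis) per cluster, and classify a new state by nearest centroid to pick the matching subspace. All function names, the synthetic two-regime data, and the specific choices (k-means, nearest-centroid classification, basis dimension `r`) are illustrative assumptions, not the paper's actual implementation.

```python
# Sketch of localized model order reduction (illustrative, not the authors' code):
# 1) cluster snapshots (unsupervised), 2) build a local POD basis per cluster,
# 3) classify the current state (supervised) and project onto its local subspace.
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, k, iters=20):
    # Simple k-means on the columns of X (X is n x m: m snapshots of dimension n).
    centers = X[:, rng.choice(X.shape[1], k, replace=False)].copy()
    for _ in range(iters):
        # Distances of every snapshot to every center: shape (m, k).
        d = np.linalg.norm(X[:, :, None] - centers[:, None, :], axis=0)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[:, j] = X[:, labels == j].mean(axis=1)
    return centers, labels

def local_bases(X, labels, k, r):
    # Truncated SVD (POD) basis of dimension r for each cluster of snapshots.
    bases = []
    for j in range(k):
        U, _, _ = np.linalg.svd(X[:, labels == j], full_matrices=False)
        bases.append(U[:, :r])
    return bases

def classify(x, centers):
    # Nearest-centroid classification of the current state x.
    return int(np.linalg.norm(centers - x[:, None], axis=0).argmin())

# Synthetic snapshots from two artificial "regimes" (assumed data, for illustration).
n, m = 100, 40
X = np.hstack([rng.normal(0.0, 1.0, (n, m)), rng.normal(5.0, 1.0, (n, m))])

centers, labels = kmeans(X, k=2)
bases = local_bases(X, labels, k=2, r=5)

# Online phase: pick the local subspace for the current state and project onto it.
x = X[:, 0]
j = classify(x, centers)
x_approx = bases[j] @ (bases[j].T @ x)
rel_err = np.linalg.norm(x - x_approx) / np.linalg.norm(x)
```

The key point of the localization is visible in the last lines: only the low-dimensional basis of the detected regime is used, so the online cost depends on `r`, not on the number of regimes.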