Developing surrogate models for dynamical systems with improved accuracy and reduced computational cost and complexity using deep neural networks
The problem of developing surrogate models for dynamical systems is an important task in many fields of science and engineering. The goal is to find a simplified, approximate model of a complex system that captures its essential dynamics while reducing the computational cost and complexity. The use of deep neural networks (DNNs) for this purpose has gained significant attention in recent years due to their ability to learn complex nonlinear mappings.

Consider the following system:
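(The generic discrete-time form below stands in for the system's exact equations; $f$ and $g$ denote the nonlinear state-transition and output maps.)

$$
\begin{aligned}
x_{k+1} &= f(x_k, u_k),\\
y_k &= g(x_k, u_k),
\end{aligned}
$$

where $x_k$ is the state, $u_k$ the input, and $y_k$ the output at time step $k$.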
The specific problem I've formulated is to find mappings between the coordinates of the original nonlinear system and those of a reduced-order linear time-invariant (LTI) system. Since these mappings are nonlinear, we also need to find their inverse mappings.
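In the illustrative notation used below (the symbols $\varphi$, $z$, and the reduced system matrices are assumptions for exposition), the forward map sends the state into the reduced coordinates and its approximate inverse maps back:

$$
z_k = \varphi(x_k), \qquad x_k \approx \varphi^{-1}(z_k),
$$

while the reduced LTI system evolves in the latent coordinates:

$$
z_{k+1} = A z_k + B u_k, \qquad \hat{y}_k = C z_k + D u_k.
$$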
To approximate these mappings, I propose to use DNNs, specifically a network structure inspired by autoencoders. Autoencoders are a type of neural network trained to learn a compact, low-dimensional representation of the input data, known as the bottleneck or latent representation. This structure is particularly well suited to model reduction because it allows the network to learn a compact, interpretable representation of the input-output behavior of the original system. Motivated by autoencoders, I use the following network:
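Composing these pieces yields the one-step surrogate prediction: the encoder $\varphi$ maps the state into the latent space, the reduced LTI model advances it one step, and the decoder $\varphi^{-1}$ maps it back:

$$
\hat{x}_{k+1} = \varphi^{-1}\big(A\,\varphi(x_k) + B\,u_k\big).
$$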
The objective is to minimize a loss that ties the autoencoder reconstruction to the reduced LTI dynamics.
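A typical decomposition of such a loss (the specific terms and the weights $\alpha$, $\beta$ here are an assumed, representative choice) combines the reconstruction error, the linearity of the latent dynamics, and the one-step prediction error:

$$
\mathcal{L} = \underbrace{\big\|x_k - \varphi^{-1}(\varphi(x_k))\big\|_2^2}_{\text{reconstruction}}
+ \alpha \underbrace{\big\|\varphi(x_{k+1}) - A\,\varphi(x_k) - B\,u_k\big\|_2^2}_{\text{latent linearity}}
+ \beta \underbrace{\big\|x_{k+1} - \varphi^{-1}\big(A\,\varphi(x_k) + B\,u_k\big)\big\|_2^2}_{\text{prediction}},
$$

where the terms are averaged over training triples $(x_k, u_k, x_{k+1})$ and $\alpha$, $\beta$ are weighting hyperparameters.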
I used TensorFlow subclassing to build and train the neural network; a snapshot of the network is shown below.
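As a minimal sketch of what such a subclassed model can look like (the class name `LTISurrogate`, the layer widths, and the loss weights `alpha` and `beta` are illustrative assumptions, with the loss following the representative form above):

```python
import tensorflow as tf

class LTISurrogate(tf.keras.Model):
    """Autoencoder-style surrogate: encoder -> reduced LTI step -> decoder."""

    def __init__(self, state_dim, latent_dim, hidden_units=64, alpha=1.0, beta=1.0):
        super().__init__()
        self.alpha, self.beta = alpha, beta  # loss weights (assumed values)
        # Encoder phi: x -> z
        self.encoder = tf.keras.Sequential([
            tf.keras.layers.Dense(hidden_units, activation="relu"),
            tf.keras.layers.Dense(latent_dim),
        ])
        # Decoder phi^{-1}: z -> x
        self.decoder = tf.keras.Sequential([
            tf.keras.layers.Dense(hidden_units, activation="relu"),
            tf.keras.layers.Dense(state_dim),
        ])
        # Learnable reduced LTI dynamics z_{k+1} = A z_k + B u_k,
        # implemented as bias-free linear layers.
        self.A = tf.keras.layers.Dense(latent_dim, use_bias=False)
        self.B = tf.keras.layers.Dense(latent_dim, use_bias=False)

    def call(self, inputs):
        x_k, u_k = inputs
        z_next = self.A(self.encoder(x_k)) + self.B(u_k)  # one latent LTI step
        return self.decoder(z_next)                       # predicted next state

    def train_step(self, data):
        (x_k, u_k), x_next = data
        with tf.GradientTape() as tape:
            z_k = self.encoder(x_k)
            z_next_pred = self.A(z_k) + self.B(u_k)
            # Three-term loss: reconstruction, latent linearity, prediction.
            recon = tf.reduce_mean(tf.square(x_k - self.decoder(z_k)))
            linear = tf.reduce_mean(tf.square(self.encoder(x_next) - z_next_pred))
            pred = tf.reduce_mean(tf.square(x_next - self.decoder(z_next_pred)))
            loss = recon + self.alpha * linear + self.beta * pred
        grads = tape.gradient(loss, self.trainable_variables)
        self.optimizer.apply_gradients(zip(grads, self.trainable_variables))
        return {"loss": loss, "recon": recon, "linear": linear, "pred": pred}
```

Overriding `train_step` keeps the custom three-term loss inside the model, so the standard `compile`/`fit` workflow can still be used for training.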
To train the network, I need to collect a large dataset of state transitions and inputs from the original system. Once the network is trained, it can be used to predict the outputs of the original system for new inputs, or to map inputs and outputs to and from the reduced LTI system, as sketched below.
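A hypothetical end-to-end run of the sketch above, using a toy damped pendulum as a stand-in for the original system (the system, dataset sizes, and training settings are placeholders for illustration):

```python
import numpy as np
import tensorflow as tf

# Toy nonlinear system (a damped, forced pendulum discretized with Euler),
# standing in for the real system from which training data would be collected.
def true_step(x, u, dt=0.05):
    theta, omega = x[..., 0], x[..., 1]
    d_omega = -np.sin(theta) - 0.1 * omega + u[..., 0]
    return np.stack([theta + dt * omega, omega + dt * d_omega], axis=-1)

# Sample random states and inputs, then record the resulting next states.
rng = np.random.default_rng(seed=0)
x_k = rng.uniform(-1.0, 1.0, size=(10_000, 2)).astype("float32")
u_k = rng.uniform(-1.0, 1.0, size=(10_000, 1)).astype("float32")
x_next = true_step(x_k, u_k).astype("float32")

# Train the surrogate on (state, input) -> next-state pairs.
model = LTISurrogate(state_dim=2, latent_dim=4)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3))
model.fit((x_k, u_k), x_next, batch_size=256, epochs=20)

# The trained surrogate now predicts next states for unseen inputs.
x_pred = model((x_k[:5], u_k[:5]))
```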
It is important to note that the accuracy of the surrogate model is highly dependent on the quality of the training data and the architecture of the neural network. Choosing appropriate network architectures and training algorithms, as well as pre-processing and cleaning the data, is crucial for achieving good performance.