A dynamic weighted loss function for enhancing the performance of neural networks
Abstract
In the machine learning process, hyperparameters are chosen so as to decrease the prediction error and improve convergence. However, optimized hyperparameters can only enhance the performance of a neural network up to a point.
In this work, the datasets used for the numerical experiments arise from the resolution of partial differential equations (PDEs) defined on a spatial domain. We propose a dynamic weighted loss function-based approach for neural networks trained to learn the solutions of these PDEs. This is a two-step process: first we train for a small number of epochs in a classical way, then the dynamic weighted loss function replaces the classical loss function by leveraging information from past training error histories. To validate this method, we carry out numerical experiments with different neural networks on datasets arising from different physical settings, including the Goldstein equation (Bensalah, Joly, Mercier, 2022) and the radiative transfer equation (Modest, 1993). To demonstrate the relevance of this approach, we compare a neural network model using a classical loss function with one using the dynamic weighted loss function, both with and without hyperparameter optimization.
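The abstract only outlines the two-step idea, so the following is a minimal PyTorch sketch under our own assumptions: after a classical warm-up phase, per-output-component weights are derived from an exponential moving average of past errors, so components that have been harder to fit receive larger weights. The names (`WARMUP_EPOCHS`, `ema_error`, `BETA`) and the weighting formula are illustrative, not the paper's exact scheme.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(3, 64), nn.Tanh(), nn.Linear(64, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Toy data standing in for a PDE-solution dataset: inputs x, targets y.
x = torch.randn(512, 3)
y = torch.randn(512, 2)

WARMUP_EPOCHS = 10          # phase 1: classical (unweighted) training
EPOCHS = 100
BETA = 0.9                  # smoothing factor for the error history
ema_error = torch.ones(2)   # running per-component error history

for epoch in range(EPOCHS):
    optimizer.zero_grad()
    per_component = ((model(x) - y) ** 2).mean(dim=0)  # error per output

    if epoch < WARMUP_EPOCHS:
        loss = per_component.mean()                    # classical MSE
    else:
        # Phase 2: weights built from the past error history, rescaled
        # so their mean is 1 (an assumed normalization choice).
        weights = ema_error / ema_error.sum() * len(ema_error)
        loss = (weights * per_component).mean()

    loss.backward()
    optimizer.step()

    # Update the error history after each epoch (no gradient tracking).
    with torch.no_grad():
        ema_error = BETA * ema_error + (1 - BETA) * per_component
```

In this sketch the weights are detached from the computation graph (they are updated under `torch.no_grad()`), so gradients flow only through the current errors; how the history enters the weights is exactly the design choice the paper's dynamic scheme addresses.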