Presentation

MEDIA-2024: ModElling, Partial DIfferential Equations and Artificial Intelligence

Artificial intelligence (AI) is emerging as a powerful and fast alternative to traditional methods of numerically solving partial differential equations (PDEs). Indeed, AI-based methods enable calculations to be carried out in real time, an area where traditional numerical schemes often reach their limits due to their complexity and the computing time they require. However, these numerical methods have the advantage of providing highly accurate results, a quality that AI does not always achieve, particularly when the solutions require in-depth mathematical rigour and fine resolution of details.

In this context, the 2024 conference aims to explore and present innovative research that combines traditional methods for solving PDEs with AI techniques. The aim is to demonstrate how this synergy can lead to calculations that are both fast and accurate, paving the way for real-time applications that were previously impractical. For example, these hybrid tools can improve forecasting capabilities, optimise risk management, and offer new perspectives on the analysis of the socio-economic impacts of various phenomena.

The fields of application of this approach are vast: from extreme weather events and global warming to medicine and engineering. The integration of AI techniques into the solution of PDEs represents not only a technological leap forward, but also an opportunity to address some of the most pressing challenges of our time through fast and accurate calculations, enabling more informed and proactive decision-making.

Main speakers

Victor Michel-Dansac, Université de Strasbourg, Institut de Recherche Mathématique Avancée

Hybrid methods for elliptic and hyperbolic PDEs
The goal of this talk is to give an overview of two new results in the development of hybrid methods for elliptic and hyperbolic partial differential equations (PDEs). A hybrid method combines classical numerical analysis techniques (finite element method (FEM), discontinuous Galerkin (DG), ...) with tools from machine learning (ML). The first part of this talk is dedicated to a broad presentation of such ML tools, including a common framework to represent PDE approximators, be they classical or ML-based. Then, in a second part, we explain how to use a physics-informed prior to lower the error constant of the FEM while keeping the same order of accuracy. Thanks to the FEM framework applied to elliptic PDEs, we rigorously prove that our correction improves the FEM error constant by a factor depending on the prior quality. If time permits, in a third part, we discuss how to enhance the DG basis with physics-informed priors, to increase the resolution of near-equilibrium solutions to hyperbolic systems of balance laws. Once again, we rigorously prove that the error constant is improved. Numerical illustrations will be presented throughout the talk to validate our results.
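As a rough illustration of the correction idea (a minimal sketch, not the talk's actual FEM construction): given a prior p approximating the solution, one can solve for the remainder u - p instead of u, so the size of the discrete problem's data, and hence the error, scales with the prior quality. The sketch below uses a 1D finite-difference Poisson problem; the prior and all names are illustrative.

```python
# Sketch: additive correction with a physics-informed prior on -u'' = f,
# u(0) = u(1) = 0. We solve for the remainder e = u - p, which satisfies
# -e'' = f + p''. A 1D finite-difference stand-in for the FEM setting.
import numpy as np

def solve_poisson(rhs, n):
    """Solve -v'' = rhs on (0,1), v(0)=v(1)=0, second-order differences."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1 - h, n)
    A = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    return x, np.linalg.solve(A, rhs(x))

f = lambda x: np.pi**2 * np.sin(np.pi * x)   # exact solution u = sin(pi x)
p = lambda x: 1.01 * np.sin(np.pi * x)       # imperfect 'learned' prior
# remainder e = u - p solves -e'' = f + p'';  here p'' = -pi^2 * p
x, e = solve_poisson(lambda t: f(t) - np.pi**2 * p(t), 200)
u_corr = p(x) + e                            # corrected approximation
print(np.max(np.abs(u_corr - np.sin(np.pi * x))))
```

Because the remainder is 100 times smaller than the solution itself, the discretization error of the corrected approximation shrinks by the same factor, while the order of the scheme is unchanged.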


Bruno Desprès, Sorbonne Université, Laboratoire Jacques-Louis Lions

Lipschitz stability of Deep Neural Networks in view of applications
The use of functions constructed by deep neural networks is attractive for data-driven applications, and is currently an active area of research. However, it is almost universally observed that the stability of these functions is difficult to guarantee. In fact, we can expect this stability problem to grow in tandem with applications in science and technology. With Moreno Pintore, we focused on deriving computable upper bounds of the Lipschitz constant of deep neural networks and obtained new estimators. The estimators are based on the optimal computation of the norm of products of matrices. The optimality of the estimators will be illustrated with numerical tests and compared with the literature.
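For context, the standard baseline that such estimators improve upon is the naive bound: the Lipschitz constant of a network with 1-Lipschitz activations is at most the product of the spectral norms of its weight matrices. A minimal sketch of that baseline (not the new estimators from the talk):

```python
# Sketch: naive Lipschitz upper bound for a network with 1-Lipschitz
# activations (e.g. ReLU): Lip(f) <= ||W_L||_2 * ... * ||W_1||_2.
# Sharper estimators (as in the talk) tighten this product bound.
import numpy as np

def naive_lipschitz_bound(weights):
    """Product of the spectral norms (largest singular values) of the layers."""
    bound = 1.0
    for W in weights:
        bound *= np.linalg.norm(W, 2)   # ord=2: spectral norm
    return bound

rng = np.random.default_rng(0)
layers = [rng.standard_normal((8, 4)), rng.standard_normal((4, 4)),
          rng.standard_normal((1, 4))]
print(naive_lipschitz_bound(layers))
```

The product bound is easy to compute but typically very pessimistic, which is what motivates tighter, still-computable estimators.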

Hugo Frezat, Université Paris Cité, Institut de Physique du Globe de Paris

Learning stable and accurate subgrid-scale models for turbulent systems
Realistic simulations of complex turbulent systems, like stellar convection and ocean-atmosphere dynamics, remain far beyond reach due to the resolution required to capture all scales of turbulent motion. One approach to accelerate these simulations is through modeling unresolved (or sub-grid) processes. While this problem dates back to 1963, progress has been slow. However, recent advances in deep learning have made it possible to leverage high-resolution simulations to frame this as a supervised learning problem. In this talk, we'll explore various strategies to formulate this supervised problem. In particular, we'll demonstrate the importance of using the solver dynamics during training to ensure stable model performance.
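The supervised framing can be sketched as follows: filter a high-resolution field to obtain a coarse field, and take as target the subgrid term the coarse model cannot see. All names and the 1D setup below are illustrative, not the configuration from the talk.

```python
# Sketch: building (coarse field, subgrid term) supervised pairs from a
# high-resolution 1D field by box filtering. A model would then be fit
# to predict the subgrid term from the coarse field.
import numpy as np

def box_filter(u, k):
    """Coarse-grain u by averaging non-overlapping blocks of length k."""
    return u.reshape(-1, k).mean(axis=1)

def subgrid_target(u, k):
    """Subgrid term for the quadratic flux u^2: filter(u^2) - filter(u)^2."""
    return box_filter(u**2, k) - box_filter(u, k)**2

x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
u = np.sin(x) + 0.3 * np.sin(8 * x)              # 'high-resolution' field
X, y = box_filter(u, 8), subgrid_target(u, 8)    # one supervised pair
print(X.shape, y.shape)  # (32,) (32,)
```

Training purely on such instantaneous pairs is the "a priori" strategy; the talk's point about using the solver dynamics during training corresponds to evaluating the loss on trajectories rolled out with the coarse solver instead.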

Jeffrey Harris, École nationale des ponts et chaussées, Laboratoire d'Hydraulique Saint-Venant

Faster than real-time phase-resolving data-driven ocean wave modeling
Ocean wave prediction is a complex problem governed by nonlinear and dispersive wave dynamics. Many practical problems may also involve breaking waves, which are mathematically complex to describe. For many engineering applications, it may be sufficient to predict how the time series of wave elevation evolves over a certain propagation distance, or to understand how the force on a structure relates to the elevation of incoming waves. This question is well suited for treatment by machine learning algorithms for time-series prediction, which have advanced significantly in recent years. In this talk, we will explore how a Time-series Dense Encoder (TiDE) approach can be applied to ocean wave problems, considering both various classical laboratory benchmarks and field data in open ocean conditions, with particular attention to the training data required to achieve a given accuracy.
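The time-series prediction setup behind such models can be sketched generically: slide a window over the elevation record to build (past window, future window) training pairs for a sequence-to-sequence forecaster. The windowing below is a standard supervised construction, not the TiDE architecture itself; the window sizes are illustrative.

```python
# Sketch: turning a wave-elevation time series into (past, future) windows
# suitable for a sequence-to-sequence forecaster such as TiDE.
import numpy as np

def make_windows(series, past, future):
    """Slide over the series; return stacked past inputs and future targets."""
    X, Y = [], []
    for i in range(len(series) - past - future + 1):
        X.append(series[i:i + past])
        Y.append(series[i + past:i + past + future])
    return np.array(X), np.array(Y)

eta = np.sin(np.linspace(0, 20, 200))        # toy elevation record
X, Y = make_windows(eta, past=32, future=8)
print(X.shape, Y.shape)  # (161, 32) (161, 8)
```

The question of training data requirements in the abstract then becomes: how many such windows, and from which sea states, are needed to reach a given accuracy.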

Olivier Lafitte, Université Sorbonne Paris Nord, Institut Galilée

Supervision of supervised learning by truth tables
Two-class supervised learning is classically studied by combining elementary (weak) binary classifiers to construct a better classifier. A traditional way of combining them is to investigate the best linear combination of these classifiers and to minimize the convexified risk over the set of training examples.

We show that, for a given number $m$ of classifiers, the 0/1 loss and the convexified 0/1 loss are handled by a partition of the examples into $2^m$ classes, a partition associated with a truth table. We use this structure of the examples to construct the point of minimum, if it exists, of the convexified loss, which is a function of $m$ variables depending on $2^m$ parameters. This function generalizes the generic $\phi$-risk of Bartlett et al. (2006) to a generic multidimensional $\phi$-risk. The existence and uniqueness of a point of minimum of this function can then be investigated. Formulae (in the case of three classifiers, $m = 3$) are readily obtained. The cases in which there is only an infimum, or the minimum is not unique, can be understood in this setting; this implies that there may be cases where a point of minimum of the convexified risk cannot be computed, which had not been observed before. Joint work with J.M. Brossier, GIPSA-Lab, Grenoble INP.
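The truth-table partition itself is elementary to compute: each example is assigned to one of the $2^m$ cells given by the joint outputs of the $m$ classifiers, and the loss then depends only on the cell counts. A minimal sketch (names illustrative, not from the talk):

```python
# Sketch: partition training examples by the joint +1/-1 outputs of m
# binary classifiers. Each distinct output tuple is one row of the truth
# table; at most 2^m rows occur, and the risk depends only on the counts.
def truth_table_partition(predictions):
    """predictions: one tuple per example, the (+1/-1) outputs of the
    m classifiers. Returns the count of examples in each occupied cell."""
    counts = {}
    for row in predictions:
        counts[row] = counts.get(row, 0) + 1
    return counts

# m = 2 classifiers, 5 examples
preds = [(1, 1), (1, -1), (1, 1), (-1, -1), (1, -1)]
print(truth_table_partition(preds))  # {(1, 1): 2, (1, -1): 2, (-1, -1): 1}
```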


Rodolphe Turpault, Université de Bordeaux, Institut de Mathématiques de Bordeaux

Reliable and robust methods for machine learning
Neural networks can be used to great advantage in several contexts, including scientific computing. In this setting in particular, training is the most sensitive step. It is therefore crucial to have tools that are robust, reliable and, if possible, require few hyperparameters to tune.
In this talk, I will discuss such methods, developed during Bilel Bensaid's PhD thesis and based on differential equations and numerical techniques guaranteeing certain properties. I will exhibit some bad behaviours that can emerge from classical optimizers and a simple way to get rid of them. The resulting optimizers are relatively simple, backed by strong theoretical results, and remarkably effective in practice, as I will illustrate on numerous test cases.
Finally, I will come back to some pitfalls to avoid and some good practices that will seem like common sense to numerical analysts but are unfortunately not widespread enough in the machine learning community.
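A minimal illustration of the differential-equation viewpoint on training (assumed here to be the classical gradient-flow interpretation, not the specific schemes of the talk): gradient descent is the explicit Euler discretization of the gradient flow ODE, and its stability restriction on the step size is exactly the ODE scheme's.

```python
# Sketch: gradient descent as explicit Euler on the gradient flow
# x'(t) = -grad f(x(t)). Stability then follows from the usual step-size
# restriction of the Euler scheme; on f(x) = 0.5*||x||^2 this means dt < 2.
import numpy as np

def gradient_flow_euler(grad, x0, dt, steps):
    """Explicit Euler on x' = -grad(x); identical to gradient descent."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x - dt * grad(x)
    return x

# grad f(x) = x for the quadratic above; converges for dt < 2, diverges beyond
x_final = gradient_flow_euler(lambda x: x, [1.0, -2.0], dt=0.1, steps=100)
print(np.linalg.norm(x_final))  # small: (0.9)^100 * ||x0||
```

Seen this way, "bad behaviours" of an optimizer are instabilities of the underlying numerical scheme, and can be removed by choosing a discretization with guaranteed stability properties.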

Organization committee

M. Ersoy, G. Faccanoni, C. Galusinski, Y. Mannes

