Presentation

MEDIA-2024: ModElling, partial DIfferential equations and Artificial intelligence

Artificial intelligence (AI) is emerging as a powerful and fast alternative to traditional methods for numerically solving partial differential equations (PDEs). AI-based methods enable calculations to be carried out in real time, an area where traditional numerical schemes often reach their limits due to their complexity and the computing time they require. However, these numerical methods have the advantage of providing highly accurate results, a quality that AI does not always achieve, particularly when the solutions require in-depth mathematical rigour and fine resolution of details.

In this context, the 2024 conference aims to explore and present innovative research that combines traditional methods for solving PDEs with AI techniques. The aim is to demonstrate how this synergy can lead to calculations that are both fast and accurate, paving the way for real-time applications that were previously impractical. For example, these hybrid tools can improve forecasting capabilities, optimise risk management, and offer new perspectives on the analysis of the socio-economic impacts of various phenomena. The fields of application of this approach are vast, from extreme weather events and global warming to medicine and engineering. The integration of AI techniques into the solution of PDEs represents not only a technological leap forward, but also an opportunity to address some of the most pressing challenges of our time through fast and accurate calculations, enabling more informed and proactive decision-making.

Main speakers

Victor Michel-Dansac, Université de Strasbourg, Institut de Recherche Mathématique Avancée
Hybrid methods for elliptic and hyperbolic PDEs
Bruno Desprès, Sorbonne Université, Laboratoire Jacques-Louis Lions
Lipschitz stability of deep neural networks in view of applications

Hugo Frezat, Université Paris Cité, Institut de Physique du Globe de Paris
Learning stable and accurate subgrid-scale models for turbulent systems

Jeffrey Harris, École nationale des ponts et chaussées, Laboratoire d'Hydraulique Saint-Venant
Faster than real-time phase-resolving data-driven ocean wave modeling

Olivier Lafitte, Université Sorbonne Paris Nord, Institut Galilée
Supervision of supervised learning by truth tables

We show that, for a given number m of classifiers, the 0/1 loss and the convexified 0/1 loss are handled by a partition of the examples into 2^m classes, a partition associated with a truth table. We use this structure of the examples to construct the minimum point, if it exists, of the convexified loss, which is a function of m variables depending on 2^m parameters. This function generalizes the generic φ-risk of Bartlett et al. (2006) to a generic multidimensional φ-risk. The existence and uniqueness of a minimum point of this function can then be investigated, and explicit formulae are readily obtained in the case of three classifiers (m = 3). The cases in which there is only an infimum, or in which the minimum is not unique, can be understood in this set-up; this implies that there are cases where a minimum point of the convexified risk cannot be computed, which had not been observed before. Joint work with J.-M. Brossier, GIPSA-Lab, Grenoble INP.
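The truth-table partition underlying this abstract can be illustrated with a minimal sketch (not the speakers' construction): each example is assigned the tuple of correctness indicators of the m classifiers, so the training set splits into at most 2^m cells, one per truth-table row. The helper name `truth_table_partition` and the toy threshold classifiers below are illustrative assumptions.

```python
def truth_table_partition(examples, labels, classifiers):
    """Partition examples into at most 2^m cells, where m = len(classifiers).

    The cell of an example is the tuple of 0/1 correctness indicators of the
    m classifiers on that example, i.e. one row of a truth table.
    """
    cells = {}
    for x, y in zip(examples, labels):
        signature = tuple(int(h(x) == y) for h in classifiers)
        cells.setdefault(signature, []).append(x)
    return cells

# Toy demo with m = 2 threshold classifiers on 1-D data (labels in {-1, +1}).
h1 = lambda x: 1 if x > 0 else -1
h2 = lambda x: 1 if x > 2 else -1
xs = [-1.0, 0.5, 1.5, 3.0]
ys = [-1, 1, -1, 1]
cells = truth_table_partition(xs, ys, [h1, h2])
# e.g. signature (1, 1) collects the examples both classifiers get right
```

On this toy data the four examples fall into three of the 2^2 = 4 possible cells; the convexified loss studied in the talk is then a function of the m classifier outputs whose coefficients depend on the sizes of these cells.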
Rodolphe Turpault, Université de Bordeaux, Institut de Mathématiques de Bordeaux
Reliable and robust methods for machine learning
Neural networks can be used to advantage in several contexts, including scientific computing. In this setting in particular, training is the most sensitive step. It is therefore crucial to have tools that are robust, reliable and, if possible, require few hyperparameters to tune.
In this talk, I will discuss such methods, developed during Bilel Bensaid's PhD thesis and based on differential equations and on numerical techniques guaranteeing certain properties. I will show some pathological behaviours that can emerge from classical optimizers, and a simple way to get rid of them. The resulting optimizers are relatively simple, backed by strong theoretical results, and remarkably effective in practice, as I will illustrate on numerous test cases.
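The link between optimizers and differential equations mentioned here can be made concrete with a standard example (not the specific schemes of the talk): plain gradient descent is the explicit Euler discretization of the gradient flow x'(t) = -∇f(x(t)), and one classical pathology is that the discrete scheme diverges when the step size exceeds the stability limit of the ODE solver, even though the continuous flow always converges. The function name and the quadratic test problem below are illustrative assumptions.

```python
def gradient_flow_euler(grad, x0, lr, steps):
    """Explicit Euler discretization of the gradient flow x' = -grad f(x).

    This is exactly plain gradient descent with learning rate lr: each step
    replaces x by x - lr * grad f(x).
    """
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

# For f(x) = 0.5 * a * x^2 the gradient is a * x, and the Euler iteration
# multiplies x by (1 - lr * a) each step: stable iff lr < 2 / a.
grad = lambda x: 10.0 * x  # curvature a = 10, stability limit lr = 0.2

x_stable = gradient_flow_euler(grad, 1.0, 0.05, 100)    # lr < 0.2: converges to 0
x_unstable = gradient_flow_euler(grad, 1.0, 0.25, 100)  # lr > 0.2: blows up
```

The same viewpoint motivates more careful discretizations (implicit or structure-preserving schemes) that keep the guaranteed decrease of the continuous flow, which is the spirit of the methods the abstract alludes to.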
Finally, I will come back to some pitfalls to avoid and some good practices that will seem like common sense to numerical analysts but are unfortunately not widespread enough in the machine-learning community.

Organization committee

M. Ersoy, G. Faccanoni, C. Galusinski, Y. Mannes