CEMRACS 2023: Centre d’Été Mathématique de Recherche Avancée en Calcul Scientifique (Summer Mathematical Research Centre for Advanced Scientific Computing)


July 17 – August 25, 2023


Scientific Committee 
Comité scientifique 


Rémi Abgrall (Universität Zürich, Switzerland)
Patrick Gallinari (Sorbonne Université, Paris, France)
Jan Hesthaven (École Polytechnique Fédérale de Lausanne, Switzerland)
Gitta Kutyniok (Ludwig-Maximilians-Universität München, Germany)
Yvon Maday (Sorbonne Université, Paris, France)
Marc Schoenauer (Inria Saclay, France)
Aretha Teckentrup (University of Edinburgh, Scotland)

Organizing Committee
Comité d’organisation


Didier Auroux (Université Côte d’Azur, Nice, France)
Konstantin Brenner (Université Côte d’Azur, Nice, France)
Martin Campos Pinto (Max-Planck-Institut für Plasmaphysik, Garching bei München, Germany)
Bruno Després (Sorbonne Université, Paris, France)
Victorita Dolean (Université Côte d’Azur, Nice, France)
Emmanuel Frénod (Université Bretagne Sud, Vannes, France)
Stéphane Lanteri (Inria, Nice, France)
Victor Michel-Dansac (Inria, Strasbourg, France)


contact: cemracs23@smai.emath.fr

Call for Applications for the CEMRACS Summer School

Scientific Machine Learning

Through the CIMPA-CARMIN program, and with the support of the Labex CARMIN, CIMPA and CIRM, the organizers wish to fund several young mathematicians based in developing countries to participate in the activities of CIRM.

Young scientists (master’s students soon to apply for a PhD, PhD students, postdocs) meeting these criteria and interested in the topics are strongly encouraged to apply for support to participate.


Candidates must apply via the web page: https://www.cimpa.info/fr/node/6929

IMPORTANT WARNING: Scam / Phishing / SMiShing! Note that ill-intentioned people may try to contact some participants by email or phone to obtain money and personal details, pretending to be part of the staff of our conference center (CIRM). CIRM and the organizers will NEVER contact you by phone on this issue and will NEVER ask you to pay for accommodation / board / any registration fee in advance. Any due payment will be taken on site at CIRM during your stay.


The goal of this event is to gather scientists from both the academic and industrial communities to work on and discuss Scientific Machine Learning. The program includes:

  • a one-week summer school (July 17–21)
  • a five-week hackathon on projects proposed by academic scientists and/or industrial partners (July 24 – August 25)

Participants may register for the summer school only or for the entire program. Junior participants may apply for fellowships to cover part or all of their stay.


Scientific Machine Learning

Scientific Machine Learning (SciML) is a relatively new research field built on both machine learning (ML) and scientific computing tools. Its aim is to develop new methods for several kinds of problems: forward multi-dimensional partial differential equations, parameter identification, and inverse problems. The methods we seek must be robust, reliable and interpretable. These new SciML tools allow the natural inclusion of data in numerical simulations in order to generate new results. This methodology will be at the forefront of the next wave of data-driven scientific discovery in the physical and engineering sciences. More precisely, our goals are:

  • to introduce the mathematical concepts lying at the foundation of machine learning as used in scientific computing;
  • to give an overview of the different approaches developed to address these problems;
  • to give insight into the latest numerical tools in scientific machine learning and data inclusion for the solution of PDEs.

Summer school

The summer school is organized around five topics, each corresponding to one course. The format of each course is 4 hours of lectures and 2 hours of computer sessions (CS) using Jupyter notebooks with Python or Julia (for a more theoretical topic, one or two focused talks will complement the main lecture). In the computer session, the goal is to implement, together with the participants, an elementary working example that could be of help later on in the projects. The lectures range from theoretical (approximation theory) to applied (solution of PDEs, forward and inverse problems), up to and including real-world applications.


Lecture 1 (O. Mula)

Abstract: Over the past few years, machine learning and deep learning methods have achieved state-of-the-art performance in a wide variety of applications. Yet despite their great empirical success, the underlying mathematical foundations are still not fully understood. In this course, I will review the opportunities that ideas from more mature fields such as PDEs and numerical analysis can bring to the understanding of (deep) machine learning. I will also discuss the challenges and opportunities that ML offers for scientific computing applications involving PDEs. In particular, we will focus on how ML can be leveraged to address challenges in forward and inverse PDE problems.

Tentative schedule: (4×1.5 = 6 hours)

  1. PDEs for Machine Learning and Deep Neural Networks:
    • Stochastic Gradient Descent and Gradient Flows
    • PDE-based Convolutional Neural Networks
    • Optimal Control and Deep Neural Networks
  2. Machine Learning methods for Forward PDE problems:
    • Physics-Informed Neural Networks
    • Time stepping methods
  3. Nonlinear methods for Model Order Reduction (MOR):
    • Notions of optimality, Kolmogorov n-width
    • Linear MOR
    • Nonlinear MOR
  4. Nonlinear Methods for Inverse Problems:
    • Notions of optimality
    • Linear methods
    • Nonlinear Methods

This course will be followed by a computer session. Tentative prerequisites: Python, NumPy, PyTorch.
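To give a flavor of the computer sessions, here is a minimal NumPy sketch (our own toy example, not course material) of the gradient-flow view from the first lecture item: on a quadratic objective, gradient descent is exactly the explicit Euler discretization of the gradient flow x'(t) = -∇f(x(t)).

```python
import numpy as np

# Toy illustration: gradient descent on f(x) = 0.5 x^T A x - b^T x
# is the explicit Euler discretization of x'(t) = -grad f(x(t)).
A = np.array([[3.0, 1.0], [1.0, 2.0]])   # symmetric positive definite
b = np.array([1.0, -1.0])
x_star = np.linalg.solve(A, b)           # exact minimizer: grad f(x*) = 0

x = np.zeros(2)
lr = 0.1                                 # Euler step size (learning rate)
for _ in range(500):
    grad = A @ x - b                     # grad f(x)
    x = x - lr * grad                    # one explicit Euler step

assert np.allclose(x, x_star, atol=1e-6)
```

With a step size below 2 divided by the largest eigenvalue of A, the Euler iteration contracts toward the flow's equilibrium, which is the minimizer.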

Lecture 2 (J. Xu)

Abstract: In this series of lectures, I will report on recent developments in the design and analysis of neural network (NN) based methods, such as physics-informed neural networks (PINNs) and the finite neuron method (FNM), for the numerical solution of partial differential equations (PDEs). I will give an overview of the convergence analysis of the FNM, covering error estimates (with or without numerical quadrature) as well as training algorithms for solving the relevant optimization problems. I will present theoretical results that explain both the successes and the challenges of PINNs and the FNM when trained by gradient-based methods such as SGD and Adam. I will then present some new classes of training algorithms that theoretically achieve, and numerically exhibit, the asymptotic rate of the underlying discretization algorithms (whereas gradient-based methods do not). Motivated by our theoretical analysis, I will finally report some competitive numerical results of CNN and MgNet using an activation function with compact support for image classification.

This course will be followed by a computer session.
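As a rough illustration of the finite-neuron idea (our own toy sketch, not the lecturer's code): a one-hidden-layer ReLU network with frozen random inner weights spans a finite-dimensional "neuron" space, and the outer coefficients can be fitted by linear least squares.

```python
import numpy as np

# Approximate u(x) = sin(pi x) on [0, 1] in the span of n random ReLU
# neurons max(0, w_i x + b_i); only outer coefficients are fitted.
rng = np.random.default_rng(1)
n = 200                                   # number of neurons
w = rng.uniform(-10, 10, size=n)          # frozen inner weights
bias = rng.uniform(-10, 10, size=n)       # frozen inner biases

x = np.linspace(0.0, 1.0, 400)
features = np.maximum(0.0, np.outer(x, w) + bias)   # (400, n) ReLU features

u_true = np.sin(np.pi * x)
coef, *_ = np.linalg.lstsq(features, u_true, rcond=None)  # linear solve
u_hat = features @ coef

err = np.max(np.abs(u_hat - u_true))
assert err < 0.05
```

In the actual FNM, the inner parameters are also optimized, which is where the nonconvex training difficulties discussed in the lectures arise; the linear fit above is the easy part of the problem.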


Lecture 3 (S. Mishra)

Abstract: Operators are mappings between infinite-dimensional spaces, which arise in the context of differential equations. Learning operators is challenging due to the inherent infinite-dimensional setting. In this course, we present different architectures for learning operators from data. These include operator networks such as DeepONets, and neural operators such as Fourier Neural Operators (FNOs) and their variants. We will present theoretical results showing that these architectures learn operators arising from PDEs. A large number of numerical examples will be provided to illustrate them.

This course will be followed by a computer session.
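The core of a single FNO-style layer can be sketched in a few lines of NumPy (a simplified toy of ours, not the course implementation): transform the input to Fourier space, keep only the lowest modes, multiply them by learnable complex weights, and transform back.

```python
import numpy as np

def fourier_layer(v, weights, modes):
    """v: (n,) real signal; weights: (modes,) complex; returns (n,) real."""
    v_hat = np.fft.rfft(v)                      # frequency representation
    out_hat = np.zeros_like(v_hat)
    out_hat[:modes] = weights * v_hat[:modes]   # act only on low modes
    return np.fft.irfft(out_hat, n=v.shape[0])

n, modes = 64, 8
rng = np.random.default_rng(2)
weights = rng.normal(size=modes) + 1j * rng.normal(size=modes)
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
out = fourier_layer(np.sin(x) + 0.5 * np.cos(3 * x), weights, modes)
assert out.shape == (n,)

# A key property: the layer is resolution-independent, so the same
# weights apply to a finer discretization of the same function.
x_fine = np.linspace(0, 2 * np.pi, 4 * n, endpoint=False)
out_fine = fourier_layer(np.sin(x_fine) + 0.5 * np.cos(3 * x_fine),
                         weights, modes)
assert out_fine.shape == (4 * n,)
```

A full FNO stacks such layers with pointwise linear maps and nonlinearities; the discretization independence shown above is what makes it an operator, not just a function, approximator.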

Lecture 4 (L. Zepeda-Núñez)

Abstract: High-fidelity numerical simulation of physical systems modeled by time-dependent partial differential equations (PDEs) has been at the center of many technological advances in the last century. However, for engineering applications such as design, control, optimization, data assimilation, and uncertainty quantification, which require repeated model evaluation over a potentially large number of parameters or initial conditions, these simulations remain prohibitively expensive, even with state-of-the-art PDE solvers. The necessity of reducing the overall cost for such downstream applications has led to the development of surrogate models, which capture the core behavior of the target system at a fraction of the cost. In this context, new advances in machine learning provide a new path for developing surrogate models, particularly when the PDEs are not known and the system is advection-dominated. In a nutshell, we seek a data-driven latent representation of the state of the system, and then learn the latent-space dynamics. This allows us to compress the information and evolve it in compressed form, thereby accelerating the models. In this series of lectures, I will present recent advances on two fronts: deterministic and probabilistic modeling of latent representations. In particular, I will introduce the notions of hyper-networks, neural networks that output other neural networks, and diffusion models, a framework that allows us to represent probability distributions of trajectories directly. I will provide the foundation for such methodologies, how they can be adapted to scientific computing, and which physical properties they need to satisfy. Finally, I will provide several examples of applications to scientific computing.

Tentative schedule:

  1. Intro to time domain problems:
    • time dependent PDEs
    • advection dominated problems
    • Kolmogorov n-widths
    • Failure of linear projections
  2. Introduction to HyperNetworks:
    • Radiance-networks
    • Hypernetworks
    • Hyper encoders
    • Latent Space trajectories
    • Smoothness constraints
  3. Intro to SDEs and Diffusion models:
    • Diffusion processes
    • Denoising and backwards evolution
    • Score-based diffusion models
  4. Sampling with Constraints:
    • Tweedie’s formula
    • Linear constraints and probabilistic approximations
    • Non-linear constraints and extreme events

This course will be followed by a computer session. Tentative prerequisites: Python, JAX, Flax.
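Tweedie's formula from the last lecture item can be checked numerically in the Gaussian case (a toy verification of ours, in NumPy rather than the course's JAX): if x ~ N(mu, tau²) and we observe y = x + sigma·eps, the denoised posterior mean is E[x | y] = y + sigma² · score(y), where score is the gradient of log p(y).

```python
import numpy as np

# For Gaussian x and Gaussian noise, p(y) = N(mu, tau^2 + sigma^2),
# so the score is available in closed form and Tweedie's formula can
# be compared against the exact posterior mean.
mu, tau, sigma = 1.0, 2.0, 0.5
rng = np.random.default_rng(3)
x = rng.normal(mu, tau, size=200_000)
y = x + sigma * rng.normal(size=x.shape)

def score(y):
    # d/dy log N(y; mu, tau^2 + sigma^2)
    return -(y - mu) / (tau**2 + sigma**2)

tweedie = y + sigma**2 * score(y)                              # Tweedie denoiser
posterior_mean = mu + tau**2 / (tau**2 + sigma**2) * (y - mu)  # exact formula
assert np.allclose(tweedie, posterior_mean)
```

In score-based diffusion models, the same identity is used with a learned score network in place of the analytic score, which is what makes posterior sampling under constraints tractable.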

Lecture 5 (A. Beck)

Abstract: In this talk, I will give an overview of recent successes (and some failures) of combining modern, high order discretization schemes with machine learning submodels and their applications for large scale computations. The primary focus will be on supervised learning strategies, where a multivariate, non-linear function approximation of given data sets is found through a high-dimensional, non-convex optimization problem that is efficiently solved on modern GPUs. This approach can thus be employed, for example, in cases where submodels in the discretization schemes currently rely on heuristic data. A prime example of this is shock detection and shock capturing for high order methods, where essentially all known approaches require some expert user knowledge as guiding input. As an illustrative example, I will show how modern, multiscale neural network architectures originally designed for image segmentation can ameliorate this problem and provide parameter-free and grid-independent shock front detection on a subelement level. With this information, we can then inform a high order artificial viscosity operator for inner-element shock capturing. In the second part of my talk, I will present data-driven approaches to LES modeling for implicitly filtered high order discretizations. Whereas supervised learning of the Reynolds force tensor based on non-local data can provide highly accurate results with higher a priori correlation than any existing closures, a posteriori stability remains an issue. I will give reasons for this and introduce reinforcement learning as an alternative optimization approach. Our experiments with this method suggest that it is much better suited to account for the uncertainties introduced by the numerical scheme and its induced filter form on the modeling task. For this coupled RL-DG framework, I will present discretization-aware model approaches for the LES equations and discuss the future potential of these solver-in-the-loop optimizations.

Tentative schedule:

  1. Deep Neural Networks and Supervised Learning for augmenting PDEs:
    • Basic and Advanced Architectures for Supervised Learning
    • Incorporating constraints
    • Examples: Combining CFD and ML for solving real-world problems
  2. Data-Driven Models for Large Eddy Simulation of Turbulence:
    • Large Eddy Simulation, turbulence and numerics
    • Subgrid scale modeling with ML
    • Model-Data Consistency
  3. Deep Reinforcement Learning for turbulence:
    • Introduction to RL
    • Optimizable PDE solvers on HPC systems
    • Deep RL for turbulence modelling

This course will be followed by a practical session, where we will investigate model training for canonical turbulent flows through supervised learning and explore methods to incorporate physical constraints into the ML-based models. Tentative prerequisites: Python, NumPy, TensorFlow.
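A minimal toy version of the a priori (supervised) training step for a subgrid closure might look as follows (synthetic data and an invented coefficient of ours, not the lecture's actual setup): fit a scalar eddy-viscosity-style relation tau ≈ c·|S|·S from noisy "filtered" data by least squares.

```python
import numpy as np

# Synthetic supervised-learning setup: the "true" closure coefficient
# is hidden in noisy targets, and we recover it from a physics-motivated
# input feature by a one-dimensional least-squares fit.
rng = np.random.default_rng(4)
S = rng.normal(size=10_000)               # resolved strain-rate samples
c_true = 0.17                             # hidden model coefficient (invented)
tau = c_true * np.abs(S) * S + 0.01 * rng.normal(size=S.shape)  # noisy targets

phi = np.abs(S) * S                       # physics-motivated input feature
c_fit = (phi @ tau) / (phi @ phi)         # closed-form 1-D least squares

assert abs(c_fit - c_true) < 0.01
```

The a posteriori stability issues discussed in the abstract arise precisely because such a fit sees only instantaneous input-output pairs, not the feedback of the model through the solver; that is the gap the reinforcement-learning approach targets.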


Tentative Schedule


| Time          | Monday 17/07        | Tuesday 18/07     | Wednesday 19/07       | Thursday 20/07              | Friday 21/07        |
|---------------|---------------------|-------------------|-----------------------|-----------------------------|---------------------|
| 09:00 – 10:30 | Lecture 1 (O. Mula) | Lecture 2 (J. Xu) | Lecture 3 (S. Mishra) | Lecture 4 (L. Zepeda-Núñez) | Lecture 5 (A. Beck) |
| 10:30 – 10:45 | coffee break        | coffee break      | coffee break          | coffee break                | coffee break        |
| 10:45 – 12:15 | Lecture 1 (O. Mula) | Lecture 2 (J. Xu) | Lecture 3 (S. Mishra) | Lecture 4 (L. Zepeda-Núñez) | Lecture 5 (A. Beck) |
| 12:15 – 14:00 | lunch break         | lunch break       | lunch break           | lunch break                 | lunch break         |
| 14:00 – 15:00 | Lecture 1 (O. Mula) | Lecture 2 (J. Xu) | Lecture 3 (S. Mishra) | Lecture 4 (L. Zepeda-Núñez) | Lecture 5 (A. Beck) |
| 15:00 – 15:30 | coffee break        | coffee break      | coffee break          | coffee break                | coffee break        |
| 15:30 – 17:30 | Computer session    | Computer session  | Computer session      | Computer session            | Computer session    |


to be announced