CONFERENCE

Learning and Optimization in Luminy – LOL

17 – 21 June, 2024


Scientific Committee 

Malgorzata Bogdan (University of Wroclaw)
Emilie Chouzenoux (INRIA Saclay)
Mikael Johansson (KTH – Royal Institute of Technology)
Gabriel Peyré (ENS Paris)
Justin Solomon (MIT)

Organizing Committee

Elsa Cazelles (CNRS – Université de Toulouse)
Aymeric Dieuleveut (École Polytechnique)
Mathurin Massias (INRIA Lyon)
Thomas Moreau (INRIA Saclay)
Lorenzo Rosasco (University of Genoa)

   The core interest of the Learning and Optimization in Luminy conference is the study of interactions between learning and optimization. From text translation to image generation to health applications, the practical successes of machine learning are ubiquitous, and have been strongly stimulated by the increase in model size. This ever-increasing size of successful models raises many questions, from ownership to environmental impact, calling for the development of new statistical modeling tools and better optimization techniques.
   The conference program will therefore focus on three topics related to this crucial challenge of modern learning: frugal learning, optimal transport and collaborative optimization. These topics, interesting in their own right, are also deeply interconnected in the search for more sustainable approaches to machine learning. Learning and Optimization in Luminy 2024 aims to gather young researchers and experts from these fields, to share recent advances and stimulate novel collaborations.
   The most influential and internationally recognized researchers in the fields mentioned above will be invited to present their most innovative work. This interdisciplinary scientific dissemination will also benefit young researchers, as doctoral and post-doctoral students will be invited as well. Finally, dedicated time slots will encourage the formation of working groups.


SPEAKERS 

 

Julio Backhoff (University of Vienna)  Martingale Benamou-Brenier: duality and gradient flow
Malgorzata Bogdan (University of Wroclaw)  Unveiling low-dimensional patterns induced by convex non-differentiable regularizers
Claire Boyer (Sorbonne Université)     A primer on physics-informed learning
Aurélie Boisbunon (Ericsson)  Deep Unsupervised Domain Adaptation for Time Series Classification: a Benchmark
Sinho Chewi (MIT)     Log-concave sampling
Nicolas Courty (Université Bretagne Sud)  Unbalancing Sliced Wasserstein
Marco Cuturi (Apple – ENSAE)  Elastic Costs and Monge Transport Maps
Mathieu Even (Inria – ENS)  Asynchronous speedups in centralized and decentralized distributed optimization
Rémi Flamary (École Polytechnique)  Any2Graph: Deep End-To-End Supervised Graph Prediction With An Optimal Transport Loss
Rémi Gribonval (ENS – INRIA)    Conservation laws for gradient flows
Hadrien Hendrikx (Inria Grenoble)  Investigating Variance Definitions for Stochastic Mirror Descent with Relative Smoothness
Mikael Johansson (KTH – Royal Institute of Technology)  Better models for asynchronous optimization
Anastasia Koloskova (EPFL)    Gradient Descent with Linearly Correlated Noise: Theory and Applications to Differential Privacy
Flavien Léger (INRIA)     A nonsmooth geometry for alternating minimization
Cyril Letrouit (Université Paris-Saclay)    Some insights on training two-layer transformers
Julien Mairal (INRIA)  Functional Bilevel Optimization for Machine Learning
Daniel McKenzie (Colorado School of Mines)     Reducing the complexity of derivative-free optimization using compressed sensing
Aryan Mokhtari (University of Texas at Austin)     Online Learning Guided Quasi-Newton Methods: Improved Global Non-asymptotic Guarantees
Kimia Nadjahi (Sorbonne Université)    Slicing Mutual Information Generalization Bounds for Neural Networks
Atsushi Nitanda (A*STAR)     Improved Particle Approximation Error for Mean Field Neural Networks
Barbara Pascal (CNRS, Université de Nantes)    Bilevel optimization for automated data-driven inverse problem resolution
Gabriel Peyré (CNRS, ENS Paris)  A Survey of Wasserstein Flow in Neural Network Training Analysis
Aram-Alexander Pooladian (New York University)     Algorithms for mean-field variational inference via polyhedral optimization in the Wasserstein space
Clarice Poon (University of Bath)    Sparse recovery guarantees for inverse optimal transport
Audrey Repetti (Heriot-Watt University)  An optimisation view of learning robust and flexible denoisers for inverse imaging problems
Emmanuel Soubies (CNRS, IRIT)    Exact Continuous Relaxations of L0-Regularized Criteria
Saverio Salzo (Sapienza University of Rome)     Nonsmooth Implicit Differentiation: Deterministic and Stochastic Convergence Rates
Taiji Suzuki (University of Tokyo)  Nonlinear feature learning of neural networks with gradient descent: Information theoretic optimality and in-context learning
Silvia Villa (University of Genoa)  Zeroth order optimization with structured directions

SPONSORS