Dear Colleagues, dear participants,
Unfortunately, due to the unusual circumstances, we were forced to cancel the conference « Mathematical Methods of Modern Statistics 2 » in its original format. We will nevertheless organize a virtual version of it. We will record the talks and make them available on this webpage before June 15. During the week starting June 15, we will hold virtual office hours so that participants can interact and discuss the content of the talks with their authors.

Best regards,
Małgorzata Bogdan, Piotr Graczyk, Fabien Panloup, Frédéric Proïa, Étienne Roquain
Mathematical Methods of Modern Statistics 2
Méthodes Mathématiques en Statistiques Modernes 2

CONFERENCE, 15 – 19 June 2020

After the success of the first CIRM-Luminy meeting on Mathematical Methods of Modern Statistics (July 2017), we would like to continue the tradition. The title of the conference alludes to the famous book by Harald Cramér, Mathematical Methods of Statistics (1946), a landmark for both mathematics and statistics. The main objectives of the conference are to respond, at the highest statistical and mathematical level, to:
(a) a strong need for reflection on, and synthesis of, the interactions between different branches of modern statistics and modern mathematics;
(b) the current scientific strategy for the development of Data Science in France and abroad.

We propose a wide spectrum of topics and general paradigms of modern statistics with deep mathematical implications. The major unifying topic is the analysis of high-dimensional data. Statistical inference based on such data is possible under certain assumptions on the structure of the underlying model, using different notions of model sparsity. The conference talks will give an overview of current knowledge on the modern statistical methods addressing this issue, including modern graphical models, different methods of multiple testing and model selection, regularization techniques, and missing-data treatments. These topics will be viewed from both frequentist and Bayesian perspectives, including modern nonparametric Bayes methods.
Jean-Marc Bardet (Université Paris 1 Panthéon-Sorbonne)
Pierre Bellec (Rutgers University)
Tilmann Gneiting (HITS Heidelberg & KIT)
Ruth Heller (Tel Aviv University)
Hideyuki Ishi (Osaka City University)
Lucas Janson (Harvard University)
Julie Josse (École Polytechnique)
Yoshihiko Konno (Japan Women’s University) Shrinkage estimation of mean for complex multivariate normal distribution with unknown covariance when p > n
Gérard Letac (Université de Toulouse)
Błażej Miasojedow (University of Warsaw)
Pierre Neuvial (Université de Toulouse)
Dominique Picard (Université Paris-Diderot Paris 7)
Aaditya K. Ramdas (Carnegie Mellon University)
Veronika Rockova (University of Chicago)
Etienne Roquain (UPMC Paris)
Saharon Rosset (Tel Aviv University)
Chiara Sabatti (Stanford University)
Joseph Salmon (Université de Montpellier)
Richard Samworth (University of Cambridge)
David Siegmund (Stanford University)
Weijie Su (Wharton, University of Pennsylvania)
Daniel Yekutieli (Tel Aviv University)
Stefan Wager (Stanford University)
Jonas Wallin (University of Lund)
Hua Wang (Wharton, University of Pennsylvania)
- MONDAY 15 JUNE
- TUESDAY 16 JUNE
- WEDNESDAY 17 JUNE
- THURSDAY 18 JUNE
17h30-18h30 Weijie Su (Wharton, University of Pennsylvania) Gaussian Differential Privacy
Abstract: Privacy-preserving data analysis has been put on a firm mathematical foundation since the introduction of differential privacy (DP) in 2006. This privacy definition, however, has some well-known weaknesses: notably, it does not tightly handle composition. In this talk, we propose a relaxation of DP that we term « f-DP », which has a number of appealing properties and avoids some of the difficulties associated with prior relaxations. First, f-DP preserves the hypothesis testing interpretation of differential privacy, which makes its guarantees easily interpretable. It allows for lossless reasoning about composition and post-processing, and notably, a direct way to analyze privacy amplification by subsampling. We define a canonical single-parameter family of definitions within our class that is termed « Gaussian Differential Privacy », based on hypothesis testing of two shifted normal distributions. We prove that this family is focal to f-DP by introducing a central limit theorem, which shows that the privacy guarantees of any hypothesis-testing based definition of privacy (including differential privacy) converge to Gaussian differential privacy in the limit under composition. This central limit theorem also gives a tractable analysis tool. We demonstrate the use of the tools we develop by giving an improved analysis of the privacy guarantees of noisy stochastic gradient descent. This is joint work with Jinshuo Dong and Aaron Roth.
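As background for readers of the abstract (not part of the talk itself), the trade-off function defining μ-Gaussian Differential Privacy, G_μ(α) = Φ(Φ⁻¹(1 − α) − μ), and its clean behavior under composition can be sketched with standard-library tools. The function names below are our own illustrative choices, assuming the Dong–Roth–Su formulation of GDP:

```python
from statistics import NormalDist
from math import sqrt

_std = NormalDist()  # standard normal distribution

def gdp_tradeoff(mu: float, alpha: float) -> float:
    """G_mu(alpha) = Phi(Phi^{-1}(1 - alpha) - mu): the smallest type-II
    error achievable at type-I error level alpha when testing N(0, 1)
    against N(mu, 1), i.e. the trade-off curve of mu-GDP."""
    return _std.cdf(_std.inv_cdf(1.0 - alpha) - mu)

def compose_gdp(mus) -> float:
    """Composing mechanisms that are mu_i-GDP yields a mechanism that is
    sqrt(sum mu_i^2)-GDP, so the family is closed under composition."""
    return sqrt(sum(m * m for m in mus))
```

For example, `gdp_tradeoff(0.0, alpha)` returns `1 - alpha` (perfect privacy: the two hypotheses are indistinguishable), while larger μ pushes the curve down, reflecting weaker privacy.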
Relevant paper
(Seminar hosted jointly with the ISSI, the International Seminar on Selective Inference: https://www.selectiveinferenceseminar.com/)
- FRIDAY 19 JUNE