Online Seminar: Learning Solution Operators for PDEs with Uncertainty

Join us for an online seminar on Monday, June 10th, at 4 PM, given by Emilia Magnani, a Ph.D. candidate at the University of Tübingen. She will present her work on “Learning Solution Operators for PDEs with Uncertainty”.

Abstract: We provide a Bayesian formulation of the problem of learning solution operators of PDEs in the formalism of Gaussian processes. We consider neural operators, recent deep architectures that have shown promising results in learning PDE solution operators. The current state of the art for these models lacks explicit uncertainty quantification. Our approach offers a practical and theoretically sound way to apply the linearized Laplace approximation to neural operators to provide uncertainty estimates. Moreover, we introduce a new framework for Bayesian uncertainty quantification in neural operators using function-valued Gaussian processes.
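As a rough illustration of the linearized Laplace idea mentioned in the abstract (not the speaker's actual method or code), here is a minimal sketch for a model that is linear in its parameters, so the linearization around the MAP weights is exact. The feature map `features`, the data, and all hyperparameters are hypothetical stand-ins for a trained neural operator and its Jacobian:

```python
import numpy as np

rng = np.random.default_rng(0)

def features(x):
    # hypothetical feature map standing in for the network Jacobian J(x)
    return np.stack([np.ones_like(x), x, x**2], axis=-1)

# toy training data
x_train = rng.uniform(-1, 1, 20)
y_train = np.sin(np.pi * x_train) + 0.1 * rng.standard_normal(20)

Phi = features(x_train)
sigma2, tau2 = 0.1**2, 1.0   # observation-noise variance, prior variance

# Laplace posterior precision for squared loss (GGN): J^T J / sigma^2 + I / tau^2
H = Phi.T @ Phi / sigma2 + np.eye(3) / tau2
w_map = np.linalg.solve(H, Phi.T @ y_train / sigma2)

# predictive mean and variance at test points via the linearization:
# mean = J w_map,  var = diag(J H^{-1} J^T) + sigma^2
x_test = np.linspace(-1, 1, 5)
J = features(x_test)
mean = J @ w_map
var = np.einsum('ij,jk,ik->i', J, np.linalg.inv(H), J) + sigma2
```

For an actual neural operator, `J` would be the Jacobian of the network output with respect to its weights at the MAP estimate, and the same Gaussian predictive formulas apply.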

Bio: Emilia Magnani is a Ph.D. candidate at the University of Tübingen under the supervision of Philipp Hennig. She is also part of the ELLIS program and spent part of her Ph.D. in Genoa working with Lorenzo Rosasco. Before that, Emilia obtained her Master’s degree in Mathematics from ETH Zurich. Her research interests span various areas of machine learning such as probabilistic numerics, Gaussian processes, and operator learning.

Zoom Link: https://epfl.zoom.us/j/63925499984?pwd=GOEI1rAQrMOXaIgFLG5B3IYle4Funr.1

Online seminar on optimal sampling

This Wednesday (April 17th), we are pleased to host a talk by Philipp Trunschke (PostDoc at Centrale Nantes & Nantes Université), which should be interesting to many of our members!

The seminar will take place online via the Zoom link below, and will also be projected live in room CM 1 517. https://epfl.zoom.us/j/61353461236?pwd=MnI2VkRMWlE2WUJxalRmNVJwc2JGQT09

Title: Optimal sampling for stochastic gradient descent

Abstract: Approximating high-dimensional functions often requires optimising a loss functional that can be represented as an expected value. When computing this expectation is infeasible, a common approach is to replace the exact loss with a Monte Carlo estimate before employing a standard gradient descent scheme. This results in the well-known stochastic gradient descent method. However, using an estimated loss instead of the true loss can result in a “generalisation error”. Rigorous bounds for this error usually require strong compactness and Lipschitz continuity assumptions while providing a very slow decay with increasing sample size. This slow decay is unfavourable in settings where high accuracy is required or sample creation is costly. To address this issue, we propose a new approach that involves empirically (quasi-)projecting the gradient of the true loss onto local linearisations of the model class through an optimal weighted least squares method. The resulting optimisation scheme converges almost surely to a stationary point of the true loss, and we investigate its convergence rate.
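To give a flavour of the optimal weighted least squares ingredient in the abstract (a simplified sketch, not the speaker's scheme), the following toy example projects a function onto a small polynomial basis using few samples drawn from the leverage-score density, with importance weights proportional to the inverse of that density. The target function, grid, basis, and sample size are all illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setting: approximate g on a fine grid by a cubic polynomial basis,
# sampling few points from the leverage-score ("optimal") density.
x_grid = np.linspace(-1, 1, 2000)
g = np.exp(x_grid)                                    # target function
V = np.stack([x_grid**k for k in range(4)], axis=1)   # monomial basis

# orthonormalize the basis columns over the grid
Q, _ = np.linalg.qr(V)
leverage = np.sum(Q**2, axis=1)       # row leverage scores
p = leverage / leverage.sum()         # optimal sampling density

n = 40
idx = rng.choice(len(x_grid), size=n, p=p)
w = 1.0 / (len(x_grid) * p[idx])      # importance weights ~ 1 / density
W = np.sqrt(w)[:, None]

# weighted least-squares projection of g onto the basis
coef, *_ = np.linalg.lstsq(W * Q[idx], W[:, 0] * g[idx], rcond=None)
approx = Q @ coef
err = np.max(np.abs(approx - g))      # close to the best-approximation error
```

With far fewer samples than grid points, the weighted projection is quasi-optimal: its error stays within a modest factor of the exact L2 projection, which is the property exploited when projecting gradients in the optimisation scheme.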

Philipp Trunschke studied Mathematics at the Humboldt University of Berlin, specialising in statistical learning theory. He completed his doctoral studies, focusing on tensor product approximation, at the Technical University of Berlin in 2018. Currently, he is working with Anthony Nouy in Nantes on compositional function networks and optimal sampling.

PhD prize in Quantitative Research by G-Research

The company G-Research is organizing a PhD prize in Quantitative Research of up to €5,000! It is open to PhD candidates in their final or penultimate year at EPFL, working in areas including, but not limited to, Machine Learning, Quantitative Finance, Mathematics, and Computer Science.

See the poster below or this EPFL webpage for application details. The deadline for applications is March 28th, 2024. Important: doctoral candidates must have the permission of their thesis director(s) to apply for the prize, since participating involves sharing thesis data with a private company!

Seminar on AlphaTensor – Francisco Ruiz (Google DeepMind)

We were honored to host Francisco Ruiz, Research Scientist at Google DeepMind, for an online seminar on AlphaTensor last Tuesday evening (05/12).

AlphaTensor is a deep reinforcement learning agent designed to automatically discover fast algorithms for matrix multiplication, a mathematical operation ubiquitous in science and engineering. It discovered new, improved algorithms with significant impact on both theory and practice.
Francisco spoke about its development and shared firsthand insights into its design.
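For context on what such discovered algorithms look like: AlphaTensor's schemes take the same form as Strassen's classic 1969 construction, which multiplies two 2x2 matrices with 7 scalar products instead of the naive 8. A minimal version (Strassen's scheme, not one of AlphaTensor's, shown purely for illustration):

```python
import numpy as np

def strassen_2x2(A, B):
    """Multiply two 2x2 matrices with 7 scalar products (Strassen, 1969).
    AlphaTensor searches for schemes of exactly this form, expressed as
    low-rank decompositions of the matrix-multiplication tensor."""
    a, b, c, d = A[0, 0], A[0, 1], A[1, 0], A[1, 1]
    e, f, g, h = B[0, 0], B[0, 1], B[1, 0], B[1, 1]
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return np.array([[m1 + m4 - m5 + m7, m3 + m5],
                     [m2 + m4, m1 - m2 + m3 + m6]])
```

Applied recursively to block matrices, saving one product per level is what drops the asymptotic complexity below cubic, and finding decompositions with fewer products is precisely the game AlphaTensor plays.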
Memento link

SIAM Career Event

We were delighted to host Matthew Wiener, Justina Ivanauskaite, and Radu Popescu for a Career Event last Monday evening (04/12). They presented their experience working in Applied Math, Computational Science, and Data Science across different industries (from pharmaceuticals to supply chain management to high-performance computing), making these career paths more concrete for Master's and PhD students in those fields. The event was followed by a networking apéro. Memento link

Fête des maths @ EPFL

Very happy to have had so many people come by our booth at the Fête des maths @ EPFL last Saturday (Nov 25th)! Members of the student chapter presented some of their numerical experiments, from fluid models of plasma, to thermal cloaking solutions, to simulations of blood flow in a carotid artery. Visitors of all ages and backgrounds stopped by, including an unexpected celebrity visitor, Maryna Viazovska!

Scimpact

We are happy to relay the announcement of the program Scimpact organized by Reatch, which may be of interest to some students at EPFL:

Do you want to learn how to write a good blog article or moderate a discussion? Do you want to be part of a young science community that wants to make a difference? Welcome to Scimpact! Scimpact is a training program for young people who want to bring science into societal debates. The program consists of hands-on workshops and 1:1 coaching, lasts 4 or 8 months, and offers you the chance to organize a public event! Apply by September 30 at www.reatch.ch/en/scimpact