Online seminar on optimal sampling

This Wednesday (April 17th), we are pleased to host a talk by Philipp Trunschke (postdoc at Centrale Nantes & Nantes Université), which should be of interest to many of our members!

The seminar will take place online via the Zoom link below, and will also be projected live in room CM 1 517. https://epfl.zoom.us/j/61353461236?pwd=MnI2VkRMWlE2WUJxalRmNVJwc2JGQT09

Title: Optimal sampling for stochastic gradient descent

Abstract: Approximating high-dimensional functions often requires optimising a loss functional that can be represented as an expected value. When computing this expectation is infeasible, a common approach is to replace the exact loss with a Monte Carlo estimate before employing a standard gradient descent scheme. This results in the well-known stochastic gradient descent method. However, using an estimated loss instead of the true loss can result in a “generalisation error”. Rigorous bounds for this error usually require strong compactness and Lipschitz continuity assumptions, while providing only a very slow decay with increasing sample size. This slow decay is unfavourable in settings where high accuracy is required or sample creation is costly. To address this issue, we propose a new approach that involves empirically (quasi-)projecting the gradient of the true loss onto local linearisations of the model class through an optimal weighted least squares method. The resulting optimisation scheme converges almost surely to a stationary point of the true loss, and we investigate its convergence rate.
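To illustrate the Monte Carlo estimation step described in the abstract (plain stochastic gradient descent, not the optimal weighted least-squares scheme of the talk), here is a minimal sketch. The linear model, target weights, batch size, and step size are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setting: fit theta in a linear model f_theta(x) = theta @ x
# to a target g(x) = w_true @ x by minimising the expected loss
# L(theta) = E_x[(f_theta(x) - g(x))^2] over x ~ N(0, I).
w_true = np.array([1.0, -2.0, 0.5])
theta = np.zeros(3)

step = 0.05
for _ in range(2000):
    # The expectation is replaced by a Monte Carlo estimate
    # over a small batch of samples, as in the abstract.
    x = rng.standard_normal((8, 3))        # batch of 8 samples of x
    residual = x @ theta - x @ w_true      # f_theta(x) - g(x) per sample
    grad = 2 * (x * residual[:, None]).mean(axis=0)
    theta -= step * grad                   # standard SGD update
```

After the loop, `theta` is close to `w_true`; the point of the talk is that bounding how close, for general model classes, requires the stronger machinery sketched in the abstract.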

Philipp TRUNSCHKE studied Mathematics at the Humboldt University of Berlin, specialising in statistical learning theory. He completed his doctoral studies, focusing on tensor product approximation, at the Technical University of Berlin in 2018. He is currently working with Anthony NOUY in Nantes on compositional function networks and optimal sampling.

Seminar on AlphaTensor – Francisco Ruiz (Google DeepMind)

We were honoured to host Francisco Ruiz, Research Scientist at Google DeepMind, for an online seminar on AlphaTensor last Tuesday evening (05/12).

AlphaTensor is a deep reinforcement learning agent designed to automatically discover fast algorithms for matrix multiplication, a mathematical operation ubiquitous in science and engineering. It discovered new, faster algorithms, with significant implications for both theory and practice.
Francisco spoke about the agent's development and shared firsthand insights into its design.
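For context on what a “fast algorithm for matrix multiplication” looks like, the classic predecessor of the schemes AlphaTensor searches for is Strassen's 1969 algorithm, which multiplies two 2×2 matrices using 7 scalar multiplications instead of the naive 8. A minimal sketch (this is Strassen's well-known scheme, not one of AlphaTensor's own decompositions):

```python
def strassen_2x2(A, B):
    # Multiply 2x2 matrices A and B (nested lists) with 7 multiplications.
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    # Recombine the 7 products into the 4 entries of A @ B.
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]
```

Applied recursively to block matrices, saving one multiplication per level yields an O(n^2.81) algorithm; AlphaTensor automates the search for such decompositions in larger settings.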
Memento link

G-Research Quant Finance Challenge

On Thursday 3rd November 2022, G-Research came to the EPFL campus for a “Quant Finance Challenge”, an algorithmic-trading game. In teams of 2-3, over 100 Master's students, PhD students, and postdocs tried their hands (and their Python skills) at a few problems inspired by quantitative finance.

The Challenge was followed by pizza and drinks, and the opportunity to talk with Quant Researchers and Machine Learning Specialists from G-Research.

Jane Street estimathon

We hosted two events organised by the trading company Jane Street on Thursday 10th March 2022, in collaboration with CLIC and EPFelles (two EPFL student associations). Slightly over 130 participants joined for

  • a Tech Talk by Andrey Mokhov from Jane Street: “Algorithmic challenges in build systems and incremental computation”,
  • and an Estimathon Game, in which teams had 30 minutes to work on a set of 13 estimation problems; the winning team was the one with the best set of estimates.

This was followed by pizza and a Q&A with a Jane Street Trader, Software Engineer, and Recruiters.