Research Seminar "Machine Learning Theory"
This is the research seminar of Ulrike's group.

When and where
Every Thursday, 14:00 - 15:00, seminar room, 3rd floor, MvL6.

What
Most sessions take the form of a reading group: everybody reads the assigned paper before the meeting, and we then discuss it jointly. Sometimes we also have talks by guests or members of the group.

Who
Everybody who is interested in machine learning theory: students, PhD students, and researchers of the University of Tübingen. We do not mind people dropping in and out depending on whether they find the current session interesting.

Upcoming meetings
- 16.3.2023 (Paper discussion) High-dimensional analysis of double descent for linear regression with random projections, pdf
- 23.3.2023 (Paper discussion, who?) Iterative Teaching by Data Hallucination, AISTATS 2023, pdf
- 30.3.2023 Exceptionally at 9:00 Talk by Gunnar Koenig, Title: Improvement-Focused Causal Recourse
- 6.4.2023 Easter break
- 13.4.2023 Easter break
- 20.4.2023 Loss Landscapes are All You Need: Neural Network Generalization Can Be Explained Without the Implicit Bias of Gradient Descent. ICLR 2023
- 27.4.2023 tba
Past meetings
Listed here.

Suggested papers for future meetings
Feel free to make suggestions! If you do, please (i) prefer short conference papers over 40-page journal papers; (ii) put your name next to your suggestion; it does not mean you have to present the paper, but it tells us where the suggestion comes from; (iii) provide a link, not just a title.
- Who Should Predict? Exact Algorithms For Learning to Defer to Humans, AISTATS 2022, pdf (Ulrike)
- Performative Prediction by Perdomo, Zrnic, Mendler-Dünner and Hardt (ICML'20) (David)
- Fast rates for noisy interpolation require rethinking the effects of inductive bias, by Konstantin Donhauser, Nicolo Ruggeri, Stefan Stojanovic, and Fanny Yang (ICML 2022) (Moritz)
- Understanding contrastive learning requires incorporating inductive biases, 2021 pdf
- On-Demand Sampling: Learning Optimally from Multiple Distributions. by Nika Haghtalab, Michael Jordan, Eric Zhao (NeurIPS 22) (Moritz)
- Beyond neural scaling laws: beating power law scaling via data pruning. by Ben Sorscher, Robert Geirhos, Shashank Shekhar, Surya Ganguli, Ari S. Morcos (NeurIPS 22) (Moritz)
- A Kernel-Based View of Language Model Fine-Tuning, by Sadhika Malladi, Alexander Wettig, Dingli Yu, Danqi Chen, Sanjeev Arora, pdf