Research Seminar "Machine Learning Theory"

This is the research seminar of Ulrike's group.

Attention: we have changed the time and room.

When and where

Every Thursday, 13:00 - 14:00, seminar room on the 2nd floor (!!!), MvL6.

What

Most sessions take the form of a reading group: everybody reads the assigned paper before the meeting, and we then discuss the paper together. Sometimes we also have talks by guests or by members of the group.

Who

Everybody who is interested in machine learning theory: students, PhD students, and researchers at the University of Tübingen. We do not mind people dropping in and out depending on whether they find the current session interesting.

Upcoming meetings

  • Nov 16, 2023, paper discussion (Moritz): Feature Learning in Infinite-Width Neural Networks, Yang, Hu, 2020. pdf (If you are really interested, a gentler, more experimental read on the same effect is this paper.)
  • Nov 23, 2023, paper discussion (Sebastian): What Algorithms can Transformers Learn? A Study in Length Generalization, Hattie Zhou, Arwen Bradley, Etai Littwin, Noam Razin, Omid Saremi, Josh Susskind, Samy Bengio, Preetum Nakkiran, 2023. pdf. To understand this paper, it is very helpful to know the basics of the transformer architecture.
  • Nov 30, 2023, symposium on AI and law (all afternoon)
  • Dec 7, 2023, Ulrike's test talk for the NeurIPS XAI workshop
  • Dec 13, 2023, tba
  • Dec 20, 2023, no reading group; we resume in January
  • Jan 11, 2024, tba
  • Jan 18, 2024, tba
  • Jan 25, 2024, tba
  • Feb 1, 2024, tba
  • Feb 8, 2024, no reading group (IMPRS interviews)

Past meetings

Listed here.

Suggested papers for future meetings

Feel free to make suggestions!
If you do, please (i) try to select short conference papers rather than 40-page journal papers; (ii) add your name to your suggestion; this does not mean that you have to present the paper, but it lets us see where the suggestion comes from; (iii) provide a link, not just a title.
  • Robust Explanation for Free or At the Cost of Faithfulness. ICML 2023. link (Ulrike)
  • Trade-off Between Efficiency and Consistency for Removal-based Explanations, NeurIPS 2023. link (Ulrike)
  • Locally Invariant Explanations: Towards Stable and Unidirectional Explanations through Local Invariant Learning. link (Ulrike)
  • But Are You Sure? An Uncertainty-Aware Perspective on Explainable AI, AISTATS 2023 (Ulrike)
  • Risk minimization by median-of-means tournaments (Ulrike)
  • G. Lopardo, F. Precioso, D. Garreau, A Sea of Words: An In-Depth Analysis of Anchors for Text Data, AISTATS 2023 (Ulrike)
  • Who Should Predict? Exact Algorithms For Learning to Defer to Humans, AISTATS 2023. pdf (Ulrike)
  • Understanding contrastive learning requires incorporating inductive biases, 2022. pdf
  • On-Demand Sampling: Learning Optimally from Multiple Distributions, Nika Haghtalab, Michael Jordan, Eric Zhao, NeurIPS 2022. pdf (Moritz)
  • Getting Aligned on Representational Alignment, 2023 pdf (David)
  • On Provable Copyright Protection for Generative Models, ICML 2023 pdf (Peru)