Schedule
Location: Corinthia, Room: Bastion 2
09:15 AM - 09:30 AM: Opening Remarks
09:30 AM - 10:30 AM: Keynote 1: Edoardo M. Ponti
10:30 AM - 11:00 AM: Coffee Break
11:00 AM - 12:20 PM: Session 1: Efficient Use of Adapters
- Papers:
- The Impact of Language Adapters in Cross-Lingual Transfer for NLU (Jenny Kunz and Oskar Holmström)
- Modular Adaptation of Multilingual Encoders to Written Swiss German Dialect (Jannis Vamvas, Noëmi Aepli and Rico Sennrich)
- Less is Fed More: Sparsity Reduces Feature Distortion in Federated Learning (Aashiq Muhamed, Harshita Diddee and Abhinav Rao)
- Toward the Modular Training of Controlled Paraphrase Adapters (Teemu Vahtola and Mathias Creutz)
12:20 PM - 02:00 PM: Lunch Break
02:00 PM - 03:00 PM: Session 2: Selection and Weighting of Modules
- Papers:
- Mixing and Matching: Combining Independently Trained Translation Model Components (Taido Purason, Andre Tättar and Mark Fishel)
- Sequence Shortening for Context-Aware Machine Translation (Paweł Maka, Yusuf Can Semerci, Jan Scholtes and Gerasimos Spanakis)
- What the Weight?! A Unified Framework for Zero-Shot Knowledge Composition (Carolin Holtermann, Markus Frohmann, Navid Rekabsaz and Anne Lauscher)
03:00 PM - 03:40 PM: Session 3: Tuning LLMs
- Papers:
- Soft Prompt Tuning for Cross-Lingual Transfer: When Less is More (Fred Philippy, Siwen Guo, Shohreh Haddadan, Cedric Lothritz, Jacques Klein and Tegawendé F. Bissyandé)
- Monolingual or Multilingual Instruction Tuning: Which Makes a Better Alpaca (Pinzhen Chen, Shaoxiong Ji, Nikolay Bogoychev, Andrey Kutuzov, Barry Haddow and Kenneth Heafield)
03:40 PM - 04:00 PM: Coffee Break
04:00 PM - 05:00 PM: Keynote 2: Angela Fan
05:00 PM - 05:20 PM: Closing Remarks
Detailed Information
Invited Speakers
Edoardo M. Ponti
Edoardo M. Ponti is a Lecturer (≈ Assistant Professor) in Natural Language Processing at the University of Edinburgh, where he is part of the Institute for Language, Cognition, and Computation (ILCC), and an Affiliated Lecturer at the University of Cambridge. Previously, he was a visiting postdoctoral scholar at Stanford University and a postdoctoral fellow at Mila and McGill University in Montreal. In 2021, he obtained a PhD in computational linguistics from the University of Cambridge, St John's College. His main research foci are modular deep learning, sample-efficient learning, faithful text generation, computational typology, and multilingual NLP. His research earned him a Google Research Faculty Award and two Best Paper Awards, at EMNLP 2021 and RepL4NLP 2019. He is a board member and co-founder of SIGTYP, the ACL special interest group for computational typology, and a scholar of the European Lab for Learning and Intelligent Systems (ELLIS). He is a (terrible) violinist, football player, and an aspiring practitioner of heroic viticulture.
Angela Fan
Angela Fan is a research scientist at Meta AI Research in New York, focusing on text generation. Currently, Angela works on language modeling and on developing AI agents for Meta products. Recent research projects include No Language Left Behind, Universal Speech Translation for Unwritten Languages, and Llama 2.