The Oxford Internet Institute heads to Vancouver for NeurIPS 2024

Published on 6 December 2024

Several researchers and DPhil students from the Oxford Internet Institute (OII), University of Oxford, are set to attend the 38th annual Conference on Neural Information Processing Systems (NeurIPS) in Vancouver from 10-15 December 2024. As one of the world’s premier AI and machine learning conferences, NeurIPS brings together leading experts from diverse fields to discuss the latest advancements in algorithms, applications, and the societal implications of AI.

As AI continues to reshape society, NeurIPS increasingly addresses broader issues such as fairness, accountability, and transparency in AI systems, making it a key forum that brings together technical experts with social scientists, ethicists, and policymakers. The OII delegation will contribute to these discussions through a series of presentations and workshops, outlined below.

Workshops and presentations to watch

All times below are Pacific Time (UTC-8):

  • Tuesday 10th December at 6:00pm: Sofia Hafner (RA) at the Affinity Poster Session
  • Wednesday 11th December at 10:00am: Hannah Kirk (DPhil student) in Oral Session 1B
  • Thursday 12th December at 3:30pm: Andrew Bean (DPhil student) and Harry Mayne (DPhil student) in Oral Session 4A
  • Saturday 14th December at 8:20am: Andrew Bean (DPhil student) at AIM-FM: Advancements in Medical Foundation Models: Explainability, Robustness, Security, and Beyond
  • Saturday 14th December at 8:50am: Yushi Yang (DPhil student) and Andrew Bean (DPhil student) at FITML: Fine-Tuning in Modern Machine Learning: Principles and Scalability
  • Saturday 14th December at 11:00am: Yushi Yang (DPhil student), Harry Mayne (DPhil student), and Hannah Kirk (DPhil student) at the Socially Responsible Language Modelling Research (SoLaR) Workshop
  • Sunday 15th December at 2:10pm: William Hawkins (DPhil student) and Hannah Kirk (DPhil student) at the Safe Generative AI workshop
  • Sunday 15th December: Harry Mayne (DPhil student) at the Interpretable AI and Foundation Model Interventions workshops
  • Sunday 15th December: Hannah Kirk (DPhil student) at the Behavioral ML and Pluralistic Alignment workshops


OII researchers at NeurIPS 2024


Andrew Bean

Andrew Bean, a DPhil student at the OII, is presenting his work on “LingOly”, a new low-memorisation reasoning benchmark. Andrew’s research explores how LLMs perform when faced with out-of-domain tasks, such as solving Linguistics Olympiad puzzles. His presentation will take place during Oral Session 4A on Thursday, 12th December.

Beyond LingOly and PRISM, Andrew has two co-authored workshop papers at NeurIPS this year, including the AIM-FM Workshop paper on which he is first author: “Do Large Language Models have Shared Weaknesses in Medical Question Answering?”


Hannah Kirk

Hannah Kirk, a DPhil student, is presenting her work on PRISM (Pluralistic Representation and Interactive Social Modeling). This approach focuses on ensuring that diverse human perspectives are incorporated into AI systems, particularly language models. Her research, which draws on feedback from 1,500 people worldwide, underscores the importance of considering multiple viewpoints in aligning AI with human values. Hannah will present her work during Oral Session 1B on Wednesday, 11th December.

Hannah is also a co-author on two workshop papers: “Modulating Language Model Experiences through Frictions”, accepted at the Behavioral ML workshop (Sunday 15th December), and “Lexically-constrained automated prompt augmentation: A case study using adversarial T2I data” at the SafeGenAI workshop. She is also a joint author on the workshop paper “Beyond the Binary: Capturing Diverse Preferences With Reward Regularization” at the Socially Responsible Language Modelling Research (SoLaR) workshop.

In addition to presenting, she is giving keynotes at three workshops over the weekend of 14th-15th December: Behavioral ML (https://sites.google.com/view/behavioralml/), SoLaR (https://solar-neurips.github.io/), and Pluralistic Alignment (https://pluralistic-alignment.github.io/).


Harry Mayne

Harry Mayne, a DPhil student, is exploring interpretability in large language models. His paper, “Can Sparse Autoencoders Be Used to Decompose and Interpret Steering Vectors?” will be presented at the Interpretable AI and Foundation Model Interventions workshop on Sunday, 15th December. Harry’s research aims to understand how internal model structures can be controlled to improve model transparency and behaviour.

In addition, he will present “Ablation is Not Enough to Emulate DPO: How Neuron Dynamics Drive Toxicity Reduction” at the Socially Responsible Language Modelling Research (SoLaR) and Foundation Model Interventions workshops (work led by Yushi Yang, OII). He will also present “LINGOLY: A Benchmark of Olympiad-Level Linguistic Reasoning Puzzles in Low-Resource and Extinct Languages”, accepted as an oral at the main conference, on Thursday 12th December (work led by Andrew Bean, OII).


Sofia Hafner

Sofia Hafner, a Research Assistant, will present her findings on gendered biases in language models at the Affinity Poster Session on Tuesday, 10th December. Her research, which draws on gender theory, reveals significant issues in how language models handle gender identity, such as the absence of meaningful representations for nonbinary identities and harmful associations with mental illness.

After completing her MSc in Social Data Science in 2023/24, Sofia is now working with Dr Luc Rocher as a Research Assistant in the Synthetic Society research team. You can read her paper here.


William Hawkins

William Hawkins, a DPhil student, is researching AI safety, particularly in the context of open models and fine-tuning. Their paper, “The Effect of Fine-Tuning on Language Model Toxicity,” co-authored with Prof. Brent Mittelstadt and Prof. Chris Russell, will be presented at the Safe Generative AI workshop on Sunday, 15th December. The paper investigates how fine-tuning can affect the safety of generative models, an important issue as AI systems are increasingly deployed in sensitive environments. You can read their paper here.


Yushi Yang

Yushi Yang, a DPhil student, will present her work on Direct Preference Optimization (DPO) for toxicity reduction at the Socially Responsible Language Modelling Research (SoLaR) workshop on Saturday, 14th December. Yushi’s research explores methods to make AI systems safer and more responsible by refining the ways in which they process user feedback and preferences. You can reach out to her via email (yushi.yang@oii.ox.ac.uk) or LinkedIn. You can read her paper at: https://arxiv.org/abs/2411.06424.


The OII’s varied presence at NeurIPS 2024 in Vancouver highlights the Institute’s leadership in the field and its contributions to critical conversations about AI. To learn more about the OII’s research projects in AI, machine learning, and related areas, contact press@oii.ox.ac.uk.
