Applications are now closed. Any questions? Contact Suryansh, the organiser, at [email protected].

About

The Oxford Group on AI Policy (OxGAP) is a team of researchers seeking to inform governments’ responses to AI. We work on projects that aim to make progress on important questions that can guide policy, such as how the deployment of advanced AIs will affect economic growth, and what standards or regulations might be appropriate for reducing harms from AI. This is an exciting opportunity to (i) get more people working on problems we’re excited about and where students can make a real difference, and (ii) help individuals upskill, test their fit, and build career capital for careers in policy.

You’ll take on a short, well-scoped research project and spend 5-10 hours on it per week as part of a 2-5 person team. Your exact responsibilities will depend on the nature of the project and on your experience and background. However, we want everyone involved in OxGAP to have ownership over their work; particularly for those with relevant expertise, we expect most researchers to take on mini-projects rather than RA-type work. Teams will meet weekly to co-work, sync up on progress, and clarify questions and concerns.

Why are we doing this?

Current Project Leads

Claire Dennis is a Research Scholar at GovAI. Her research focuses on international AI governance, diplomacy, and the role of multilateral institutions in frontier AI regulation. Prior to joining GovAI, Claire served as a U.S. diplomat and consultant at the United Nations Executive Office of the Secretary-General. She holds a Master's in Public Affairs from Princeton University and a B.A. in International Affairs from the George Washington University.

Ben Bucknall is a Research Scholar at GovAI, focusing on technical topics with downstream implications for AI policy and governance. His previous work has explored system access requirements for AI research and evaluation, as well as responsible deployment best practices. Ben holds an MMath in Mathematics from Durham University and an MSc in Computational Science from Uppsala University. He has also worked as an intern at the University of Cambridge.

Konstantin Pilz is pursuing an MA in Security Studies at Georgetown University, with a focus on emerging technology. He currently works as a semester analyst at CSET's Emerging Technology Observatory, conducting independent research on compute governance. Konstantin previously worked as a research assistant to Lennart Heim at the Centre for the Governance of AI, where he investigated data centers and their role in AI development and policy. His current research centers on the implications of increasing compute efficiency and the regulation of compute providers and data centers.

Current Projects

Here’s some information on projects that lie within our areas of expertise, that we feel comfortable working on with others, and that we’re excited about. We, or other project leads we find, may well offer different projects too, so please still apply even if none of these are particularly appealing.

Structured Access for Auditing and Evaluation (Ben)

'Sensitive Compartmented Information Facilities' (SCIFs) for AI (Ben)

UN AI Agency (Claire)

AI Driven Development: Scaling Compute Infrastructure (Claire)

Concentration of Power and the Techno-Polar World Order (Claire)

Using advanced AI models to defend against proliferation of weaker ones (Konstantin)