**Applications are now closed. Any questions? Contact Suryansh, the organiser, at [email protected].**

About

Oxford Philosophy for Safe AI (OxPSAI) is a team of researchers using tools or concepts from academic philosophy to reduce society-scale risks from AI. We work on projects that either help to clarify these risks or directly make AI systems safer through progress in areas such as decision theory. This is an exciting opportunity to (i) get more people working on the problems we’re excited about, and in which students can make a big difference, and (ii) help individuals upskill, test their fit, and build career capital for careers in philosophy or AI.

Every term, you will take on a short, well-scoped research project and spend 5-10 hours on it per week as part of a 2-5 person team. Your exact responsibilities will depend on the nature of the project and on your experience and background. However, we want everyone involved in OxPSAI to have ownership over their work: particularly for those with relevant expertise, we expect most researchers to take on mini-projects rather than RA-type work. Teams will meet weekly to co-work, sync up on progress, and raise questions and concerns.

Project Leads

[Open] Our currently open projects are led by Brad Saad.

Brad Saad is a Senior Research Fellow in philosophy at Oxford’s Global Priorities Institute. His past research has focused on phenomenal consciousness and mental causation, their place in the world, and empirical constraints on theorizing about them. More recently, he has been thinking about digital minds, catastrophic risks, and the long-term future.

[Closed] Our other active projects are led by Adam Bales, Elliott Thornley, and Andreas Mogensen.

Adam Bales is a Senior Research Fellow in philosophy at Oxford’s Global Priorities Institute and serves as its Assistant Director for philosophy. He completed a PhD at Cambridge University, and his research includes work on decision theory and normative ethics. He is interested in the role that philosophy can play in clarifying, and helping us engage with, the perils and promise of AI.

Andreas Mogensen is a Senior Research Fellow in philosophy at Oxford’s Global Priorities Institute. Before coming to GPI, he worked as a Tutorial Fellow at Jesus College, and was an Examination Fellow at All Souls College from 2010 to 2015. His current research interests are primarily in normative and applied ethics. His previous publications have addressed topics in meta-ethics and moral epistemology, especially those associated with evolutionary debunking arguments.

Elliott Thornley is a Postdoctoral Research Fellow at the Global Priorities Institute and a Research Affiliate at the Center for AI Safety. He completed a PhD in Philosophy at Oxford University, where he wrote about the moral importance of preventing global catastrophes and protecting future generations. He is now using techniques from decision theory to predict the likely behaviour of advanced artificial agents. He is also investigating ways we might ensure that these agents obey human instructions and allow themselves to be turned off.

[Open] New Projects

Brad’s projects primarily concern the moral patiency of AI. Specific areas include: