Meet Dr Laura Fearnley

Headshot of Dr Laura Fearnley

What is your role in SAINTS, and what does it involve?

I’m research co-lead for SAINTS. My role sits in the core management team and involves shaping SAINTS’ long-term research strategy, as well as helping to ensure that our interdisciplinary work aligns with the CDT’s broader goals and values.

Tell us about your research interests. What do you find most interesting or enjoyable about your work?

I work on the ethics and safety of AI systems. My background is in philosophy, specifically moral philosophy and causation. I’m interested in exploring the ways in which philosophy can help us build safe AI systems. In my current role, I have the pleasure of collaborating with people from a range of disciplines. Working with technical colleagues, in particular, is both rewarding and essential to advancing AI systems responsibly. I very much enjoy being at the intersection where theory meets practice.

What working achievement or initiative are you most proud of?

I’m especially proud of the initiatives I’ve organised which create spaces for voices that are sometimes left out, from feminist reading groups to conferences on minorities in philosophy. Fostering a vibrant and supportive research community isn’t just good academic citizenship; it’s necessary for meaningful, successful work.

What’s next on the research horizon for you?

I’m currently looking at using synthetic data for AI-based predictions. There are many benefits to synthetic data; it can reduce bias, protect individual privacy, and expand datasets in controlled ways. But there may be costs. I’m exploring the risks of using synthetic data, with the aim of answering questions about when and how it can be used responsibly, and what safeguards are needed to ensure it supports fairness and transparency.

Can you share some interesting work that you read about recently?

I’ve recently re-read “On the Site of Predictive Justice” (Lazar and Stone, 2023). The authors argue that ML-based predictions can be unjust. That is, there can be moral grounds for criticising the predictions themselves, independent of the harmful downstream effects of these predictions. I think the paper is an excellent example of interesting and important philosophical work on AI.

What are your thoughts on the future of AI?

The future of AI is both exciting and deeply uncertain. On the optimistic side, AI holds huge promise: it can accelerate science, expand access to education and healthcare, help address climate change, and automate tedious work. It’s a powerful tool that, if steered wisely, could contribute enormously to human flourishing. But there are serious concerns about misuse, misalignment, and concentration of power. As AI systems become more capable, the stakes get higher. The future of AI is what we make of it. And that means asking not just what AI can do, but what it should do, and who gets to decide.

What one piece of advice do you have for SAINTS postgraduate researchers?

Read. Read stuff not remotely related to your PhD. Reading fiction, in particular, makes me a better thinker and writer.
