Colin is Training co-lead and Professional and Academic Skills lead for the SAINTS (UKRI AI Centre for Doctoral Training in Safe Artificial Intelligence Systems) CDT here at the University of York.
Colin plays an important role in SAINTS and the Department of Computer Science at York. He has helped to rebuild the postgraduate researcher (PGR) community since the return to campus after Covid, and he hopes his work will live at the interface between theory and practice, addressing the gap between models used in autonomous systems and the real world.
What is your role in SAINTS, and what does it involve?
I am the SAINTS training lead, which means I'm responsible for designing a programme of study that supports our postgraduate researchers (PGRs). This means providing modules that allow our multi-disciplinary cohort to speak the same language and share an understanding of the problems we face in assuring the safety of AI-enabled systems. It also means preparing them for life beyond the PhD: securing their first post and making an impact beyond the walls of the university.
Tell us about your research interests. What do you find most interesting or enjoyable about your work?
My research considers the gap between the models used in autonomous systems and the real world into which they are deployed. This means understanding the impact of simplifying assumptions and uncertainty, and the tools and techniques we can employ to make systems fit for purpose.
I want my work to live at the interface between theory and practice, not only finding where the issues are but also finding practical solutions that can influence industrial practice.
What working achievement or initiative are you most proud of?
The piece of published work I'm proudest of is my paper on "Assuring the machine learning lifecycle". It was one of the first pieces of research I did as a research associate. It has since been the foundation of many subsequent projects and has provided me with many opportunities for collaboration.
More generally, I'm proud of the work I have done in rebuilding a PGR community here since returning from Covid. The PGRs we have in the department are hugely enthusiastic, and seeing them achieve so many wonderful things over the last two years has brought me great joy.
What’s next on the research horizon for you?
I'm working with some great people in the Institute for Safe Autonomy to build a platform for investigating the practical application of our safety assurance guidance. I'm really looking forward to this for a number of reasons. Firstly, I get to play with robotics again, something I haven't done for a long time; secondly, I get to test my ideas and see how well they work when the "rubber hits the road"; and finally, it gives me the opportunity to engage with organisations outside the university to develop practical guidance that helps them use our ideas in the wild.
Can you share some interesting work that you read about recently?
I've been reading about conformal prediction recently, which seems like an interesting idea that may have practical use in assuring safety. Conformal prediction is an approach for quantifying the uncertainty inherent in ML prediction algorithms, providing users with a set of possible 'classes' rather than a single point estimate. We may be able to use approaches such as these to improve decision-making in autonomous systems.
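To make the idea concrete, here is a minimal sketch of split conformal prediction for classification. Everything here is a toy assumption: the "softmax scores" are randomly generated stand-ins for a trained model's outputs, and the nonconformity score (one minus the softmax probability of the true class) is just one common choice.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic calibration data for a 3-class problem (stand-ins for a real model).
n_cal, n_classes = 500, 3
cal_scores = rng.dirichlet(np.ones(n_classes), size=n_cal)  # fake softmax outputs
cal_labels = rng.integers(0, n_classes, size=n_cal)         # fake true labels

# 1. Nonconformity score: 1 - softmax probability assigned to the true class.
nonconformity = 1.0 - cal_scores[np.arange(n_cal), cal_labels]

# 2. Conformal quantile for target coverage 1 - alpha (here 90%).
alpha = 0.1
q_level = np.ceil((n_cal + 1) * (1 - alpha)) / n_cal
qhat = np.quantile(nonconformity, q_level, method="higher")

# 3. Prediction *set* for a new example: every class whose nonconformity
#    score falls at or below the calibrated threshold.
new_scores = rng.dirichlet(np.ones(n_classes))
prediction_set = [c for c in range(n_classes) if 1.0 - new_scores[c] <= qhat]
print(prediction_set)  # a set of plausible classes, not a single point estimate
```

The appeal for safety assurance is the coverage guarantee: under exchangeability, the true class falls inside the prediction set with probability at least 1 - alpha, so a downstream decision-maker can tell when the model is genuinely uncertain (the set is large) rather than being handed a single, possibly overconfident, label.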
What one piece of advice do you have for SAINTS postgraduate researchers?
Remember this is meant to be fun, but fun doesn’t always mean easy!
What are your thoughts on the future of AI?
AI (including machine learning and data science) will change the world but this may not be in ways which we currently anticipate. For us to make a difference we need to be able to see the full picture and that’s why, to me, SAINTS is so important. We need to be prepared to take a principled approach to safety and apply that to the technology as it develops.
Find out more about the SAINTS (UKRI AI Centre for Doctoral Training in Safe Artificial Intelligence Systems) CDT.