SAINTS CDT partnership with NATS

Sarah Dow, R&D Capability and Delivery Manager at NATS, shares why NATS is supporting the SAINTS CDT, explains the exciting future of AI and aviation, and why it’s essential that SAINTS postgraduate researchers maintain strong ties with industry partners.

Why was your organisation interested in supporting the SAINTS CDT?

Safety is fundamental to everything that NATS does. In a typical year, NATS handles 2.5M flights through UK airspace and the North Atlantic. Our future systems will increasingly look to leverage AI, so the ability to develop and assure AI-based elements of our safety critical systems will be essential.

The emphasis of the CDT on human-centred AI is crucial to NATS as we research ways of supporting Air Traffic Controllers with autonomous capabilities and continue to develop the human-AI teaming concept, an example of which we prototyped through the recent NATS-York collaborative WIZARD project.

Security is also a significant and increasing focus for air traffic management systems, and the future platforms upon which these services are delivered will inevitably take advantage of distributed and cloud architectures that will need to be security assured. We are also very mindful of the need for responsible AI, and this requires full engagement with our staff, especially our controllers.

The CDT’s emphasis on safety assurance is also vital, as our systems must be approved and are continually assessed to ensure, and assure, that we provide safe and effective services.

NATS is keen to ensure that the unique challenges of air traffic control are tackled through ongoing world-class research activities that have industrial applicability and real-world value. The SAINTS CDT clearly reflects our research interests, and we believe that the CDT will help to equip the next generation of professionals with the skills necessary to address the challenges of the safety of AI-enabled autonomous systems.

We also see SAINTS as an opportunity to continue the long-standing partnership between NATS and the University of York. Over the last five years, we have jointly undertaken projects covering the design, development, validation and assurance of increased automation in complex safety-critical applications.

How do you see AI featuring in your organisation’s future?

NATS recognises the potentially transformative power of Artificial Intelligence (AI) in air traffic management, and is pursuing research, innovation and strategic partnerships to understand where AI technology can be safely integrated to enhance safety, efficiency and resilience.

AI-powered systems could optimise airspace utilisation, thereby reducing fuel consumption and emissions. Enhanced predictive capabilities could enable proactive management of traffic flows, mitigating delays and improving overall efficiency. AI technologies could also facilitate the integration of uncrewed aerial systems (UAS) into existing airspace structures, ensuring safe and seamless integrated operations.

NATS already utilises machine learning and optimisation tools, as well as advanced route planning and airspace management technologies, to enable more precise aircraft trajectories. By analysing vast amounts of data in real-time, AI algorithms can further assist in predicting and mitigating potential conflicts, thereby enhancing safety standards. AI-powered decision support tools could also support controllers in making informed decisions by providing them with accurate and timely information.

Looking ahead, we expect further evolution in AI implementation. One key area of focus is exploring AI applications in collaborative decision-making, enhancing situational awareness for controllers and strengthening human-machine collaboration.

We are actively investing in research and development, and collaborating with academic institutions and industry partners, in support of those future developments. Research efforts include investigating AI-driven automation techniques for routine tasks, which would enable controllers to focus on higher-level decision-making and complex scenarios, and exploring how AI could be validated, assured and implemented in an operational environment in collaboration with human Air Traffic Controllers (human-machine teaming).

Another key area of focus is exploring the trust, explainability and safety issues arising from the use of AI in air traffic management. This is an issue we take extremely seriously, and we have recently established an AI Policy Group to address it. Safety is, and always will be, our number one priority, so any technology deployed into the operational environment must first and foremost be safety assured.

What do you enjoy most about your role in your organisation? 

Working in Research and Development at NATS is highly enjoyable, as it offers the opportunity to explore new ideas and push the boundaries of innovation. In an industry as complex and safety-critical as airspace management, there is the additional challenge of ensuring that, whilst pushing those boundaries, safety remains at the heart of everything we do.

I get much enjoyment from leading the team and collaborating with like-minded colleagues across many disciplines, which helps to fuel creativity and bring fresh perspectives to ideas. The satisfaction of seeing a concept evolve from an initial idea into a tangible solution that makes a real-world impact adds a profound sense of accomplishment and purpose to my work.

What working achievement or initiative are you most proud of?

One of the working achievements that I am most proud of is an example of seeing an idea through from initial concept into implementation. I managed a large European project within the SESAR Programme which looked at making arrivals and departures at airports more efficient and resilient by optimising the separation needed between aircraft. I led the work from initial concept design through to validation and acceptance as a NATS investment project.

It now forms part of the Intelligent Approach suite of adaptive controller tools that safely optimise arrival spacing for all conditions to maximise runway capacity, maintain operational resilience and provide better on-time performance. It was implemented at Heathrow in 2018 and since then at a number of other airports across the world.

What one piece of advice might you have for SAINTS postgraduate researchers?

My advice would be to stay in regular touch with industry partners, which I know is a core part of the SAINTS CDT. This is crucial: it not only bridges the gap between academic theory and real-world application, but also helps to ensure that research remains relevant and addresses tangible problems, while opening doors to collaborative opportunities, resources and potential career pathways. NATS R&D is really looking forward to continuing to work with the University of York through the SAINTS CDT, to see what advances can be unlocked in the lifelong safety of increasingly autonomous systems within our highly complex, safety-critical industry.

Meet Dr Ana MacIntosh

What is your role in SAINTS, and what does it involve?

Director of Operations and Lead for Partnerships – I lead the professional services team responsible for the day-to-day running of SAINTS. Together, we will be working with our postgraduate researchers to make their SAINTS journey valuable, from recruitment through to graduation and keeping in touch with our alumni too.

The CDT is a complex organisational challenge, working slightly differently to a ‘regular’ PhD programme at the University of York, and it’s my responsibility to make sure that we keep everything and everyone on track. The SAINTS network of partners is valuable and diverse, and I’ll be working to make sure that we all get what we need from it – from placements to communicating new research, sponsorship or supporting our partners with recruitment.

Tell us about your research interests. What do you find most interesting or enjoyable about your work?

I don’t carry out research any more, although I have a background, and PhD, in tissue engineering (combining engineered material structures with biological cells to try to create new living tissues to repair damage). I have been working in the area of robotics, autonomy and AI for over 10 years, and I’ve always enjoyed working at the interface of different disciplines – it’s both rewarding to bring your own specialist knowledge to a group, and humbling to find out what you don’t know, or hadn’t thought about.

What working achievement or initiative are you most proud of?

I’m proud of the work I do to nudge people and processes towards better ways of doing things. I am currently involved in several non-standard projects, programmes and centres that I have supported since the first idea – for example the Centre for Assuring Autonomy – and I make sure that they deliver more than their expected outcomes. Since 2017, I’ve been involved with initiatives at York that have resulted in more than £75M of investment in the safety and assurance of autonomy – funding research, education and impact.

What’s next on the research horizon for you?

I’m interested in supporting policy change in the area of Safe AI, finding routes for York’s research to make a difference. For me, that’s all about communicating cutting-edge research in a way that’s accessible and engaging, and finding the right audiences.

What are your thoughts on the future of AI?

Having seen AI and robots up close, I’m excited about their potential for specific and targeted applications, but I’m not concerned that they’ll run away by themselves. AI, autonomy and robotics are tools – advanced ones of course – but we need to remember that people are responsible for their development and use, and those same people have the power to make good or bad decisions.

What one piece of advice do you have for SAINTS postgraduate researchers?

Learn to read and critique other people’s work – it’s the most useful skill you can develop and will help you not only to write better but also to spot inconsistencies and challenge the consensus in a constructive manner.

Meet Dr Colin Paterson

Colin is the Training co-lead and the Professional and Academic Skills lead for the SAINTS (UKRI AI Centre for Doctoral Training in Safe Artificial Intelligence Systems) CDT here at the University of York.

Colin plays an important role in SAINTS and the Department of Computer Science at York. He has helped to rebuild the PGR community since the return to campus after Covid, and his research considers the gap between the models used in autonomous systems and the real world, working at the interface between theory and practice.

What is your role in SAINTS, and what does it involve?

I am the SAINTS training lead, which means I’m responsible for designing a programme of study that supports our Postgraduate Researchers (PGRs). This means providing modules which allow our multidisciplinary cohort to speak the same language and share an understanding of the problems we face in assuring the safety of AI-enabled systems. It also means preparing them for life beyond the PhD: securing their first post and making an impact beyond the walls of the university.

Tell us about your research interests. What do you find most interesting or enjoyable about your work?

My research considers the gap between models used in autonomous systems and the real world into which they are deployed. This means understanding the impact of simplifying assumptions and uncertainty and the tools and techniques we can employ to make systems fit for purpose.

I want my work to live at the interface between theory and practice, not only finding where the issues are but also finding practical solutions which can impact industrial practice.

What working achievement or initiative are you most proud of?

The piece of published work I’m proudest of is my paper on “Assuring the machine learning lifecycle”. It was one of the first pieces of research I did as a research associate. The work has been the foundation of a good number of pieces of work since and has provided me with many opportunities for collaboration.

More generally, I’m proud of the work I have done in rebuilding the PGR community here since the return to campus after Covid. The PGRs we have in the department are super enthusiastic, and seeing them achieve so many wonderful things over the last two years has brought me great joy.

What’s next on the research horizon for you?

I’m working with some great people in the Institute for Safe Autonomy to build a platform for investigating the practical application of our safety assurance guidance. I’m really looking forward to this for a number of reasons. Firstly, I get to play with robotics again, something I haven’t done for a long time; secondly, I get to test out my ideas and see how well they work when the “rubber hits the road”; and finally, it gives me the opportunity to engage with organisations outside the university to develop practical guidance that helps them use our ideas in the wild.

Can you share some interesting work that you read about recently?

I’ve been reading about conformal prediction recently, which seems like an interesting idea that may have practical use in assuring safety. Conformal prediction is an approach for quantifying the uncertainty inherent in ML prediction algorithms: instead of a point estimate, it provides users with a set of possible ‘classes’, with a statistical guarantee on how often that set contains the true answer. We may be able to use approaches such as these to improve decision-making in autonomous systems.
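To make the idea concrete, here is a minimal illustrative sketch of split conformal prediction for classification. This is an editorial example, not drawn from the interview: the “classifier” is simulated by sampling the probability it assigns to the true class, rather than training a real model, and the coverage target and class counts are arbitrary toy choices.

```python
import numpy as np

# A minimal sketch of split conformal prediction for classification.
# Everything here is a toy assumption: the "classifier" is simulated by
# drawing the probability it assigns to the true class from a uniform
# distribution, rather than training a real model.

rng = np.random.default_rng(0)
n_cal = 1000  # size of the held-out calibration set

# Simulated calibration data: the probability the model assigned to the
# true class for each calibration example.
p_true = rng.uniform(0.3, 1.0, size=n_cal)

# Nonconformity score: 1 - probability of the true class (low = conforming).
nonconformity = 1.0 - p_true

# For a 90% coverage target, take the adjusted empirical quantile of the
# calibration scores as the threshold.
alpha = 0.1
q_level = np.ceil((n_cal + 1) * (1 - alpha)) / n_cal
qhat = np.quantile(nonconformity, q_level, method="higher")

def prediction_set(softmax_probs):
    """Return every class whose nonconformity score clears the threshold."""
    return [c for c, p in enumerate(softmax_probs) if 1.0 - p <= qhat]

# A confident prediction tends to yield a small set; an uncertain one, a
# larger set - the set size itself communicates the model's uncertainty.
print(prediction_set(np.array([0.85, 0.10, 0.05])))
print(prediction_set(np.array([0.45, 0.40, 0.15])))
```

The appeal for safety assurance is that, under mild exchangeability assumptions, the prediction set contains the true class at least 90% of the time here, regardless of how well calibrated the underlying model’s probabilities are.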

What one piece of advice do you have for SAINTS postgraduate researchers?

Remember this is meant to be fun, but fun doesn’t always mean easy!

What are your thoughts on the future of AI?

AI (including machine learning and data science) will change the world, but perhaps not in the ways we currently anticipate. For us to make a difference we need to be able to see the full picture, and that’s why, to me, SAINTS is so important. We need to be prepared to take a principled approach to safety and apply it to the technology as it develops.

Find out more about the SAINTS (UKRI AI Centre for Doctoral Training in Safe Artificial Intelligence Systems) CDT.

Meet Dr Phillip Morgan

Phillip leads the legal aspect of the interdisciplinary training within the SAINTS CDT.

He has a particular interest in tort law and contract, and his teaching and research are focused on these areas.

Phillip has published widely, and his academic work has been used in argument before the UK Supreme Court and cited with approval in courts worldwide.

Phillip is a member of the Law School at the University of York.

What is your role in SAINTS, and what does it involve?

I lead the legal aspect of our interdisciplinary training, and coordinate the involvement of law supervisors within SAINTS.

Tell us about your research interests. What do you find most interesting or enjoyable about your work?

I’m a private lawyer by background, with a particular interest in tort law and contract. For me, the replacement of human agents with AI systems has fascinating consequences for private law, particularly tort. We are yet to fully work out what these are, and how the law should adjust to these developments (if at all). I really enjoy the variety of activities available within academia, and the ability to pursue one’s own interests.

What working achievement or initiative are you most proud of?

During this academic year, I published two books. One of these is The Cambridge Handbook of Private Law and Artificial Intelligence (Cambridge University Press, 2024). It’s a large volume covering a wide range of private law topics and their interface with AI.

What’s next on the research horizon for you?

I’m currently completing a monograph for Cambridge University Press.

Can you share some interesting work that you read about recently?

Interesting work can come from a wide range of scholars at all levels, and I’m keen to champion the work of junior scholars.

One particularly interesting set of ideas I recently encountered was in a PhD thesis I examined on tort law and AI. It examined how different forms of liability might be constructed for forms of AI with different qualities – I won’t give too much away, as the individual will be publishing this material soon!

Another work I think is really worth reading on AI is that of my recent PhD student Stefano Faraoni. His work focuses on persuasive technologies (PTs) that use AI and how (if at all) contract law can be used to regulate the contracts formed as a result of exposure to PTs. It’s well worth reading his papers.

What are your thoughts on the future of AI?

The future of AI should involve professionals and academics from a wide variety of disciplines. Lawyers should be heavily involved in this process, but not simply as obstacles to progress, or as compliance checkers, but rather as informed and creative participants who meaningfully help to shape AI technologies.

What one piece of advice do you have for SAINTS postgraduate researchers?

Your ideas will evolve from the instincts you have within the first year of your programme, particularly as you become more well read within a field, and as you have time to reflect. Don’t be afraid to change your mind and revisit earlier assumptions.

Meet Professor Cynthia Iglesias

Cynthia is a Professor of Health Economics, and is the Equality, Diversity and Inclusion (EDI) lead for SAINTS. 

She has a track record of more than twenty years in health economics and outcomes research. Her rich research portfolio focuses on the development and evaluation of medical devices.

Cynthia is a member of the Department of Health Sciences and has been at the University of York since 1998.

What is your role in SAINTS, and what does it involve?

I provide academic leadership in health economics and health sciences for the SAINTS CDT. I am also responsible for developing and overseeing the practical implementation of the Equality, Diversity and Inclusion (EDI) strategy for SAINTS, ensuring that EDI principles inform and guide all CDT activity.

Tell us about your research interests. What do you find most interesting or enjoyable about your work?

My research interests focus on Health Technology Assessment. I enjoy learning about challenges in healthcare and collaborating with multidisciplinary teams to apply – or extend – existing mixed research methods to address these challenges.

What working achievement or initiative are you most proud of?

I really enjoyed my time as an independent member of the Medical Technologies Advisory Committee at the National Institute for Health and Care Excellence (NICE) in England.

What’s next on the research horizon for you?

I will continue to pursue my agenda of research in Evaluation of Medical Devices, including software as a medical device.

Can you share some interesting work that you read about recently?

My team and I are working on an early health technology assessment of GP video group consultations. It has been enlightening to learn about the perceived opportunities and challenges from different stakeholders (e.g., healthcare providers, patients, administrators, etc.) associated with this innovative model of healthcare provision.

What are your thoughts on the future of AI?

AI is associated with immense promise. At the same time, humanity needs to reflect on how it would like to make use of this resource. This is vital to ensure that all stakeholders in the digital ecosystem gain awareness of what safe and responsible use of AI may look like.

What one piece of advice do you have for SAINTS postgraduate researchers?

Be curious, civil and collaborative. Work hard, but don’t forget to enjoy it and look after yourself!

SAINTS welcomes new members of staff to the team

Meet Helen Poyer

Helen joined the SAINTS Centre for Doctoral Training as the Centre Manager in June 2024.

She has spent many years working in Professional Support Services (PSS) for several universities in London, and joined the University of York in 2011.

Her most recent role was Student Administration Manager (Postgraduate Research), where she had responsibility for the oversight and management of academic progression, examinations and awards relating to postgraduate researchers.

She was also responsible for the policies and procedures relating to research degrees, and oversaw a large portfolio of UKRI scholarships and other postgraduate funding.

Helen and her husband now live in York with their two children. She is originally from Middlesbrough, where its residents are affectionately known as ‘smoggies’, due to the long history of steel and chemical industry in the town. A little-known fact is that the Sydney Harbour Bridge is made with steel from Middlesbrough!

Meet Alex Blundell-Joyce

Alex joined the SAINTS team as the CDT’s Coordinator in July 2024.

Since 2018, Alex has held numerous roles across the University in communications, events and student services. In this time, she has worked with a variety of University staff, students and alumni.

Alex’s most recent role was coordinating the Undergraduate Administration Office within the Physics team in the School of Physics, Engineering and Technology.

Alex says: “I absolutely love working with students and I can’t wait to support the new SAINTS postgraduate researchers through this brand new and exciting journey!”

In her spare time, Alex loves reading, going on adventures with her husband and young daughter, and spending time with her family in her home town of Wigan.

Meet Dr Jennifer Chubb


Jenn is the Responsible AI lead and Co-investigator for SAINTS.

Her background is in philosophy and social science, and her work focuses on the role of responsibility in science and the public perception of science and technology.

Jenn is a member of the Department of Sociology here at the University of York. In this post, she tells us more about her own research, and her role in the SAINTS CDT.

What is your role in SAINTS, and what does it involve?

I am the Responsible AI lead for SAINTS and Co-Investigator from the Department of Sociology. My role involves coordinating training in Responsible AI (RAI) to ensure that students’ AI safety research is conducted responsibly, serves the public interest and responds to the needs of diverse stakeholders. I will also help students with their RAI action plans and RAI reflections from secondments with project partners.

Tell us about your research interests. What do you find most interesting or enjoyable about your work?

My research career began with my PhD, which focused on epistemic responsibility in science. I then researched scientific diplomacy and policy before moving into the ethics, public perception and science communication of emerging technologies.

Most of my work in recent years has focused on AI, especially its societal implications in health, education and the creative industries. I have a particular interest in responsibility, algorithmic justice and representation. What I find most enjoyable is working across disciplines. For instance, I am currently working with colleagues in Law and Linguistics on a project about the voice rights of the individual in the age of AI.

What working achievement or initiative are you most proud of?

I am most proud of my fellowship which focused on music and AI. It was great to design my own project on the way AI is portrayed to the public via documentaries and sound.

Relating to this, I am also very proud of some work I did on AI voices and conversational AI for children. This involved consulting for the BBC on the ethical use of conversational AI for children’s storytelling, which was very interesting.

What’s next on the research horizon for you?

Currently, my main research focus is a project funded by YorVoice focusing on the voice rights of the individual, where I am a Co-Investigator.

The project, ‘Setting the Legal Tone: Towards a framework for the protection of rights in voice personality’, explores whether a framework concerning the rights and responsibilities of voice personality can be developed. I am working with Peter Harrison (Principal Investigator for Law) and James Tompkinson (Co-Investigator for Languages and Linguistics).

Alongside this, I am continuing my research into attitudes towards AI generated music and I have a growing interest in the role of AI in mental health and educational practice.

Can you share some interesting work that you read about recently?

I have been reading about the environmental cost of AI, which worries me given the huge challenges we face with respect to climate change. It is a cost that is not discussed enough in the literature, although it has been gaining more attention of late, and it is too important for us to ignore.

Beyond this, I’ve been reading the work of the philosopher and cultural critic Walter Benjamin, particularly his essay ‘The Work of Art in the Age of Mechanical Reproduction’, which highlights issues of authenticity in original (art)works versus reproduced art.

What are your thoughts on the future of AI?

I don’t really think my thoughts matter: I think the horse has bolted.

What we can do now is attempt to minimise harm and raise awareness and understanding of this technology. Where AI is seen as a potential solution in areas where it is not yet used, my view is: “can we pause and ask if we need to use it at all?”

Things are moving too quickly to properly assess the impacts, but we know the fundamentals, and so many people are being left behind. We need to bring a range of views and voices into AI development, and big tech needs to stop and think (for once) – and stop firing their responsible AI teams.

What one piece of advice do you have for SAINTS postgraduate researchers?

Stay curious and be happy to change lanes. It might seem like interdisciplinary research is tough but you might just be surprised where that conversation will take you.

Meet Professor Ibrahim Habli


Ibrahim Habli is the Director of the UKRI AI Centre for Doctoral Training in Lifelong Safety Assurance of AI-enabled Autonomous Systems (SAINTS CDT).

Ibrahim’s expertise is in the design and assurance of safety-critical systems, with a particular focus on AI and autonomous systems. He is a member of the Department of Computer Science.

We caught up with Ibrahim to find out more about his work, and his thoughts on Artificial Intelligence.

What is your role in SAINTS and what does it involve?

As the SAINTS Director, I get to wear many hats. I love supporting and being part of all sorts of training, research and outreach activities, working closely with our fantastic doctoral researchers and the broader SAINTS community. That’s the best part of my job! I also handle things like strategy, budget, external relationships and – yes – even a bit of paperwork.

Tell us about your research interests. What do you find most interesting or enjoyable about your work?

My research focuses on understanding safety of complex systems, specifically software-intensive ones like AI, through an interdisciplinary lens.

I’m particularly interested in the conceptual foundations of safety – informed by collaborations with philosophers – and how those concepts translate to challenging real-world environments. Clinicians provide invaluable insights in this regard. I’m drawn to safety problems without easy answers, as they offer the greatest potential for impactful research.

What working achievement or initiative are you most proud of?

I’m incredibly proud of SAINTS!

This Centre for Doctoral Training embodies the best of academia: close student collaboration, amazing colleagues, unique partnerships, and an inclusive environment. It’s all in service of a crucial public good: making AI safer for everyone!

What’s next on the research horizon for you?

Establishing a foundation for safety science for AI, with SAINTS making a great push on this front!

Can you share some interesting work that you read about recently?

A book titled ‘There Is Nothing For You Here: Finding Opportunity in the Twenty-First Century’ by Fiona Hill. It covers a wide range of topics, including opportunities, personal struggles, the complexities of politics, and so much more.

What are your thoughts on the future of AI?

Common sense will prevail! Billionaires and politicians may eventually get bored of the existential risk narrative, allowing us to focus on helping the world realise the benefits of AI while mitigating any potential harm.

What one piece of advice do you have for SAINTS postgraduate researchers?

Immerse yourself in the doctoral research journey and actively participate in every aspect.