Welcome to the webpage of the Lab for AI and Verification (LAIV)

The Lab for AI and Verification opened in 2019, with the idea of providing a forum to support an interdisciplinary community of AI, Formal Methods and Verification researchers working in the wider Edinburgh area and interested in connecting these topics to the activities of the Edinburgh Centre for Robotics. Within LAIV, we had many exciting seminars and discussions, the archive of which is still available here. The biggest LAIV initiative has been the EPSRC project AISEC (AI Secure and Explainable by Construction), which investigated programming languages and formal methods approaches to AI verification. We had several cohorts of MSc and PhD students who developed their research ideas together with the Lab and graduated with amazing research projects submitted as MSc and PhD dissertations.

In 2024, together with other researchers in ECR, we celebrated the opening of a new Centre for Doctoral Training in Dependable and Deployable AI for Robotics. Prompted by this exciting new chapter, we are opening a new research theme at the Edinburgh Centre for Robotics, under the name Safe and Secure AI for Robotics (SAIR). You can find more about the activities of this new research theme here.

The SAIR Theme will seek to answer the following questions:

  • How can we ensure safety and security of autonomous systems with machine-learning components?
  • How can we verify neural networks deployed in robotics?
  • How can we ensure robust use of machine learning in security applications?
  • How can we define desirable formal properties of learning and autonomous systems?
  • What are the legal requirements for safe and secure autonomous systems?

Also on this page, you will find details of a new initiative that Dr Xiaohao Cai from Southampton University and I are leading: gathering a community of researchers interested in the Mathematical Foundations of AI.

The Maths-AI group will discuss the following questions:

  • What are the actual and desirable mathematical properties of machine learning algorithms?
  • How can types and functional programming help to verify AI and machine learning?
  • What are the mathematical underpinnings of the methods that allow us to verify neural networks and other related machine-learning algorithms?
  • How can machine learning improve software verification and theorem proving?

The LAIV webpage continues to be a portal for researchers working on a range of interdisciplinary problems that combine AI, Mathematics and Formal Verification. All events we advertise here are open to participants from any background or affiliation: join us at one of our physical locations or online!

Prof. Ekaterina Komendantskaya,

Heriot-Watt and Southampton Universities

LAIV News

Inaugural SAIR Lecture

We are opening the seminar series SAIR: Safe and Secure AI for Robotics. The inaugural lecture will be given by Prof Ram Ramamoorthy, University of Edinburgh, on “Safety and Trustworthiness of Human-Centred Autonomous Systems”. More details are available here: https://laiv.uk/sair-safe-and-secure-ai-for-robotics/

Talks on Formal Methods for Robotics

In the next two weeks, we are hosting two great talks on applications of formal methods in Robotics: https://laiv.uk/laiv-seminars/. If you are in the Edinburgh area, join us at Heriot-Watt or Edinburgh Universities. Otherwise, join online!

DAIR CDT New Course on Safe AI

The LAIV group has started to teach an AI Safety course in the new DAIR CDT: the Centre for Doctoral Training in Dependable and Deployable AI for Robotics. Ekaterina, Colin and Marco teach relevant aspects of verification to Robotics and AI PhD students within this CDT.