Welcome to the Lab for AI Verification

Artificial Intelligence is a research and engineering area that develops methods for adaptive and autonomous applications. When your mobile phone learns to recognise your voice, that is adaptive behaviour; when your car navigator suggests a better route, that is autonomous planning. Adaptive and autonomous applications have become pervasive in both the global economy and our everyday lives. But can we really trust them? Trust in computer systems has traditionally been the subject of the Formal Verification domain. The two domains, AI and Formal Verification, therefore have to meet.

LAIV is a team of researchers working on a range of inter-disciplinary problems that combine AI and Formal Verification.

For example, we seek answers to the following questions:

  • How do we establish the safety and security of AI applications?
  • What are the mathematical properties of AI algorithms?
  • How can types and functional programming help to verify AI?
  • How can we verify neural networks and other related machine-learning algorithms?
  • How can machine learning improve software verification?

The Lab was established in 2019, with the initial aim of providing a local hub where researchers and research students from the Edinburgh Centre for Robotics and the National Robotarium can meet with Computer Scientists, Logicians and Programming Language experts interested in verification of AI. Since then, the range of our projects and collaborations has widened. Descriptions of our projects can be found here, and here you can learn more about LAIV members and publications. Get in touch with us if you are interested in establishing a new collaboration!

LAIV News

New DAIR CDT Course on Safe AI

The LAIV group has started to teach an AI Safety course in the new DAIR CDT: the Centre for Doctoral Training in Dependable and Deployable AI for Robotics. Ekaterina, Colin and Marco teach relevant aspects of verification to Robotics and AI PhD students within this CDT.

Accepted paper at ITP’24

Many congratulations to Natalia Ślusarz, and her supporting collaborators and supervisors, on having the following paper accepted at ITP’24:

Taming Differentiable Logics with Coq Formalisation. Reynald Affeldt, Alessandro Bruni, Ekaterina Komendantskaya, Natalia Ślusarz, Kathrin Stark. https://arxiv.org/abs/2403.13700