Welcome to the Lab for AI Verification

Artificial Intelligence (AI) is a research and engineering area that develops methods for adaptive and autonomous applications. When your mobile phone learns to recognise your voice, that is adaptive behaviour; when your car navigator suggests a better route, that is prototypical autonomous planning. Adaptive and autonomous applications have become pervasive in both the global economy and our everyday lives. But can we really trust them? Trust in computer systems is traditionally the subject of Formal Verification. The two domains, AI and Formal Verification, therefore have to meet.

LAIV is a team of researchers working on a range of inter-disciplinary problems that combine AI and Formal Verification.

For example, we seek answers to the following questions:

  • How do we establish safety and security of AI applications?
  • What are the mathematical properties of AI algorithms?
  • How can types and functional programming help to verify AI?
  • How can we verify neural networks and related machine-learning algorithms?
  • How can machine learning improve software verification?

The Lab was established in 2019 with the initial intent of providing a local hub where researchers and research students from the Edinburgh Centre for Robotics and the National Robotarium can meet Computer Scientists, Logicians and Programming Language experts interested in the verification of AI. Since then, the range of our projects and collaborations has widened. Descriptions of our projects can be found here, and you can learn more about LAIV members and publications here. Get in touch with us if you are interested in establishing a new collaboration!

LAIV News

CAV’22 contribution

Many congratulations to Marco Casadio et al. on having a paper accepted at the International Conference on Computer-Aided Verification, CAV'22 (part of FLoC'22): Marco Casadio, Ekaterina Komendantskaya, Matthew L. Daggitt, Wen Kokke, Guy Katz, Guy Amir and Idan Refaeli, "Neural Network Robustness as a Verification Property: A Principled Case Study".