Welcome to the Lab for AI Verification

Artificial Intelligence is a research and engineering area that develops methods for adaptive and autonomous applications. When your mobile phone learns to recognise your voice, that is adaptive behaviour; when your car navigator suggests a better route, that is autonomous planning. Adaptive and autonomous applications have become pervasive in both the global economy and our everyday lives. But can we really trust them? Trust in computer systems is traditionally the subject of the Formal Verification domain. The two domains, AI and Formal Verification, therefore have to meet.

LAIV is a team of researchers working on a range of inter-disciplinary problems that combine AI and Formal Verification.

For example, we seek answers to the following questions:

  • How do we establish safety and security of AI applications?
  • What are the mathematical properties of AI algorithms?
  • How can types and functional programming help to verify AI?
  • How can we verify neural networks and other related machine-learning algorithms?
  • How can machine learning improve software verification?
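As a small illustration of the question "how can we verify neural networks?", here is a toy sketch of interval bound propagation, one standard technique in this area. This is not LAIV's own method, and the network, weights, and property below are entirely made up for the example:

```python
# Toy sketch (hypothetical weights): propagate an input box through a tiny
# ReLU network and check that the output provably stays below a threshold.

def affine_bounds(lo, hi, W, b):
    """Sound output bounds of y = W x + b over the input box [lo, hi]."""
    out_lo, out_hi = [], []
    for row, bias in zip(W, b):
        l = h = bias
        for w, xl, xh in zip(row, lo, hi):
            if w >= 0:
                l += w * xl
                h += w * xh
            else:  # negative weight flips which endpoint gives the extreme
                l += w * xh
                h += w * xl
        out_lo.append(l)
        out_hi.append(h)
    return out_lo, out_hi

def relu_bounds(lo, hi):
    """ReLU is monotone, so it can be applied endpoint-wise."""
    return [max(0.0, l) for l in lo], [max(0.0, h) for h in hi]

# A 2-input, 2-hidden-unit, 1-output network with invented weights.
W1, b1 = [[1.0, -1.0], [0.5, 0.5]], [0.0, -0.25]
W2, b2 = [[1.0, 1.0]], [0.0]

# Property: for every input in the box [0,1] x [0,1], output <= 2.
lo, hi = [0.0, 0.0], [1.0, 1.0]
lo, hi = affine_bounds(lo, hi, W1, b1)
lo, hi = relu_bounds(lo, hi)
lo, hi = affine_bounds(lo, hi, W2, b2)
print(lo[0], hi[0])   # sound bounds on the output over the whole box
print(hi[0] <= 2.0)   # True means the property is verified
```

Because the computed bounds over-approximate the network's true output range, a `True` answer is a proof that the property holds for all inputs in the box, while a `False` answer is inconclusive.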

Descriptions of our projects can be found here.



Two LAIV talks at PPDP’20

Ali and Katya gave two talks at PPDP’20 this week: Proof-Carrying Plans: a Resource Logic for AI Planning and Continuous Verification of AI: a Declarative Programming Approach. Together, the talks cover the two main strands of AI methods: modelling reasoning and modelling perception.

APLAS’20 publication by the LAIV team

Many congratulations to Wen, Katya, Daniel, David and Bob for having their paper Neural Networks, Secure by Construction: An Exploration of Refinement Types accepted for publication and presentation at APLAS’20 (the 18th Asian Symposium on Programming Languages and Systems)!