Welcome to the Lab for AI and Verification

Artificial Intelligence is a research and engineering area that develops methods for adaptive and autonomous applications. When your mobile phone learns to recognise your voice, that is adaptive behaviour; when your car navigator suggests a better route, that is prototypical autonomous planning. Adaptive and autonomous applications have become pervasive in the global economy and in our everyday lives. But can we really trust them? Trust in computer systems is traditionally the subject of Formal Verification. The two fields, AI and Formal Verification, therefore have to meet.

LAIV is a team of researchers working on a range of interdisciplinary problems that combine AI and Formal Verification.

For example, we seek answers to the following questions:

  • What are the mathematical properties of AI algorithms and applications?
  • How can types and functional programming help to verify AI planning languages?
  • How can we verify neural networks and other related machine-learning algorithms?
  • How can machine learning improve software verification?

Descriptions of our projects can be found here.

We are part of the Dependable Systems Group at HWU.

LAIV News

New LAIV members

We welcome several new members to the LAIV team: the new cohort of MSc students (Alexandre, Marco Vincent), who will work on topics in neural network verification, and Wen Kokke (from Edinburgh University), who joins as an RA on the Neural Networks with Security Contracts grant.

LAIV’s first research grant

We are very proud to announce the first successful research funding bid by the LAIV team: the NCSC-funded research project SecCon-NN: Neural Networks with Security Contracts — towards lightweight, modular security for neural networks, funded as part of the NCSC “Security for AI” call.