Welcome to the Lab for AI Verification (LAIV)

Artificial Intelligence is a research and engineering area that develops methods for adaptive and autonomous applications. For example, when your mobile phone learns to recognise your voice, it exhibits adaptive behaviour; when your car navigator suggests a better route, it performs prototypical autonomous planning. Adaptive and autonomous applications have become pervasive in both the global economy and our everyday lives. But can we really trust them? The question of trust in computer systems traditionally belongs to the domain of Formal Verification. The two domains, AI and Formal Verification, thus have to meet.

LAIV is a team of researchers working on a range of interdisciplinary problems that combine AI and Formal Verification.

For example, we seek answers to the following questions:

  • What are the mathematical properties of AI algorithms and applications?
  • How can types and functional programming help to verify AI planning languages?
  • How can we verify neural networks and other related machine-learning algorithms? (see the sketch after this list)
  • How can machine learning improve software verification?
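
As a rough illustration of the third question above, the sketch below shows the kind of property that neural-network verification targets: local robustness, i.e. a classifier should assign the same label to every input within a small distance of a given input. This is a minimal, hypothetical sketch (the toy network and all names are illustrative, not LAIV tooling), and random sampling can only search for counterexamples; proving the property for all perturbations requires the formal methods studied here.

```python
# Minimal sketch (hypothetical, not LAIV's tooling) of local robustness:
# a classifier is robust at x within radius eps if every x' with
# ||x' - x||_inf <= eps receives the same label as x. Sampling can
# only find counterexamples; it never proves the property.
import numpy as np

def predict(weights, x):
    """Toy one-layer 'network': the label is the argmax of a linear map."""
    return int(np.argmax(weights @ x))

def find_counterexample(weights, x, eps, trials=1000, seed=0):
    """Randomly sample the eps-ball around x, looking for a label flip."""
    rng = np.random.default_rng(seed)
    label = predict(weights, x)
    for _ in range(trials):
        delta = rng.uniform(-eps, eps, size=x.shape)  # point in the inf-norm ball
        if predict(weights, x + delta) != label:
            return x + delta  # robustness violated at this perturbed input
    return None  # no counterexample found (property still not proven)

weights = np.array([[1.0, -0.5], [0.2, 0.8]])
x = np.array([0.9, 0.1])
print(find_counterexample(weights, x, eps=0.05))  # None here: no label flip found
```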

Descriptions of our projects can be found here.

We are part of the Dependable Systems Group at HWU.

LAIV News

Workshop on Logic Programming

We are organising a workshop on Trends, Extensions, Applications and Semantics of Logic Programming on 28-29 May 2020. We invite participants to check out the pre-recorded talks and join the live discussions: https://www.coalg.org/tease-lp/.

New Project and Vacancies

We are starting a large, multi-site, multi-million-pound project on AI verification in September 2020! The project, AISEC: AI Secure and Explainable by Construction, is funded by EPSRC and will investigate novel methods of AI verification for autonomous vehicles and conversational agents. We are looking for research assistants and PhD students: see http://laiv.uk/index.php/vacancies/

LAIV paper at IJCNN’20

Many congratulations to all for having the following paper accepted: Kirsty Duncan, Ekaterina Komendantskaya, Robert Stewart and Michael Lones, "Relative Robustness of Quantized Neural Networks Against Adversarial Attacks", to be published and presented at the International Joint Conference on Neural Networks (IJCNN'20), part of the World Congress on Computational Intelligence (https://wcci2020.org/), 19-24 July 2020, Glasgow, Scotland.