AISEC: AI Secure and Explainable by Construction.
This is a multi-site, interdisciplinary research project, funded by EPSRC. It aims to give a holistic view of AI verification, from legal aspects to programming infrastructure for creating safer AI applications. The project has a dedicated page.
CONVENER: Continuous Verification of Neural Networks
Funded by the UK Research Institute in Verified Trustworthy Software Systems (VETSS) as part of the “Digital Security Through Verification” call, this project seeks to establish robust methods for embedding neural network verification into the implementation of neural networks. This project has a dedicated page.
SecConn-NN: Neural Networks with Security Contracts
This NCSC-funded research project works towards lightweight, modular security for neural networks. Funded as part of the NCSC “Security for AI” call. This project has a dedicated page.
Verification of AI Planning languages
It is becoming increasingly important to verify safety and security of AI applications. While declarative languages (of the kind found in automated planners and model checkers) are traditionally used for verifying AI systems, a big challenge is to design methods that generate verified executable programs. A good example of such a “verification to implementation” cycle is given by AI planning languages like PDDL/STRIPS, where plans are found via a model search in a declarative language, but then interpreted or compiled into executable code in an imperative language. We show that this method can itself be verified. We present a formal framework and a prototype Agda implementation that represent PDDL plans as executable functions that inhabit types that are given by formulae describing planning problems. By exploiting the well-known Curry-Howard correspondence, type-checking then automatically ensures that the generated program corresponds precisely to the specification of the planning problem.
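The "plan as executable function" idea above can be illustrated outside Agda. The following is a minimal Python sketch (not the formal framework itself): a STRIPS-style plan checker in which validating a plan against action preconditions and effects plays the role that type-checking plays in the Curry-Howard reading. Atom and action names are invented for illustration.

```python
# Illustrative sketch of STRIPS-style plan validation (not the Agda
# formalisation). An action is (preconditions, add-effects, delete-effects),
# each a set of atoms; a plan is valid if every action's preconditions hold
# when it fires and the final state satisfies the goal.

def apply_action(state, action):
    """Return the successor state if the action's preconditions hold."""
    pre, add, delete = action
    if not pre <= state:
        raise ValueError(f"precondition(s) {pre - state} not satisfied")
    return (state - delete) | add

def validate_plan(initial, goal, plan):
    """Check that executing the plan from the initial state reaches the goal."""
    state = set(initial)
    for action in plan:
        state = apply_action(state, action)
    return goal <= state

# Toy problem with hypothetical atoms: move from location a to location b.
move = ({"at_a"}, {"at_b"}, {"at_a"})          # (pre, add, delete)
print(validate_plan({"at_a"}, {"at_b"}, [move]))  # True
```

In the dependently typed setting, this runtime check is replaced by a static guarantee: a plan only type-checks if the corresponding validation succeeds by construction.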
LAIV Members involved: Hill, Komendantskaya, Farka
Verification of Neural Networks
Neural Networks and Convolutional Neural Networks are no longer just a popular research topic, they have found their way into our everyday lives via a variety of tools and applications: from face recognition apps on our phones to autonomous driving features in our cars. But are they really reliable? LAIV’s PhD and MSc students are investigating various aspects of neural network verification.
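One family of techniques studied in this area bounds a network's outputs over a whole region of inputs. Below is a minimal sketch of interval bound propagation for a toy ReLU network; the weights and input box are invented for illustration, and real verifiers handle far larger networks with tighter methods.

```python
# Minimal interval bound propagation (IBP) sketch for a toy ReLU network.
# Given a box [lo, hi] of inputs, we compute sound (possibly loose) bounds
# on the outputs, layer by layer. Weights below are made up for this demo.

def interval_affine(lo, hi, weights, bias):
    """Propagate an input box through an affine layer y = Wx + b."""
    out_lo, out_hi = [], []
    for w_row, b in zip(weights, bias):
        l = b + sum(w * (lo[i] if w >= 0 else hi[i]) for i, w in enumerate(w_row))
        u = b + sum(w * (hi[i] if w >= 0 else lo[i]) for i, w in enumerate(w_row))
        out_lo.append(l)
        out_hi.append(u)
    return out_lo, out_hi

def interval_relu(lo, hi):
    """ReLU is monotone, so it maps interval endpoints to endpoints."""
    return [max(0.0, l) for l in lo], [max(0.0, u) for u in hi]

# Toy network: 2 inputs -> 2 hidden units (ReLU) -> 1 output.
W1, b1 = [[1.0, -1.0], [0.5, 0.5]], [0.0, 0.0]
W2, b2 = [[1.0, 1.0]], [0.0]

lo, hi = [0.9, 0.9], [1.1, 1.1]          # input box around the point (1, 1)
lo, hi = interval_relu(*interval_affine(lo, hi, W1, b1))
lo, hi = interval_affine(lo, hi, W2, b2)
print(lo, hi)   # sound bounds on the output over the entire input box
```

If the bounds for the correct class stay above those of every other class over the whole box, robustness on that region is proved; if not, the result is inconclusive and tighter methods (such as SMT solving) are needed.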
LAIV Members involved: Komendantskaya, Stewart, Duncan, Kienitz, MSc students
Relative Robustness of Neural Networks
Deep learning is increasingly being moved to edge computing devices and smart sensors, to reduce latency and save bandwidth. Neural network compression techniques such as quantization are necessary to fit trained deep neural networks into these resource-constrained devices. At the same time, the use of deep learning in safety-critical applications raises the need to verify properties of neural networks. Adversarial perturbations have the potential to be used as an attack mechanism on neural networks, leading to “obviously wrong” misclassification. SMT solvers have been proposed to formally prove robustness guarantees against such adversarial perturbations. We investigate how well these robustness guarantees are preserved when the precision of a neural network is quantized. We also evaluate how effectively adversarial attacks transfer to quantized neural networks. Our results show that quantized neural networks are generally robust relative to their full-precision counterparts (98.6%–99.7%), and the transfer of adversarial attacks decreases to as low as 52.05% when the subtlety of perturbation increases. These results show that quantization introduces resilience against transfer of adversarial attacks whilst causing negligible loss of robustness.
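The setting studied above can be sketched on a deliberately tiny example. The snippet below (illustrative only, with invented numbers and a one-layer "network") uniformly quantizes a linear classifier and checks whether its decision agrees with the full-precision model on a clean input and on a perturbed one, which is the relative-robustness question in miniature.

```python
# Hedged sketch of relative robustness under quantization. We quantize a
# toy linear classifier's weights to a small bit width and check whether
# it agrees with the full-precision model on clean and perturbed inputs.
# All numbers are illustrative, not drawn from the study's results.

def quantize(ws, bits=4):
    """Uniform symmetric quantization of a weight list to `bits` bits."""
    scale = max(abs(w) for w in ws) / (2 ** (bits - 1) - 1)
    return [round(w / scale) * scale for w in ws]

def classify(weights, x):
    """Sign of the dot product: a two-class linear 'network'."""
    return 1 if sum(w * xi for w, xi in zip(weights, x)) >= 0 else -1

w_full = [0.82, -0.33, 0.57]
w_quant = quantize(w_full, bits=4)

x = [1.0, 0.5, -0.2]
delta = [0.05, -0.05, 0.05]                    # small adversarial-style perturbation
x_adv = [xi + di for xi, di in zip(x, delta)]

# Relative robustness: does the quantized model agree with the
# full-precision one on both the clean and the perturbed input?
print(classify(w_full, x) == classify(w_quant, x))        # True
print(classify(w_full, x_adv) == classify(w_quant, x_adv))  # True
```

In the actual investigation this comparison is carried out formally over whole perturbation regions (via SMT solvers) rather than on single sampled inputs.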
LAIV Members involved: Duncan, Stewart, Komendantskaya
Coinduction and Coalgebra: semantics of nonterminating processes
Coinduction is a mathematical (and reasoning) principle for expressing properties of nonterminating or concurrent processes, programs and systems. We are interested in using coinduction to analyse properties of nonterminating logic programs.
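To give a flavour of the objects involved: a nonterminating process can be modelled as an infinite stream. The sketch below (illustrative; the actual work uses coalgebraic semantics, not Python) represents streams lazily and checks bisimilarity up to a finite observation depth, since only finite prefixes of infinite behaviour can be observed by running code.

```python
# Illustrative sketch: a nonterminating process as a lazy stream, encoded
# as a (head, tail-thunk) pair. Coinductive properties concern infinite
# behaviour; executable code can only observe finite prefixes, so
# `bisimilar_upto` is a finite approximation of bisimilarity.

def from_(n):
    """The infinite stream n, n+1, n+2, ..."""
    return (n, lambda: from_(n + 1))

def take(stream, n):
    """Observe the first n elements of a stream."""
    out = []
    for _ in range(n):
        head, tail = stream
        out.append(head)
        stream = tail()
    return out

def bisimilar_upto(s, t, n):
    """Equal first n observations: a finite cut of bisimilarity."""
    return take(s, n) == take(t, n)

ones = (1, lambda: ones)                 # the constant stream 1, 1, 1, ...
print(take(from_(0), 5))                 # [0, 1, 2, 3, 4]
print(bisimilar_upto(ones, (1, lambda: ones), 10))  # True
```

Coinductive proof principles let one establish such equalities for the whole infinite behaviour at once, rather than depth by depth.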
LAIV Members involved: Komendantskaya, and various co-authors: Basold, Rozplokhas, Li, Farka, Power.
Machine Learning for verification
Formal verification can be a laborious but also a repetitive task. Machine learning can be used to data mine the history of past verification efforts, in order to discover common proof heuristics and guide new proof development. Development of such methods is a subject of our ML4PG and ACL2ML projects.
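The core idea can be sketched very simply: encode past proofs as feature vectors and recommend the most similar prior proof for a new goal. The snippet below is a hedged illustration of that recommendation step only (the corpus, feature choice, and lemma names are invented; ML4PG and ACL2ML use richer features and clustering).

```python
# Hedged sketch of proof-mining for guidance: represent past proofs as
# simple feature vectors (here, counts of tactics used) and suggest the
# most similar past proof for a new goal via cosine similarity.
# The corpus and features below are invented for illustration.
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Hypothetical corpus: lemma name -> (induction, rewrite, simp) tactic counts.
corpus = {
    "app_assoc": (1, 3, 2),
    "rev_rev":   (1, 2, 4),
    "plus_comm": (2, 1, 1),
}

def suggest(goal_features):
    """Return the past proof whose tactic profile is most similar."""
    return max(corpus, key=lambda name: cosine(corpus[name], goal_features))

print(suggest((1, 3, 1)))   # the closest match in the corpus
```

In practice the mined statistics can then be surfaced to the proof engineer as hints about which tactics or lemmas worked on structurally similar goals.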
LAIV Members involved: Komendantskaya, Hill.
LAIV has a general (but somewhat under-used) GitHub page for projects by students and staff.