LAIV Research and Projects

PhD project “Verification of Neural Networks with industrial applications”

Start date: September 2023. Heriot-Watt University, Scotland. Fee waiver and full UK stipend are available. 

Neural networks (NNs) and other machine learning algorithms are often embedded as pattern-recognition or signal-processing components into complex software. However, NNs are known to be unstable under small perturbations or out-of-distribution shifts in data; deliberately crafted perturbations of this kind are known as “adversarial attacks”. Only very recently have solvers been developed, such as Marabou or alpha-beta-CROWN, that can verify regions of stable behaviour for NNs. Until recently, three problems prevented practical adoption of NN solvers: a) lack of convenient user interfaces in which complex property specifications for NNs could be written; b) poor integration of these solvers into richer provers where analysis or verification of the overall complex system behaviour takes place; c) lack of efficient ways to automatically repair NNs when their verification fails. Our research team has implemented a tool, Vehicle (https://github.com/vehicle-lang/vehicle), that uses types and differentiable logic to solve these three problems.
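
To make “property specification” concrete: the property most commonly handed to such solvers is epsilon-ball robustness, i.e. every input within a small distance of a given data point must receive the same classification. The sketch below states the shape of that property in plain Haskell (it is not Vehicle’s own syntax, and the network, distance and epsilon are illustrative placeholders); a solver such as Marabou establishes the property symbolically for all perturbed inputs, whereas ordinary code can only test individual candidates.

    -- Illustrative Haskell sketch of epsilon-ball robustness; this is not
    -- Vehicle's own specification syntax.
    type Input  = [Double]
    type Output = [Double]

    -- A trained network, viewed abstractly as a function from inputs to class scores.
    type Network = Input -> Output

    -- Index of the largest score, i.e. the predicted class.
    argmax :: Output -> Int
    argmax ys = snd (maximum (zip ys [0 ..]))

    -- L-infinity distance between two inputs.
    linf :: Input -> Input -> Double
    linf x y = maximum (zipWith (\a b -> abs (a - b)) x y)

    -- The property to verify for ALL x' with ||x' - x||_inf <= eps:
    -- the predicted class does not change under the perturbation.
    robustAt :: Network -> Double -> Input -> Input -> Bool
    robustAt f eps x x' = linf x x' > eps || argmax (f x') == argmax (f x)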

The aim of this PhD project is to use Vehicle to explore a range of NN-related vulnerabilities, covering both NN fragility and NN repair, cast as “property specification” and “property-driven training”, respectively. The project is funded by SLB Cambridge and will involve close collaboration with the AI team at SLB, as well as a 6-month internship with SLB (https://www.slb.com/) in Cambridge.

We are looking for a talented student interested in functional programming and theorem proving, neural networks, or statistical modelling of physical systems. Whichever background you come from, you will have to learn all three.

If you wish to discuss any details of the project informally, please contact Katya Komendantskaya,  Email: komendantskaya@gmail.com

AISEC: AI Secure and Explainable by Construction.

This is a multi-site, interdisciplinary research project, funded by EPSRC. It aims to give a holistic view of AI verification, from legal frameworks to programming infrastructure for creating safer AI applications. The project has a dedicated page.

Formalisation of Criminal Law in Catala.

In this project, we look into programming language support for the formalisation and implementation of Law, especially in cases where Law applies to AI-enabled complex systems.

CONVENER: Continuous Verification of Neural Networks

Funded by the UK Research Institute in Verified Trustworthy Software Systems (VETSS) as part of the “Digital Security Through Verification” call, this project seeks to establish robust methods for embedding neural network verification into the implementation of neural networks. This project has a dedicated page.

SecConn-NN: Neural Networks with Security Contracts

SecConn-NN is an NCSC-funded research project working towards lightweight, modular security for neural networks. It is funded as part of the NCSC “Security for AI” call. This project has a dedicated page.

Verification of AI Planning languages

It is becoming increasingly important to verify the safety and security of AI applications. While declarative languages (of the kind found in automated planners and model checkers) are traditionally used for verifying AI systems, a big challenge is to design methods that generate verified executable programs. A good example of such a “verification to implementation” cycle is given by AI planning languages such as PDDL/STRIPS, where plans are found via model search in a declarative language, but are then interpreted or compiled into executable code in an imperative language. We show that this method can itself be verified. We present a formal framework and a prototype Agda implementation that represent PDDL plans as executable functions inhabiting types given by the formulae describing planning problems. By exploiting the well-known Curry-Howard correspondence, type-checking then automatically ensures that the generated program corresponds precisely to the specification of the planning problem.
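
The idea can be illustrated with a minimal sketch. The actual framework is a dependently typed Agda development, where running the plan against the planning formula happens during type-checking; the simply typed Haskell below can only perform the analogous check at run time, and all names in it are illustrative.

    import qualified Data.Set as Set

    -- A propositional STRIPS-style fragment: states are sets of facts.
    type Fact  = String
    type State = Set.Set Fact

    -- An action has preconditions, facts it adds and facts it deletes.
    data Action = Action
      { pre :: Set.Set Fact
      , add :: Set.Set Fact
      , del :: Set.Set Fact
      }

    -- A plan is a sequence of actions; executing it is a (partial) function on
    -- states, mirroring "plans as executable functions".
    execute :: [Action] -> State -> Maybe State
    execute []       s = Just s
    execute (a : as) s
      | pre a `Set.isSubsetOf` s =
          execute as ((s `Set.difference` del a) `Set.union` add a)
      | otherwise = Nothing            -- a precondition fails: the plan is invalid

    -- Run-time analogue of the type-level guarantee: a plan is accepted only if,
    -- from the initial state, it reaches a state satisfying the goal.
    valid :: State -> Set.Set Fact -> [Action] -> Bool
    valid initial goal plan = case execute plan initial of
      Just final -> goal `Set.isSubsetOf` final
      Nothing    -> False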

LAIV Members involved: Hill, Komendantskaya, Farka

Verification of Neural Networks

Neural Networks and Convolutional Neural Networks are no longer just a popular research topic: they have found their way into our everyday lives via a variety of tools and applications, from face recognition apps on our phones to autonomous driving features in our cars. But are they really reliable? LAIV’s PhD and MSc students are investigating various aspects of neural network verification.

LAIV Members involved: Komendantskaya, Stewart, Duncan, Kienitz, MSc students

Relative Robustness of Neural Networks

Deep learning is increasingly being moved to edge computing devices and smart sensors to reduce latency and save bandwidth. Neural network compression, such as quantization, is necessary to fit trained deep neural networks into these resource-constrained devices. At the same time, the use of deep learning in safety-critical applications raises the need to verify properties of neural networks. Adversarial perturbations can be used as an attack mechanism on neural networks, leading to “obviously wrong” misclassifications. SMT solvers have been proposed to formally prove robustness guarantees against such adversarial perturbations. We investigate how well these robustness guarantees are preserved when the precision of a neural network is quantized. We also evaluate how effectively adversarial attacks transfer to quantized neural networks. Our results show that quantized neural networks are generally robust relative to their full-precision counterparts (98.6%–99.7%), and the transfer of adversarial attacks decreases to as low as 52.05% as the subtlety of the perturbation increases. These results show that quantization introduces resilience against the transfer of adversarial attacks whilst causing negligible loss of robustness.
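
As a rough illustration of the two questions studied: (1) does a network keep its predictions once its weights are quantized, and (2) does an adversarial example crafted against the full-precision network still fool the quantized one? The Haskell below is a toy placeholder; the experiments themselves used trained image classifiers and SMT-based verifiers.

    -- Toy sketch: quantize weights to a fixed-point grid and check whether an
    -- adversarial example still changes the quantized network's prediction.
    type Vec = [Double]

    -- Round each weight to a fixed-point grid with spacing 1 / 2^(bits - 1).
    quantize :: Int -> Vec -> Vec
    quantize bits = map q
      where
        scale = fromIntegral (2 ^ (bits - 1) :: Integer)
        q w   = fromIntegral (round (w * scale) :: Integer) / scale

    -- A toy linear "network": class scores are dot products with weight rows.
    predict :: [Vec] -> Vec -> Int
    predict rows x =
      snd (maximum (zip [sum (zipWith (*) r x) | r <- rows] [0 ..]))

    -- The attack transfers if it changes the quantized network's answer
    -- relative to the clean input.
    transfers :: [Vec] -> Int -> Vec -> Vec -> Bool
    transfers rows bits clean adversarial =
      let qRows = map (quantize bits) rows
      in  predict qRows adversarial /= predict qRows clean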

LAIV Members involved: Duncan, Stewart, Komendantskaya

Coinduction and Coalgebra: semantics of nonterminating processes

Coinduction is a mathematical and reasoning principle that allows one to express properties of nonterminating or concurrent processes, programs and systems. We are interested in using coinduction to analyse properties of nonterminating logic programs.
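
A minimal illustration in Haskell, whose lazy evaluation gives a convenient approximation of coinductive definitions (the stream example below is illustrative and not taken from the cited work):

    -- A coinductive (infinite) stream: defined by what it produces next,
    -- with no base case, i.e. corecursively.
    data Stream a = Cons a (Stream a)

    -- A nonterminating process emitting the natural numbers.
    nats :: Stream Integer
    nats = go 0 where go n = Cons n (go (n + 1))

    -- Transforming a stream pointwise, again defined corecursively.
    mapS :: (a -> b) -> Stream a -> Stream b
    mapS f (Cons x xs) = Cons (f x) (mapS f xs)

    -- Coinductive properties of such processes are proved by exhibiting a
    -- bisimulation; in code we can only observe finite prefixes of behaviour.
    takeS :: Int -> Stream a -> [a]
    takeS n (Cons x xs)
      | n <= 0    = []
      | otherwise = x : takeS (n - 1) xs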

LAIV Members involved: Komendantskaya, and various co-authors: Basold, Rozplokhas, Li, Farka, Power.

Machine Learning for verification

Formal verification can be a laborious but also repetitive task. Machine learning can be used to data-mine the history of past verification efforts in order to discover common proof heuristics and guide new proof development. The development of such methods is the subject of our ML4PG and ACL2ML projects.
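
A toy sketch of the underlying idea: summarise each past proof as a numeric feature vector and use proximity in that feature space to suggest relevant proof patterns for the current goal. ML4PG itself works by clustering such feature vectors; the feature choices, names and nearest-neighbour shortcut below are made up purely for illustration.

    import Data.List (minimumBy)
    import Data.Ord  (comparing)

    -- Each past proof is summarised as a numeric feature vector
    -- (e.g. tactics used, goal shape); all data here is illustrative.
    type Features = [Double]

    euclidean :: Features -> Features -> Double
    euclidean xs ys = sqrt (sum (zipWith (\a b -> (a - b) ^ (2 :: Int)) xs ys))

    -- Recommend the past proof whose features are closest to the current goal,
    -- as a (very naive) stand-in for clustering-based proof-pattern discovery.
    suggest :: [(String, Features)] -> Features -> String
    suggest library goal =
      fst (minimumBy (comparing (euclidean goal . snd)) library)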

LAIV Members involved: Komendantskaya, Hill.

LAIV has a general (but somewhat under-used) GitHub page for projects by students and staff.