Continuous Verification of Neural Networks

A Verified Trustworthy Software Systems (VETSS) funded research project: Continuous Verification of Neural Networks. Funded as part of the “Digital Security Through Verification” call.

Grant holders: Katya Komendantskaya (HWU) and David Aspinall (Edinburgh University).

Researchers: Wen Kokke (HWU and Edinburgh), Daniel Kienitz (HWU), Marco Casadio (HWU)

Motivation:

Most of the challenges we encounter in neural network verification are due to the conflict between continuous methods (which enable data classification in multi-dimensional real space) and discrete methods (used by solvers and provers). But perhaps these disadvantages can be turned into advantages. Conventionally, we assume that the object we verify is uniquely defined, often hand-written, and therefore needs to be verified as-is. Neural networks are different: often there is a continuum of models that can serve as suitable classifiers, and we usually have no strong preference for any of them, as long as they give reasonable prediction accuracy. Given the task of verifying a neural network, we are no longer required to think of the object as immutable, i.e. we are allowed to verify and deploy a different neural network instead.

This opens up new possibilities for verification and justifies several methods of neural network transformation, e.g.:
• neural network size reduction (e.g. by pruning or weight quantisation);
• piece-wise linearisation of activation functions either during or after training;
• use of constraint-aware loss functions during training, or interleaving verification with adversarial training, which improves the safety of neural networks and hence eases verification tasks (a sketch follows below).
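
To make the last point concrete, here is a minimal sketch of one constraint-aware training step, written by us for illustration and not taken from this project's code. It assumes a tf.keras model and optimizer; the names robust_training_step, epsilon and alpha are our own. The loss penalises misclassification of an FGSM-style perturbation inside the ε-ball, nudging the network towards the very property one later attempts to verify.

    import tensorflow as tf

    def robust_training_step(model, optimizer, x, y, epsilon=0.1, alpha=0.5):
        """One training step mixing the standard loss with an adversarial penalty."""
        loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

        # Construct an FGSM perturbation that stays inside the epsilon-ball around x.
        with tf.GradientTape() as tape:
            tape.watch(x)
            adv_loss = loss_fn(y, model(x, training=False))
        x_adv = x + epsilon * tf.sign(tape.gradient(adv_loss, x))

        # Constraint-aware loss: standard term plus a penalty on the perturbed input.
        with tf.GradientTape() as tape:
            loss = (1 - alpha) * loss_fn(y, model(x, training=True)) \
                 + alpha * loss_fn(y, model(x_adv, training=True))
        grads = tape.gradient(loss, model.trainable_variables)
        optimizer.apply_gradients(zip(grads, model.trainable_variables))
        return loss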

Thus verification becomes part of the construction of the object. Moreover, we assume that the training-verification cycle may repeat indefinitely, especially if neural networks are retrained on new data.

We call such an approach to verification Continuous Verification.

However, to be truly successful, this methodology needs proper programming language support. Ideally, the programmer should only need to specify the basic neural network parameters and the desired verification constraints, leaving the fine-tuning of the training-verification cycle to the integrated tools.

Technical description of main results and conclusions:

I. Refinement Types for Neural Network Verification (Kokke, Komendantskaya, Kienitz):

In the published paper [2], we cast a type-theoretic view on these problems. A conventional verification project aims to establish a proof Γ ⊢ f : A, where f is the function or code we verify, A is a verification property, and Γ is the given theory. In [2,3] we show that existing neural network verification projects in fact amount to working with a special sort of type: types refined with SMT constraints. The simplest examples of refinement types are positive reals (x : R{x > 0}) or booleans that are true (b : Bool{b = true}).
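
As a small self-contained illustration (our own sketch in Python with Z3, not code from [2,3]), checking that a function inhabits such a refinement type is discharged to an SMT solver by negating the refinement and asking for a counterexample. For instance, f(x) = x·x + 1 inhabits (x : R) → (y : R{y > 0}) exactly when the query below is unsatisfiable.

    from z3 import Real, Solver, unsat

    x = Real('x')
    y = x * x + 1                   # the body of f

    solver = Solver()
    solver.add(y <= 0)              # negation of the output refinement y > 0
    print(solver.check() == unsat)  # True: no counterexample, so f inhabits the type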

For example, a usual notion of neural network robustness for some class y is to define an ε-ball B(x, ε) around a sample input x of that class, B(x, ε) = {x’ ∈ [R] : ||x’ − x|| ≤ ε}, and then prove that any object in B(x, ε) is classified as y. We use the notation [R] to denote a vector of real numbers.

It is easy to see that the idea of verifying neural networks for all objects “within the ε-ball” fits exactly with the syntax of refinement types! That is, we verify a neural network N : [R] → R by imposing the type N : (x : [R] {||x’ − x|| ≤ ε}) → (y : R{y = y’}), for some given image x’ of class y’. Our first successful experiments at building a verification infrastructure around this idea are described in [2]. We initially used F* and Liquid Haskell, functional languages with refinement types; see [3].
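
To show the shape of the resulting verification query, here is a sketch of our own, with made-up weights, posed directly to Z3 in Python rather than via F* or Liquid Haskell, and using the infinity-norm ball for simplicity. Robustness of a tiny network N : [R] → R around a given input x̂ amounts to asking whether any point of the ε-ball can cross the decision threshold; unsatisfiability of the query corresponds to inhabitation of the refinement type above.

    from z3 import Reals, Solver, If, unsat

    def relu(v):
        return If(v > 0, v, 0)

    x0, x1 = Reals('x0 x1')
    x_hat, epsilon = [0.8, 0.2], 0.1      # given image and perturbation radius

    # A tiny hand-written network N : [R] -> R with one hidden ReLU neuron.
    hidden = relu(0.5 * x0 - 0.3 * x1 + 0.1)
    output = 2.0 * hidden - 0.4

    solver = Solver()
    # The input stays inside the epsilon-ball (infinity norm) around x_hat ...
    solver.add(x0 >= x_hat[0] - epsilon, x0 <= x_hat[0] + epsilon)
    solver.add(x1 >= x_hat[1] - epsilon, x1 <= x_hat[1] + epsilon)
    # ... yet the output violates the property "classified as positive".
    solver.add(output <= 0)

    if solver.check() == unsat:
        print("robust on this epsilon-ball")
    else:
        print("counterexample:", solver.model())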

Within this project we also explored different ways of integrating neural network verification into other mainstream languages. See, for example, our Z3 verifier that works with TensorFlow models in Python [5] and the Agda extension that allows us to perform neural network verification via refinement types [4].

II. Training Neural Networks with Constraints (Casadio, Kienitz, Daggitt, Komendantskaya, Kokke):

In the second half of this project, we explored the problem of neural network verification as program synthesis. In particular, we looked at the problem of training neural networks to satisfy a given type or constraint.

Various methods such as adversarial training, data augmentation and Lipschitz robustness training have been proposed as means of improving neural network robustness. However, these training methods each optimise for a different definition of robustness. We performed an in-depth comparison of these different definitions, including their relationships, assumptions, interpretability and verifiability after training. We also looked at constraint-driven training, a general approach designed to encode arbitrary constraints, and showed that not all of these definitions are directly encodable. Finally, we performed experiments to compare the applicability and efficacy of the training methods at ensuring that the network obeys these different definitions. These results highlight that encoding even such a seemingly simple piece of knowledge as robustness into neural network training is fraught with difficult choices and pitfalls.
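
As a rough illustration of why such definitions differ in practice, the sketch below (our own code and names, not the paper's benchmark suite) spot-checks two of them empirically on sampled perturbations: classification robustness, where the predicted class may never change on the ε-ball, versus standard robustness, where the output scores may move by at most δ. Here f is any function mapping an input vector to a vector of class scores.

    import numpy as np

    def sample_ball(x, epsilon, n=1000, seed=0):
        # Uniform samples from the infinity-norm epsilon-ball around x.
        rng = np.random.default_rng(seed)
        return x + rng.uniform(-epsilon, epsilon, size=(n, x.shape[0]))

    def classification_robust(f, x, epsilon):
        # The predicted class must not change anywhere in the sampled ball.
        cls = np.argmax(f(x))
        return all(np.argmax(f(p)) == cls for p in sample_ball(x, epsilon))

    def standard_robust(f, x, epsilon, delta):
        # The output scores must not move by more than delta anywhere in the ball.
        y = f(x)
        return all(np.max(np.abs(f(p) - y)) <= delta for p in sample_ball(x, epsilon))

A network can easily satisfy the first definition while violating the second, since scores may swing widely without changing the argmax; this is one source of the pitfalls mentioned above.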

Papers, Drafts and Libraries:

  1. Wen Kokke, Ekaterina Komendantskaya, and Daniel Kienitz. Continuous Verification of Machine Learning: a Declarative Programming Approach. Invited talk at PPDP’20.
  2. Wen Kokke, Ekaterina Komendantskaya, Daniel Kienitz, Robert Atkey, and David Aspinall. Neural Networks, Secure by Construction: An Exploration of Refinement Types. APLAS’20.
  3. Wen Kokke, Ekaterina Komendantskaya, and Daniel Kienitz. StarChild: a library for leveraging the refinement types and SMT solving of F* to verify properties of neural networks. 2020.
  4. Wen Kokke. Amethyst: Neural Network Verification in Agda. 2020.
  5. Wen Kokke. Sapphire: a library for translating TensorFlow models to Z3. 2020.
  6. Sabina Jedrzejczyk, Wen Kokke, Ekaterina Komendantskaya, Matthew Daggitt, Daniel Kienitz. A Tutorial on Neural Network Verification in Sapphire (on YouTube).
  7. Marco Casadio, Matthew L. Daggitt, Ekaterina Komendantskaya, Wen Kokke, Daniel Kienitz, Rob Stewart. Property-driven Training: All You (N)Ever Wanted to Know About. arXiv draft and accompanying library, 2021.

Talks and Invited Seminars:

  1. W. Kokke. Robustness as a Refinement Type. Scottish Programming Languages Seminar, 22 July 2020.
  2. E. Komendantskaya, W. Kokke, D. Kienitz. Continuous Verification of AI: a Declarative Programming Approach. Invited talk at PPDP’20, the 22nd International Symposium on Principles and Practice of Declarative Programming. 9 September 2020.
  3. E. Komendantskaya. AI Verification, in Need of Programming Language Support. SICSA annual conference 2020, 1 October 2020.
  4. E. Komendantskaya. Refinement Types for Verification of Neural Networks. Workshop on Program Semantics, Specification and Verification (PSSV), 4 November 2020.
  5. W. Kokke. Verifying Neural Networks with Agda. Chalmers Functional Programming group, 27 November 2020.
  6. E. Komendantskaya. Continuous Verification of Machine Learning: a Declarative Programming Approach. Invited talks at the Chalmers Machine Learning for Functional Programmers interest group (24 November 2020) and at TU Kaiserslautern (11 March 2021).
  7. E. Komendantskaya, M. Daggitt. Neural network verification as program synthesis. Invited talk at the Katz Lab, the Hebrew University of Jerusalem. 18 March 2021.
  8. Marco Casadio, Matthew L. Daggitt, Ekaterina Komendantskaya, Wen Kokke, Daniel Kienitz, Rob Stewart. Neural network robustness – Careful what you verify! FMAI’2021, 15 April 2021.