Continuous Verification of Neural Networks

A Verified Trustworthy Software Systems (VETSS) funded research project, supported under the “Digital Security Through Verification” call.

Grant holders: Katya Komendantskaya (HWU) and David Aspinall (Edinburgh University).

Researchers: Wen Kokke (HWU and Edinburgh), Daniel Kienitz (HWU), Marco Casadio (HWU).

Motivation:

Most of the challenges we encounter in neural network verification are due to the conflict between continuous methods (which enable data classification in multi-dimensional real space) and discrete methods (used by solvers and provers). But perhaps these disadvantages can be turned into advantages. Conventionally, we assume that the object we verify is uniquely defined, often hand-written, and therefore needs to be verified as-is. Neural networks are different: often there is a continuum of models that can serve as suitable classifiers, and we usually have no strong preference for any of them, as long as they give reasonable prediction accuracy. Given the task of verifying a neural network, we are no longer required to treat the object as immutable, i.e. we are allowed to verify and deploy a different neural network instead.

This opens up new possibilities for verification and justifies several methods of neural network transformation, e.g.:
• neural network size reduction (e.g. by pruning or weight quantisation);
• piece-wise linearisation of activation functions either during or after training;
• use of constraint-aware loss functions during training, or interleaving verification with adversarial training, which improves the safety of neural networks and hence eases verification tasks (a sketch of such a loss is given below).
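To make the last item concrete, here is a minimal sketch, in Python with TensorFlow, of what a constraint-aware loss could look like. The FGSM-style perturbation, the ε value and the penalty weight are illustrative assumptions, not the project's actual training setup.

```python
# A minimal sketch of a constraint-aware loss (illustrative only): alongside the
# usual cross-entropy, we penalise how much the model's output changes under a
# small FGSM-style perturbation inside the eps-ball, nudging training towards
# models that are easier to verify for robustness.
import tensorflow as tf

def constraint_aware_loss(model, x, y, eps=0.1, weight=1.0):
    ce = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
    with tf.GradientTape() as tape:
        tape.watch(x)                          # x is a plain tensor, so watch it
        logits = model(x, training=True)
        base_loss = ce(y, logits)
    # FGSM-style step of size eps in the direction that increases the loss.
    grad = tape.gradient(base_loss, x)
    x_adv = x + eps * tf.sign(grad)
    adv_logits = model(x_adv, training=True)
    # Penalise disagreement between clean and perturbed outputs.
    consistency = tf.reduce_mean(tf.square(logits - adv_logits))
    return base_loss + weight * consistency
```

Training against such a loss biases the optimiser towards models whose outputs vary little inside the ε-ball, which is exactly the property the verifier later has to discharge.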

Thus verification becomes part of constructing the object itself. Moreover, we assume that the training-verification cycle may repeat, potentially indefinitely, especially if neural networks are retrained on new data.

We call such an approach to verification Continuous Verification.

However, to be truly successful, this methodology needs proper programming language support. Ideally, the programmer should only need to specify the basic neural network parameters and the desired verification constraints, leaving the fine-tuning of the training-verification cycle to the integrated tools.

Technical description of progress so far:

This project takes a type-theoretic view of these problems. A conventional verification project aims to establish a proof Γ ⊢ f : A, where f is the function or code we verify, A is a verification property, and Γ is the given theory. In [2] we show that existing neural network verification projects in fact amount to working with a special sort of types: types refined with SMT constraints. The simplest examples of refinement types are the positive reals (x : R{x > 0}) or the booleans that are true (b : Bool{b = true}).
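To illustrate, the following sketch (assuming the z3-solver Python package) shows how checking a refinement type reduces to an SMT query: to confirm that the toy function f(x) = x + 1 inhabits the type (x : R{x > 0}) → (y : R{y > 0}), we ask Z3 whether a counterexample exists. The function and property are illustrative only.

```python
# A minimal sketch (using the z3-solver package) of a refinement-type check as
# an SMT query: does f(x) = x + 1 map (x : R{x > 0}) to (y : R{y > 0})?
from z3 import Real, Solver, And, Not, unsat

x = Real('x')
y = x + 1                      # a toy "function" standing in for a network

s = Solver()
s.add(And(x > 0, Not(y > 0)))  # input meets its refinement, output violates its own

if s.check() == unsat:
    print("refinement holds: x > 0 implies x + 1 > 0")
else:
    print("counterexample:", s.model())
```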

For example, the usual notion of neural network robustness for some class y is defined via an ε-ball B(x, ε) around a sample input x of that class, B(x, ε) = {x’ ∈ [R] : || x’ − x || ≤ ε}, together with the requirement that any object in B(x, ε) is classified as y. We use the notation [R] to denote a vector of real numbers.

It is easy to see that the idea of verifying neural networks for all objects “within the ε-ball” fits exactly with the syntax of refinement types! That is, we verify a neural network N : [R] → R by giving it the type N : (x : [R]{|| x’ − x || ≤ ε}) → (z : R{z = y}), for some given image x’ of class y. Our first successful experiments at building a verification infrastructure around this idea are described in [2]. We initially used F* and Liquid Haskell, functional languages with refinement types; see [3].
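In the same spirit, a hand-rolled Z3 sketch of such a robustness check for a toy one-neuron network is shown below. The weights, the ε-ball (here in the infinity norm, for simplicity) and the output refinement are all made up for illustration; this is not the code of [2], [3] or [5].

```python
# A minimal sketch of the robustness type as an SMT query, for a toy network
# N(x) = relu(0.5*x0 - 0.3*x1 + 1.0); weights, eps and the output refinement
# are invented for illustration.
from z3 import Reals, Solver, If, And, Not, unsat

x0, x1 = Reals('x0 x1')
x_prime = [1.0, 2.0]   # the given image x'
eps = 0.1

def relu(z):
    return If(z > 0, z, 0)

y = relu(0.5 * x0 - 0.3 * x1 + 1.0)   # toy network N : [R] -> R

s = Solver()
# x ranges over the eps-ball around x' (infinity norm).
s.add(And(x0 >= x_prime[0] - eps, x0 <= x_prime[0] + eps,
          x1 >= x_prime[1] - eps, x1 <= x_prime[1] + eps))
# Negate the desired output refinement, here "y stays above 0.5".
s.add(Not(y > 0.5))

if s.check() == unsat:
    print("robust: every x in the ball satisfies the output refinement")
else:
    print("counterexample:", s.model())
```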

Within this project we explore different ways of integrating neural network verification into other mainstream languages. See, for example, our Z3 verifier that works with TensorFlow models in Python [5] and the Agda extension that allows us to perform neural network verification via refinement types [4].
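To give a flavour of the translation, here is a minimal sketch of the underlying idea, not the actual Sapphire API [5]: the weights of a Keras dense layer are read off and the same affine-plus-ReLU computation is rebuilt over symbolic Z3 reals, so that constraints can then be stated and checked over the symbolic outputs. The toy model and helper function are assumptions for illustration.

```python
# A minimal sketch of translating a TensorFlow/Keras model into Z3 terms
# (not the actual Sapphire API [5]): read off the weights of a Dense layer and
# rebuild the same affine + ReLU computation over symbolic Z3 reals.
import tensorflow as tf
from z3 import Real, RealVal, If

def dense_relu_to_z3(layer, z3_inputs):
    """Rebuild a Keras Dense layer with ReLU activation as Z3 expressions."""
    kernel, bias = layer.get_weights()        # kernel: (n_in, n_out), bias: (n_out,)
    outputs = []
    for j in range(kernel.shape[1]):
        pre = RealVal(float(bias[j]))
        for i, x_i in enumerate(z3_inputs):
            pre = pre + RealVal(float(kernel[i][j])) * x_i
        outputs.append(If(pre > 0, pre, 0))   # ReLU
    return outputs

# Usage: translate the first (and only) layer of a toy model symbolically.
model = tf.keras.Sequential([tf.keras.layers.Dense(2, activation='relu', input_shape=(3,))])
xs = [Real('x%d' % i) for i in range(3)]
z3_outputs = dense_relu_to_z3(model.layers[0], xs)
```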

Papers and Drafts:

  1. Wen Kokke, Ekaterina Komendantskaya, and Daniel Kienitz. Continuous Verification of Machine Learning: a Declarative Programming Approach. Invited talk at PPDP’20.
  2. Wen Kokke, Ekaterina Komendantskaya, Daniel Kienitz, Robert Atkey, and David Aspinall. Neural Networks, Secure by Construction: An Exploration of Refinement Types. APLAS’20.
  3. Wen Kokke, Ekaterina Komendantskaya, and Daniel Kienitz. StarChild: a library for leveraging the refinement types and SMT solving of F* to verify properties of neural networks. 2020.
  4. Wen Kokke. Amethyst: Neural Network Verification in Agda. 2020.
  5. Wen Kokke. Sapphire: a library for translating TensorFlow models to Z3.