SecCon-NN

Neural Networks with Security Contracts

NCSC-funded research project SecCon-NN: Neural Networks with Security Contracts, working towards lightweight, modular security for neural networks. Funded as part of the NCSC “Security for AI” call.

Grant holders: Katya Komendantskaya (HWU) and David Aspinall (Edinburgh University).

Researchers: Wen Kokke (HWU and Edinburgh), Daniel Kienitz (HWU)

Motivation:

Neural networks are widely used for classification and pattern-recognition tasks in computer vision, signal processing, data mining, and many other domains. They have always been valued for their ability to work with noisy data, yet only recently was it discovered that they are prone to adversarial attacks: specially crafted inputs that lead to unexpected, and possibly dangerous, outputs. Verifying properties of neural networks, such as robustness against adversarial attacks, is a recognised research challenge. Current approaches include: (a) encoding properties as satisfiability problems; (b) proving properties via abstract interpretation; or (c) using an interactive theorem prover.

In this project, we complement the state of the art by employing refinement types for neural network verification. F* and Liquid Haskell are functional languages with refinement types, i.e., types that can be refined with SMT-checkable constraints: for instance, the type of positive reals (x:R{x > 0}), the type of booleans which are true (b:bool{b ≡ true}), or a type of neural networks which are robust against adversarial attacks. Unlike, e.g., Python, F* and Liquid Haskell are referentially transparent, which means the semantics of pure programs in these languages can be encoded directly in SMT logic. This tight integration allows users to specify neural network models and their properties in the same language, while leveraging the powerful automated verification offered by SMT solvers.
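
To make this concrete, the following Liquid Haskell fragment sketches the first two refinements mentioned above; the names (Pos, TrueBool, epsilon, relu) are illustrative and are not taken from StarChild or Lazuli.

    -- A minimal Liquid Haskell sketch of refinement types; illustrative only.
    module Refinements where

    -- The type of positive reals, written x:R{x > 0} above.
    {-@ type Pos = {x:Double | x > 0} @-}

    {-@ epsilon :: Pos @-}
    epsilon :: Double
    epsilon = 0.1   -- accepted: the SMT solver checks 0.1 > 0;
                    -- epsilon = -0.1 would be rejected at compile time

    -- The type of booleans which are true, written b:bool{b ≡ true} above.
    {-@ type TrueBool = {b:Bool | b} @-}

    {-@ positivity :: TrueBool @-}
    positivity :: Bool
    positivity = epsilon > 0   -- follows from the refinement on epsilon

    -- A ReLU activation whose output is provably non-negative.
    {-@ relu :: Double -> {y:Double | y >= 0} @-}
    relu :: Double -> Double
    relu x = if x > 0 then x else 0

Replacing epsilon with a non-positive value makes the program ill-typed, and the error is reported at compile time by the SMT solver rather than by a test at run time.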

Technical description of progress so far:

We introduce StarChild and Lazuli, two proof-of-concept libraries which leverage the type systems and theorem-proving capabilities of F* and Liquid Haskell, respectively, to verify properties of pre-trained neural networks.

We demonstrate that (a) it is possible to leverage a sufficiently advanced type system to model properties of neural networks, such as robustness, as types, and to check them without any proof burden; and, in service of that, (b) it is possible to approximately translate neural network models to SMT logic.
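
As an illustration of (a) and (b), the sketch below encodes a single linear neuron whose semantics stays within SMT-decidable linear real arithmetic, and states robustness around one input point as a refinement type. The weights, the reference point (1, 1) and the perturbation radius 0.1 are invented for the example; StarChild and Lazuli derive such definitions from pre-trained networks rather than asking the user to write them by hand.

    -- A hand-written sketch of "robustness as a refinement type" in
    -- Liquid Haskell; weights, reference point and radius are illustrative.
    module Robustness where

    -- A single neuron: 0.6 * x1 + 0.4 * x2 compared against a threshold.
    -- At the point (1, 1) it outputs True, since 0.6 + 0.4 = 1.0 > 0.5.
    -- The refinement asks the SMT solver to confirm that the output stays
    -- True for every perturbation of magnitude at most 0.1 per coordinate,
    -- i.e. the classification is robust in an L-infinity ball of radius 0.1.
    {-@ robustAtOneOne :: {dx:Double | -0.1 <= dx && dx <= 0.1}
                       -> {dy:Double | -0.1 <= dy && dy <= 0.1}
                       -> {b:Bool | b} @-}
    robustAtOneOne :: Double -> Double -> Bool
    robustAtOneOne dx dy = 0.6 * (1 + dx) + 0.4 * (1 + dy) > 0.5

If the radius were increased to, say, 1.0, the proof obligation would no longer hold and the program would be rejected at compile time, before the network is ever deployed.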

Applications. Refinement types as security contracts:

Pragmatically, the above development gives a lightweight way to introduce an arbitrary number of constraints into the code that implements neural networks. These constraints can guarantee a range of safety and security properties of that code. Our leading examples concern guarantees against adversarial attacks; here, refinement types serve as “security contracts” that a neural network must satisfy before being deployed in security-critical applications.

Papers and Drafts:

  1. Wen Kokke, Ekaterina Komendantskaya, Daniel Kienitz and David Aspinall. Robustness as a Refinement Type: Verifying Neural Networks in Liquid Haskell and F*. Accepted at the ETAPS workshop LiVe’20: 4th Workshop on Learning in Verification, 25 April 2020, Dublin, Ireland.
  2. E. Komendantskaya, R. Stewart, K. Duncan, D. Kienitz, P. Le Hen and P. Bacchus. Neural Network Verification for the Masses. Experience report, June 2019.