LAIV Seminars and Events

Welcome to the LAIV seminars and events page. Non-members are welcome to attend and contribute! To sign up for the LAIV mailing list, please contact Daniel Kienitz.

We hold research meetings every Wednesday at 12.15 in Room PG305, unless otherwise stated below. Some of these meetings are joint with the Dependable Systems Group (DSG).

COVID-19 Update: All LAIV seminars now run on-line, at the same time, 12.15 — 14.15 on Wednesdays. (Bringing lunch is welcome, as usual.) The virtual venue for all seminars is

March — August 2020

Date Topics
25 March Masterclass on Neural Net Verification with Z3 in Python/Jupyter Notebooks. By Ashwin Rajendran.
1 April Security Research talk, tutorial by Sasha Radomirovich.
8 April PhD Progression talk by Kirsty Duncan. Title: Relative Robustness of Quantized Neural Networks Against Adversarial Attacks
15 April PhD Progression talk by Alasdair Hill. Title: AI Planning in Separation Logic
22 April PhD Progression talk by Daniel Kienitz. Title: Improving Adversarial Robustness via Feature Selection
30 April Reading Group: Probabilistic Model Checker PRISM – 3
We will consider tutorial 2 and the paper

January — March 2020

Date Topics
15 January Reading Group: DL2: Training and Querying Neural Networks with Logic
Presented by Daniel Kienitz.
22 January No seminar
29 January No seminar
05 February Z3 API in Python for Verification of Neural Networks,
tutorial by Katya Komendantskaya. Sources
12 February DSG Seminar: Efficiency of Distributed Compilation Platforms,
tutorial by Alexander Lyon
19 February Liquid Haskell for verification of Neural Networks, tutorial by Wen Kokke
26 February Implementing verification constraints as loss functions for neural networks: overview of DL2 code, by Marco Casadio and Kirsty Duncan
04 March Introduction to Explainable AI using LIME,
reading group led by Alexandre Cardaillac,
information about LIME and the code are available here.
06 March, 13.00 — 14.00 LAIV Lunch in celebration of LAIV’s 1st anniversary.
11 March Reinforcement learning for adversarial attack and defense. By Vincent Larcher.
18 March Was: Scottish Programming Language Seminar (SPLS) at Stirling. Wen Kokke is giving a talk on Verification of Neural Nets in F*. SPLS is now postponed until further notice.
Instead: LAIV Group on-line meeting, testing our Zoom connectivity:
25 March Masterclass on Neural Net Verification with Z3 in Python/Jupyter Notebooks: improvement and elaboration of Katya’s demo given on the 5th of February. By Ashwin Rajendran. Zoom meeting:

Calendar of events (LAIV and CS seminars):

Past Seminars

September — December 2019

Date Topics, September — December 2019
24 September A planning meeting, welcome to new members.
Discussion of Logical relations tutorial, Chapter 1. Room EMG26
01 October Reading Group: Logical Relations as a proof method
presented by Alasdair Hill. Chapter 2 in the tutorial. Room EMG26.
08 October Reading Group: Logical Relations as a proof method
presented by Alasdair Hill. Chapter 2 in the tutorial. Room EMG26.
15 October Reading Group: Probabilistic Semantics for RoboChart: A Weakest Completion Approach, by Jim Woodcock
Presented by Katya Komendantskaya.
22 October Reading Group: Probabilistic Model Checker PRISM, following the tutorial.
Presented by Daniel Kienitz.
30 October Scottish Programming Language Seminar:
5 November Reading Group: Yale Wayang data set for a data mining project
Presented by Hannah Cohen.
12 November Reading Group: Probabilistic Model Checker PRISM – 2
Presented by Daniel Kienitz.
19 November Amazon Research Day at Edinburgh University
03 December Reading Group: DL2: Training and Querying Neural Networks with Logic
Presented by Katya Komendantskaya.
10 December, Room EMG61, 13.00 Reading Group: DL2: Training and Querying Neural Networks with Logic
Presented by Katya Komendantskaya.
12 December Scottish Theorem Proving seminar in ICMS

May — August 2019

Date Topics, May — August 2019
01 May A planning meeting, welcome to new members
08 May Adversarial Examples generation (for Neural networks in the Keras library):
a demo by Fraser Garrow
Pierre Le Hen: Testing DVL method of Neural network verification:
a progress report
15 May Daniel Kienitz: Genetic Algorithms and Verification of Neural nets:
a progress report

Pascal Bacchus: Hardware approach to Neural net programming:
a progress report
22 May Dmitry Rozplokhas:
Title: miniKanren and its formal semantics
Abstract: miniKanren is a small logic programming language which aims to be a lightweight and purely declarative alternative to Prolog. The main field of application for miniKanren is “relational programming”: the technique of writing programs as mathematical relations rather than mathematical functions. Programs written in this way have a few interesting features, such as the ability to be “run backwards” (synthesizing arguments from the result).
In the first part of the talk, I am going to give a short overview of the language. We will discuss its advantages and disadvantages, look at some simple examples of programs in miniKanren as well as state-of-the-art applications of relational programming. In the second part of the talk, I will describe the project I am currently involved in: providing formal semantics — rigorous mathematical description — to miniKanren programs and using it to prove some properties of program execution in the language.
Note the change in time and room:
the seminar will take place at 13.00 in Room 1.82
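The “run backwards” feature described in the abstract can be illustrated without miniKanren itself: a relational append relates xs, ys and zs such that xs + ys == zs, so fixing the result and enumerating the arguments inverts the usual function. A minimal brute-force Python sketch of the idea (not miniKanren’s actual search strategy):

```python
def appendo(zs):
    """Enumerate all (xs, ys) with xs + ys == zs: 'append' run backwards."""
    return [(zs[:i], zs[i:]) for i in range(len(zs) + 1)]

# Fixing the result [1, 2, 3] and asking for the arguments yields every split.
pairs = appendo([1, 2, 3])
```

A real miniKanren `appendo` is a single relation usable in any direction; the brute-force enumeration above only mimics the backwards direction.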
29 May Feedback session on Semester 1 MSc reports: writing about Neural Net verification. This session is rescheduled.
05 June General meeting and discussion of paper choices for the reading group.
11 June, 11.00, EM183 Josh Ko, National Institute of Informatics, Japan.
Title: Streaming Metamorphisms in Agda
Abstract: A metamorphism is a fold followed by an unfold, the former consuming a data structure to compute an intermediate value and the latter producing a codata structure from that value. Bird and Gibbons considered streaming metamorphisms on lists, which, under a commutativity condition, can possibly start producing the output without having to consume all the input. In Agda, we can encode a metamorphic specification in a type such that any program with that type correctly implements the metamorphism. In addition, for streaming metamorphisms, type information can help us find out the commutativity condition as we program the streaming algorithm in Agda’s interactive development environment.
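The fold-then-unfold shape of a metamorphism can be sketched in Python with the classic base-conversion example (a hedged illustration, not the Agda development from the talk): the fold consumes the input digits into a number, and the unfold produces digits in the target base.

```python
from functools import reduce

def convert_base(digits, src_base, dst_base):
    """A metamorphism: fold the input digits into a number, then unfold
    that number into the digits of the target base."""
    n = reduce(lambda acc, d: acc * src_base + d, digits, 0)  # fold (consume)
    out = []
    while n > 0:                                              # unfold (produce)
        out.append(n % dst_base)
        n //= dst_base
    return list(reversed(out)) or [0]

binary = convert_base([1, 0, 1], 10, 2)  # 101 in decimal
```

This naive version consumes all input before producing any output; the streaming variants studied in the talk can emit output digits earlier, when the commutativity condition holds.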
12 June LAIV is meeting academic visitors from Edinburgh University.
14 June, 12.00, PG305 Marta Vallejo, HWU
Title: Parkinson’s disease diagnosis using Convolutional Neural Networks and figure-copying tasks
Abstract: Parkinson’s disease (PD) is a chronic neurodegenerative disorder which leads to a range of motor and cognitive impairments. An accurate PD diagnosis is a challenging task, since its signs and symptoms are very similar to those of related conditions such as normal ageing and essential tremor. This work aims to automate the PD diagnosis process using a Convolutional Neural Network, a type of deep neural network architecture. To differentiate between healthy and PD patients, our approach focuses on discovering deviations in patients’ motor capabilities with the use of drawing tasks. Since different data representations may capture different aspects of the disease, this work also evaluates which drawing task, cube or pentagon, is more effective in the discrimination process.
19 June Reading group:
Adversarial Examples – A Complete Characterisation of the Phenomenon:
26 June Reading group:
Adversarial Examples – A Complete Characterisation of the Phenomenon:
03 July, 12.00 — 13.30, EM1.82 PhD progression talk 1. Amani Benhalem.
Title: Artificial Neural Network – Guided Particle Swarm Optimisation

Abstract: Many well-known machine learning algorithms are inspired by nature: swarm intelligence techniques are inspired by animal flock behaviours, artificial neural networks by the networks of neurons in an animal brain, and genetic algorithms by natural evolution. Although these techniques are inspired by nature, we can apply them to many artificial problems, and we can improve the performance of one technique by combining it with another that is behaviourally different. Our work presents the idea of improving the performance of Particle Swarm Optimisation (PSO) by combining it with Artificial Neural Networks (ANNs). The motivation is that PSO uses very simple rules to move the swarm towards the target, and adding ANN features to the PSO update equations should make the individuals in the swarm more intelligent.
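The PSO update rules the abstract refers to are indeed very simple: each particle’s velocity is a weighted blend of its inertia, an attraction towards its own best position so far, and an attraction towards the swarm’s best. A minimal sketch of standard PSO (not the ANN-guided variant proposed in the talk), minimising the sphere function:

```python
import random

def pso(f, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal Particle Swarm Optimisation: each particle is pulled towards
    its personal best and the swarm's global best position."""
    rng = random.Random(seed)
    xs = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]               # personal best positions
    gbest = min(pbest, key=f)                # global best position
    for _ in range(iters):
        for i, x in enumerate(xs):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - x[d])   # cognitive pull
                            + c2 * r2 * (gbest[d] - x[d]))     # social pull
                x[d] += vs[i][d]
            if f(x) < f(pbest[i]):
                pbest[i] = x[:]
        gbest = min(pbest, key=f)
    return gbest

best = pso(lambda x: sum(v * v for v in x), dim=2)  # minimum is at the origin
```

The parameter values (inertia w, acceleration coefficients c1 and c2) are common textbook defaults, not values from the talk.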

PhD progression talk 2. Alasdair Hill.
Title: Formal Verification of AI Planning Languages

Abstract: Verification of Artificial Intelligence is becoming increasingly important due to AI’s ever-growing presence across society, from facial identification and financial predictions to self-driving vehicles. The correctness of AI is therefore inextricably linked to safety and security in the real world. AI planning is used by robots to create plans that accomplish real-world tasks. Planning is done in a virtual world model: the robot is given a set of actions, an initial state and a goal state, and must compose a sequence of actions that reaches the goal state. Before a robot executes a plan in the real world, we want to ensure its validity. While current planning tools based on languages such as PDDL run tests to check these plans, they do not supply any proof evidence that the plans are correct. To be truly confident about an AI-generated plan, we need proof evidence that can be checked in some formal system. In this talk I will introduce and discuss two formal systems for the verification of AI plans.
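The plan-validity check described in the abstract, given an initial state, actions with preconditions and effects, and a goal, can be sketched for STRIPS-style plans. The action and state names below are hypothetical examples, and this is a plain checker, not the proof-producing formal systems of the talk:

```python
def valid_plan(initial, goal, actions, plan):
    """Check a STRIPS-style plan: each action's preconditions must hold in
    the current state, its effects then update the state, and the final
    state must satisfy the goal."""
    state = set(initial)
    for name in plan:
        pre, add, delete = actions[name]
        if not pre <= state:
            return False                     # precondition violated
        state = (state - delete) | add       # apply effects
    return goal <= state                     # goal reached?

# Hypothetical one-robot example: move from location A to location B.
actions = {"move_A_B": ({"at_A"}, {"at_B"}, {"at_A"})}
ok = valid_plan({"at_A"}, {"at_B"}, actions, ["move_A_B"])
```

A checker like this only says yes or no for one plan; the formal systems discussed in the talk aim to produce checkable proof evidence instead.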

PhD progression talk 3. Abdullah Altawairqi.
Title: Exploring the Modeling of Attack Strategies for STPA.

Abstract: Security analysis and safety analysis have in common the identification of circumstances that could threaten a system’s functions or integrity. While safety is concerned with avoiding accidents, security analysis aims to protect the system from intentional acts. In security analysis, an attacker’s intention is modelled by identifying security targets which, combined with the identification of the attack surface, helps to specify the defences to build. Systems-Theoretic Process Analysis (STPA), the hazard analysis technique of the Systems-Theoretic Accident Model and Processes (STAMP), reasons about a system in terms of its control loops and derives the system’s safety constraints by identifying how the entities of a control loop could malfunction or miss their goals. STAMP integrates a notion of causal factors to elicit safety properties of complex systems. Such modelling is intrinsically focused on the system’s point of view, while a security analysis needs to understand the attacker’s intentions and capabilities. Attack Trees are a graphical representation for modelling and analysing potential attack strategies, and can be extended with defensive patterns in Attack-Defence Trees (ADTrees). In this talk we explore the possibility of combining STPA analysis with ADTree modelling to strengthen a system security analysis. The approach would help to identify attack intentions and capabilities in order to prioritise and reduce the scope of the analysis.
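As a small illustration of the Attack Tree formalism mentioned in the abstract: an attack tree is evaluated bottom-up, with an OR node succeeding if any child attack succeeds and an AND node requiring all of its children. A minimal sketch with a hypothetical tree and attacker capabilities (not the STPA combination proposed in the talk):

```python
def attack_possible(node, capabilities):
    """Evaluate an attack tree bottom-up: a leaf succeeds if the attacker
    has that capability; OR needs one child, AND needs all children."""
    kind = node[0]
    if kind == "leaf":
        return node[1] in capabilities
    children = node[2]
    results = [attack_possible(c, capabilities) for c in children]
    return any(results) if kind == "or" else all(results)

# Hypothetical tree: steal data = (phish AND exfiltrate) OR physical access.
tree = ("or", "steal data", [
    ("and", "remote attack", [("leaf", "phish"), ("leaf", "exfiltrate")]),
    ("leaf", "physical access"),
])
```

ADTrees additionally attach defence nodes that can negate attack subtrees; this sketch covers only the plain attack side.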
10 July Reading group:
Adversarial Examples – A Complete Characterisation of the Phenomenon:
17 July Presentation and discussion of MSc thesis results: Pierre Le Hen
24 July Presentation and discussion of MSc thesis results: Daniel Kienitz
31 July Presentation and discussion of MSc thesis results: Pascal Bacchus

January — April 2019

Date Topics, January — March 2019
9 Jan Semester 2 Introduction.
Reading Seminar: In the next 6 sessions, we will cover
Safety Verification of Deep Neural Networks,
by Xiaowei Huang, Marta Kwiatkowska, Sen Wang and Min Wu
23 Jan Reading Seminar: Introduction and general set-up
30 Jan Reading Seminar: Background definitions, regions
6 Feb Reading Seminar: Verification Framework
13 Feb Reading Seminar: Feature Decomposition and Discovery
20 Feb Reading Seminar: Feature Decomposition and Discovery
28 Feb Reading Seminar: Selection of Regions and Manipulations
6 March Reading Seminar: Checking the verification properties in an SMT solver
18 March LAIV Masterclass: Survey of Deep Learning: Methods and Applications, by Javier Garcia
21 March LAIV Masterclass: Convolutional Neural nets, by Mario Vasilev
27 March LAIV Masterclass: Keras and Adversarial example generation, by Fraser Garrow