LAIV Seminars and Events

Welcome to the LAIV seminars and events page. Non-members are welcome to attend and contribute! If you wish to sign up for the LAIV mailing list and receive reminders of upcoming events, please contact Daniel Kienitz (dk50 at hw dot ac dot uk) with your name, affiliation, and email address.

All LAIV seminars now run online at specified times on Wednesdays (see the timetable below for details). The virtual venue for all seminars is https://zoom.us/j/736667631

Some of these meetings are held jointly with the AISEC grant (colour-coded in the “AISEC colours”, red and gold).

October — November 2020

Date Topics
1 October
Katya speaks about AI Verification in Need of Programming Language Support at the SICSA Annual Conference
6 October
Edinburgh Centre for Robotics: Annual Conference.
07 October
12.00 BST
(AISEC talk) Madeline Cheah, Horiba Mira
Title: Cybersecurity and Future Mobility. Slides.
Abstract: Cybersecurity is already a major concern for manufacturers, but what will cybersecurity for vehicles of the future look like? Connected autonomous vehicles are expected to be plugged into an ecosystem that will require us to be faster, more agile and more responsive to security threats.
This presentation identifies the questions we will need to answer about autonomous vehicles, their ecosystem, and the many new technologies that have already been trialled, and forecasts what security practitioners of the future may face.
Bio: Dr Madeline Cheah is currently Cyber Security Innovation Lead at HORIBA MIRA, with responsibility for automotive and autonomous vehicle cyber security research at the company. She took up her current role after earning a PhD in automotive cyber security at Coventry University, with a focus on systematic security evaluations of automotive interfaces and the processes (formal or informal) inherent in such a framework. Her contributions have been published in peer-reviewed papers and she is a regular speaker at cybersecurity events.
14 October
9.00 BST
(AISEC talk) Matthew Daggitt
Title: An introduction to algebraic routing theory
Abstract: Modern routing protocols are complex, so much so that even the Border Gateway Protocol that runs the backbone of the internet is known to exhibit undesirable behaviour such as non-convergence and non-determinism. In this talk I will discuss the work that I and others have been doing on an algebraic theory of routing. Instead of reasoning about an individual protocol implementation, this approach proves general theorems about routing that can then be specialised to a particular protocol. For example, one of our recent results shows that there is no difference between the sets of routing problems solvable with reliable vs unreliable communication. In the talk I will also discuss our efforts in formalising these proofs in the dependently typed language and proof assistant Agda. In particular, I’ll discuss our recent formal proof of correctness of a protocol containing many of the features of BGP: local preferences, communities and route filtering.
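For readers new to the algebraic view, here is a minimal hand-rolled Python sketch (an illustration under our own assumptions, not the talk's Agda development): shortest-path routing is fixpoint iteration of a matrix operator over the (min, +) algebra, and swapping in a different choice/extension operator pair specialises the same code to a different protocol.

```python
# Toy algebraic routing: iterate dist <- I (+) (A (x) dist) towards a fixpoint.
# Here (+) = min (route choice) and (x) = addition (path extension).
INF = float('inf')

def solve_routing(adj, n, oplus=min, otimes=lambda a, b: a + b, zero=INF):
    # Start from the identity matrix of the algebra: the (x)-identity (0 for
    # min-plus) on the diagonal, the (+)-identity elsewhere.
    dist = [[0 if i == j else zero for j in range(n)] for i in range(n)]
    for _ in range(n):  # n rounds suffice for well-behaved algebras
        for i in range(n):
            for j in range(n):
                for k in range(n):
                    dist[i][j] = oplus(dist[i][j], otimes(adj[i][k], dist[k][j]))
    return dist

# Three routers: 0 -1-> 1 -2-> 2, plus a direct 0 -5-> 2 link.
adj = [[INF, 1, 5], [INF, INF, 2], [INF, INF, INF]]
print(solve_routing(adj, 3)[0][2])  # 3: the two-hop route beats the direct one
```

The point of the algebraic theory is that theorems are proved once about the abstract operators and then hold for every protocol obtained by instantiating them.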
15 October
LAIV attends the Safety for Conversational AI Workshop: https://tinyurl.com/safeconvai
21 October
9.00 BST
Esra Erdem, Sabanci University.
Title: Applications of Answer Set Programming where Theory meets Practice
Abstract: We have been investigating applications of Answer Set Programming (ASP) in various domains, ranging from historical linguistics and bioinformatics to economics and robotics. In these applications, theory meets practice around challenging computational problems, and they all start a journey towards benefiting science and life. ASP plays an important role in this journey, sometimes as a declarative programming paradigm to solve hard combinatorial search problems (e.g., phylogeny reconstruction for Indo-European languages, multi-agent path finding in autonomous warehouses, matching problems in economics), and sometimes as a knowledge representation paradigm to allow deep reasoning and inferences over incomplete heterogeneous knowledge and beliefs of agents (e.g., hybrid planning for robotic manipulation, diagnostic reasoning in cognitive factories, explanation generation for biomedical queries). In this talk, we will share our experiences from such different applications of ASP, and discuss its role and usefulness from different perspectives.
Bio: Esra Erdem is an associate professor in computer science and engineering at Sabanci University. She received her Ph.D. in computer sciences at the University of Texas at Austin (2002), and carried out postdoctoral research at the University of Toronto and Vienna University of Technology from 2002 to 2006. Her research is in the area of artificial intelligence, in particular, the mathematical foundations of knowledge representation and reasoning, and their applications to various domains, including robotics, bioinformatics, logistics, and economics. She was a general co-chair of ICLP 2013, and is a program co-chair of ICLP 2019 and KR 2020, and the general chair of KR 2021. She served on the editorial board of JAIR, and is a member of the editorial board of TPLP, the ALP executive committee, and the KR steering committee. She received the Young Scientist award by the Science Academy, and the Women Leader of Innovative and Disruptive Technologies award by Microsoft.
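For readers new to ASP, the declarative style the abstract describes can be sketched in a few lines, assuming the clingo Python package (this toy graph-colouring example is ours, not from the talk):

```python
# pip install clingo -- a minimal sketch of calling an ASP solver from Python
import clingo

program = """
node(1..3). edge(1,2). edge(2,3). edge(1,3).
col(red; green; blue).
{ assign(N, C) : col(C) } = 1 :- node(N).   % choose exactly one colour per node
:- edge(X, Y), assign(X, C), assign(Y, C).  % adjacent nodes must differ
#show assign/2.
"""

ctl = clingo.Control(["1"])             # ask for one answer set
ctl.add("base", [], program)
ctl.ground([("base", [])])
ctl.solve(on_model=lambda m: print(m))  # e.g. assign(1,red) assign(2,green) assign(3,blue)
```

The program only states what counts as a solution; the combinatorial search is left entirely to the solver.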
21 October
LAIV attends the Scottish Seminar on Programming Language Semantics
28 October
9.00 GMT
(AISEC talk) Guy Katz (The Hebrew University of Jerusalem).
Title: Verifying Recurrent Neural Networks using Invariant Inference
Abstract: Deep neural networks are revolutionizing the way complex systems are developed. However, these automatically-generated networks are opaque to humans, making it difficult to reason about them and guarantee their correctness. In this work, we propose a novel approach for verifying properties of a widespread variant of neural networks, called recurrent neural networks. Recurrent neural networks play a key role in, e.g., speech recognition, and their verification is crucial for guaranteeing the reliability of many critical systems. Our approach is based on the inference of invariants, which allow us to reduce the complex problem of verifying recurrent networks into simpler, non-recurrent problems. Experiments with a proof-of-concept implementation of our approach demonstrate that it performs orders-of-magnitude better than the prior state of the art.
[Based on an ATVA 2020 paper. Joint work with Yuval Jacoby and Clark Barrett]
Bio: Guy Katz is a senior lecturer at the Hebrew University of Jerusalem, Israel. He received his Ph.D. at the Weizmann Institute of Science in 2015. His research interests lie at the intersection between Formal Methods and Software Engineering, and in particular in the application of formal methods to software systems with components generated via machine learning. https://www.katz-lab.com/
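The invariant idea in the abstract can be miniaturised with off-the-shelf Z3 (the z3-solver package); this toy sketch is ours, not the paper's implementation. If a candidate invariant on the hidden state is inductive, properties of the recurrent cell follow at every time step without unrolling the network:

```python
# pip install z3-solver -- a hand-rolled illustration, not the authors' tool
from z3 import Real, Solver, If, Not, And, sat

h, x = Real('h'), Real('x')
relu = lambda t: If(t > 0, t, 0)
h_next = relu(0.5 * h + x)   # toy recurrent cell: h' = ReLU(0.5*h + x)

s = Solver()
s.add(0 <= h, h <= 2)                      # candidate invariant on the hidden state
s.add(0 <= x, x <= 1)                      # input bounds
s.add(Not(And(0 <= h_next, h_next <= 2)))  # can one step break the invariant?
if s.check() == sat:
    print('invariant fails:', s.model())
else:
    print('invariant is inductive; the bound holds at every time step')
```

The query is non-recurrent: one copy of the cell plus the invariant (and a base case for the initial state) replaces the entire unrolled network, which is exactly the reduction the abstract describes.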
04 November
9.00 GMT
Alasdair Hill (The Lab for AI and Verification, HWU).
Title: Tutorial on small-step and big-step operational semantics for programming languages.
11 November
9.00 GMT
Chih-Hong Cheng (Denso Automotive Deutschland GmbH).
Title: Research Challenges and Opportunities towards Safe Autonomous Driving
Abstract: The race towards autonomous driving has lasted for years. I argue that the chance for the contestants to close the safety gap lies in scientifically rooted engineering methods that can efficiently improve the quality of AI. In particular, I consider a structural approach (via GSN) to argue the quality of Neural Networks (NN) with NN-specific dependability metrics. A systematic analysis considering the quality of data collection, training, testing, and operation allows us to identify many unsolved research questions: (1) How to create a weaker form of data completeness while admitting the combinatorial explosion of scenarios? (2) How to overcome the specification problem for formal verification, where inputs such as images can be hard to characterize? (3) How to justify that a decision made by the network is supported by prior similarities in the training data? Some of my prior work will be mentioned briefly, with a statement of its limitations.
Bio: Currently, Chih-Hong Cheng is a technical manager at DENSO, researching safety technologies for autonomous driving. Before DENSO, he held research positions in fortiss and ABB Corporate Research. He received his doctoral degree in CS from the Technical University of Munich.
18 November
14.00 GMT
(AISEC talk) Marco Gaboardi (Boston University, USA).
Title: TBA
25 November
14.00 GMT
(AISEC talk) Grant Passmore (Imandra.ai, USA).
Title: TBA
02 December
Reading Group: Probabilistic Model Checker PRISM – 3
We will consider tutorial 2 and the paper.

For full history of seminars, see below.

Calendar of events (LAIV and CS seminars):

Past Seminars

June — September 2020

Date Topics
03 June Dorian Gouzou. Particle Swarm Optimisation in Machine Learning
10 June Hugo Cousin. High-Performance Analysis for Social Networks in Rust
17 June, 10.00 PhD Progression talk: Fazl Barez
System 3: Learning with constraints and domain knowledge.
17–19 June
LAIV at PLDI’20.
24 June Invited talk: John Woodward. ABBA: Automatically Building Better Algorithms
Abstract: We will look at how to automatically build better algorithms. We begin by manually identifying a small component of a human-designed algorithm. We then target this piece of code with different metaheuristics (random search, hill-climbing, and evolutionary algorithms). In each case the resulting algorithm is an improvement over the pre-existing algorithm. We will look at a few applications: (1) probability distributions for use in an optimization algorithm; (2) mutation operators for genetic algorithms; (3) debugging code overnight, a real-world application.
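As a hedged illustration of targeting one component with a metaheuristic (our toy, not the talk's system), here is hill-climbing tuning a single numeric parameter; in ABBA the fitness would come from running the full algorithm on benchmark problems rather than from a stand-in formula:

```python
import random

def fitness(mutation_rate):
    # Stand-in for "run the algorithm with this component and measure it";
    # we pretend 0.07 happens to be the best rate.
    return -(mutation_rate - 0.07) ** 2

def hill_climb(steps=1000, step_size=0.01):
    x = random.random()
    best = fitness(x)
    for _ in range(steps):
        cand = min(1.0, max(0.0, x + random.uniform(-step_size, step_size)))
        if fitness(cand) > best:      # keep the move only if the component improved
            x, best = cand, fitness(cand)
    return x

print(hill_climb())  # settles near 0.07
```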
1-4 July LAIV at IJCAR’20
8 July Reading Group: PaRoT: A Practical Framework for Robust Deep Neural Network Training, by FiveAI
We will consider this paper. Marco Casadio will lead the seminar.
15 July CEC 2020 presentation by Wenjun Wang:
Title: Evolutionary Learning for Soft Margin Problems: A Case Study on Practical Problems with Kernels
Abstract: This paper addresses two practical problems: the classification and prediction of properties for polymer and glass materials, as a case study of evolutionary learning for tackling soft margin problems. The presented classifier is modelled by support vectors as well as various kernel functions, with its hard restrictions relaxed by slack variables to be soft restrictions in order to achieve higher performance. We have compared evolutionary learning with traditional gradient methods on standard, dual and soft margin support vector machines, built by polynomial, Gaussian, and ANOVA kernels. Experimental results for data on 434 polymers and 1,441 glasses show that both gradient and evolutionary learning approaches have their advantages. We show that within this domain the chosen gradient methodology is beneficial for standard linear classification problems, whilst the evolutionary methodology is more effective in addressing highly non-linear and complex problems, such as the soft margin problem.
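The kernels and the soft-margin parameter from the abstract can be tried out in a few lines with scikit-learn (toy data here; the polymer and glass datasets are not reproduced). 'rbf' is the Gaussian kernel, C penalises the slack variables, and an ANOVA kernel would need a custom kernel callable:

```python
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_moons(n_samples=400, noise=0.25, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

for kernel in ('linear', 'poly', 'rbf'):
    clf = SVC(kernel=kernel, C=1.0)   # C controls the soft-margin trade-off
    clf.fit(Xtr, ytr)
    print(kernel, round(clf.score(Xte, yte), 3))  # compare test accuracy per kernel
```

Note this is standard (non-evolutionary) fitting; the paper's point is that an evolutionary learner can optimise the same soft-margin objective when the problem becomes highly non-linear.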
19 July
LAIV attends Workshop on Formal Methods for ML-Enabled Autonomous Systems (FoMLAS) Details.
21 July
LAIV attends Workshop on Verification of Neural Networks (VNN’20) and Verification of Neural Networks Competition (VNN-COMP’20) Details.
22 July
13.00 — 18.00
LAIV attends SPLS: Scottish Programming Languages Seminar Details.
03-20 August LAIV attends SPLV: Scottish Summer School on Programming Languages and Verification Details.
19 August
13.00 — 14.30
Vaishak Belle (Edinburgh University)
Title: Logical Interpretations of Autoencoders.
Abstract: The unification of low-level perception and high-level reasoning is a long-standing problem in artificial intelligence, which has the potential to not only bring the areas of logic and learning closer together but also demonstrate how abstract concepts might emerge from sensory data. Precisely because deep learning methods dominate perception-based learning, including vision, speech, and linguistic grammar, there is fast-growing literature on how to integrate symbolic reasoning and deep learning. Broadly, efforts seem to fall into three camps: those focused on defining a logic whose formulas capture deep learning, ones that integrate symbolic constraints in deep learning, and others that allow neural computations and symbolic reasoning to co-exist separately, to enjoy the strengths of both worlds. In this paper, we identify another dimension to this inquiry: what do the hidden layers really capture, and how can we reason about that logically? In particular, we consider variational autoencoders that are widely used for dimensionality reduction and inject a symbolic generative framework onto the feature layer. This allows us, among other things, to generate example images for a class to get a sense of what was learned. Moreover, the modular structure of the proposed model makes it possible to learn relations over multiple images at a time, as well as handle noisy labels. Our empirical evaluations show the promise of this inquiry.
* this is joint work with Anton Fuxjaeger & will appear at ECAI 2020
26 August
13.00 — 14.30
Reading group: Adversarial training and provable defenses: Bridging the gap, by Balunovic et al. Link to the paper.
Abstract: We present COLT, a new method to train neural networks based on a novel combination of adversarial training and provable defenses. The key idea is to model neural network training as a procedure which includes both the verifier and the adversary. In every iteration, the verifier aims to certify the network using convex relaxation while the adversary tries to find inputs inside that convex relaxation which cause verification to fail. We experimentally show that this training method, named convex layerwise adversarial training (COLT), is promising and achieves the best of both worlds: it produces a state-of-the-art neural network with certified robustness of 60.5% and accuracy of 78.4% on the challenging CIFAR-10 dataset with a 2/255 L∞ perturbation. This significantly improves over the best concurrent results of 54.0% certified robustness and 71.5% accuracy.
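COLT's adversary searches inside a convex relaxation of each layer; as a simpler stand-in (ours, not the paper's code), here is the plain L∞ projected-gradient step that plays the adversary's role in ordinary adversarial training, sketched in PyTorch:

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=2/255, alpha=0.5/255, steps=10):
    """Find a perturbation within the L-inf eps-ball that maximises the loss."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()       # ascend the loss
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project back into the ball
            x_adv = x_adv.clamp(0, 1)                 # stay a valid image
    return x_adv.detach()
```

Training on `model(pgd_attack(model, x, y))` gives empirical robustness only; COLT replaces this inner search with one over the convex relaxation so that the verifier can certify the result.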
02 September
12.00 — 13.00
Katya Komendantskaya (HWU)
Title: Continuous Verification of Machine Learning: a Declarative Programming Approach
Abstract: I will discuss the state of the art in neural network verification, and introduce the notion of “continuous verification”, which characterises a family of methods that exploit the continuous nature of machine learning algorithms. I will argue that methods of continuous verification must rely on robust programming language infrastructure (refinement types, automated proving, type-driven program synthesis), which presents a major opportunity for the declarative programming language community.
* this is joint work with Wen Kokke and Daniel Kienitz & will appear as invited talk at PPDP’2020.
07 September
11.30 — 12.30
Alasdair Hill (HWU)
Title: Proof-Carrying Plans: a Resource Logic for AI Planning
Abstract: Planning languages have been used successfully in AI for several decades. Recent trends in AI verification and Explainable AI have raised the question of whether AI planning techniques can be verified. In this paper, we present a novel resource logic, the Proof Carrying Plans (PCP) logic, that can be used to verify plans produced by AI planners. The PCP logic takes inspiration from existing resource logics (such as Linear logic and Separation logic) as well as Hoare logic when it comes to modelling states and resource-aware plan execution. It also capitalises on the Curry-Howard approach to logics, in its treatment of plans as functions and plan pre- and post-conditions as types. We present two main results. From the theoretical perspective, we show that the PCP logic is sound relative to the standard possible world semantics used in AI planning. From the practical perspective, we present a complete Agda formalisation of the PCP logic and of its soundness proof. Moreover, we showcase the Curry-Howard, or functional, value of this implementation by supplementing it with a library that parses AI plans into Agda proofs automatically. We provide an evaluation of this library and the resulting Agda functions.
* this is joint work with Katya Komendantskaya and Ron Petrick & will appear as regular talk at PPDP’2020.
8-10 September
LAIV attends PPDP’20: 22nd International Symposium on Principles and Practice of Declarative Programming. Details
16 September
12.00 — 13.30
Katya Komendantskaya (HWU)
Title: The New Normal: We Cannot Eliminate Cuts in Coinductive Sequent Calculi, but We Can Explore Them.
Abstract: In sequent calculi, cut elimination is a property that guarantees that any provable formula can be proven analytically. For example, Gentzen’s classical and intuitionistic calculi LK and LJ enjoy cut elimination. The property is less studied in coinductive extensions of sequent calculi. In this paper, we use coinductive Horn clause theories to show that cut is not eliminable in a coinductive extension of LJ, a system we call CLJ. We derive two further practical results from this study. We show that CoLP by Gupta et al. gives rise to cut-free proofs in CLJ with fixpoint terms, and we formulate and implement a novel method of coinductive theory exploration that provides several heuristics for discovery of cut formulae in CLJ.
* this is joint work with Dmitry Rozplokhas and Henning Basold & will appear in ICLP’2020.
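For readers outside proof theory, the rule at stake is the ordinary cut rule of sequent calculus, shown here in intuitionistic form; cut elimination says every proof using it can be rewritten into one that does not, and the paper shows this fails in the coinductive extension CLJ:

\[
\frac{\Gamma \vdash A \qquad \Gamma, A \vdash C}{\Gamma \vdash C}\ (\mathrm{cut})
\]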
18-24 September
LAIV attends ICLP’20: The 36th International Conference on Logic Programming. Details
23 September
12.00 BST
(AISEC talk) Lu Yu
Title: Semantic Representation: From Color to Deep Embeddings
Abstract: One of the fundamental problems of computer vision is to represent images with compact semantically relevant embeddings. These embeddings could then be used in a wide variety of applications, such as image retrieval, object detection, and video search. The main objective of this presentation is to study image embeddings from two aspects: color embeddings and deep embeddings.
30 September
12.00
(AISEC talk) Scott McLachlan
Title: Lawmaps: Enabling Legal AI development through Visualisation of the Implicit Structure of Legislation and Lawyerly Process

March — May 2020

Date Topics
25 March
Masterclass on Neural Net Verification with Z3 in Python/Jupyter Notebooks.
By Ashwin Rajendran.
1 April Security Research talk, tutorial by Sasha Radomirovich.
8 April PhD Progression talk by Kirsty Duncan.
Title: Relative Robustness of Quantized Neural Networks Against Adversarial Attacks
15 April PhD Progression talk by Alasdair Hill. Title: AI Planning in Separation Logic
22 April PhD Progression talk by Daniel Kienitz.
Title: Learning disentangled manifolds for robust classification
29 April Reading Group: Auto-encoders for Neural Network Verification, led by Daniel Kienitz.
We will consider the paper Generalized Denoising Auto-Encoders as Generative Models.
06 May Fraser Garrow.
Title: Genetic algorithm improvement for accurate and robust neural networks
11 May, 17.00 — 24.00
LAIV attends NASA Formal Methods conference, Safe AI Workshop, link.
12 May, 16.00 — 24.00
LAIV attends NASA Formal Methods conference, link.
13 May, 12.15 — 13.30 Invited talk: Guy Katz, Hebrew University. Safety in AI Systems: Verification of Deep Neural Networks. Slides.
Abstract: Deep neural networks have emerged as a widely used and effective means for tackling complex, real-world problems. However, a major obstacle in applying them to safety-critical systems is the great difficulty in providing formal guarantees about their behavior. We present an SMT-based technique for formally verifying properties of deep neural networks (or providing counter-examples). The technique can be used to prove various safety properties of neural networks, or to measure a network’s robustness to adversarial inputs. Our approach is based on the simplex method, extended to handle piecewise-linear constraints, such as Rectified Linear Units (ReLUs), which are a crucial ingredient in many modern neural networks. We evaluate our technique on a prototype deep neural network implementation of the next-generation Airborne Collision Avoidance System for unmanned aircraft (ACAS Xu).
Brief Bio: Guy Katz is a senior lecturer at the Hebrew University of Jerusalem, Israel. He received his Ph.D. at the Weizmann Institute of Science in 2015. His research interests lie at the intersection between Formal Methods and Software Engineering, and in particular in the application of formal methods to software systems with components generated via machine learning. https://www.katz-lab.com/
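The talk's technique extends the simplex core to split lazily on each ReLU's two phases. The same kind of query can be posed, far less efficiently, to off-the-shelf Z3 by writing the ReLU as an if-then-else constraint; this toy sketch is ours, not the ACAS Xu encoding:

```python
# pip install z3-solver -- an eager toy encoding, not the talk's solver
from z3 import Real, Solver, If

x1, x2, y = Real('x1'), Real('x2'), Real('y')
s = Solver()
s.add(0 <= x1, x1 <= 1, 0 <= x2, x2 <= 1)    # input region
s.add(y == If(2*x1 - x2 > 0, 2*x1 - x2, 0))  # one ReLU neuron
s.add(y > 2)                                 # negation of "y never exceeds 2"
print(s.check())  # unsat: no counterexample exists, so the property holds
```

Scaling this eager encoding to real networks blows up in the number of ReLU case splits, which is precisely what the talk's simplex extension is designed to avoid.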
13 May, 16.00 — 24.00
LAIV attends NASA Formal Methods conference, link.
14 May, 16.00 — 24.00
LAIV attends NASA Formal Methods conference, link.
20 May, 12.15 — 13.30 Invited talk: Iain Whiteside, Five AI. Hard Cases, Edge Cases, and Head Cases: Challenges in AI Safety for Autonomous Vehicles
Abstract: We first elaborate on some of the practical challenges for demonstrating safety of a perception system for Level 4 self-driving. We then outline a methodology and set of technologies that we believe can help us properly argue for the safety of AI components for an AV stack. We will exemplify the methodology with some real world examples from Five’s technology.
Brief Bio: Iain is a Principal Scientist at Five and leads the Safety Assurance team. Prior to working at Five, Iain was a researcher into the future of assurance cases at NASA Ames Research Center, leading the development of the AdvoCATE toolkit, the state of the art for assurance case construction. Iain has a PhD in formal verification of software from the University of Edinburgh and has been active in the verification and safety community for over a decade.
27 May Bartosz Schatton. Explainability versus Robustness for AI (with applications to medical imaging). Slides.
28 May, 9.00 — 12.30
LAIV attends TEASE-LP: Workshop on Trends, Extensions, Applications and Semantics of Logic Programming, link.
29 May, 14.30 — 18.30
LAIV attends TEASE-LP: Workshop on Trends, Extensions, Applications and Semantics of Logic Programming, link.

January — March 2020

Date Topics
15 January Reading Group: DL2: Training and Querying Neural Networks with Logic
Presented by Daniel Kienitz.
22 January No seminar
29 January No seminar
05 February Z3 API in Python for Verification of Neural Networks,
tutorial by Katya Komendantskaya. Sources
12 February DSG Seminar: Efficiency of Distributed Compilation Platforms,
tutorial by Alexander Lyon
19 February Liquid Haskell for verification of Neural Networks, tutorial by Wen Kokke
26 February Implementing verification constraints as loss functions for neural networks: overview of DL2 code, by Marco Casadio and Kirsty Duncan
04 March Introduction to Explainable AI using LIME,
reading group led by Alexandre Cardaillac,
information about LIME and the code are available here.
06 March, 13.00 — 14.00 LAIV Lunch in Celebration of LAIV’s 1st year Anniversary.
11 March Reinforcement learning for adversarial attack and defense. By Vincent Larcher.
18 March Was: Scottish Programming Languages Seminar (SPLS) at Stirling. Wen Kokke was to give a talk on Verification of Neural Nets in F*. SPLS is now postponed until further notice.
Instead: LAIV Group on-line meeting, testing our Zoom connectivity: https://us04web.zoom.us/j/626010005
25 March Masterclass on Neural Net Verification with Z3 in Python/Jupyter Notebooks: improvement and elaboration of Katya’s demo given on 5 February. By Ashwin Rajendran. Zoom meeting: https://zoom.us/j/736667631

September — December 2019

Date Topics
24 September A planning meeting, welcome to new members.
Discussion of Logical relations tutorial, Chapter 1. Room EMG26
01 October Reading Group: Logical Relations as a proof method
presented by Alasdair Hill. Chapter 2 in the tutorial. Room EMG26.
08 October Reading Group: Logical Relations as a proof method
presented by Alasdair Hill. Chapter 2 in the tutorial. Room EMG26.
15 October Reading Group: Probabilistic Semantics for RoboChart: A Weakest Completion Approach, by Jim Woodcock
Presented by Katya Komendantskaya.
22 October Reading Group: Probabilistic Model Checker PRISM. Following the tutorial.
Presented by Daniel Kienitz.
30 October Scottish Programming Language Seminar: https://spls-series.github.io/meetings/2019/october/
5 November Reading Group: Yale Wayang data set for a data mining project
Presented by Hannah Cohen.
12 November Reading Group: Probabilistic Model Checker PRISM – 2
Presented by Daniel Kienitz.
19 November Amazon Research day in Edinburgh University
03 December Reading Group: DL2: Training and Querying Neural Networks with Logic
Presented by Katya Komendantskaya.
10 December, Room EMG61, 13.00 Reading Group: DL2: Training and Querying Neural Networks with Logic
Presented by Katya Komendantskaya.
12 December Scottish Theorem Proving seminar in ICMS

May — August 2019

Date Topics
01 May A planning meeting, welcome to new members
08 May Adversarial Examples generation (for Neural networks in the Keras library): a demo by Fraser Garrow
Pierre Le Hen: Testing the DVL method of Neural network verification: a progress report
15 May Daniel Kienitz: Genetic Algorithms and Verification of Neural nets: a progress report

Pascal Bacchus: Hardware approach to Neural net programming: a progress report
22 May Dmitry Rozplokhas:
Title: miniKanren and its formal semantics
Abstract: miniKanren is a small logic programming language which aims to be a light-weight and purely declarative alternative to Prolog. The main field of application for miniKanren is “relational programming” — the technique of writing programs as mathematical relations rather than mathematical functions. Programs written in this way have a few interesting features, such as an ability to be “run backwards” (synthesize arguments from the result).
In the first part of the talk, I am going to give a short overview of the language. We will discuss its advantages and disadvantages, look at some simple examples of programs in miniKanren as well as state-of-the-art applications of relational programming. In the second part of the talk, I will describe the project I am currently involved in: providing formal semantics — rigorous mathematical description — to miniKanren programs and using it to prove some properties of program execution in the language.
Note the change in time and room:
the seminar will take place at 13.00 in Room 1.82
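A small taste of “running backwards”, using the Python kanren package (a miniKanren descendant; the example follows its documentation, not Dmitry's talk):

```python
# pip install kanren
from kanren import Relation, facts, run, var

parent = Relation()
facts(parent, ('Homer', 'Bart'), ('Homer', 'Lisa'), ('Abe', 'Homer'))

x = var()
print(run(0, x, parent(x, 'Bart')))   # query one way: Bart's parent -> ('Homer',)
print(run(0, x, parent('Homer', x)))  # same relation, other way: Bart and Lisa
```

One relation answers both questions; a function could only compute one direction.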
29 May Feedback session on Semester 1 MSc reports: writing about Neural Net verification. This session is rescheduled.
05 June General meeting and discussion of paper choices for the reading group.
11 June, 11.00, EM183 Josh Ko, National Institute of Informatics, Japan.
https://josh-hs-ko.github.io/
Title: Streaming Metamorphisms in Agda
Abstract: A metamorphism is a fold followed by an unfold, the former consuming a data structure to compute an intermediate value and the latter producing a codata structure from that value. Bird and Gibbons considered streaming metamorphisms on lists, which, under a commutativity condition, can possibly start producing the output without having to consume all the input. In Agda, we can encode a metamorphic specification in a type such that any program with that type correctly implements the metamorphism. In addition, for streaming metamorphisms, type information can help us find out the commutativity condition as we program the streaming algorithm in Agda’s interactive development environment.
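Stripped of both the streaming and the Agda, a metamorphism is simply an unfold after a fold. A toy Python version (ours), reading base-10 digits and emitting base-2 digits:

```python
def digits10_to_digits2(ds):
    n = 0
    for d in ds:            # fold: consume all input into one value
        n = n * 10 + d
    out = []
    while n > 0:            # unfold: produce output from that value
        out.append(n % 2)
        n //= 2
    return out              # least significant digit first

print(digits10_to_digits2([1, 3]))  # 13 -> [1, 0, 1, 1]
```

A streaming version would interleave the while-loop's production with the for-loop's consumption, which is exactly what the commutativity condition in the abstract licenses.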
12 June LAIV is meeting academic visitors from Edinburgh University.
14 June, 12.00, PG305 Marta Vallejo, HWU
Title: Parkinson’s disease diagnosis using Convolutional Neural Networks and figure-copying tasks
Abstract: Parkinson’s disease (PD) is a chronic neurodegenerative disorder, which leads to a range of motor and cognitive impairments. An accurate PD diagnosis is a challenging task since its signs and symptoms are very similar to other related diseases such as normal ageing and essential tremor. This work aims to automate the PD diagnosis process by using a Convolutional Neural Network, a type of deep neural network architecture. To differentiate between healthy and PD patients, our approach focuses on discovering deviations in patient’s motor capabilities with the use of drawing tasks. Besides that, since different data representations may capture various aspects of this disease, this work explores which drawing task, cube or pentagon, is more effective in the discrimination process. The goal is to evaluate which data format is more effective in diagnosing PD.
19 June Reading group:
Adversarial Examples – A Complete Characterisation of the Phenomenon: https://arxiv.org/abs/1810.01185
26 June Reading group:
Adversarial Examples – A Complete Characterisation of the Phenomenon: https://arxiv.org/abs/1810.01185
03 July, 12-13.30, EM1.82. PhD progression talk 1. Amani Benhalem.
Title: Artificial Neural Network – Guided Particle Swarm Optimisation

Abstract: Many well-known machine learning algorithms are inspired by nature. For example, swarm intelligence techniques are inspired by animal flock behaviours; artificial neural networks are inspired by the networks of neurons in animal brains; genetic algorithms are inspired by natural evolution. Even though these techniques are inspired by nature, we can apply them to solve many artificial problems. We can also improve the performance of one of them by combining it with another technique that is behaviourally different. Our work presents the idea of improving the performance of Particle Swarm Optimization (PSO) by combining it with Artificial Neural Networks (ANNs). The motivation is that PSO uses very simple rules to move the swarm towards the target, and adding the features of an ANN to the PSO equations will make individuals in the swarm more intelligent.
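For context, the “very simple rules” are the standard PSO velocity and position updates. A minimal plain-PSO sketch (ours, without the proposed ANN guidance) is below; the project would replace the fixed update rule with a learned one:

```python
import random

def pso(f, dim=2, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    xs = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]                 # each particle's best position
    gbest = min(pbest, key=f)                  # the swarm's best position
    for _ in range(iters):
        for i, x in enumerate(xs):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - x[d])   # pull to own best
                            + c2 * r2 * (gbest[d] - x[d]))     # pull to swarm best
                x[d] += vs[i][d]
            if f(x) < f(pbest[i]):
                pbest[i] = x[:]
        gbest = min(pbest, key=f)
    return gbest

print(pso(lambda p: sum(t * t for t in p)))  # converges near the origin
```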

PhD progression talk 2. Alasdair Hill.
Title: Formal Verification of AI Planning Languages

Abstract: Verification of Artificial Intelligence is becoming increasingly important due to AI’s ever-growing presence across society: it is already used in facial identification, financial predictions and self-driving vehicles. The correctness of AI is therefore inextricably linked to safety and security in the real world. AI planning is used by robots to create plans to accomplish real-world tasks. The planning is done in a virtual world model where the robot is given a set of actions, an initial state and a goal state, and the robot has to compose a plan of sequential actions to reach the goal state. Before any robot carries out its plans in real life, we want to ensure their validity. Whilst current planners such as PDDL run tests to check these plans, they do not supply any proof evidence that the plans are correct. To be truly confident about an AI-generated plan, we need plans to carry proof evidence that can be checked in some formal system. In this talk I will introduce and discuss two formal systems for the verification of AI plans.

PhD progression talk 3. Abdullah Altawairqi.
Title: Exploring the Modeling of Attack Strategies for STPA.

Abstract: Security analysis and safety analysis have in common the identification of circumstances that could threaten a system’s functions or integrity. While safety is concerned with avoiding accidents, security analysis aims to protect the system from intentional acts. In security analysis, the modelling of an attacker’s intention is done by identifying security targets which, combined with the identification of the attack surface, helps to specify the system defence to build. Systems-Theoretic Process Analysis (STPA), the hazard analysis technique of the Systems-Theoretic Accident Model and Processes (STAMP), offers a way to reason about the system in terms of its control loops and to derive the safety constraints of the system by identifying how the entities of a control loop could malfunction or miss their goal. STAMP integrates a notion of causal factors to elicit safety properties of complex systems. Such modelling is intrinsically focused on the system’s point of view, while a security analysis also needs to understand the attacker’s intentions and capabilities. Attack Trees are a graphical representation used to model and analyse potential attack strategies, and can be extended with defensive patterns into Attack Defence Trees (ADTrees). In this talk we explore the possibilities of combining STPA analysis and ADTree modelling to strengthen a system security analysis. The approach would help to identify attack intentions and capabilities, in order to prioritise and reduce the scope of the analysis.
10 July Reading group:
Adversarial Examples – A Complete Characterisation of the Phenomenon: https://arxiv.org/abs/1810.01185
17 July Presentation and discussion of MSc thesis results: Pierre Le Hen
24 July Presentation and discussion of MSc thesis results: Daniel Kienitz
31 July Presentation and discussion of MSc thesis results: Pascal Bacchus

January — April 2019

Date Topics
9 Jan Semester 2 Introduction.
Reading Seminar: In the next 6 sessions, we will cover
Safety Verification of Deep Neural Networks,
by Xiaowei Huang, Marta Kwiatkowska, Sen Wang and Min Wu
23 Jan
Reading Seminar: Introduction and general set-up
30 Jan Reading Seminar: Background definitions, regions
6 Feb Reading Seminar: Verification Framework
13 Feb Reading Seminar: Feature Decomposition and Discovery
20 Feb Reading Seminar: Feature Decomposition and Discovery
28 Feb Reading Seminar: Selection of Regions and Manipulations
6 March Reading Seminar: Checking the verification properties in an SMT solver
18 March LAIV Masterclass: Survey of Deep Learning: Methods and Applications, by Javier Garcia
21 March LAIV Masterclass: Convolutional Neural nets, by Mario Vasilev
27 March LAIV Masterclass: Keras and Adversarial example generation, by Fraser Garrow