LAIV Seminars and Events

Welcome to the LAIV seminars page. LAIV seminars provide a forum for experts in AI, machine learning, verification, programming languages, security, and law to meet and discuss the state of the art and future directions in Verification of AI.

Non-members are welcome to attend and contribute! If you wish to sign up for the LAIV mailing list and receive reminders of upcoming events, please contact Natalia Slusarz (nds1 at hw dot ac dot uk), and let her know your name, affiliation, and email address.

LAIV seminars run in hybrid mode, both face-to-face and online, at specified times (see the timetable below for details). The physical location for all seminars is Earl Mountbatten Building, room EM302 (Heriot-Watt Campus). The virtual venue for all seminars is: https://zoom.us/j/95059599181

Some LAIV talks are available on the AISEC YouTube Channel.

April — July 2024

3 April 2024, 11 am (moved from 27 March)
Swaraj Dash, Heriot-Watt University
Title:  A small trick for converting tree syntax to Polish notation generically
Abstract: This is going to be an informal talk about some ongoing work (or rather a tiny trick I noticed, which may possibly be well-known by some name). This is all very open-ended work and so maybe it is best to simply give away the punchline for the talk.

Consider the Haskell type of binary trees `data Tree = L | B Tree Tree`. I will show how to define a type `TreeP`, related to `Tree`, with corresponding equivalents for `B` and `L` which I will call `b` and `l` respectively, such that the tree `t = B L (B (B L L) L) :: Tree` can be written isomorphically as `t’ = b ; l ; b ; b ; l ; l ; l :: TreeP` (i.e. the same thing but without brackets, where ; is function composition in the opposite order). One could probably make sense of this construction using the Yoneda lemma or the cofree comonad construction.
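For readers who want to try the punchline ahead of the talk, here is one way it can be realised in plain Haskell (a sketch only, not necessarily the construction Swaraj has in mind: the tokens `l` and `b` here live at a family of types rather than a single named `TreeP`, and `>>>` stands in for the abstract's `;`):

```haskell
import Control.Category ((>>>))  -- composition in diagrammatic (left-to-right) order

data Tree = L | B Tree Tree deriving (Eq, Show)

-- Tokens as continuation transformers: `l` hands a leaf to the waiting
-- continuation; `b` asks for two more subtrees before continuing.
l :: (Tree -> r) -> r
l k = k L

b :: (Tree -> r) -> (Tree -> Tree -> r)
b k x y = k (B x y)

t :: Tree
t = B L (B (B L L) L)

-- The same tree spelled in Polish notation, read left to right:
t' :: Tree
t' = (b >>> l >>> b >>> b >>> l >>> l >>> l) id

main :: IO ()
main = print (t == t')  -- prints True
```

Each `b` adds a pending subtree argument to the continuation and each `l` discharges one, which is why the bracket-free composite type-checks.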
08 April 2024, 14.00
Reading group: Logical Foundations of Cyber-Physical Systems, by A.Platzer
Reading Chapter 5. Exercises: 5.2, 5.3, 5.9. Leads: Son & Katya.
Zoom: https://zoom.us/j/91579438979?pwd=cEVuUXA5Y1NRQ1RVaHZ2R1dxMmMyUT09
If attending at Southampton, please join Building 53, Room 4039.

January — March 2024

08 January 2024, 10 am
Reading group: Logical Foundations of Cyber-Physical Systems, by A.Platzer
Read Appendix 3.6, complete Exercise 3.9. Exercise lead: Katya
Zoom: https://zoom.us/j/91579438979?pwd=cEVuUXA5Y1NRQ1RVaHZ2R1dxMmMyUT09
If attending at Southampton, please join Building 53, Room 4039.
22 January 2024, 10 am
Reading group: Logical Foundations of Cyber-Physical Systems, by A.Platzer
Read Chapter 4, propose exercises of interest at the end of the chapter (the choice of exercises will be discussed at the seminar). Chapter lead: Yuhui Lin
Zoom: https://zoom.us/j/91579438979?pwd=cEVuUXA5Y1NRQ1RVaHZ2R1dxMmMyUT09
If attending at Southampton, please join Building 53, Room 4039.
12 February, 10 am
Reading group: Logical Foundations of Cyber-Physical Systems, by A.Platzer
Exercises 4.8, 4.9, 4.10. Leads: Ben and Colin. Extra exercise: model a non-trivial CPS. Lead: Mark.
Zoom: https://zoom.us/j/91579438979?pwd=cEVuUXA5Y1NRQ1RVaHZ2R1dxMmMyUT09
If attending at Southampton, please join Building 53, Room 4039.
4 March 2024
14.00
Meven Lennon-Bertrand,  CLASH group at the University of Cambridge
Title: Towards a certified type theoretic kernel
Abstract: Proof assistant kernels are a natural target for program certification: they are small, critical, and well-specified. Still, despite the maturity of type theory and software verification, we are yet to see a certified Agda, Coq or Lean. In this talk, I will give an overview of the current landscape around this goal, trying to explain why it is still evading us, and presenting recent developments in the area, spearheaded by the MetaCoq project.

If attending at Heriot-Watt, please join at CMT01.
06 March 2024
SPLS seminar at University of St Andrews: https://scottish-pl-institute.github.io/spls/meetings/2024/march/
12 March 2024, 14.00
Reading group: Logical Foundations of Cyber-Physical Systems, by A.Platzer
Exercises 4.8, 4.9, 4.10. Lead: Son.
Zoom: https://zoom.us/j/91579438979?pwd=cEVuUXA5Y1NRQ1RVaHZ2R1dxMmMyUT09
If attending at Southampton, please join Building 53, Room 4039.
28 March 2024, 11.00
Reading group: Logical Foundations of Cyber-Physical Systems, by A.Platzer
Exercise 4.10. Reading Chapter 5. Leads: Son & Katya.
Zoom: https://zoom.us/j/91579438979?pwd=cEVuUXA5Y1NRQ1RVaHZ2R1dxMmMyUT09
If attending at Southampton, please join Building 53, Room 4039.
October — December 2023

04 October 2023, 11am
LAIV + DSG Round Table Session
11 October 2023, 11am
Filip Sieczkowski, Heriot-Watt University
Title: Generalized algebraic data types and how to interpret them
Abstract: Generalized algebraic data types are a common feature of modern functional programming languages, and a cornerstone of pseudo-dependently-typed programming in Haskell. In this talk I will explain how they work, how we can encode them using type-equalities with injectivity and discriminability rules, and what problems they pose for semantic interpretation. I will also show how these problems can be overcome, and how to construct a logical relation that can account for the equality constraints between types.
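As a point of reference for the talk (a textbook example, not Filip's material), the following GADT shows how constructors can constrain their result type, so the evaluator needs no runtime tags:

```haskell
{-# LANGUAGE GADTs #-}

-- A classic GADT: each constructor records a type equality about `a`,
-- so `eval` is total and tag-free.
data Expr a where
  IntE  :: Int  -> Expr Int
  BoolE :: Bool -> Expr Bool
  Add   :: Expr Int -> Expr Int -> Expr Int
  If    :: Expr Bool -> Expr a -> Expr a -> Expr a

eval :: Expr a -> a
eval (IntE n)   = n
eval (BoolE p)  = p
eval (Add x y)  = eval x + eval y
eval (If c t e) = if eval c then eval t else eval e

main :: IO ()
main = print (eval (If (BoolE True) (Add (IntE 1) (IntE 2)) (IntE 0)))  -- prints 3
```

The equality constraints that the talk interprets semantically are exactly the ones GHC records in each constructor (e.g. `a ~ Int` for `IntE`).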
19 October, 16.00 Reading group: Logical Foundations of Cyber-Physical Systems, by A.Platzer
Chapter 1, Chapter 2 Sections 2.1-2.4.
Zoom: https://zoom.us/j/91579438979?pwd=cEVuUXA5Y1NRQ1RVaHZ2R1dxMmMyUT09
If attending at Heriot-Watt, please join at CMF17.
27 October 2023, 9.00-17.30
Workshop on Safe and Robust Machine Learning https://laiv.uk/workshop-on-safe-and-robust-machine-learning/
30 October 2023,
10-11
Reading group: Logical Foundations of Cyber-Physical Systems, by A.Platzer
Chapter 2 Sections 2.4-2.6. Exercises 2.6 and 2.9. Leading: Katya and Colin
Zoom: https://zoom.us/j/91579438979?pwd=cEVuUXA5Y1NRQ1RVaHZ2R1dxMmMyUT09
If attending at Heriot-Watt, please join at CMF17.
8 November 2023,
11am
Craig Ramsay, Heriot-Watt University
Title: HAFLANG’s Foundation for Hardware Graph Reduction
Abstract: Implementations of general purpose functional languages overwhelmingly target CPUs. This choice is despite the semantic gap between their high-level execution model and CPU assembly. Meanwhile, advances in FPGAs (a prototyping platform for digital circuits) continue to improve the density of *custom* hardware resources offered to wide audiences. We consider this a call to reinvestigate *hardware* implementations of functional languages. We present “Heron”, a special purpose processor core for pure, non-strict functional languages. We co-design its language semantics and hardware design, aiming for a good performance-per-Watt metric. We compare our current single-core implementation to a variety of stock CPUs, and open a discussion around our future topics of hardware garbage collection and multi-core implementation.
13 November 2023,
10-11
Reading group: Logical Foundations of Cyber-Physical Systems, by A.Platzer
Chapter 2 Sections 2.7-2.8 (Appendix sections as optional). Exercise 2.9. Leading: Katya and Ben
Online participation is via Zoom: https://zoom.us/j/91579438979?pwd=cEVuUXA5Y1NRQ1RVaHZ2R1dxMmMyUT09
If attending at Heriot-Watt, please join at CMF17. If attending at SOTON, Building 53, Room 4039.
27 November 2023, 10-11
Reading group: Logical Foundations of Cyber-Physical Systems, by A.Platzer
Chapter 2 Sections 2.7.3 and 2.8, Chapter 3 Sections 3.1 – 3.2.3. Exercises 2.2 and 2.9. Leading: Katya, Ben, Sergio.
Online participation is via Zoom: https://zoom.us/j/91579438979?pwd=cEVuUXA5Y1NRQ1RVaHZ2R1dxMmMyUT09
If attending at Heriot-Watt, please join at CMF17. If attending at SOTON, Building 53, Room 4039.
11 December 2023, 10-11
Reading group: Logical Foundations of Cyber-Physical Systems, by A.Platzer
Chapter 3 Sections 3.2 – 3.5. Exercises: Install KeYmaera X; propose exercises from the end of Chapter 3 (for next time). Leading: Sergio.
Online participation is via Zoom: https://zoom.us/j/91579438979?pwd=cEVuUXA5Y1NRQ1RVaHZ2R1dxMmMyUT09
If attending at Heriot-Watt, please join at CMF17. If attending at SOTON, Building 53, Room 4039.
13 December 2023, 11am
Oleksandr Letychevskyi, Heriot-Watt University
Title: Algebraic modeling and its applications
Abstract: Oleksandr Letychevskyi presents research on the use of algebraic modeling in various subject areas and its combination with AI methods. An overview of the theoretical foundations of algebraic modeling and its generalization, namely the theory of insertion multilevel modeling, will be presented. Examples of the application of the technology in cyber security, biology, blockchain, and software/hardware verification will be considered, as well as some results obtained by combining AI methods with the algebraic approach.

May — July 2023

03 May 2023, 1.30pm
Jan van Brügge, LAIV, HWU (PhD Progression Talk)
Title: New datatypes for Isabelle – Binder-aware reasoning made (somewhat) easy
Abstract: During the last 1.5 years I’ve implemented a generalization of Isabelle’s datatypes. With this generalization we can express datatypes that bind variables in their constructors and derive better induction principles for them.
In this talk I will show how the package looks from a user’s perspective: how to define such a datatype and how to reason about it.
10 May 2023, 1.30pm
Gabriela Ochoa, University of Stirling
Title: Neuroevolution Landscapes and Trajectories
Abstract: Neuroevolution, the use of evolutionary algorithms to design neural networks, has a long tradition in evolutionary computation with roots in the early 1990s. Traditionally, neuroevolution systems optimise both the neural network topology and its weights. However, when scaling up to contemporary deep models with millions of weights, gradient-based weight optimisation generally outperforms evolutionary methods. In consequence, many recent methods evolve the topology only. This approach is also known as Neural Architecture Search (NAS). This talk overviews our recent work on modelling neuroevolution and NAS systems with search trajectory networks (STNs) and local optima networks (LONs) with the aim of providing a visual and quantitative understanding of search and optimisation in this domain.
17 May 2023, 2pm
*note the non-standard time
Pierre Olivier, University of Manchester
Title: Compatibility and Isolation in Specialised Operating Systems
Abstract: Unikernels are modular and specialised operating systems running an application in a lightweight and single-purpose virtual machine. They offer many benefits in terms of lightweightness, performance, and security. In this talk, after a general introduction to unikernels and a brief presentation of the main research avenues in the field, I will present two particular contributions. The first is in the domain of compatibility and concerns HermiTux, a unikernel binary-compatible with Linux applications. The second relates to security and introduces FlexOS, an operating system whose safety and isolation strategy is seamlessly specialisable at build time, allowing it to achieve a wide range of safety/performance trade-off points in the OS design space.

This will be an online-only session via: https://zoom.us/j/95059599181
31 May 2023
2pm
*note the non-standard time
Ohad Kammar, University of Edinburgh
Title: Frex: staged-optimisation and equational-proof-synthesis using universal algebra
Abstract: A recurring task in program generation involves developing data-structures representing semantically-distinct code fragments. We can stage an optimised version of the original program off of its representation. Moreover, we may want to extract the detailed steps in the equational proof that the representation is equivalent to its source. These representations are known as partially-static data structures and semantic normal-forms. In recent and ongoing work, we specify these data structures using universal algebraic concepts, the free algebra and its less familiar sibling the free extension.

In this talk, I’ll review our application of free extensions to staged-optimisation and proof-synthesis. Depending on audience participation, I’ll either delve further into applications, give a tutorial on designing data structures based on free extensions, or sketch future directions.
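To give a flavour of partially-static data structures (a minimal hand-rolled example, not Frex itself), here is a representation of integer sums in which all static parts are normalised into a single constant, so the residual staged code mentions only the dynamic variables:

```haskell
-- A partially-static representation of integer sums: the static part is
-- folded into one constant; dynamic parts are kept symbolically.
data PS = PS Int [String] deriving Show

sta :: Int -> PS          -- a statically known value
sta n = PS n []

dyn :: String -> PS       -- a dynamic (runtime) variable
dyn x = PS 0 [x]

padd :: PS -> PS -> PS    -- addition normalises the static parts eagerly
padd (PS m xs) (PS n ys) = PS (m + n) (xs ++ ys)

-- "Stage" the residual expression: all static work is already done.
stage :: PS -> String
stage (PS n xs) = foldr (\x acc -> x ++ " + " ++ acc) (show n) xs

main :: IO ()
main = putStrLn (stage (padd (dyn "x") (padd (sta 3) (padd (dyn "y") (sta 4)))))
  -- prints: x + y + 7
```

The free-extension perspective explains why this works for any monoid, not just integer addition.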
07 June 2023
SPLS seminar at Paisley University: https://spls-series.github.io/meetings/2023/february/
14 June 2023 2pm
*note the non-standard time
Naomi Saphra, NYU
Title: Interpreting Training
Abstract: Interpretability research in NLP often follows a predictable pattern—pick an indicator of structure or knowledge such as probe or challenge set accuracy, measure that indicator in a fully trained model, and assert that this structure or information is integral to how the model functions. However, we can achieve a much deeper understanding by considering how these indicators emerge from the training process. First, this talk will discuss research on the relationship between interpretable generalization behavior and the presence of multiple basins on the loss landscapes of finetuned text classifiers. Then, I will describe how manipulating interpretable behaviors during the training process can shed light on the role of syntactic signals in attention distributions and generally on how an optimizer’s bias towards early-learned and simple strategies both helps and hurts language model performance. These results form the basis of a manifesto for exploring developmental explanations when researching interpretability and generalization behavior.

Naomi Saphra is a postdoctoral researcher at NYU with Kyunghyun Cho. They are interested in NLP training dynamics: how models learn to encode linguistic patterns or other structure and how we can encode useful inductive biases into the training process. Previously, they earned a PhD from the University of Edinburgh on Training Dynamics of Neural Language Models, worked at Google and Facebook, and attended Johns Hopkins and Carnegie Mellon University. Outside of research, they play roller derby under the name Gaussian Retribution, do standup comedy, and shepherd disabled programmers into the world of code dictation.

This will be an online-only session via: https://zoom.us/j/95059599181
15 June 2023 11am
*note the non-standard date and time
John Power, AISEC academic visitor.
Title: A Category Theoretic Analysis of Binders
Abstract: In the late 1990’s, Fiore, Plotkin and Turi proposed a category theoretic model for binders. Instead of having a set of terms, they proposed a presheaf of terms over F, the category of finite sets, which provided cartesian contexts. They used a substitution monoidal structure on the category [F,Set] and characterised the notion of term in this setting by combining the notion of monoid in the substitution monoidal structure with the usual free generation of terms. Their modelling of binder was derived from that of substitution. A little later, Tanaka, following their lead, modelled linear binders, replacing F by P, the category of finite sets and permutations. These are instances of an axiomatic situation, where one has a 2-monad S on Cat, with a pseudo-distributive law, subject to a caveat about size, of S over what, except for size, would be a pseudo-monad [-^op,Set]. Tanaka and I developed the axiomatics in a series of papers that I propose to explain. The same mathematics was also used by Martin Hyland and several other authors in modelling general notions of algebra.
15 June 2023 1pm
*note the non-standard date and time
Jamie Gabbay, HWU
Title: Equivariance: the name of the true monster in your induction
Abstract: I will argue that the essential difficulty with handling name-binding is to focus on permutations.
The mistake we have made (myself included) has been to dismiss permutations as an easy macro, since on toy examples permutations look simple. They are not. Permutation actions — and their dual property of equivariance — are the true reason that getting names and binding right is hard, and we need to admit this and address the issue head-on.

My talk will be mostly based on this paper:

https://doi.org/10.1093/logcom/exz015
https://arxiv.org/abs/1801.09443
A Haskell package based on these ideas is here:
https://hackage.haskell.org/package/nom
A prototype Agda library also based partly on these ideas (joint work with Orestis Melkonian; he wrote the Agda code), is sketched here:
https://omelkonian.github.io/data/publications/nominal-agda-short.pdf
28 June 2023, 2pm
Remi Desmartin, LAIV, HWU (PhD Progression Talk)
Title: Towards a Verified Proof Checker for Deep Neural Network Verification
Abstract: Recent developments in deep neural networks (DNNs) have led to their adoption in safety-critical systems, which in turn has heightened the need for guaranteeing their safety. These safety properties of DNNs can be proven using tools developed by the verification community. However, these tools are themselves prone to implementation bugs and numerical stability problems, which make their reliability questionable. To overcome this, some verifiers produce proofs of their results which can be checked by a trusted checker.
In this talk, I will present ongoing work on a novel implementation of a proof checker for DNN verification. It improves on existing implementations by offering numerical stability and greater verifiability. To achieve this, we leverage two key capabilities of Imandra, an industrial theorem prover: its support of infinite precision real arithmetic and its formal verification infrastructure. So far, we have implemented a proof checker in Imandra, specified its correctness properties and started to verify the checker’s compliance with them. Our ongoing work focuses on completing the formal verification of the checker and further optimising its performance.
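The role of infinite-precision arithmetic is easy to illustrate (in plain Haskell here, not Imandra): an exact equality test that fails in floating point succeeds over the rationals, which is precisely the kind of numerical-stability pitfall an exact-arithmetic proof checker avoids:

```haskell
import Data.Ratio ((%))

-- Exact rational arithmetic avoids the rounding errors that make
-- floating-point proof checking unreliable.
main :: IO ()
main = do
  print (0.1 + 0.2 == (0.3 :: Double))            -- False: rounding error
  print (1 % 10 + 2 % 10 == (3 % 10 :: Rational)) -- True: exact
```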

January — April 2023

10 January 2023, 1 pm
*Note non-standard date
Henning Basold, Leiden University
Title: Guarded Recursion for Coinductive, Higher-Order Stochastic Systems
Abstract: In stochastic modelling, as it appears in artificial intelligence, biology or physics, one often encounters the question of how to model a probabilistic process that may run indefinitely. Such processes have been described in non-probabilistic programming successfully by using coinductive types and coiteration.

In this talk, we will discuss a language and its semantics for stochastic modelling of processes that run potentially indefinitely, like random walks, and for distributions that arise from indefinite runs, like geometric distributions. Such processes and distributions will be described as coinductive types and programmed via so-called guarded recursion, which ensures that the programs describing random walks etc. are terminating. Central to guarded recursion is the so-called later modality, which allows one to express that the computation of an element of a type is delayed. This allows one to use type-checking to ensure termination instead of, e.g., syntactic guardedness. This eases reasoning about such processes, while the recursion offered by the language is still convenient to use.

The language first appeared in the paper “Coinduction in Flow — The Later Modality in Fibrations” and its sampling semantics was implemented (with some extensions) in https://github.com/Rintse/ghopfl
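The later modality can be caricatured in plain Haskell (a caricature only: Haskell's types cannot enforce guardedness, whereas in a guarded type theory the typechecker does):

```haskell
-- `Later a` marks data that is only available "one step from now".
newtype Later a = Later a

-- A coinductive stream whose tail sits under the later modality.
data Stream a = Cons a (Later (Stream a))

-- Productive by construction: the recursive call is guarded by Later.
from :: Int -> Stream Int
from n = Cons n (Later (from (n + 1)))

takeS :: Int -> Stream a -> [a]
takeS n _ | n <= 0            = []
takeS n (Cons x (Later xs))   = x : takeS (n - 1) xs

main :: IO ()
main = print (takeS 3 (from 0))  -- prints [0,1,2]
```

In the talk's setting the stream elements would be distributions, and guardedness guarantees that sampling a finite prefix terminates.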
18 January 2023
1.45pm
Jan van Brügge, Heriot-Watt University
Title: A New Way to Formalizing Binders in Isabelle/HOL
Abstract: Proofs about programming languages always need to deal with equality of terms that bind variables. In pen-and-paper proofs it is usually implicitly assumed that bound variables can always be renamed to be different from free variables. In a theorem prover however this fact has to be established from first principles. There are several approaches to formalizing terms with binders, each with their advantages and disadvantages. In this talk I will present a short overview over those techniques followed by a detailed account of how to use a special class of functors to derive a semantic account of binding. The advantages of this approach include support for easy composition of datatypes with binders and for infinite codatatypes. At the end I will discuss the current progress of the proof automation in Isabelle/HOL and a proof of progress and preservation of the simply typed lambda calculus using said automation.
25 January 2023
1.30pm
Jeremy Singer, University of Glasgow
Title: How sweet is CHERI malloc? Profiling runtime memory management overhead on the Arm Morello platform
Abstract: In this talk I will briefly introduce the CHERI concept of hardware capabilities, then consider the impact on the design of dynamic memory management frameworks. I will present benchmark performance results for a range of runtime memory managers and C/C++ benchmarks, showing that CHERI provides better security but may incur significant performance costs in some circumstances.
2 February 2023
1pm
CMT01
*note non-standard schedule and venue
Jonas Latz, Heriot-Watt University
Title: Gradient flows in machine learning and their stochastic approximation
Abstract: Scientists and engineers are often interested in mathematically describing the behaviour of a system of interest over time. Many of these systems implicitly proceed by optimising some quantity, e.g., the temperature is changing in a solid body until the integral of the Euclidean norm of the spatial temperature gradient is minimised. Those optimisation routines follow gradient flows — the natural, continuous-time version of the well-known gradient descent technique. 
In this talk, we study optimisation problems appearing in machine learning and data science through the lens of gradient flows. We then proceed to study stochastic approximations of those gradient flows, to finally arrive at a continuous-time version of the celebrated stochastic gradient descent method.
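The deterministic half of this link can be seen in a few lines (a toy example, not from the talk): explicit Euler discretisation of the gradient flow x'(t) = -f'(x(t)) is exactly gradient descent, and for f(x) = x^2/2 the iterates track the analytic solution x(t) = x0 * exp(-t):

```haskell
-- Gradient descent as the explicit Euler discretisation of the gradient
-- flow x'(t) = -f'(x(t)) with step size h: x_{k+1} = x_k - h * f'(x_k).
gradDescent :: (Double -> Double) -> Double -> Double -> Int -> Double
gradDescent f' h x0 n = iterate step x0 !! n
  where step x = x - h * f' x

main :: IO ()
main = do
  let f' x = x                                  -- derivative of f x = x^2 / 2
      approx = gradDescent f' 0.001 1.0 1000    -- 1000 Euler steps up to t = 1
  print approx                                  -- close to exp (-1) ~ 0.3679
```

Replacing `f'` by a noisy estimate of the gradient gives stochastic gradient descent, whose continuous-time limit is the stochastic approximation studied in the talk.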
8 February 2023
1.30pm
Akilan Selvacoumar, Heriot-Watt University, PhD Progression Talk
Title: Combining TAG based architectures with (Uni-kernels and Multi-kernels)
Abstract: This talk is based on my year 1 progression report, which focuses on combining 3 major areas:

– Using TAG based architectures for executing security policies and speeding up dynamic type checkers.
– Using Uni-kernels for running applications on a slimmed-down, single address space operating system.
– Using Multi-kernels for running an application on heterogeneous multi-core systems.

Based on this background, the next research aims for my PhD will be discussed.
Links:
– Year 1 report: https://github.com/Akilan1999/phd-thesis/releases/download/Year1/thesis-year1.pdf 
20 February 2023
4pm
*note non-standard date and time
Vedran Čačić, University of Zagreb
Title: New approach to New Foundations (with Urelements)
Abstract: Most mathematicians use, at least informally, ZFC as the foundation of their mathematics. On the other hand, most computer scientists use type theory. Despite the efforts of people like T. Altenkirch, it is unlikely that either community is going to convince the other to switch, and there are real disadvantages to not having a common foundation. (Doing a lot of mathematics twice is one of them.)

We propose that there is a middle way: a set theory based on Quine’s New Foundations, with an almost perfect division of work between humans (working at the object level, where we have sets in the usual sense) and computers (working at the specification level, where we have formulas with special decidable properties ensuring that they denote sets via comprehension). It might not be the ultimate answer, but it is in our opinion a good candidate. We intend to explore the possibilities it offers, and would like to invite everybody sharing our enthusiasm to join us in the journey.
24 February 2023, 1.30 pm (moved from 01 March)
*note the non-standard day
Omri Isac, the Katz lab, Hebrew University of Jerusalem
Title: Neural Network Verification with Proof Production
Abstract: There are currently multiple techniques and tools for verifying Deep Neural Networks (DNNs). When DNN verifiers discover an input that triggers an error, that is easy to confirm; but when they report that no error exists, there is no way to ensure that the verification tool itself is not flawed. As multiple errors have already been observed in DNN verification tools, this calls the applicability of DNN verification into question. In this talk, I will present a novel mechanism for enhancing Simplex-based DNN verifiers with proof production capabilities. We will discuss how to create such a mechanism based on an efficient adaptation of the well-known Farkas’ lemma, combined with mechanisms for handling piecewise-linear functions and numerical precision errors. We will conclude by discussing how to use the mechanism in order to improve the performance of DNN verifiers, using conflict clauses.
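To convey the flavour of such a certificate (a simplified sketch, not the talk's actual mechanism): by Farkas' lemma, a nonnegative vector y with yᵀA = 0 and yᵀb < 0 witnesses that the linear system Ax ≤ b is infeasible, and checking a claimed y is elementary (here exact, rational) linear arithmetic:

```haskell
import Data.List (transpose)

type Vec = [Rational]
type Mat = [Vec]  -- a matrix as a list of rows

dot :: Vec -> Vec -> Rational
dot u v = sum (zipWith (*) u v)

-- y certifies infeasibility of { x | A x <= b } iff
-- y >= 0, y^T A = 0 and y^T b < 0 (Farkas' lemma).
checkFarkas :: Mat -> Vec -> Vec -> Bool
checkFarkas a b y =
     all (>= 0) y
  && all (== 0) [dot y col | col <- transpose a]
  && dot y b < 0

-- Example: x <= 0 and -x <= -1 are jointly infeasible; y = [1,1] proves it.
main :: IO ()
main = print (checkFarkas [[1], [-1]] [0, -1] [1, 1])  -- prints True
```

The verifier does the hard search for y; the checker only replays cheap arithmetic, which is what makes the trusted base small.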
8 March 2023
Scottish Programming Language Seminar (SPLS) at Heriot-Watt University.
15 March 2023
LAIV + DSG meeting – no seminar.
29 March 2023
1.30pm
Michel Steuwer, University of Edinburgh
Title: On bringing a functional pearl into practice: An MLIR-based implementation of the strategy language ELEVATE
Abstract: In 2020, we proposed a new language, ELEVATE, for describing compiler optimisations as strategies, i.e. compositions of rewrite rules. We demonstrated that this language works really well for optimising computations expressed in our functional data-parallel language, RISE. Our paper was recognised as a SIGPLAN research highlight, praising the combination of “elegance” with “state-of-the-art performance”, but nobody in practice is using ELEVATE. Why? Because they don’t know how to implement our nice ideas in a practical setting.

Therefore, we now aim to bring ELEVATE to the real world, i.e., to the MLIR compiler framework. As it turns out, the real world is quite different to our functional academic setting. In this talk, we report on our ongoing efforts to design a practical rewrite system for MLIR that maintains the nice properties that ELEVATE provides. We will talk about the different thinking required when rewriting functional expressions vs. programs represented as SSA-based graphs. We will discuss the challenges of implementing composable rewrites in MLIR and give an outlook over the possibilities opened up by a practical implementation of ELEVATE.
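The core idea of strategies as first-class, composable rewrites fits in a few lines (a toy reconstruction in Haskell, not ELEVATE's actual API; the combinator names are illustrative):

```haskell
import Control.Monad ((>=>))

data Expr = Lit Int | Add Expr Expr | Mul Expr Expr deriving (Eq, Show)

-- A strategy is a rewrite that may fail.
type Strategy = Expr -> Maybe Expr

-- Sequential composition and left-biased choice of strategies.
seqS, lChoice :: Strategy -> Strategy -> Strategy
seqS s t = s >=> t
lChoice s t e = maybe (t e) Just (s e)

-- Two rewrite rules, each expressed as a strategy.
mulToAdd, constFold :: Strategy
mulToAdd (Mul x (Lit 2)) = Just (Add x x)
mulToAdd _               = Nothing
constFold (Add (Lit m) (Lit n)) = Just (Lit (m + n))
constFold _                     = Nothing

main :: IO ()
main = do
  print ((mulToAdd `seqS` constFold) (Mul (Lit 3) (Lit 2)))    -- Just (Lit 6)
  print ((constFold `lChoice` mulToAdd) (Mul (Lit 3) (Lit 2))) -- Just (Add (Lit 3) (Lit 3))
```

Part of the challenge the talk describes is transplanting exactly this compositional style from tree-shaped functional terms to MLIR's SSA-based program graphs.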
5 April 2023
1.30pm
Nir Oren, Bruno Yun, University of Aberdeen
Title: Explainability via argumentation and dialogue
Abstract: Foundational research in argumentation theory investigates both how arguments are constructed, and what conclusions can be drawn from conflicting sets of arguments. Due to its intuitive underpinnings, it is often claimed that humans can easily understand argument-based reasoning, offering promise as an explainable AI technique, particularly when coupled with dialogue. In this talk we present three case studies where argumentation supports explainable AI, namely in the context of expert decision-making; the explanation of different types of plan-based reasoning; and in explaining Nash equilibria to non-expert users.
12 April 2023
1.30pm
Natalia Ślusarz, LAIV, HWU (PhD Progression Talk)
Title:  Logic of Differentiable Logics: Towards a Uniform Semantics of DL
Abstract: Differentiable logics (DLs) are a method of training neural networks to satisfy logical specifications. A DL consists of a syntax in which specifications are stated and an interpretation function that translates expressions in the syntax into loss functions. These loss functions can then be used during training with standard gradient descent algorithms. The variety of existing DLs and the differing levels of formality with which they are treated makes a systematic comparative study of their properties and implementations difficult.

In this talk I will suggest a meta-language for defining DLs that we call the Logic of Differentiable Logics, or LDL. Syntactically, it generalises the syntax of existing DLs to First Order Logic, and for the first time introduces the formalism for reasoning about vectors and learners. Semantically, it introduces a general interpretation function that can be instantiated to define loss functions arising from different existing DLs. I will then use it to establish several theoretical properties of existing DLs, and to conduct their empirical study.
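The shape of such an interpretation function can be sketched as follows (a miniature DL in the style of product-based translations such as DL2, not the LDL semantics itself): formulas are compiled to non-negative losses that vanish exactly when the formula is satisfied:

```haskell
-- A miniature differentiable logic: comparisons and connectives are
-- interpreted as a loss that is 0 exactly when the formula holds.
-- (This particular translation is one of several studied in the literature.)
data Prop = Leq Double Double | And Prop Prop | Or Prop Prop

loss :: Prop -> Double
loss (Leq x y) = max 0 (x - y)  -- satisfied iff x <= y
loss (And p q) = loss p + loss q
loss (Or p q)  = loss p * loss q

main :: IO ()
main = do
  print (loss (Leq 0.3 0.5))                  -- 0.0: the constraint holds
  print (loss (And (Leq 0.3 0.5) (Leq 2 1)))  -- 1.0: penalty for 2 <= 1
```

LDL generalises the syntax on the left of such a definition and makes the choice of interpretation on the right a parameter, which is what enables a uniform comparison of DLs.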
19 April 2023
2pm
*note the non-standard time
Marco Casadio, LAIV, HWU (PhD Progression Talk)
Title:  Verification of NLP
Abstract: Verification of machine learning models used in Natural Language Processing (NLP) is known to be a hard problem. In particular, many known neural network verification methods that work for computer vision and other numeric datasets do not work for NLP. In this talk, I will explore technical reasons that underline this problem and, based on this analysis, I will propose practical methods and heuristics for preparing NLP datasets and models in a way that renders them amenable to known verification methods based on abstract interpretation. I will then report the evaluation of these methods using the NLP dataset R-U-A-Robot suggested as a benchmark for verifying legally critical NLP applications and a medical dataset for safety critical applications.
26 April 2023
1.30pm
Jamie Gabbay, Heriot-Watt University
Title: Semitopologies and permissionless consensus
Abstract: A distributed system is permissionless when participants are free to leave and join it at any time.  This raises the question of what consensus even means for such a system.  For example, a naive definition that “the system is in consensus when a majority of participants agree” is just not helpful in a permissionless context, since the very notion of total system size is neither fixed nor necessarily known to participants.  This problem turns out to be a nice mathematical opportunity: it leads to a topology-like structure which we call semitopologies — like topologies, but without the condition that intersections of open sets be open.  Consensus gets identified with topological continuity; “closeness” of points is identified with an anti-Hausdorff separation axiom; and computing a consensus state is identified with iterating to build a topological closure.  I find the maths lovely, and just as nice is how closely practical considerations and theory seem to match up.  Full details and proofs are in a preprint here: https://arxiv.org/abs/2303.09287

September — November 2022

9 September, 2pm
* Note non-standard date and time
Remi Desmartin – LAIV
Title: CheckINN: Wide Range Neural Network Verification in Imandra. A practice talk ahead of the 24th International Symposium on Principles and Practice of Declarative Programming PPDP’22. Joint work with Ekaterina Komendantskaya, Grant Passmore, Matthew Daggitt
Abstract: Neural networks are increasingly relied upon as components of complex safety-critical systems such as autonomous vehicles. There is high demand for tools and methods that embed neural network verification in a larger verification cycle. However, neural network verification is difficult due to a wide range of verification properties of interest, each typically only amenable to verification in specialised solvers. In this paper, we show how Imandra, a functional programming language and a theorem prover originally designed for verification, validation and simulation of financial infrastructure, can offer a holistic infrastructure for neural network verification. We develop a novel library CheckINN that formalises neural networks in Imandra, and covers different important facets of neural network verification. For the full paper see https://arxiv.org/abs/2207.10562
14 September 1pm
21 September 1pm
Highlights of FLOC’22, by Luca Arnaboldi, Natalia Slusarz, Marco Casadio
Abstract: We will discuss the best FLOC contributions related to AI verification.
26-27 SeptemberLAIV goes to MPC’22 — the International Conference on Mathematics of Program Construction: https://www.macs.hw.ac.uk/mpc22/
28 September 1pm
5 October 1pmMuhammad Najib, Heriot-Watt University
Title: Examining Games and a Way to Repair Them
Abstract: The classical notion of correctness in formal verification is not appropriate for multi-agent systems: it does not capture the strategic behaviour of rational agents. As such, a different concept of correctness was proposed in rational verification. To prove such correctness, we turn multi-agent systems into multi-player games and use game-theoretic techniques to analyse them. In our work, we study various kinds of games with various goals and settings, and provide algorithmic techniques to solve problems related to rational verification. We also propose a method to “repair” games should we find them “broken”.
12 October 1pmAISEC Workshop
19 October 1pm
Michael Akintunde, King’s College London
Title: Verifying strategic ability of neuro-symbolic multi-agent systems
Abstract: Forthcoming autonomous systems are expected to use machine-learning-based components, such as neural networks, to implement their perception functions. If machine learning components are to be used in safety-critical systems, it is essential that they are verified and validated before deployment. Most recent work has focused either on verifying a neural network in isolation, or on simple, predictable, single-agent systems with fully observable environments resulting in linear traces.
In this talk I will discuss methods to examine more realistic situations where such autonomous systems would be typically deployed, where we may have multiple interacting agents with partial observability of the environment. Specifically, I introduce the notion of a neural-symbolic agent which is equipped with a neural network-based perception unit and a symbolic controller, and cover techniques used to verify the outcomes of the strategic interplay of groups of such agents. I will detail the results of evaluating the novel toolkit VENMAS (Verification of Neural Multi-Agent Systems) on a multi-agent variant of a complex prototype for air-traffic collision avoidance and outline current research directions.
26 October 1pm
Scottish Programming Language Seminar, SPLS, in Stirling University. To attend please visit the page: https://spls-series.github.io/meetings/2022/october/
02 November 1pmJurriaan Hage, Heriot-Watt University
Title: Heuristics-based Type Error Diagnosis for Haskell — The case of type families
Abstract: Helium is a Haskell compiler designed to provide programmer-friendly type error messages. It employs specially designed heuristics that work on a type graph representation of the type inference process.
To support (all flavours of) type families in Helium, we extend Helium’s type graphs in order to design and implement heuristics that can diagnose type errors caused by mistakes and misunderstandings involving type families. To increase programmer understanding, we also include type family execution traces as part of the type error diagnostics.
09 November 1 pmIeva Daukantas, Alessandro Bruni, Carsten Schürmann, IT-University of Copenhagen
Title: Trimming Data Sets: a Verified Algorithm for Robust Mean Estimation
Abstract: The operation of trimming data sets is heavily used in AI systems. Trimming is useful to make AI systems more robust against adversarial or common perturbations. At the core of robust AI systems lies the concept that outliers in a data set occur with low probability, and therefore can be discarded with little loss of precision in the result. The statistical argument that formalizes this concept of robustness is based on an extension of Chebyshev’s inequality first proposed by Tukey in 1960.
In this paper we present a mechanized proof of robustness of the trimmed mean algorithm, which is a statistical method underlying many complex applications of deep learning. For this purpose we use the Coq proof assistant to formalize Tukey’s extension to Chebyshev’s inequality, which allows us to verify the robustness of the trimmed mean algorithm. Our contribution shows the viability of mechanized robustness arguments for algorithms that are at the foundation of complex AI systems.
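The trimmed mean algorithm itself is simple to state; the paper’s contribution is the mechanized robustness proof, not the code. A minimal, unverified Python sketch (my own illustrative encoding):

```python
def trimmed_mean(xs, fraction=0.1):
    """Discard the lowest and highest `fraction` of the sorted data,
    then average what remains. Outliers (low-probability extremes, in
    the Chebyshev/Tukey sense) are removed before averaging."""
    if not 0 <= fraction < 0.5:
        raise ValueError("fraction must lie in [0, 0.5)")
    xs = sorted(xs)
    k = int(len(xs) * fraction)          # number trimmed from each end
    kept = xs[k:len(xs) - k] if k else xs
    return sum(kept) / len(kept)

data = [1.0, 2.0, 2.5, 3.0, 100.0]       # 100.0 is an adversarial outlier
print(trimmed_mean(data, 0.2))           # drops 1.0 and 100.0 -> 2.5
```

Note how the plain mean of this data (21.7) is dominated by the single outlier, while the trimmed mean is not.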
09 November 3 pmPublic (Inaugural) Lecture by Ekaterina Komendantskaya: Artificial, Human, General, Mathematical: What Kind of Intelligence Will We Trust? To attend, please register: https://www.eventbrite.co.uk/e/what-kind-of-intelligence-will-we-trust-tickets-430854255677. As this is a public lecture, please bring family and friends. There will be refreshments after the lecture.
16 November 1 pmHans-Wolfgang Loidl, Heriot-Watt University
Title: Improving GHC Haskell NUMA Profiling
Abstract: As the number of cores increases, Non-Uniform Memory Access (NUMA) is becoming increasingly prevalent in general-purpose machines. Effectively exploiting NUMA can significantly reduce memory access latency, and thus runtime, by 10-20%, and profiling provides information on how to optimise. Language-level NUMA profilers are rare, and mostly profile conventional languages executing on Virtual Machines (VMs). Here we profile, and develop new NUMA profilers for, a functional language executing on a runtime system (RTS).
23 November, all dayThe National Robotarium in Edinburgh is holding a one-day conference on 23rd November 2022, devoted to the next steps in making autonomous systems more dependable, explainable and deployable.

It is also an opportunity for all participants to get an overview of the new National Robotarium that has just opened in Edinburgh. Dinner and drinks in the evening are sure to be a top event.

Please register by November 6th, 2022.

April — June 2022

DateTopics
1st April 1pm
Mehrnoosh Sadrzadeh – University College London
Title: Coreference resolution with modal substructural logics and vector space semantics
Abstract: In 1958 Lambek proposed the logic of a residuated monoid to reason about the syntax of natural language. This logic was amongst the first substructural logics: it had neither contraction nor weakening, properties that are indeed unnecessary for fixed word order languages. Apart from the difficulty of generalising this logic to free word order languages, such as Sanskrit and Latin, it also failed to model discourse structure, i.e. structure that goes beyond sentential boundaries, as in the simple discourse “John slept. He snored.” In this talk, I will show how endowing Lambek Calculus with controlled modalities overcomes the problem. I will also go through the vector semantics of such logics, show interesting applications to natural language tasks, and describe recent adventures in running these tasks on quantum computers.
EMG61 and online: https://zoom.us/j/96626186133
1st April 2pm F1Tenth Autonomous Driving Seminar
Title: Formal Verification of Hybrid Systems in KeYmaera, by Sinead Donnelly
Abstract: Cyber-Physical Systems operate in both the physical and the cyber world. They operate in many high-risk areas, such as transport and medical devices, and as such their safety and security are of paramount importance. Due to the weaknesses of simulation in ensuring the safety of such systems, it is important that we use formal verification to mathematically prove their correctness. In this talk, I will introduce the concept of hybrid systems, which can model such Cyber-Physical Systems, and demonstrate how they can be proven safe using the KeYmaera software.
EMG61 and online: https://zoom.us/j/96626186133
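As a flavour of what KeYmaera verifies, here is a textbook-style differential dynamic logic safety property for a braking car (a generic example, not necessarily the one in the talk): if the car starts with enough stopping distance and brakes with power $b$, it never passes position $m$.

```latex
% dL safety property: sufficient stopping distance implies x <= m
% throughout braking (x = position, v = velocity, b = braking power).
v^2 \le 2b(m - x) \,\wedge\, b > 0
\;\rightarrow\;
\bigl[\{x' = v,\ v' = -b \ \&\ v \ge 0\}\bigr]\, (x \le m)
```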
22nd April 11amAgda PFLA Reading Group – Week 7 – Negation (https://plfa.inf.ed.ac.uk/Negation/)
EMG61 and online: https://zoom.us/j/96626186133
22nd April 1pmTimo Stadler, OTH Regensburg
Title: AI-based solutions for public transport in rural areas
Abstract: How do I get to the doctor, to the grocery store, or to an appointment? This question is central in rural areas, especially for young people without a driver’s license and older people who no longer want to, or no longer can, drive themselves.
 
In this presentation, we will give an insight into the various research priorities for achieving smart public transport.
In particular, the areas of route planning, setting optimal bus stops, as well as the evaluation of passenger data are covered.
EMG61 and online: https://zoom.us/j/96626186133
29th April 1pmFilip Sieczkowski, Heriot-Watt University
Title: Call-by-value and call-by-name: a simple proof of a classic theorem
Abstract: The fact that lazy evaluation of programs can simulate the eager approach is one of the best-known and most intuitive results of the lambda calculus, and its consequences affect both the principles and practice of functional programming. However, the canonical proof of this result is complex and not easily extended to account for additional language features. In this talk, I will present an account of simulation between the call-by-value and call-by-name lambda calculus that is elementary enough for a first course in semantics, extensible, and easy to formalise. I will also briefly discuss how the choice of semantic formalism influences the proof, in terms of both technical and conceptual complexity.
EMG61 and online: https://zoom.us/j/96626186133
6th May 11am
Agda PFLA Reading Group – Week 8 – Quantifiers (https://plfa.github.io/Quantifiers/)
EMG61 and online: https://zoom.us/j/96626186133
6th May 1 pmWarrick McMillan, Chalmers University.
Title: On the syntax and semantics of voice assistants in autonomous vehicles.
Abstract: In ongoing investigations into the verification of hybrid systems with multiple modalities of interaction, an interesting case arises in the design of a robust voice assistant for autonomous vehicles. This work introduces a Grammatical Framework (GF) grammar for parsing a Controlled Natural Language (CNL), informed by the TOUCHDOWN dataset, capable of giving imperative commands. The Abstract Syntax Trees (ASTs) are then mapped to a commonly found class of Linear Temporal Logic (LTL) formulas which represent sequential ordered visitations, and whose satisfiability can be verified with a model checker once a model of the vehicle and its environment has been established. An Agda formalization of LTL is also developed, demonstrating the feasibility of using a functional programming pipeline in the partial verification of complex systems which incorporate components utilizing machine learning algorithms.
EMG61 and online: https://zoom.us/j/96626186133
13th May 1pm
Marco Casadio, Heriot-Watt University, PhD progression talk
Title: Verification for NLP classification
Abstract: Deep neural networks (DNNs) have shown their great power in addressing challenging problems in various areas, such as computer vision (CV) and natural language processing (NLP). Due to their huge success, systems based on DNNs are widely deployed in the physical world, and their safety and security are a critical matter. Nevertheless, a series of studies have discovered that crafting inputs with imperceptible perturbations can easily deceive DNNs. These modified inputs are called adversarial examples, and they bring potential security threats to DNN-based systems. Through the use of NLP systems, for example, attackers can maliciously propagate disinformation (e.g., rumours) via adversarial texts which bypass rumour detectors. The spread of rumours might affect social stability and cause harm. Thus, effective defence methods need to be devised, and robust models should be developed for the community. While adversarial examples, robustness and verification are widely studied for images, there is far less work on NLP. In this presentation I will first give an overview of problems arising in the verification of NLP-based systems, and then talk about one specific verification technique that I developed in the first year of my PhD.
EMG61 and online: https://zoom.us/j/96626186133
20th May 11am
Agda PFLA Reading Group – Week 9 – Decidability (https://plfa.github.io/Decidable/)
EMG61 and online: https://zoom.us/j/96626186133
20th May 1pmNatalia Slusarz, Heriot-Watt University, PhD progression talk
Title: Properties of loss functions in training neural networks
Abstract: To verify a neural network we often need it to satisfy certain properties. One way to approach this task is through what we call “constraint-based loss functions”: a method of using differentiable logic (DL) to translate logical constraints into loss functions, which can then be used to train the network specifically to satisfy those constraints. To that end, my goal was to take a broader look at loss functions.
Despite how essential the choice of a loss function is, there is no single way to choose one for a given task or to evaluate its suitability. I will discuss how we can identify the most common methods, and give a broad overview of the field. Then I will focus on different approaches to constraint-based loss functions, such as prioritising the mathematical properties of the loss function itself or focusing on the logic side of the translation, and their advantages and disadvantages. This will allow me to discuss, in more general terms, the design space for DL-based loss functions.
EMG61 and online: https://zoom.us/j/96626186133
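One common translation of logical constraints into loss functions, in the style of the DL2 line of work (shown here as an illustrative sketch, not necessarily the talk’s construction), maps comparisons and connectives to non-negative losses that are zero exactly when the constraint holds:

```python
# Illustrative DL2-style translation: each loss is >= 0 and equals 0
# iff the underlying logical constraint holds.
def leq(a, b):            # a <= b
    return max(0.0, a - b)

def conj(l1, l2):         # l1 AND l2
    return l1 + l2

def disj(l1, l2):         # l1 OR l2
    return l1 * l2

# Constraint on a network output y: 0 <= y <= 1
def unit_interval_loss(y):
    return conj(leq(0.0, y), leq(y, 1.0))

print(unit_interval_loss(0.5))   # 0.0: constraint satisfied
print(unit_interval_loss(1.3))   # ~0.3: penalty grows with the violation
```

In an actual training loop, `max` would be replaced by a differentiable framework primitive (e.g. a ReLU) so the loss can be minimised by gradient descent alongside the ordinary task loss.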
27th May 11am
Agda PFLA Reading Group – Week 10 – Decidability (Part II) (https://plfa.github.io/Decidable/)
EMG61 and online: https://zoom.us/j/96626186133
27th May 1pm
Rudy Matela, Channable
Title: Synthesizing Pure Functions Enumeratively with Search Space Pruning Using Automatically Discovered Properties
Abstract: In this seminar, I will present Conjure, a tool for synthesising pure functions. Given a few argument-result bindings and a list of available primitives, Conjure tries to find a matching implementation involving the primitives. Conjure works by exhaustively enumerating type-correct definitions while pruning the search space using automatically discovered properties. I plan to show how this tool can be used in simple examples; how it works; its application to existing benchmark suites; and how it fares when compared to other similar tools.
EMG61 and online: https://zoom.us/j/96626186133
3rd June
No talks – bank holiday
10th June 11am
Agda PFLA Reading Group – Week 11 – Lists and Higher Order Functions (https://plfa.github.io/Lists/)
EMG61 and online: https://zoom.us/j/96626186133
10th June 1pmLuc Frachon, Heriot-Watt University, PhD progression talk
Title: An Immune-Inspired Approach to Neural Ensemble Search at the Macro-Architecture Level
Abstract: Over the last decade, deep learning has become the predominant form of machine learning, especially on unstructured and language data. However, designing and training deep neural networks involves many hyperparameters, making the process slow and costly. The problem motivated the emergence of Neural Architecture Search (NAS), which automatically discovers highly performing neural network architectures for a particular task.
Most research work in NAS has focused on: 1) finding the single best-performing architecture; 2) finding optimal sub-structures, called “cells”, rather than overall architectures. In this PhD project, we explore a different approach. We create a population of deep neural networks with automatically designed “macro-architectures”. This population allows efficient ensembling of predictions and good generalisation to a more difficult task.
To achieve these goals, we use a population-based algorithm called Artificial Immune System. AIS has a recognised ability to solve multimodal problems and maintain a high degree of diversity. We first show that this approach can lead to highly competitive performance on classical computer vision benchmarks. Then, we focus on assisting the AIS search by predicting architecture accuracy without training the associated networks at evaluation time. We address this by learning the latent space of architectural components and training a performance predictor in that space.
EMG61 and online: https://zoom.us/j/96626186133
17th June 1pm
Fraser Garrow, Heriot-Watt University, PhD progression talk
Title: Why Functional Program Synthesis Matters (in the Realm of Genetic Programming)
Abstract: In Genetic Programming (GP) systems, particularly those that target general program synthesis problems, it is common to use imperative programming languages to represent evolving code. In this work, we consider the benefits of using a purely functional, rather than an imperative, approach. We then demonstrate some of these benefits via an experimental comparison of the pure functional language Haskell and the imperative language Python when solving program synthesis benchmarks within a grammar-guided GP system. Notably, we discover that the Haskell programs yield a higher success rate on unseen data, and that the evolved programs often have a higher degree of interpretability. We also discuss the broader issues of adapting a grammar-based GP system to functional languages, and highlight some of the challenges involved with carrying out comparisons using existing benchmark suites.
EMG61 and online: https://zoom.us/j/96626186133
24th June 1pmZoey Sheffield, Heriot-Watt University, PhD progression talk
Title: Cubical type theory for the rest of us
Abstract: Homotopy type theory (and its computational interpretation, cubical type theory) is a hot topic of late. It holds a lot of promise as a basis for formalized mathematics, with first-class support for extensionality of functions, quotients, and types supporting synthetic homotopy theory. But what does it mean for the humble programmer, who doesn’t know (higher) category theory and/or homotopy theory? This talk is an introduction to cubical type theory for programmers, and to why we might care about features such as higher inductive types and univalence.
EMG61 and online: https://zoom.us/j/96626186133
1st July 11am
Agda PFLA Reading Group – Week 11 – Lists and Higher Order Functions (Part 2) (https://plfa.github.io/Lists/)
EMG61 and online: https://zoom.us/j/96626186133

For full history of seminars, see below.

Past Seminars

March 2022

DateTopics
4th March 1pmMatthew Daggitt, Heriot-Watt University
Title: Interfacing neural network verifiers with interactive theorem provers
Abstract: Verification of neural networks is currently a hot topic in automated theorem proving. Progress has been rapid and there are now a wide range of tools available that can verify properties of networks with hundreds of thousands of nodes. In theory this opens the door to the verification of larger control systems that make use of neural network components. However, there is currently no general methodology for bridging the gap between verifiers and interactive theorem provers that could be used to construct these larger proofs.

In this talk I will present Vehicle, a dependently typed domain-specific language for formalising properties of black-box neural network components. The Vehicle compiler translates specifications down to problems for the neural network verifier Marabou, before automatically generating a file containing an Agda version of the spec. Proofs about the larger system may then be built on top of this Agda interface. I will demonstrate the methodology by proving that a simple neural-network-enhanced car controller always stays on the road.

EMG61 and online: https://zoom.us/j/96626186133
11th March 11am
Agda PFLA Reading Group – Week 6 – Connectives (https://plfa.inf.ed.ac.uk/Isomorphism/)
EMG61 and online: https://zoom.us/j/96626186133
11th March 1pm
EM.G61
Moa Johansson, Chalmers University
Title: Theory Exploration – Combining symbolic and neural AI?
Abstract: Theory Exploration is a technique for automating the discovery of interesting mathematical conjectures, for instance to use as lemmas by an automated theorem prover or to form a candidate algebraic specification of a functional program.
 
I will introduce a system called QuickSpec, developed at Chalmers. It excels at automatically discovering equational conjectures about functional programs. For signatures with around 10 function symbols it is also very fast: conjectures with a term size of up to about 9 (on either side) are usually found within seconds, thanks to a combination of clever heuristics and integration with automated testing, by which terms are evaluated and grouped into equivalence classes.
I will also talk about ongoing work to complement QuickSpec with templates, essentially hints about the shapes of conjectures we are interested in, which allow us to explore even larger theories without hitting exponential explosion. Finally, I will mention how we could leverage machine learning to help with theory exploration, and some related recent work using neural networks for conjecture discovery.

EMG61 and online: https://zoom.us/j/96626186133
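The enumerate-test-group loop at the heart of QuickSpec-style theory exploration can be sketched in a few lines. A toy Python illustration (QuickSpec itself is a Haskell tool; the term set and signature here are invented for the example): terms with identical behaviour on random inputs fall into the same equivalence class, and each class of size greater than one yields a conjectured equation.

```python
import random

# Random test inputs for the two variables x and y.
random.seed(0)
tests = [(random.randint(-10, 10), random.randint(-10, 10))
         for _ in range(50)]

# A tiny "signature" of small terms, as (pretty name, evaluator) pairs.
terms = {
    "x + y": lambda x, y: x + y,
    "y + x": lambda x, y: y + x,
    "x + 0": lambda x, y: x + 0,
    "x":     lambda x, y: x,
    "x * y": lambda x, y: x * y,
    "y * x": lambda x, y: y * x,
}

# Group terms into equivalence classes by their behaviour on the tests.
classes = {}
for name, f in terms.items():
    behaviour = tuple(f(x, y) for x, y in tests)
    classes.setdefault(behaviour, []).append(name)

for group in classes.values():
    if len(group) > 1:   # a conjectured equation between distinct terms
        print(" = ".join(group))
```

Running this prints three conjectured laws: commutativity of `+`, the right-unit law for `+`, and commutativity of `*`. QuickSpec additionally enumerates the terms itself, prunes redundant conjectures, and works at arbitrary types.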
18 March 2022F1Tenth Autonomous Driving: This week all seminar participants are invited to attend externally organised talks.

10.00 Dr. Patrick MacAlpine from Sony AI will present their recent Nature publication titled “Outracing champion Gran Turismo drivers with deep reinforcement learning”: https://www.nature.com/articles/s41586-021-04357-7. The lecture will take place on MS Teams (meeting link).

13.00 Lewis Thunstall, a Huggingface specialist, will demonstrate how to use large pre-trained models in your projects, how to create machine learning apps and demos using Streamlit and Gradio (Python libraries for this purpose), and how to share them with the rest of the open-source ecosystem. Huggingface is one of the most prominent companies in the machine learning ecosystem: they provide tools to access state-of-the-art models and datasets, and infrastructure to test and deploy machine learning models.
Register https://www.eventbrite.co.uk/e/hands-on-class-with-huggingface-registration-293035676597
18th March 2pm
NOTE: 2pm not 1pm

Cristian Sestito, University of Calabria, Italy
Title: Transposed Convolution-Based Quantized Neural Networks for FPGAs
Abstract: In the last few years, Transposed Convolutions (TCONVs) have gained interest in image processing due to the increasing demand for up-sampling applications, including image decompression, semantic segmentation and synthetic object generation using AI. Such algorithms are computationally intensive and require a lot of memory, thus calling for dedicated hardware platforms to be deployed effectively. Field Programmable Gate Arrays (FPGAs) offer reconfigurable cells to implement low-latency and power-efficient digital architectures. However, low-cost FPGAs are resource-bounded, and proper compression techniques are mandatory to fit the referred AI models. For instance, data quantization has proven valid for Multi-Layer Perceptrons (MLPs) and tiny Convolutional Neural Networks (CNNs). Nevertheless, the quantization of CNNs using TCONVs is still underexplored.
In this talk, I will present my current research on TCONV-based Quantized Neural Networks, highlighting: (a) the evaluation of accuracy taking into account state-of-the-art AI models; (b) the design of an FPGA-based hardware accelerator for TCONVs to analyse the resource requirements and the throughput.
EMG61 and online: https://zoom.us/j/96626186133
25th March 2pm F1Tenth Autonomous Driving: Seminar
Title: Introduction to Explainable AI (XAI), by Assya Chiguer
Abstract: Over the last few years, the successful development of new machine learning methods, especially deep neural networks, has resulted in the expansion of AI’s application areas. Search engines, virtual assistants, NLP tools, autonomous cars: the number of fields using AI systems is steadily growing. Most of these systems operate as black boxes, and their inner decision processes are unintelligible to users. In critical fields such as finance, medicine and autonomous vehicles, providing explanations of these systems has become essential to increase users’ trust and understanding, and to allow them to use the systems effectively. Explainable AI (XAI) is an emerging field aiming to provide those insights and explanations. Clear explanations then allow users to spot abnormal network behaviours. Correcting those behaviours seems to be the next logical step: to reach this goal, users can apply property-driven training methods to their systems to implement networks with the correct and expected behaviours. Although XAI and property-driven methods appear to be linked, research on how to concretely implement them in combination has not been thorough. My dissertation will concern itself with trying to fill this gap. In this presentation, I will go over the main methods of both fields and highlight their possible links.
EMG61 and online: https://zoom.us/j/96626186133

January – February 2022

DateTopics
14th January 10amF1Tenth Autonomous Driving: Seminar 1. Introductions.
EMG61 and online
Homework: Lectures 1-3 on Youtube
14th January 1pm
Agda PFLA Reading Group – Week 4 – Equality (https://plfa.inf.ed.ac.uk/Equality/)
Online only
28th January 1pm
Liam O’Connor – University of Edinburgh
Title: Quickstrom – Linear Temporal Logic for Property-based Acceptance Testing
Abstract: Tools like Selenium WebDriver enable automated acceptance testing of user interfaces for web applications, and have seen widespread adoption. Unfortunately, programmers must painstakingly specify individual test cases themselves, which hearkens back to the dark ages of unit tests before property-based testing. Quickstrom (https://quickstrom.io) aims to bring the magic of property-based testing to this domain. Currently, Quickstrom properties are specified using a very limited subset of Linear Temporal Logic that only supports safety properties, and specifications are written in an idiosyncratic dialect of PureScript that, in addition to being unfamiliar to most JavaScript programmers, suffers from a number of semantic complications that make spec-writing difficult. My research aims to address both of these problems. In this talk I will describe QuickLTL, a significantly more expressive variant of Linear Temporal Logic which enables us to additionally support liveness properties despite merely checking partial traces, through use of an additional “demanding” next operator. I will also describe Specstrom, the new specification language in which QuickLTL is embedded which resolves the semantic issues that arose from using PureScript. We aim to release the new version of Quickstrom with these improvements this year.
EMG61 and online
28th January 14.00
F1Tenth Autonomous Driving: Seminar 2. Overview of F1Tenth and ROS, by Samuel Moses
Abstract: Cars are becoming autonomous robots, and every autonomous robot needs to be able to navigate safely while avoiding obstacles. A robot must be able to answer three questions: where it is, where it is going, and how it can get there. The F1Tenth is a good example of a car that can be programmed to do this; it is one-tenth the size of an actual Formula One race car. A very suitable environment for programming the F1Tenth is the ROS Development Studio. ROS, short for Robot Operating System, is a free and easy-to-use platform for programming robots. In this seminar, I will present a brief overview of F1Tenth and an introduction to ROS and its Development Studio. I will discuss the basic structure of ROS (its packages, nodes, subscribers, publishers and services) and how to simulate an F1Tenth using the F1Tenth dev simulator based on Gazebo.
EMG61 and online
4th February 11.00
Agda PFLA Reading Group – Week 5 – Equality (https://plfa.inf.ed.ac.uk/Isomorphism/)
Online only
4th February 13.00
Daniel Kienitz, Heriot-Watt University
Title: The Effect of Manifold Entanglement and Intrinsic Dimensionality on Learning
Abstract: We empirically investigate the effect of class manifold entanglement and the intrinsic and extrinsic dimensionality of the data distribution on the sample complexity of supervised classification with deep ReLU networks. We separate the effect of entanglement and intrinsic dimensionality and show statistically for artificial and real-world image datasets that the intrinsic dimensionality and the entanglement have an interdependent effect on the sample complexity. Low levels of entanglement lead to low increases of the sample complexity when the intrinsic dimensionality is increased, while for high levels of entanglement the impact of the intrinsic dimensionality increases as well. Further, we show that in general the sample complexity is primarily due to the entanglement and only secondarily due to the intrinsic dimensionality of the data distribution.
EMG61 and online
4th February 14.00
F1Tenth Autonomous Driving: Seminar 3. Introduction to Simulators for Autonomous Systems. By Jiancheng Zhang.
Abstract: I will start with some well-known graphical simulators that can be used in developing racing algorithms, then look into some open-source simulators written by other universities that are based on ROS, and discuss how these simulators work.
EMG61 and online
11th February 1pm
EM.G61
James McKinna – Automatic differentiation in one line, forwards and backwards
Abstract: Automatic differentiation (AD) is usually presented as a whole-program transformation, whereby code in a programming language which computes a multivariate mathematical function is magically transformed into another which computes its derivative, or indeed its gradient vector of all partial derivatives at a given point. Mysterious variations on this theme arise, depending on whether the computation is structured ‘bottom-up’ (‘forward-mode’: from input parameters to output value(s)) or ‘top-down’ (‘reverse-mode’: from ‘outputs to inputs’, as it were), made even harder to understand in imperative languages by the associated data structures for book-keeping such computations, together with an additional special ingredient: taking ordinary numerical computation and working instead with Clifford’s so-called ‘dual numbers’ (1873: yes, 150 years ago), which pair ‘a function (value) with (the value of) its derivative’.

This whiteboard talk aims to demystify the basics of AD by considering it as an abstract evaluator of symbolic expressions (the ‘one-liner’ of the title), and by showing how to incorporate the mysteries of forward- and backward-mode AD by changing the result type of the evaluation in interesting, but algebraically comprehensible, ways which moreover realise the successive computational improvements, in complexity terms, from purely functional vanilla forward-mode AD to imperative, array-based, reverse-mode AD.
EMG61 only – *not online*
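The dual-number trick the abstract refers to is compact enough to show directly. A minimal Python sketch of forward-mode AD (illustrative only, not the talk’s one-liner evaluator): overloading arithmetic on value/derivative pairs makes derivatives fall out of ordinary evaluation.

```python
class Dual:
    """Clifford's dual numbers: a value paired with its derivative.
    Arithmetic propagates derivatives by the usual calculus rules,
    giving forward-mode AD 'for free'."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.val * other.dot + self.dot * other.val)  # product rule
    __rmul__ = __mul__

def derivative(f, x):
    """Evaluate f at x with derivative seed 1; f'(x) falls out."""
    return f(Dual(x, 1.0)).dot

# d/dx (x^2 + 3x) at x = 2 is 2x + 3 = 7
print(derivative(lambda x: x * x + 3 * x, 2.0))   # 7.0
```

Reverse-mode AD, as the abstract notes, requires changing the result type again (recording the computation so derivatives can flow from outputs back to inputs) rather than this simple pairing.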
11th February 14.00F1Tenth Autonomous Driving: Seminar 4.
Title: LiDAR and GANs, a preview to my thesis, by Youssef Bonnaire
Abstract: This presentation will give an overview of how LiDARs work and their continued use within the world of automated driving today.  This will be followed by a description of GANs as they were created in 2014 by Ian Goodfellow with a recap summarising the 2019 paper released by Cao et. al. describing the developments which have been made in the recent years. It will be delving into the main forms (DCGANs and WGANs) at the moment, how they have been used for general purposes and how they have been used with regards to LiDAR data.
EMG61 and online: https://zoom.us/j/96626186133
18th FebruaryNo seminars.
25th February 1pm
EM.G61
Adrien Guatto, IRIF Paris
Title: Explicit Guarded Recursion
Abstract: Guarded recursion has been proposed as an alternative to coinduction and general recursion. Unlike general recursion, guarded recursion is a sound logical principle. Unlike coinduction, it provides a generic fixpoint combinator, avoiding the need for anti-modular syntactic positivity checks. The key device used to formulate guarded recursion is a family of approximation modalities, chiefly the so-called later modality. These modalities internalize the existence of fuel, an invisible ambient resource. Fuel is a natural number that controls how many times recursive definitions are allowed to unfold. In this talk, I will present ongoing work on an extension of guarded recursion with new substructural connectives. These connectives decompose approximation modalities by making the existence of fuel explicit.
EMG61 and online: https://zoom.us/j/96626186133
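A crude operational caricature of the fuel idea (my own sketch, not the speaker's type-theoretic formulation): a guarded fixpoint can be approximated by unfolding the recursive definition a bounded number of times, with the fuel bounding how deep the recursion may go.

```python
def guarded_fix(step, fuel):
    """Approximate the fixpoint of `step` by unfolding it `fuel` times.
    Fuel plays the role of the invisible resource behind the 'later'
    modality: each unfolding consumes one unit, so the definition is
    only usable up to the recursion depth the fuel pays for."""
    def out_of_fuel(*_):
        raise RuntimeError("out of fuel")
    f = out_of_fuel
    for _ in range(fuel):
        f = step(f)  # one unfolding = one unit of fuel
    return f

# A recursive definition written against its own 'later' self.
fib_step = lambda rec: lambda n: n if n < 2 else rec(n - 1) + rec(n - 2)

assert guarded_fix(fib_step, fuel=10)(8) == 21  # enough fuel for depth 8
```

The substructural connectives in the talk make this implicit fuel-passing explicit in the types, rather than in the runtime as here.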
25th February 14.00F1Tenth Autonomous Driving: Seminar 5.
Title: An Introduction to Model Predictive Path Integral Control, by Haoran Hong
Abstract: Model Predictive Path Integral control (MPPI) is a Model Predictive Control (MPC) algorithm published in 2015. MPPI was developed as an algorithm for autonomous driving which improves upon other MPC methods, and it is a potential algorithm for the F1TENTH car.
In this talk I will give an overview of the MPPI algorithm’s fundamentals and how it can be applied to autonomous driving.
EMG61 and online: https://zoom.us/j/96626186133

October – December 2021

DateTopics
8-10th SeptemberAISEC Annual Project Workshop and Industrial Board
Friday 8th October
1pm
Radu Mardare, University of Strathclyde
Title: A formal context for metric semantics
Abstract: In recent decades, research in most fields of computer science, from programming paradigms to cyber-physical systems and from robotics to learning, has been challenged to integrate various concepts of continuous mathematics into semantics. This is because the interaction of computational systems with the real world brought real-valued parameters into computation (rates, probabilities, differential equations, time, resources, etc.). In this context, the classic semantics centred on concepts of congruence (bisimulation, behavioural equivalence) became inadequate. We are no longer interested in understanding systems or their behaviours up to identity; instead we need to work with approximations of systems and of their behaviours, which scale properly in the structure of a computational system and allow us to understand approximated computation.
To answer this challenge, we have introduced quantitative equational reasoning, an algebraic theory that generalizes universal algebras by extending the classic concept of equation of type s=t to equations of type s=e t for some positive e, interpreted as an upper bound of the distance between the terms s and t. In this way, instead of axiomatizing congruences, we axiomatize algebraic structures on metric spaces. This gives us the concepts we need to develop a metric semantics for systems where the similarity between non-equivalent systems can be properly measured and approximated.
This talk is a tutorial on quantitative equational reasoning and will summarize a series of results that we have published in the last five years, joint work with Prakash Panangaden and Gordon Plotkin.
https://personal.cis.strath.ac.uk/r.mardare/
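The shift from equations s = t to quantitative equations s =ε t can be made concrete with a toy model (my own example, using total variation distance on finite distributions; none of this code is from the talk):

```python
def total_variation(p, q):
    """Distance between two finite distributions (dicts outcome -> probability)."""
    outcomes = set(p) | set(q)
    return 0.5 * sum(abs(p.get(o, 0.0) - q.get(o, 0.0)) for o in outcomes)

def quantitative_eq(s, t, eps):
    """Interpret the quantitative equation s =_eps t as d(s, t) <= eps."""
    return total_variation(s, t) <= eps

fair   = {'heads': 0.5, 'tails': 0.5}
biased = {'heads': 0.6, 'tails': 0.4}

# The coins are not equal, but they are equal 'up to 0.1':
assert quantitative_eq(fair, biased, 0.1)
assert not quantitative_eq(fair, biased, 0.05)
```

Axiomatizing such bounded distances, rather than exact congruences, is what gives the metric semantics described in the abstract.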
Friday 15th OctoberNone
Friday 22nd OctoberNone
Friday 29th
October
1pm
EM.G61
Agda PLFA reading group – Week 0
Introductory session – Will cover installation of Agda and plan for the weeks ahead.
Friday 5th November
1pm
EM.G61
Martin Trefzer, University of York
Title: Reservoir Computing with Physical Substrates
Abstract: Classical digital computing is power-hungry, fragile, and hard to interface to the real world. Unconventional computers can help overcome these issues, particularly by being able to perform computation embodied in physical materials that directly exploit the natural dynamics of the substrate, thereby dramatically reducing power requirements. Reservoir computing (RC) is a computational model derived from recurrent neural network theory and has proven to be an efficient approach to exploiting the non-linear dynamics of physical substrates for computation. State-of-the-art performance has been demonstrated in both simulation and physical implementation for a variety of proof-of-concept systems exploring novel materials and substrates, including nanomaterials, magnetic materials, neuromorphic hardware and analogue electronics. However, current examples are still limited by the components converting between analogue and digital (the real world and the computer), and there exists no programming model or methodology to select appropriate substrates and design reservoir computer architectures.

In the talk I will motivate my goal to remove any digital pre- and post-processing to exploit the full potential of embodied RC, and outline the vision of a layered, flexible architecture composed of components implementing different RC models – a “reservoir of reservoirs” (RoR) – that can be networked into complex signal processing pathways.  I will show the steps that we have taken towards this goal, including a task-agnostic methodology to assess materials for RC performance (CHARC), results for different RC topologies, lessons learnt from different substrates (CNTs, magnets, analogue circuits), and some ventures into applications (NARMA, FPAAs, tensegrity robotics, sensor fusion).
Friday 12th November
1pm
EM.G61
Agda PLFA reading group – Week 1
We will aim to discuss the first few sections of the PLFA book:
– Naturals: Natural numbers
– Induction: Proof by Induction
Friday 19th November
1pm
EM.G61
Marko Doko, Heriot-Watt University
Title: A Walk Through the Forest of Weak Memory Concurrency
Abstract: Realistic concurrency models are notoriously complex and difficult to grasp when encountered. In this talk, we will slowly go over several
approaches to defining weak concurrency models, accompanied by
motivating examples, and examples showcasing known issues with the
definitions.

We will start by looking at the C/C++ style of memory models, which
gives the semantics declaratively, in terms of execution graphs. Next,
we will look at the ways to approach the (parts of) that semantics
operationally, ending up with an introduction to the Promising semantics.
Friday 26th November
1pm
EM.G61
Agda PLFA reading group – Week 2
We will continue to discuss the first few sections of the PLFA book:
– Naturals: Natural numbers
– Induction: Proof by Induction
Friday 3rd December
1pm
EM.G61
None
Friday 10th December
1pm
EM.G61
Agda PLFA reading group – Week 3
We will continue to discuss the following sections of the PLFA book:
– Relations: Inductive definition of relations
– Equality: Equality and equational reasoning
– Isomorphism: Isomorphism and Embedding

April – May 2021

DateTopics
7 April
9.00 GMT
Farayi Kambarami, Chinhoyi University of Technology
Title:  Making Every Word Count – Dealing with the Data Sparsity and Morphological Complexity challenges of the conjunctively written agglutinative Southern Bantu Languages
Abstract: Southern Bantu languages (SBLs) are all agglutinative. This agglutination is not always visible at the orthographic level because some of them are written conjunctively, while others have a disjunctive writing style. Whilst all these languages are resource-scarce and have limited textual data available on the web and elsewhere, the disjunctively written ones have the “advantage” of very low type-to-token ratios, and this has been shown to make the development of data-driven NLP models more effective for them than for their conjunctively written cousins. Since they are morphologically similar in almost every way except orthography, we ask the question: could we make better use of the orthographically conjunctively written words – extracting their structure to increase the amount of data available to learn from? We thus propose and experiment with various methods to extract the structure of these words, using rule-based methods (Finite State Automata) and data-driven methods (Minimum Description Length [Morfessor], etc.), to extract sub-word features for use in the development of more effective computational language models for them.
14 AprilReading group:
Paper: KoGuN: Accelerating Deep Reinforcement Learning via Integrating Human Suboptimal Knowledge 
Proposer: Fazl Barez
22 April
11.00 GMT
Note: Thursday not Wednesday, 11am not 9am!
Clemens Kupke, University of Strathclyde
Title: Learning weighted automata
Abstract: In this talk I am going to discuss active learning algorithms for
weighted automata. I will first recall Angluin’s seminal L* algorithm
for learning deterministic finite automata. After that I will introduce
a variant of the L* algorithm for weighted automata over semirings. Our
main result is that this algorithm works when applied to weighted
automata over a principal ideal domain. In addition, I will discuss an
instructive example that shows that the algorithm is not guaranteed to
terminate over arbitrary semirings.
Joint work with Gerco van Heerdt, Jurriaan Rot and Alexandra Silva.
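As a concrete reference point for the objects being learned (a toy example of my own, not code from the talk): a weighted automaton over the real semiring assigns each word the weight alpha · M(w1) · ... · M(wn) · eta, so evaluating a word is just matrix arithmetic.

```python
# A two-state weighted automaton over the reals that counts occurrences
# of the letter 'a' in a word.
alpha = [1.0, 0.0]                     # initial weight vector
eta   = [0.0, 1.0]                     # final weight vector
M = {
    'a': [[1.0, 1.0], [0.0, 1.0]],     # transition matrix for 'a'
    'b': [[1.0, 0.0], [0.0, 1.0]],     # identity matrix for 'b'
}

def mat_vec(m, v):
    return [sum(m[i][j] * v[j] for j in range(len(v))) for i in range(len(m))]

def weight(word):
    """Weight of a word: alpha . M(w1) . ... . M(wn) . eta."""
    v = eta
    for letter in reversed(word):
        v = mat_vec(M[letter], v)
    return sum(a * x for a, x in zip(alpha, v))

assert weight("abab") == 2.0   # two occurrences of 'a'
```

Replacing real addition and multiplication with another semiring's operations gives the general setting of the talk; the learning algorithm must recover the vectors and matrices from queries.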
28 April
9.00 GMT
Yingfang (James) Yuan, LAIV PhD student (PhD talk)
Title: Hyperparameter Optimisation for Graph Neural Networks
Abstract: Graph neural networks (GNNs) have been proposed for a wide range of graph-related learning tasks. In particular, in recent years an increasing number of GNN systems have been applied to predict molecular properties. A direct impediment, however, is selecting appropriate hyperparameters to achieve satisfactory performance at lower computational cost. Meanwhile, many molecular datasets are far smaller than the datasets in typical deep learning applications, and most hyperparameter optimization (HPO) methods have not been explored in terms of their efficiency on such small datasets in the molecular domain. We conducted a theoretical analysis of the common and specific features of two state-of-the-art and popular HPO algorithms, TPE and CMA-ES, and compared them with random search (RS) as a baseline. Experimental studies were carried out on several benchmarks in MoleculeNet, from different perspectives, to investigate the impact of RS, TPE, and CMA-ES on HPO of GNNs for molecular property prediction. In addition, we propose a genetic algorithm with a hierarchical evaluation strategy and tree-structured mutation for HPO. Finally, we believe our work will motivate further research on GNNs as applied to molecular machine learning problems in chemistry and materials science.
5 May 9.00 BSTMatthew Daggitt, Heriot-Watt University
Title: Neural Network Robustness – Careful what you verify!
Abstract: Verifying the robustness of neural networks is a hot topic in machine learning at the moment. As is common in active areas of research, there has been a proliferation in the number of definitions of robustness. However, as yet, the consequences of the differences between these definitions do not seem to be widely discussed in the literature.

In this talk I will compare four different definitions of robustness, looking in particular at their motivation, mathematical properties, assumptions about the underlying distribution of data, and interpretability. We highlight that, if not used with care, the popular definition of classification robustness may result in a contradictory specification! This work is the result of a collaboration between various members of LAIV.
12 May 11.00 BSTNote: 11am not 9am!
Kirsty Duncan, LAIV PhD student (PhD talk)
Title: Pruning Robust Neural Network Models Using Logical Constraints
Abstract: Pruning algorithms aim to remove the redundancy in neural networks and can result in networks which require significantly less memory and computation than full-precision networks, with little or no difference in accuracy. This can be leveraged to deploy pruned models on resource-constrained hardware or even to shrink networks to make verification tractable.
Unfortunately, pruned networks are more sensitive to adversarial attacks.
To address this, we propose Logically Constrained Pruning (LCP), a pruning algorithm that employs a semantic loss function encoding a network’s adherence to a robustness property described by a logical constraint. Our new pruning algorithm produces models that are no less accurate than those of a state-of-the-art pruning algorithm, with the added advantage of being more robust to adversarial attack.
19 May
12.00 BST
Note: 12 noon, not 9am!
Dr Craig Innes, University of Edinburgh
Title:
Combining Task & Environment Specifications to Verify Robot Safety
Abstract: Despite the increasing prevalence of probabilistic, data-driven approaches to robotic control and other tasks, many practitioners still wish to rigorously test whether their system conforms to some concrete specification. This specification includes not just the explicit task that the robot is expected to complete (along with e.g., safety constraints), but also the range of environments that a user can reasonably expect the task to be achieved in. This talk addresses both parts of such specifications:

First, we will tackle environment specifications by introducing ProbRobScene – a probabilistic specification language for declaratively expressing (and sampling) 3D simulated environments. Second, we will tackle task specifications by combining data-driven imitation learning with Linear Temporal Logic. Finally I’ll discuss some recent work which ties these two threads together, with the goal of creating a holistic framework for automated testing of robot manipulation systems in simulation.
26 May 9.00 BSTNatalia Slusarz, LAIV BSc student (BSc talk)
Title: Mathematical properties of neural networks trained on artificial datasets
Abstract: In my BSc thesis, I investigated the mathematical properties of neural networks (NNs) by using artificially generated datasets with known properties and parameters. This work uses knowledge of the data distribution, as well as the ability to control it with parameters, to implement a new tool that extracts more information about the state of a NN during training. It allows us to extend our understanding of NNs and the process they go through to arrive at their final prediction, in turn helping ongoing work in NN verification.
02 June
9:00
BST
Oliver Markgraf,  TU Kaiserslautern,
Title: Learning Union of Integer Hypercubes with Queries (with applications to monadic decomposition)
Abstract: We study the problem of learning a finite union of integer (axis-aligned)
hypercubes over the d-dimensional integer lattice, i.e., hypercubes whose edges are
parallel to the coordinate axes. This is a natural generalization of the classic problem in computational learning theory of learning rectangles. We provide a
learning algorithm with access to a minimally adequate teacher (i.e. membership
and equivalence oracles) that solves this problem in polynomial-time, for
any fixed dimension d. Over a non-fixed dimension, the problem subsumes the
problem of learning DNF boolean formulas, a central open problem in the field.
We have also provided extensions to handle infinite hypercubes in the union, as
well as showing how subset queries could improve the performance of the learning
algorithm in practice. Our problem has a natural application to the problem
of monadic decomposition of quantifier-free integer linear arithmetic formulas,
which has been actively studied in recent years. In particular, a finite union of
integer hypercubes corresponds to a finite disjunction of monadic predicates over
integer linear arithmetic (without modulo constraints). Our experiments suggest
that our learning algorithms substantially outperform the existing algorithms.
This work has been accepted at CAV’21 and is joint work among Anthony W. Lin, Daniel Stan and Oliver Markgraf.
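The hypothesis class itself is easy to picture; a membership check for a finite union of axis-aligned integer hypercubes (an illustration of my own, unrelated to the paper's learning algorithm) looks like:

```python
def in_cube(point, cube):
    """cube: one (lo, hi) interval per dimension, i.e. an axis-aligned hypercube."""
    return all(lo <= x <= hi for x, (lo, hi) in zip(point, cube))

def in_union(point, cubes):
    """Membership in a finite union of hypercubes: inside any one of them."""
    return any(in_cube(point, c) for c in cubes)

# Two 2-D integer hypercubes (rectangles) over the integer lattice.
cubes = [[(0, 2), (0, 2)], [(5, 7), (1, 3)]]

assert in_union((1, 1), cubes)       # inside the first cube
assert in_union((6, 2), cubes)       # inside the second cube
assert not in_union((3, 0), cubes)   # in neither
```

The learner's job is the converse direction: recovering the list `cubes` from membership and equivalence queries alone.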
09
June
9:00
BST
Edwin Stang, Heriot-Watt (SoSS) (PhD talk)
Title: Automating Robustness Analysis of Trading Strategy Development Processes
Abstract: This DBA research is about Automating Robustness Analysis of Trading Strategy Development Processes using a specifically designed trading platform. Investor decision points, such as Entries, Exits, Money Management, Equity Curve Trading, Strategy/Portfolio Selection, and Robustness Checking can be formalised. To facilitate this, a new functional expression language was created that limits exception cases and simplifies the problem space. Evolutionary Machine Learning techniques are used to generate explainable trading strategy candidates and simulate formalised decisions on a portfolio management level in automated longitudinal studies. Users can simulate a team of random strategy developers while the process extracts and manages a portfolio that is viable for live trading. This process is then tested for robustness by Monte Carlo simulations. This is facilitated by advances in research about high-performance backtesting engines and computer language design. An example is provided that compares breakout and signal-based strategy generators in the foreign exchange market. The long-term goal is to have a research platform that can evaluate decisions at a higher level than is currently possible with available contender platforms.
16
June
9:00
BST
Todd Waugh Ambridge, University of Birmingham
Title: Formalised Convergent Regression in Constructive Type Theory
Abstract: Supervised machine learning concerns itself with estimating relationships from sample data — it is thus the approximation of an unknown function from finite, perhaps error-prone, observations of its behaviour. Two methods of function approximation are (non-parametric) interpolation and (parametric) regression. Convergence properties of interpolation have been studied extensively via Weierstrass-style theorems; any continuous function on a closed interval can be interpolated to any degree of precision given enough (appropriately-spaced) samples.

In this talk, I will instead propose several convergence theorems for regression. The theorems proposed are constructive by design — and hence yield algorithms for function approximation — and are defined on a general class of types that includes different representations of closed intervals of exact real numbers. I will show how these theorems have been fully formalised in the proof assistant Agda, and discuss the ongoing development of algorithms inspired by the theorems using an off-the-shelf Java library for exact real numbers.
23
June
9:00
BST
Jerome Jochems, University of Oxford
Title: Higher-order constrained Horn clauses for verification
Abstract: Higher-order constrained Horn clauses (HoCHC) are a semantically invariant system of higher-order logic modulo theories. With semi-decidable unsatisfiability over a semi-decidable background theory, HoCHC is particularly suitable for safety verification.

In this talk, I will make a case for HoCHC as a unifying setting in which to understand and compare different approaches to higher-order program verification, analogous to the role of first-order constrained Horn clauses in first-order verification. Our work is motivated by the (disparate) limitations of established higher-order verification approaches like higher-order model checking and refinement type inference.
30
June
LAIV attends SPLS
There will be no seminars in July/August.

March 2021

DateTopics
3 March, 9.00 GMTBruno Gavranović, University of Strathclyde
Title: Neural Networks through the lens of Category Theory
Abstract: I will give an introduction to the categorical foundation of gradient-based learning algorithms. I’ll show how category theory can be used to formalize various components of neural networks: parameterization, bidirectionality and differentiation. I’ll also show how CT can be used as a graphical language and a visual, pedagogical aid in understanding neural networks. Time permitting, we will cover how abstract categorical results of learning can be transferred to the surprising setting of Boolean circuits.
10 March, 9.00 GMTMichael Pidgeon, LAIV MSc student (MSc talk)
Title: Adversarial Attacks on Explainable AI Techniques
Abstract: Recent work has shown that certain explainable AI techniques are vulnerable to adversarial attacks. In an extreme case, this can be used to hide biased and otherwise nefarious behaviour behind an innocuous looking (and false) explanation of a model. This talk will summarise work in this area, provide a live demonstration of possible attacks, and give an opinionated view of real-world risks.
18 March, 9.30 GMTLAIV Reading group: Demystifying Black-box Models with Symbolic Metamodels, by A. Alaa and M. van der Schaar. Proposer: Daniel Kienitz
24 March, 9.00 GMTDan Green, LAIV MSc student (MSc talk)
Title: Understanding the Role of Machine Learning in Signed Language Recognition
Abstract: There have been many attempts in the past to properly capture signed language data using computers. Many approaches are flawed due to fundamental misunderstandings of the underlying linguistics or lack of understanding about deaf culture and language, while other approaches have been held back by limitations of technology. This talk will outline what components are needed for a system to properly capture signed language data and explore how a range of solutions within ML can be utilised to achieve this task.
31 March
9.00 GMT
None currently

January – February 2021

DateTopics
14th JanuaryWorld Logic Day
 https://logicday.vcla.at/
20 January, 9.00 GMTLuca Arnaboldi, Edinburgh University, LAIV RA
Title: Intrusion Detection in the IoT
Abstract: Intrusion detection is a well-studied area, with early work starting in the early 90s. The field is perennially evolving: as the scope of the internet changes and attacks get more varied, the detection mechanisms evolve alongside them. Despite dozens of proposed approaches and publications in the area, leading figures in the field have extensively discussed downfalls in current detection techniques, especially (but not only) in the context of the IoT. To find out to what extent, we conducted an extensive review of techniques used in the extant literature, and demonstrate that most techniques are inadequate or currently incapable of detecting certain attacks, as well as being unsuitable for certain IoT scenarios. We did, however, find a lot of promise in the use of behaviour-based techniques when domain knowledge of the IoT deployment is available. We therefore propose means to use formal models to embed domain knowledge as part of the detection process, enhancing the actionability of an IDS. We also identify key scenarios in which it may be beneficial to adopt this technique as the core point of detection, where more standard techniques may not be feasible.
27 January, 9.00 GMTAlasdair Hill, LAIV PhD student, (PhD Talk)
Title: Work in progress: separating control from planning via Agda library for modelling AI planning
Abstract: AI planners (such as STRIPS) provide algorithms for finding plans that satisfy given constraints on initial and final states. An example is finding a taxi route from point A to point B, assuming the passenger is initially located at A. However, when a suitable plan (e.g. a route from A to B) is found by the planner, its execution may be subject to a number of additional constraints. For example, the taxi driver should not start the planned journey if the car does not have enough fuel, or if the journey is planned outside their working hours. These constraints may not be part of the planning problem; in other words, they are needed for control of plan execution rather than for plan generation. In this talk, I will show a case study on how dependent types can help to separate the concerns of AI planning from those of control.
3 FebruaryNone
10 February, 9.00 GMTEdwin Brady, St Andrews University
Title: Programming with Quantitative Types in Idris 2
Abstract: A general overview of Idris with specific examples on DSLs with resource tracking in the types.
17 February, 9.00 GMTRemi Desmartin, LAIV MSc student, (MSc talk)
Title: Verifying Neural Networks with Imandra.
Abstract: Imandra is a theorem prover built on OCaml. Mathematical models are written in OCaml, and Imandra can be used to reason about them, check if they verify a given constraint or generate counter-examples to that constraint. In this talk, I will present Imandra and demonstrate its application in verifying a linear regression model’s properties.
23 February, 9.00 GMTDaniel Kienitz, LAIV PhD student, (PhD talk)
Title: Unsupervised transfer learning of deep representation with autoencoders – From simple to complex
24 February, 9.00 GMTDr Michael Cashmore, University of Strathclyde
Title: Robust Plan-Based Robot Control
Abstract: The topic of the seminar will focus on temporal and numeric challenges that arise in plan execution. If a plan is produced with some flexibility, intended to be executed in an uncertain environment, how it can be executed? The seminar will cover properties of temporal controllability and robustness envelopes, which describe the flexibility allowed in a plan, the perturbations that can be observed in the environment for which a plan can be successful, and a window through which to analyse the fragility of a plan.

October – December 2020

Some of these meetings are joint with the AISEC Grant (colour-coded in “AISEC colours” red and gold)

Date Topics
1 October
Katya speaks about AI Verification in Need of Programming Language Support at the SICSA Annual Conference
6 October
Edinburgh Center for Robotics: Annual Conference.
07 October
12.00 BST
(AISEC talk) Madeline Cheah, Horiba Mira
Title: Cybersecurity and Future Mobility (Slides).
Abstract: Cybersecurity is already a major concern for manufacturers – but what will cybersecurity for vehicles of the future look like? Connected autonomous vehicles are expected to be plugged into an ecosystem that will require us to be faster, more agile and more responsive to security threats.
This presentation will identify the questions we will need to answer about autonomous vehicles, their ecosystem, and the many new technologies that have already been trialled, and will forecast what security practitioners of the future may face.
Bio: Dr Madeline Cheah is currently Cyber Security Innovation Lead at HORIBA MIRA, with responsibility for automotive and autonomous vehicle cyber security research at the company. She started her current role after having earned a PhD in automotive cyber security at Coventry University, with a focus on systematic security evaluations on automotive interfaces and all the processes (formal or informal) that might be inherent in such a framework. Her contributions have been published in peer-reviewed papers and she is a regular speaker at cybersecurity events.
14 October
9.00 BST
(AISEC talk) Matthew Daggitt
Title: An introduction to algebraic routing theory
Video Recording
Abstract: Modern routing protocols are complex, so much so that even the Border Gateway Protocol that runs the backbone of the internet is known to exhibit undesirable behaviour such as non-convergence and non-determinism. In this talk I will discuss the work that I and others have been doing on an algebraic theory of routing. Instead of reasoning about an individual protocol implementation, this approach proves general theorems about routing that can then be specialised to a particular protocol. For example one of our recent results shows that there is no difference between the sets of routing problems solvable with reliable vs unreliable communication. In the talk I will also discuss our efforts in formalising these proofs in the dependently typed language and proof assistant Agda. In particular, I’ll discuss our recent formal proof of correctness of a protocol containing many of the features of BGP: local preferences, communities and route filtering.
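The algebraic viewpoint can be illustrated with a small sketch (mine, not the speaker's Agda formalisation): the same iteration computes ‘best’ routes for any suitable choice of the algebra's operators, and instantiating them with (min, +) recovers shortest paths.

```python
INF = float('inf')

def best_routes(n, edges, source):
    """Bellman-Ford-style iteration over the (min, +) routing algebra.
    'Choose the better route' is min, 'extend a route along a link' is +;
    swapping in another algebra (e.g. (max, min) for widest paths) changes
    the meaning of 'best' without changing the iteration."""
    dist = [INF] * n
    dist[source] = 0
    for _ in range(n - 1):               # n-1 rounds suffice to converge
        for u, v, w in edges:
            dist[v] = min(dist[v], dist[u] + w)
    return dist

edges = [(0, 1, 5), (0, 2, 2), (2, 1, 1), (1, 3, 1)]
assert best_routes(4, edges, 0) == [0, 3, 2, 4]
```

General theorems proved about the algebra's properties (distributivity, monotonicity, etc.) then transfer to every protocol obtained by such an instantiation, which is the point of the approach described in the abstract.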
15 October
LAIV attends SAFETY FOR CONVERSATIONAL AI WORKSHOP: https://tinyurl.com/safeconvai
21
October
9.00 BST
Esra Erdem, Sabanci University.
Title: Applications of Answer Set Programming where Theory meets Practice
Abstract: We have been investigating applications of Answer Set Programming (ASP) in various domains, ranging from historical linguistics and bioinformatics to economics and robotics. In these applications, theory meets practice around challenging computational problems, and they all start a journey towards benefiting science and life. ASP plays an important role in this journey, sometimes as a declarative programming paradigm to solve hard combinatorial search problems (e.g., phylogeny reconstruction for Indo-European languages, multi-agent path finding in autonomous warehouses, matching problems in economics), and sometimes as a knowledge representation paradigm to allow deep reasoning and inferences over incomplete heterogeneous knowledge and beliefs of agents (e.g., hybrid planning for robotic manipulation, diagnostic reasoning in cognitive factories, explanation generation for biomedical queries). In this talk, we will share our experiences from such different applications of ASP, and discuss its role and usefulness from different perspectives.
Bio: Esra Erdem is an associate professor in computer science and engineering at Sabanci University. She received her Ph.D. in computer sciences at the University of Texas at Austin (2002), and carried out postdoctoral research at the University of Toronto and Vienna University of Technology from 2002 to 2006. Her research is in the area of artificial intelligence, in particular, the mathematical foundations of knowledge representation and reasoning, and their applications to various domains, including robotics, bioinformatics, logistics, and economics. She was a general co-chair of ICLP 2013, and is a program co-chair of ICLP 2019 and KR 2020, and the general chair of KR 2021. She served on the editorial board of JAIR, and is a member of the editorial board of TPLP, the ALP executive committee, and the KR steering committee. She received the Young Scientist award by the Science Academy, and the Women Leader of Innovative and Disruptive Technologies award by Microsoft.
21
October
LAIV is attending the Scottish Seminar on Programming Language Semantics
28
October
9.00 GMT
(AISEC talk) Guy Katz (The Hebrew University of Jerusalem).
Title: Verifying Recurrent Neural Networks using Invariant Inference
Video recording
Abstract: Deep neural networks are revolutionizing the way complex systems are developed. However, these automatically-generated networks are opaque to humans, making it difficult to reason about them and guarantee their correctness. In this work, we propose a novel approach for verifying properties of a widespread variant of neural networks, called recurrent neural networks. Recurrent neural networks play a key role in, e.g., speech recognition, and their verification is crucial for guaranteeing the reliability of many critical systems. Our approach is based on the inference of invariants, which allow us to reduce the complex problem of verifying recurrent networks into simpler, non-recurrent problems. Experiments with a proof-of-concept implementation of our approach demonstrate that it performs orders-of-magnitude better than the prior state of the art.
[Based on an ATVA 2020 paper. Joint work with Yuval Jacoby and Clark Barrett]
Bio: Guy Katz is a senior lecturer at the Hebrew University of Jerusalem, Israel.
He received his Ph.D. at the Weizmann Institute of Science in 2015.
His research interests lie at the intersection between Formal Methods and
Software Engineering, and in particular in the application of formal methods to
software systems with components generated via machine learning. https://www.katz-lab.com/
04
November
9.00 GMT
Alasdair Hill (The Lab for AI and Verification, HWU).
Title: Semantics of Programming Languages: applications in AI Planning.
Abstract: In this talk, I will give a general overview of the most popular kinds of semantics of programming languages: declarative and operational, the latter in both big-step and small-step form. I will then show how AI planning languages yield some of these semantics. I will finish by discussing my current challenge of formulating a big-step semantics for AI planning. This work is part of my PhD studies.
11 November, 9.00 GMT
Chih-Hong Cheng (Denso Automotive Deutschland GmbH).
Title: Research Challenges and Opportunities towards Safe Autonomous Driving
Abstract: The race towards autonomous driving has lasted for years. I argue that the chance for the contestants to close the safety gap lies in scientifically rooted engineering methods that can efficiently improve the quality of AI. In particular, I consider a structural approach (via GSN) to argue the quality of Neural Networks (NN) with NN-specific dependability metrics. A systematic analysis by considering the quality of data collection, training, testing, and operation allows us to identify many unsolved research questions: (1) How to create a weaker form of data completeness while admitting the combinatorial explosion of scenarios? (2) How to overcome the specification problem for formal verification, where input as images can be hard to characterize? (3) How to justify that a decision made by the network is supported by prior similarities in the training data? Some of my prior work will be mentioned concisely, with a statement of its limitations.
Bio: Currently, Chih-Hong Cheng is a technical manager at DENSO, researching safety technologies for autonomous driving. Before DENSO, he held research positions at fortiss and ABB Corporate Research. He received his doctoral degree in CS from the Technical University of Munich.
18 November, 14.00 GMT
(AISEC talk) Marco Gaboardi (Boston University, USA).
Title: Formal Verification of Higher-order Probabilistic Programs
Abstract: In this talk I will introduce a program logic for proving properties of probabilistic programs written in an expressive probabilistic higher-order language with continuous distributions and operators for conditioning distributions by real-valued functions. This program logic retains the comfortable reasoning style of informal proofs thanks to carefully selected axiomatizations of key results from probability theory. To show this, I will discuss the use of this program logic in the verification of several examples from statistics, probabilistic inference, machine learning, and differential privacy.
Short Bio: Marco Gaboardi is an assistant professor in the Computer Science department at Boston University. His research is in computer science with a focus on Programming Languages, Formal Methods and Differential Privacy. Website: http://cs-people.bu.edu/gaboardi/
23 November, 12.00 GMT
Masterclass by Sabina Jedrzejczyk
Title: Neural Net Verification with Z3 in Python
Abstract: Neural Networks are increasingly used in image recognition, autonomous systems, speech processing and many more. However, it is also known that, due to their statistical nature, their performance is prone to errors and adversarial attacks. This poses a threat to their safe use in such important domains as social media, transport control, autonomous cars, or robotics. Several methods that verify neural network robustness against errors and adversarial attacks are currently being developed. The Lab for AI and Verification (http://laiv.uk/) at Heriot-Watt University has recently developed a library called “Sapphire” in Python, which allows users to verify neural networks implemented in Python, via Python’s API to the SMT solver Z3. This masterclass focuses on introducing this library to Python users with little or no background in verification.
25 November, 14.00 GMT
(AISEC talk) Grant Passmore (Imandra.ai, USA).
Title: An Introduction to the Imandra Automated Reasoning System
Abstract: Imandra (imandra.ai) is a cloud-native automated reasoning system powering a suite of tools for the design and regulation of complex algorithms. Imandra is finding exciting industrial use: For example, Goldman Sachs is now public with the fact that Imandra is used to design and audit some of their most complex trading algorithms. Foundationally, Imandra is a full-featured interactive theorem prover with a unique combination of features, including: an “executable” logic based on a (pure, higher-order) subset of OCaml (in much the same way that ACL2’s logic is based on a subset of Lisp), first-class computable counterexamples (with a proof procedure that is “complete for counterexamples” in a precise sense), a seamless integration of bounded model checking and full-fledged theorem proving, decision procedures for nonlinear real and floating point arithmetic, first-class state-space decompositions, and powerful techniques for automated induction (including the “lifting” of many Boyer-Moore ideas to our typed, higher-order setting). In this talk, I’ll give an overview of Imandra and we’ll together work many examples. You can follow along and experiment with Imandra in the browser at http://try.imandra.ai/ and install Imandra locally by following the instructions at http://docs.imandra.ai/.
02 December, 12.00 GMT
Marina De Vos, University of Bath
Title: Norms, Policy and Laws: Modelling, Verification and Monitoring
Abstract: Norms, policies and laws all focus on describing the desired behaviour of actors, whether they are humans, software agents or processes. Actors may have the autonomy to decide to adhere to this behaviour or deviate from it. The actor’s actions can be checked against the desired behaviour, with compliance potentially rewarded and violation penalised. This talk explores the use of answer set programming, a declarative programming language, for the modelling, verification and compliance monitoring of these norms, policies, and laws. We consider this both at design time, in terms of specification, debugging and combining models, and at run time, in terms of monitoring, enforcement, and adaptation. Starting from normative multi-agent systems, where the desired behaviour of agents can be described through a set of norms governed by so-called institutions, we look at case studies in law and business process modelling.
Bio: After completing her PhD in Computer Science at the Vrije Universiteit Brussel, Belgium, Marina De Vos joined the Department of Computer Science at the University of Bath, UK. She is a Senior Lecturer and Director of Training for the UKRI Centre for Doctoral Training in Accountable, Responsible and Transparent Artificial Intelligence. Her research area is knowledge representation and reasoning, using answer set programming to model human/agent decision-making. Currently, her work focuses on the modelling, the explanation, and the verification of normative and policy-based reasoning in the areas of legal and socio-technical systems. In these systems, the behaviour of participants, both human and computational agents, is guided by a set of norms/policies that describe expected behaviour. Non-compliance can be monitored and penalised while compliance is rewarded. Through a formal model and corresponding implementation, the behaviour of the entire system can be proven and explained. Beyond normative modelling, she is interested in software development for AI systems in general and logic-based systems more specifically, and in their use in wider society.
09 December, 09.00 GMT
(AISEC talk) Burkhard Schafer, University of Edinburgh
Title: Computational law – mapping the landscape
Abstract: The talk develops two maps – one of the regulatory regime for autonomous cars, the other an overview of methods and approaches to render law computational. It then tries to establish “matches” between these two domains: fields where the properties of the law and the properties of the formal theory match in a promising way.
Bio: B. Schafer is a Professor of Computational Legal Theory at the University of Edinburgh. He is a director and co-founder of the SCRIPT Centre for IT and IP law, the UK’s oldest interdisciplinary research centre at the intersection of law and technology, funded initially through two large AHRC grants. He is also Co-Director of the Joseph Bell Centre for Forensic Statistics and Legal Reasoning, which since its inception in 1999 has brought together mathematicians, computer scientists and lawyers. He has published over 120 papers on the interaction between law and technology, twice winning the LexisNexis best paper award of the international conference on informatics IRIS. He has been PI or Co-I on over 15 UK and EU grants. He is a member of the Data Ethics group of the Turing Institute, a member of the expert group on AI4Society, and a member of the Law and AI steering group of the Gesellschaft Fuer Informatik. He has acted as an advisor for the governments of the UK, Scotland, Germany, the US and the EU.
16 December, 12.00 GMT
LAIV Christmas Event

June — September 2020

Date Topics
03 June Dorian Gouzou. Particle Swarm Optimisation in Machine Learning
10 June Hugo Cousin. High-Performance Analysis for Social Networks in Rust
17 June, 10.00 PhD Progression talk: Fazl Barez
System 3: Learning with constraints and domain knowledge.
17–19 June
LAIV at PLDI’20.
24 June Invited talk: John Woodward ABBA: Automatically Building Better Algorithms
Abstract: We will look at how to automatically build better algorithms. We begin by manually identifying a small component of a human-designed algorithm. We then target this piece of code with different metaheuristics (random search, hill-climbing, and evolutionary algorithms). In each case the resulting algorithm is an improvement over the pre-existing algorithm. We will look at a few applications: 1/ probability distributions for use in an optimization algorithm 2/ mutation operators for genetic algorithms 3/ debugging code overnight – a real-world application.
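The loop described above (pick a small algorithmic component, then let a meta-level search look for a better variant of it) can be sketched in a few lines. This is a toy illustration rather than Woodward's actual system: the tuned "component" is just the mutation rate of a bit-flip hill-climber on a OneMax-style problem, and all constants are invented.

```python
import random

random.seed(0)

def run_search(mutation_rate, n_bits=20, steps=200):
    """Base algorithm: a bit-flip hill-climber on OneMax; returns final fitness."""
    bits = [0] * n_bits
    for _ in range(steps):
        cand = [b ^ (random.random() < mutation_rate) for b in bits]
        if sum(cand) >= sum(bits):   # accept the candidate if it is no worse
            bits = cand
    return sum(bits)

def tune(rates, trials=20):
    """Meta-level search over the targeted component (the mutation rate)."""
    return max(rates, key=lambda r: sum(run_search(r) for _ in range(trials)))

best = tune([0.01, 0.05, 0.2, 0.5])
print(best)  # the rate whose tuned hill-climber scores best on average
```

Replacing the exhaustive sweep in `tune` with random search, hill-climbing, or an evolutionary algorithm gives the three metaheuristics mentioned in the abstract.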
1-4 July LAIV at IJCAR’20
8 July Reading Group: PaRoT: A Practical Framework for Robust Deep Neural Network Training, by FiveAI
We will consider this paper. Marco Casadio will lead the seminar.
15 July CEC 2020 presentation by Wenjun Wang:
Title: Evolutionary Learning for Soft Margin Problems: A Case Study on Practical Problems with Kernels
Abstract: This paper addresses two practical problems: the classification and prediction of properties for polymer and glass materials, as a case study of evolutionary learning for tackling soft margin problems. The presented classifier is modelled by support vectors as well as various kernel functions, with its hard restrictions relaxed by slack variables to be soft restrictions in order to achieve higher performance. We have compared evolutionary learning with traditional gradient methods on standard, dual and soft margin support vector machines, built by polynomial, Gaussian, and ANOVA kernels. Experimental results for data on 434 polymers and 1,441 glasses show that both gradient and evolutionary learning approaches have their advantages. We show that within this domain the chosen gradient methodology is beneficial for standard linear classification problems, whilst the evolutionary methodology is more effective in addressing highly non-linear and complex problems, such as the soft margin problem.
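The soft-margin idea in the abstract (hard constraints relaxed by slack variables so that some points may violate the margin) can be illustrated with a tiny linear SVM trained by hinge-loss subgradient descent, one of the gradient baselines the paper compares against. The four data points and all hyperparameters below are made up for the sketch:

```python
# Toy soft-margin linear SVM: minimise hinge loss + L2 regulariser by
# per-sample subgradient descent. Margin < 1 means the slack is active.
data = [((1.0, 2.0), +1), ((2.0, 3.0), +1), ((-1.0, -1.5), -1), ((-2.0, -1.0), -1)]
w, b, lam, lr = [0.0, 0.0], 0.0, 0.01, 0.1

for epoch in range(100):
    for (x1, x2), y in data:
        margin = y * (w[0] * x1 + w[1] * x2 + b)
        if margin < 1:               # inside the soft margin: hinge term pushes
            w[0] += lr * (y * x1 - lam * w[0])
            w[1] += lr * (y * x2 - lam * w[1])
            b += lr * y
        else:                        # outside: only the regulariser shrinks w
            w[0] -= lr * lam * w[0]
            w[1] -= lr * lam * w[1]

print(all(y * (w[0] * x1 + w[1] * x2 + b) > 0 for (x1, x2), y in data))  # True
```

The kernelised and evolutionary variants compared in the paper would replace the inner product and the update rule respectively.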
19 July
LAIV attends Workshop on Formal Methods for ML-Enabled Autonomous Systems (FoMLAS) Details.
21 July
LAIV attends Workshop on Verification of Neural Networks (VNN’20) and Verification of Neural Networks Competition (VNN-COMP’20) Details.
22 July
13.00 — 18.00
LAIV attends SPLS: Scottish Programming Languages Seminar Details.
03-20 August LAIV attends SPLV: Scottish Summer School on Programming Languages and Verification Details.
19 August, 13.00 — 14.30
Vaishak Belle (Edinburgh University)
Title: Logical Interpretations of Autoencoders.
Abstract: The unification of low-level perception and high-level reasoning is a long-standing problem in artificial intelligence, which has the potential to not only bring the areas of logic and learning closer together but also demonstrate how abstract concepts might emerge from sensory data. Precisely because deep learning methods dominate perception-based learning, including vision, speech, and linguistic grammar, there is fast-growing literature on how to integrate symbolic reasoning and deep learning. Broadly, efforts seem to fall into three camps: those focused on defining a logic whose formulas capture deep learning, ones that integrate symbolic constraints in deep learning, and others that allow neural computations and symbolic reasoning to co-exist separately, to enjoy the strengths of both worlds. In this paper, we identify another dimension to this inquiry: what do the hidden layers really capture, and how can we reason about that logically? In particular, we consider variational autoencoders that are widely used for dimensionality reduction and inject a symbolic generative framework onto the feature layer. This allows us, among other things, to generate example images for a class to get a sense of what was learned. Moreover, the modular structure of the proposed model makes it possible to learn relations over multiple images at a time, as well as handle noisy labels. Our empirical evaluations show the promise of this inquiry.
* this is joint work with Anton Fuxjaeger & will appear at ECAI 2020
26 August, 13.00 — 14.30
Reading group: Adversarial training and provable defenses: Bridging the gap by Balunovic et al., Link to the paper
Abstract: We present COLT, a new method to train neural networks based on a novel combination of adversarial training and provable defenses. The key idea is to model neural network training as a procedure which includes both the verifier and the adversary. In every iteration, the verifier aims to certify the network using convex relaxation while the adversary tries to find inputs inside that convex relaxation which cause verification to fail. We experimentally show that this training method, named convex layerwise adversarial training (COLT), is promising and achieves the best of both worlds – it produces a state-of-the-art neural network with certified robustness of 60.5% and accuracy of 78.4% on the challenging CIFAR-10 dataset with a 2/255 L∞ perturbation. This significantly improves over the best concurrent results of 54.0% certified robustness and 71.5% accuracy.
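The certification half of the loop (the verifier bounding the network's outputs over a convex input region) can be illustrated in its coarsest form: interval bound propagation through one linear-plus-ReLU layer. COLT itself uses a tighter layerwise relaxation; the weights and input box here are invented for the sketch.

```python
# Certified output bounds for an input box through y = relu(W x + c):
# for each end of the output interval, pick the worst input corner per weight.
W = [[1.0, -2.0], [0.5, 1.0]]      # layer weights (invented)
bias = [0.1, -0.3]
lo, hi = [-0.1, 0.2], [0.1, 0.4]   # input box, e.g. an L-infinity ball

out_lo, out_hi = [], []
for row, c in zip(W, bias):
    low = c + sum(w * (lo[j] if w >= 0 else hi[j]) for j, w in enumerate(row))
    high = c + sum(w * (hi[j] if w >= 0 else lo[j]) for j, w in enumerate(row))
    out_lo.append(max(0.0, low))   # ReLU is monotone: apply it to both ends
    out_hi.append(max(0.0, high))

print(out_lo, out_hi)
```

If a property holds for every output between `out_lo` and `out_hi`, it is certified for the whole input box; the adversary in COLT then searches inside the relaxation for inputs that break certification.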
02 September, 12.00 — 13.00
Katya Komendantskaya (HWU)
Title: Continuous Verification of Machine Learning: a Declarative Programming Approach
Abstract: I will discuss the state of the art in neural network verification. I will discuss the notion of “continuous verification”, which characterises a family of methods that explore the continuous nature of machine learning algorithms. I will argue that methods of continuous verification must rely on robust programming language infrastructure (refinement types, automated proving, type-driven program synthesis), which provides a major opportunity for the declarative programming language community.
* this is joint work with Wen Kokke and Daniel Kienitz & will appear as invited talk at PPDP’2020.
07 September, 11.30 — 12.30
Alasdair Hill (HWU)
Title: Proof-Carrying Plans: a Resource Logic for AI Planning
Abstract: Planning languages have been used successfully in AI for several decades. Recent trends in AI verification and Explainable AI have raised the question of whether AI planning techniques can be verified. In this paper, we present a novel resource logic, the Proof Carrying Plans (PCP) logic, that can be used to verify plans produced by AI planners. The PCP logic takes inspiration from existing resource logics (such as Linear logic and Separation logic) as well as Hoare logic when it comes to modelling states and resource-aware plan execution. It also capitalises on the Curry-Howard approach to logics, in its treatment of plans as functions and plan pre- and post-conditions as types. We present two main results. From the theoretical perspective, we show that the PCP logic is sound relative to the standard possible world semantics used in AI planning. From the practical perspective, we present a complete Agda formalisation of the PCP logic and of its soundness proof. Moreover, we showcase the Curry-Howard, or functional, value of this implementation by supplementing it with a library that parses AI plans into Agda proofs automatically. We provide an evaluation of this library and the resulting Agda functions.
* this is joint work with Katya Komendantskaya and Ron Petrick & will appear as regular talk at PPDP’2020.
8-10 September
LAIV attends PPDP’20: 22nd International Symposium on Principles and Practice of Declarative Programming. Details
16 September, 12.00 — 13.30
Katya Komendantskaya (HWU)
Title: The New Normal: We Cannot Eliminate Cuts in Coinductive Sequent Calculi, but we Can Explore Them.
Abstract: In sequent calculi, cut elimination is a property that guarantees that any provable formula can be proven analytically. For example, Gentzen’s classical and intuitionistic calculi LK and LJ enjoy cut elimination. The property is less studied in coinductive extensions of sequent calculi. In this paper, we use coinductive Horn clause theories to show that cut is not eliminable in a coinductive extension of LJ, a system we call CLJ. We derive two further practical results from this study. We show that CoLP by Gupta et al. gives rise to cut-free proofs in CLJ with fixpoint terms, and we formulate and implement a novel method of coinductive theory exploration that provides several heuristics for discovery of cut formulae in CLJ.
* this is joint work with Dmitry Rozplokhas and Henning Basold & will appear in ICLP’2020.
18-24 September
LAIV attends ICLP’20: The 36th International Conference on Logic Programming. Details
23 September, 12.00 BST
(AISEC talk) Lu Yu
Title: Semantic Representation: From Color to Deep Embeddings
Abstract: One of the fundamental problems of computer vision is to represent images with compact semantically relevant embeddings. These embeddings could then be used in a wide variety of applications, such as image retrieval, object detection, and video search. The main objective of this presentation is to study image embeddings from two aspects: color embeddings and deep embeddings.
30 September, 12.00
(AISEC talk) Scott McLachlan
Title: Lawmaps: Enabling Legal AI development through Visualisation of the Implicit Structure of Legislation and Lawyerly Process

March — May 2020

Date Topics
25 March
Masterclass on Neural Net Verification with Z3 in Python/Jupyter notebooks.
By Ashwin Rajendran.
1 April Security Research talk, tutorial by Sasha Radomirovich.
8 April PhD Progression talk by Kirsty Duncan.
Title: Relative Robustness of Quantized Neural Networks Against Adversarial Attacks
15 April PhD Progression talk by Alasdair Hill. Title: AI Planning in Separation Logic
22 April PhD Progression talk by Daniel Kienitz.
Title: Learning disentangled manifolds for robust classification
29 April Reading Group: Auto-encoders for Neural Network Verification, led by Daniel Kienitz.
We will consider the paper Generalized Denoising Auto-Encoders as Generative Models.
06 May Fraser Garrow.
Title: Genetic algorithm improvement for accurate and robust neural networks
11 May, 17.00 — 24.00
LAIV attends NASA Formal methods conference, Safe AI Workshop, link.
12 May, 16.00 — 24.00
LAIV attends NASA Formal methods conference, link.
13 May, 12.15 — 13.30 Invited talk: Guy Katz, Hebrew University. Safety in AI Systems: Verification of Deep Neural Networks. Slides.
Abstract: Deep neural networks have emerged as a widely used and effective means for tackling complex, real-world problems. However, a major obstacle in applying them to safety-critical systems is the great difficulty in providing formal guarantees about their behavior. We present an SMT-based technique for formally verifying properties of deep neural networks (or providing counter-examples). The technique can be used to prove various safety properties of neural networks, or to measure a network’s robustness to adversarial inputs. Our approach is based on the simplex method, extended to handle piecewise-linear constraints, such as Rectified Linear Units (ReLUs), which are a crucial ingredient in many modern neural networks. We evaluate our technique on a prototype deep neural network implementation of the next-generation Airborne Collision Avoidance System for unmanned aircraft (ACAS Xu).
Brief Bio: Guy Katz is a senior lecturer at the Hebrew University of Jerusalem, Israel. He received his Ph.D. at the Weizmann Institute of Science in 2015. His research interests lie at the intersection between Formal Methods and Software Engineering, and in particular in the application of formal methods to software systems with components generated via machine learning. https://www.katz-lab.com/
13 May, 16.00 — 24.00
LAIV attends NASA Formal methods conference, link.
14 May, 16.00 — 24.00
LAIV attends NASA Formal methods conference, link.
20 May, 12.15 — 13.30 Invited talk: Iain Whiteside, Five AI. Hard Cases, Edge Cases, and Head Cases: Challenges in AI Safety for Autonomous Vehicles
Abstract: We first elaborate on some of the practical challenges for demonstrating safety of a perception system for Level 4 self-driving. We then outline a methodology and set of technologies that we believe can help us properly argue for the safety of AI components for an AV stack. We will exemplify the methodology with some real-world examples from Five’s technology.
Brief Bio: Iain is a Principal Scientist at Five and leads the Safety Assurance team. Prior to working at Five, Iain was a researcher into the future of assurance cases at NASA Ames Research Center, leading the development of the AdvoCATE toolkit — the state of the art for assurance case construction. Iain has a PhD in formal verification of software from the University of Edinburgh and has been active in the verification and safety community for over a decade.
27 May Bartosz Schatton. Explainability versus Robustness for AI (with applications to medical imaging). Slides.
28 May, 9.00 — 12.30
LAIV attends TEASE-LP: Workshop on Trends, Extensions, Applications and Semantics of Logic Programming, link.
29 May, 14.30 — 18.30
LAIV attends TEASE-LP: Workshop on Trends, Extensions, Applications and Semantics of Logic Programming, link.

January — March 2020

Date Topics
15 January Reading Group: DL2: Training and Querying Neural Networks with Logic
Presented by Daniel Kienitz.
22 January No seminar
29 January No seminar
05 February Z3 API in Python for Verification of Neural Networks,
tutorial by Katya Komendantskaya. Sources
12 February DSG Seminar: Efficiency of Distributed Compilation Platforms,
tutorial by Alexander Lyon
19 February Liquid Haskell for verification of Neural Networks, tutorial by Wen Kokke
26 February Implementing verification constraints as loss functions for neural networks: overview of DL2 code, by Marco Casadio and Kirsty Duncan
04 March Introduction to Explainable AI using LIME, reading group led by Alexandre Cardaillac. Information about LIME and the code are available here.
06 March, 13.00 — 14.00 LAIV Lunch in Celebration of LAIV’s 1st year Anniversary.
11 March Reinforcement learning for adversarial attack and defense. By Vincent Larcher.
18 March Was: Scottish Programming Language Seminar (SPLS) at Stirling. Wen Kokke is giving a talk on Verification of Neural Nets in F*. SPLS is now postponed until further notice.
Instead: LAIV Group on-line meeting, testing our Zoom connectivity: https://us04web.zoom.us/j/626010005
25 March Masterclass on Neural Net Verification with Z3 in Python/Jupyter notebooks: improvement and elaboration of Katya’s demo given on the 5th February. By Ashwin Rajendran. Zoom meeting: https://zoom.us/j/736667631

September — December 2019

Date Topics, September — December 2019
24 September A planning meeting, welcome to new members.
Discussion of Logical relations tutorial, Chapter 1. Room EMG26
01 October Reading Group: Logical Relations as a proof method
presented by Alasdair Hill. Chapter 2 in the tutorial. Room EMG26.
08 October Reading Group: Logical Relations as a proof method
presented by Alasdair Hill. Chapter 2 in the tutorial. Room EMG26.
15 October Reading Group: Probabilistic Semantics for RoboChart: A Weakest Completion Approach, by Jim Woodcock
Presented by Katya Komendantskaya.
22 October Reading Group: Probabilistic Model Checker PRISM, following the tutorial.
Presented by Daniel Kienitz.
30 October Scottish Programming Language Seminar: https://spls-series.github.io/meetings/2019/october/
5 November Reading Group: Yale Wayang data set for a data mining project
Presented by Hannah Cohen.
12 November Reading Group: Probabilistic Model Checker PRISM – 2
Presented by Daniel Kienitz.
19 November Amazon Research Day at Edinburgh University
03 December Reading Group: DL2: Training and Querying Neural Networks with Logic
Presented by Katya Komendantskaya.
10 December, Room EMG61, 13.00 Reading Group: DL2: Training and Querying Neural Networks with Logic
Presented by Katya Komendantskaya.
12 December Scottish Theorem Proving seminar at ICMS

May — August 2019

Date Topics, May — August 2019
01 May A planning meeting, welcome to new members
08 May Adversarial Examples generation (for Neural networks in the Keras library): a demo by Fraser Garrow.
Pierre Le Hen: Testing the DVL method of Neural network verification: a progress report.
15 May Daniel Kienitz: Genetic Algorithms and Verification of Neural nets: a progress report

Pascal Bacchus: Hardware approach to Neural net programming: a progress report
22 May Dmitry Rozplokhas:
Title: miniKanren and its formal semantics
Abstract: miniKanren is a small logic programming language which aims to be a light-weight and purely declarative alternative to Prolog. The main field of application for miniKanren is “relational programming” — the technique of writing programs as mathematical relations rather than mathematical functions. Programs written in this way have a few interesting features, such as an ability to be “run backwards” (synthesize arguments from the result).
In the first part of the talk, I am going to give a short overview of the language. We will discuss its advantages and disadvantages, look at some simple examples of programs in miniKanren as well as state-of-the-art applications of relational programming. In the second part of the talk, I will describe the project I am currently involved in: providing formal semantics — rigorous mathematical description — to miniKanren programs and using it to prove some properties of program execution in the language.
Note the change in time and room: the seminar will take place at 13.00 in Room 1.82.
29 May Feedback session on Semester 1 MSc reports: writing about Neural Net verification. This session is rescheduled.
05 June General meeting and discussion of paper choices for the reading group.
11 June, 11.00, EM183 Josh Ko, National Institute for Informatics, Japan.
https://josh-hs-ko.github.io/
Title: Streaming Metamorphisms in Agda
Abstract: A metamorphism is a fold followed by an unfold, the former consuming a data structure to compute an intermediate value and the latter producing a codata structure from that value. Bird and Gibbons considered streaming metamorphisms on lists, which, under a commutativity condition, can possibly start producing the output without having to consume all the input. In Agda, we can encode a metamorphic specification in a type such that any program with that type correctly implements the metamorphism. In addition, for streaming metamorphisms, type information can help us find out the commutativity condition as we program the streaming algorithm in Agda’s interactive development environment.
12 June LAIV is meeting academic visitors from Edinburgh University.
14 June, 12.00, PG305 Marta Vallejo, HWU
Title: Parkinson’s disease diagnosis using Convolutional Neural Networks and figure-copying tasks
Abstract: Parkinson’s disease (PD) is a chronic neurodegenerative disorder, which leads to a range of motor and cognitive impairments. An accurate PD diagnosis is a challenging task since its signs and symptoms are very similar to other related diseases such as normal ageing and essential tremor. This work aims to automate the PD diagnosis process by using a Convolutional Neural Network, a type of deep neural network architecture. To differentiate between healthy and PD patients, our approach focuses on discovering deviations in patients’ motor capabilities with the use of drawing tasks. Since different data representations may capture different aspects of this disease, this work also explores which drawing task, copying a cube or a pentagon, is more effective in diagnosing PD.
19 June Reading group:
Adversarial Examples – A Complete Characterisation of the Phenomenon: https://arxiv.org/abs/1810.01185
26 June Reading group:
Adversarial Examples – A Complete Characterisation of the Phenomenon: https://arxiv.org/abs/1810.01185
03 July, 12-13.30, EM1.82. PhD progression talk 1. Amani Benhalem.
Title: Artificial Neural Network – Guided Particle Swarm Optimisation

Abstract: Many well-known machine learning algorithms are inspired by nature. For example, swarm intelligence techniques are inspired by animal flock behaviours; artificial neural networks are inspired by networks of neurons in an animal brain; genetic algorithms are inspired by natural evolution. Even though these techniques are inspired by nature, we can apply them to solve many artificial problems. We can also improve the performance of one of them by combining it with another technique that is behaviourally different. Our work presents the idea of improving the performance of Particle Swarm Optimisation (PSO) by combining it with Artificial Neural Networks (ANNs). The motivation is that PSO uses very simple rules to move the swarm towards the target, and adding ANN features to the PSO equations will make individuals in the swarm more intelligent.
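For readers unfamiliar with the baseline being improved, standard PSO fits in a few lines. This sketch minimises f(x) = x² in one dimension; the swarm size, inertia and attraction constants are common illustrative choices, not values from the talk:

```python
import random

random.seed(1)

def pso(f, n=10, steps=100, w=0.7, c1=1.5, c2=1.5):
    """Minimise f over [-10, 10] with a basic particle swarm."""
    xs = [random.uniform(-10, 10) for _ in range(n)]
    vs = [0.0] * n
    pbest = xs[:]                  # each particle's best position so far
    gbest = min(xs, key=f)         # best position seen by the whole swarm
    for _ in range(steps):
        for i in range(n):
            # The "very simple rules": inertia plus random pulls towards
            # the personal best and the global best.
            vs[i] = (w * vs[i]
                     + c1 * random.random() * (pbest[i] - xs[i])
                     + c2 * random.random() * (gbest - xs[i]))
            xs[i] += vs[i]
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i]
        gbest = min(pbest, key=f)
    return gbest

best = pso(lambda x: x * x)
print(best)  # close to 0, the minimiser
```

The proposed hybrid would replace or augment the velocity update with an ANN-derived term.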

PhD progression talk 2. Alasdair Hill.
Title: Formal Verification of AI Planning Languages

Abstract: Verification of Artificial Intelligence is becoming increasingly important due to AI’s ever-growing presence across society, from facial identification and financial predictions to self-driving vehicles. The correctness of AI is therefore inextricably linked to safety and security in the real world. AI planning is used by robots to create plans to accomplish real-world tasks. The planning is done in a virtual world model where the robot is given a set of actions, an initial state and a goal state, and the robot has to compose a plan of sequential actions to reach the goal state. Before any robot carries out a plan in real life, we want to ensure its validity. Whilst current planners based on languages such as PDDL run tests to check these plans, they do not supply any proof evidence that the plans are correct. To be truly confident about an AI-generated plan, we need plans to carry proof evidence that can be checked in some formal system. In this talk I will introduce and discuss two formal systems for the verification of AI plans.

PhD progression talk 3. Abdullah Altawairqi.
Title: Exploring the Modeling of Attack Strategies for STPA.

Abstract: Security system analysis and safety system analysis have in common the identification of circumstances that could threaten the system’s functions or integrity. While safety is concerned with avoiding accidents, security analysis aims to prevent the system from suffering from intentional acts. In security analysis, the modelling of an attacker’s intention is done by identifying security targets which, in combination with the identification of the attack surface, helps to specify the system defences to build. Systems-Theoretic Process Analysis (STPA), the hazard analysis technique of the Systems-Theoretic Accident Model and Processes (STAMP), reasons about the system in terms of its control loops and derives the safety constraints of the system by identifying how the entities of a control loop could malfunction or miss their goal. STAMP integrates a notion of causal factors to elicit safety properties of complex systems. Such modelling is intrinsically focused on the system’s point of view, while a security analysis needs to understand the attacker’s intentions and capabilities. Attack Trees are a graphical representation for modelling and analysing potential attack strategies, and can be extended to consider defensive patterns in Attack Defence Trees (ADTrees). In this talk we explore the possibilities of combining STPA analysis and ADTree modelling to strengthen a system security analysis. The approach would help to identify attack intentions and capabilities in order to prioritise and reduce the scope of the analysis.
10 July Reading group:
Adversarial Examples – A Complete Characterisation of the Phenomenon: https://arxiv.org/abs/1810.01185
17 July Presentation and discussion of MSc thesis results: Pierre Le Hen
24 July Presentation and discussion of MSc thesis results: Daniel Kienitz
31 July Presentation and discussion of MSc thesis results: Pascal Bacchus

January — April 2019

Date Topics, January — March 2019
9 Jan Semester 2 Introduction.
Reading Seminar: In the next 6 sessions, we will cover
Safety Verification of Deep Neural Networks,
by Xiaowei Huang, Marta Kwiatkowska, Sen Wang and Min Wu
23 Jan Reading Seminar: Introduction and general set-up
30 Jan Reading Seminar:
Background definitions, regions
6 Feb Reading Seminar:
Verification Framework
13 Feb Reading Seminar:
Feature Decomposition and Discovery
20 Feb Reading Seminar:
Feature Decomposition and Discovery
28 Feb Reading Seminar:
Selection of Regions and Manipulations
6 March Reading Seminar:
Checking the verification properties in an SMT solver
18 March LAIV Masterclass: Survey of Deep Learning: Methods and Applications,
by Javier Garcia
21 March LAIV Masterclass: Convolutional Neural nets, by Mario Vasilev
27 March LAIV Masterclass: Keras and Adversarial example generation,
by Fraser Garrow