We are opening the seminar series SAIR: Safe and Secure AI for Robotics. The inaugural lecture will be given by Prof Ram Ramamoorthy of the University of Edinburgh, on “Safety and Trustworthiness of Human-Centred Autonomous Systems”. More details are available here: https://laiv.uk/sair-safe-and-secure-ai-for-robotics/
Talks on Formal Methods for Robotics
Over the next two weeks, we are hosting two great talks on applications of formal methods in robotics: https://laiv.uk/laiv-seminars/. If you are in the Edinburgh area, join us at Heriot-Watt or Edinburgh University. Otherwise, join online!
New DAIR CDT Course on Safe AI
The LAIV group has started teaching an AI Safety course in the new DAIR CDT: the Centre for Doctoral Training in Dependable and Deployable AI for Robotics. Ekaterina, Colin and Marco teach relevant aspects of verification to Robotics and AI PhD students within this CDT.
LAIV Group Contributes a VNN-COMP Benchmark
We submitted a new NLP benchmark to the International Competition on Neural Network Verification (VNN-COMP), and it placed second in the community vote to select the benchmarks included in the competition. This is the first ever NLP benchmark at VNN-COMP, and inclusion in the selected benchmark list is considered prestigious! The benchmark is the result of the following research paper:
Marco Casadio, Tanvi Dinkar, Ekaterina Komendantskaya, Luca Arnaboldi, Omri Isac, Matthew L. Daggitt, Guy Katz, Verena Rieser, Oliver Lemon:
NLP Verification: Towards a General Methodology for Certifying Robustness.
Many congratulations to all co-authors!
Accepted paper at ITP’24
Many congratulations to Natalia Ślusarz, and her supporting collaborators and supervisors, for having the following paper accepted at ITP’24:
Taming Differentiable Logics with Coq Formalisation
Reynald Affeldt, Alessandro Bruni, Ekaterina Komendantskaya, Natalia Ślusarz, Kathrin Stark https://arxiv.org/abs/2403.13700
Verification for NLP — new talk
Katya and Marco gave a talk, “Mind Your Language: How to Make LLM Applications Trustworthy”, at the TAS Governance Node at the University of Edinburgh. The pre-print related to the talk can be found here.
New Centre for Doctoral Training — Dependable and Deployable AI for Robotics
A team of Robotics, AI and Verification experts at Heriot-Watt and Edinburgh Universities has secured funding to train a cohort of PhD students in “Dependable and Deployable AI for Robotics”. The press release can be found here. It is an honour for LAIV to be part of this new chapter in the development of the Edinburgh Centre for Robotics!
Best Short Paper Award — Remi Desmartin
Many congratulations to Remi Desmartin and Omri Isac (and their team of co-authors) for receiving the “Best Short Paper” award at LOPSTR 2023 this week, for the paper:
Remi Desmartin, Omri Isac, Grant O. Passmore, Kathrin Stark, Ekaterina Komendantskaya, Guy Katz:
Towards a Certified Proof Checker for Deep Neural Network Verification. LOPSTR 2023: 198-209
PhD defense congratulations
Many congratulations to Dr Daniel Kienitz, who successfully passed his viva on “The Influence of Geometric Properties of Data Distributions on Artificial Neural Networks” on 26 October 2023!
Welcome to new PhD students
A huge welcome to our new PhD students: Ben Coke, who will be working on Verification of Neural Networks in Engineering Applications (as part of an iCASE studentship led by SLB Cambridge), and Colin Kessler, who will be working in the Edinburgh Centre for Robotics on Safe Design of Airborne Autonomous Systems with Neural Network Components.