Kirsty Duncan, Ekaterina Komendantskaya, Robert Stewart, Michael Lones. Relative Robustness of Quantized Neural Networks Against Adversarial Attacks. Accepted for publication and presentation at the International Joint Conference on Neural Networks (IJCNN'20), part of the IEEE World Congress on Computational Intelligence (https://wcci2020.org/), 19-24 July 2020, Glasgow, Scotland.
Vincent Larcher. Generation of Adversarial Attacks on Computer Vision Models using Reinforcement Learning. MSc Dissertation. 2020. Supervisor: E. Komendantskaya.
Bartosz Schatton. Informed Adversarial Examples with Explainable AI and Metaheuristics in Medical Imaging. MSc Dissertation. 2020. Supervisor: E. Komendantskaya.
Frantisek Farka. Proof-Relevant Resolution: The Foundations of Constructive Proof Automation. PhD Dissertation. 2020. Supervisor: E. Komendantskaya.
2019 Conference Papers:
C. Schwaab, E. Komendantskaya, A. Hill, F. Farka, J. Wells, R. Petrick, K. Hammond. Proof-Carrying Plans. PADL 2019 (21st International Symposium on Practical Aspects of Declarative Languages), 14-15 January 2019, Cascais/Lisbon, Portugal.
P. Bacchus. Performance Metrics for Approximate Deep Learning on Programmable Hardware. MSc Thesis, Heriot-Watt University, 2019. Supervisor: R. Stewart.
P. Le Hen. Adversarial Attacks on Neural Networks in Image Processing. MSc Thesis, Heriot-Watt University, 2019. Supervisor: E. Komendantskaya.
D. Kienitz. Robustness of Neural Networks: Understanding the Nature of Adversarial Examples. MSc Thesis, Heriot-Watt University, 2019. Supervisor: E. Komendantskaya.
Y. Li. A Proof-Theoretic Approach to Coinduction in Horn Clause Logic. PhD Thesis, Heriot-Watt University, 2019. Supervisors: E. Komendantskaya and M. Lawson.