EPSRC-funded project: AISEC — AI Secure and Explainable by Construction:
Multiple Research Positions (3 Doctoral, 5 Post-doctoral)
available at Heriot-Watt, Edinburgh and Strathclyde Universities, Scotland, UK.
Start date: 1 September 2020; End date: 30 August 2023
Postdoctoral Salary Scale: £31,866 to £40,322 per annum
PhD funding: covering PhD fees and stipend for 3.5 years
We encourage interested applicants to contact us informally ASAP.
COVID-19 update: we are monitoring the situation and will consider postponing the start of the project by up to 6 months, should the COVID-19 situation call for such a measure. We ask potential applicants to contact us to discuss any concerns directly.
AI applications have become pervasive: from mobile phones and home appliances to stock markets, autonomous cars, robots, and drones. Each application domain comes with a rich set of requirements such as legal policies, safety and security standards, company values, or simply public perception. AISEC aims to build a sustainable, general-purpose, multi-domain methodology and development environment for policy-to-property secure and explainable by construction development of complex AI systems.
This project will employ types with supporting lightweight verification methods (such as SMT solvers) in order to create and deploy a novel framework for documenting, implementing and developing policies for complex deep learning systems. Types will serve as a unifying mechanism to embed security and safety contracts directly into programs that implement AI. The project will produce an integrated development environment with infrastructure to cater for different domain experts: from lawyers and security experts to verification experts and system engineers designing complex AI systems. It will be built, tested and used in collaboration with industrial partners in two key AI application areas: autonomous vehicles and natural language interfaces (aka chatbots).
The project spans several subjects: type theory, automated and interactive theorem proving, security, AI and machine learning, autonomous systems, natural language processing and generation, and legal aspects of AI. It will cover two main application areas, autonomous cars and chatbots, drawing on expertise and infrastructure provided by industrial partners working in these two areas. AISEC has a significant international span, with 12 partners from academia and industry in Europe (France, Germany, Israel, the Netherlands, Norway) and the US. Researchers joining this project will have excellent opportunities to travel to international conferences, organise scientific events, spend time with industrial partners, collaborate with academic leaders in the field, develop their own research profiles, and gain experience in other AI and CS disciplines.
We will be looking to employ five enthusiastic early-career researchers with the following backgrounds:
- Types, Provers, Programming languages for AI Verification with applications in autonomous vehicles (2 positions)
A PhD degree in Computer Science, Mathematics or Engineering is essential, as are excellent programming skills and some knowledge of either functional programming or automated theorem provers. Leading researchers: Robert Atkey, University of Strathclyde, and Ekaterina Komendantskaya, Heriot-Watt University.
- Security for AI, with applications in autonomous vehicles and chatbots (1 position)
A PhD degree in Computer Science or a related area is essential, with experience of cyber security foundations and applications. The research strand will involve formulating security properties precisely and constructing attack and defence models in natural and formal languages. Leading researcher: David Aspinall, Edinburgh University.
- Verification for Natural Language Processing and Generation, with applications in chatbots (1 position)
A PhD degree in Computer Science/AI or a related area is essential; a background in Deep Learning and Natural Language Processing is desirable, as is knowledge of neural methods for Natural Language Generation and Response Generation in Dialogue Systems. This post will develop mechanisms to increase the robustness and controllability of conversational agents, using techniques such as adversarial learning and combining neural and symbolic representations. Leading researcher: Verena Rieser, Heriot-Watt University.
- Legal Aspects of AI, Law for AI and AI Verification, with applications in autonomous vehicles and chatbots (1 position)
Essential: either a PhD in Law (IT law, software liability or legal technology) with demonstrable practical knowledge of computer technology, or a PhD in Computer Science/AI with demonstrable experience of applying AI to the legal domain and strong knowledge of technology regulation. This research strand has two complementary aspects. First, it will analyse the legal responses to attacks on AI systems and the legal demands on, and consequences of, certifiable security. Second, the post holder will work with the other RAs on implementing legal regulation in code, providing the relevant legal domain knowledge. Leading researcher: Burkhard Schafer, Edinburgh University.
The following interdisciplinary PhD projects will be available:
- Verification of recurrent neural networks for natural language generation. PhD Supervisors: Rieser and Komendantskaya, Heriot-Watt University
- Defining and implementing regulatory responses to security threats against AI. PhD Supervisors: Aspinall and Schafer, Edinburgh University
- Probabilistic dependent type theory for verified machine learning. PhD Supervisor: Atkey, University of Strathclyde
For PhD applications, an MSc or first-class BSc degree in one of the following subjects is essential: Computer Science, Computational Logic, AI, Machine Learning, Natural Language Processing/Generation, Security, Mathematics, Law, or Engineering.
For PhD positions, contact the prospective supervisors (listed above) directly.
For RA positions, apply via one of the University portals (missing links will be added shortly):
Research assistant in AI Verification, Heriot-Watt University. Closing date: 8 June 2020
Research assistant in Type-theoretic methods for AI, Strathclyde University. Closing date: TBA, tentatively July 2020
Research assistant in Security for AI, Edinburgh University. Closing date: 30 June 2020
Research assistant in Verification of NLP and Spoken Dialogue Systems, Heriot-Watt University. Closing date: 1 June 2020
Research assistant in Legal Aspects of AI, Edinburgh University. Closing date: 9 June 2020
Relocating to Scotland:
- Vibrant, world-class research environment:
Edinburgh, Heriot-Watt and Strathclyde universities are part of a cluster of excellence in Computer Science and AI in the centre of Scotland. The School of Informatics in Edinburgh is the largest computer science department in Europe. It hosts the University's Academic Centre of Excellence in Cyber Security Research (ACE-CSR), of which Aspinall is Director and Schafer is a member. Heriot-Watt University hosts the Edinburgh Centre for Robotics (ECR), including a Centre for Doctoral Training and the Robotarium, an internationally recognised centre of excellence for AI research; Rieser and Komendantskaya are ECR members. The Lab for AI and Verification (www.LAIV.uk), led by Komendantskaya, is an active consortium of researchers working on a range of interdisciplinary methods for making AI safer. Rieser leads the NLP lab at HWU.
- World-class cultural and nature attractions:
Scotland has much to offer those who love history (from Neolithic sites in the north to a range of medieval castles and Victorian mansions elsewhere) and nature (islands and sandy beaches in the west, the Cairngorms in the centre). See https://www.visitscotland.com/. The cities of Edinburgh and Glasgow are famous for their festivals, theatres, museums and bars, and welcome visitors all year round.
For Further Information: