Centre for Software Reliability

ICRI-SAVe

Staff and Funding

Principal Investigator: Professor Robin Bloomfield

Co-investigators: Professor Lorenzo Strigini, Dr Peter Popov, Professor Artur Garcez.

Researchers involved in the work: Professor Peter Bishop, Dr Andrey Povyakalo, Dr Kizito Salako, Ms Robab Aghazadeh Chakherlou, Mr Mathew Stewart, Mr Daniel Matvienko-Sikar, and Dr Francesco Terrosi.

Funding: $300,000

Funding Source: Intel Corporation, USA.

Duration: 3 years, September 2019 – September 2022

Project Description

The ICRI-SAVe project (Intel Collaborative Research Institute on Safety of Autonomous Vehicles) was set up by Intel Labs, Germany, to address the following objectives:

  • Structuring of assurance cases with richer semantics, using Assurance 2.0.
  • Statistical inference methods to support confidence in the safety of automated vehicles, using experience from simulated or real operation together with other available evidence, via a Bayesian approach (see the sketch after this list).
  • Probabilistic modelling of the safety of a vehicle, so as to analyse the importance of road hazards in different scenarios on the road and the impact of limited reliability of various subsystems, e.g. perception and safety monitors.
  • Analysis of the role of diversity in the safety of AVs.
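
As a toy illustration of the Bayesian approach in the second objective above, the sketch below (in Python) computes confidence that the probability of failure per mile of an AV lies below a claimed bound, after a number of failure-free miles of operation. The Beta prior, the bound B and the mileage figures are illustrative assumptions, not project data or results.

  from scipy import stats

  # Sketch only: confidence that the probability of failure per mile (pfm)
  # is below a claimed bound B after n failure-free miles, treating each
  # mile as an independent Bernoulli trial with a Beta prior on pfm.
  # All numbers are assumptions for illustration.

  B = 1e-8                     # assumed claimed bound on pfm
  prior_a, prior_b = 0.5, 0.5  # illustrative Jeffreys prior, Beta(0.5, 0.5)

  def confidence_in_bound(n_failure_free_miles):
      # Beta-Bernoulli conjugacy: 0 failures in n trials gives the
      # posterior Beta(prior_a, prior_b + n); confidence in the bound
      # is the posterior probability mass below B, i.e. the CDF at B.
      return stats.beta(prior_a, prior_b + n_failure_free_miles).cdf(B)

  for n in (1e6, 1e7, 1e8, 1e9):
      print(f"{n:.0e} failure-free miles -> P(pfm <= {B:.0e}) = "
            f"{confidence_in_bound(n):.3f}")

Even this simple model shows why operational evidence alone supports only modest claims: confidence in the bound stays low until the failure-free mileage approaches 1/B, which motivates combining operational experience with other sources of evidence, as in the project's publications below.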

The work was undertaken by three senior academics, supported by PhD students funded through the ICRI, by academic colleagues, and by an MSc student.

There were three themes in the project: assurance cases, diversity and defence-in-depth, and systemic risk modelling.

We addressed the research questions from an assurance case perspective to structure and challenge claims and assumptions. The particular techniques we built on are:

  • the “claims, argument, evidence (CAE)” framework that makes clear the chain of reasoning from evidence to top-level safety claim and allows for the separation and identification of inductive and deductive reasoning.
  • mathematical models that explore how to combine and reason about uncertainty based on disparate sources of evidence, e.g. simulated and road testing, architecture, development process. These include in particular Bayesian reasoning plus bounding methods for robustness to weakly supported assumptions.
  • probabilistic modelling techniques, including stochastic activity networks (SANs) and Markov and semi-Markov models, to study the effect of imperfect perception systems and safety monitors on vehicle safety in the presence of road hazards (see the sketch after this list).
  • justification and verification techniques for machine learning (including the derivation of explanatory rules) and the assessment of diversity in architecture, training etc.
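
As a toy illustration of the probabilistic modelling in the third item above, the sketch below builds a three-state discrete-time Markov chain of a vehicle that repeatedly encounters road hazards, with an imperfect perception system backed by an imperfect safety monitor, so that an accident requires both layers to fail on the same hazard. The states and probabilities are illustrative assumptions, not the project's models or parameters.

  import numpy as np

  # Sketch only: per-step probabilities are assumptions for illustration.
  p_hazard = 1e-3     # probability of encountering a road hazard in a step
  p_miss_perc = 1e-2  # perception system misses the hazard
  p_miss_mon = 1e-1   # safety monitor also misses it, given perception missed

  # States: 0 = driving, 1 = hazard handled, 2 = accident (absorbing)
  P = np.array([
      [1 - p_hazard,
       p_hazard * (1 - p_miss_perc * p_miss_mon),  # caught by perception or monitor
       p_hazard * p_miss_perc * p_miss_mon],       # both layers fail -> accident
      [1.0, 0.0, 0.0],                             # handled: resume driving
      [0.0, 0.0, 1.0],                             # accident is absorbing
  ])

  # Probability of reaching the accident state within n steps of driving
  n = 100_000
  print(f"P(accident within {n} steps) = {np.linalg.matrix_power(P, n)[0, 2]:.4f}")

Note that p_miss_mon is a probability conditional on the perception system having already missed the hazard: how far it can be driven below the monitor's unconditional miss rate is precisely a question of diversity between the two layers, the second theme of the project.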

The focus of the work was on the mathematical and probabilistic modelling.

Publications

Bishop, P., Povyakalo, A. & Strigini, L. (2022). Bootstrapping confidence in future safety based on past safe operation. 2022 IEEE 33rd International Symposium on Software Reliability Engineering (ISSRE 2022), 31 Oct - 3 Nov 2022, Charlotte, NC, USA.

Aghazadeh Chakherlou, R., Salako, K. & Strigini, L. (2022). Arguing safety of an improved autonomous vehicle from safe operation before the change: new results. RAIS 2022 2nd International Workshop on Reliability of Autonomous Intelligent Systems, 31 Oct - 3 Nov 2022, Charlotte, NC, USA.

Buerkle, C., Oboril, F., Popov, P. T. & Strigini, L. (2022). Modelling road hazards and the effect on AV safety of hazardous failures. The 25th IEEE International Conference on Intelligent Transportation Systems (IEEE ITSC 2022), 8 Oct - 12 Oct 2022, Macau, China.

Terrosi, F., Strigini, L. & Bondavalli, A. (2022). Impact of Machine Learning on Safety Monitors. In: Trapp, M., Saglietti, F., Spisländer, M., Bitsch, F. (eds) Computer Safety, Reliability, and Security. SAFECOMP 2022. Lecture Notes in Computer Science, vol 13414. Springer, Cham. https://doi.org/10.1007/978-3-031-14835-4_9

Salako, K., Strigini, L. & Zhao, X. (2021). Conservative Confidence Bounds in Safety, from Generalised Claims of Improvement & Statistical Evidence. 2021 51st Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN), pp. 451-462. ISSN 1530-0889. doi: 10.1109/DSN48987.2021.00055

Zhao, X., Salako, K. & Strigini, L. (2020). Assessing Safety-Critical Systems from Operational Testing: A Study on Autonomous Vehicles. Information and Software Technology, 128, 106393. doi: 10.1016/j.infsof.2020.106393

Bloomfield, R. & Rushby, J. (2021). Assurance 2.0: A manifesto. In Mike Parsons and Mark Nicholson (eds), Systems and Covid-19: Proceedings of the 29th Safety-Critical Systems Symposium (SSS’21), pp. 85-108, Safety-Critical Systems Club, York, UK, February 2021. Final draft available as arXiv:2004.10474.

Rushby, J. & Bloomfield, R. (2022). Assessing Confidence with Assurance 2.0. arXiv preprint. https://doi.org/10.48550/arXiv.2205.04522

Popov, P. (2021). Conservative reliability assessment of a 2-channel software system when one of the channels is probably perfect. Reliability Engineering & System Safety, 216, 108008. https://www.sciencedirect.com/science/article/pii/S0951832021005172

Project Collaborators

  1. Karlsruhe Institute of Technology (Germany)
  2. FZI Research Center for Information Technology (Germany)
  3. fortiss GmbH (Germany)
  4. Technical University of Munich (Germany)
  5. Fraunhofer IESE (Germany)
  6. City, University of London (UK)