Centre for Software Reliability Research
Our research is motivated by the increasing dependence of modern society on socio-technical systems for critical tasks in which failure can have very high costs. As a consequence, high levels of dependability (reliability, safety, security, etc.) are required from such systems, and it is important to be able to assess the achieved level of dependability in order to make an informed decision about the degree of risk that such systems pose to society. Over the years, CSR's research has tackled some of the crucial, difficult problems in this area, especially those concerning complex systems subject to systematic failures, mostly in safety-critical systems: from nuclear reactor safety to medical applications to autonomous vehicles.
In 2007, Professor Bev Littlewood received the IEEE Computer Society's Harlan D Mills Award
"for leading research on the application of rigorous probabilistic and statistical techniques in software engineering, particularly in systems dependability".
Specific areas of research and expertise include:
- Probabilistic Assessment
- Assurance Cases and Argumentation
- Confidence in claims
- Socio-technical issues
- Diversity for fault-tolerance
- Software engineering methods
- Critical Infrastructure Protection
- Computer-aided decision making
- Security Assessment
Distinctive aspects of our research, interwoven throughout our activities, include:
- An emphasis on quantitative assessment, based on the use of probabilistic methods
- An interest in how to collect, present, and reason about evidence for dependability
- Concerns with philosophical issues such as confidence and uncertainty in judgements about dependability
- A multidisciplinary approach to deal with both technical and socio-technical aspects of systems
Probabilistic Assessment
To support decisions about accepting systems into service, quantitative assessment of dependability is necessary. Our contributions in this area have included:
- seminal contributions to the area of software reliability growth modelling
- defining the problems in demonstrating very high levels of dependability and the ways available for progress
- the study of Bayesian networks as a means of combining complex evidence
- the study of diversity (see below)
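As a toy illustration of the reliability growth idea (a sketch only, not CSR's published models), the classic Jelinski-Moranda model treats a program as containing N faults, each removed when its failure is observed, so that the failure rate falls in proportion to the faults remaining. The inter-failure times and the crude fitting routine below are hypothetical:

```python
import math

def jm_loglik(times, N, phi):
    """Log-likelihood of the Jelinski-Moranda model: the hazard rate
    before the i-th failure is phi * (N - i + 1)."""
    ll = 0.0
    for i, t in enumerate(times, start=1):
        lam = phi * (N - i + 1)
        if lam <= 0:
            return float("-inf")
        ll += math.log(lam) - lam * t
    return ll

def fit_jm(times, max_extra=50):
    """Crude search for the maximum-likelihood estimates of N and phi."""
    n = len(times)
    best = (float("-inf"), None, None)
    for N in range(n, n + max_extra + 1):
        # for fixed N the ML estimate of phi has a closed form:
        # phi_hat = n / sum((N - i + 1) * t_i)
        denom = sum((N - i + 1) * t for i, t in enumerate(times, start=1))
        phi = n / denom
        ll = jm_loglik(times, N, phi)
        if ll > best[0]:
            best = (ll, N, phi)
    return best

# hypothetical inter-failure times showing rough reliability growth
times = [3, 5, 7, 11, 16, 27, 43, 69]
ll, N_hat, phi_hat = fit_jm(times)
# predicted mean time to the next failure (infinite if no faults remain)
mttf_next = (float("inf") if N_hat == len(times)
             else 1 / (phi_hat * (N_hat - len(times))))
```

Much of the difficulty this model family exposes, and which CSR's work addressed, lies in how far such extrapolations can be trusted when very high reliability must be claimed.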
Assurance Cases and Argumentation
For critical systems, it is important to know whether the system is trustworthy and to be able to communicate, review and debate the level of trust achieved. This requires assembling and connecting disparate evidence to build an assurance case:
"a documented body of evidence that provides a convincing and valid argument that a system is adequately dependable for a given application in a given environment"
In the safety domain, explicit Safety Cases are increasingly required by law, regulations and standards, and CSR has close links with Adelard, a leading consultancy in the application of safety cases. However, the need to understand risk is not just a safety issue: more and more organisations need to know their risks and to be able to communicate them to multiple stakeholders, from the boardroom to the back office and beyond, and to address them. CSR's guiding principle is clear thinking foremost, and our research has focused on extending the use of cases to new domains as well as on making the reasoning in them more rigorous (e.g. through explicit probabilistic reasoning).
The industrial impact of this research strand was assessed as part of the 2014 U.K. Research Excellence Framework.
Confidence in claims
In probabilistic assessment of safety and reliability, there is a tendency to work with probabilities as though they were certain values. But these values are estimates that are subject to varying degrees of doubt, depending on how they are derived, and the failure to acknowledge and describe such doubt is one of the reasons why the results of such probabilistic risk assessments and safety assessments are sometimes suspect. Our research aims to help the practitioner by clarifying the effects of doubt on the value of parameters and proposing means of describing uncertainty and confidence that are natural as well as rigorous.
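A minimal worked example of the difference between a point estimate and confidence in a claim: under a uniform Beta(1, 1) prior on the probability of failure per demand, observing n failure-free demands yields a closed-form posterior probability that the true value lies below a bound p0. The function name and the figures are illustrative sketches of the style of reasoning, not a CSR result:

```python
def confidence_in_bound(n_failure_free, p0):
    """Posterior probability that the per-demand probability of failure
    is below p0, after n failure-free demands, under a uniform
    Beta(1, 1) prior: the posterior is Beta(1, 1 + n), so
    P(pfd < p0 | data) = 1 - (1 - p0) ** (n + 1)."""
    return 1 - (1 - p0) ** (n_failure_free + 1)

# roughly 4600 failure-free demands are needed for ~99% confidence
# that the probability of failure per demand is below 10^-3
p99 = confidence_in_bound(4603, 1e-3)
```

The point of such calculations is precisely the one made above: the claim comes with an explicit degree of confidence, instead of a bare point estimate presented as certain.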
Socio-technical issues
Assessing the risk of service failure or accidents in "socio-technical systems" (organisations that include computers) is difficult, and can go badly wrong if machine-oriented assessment methods are applied naively to human activities. Since the early 1990s, CSR has been a partner in a series of interdisciplinary projects on socio-technical systems, examining problems of method and, in terms of applications, making notable contributions on the effects of decision support systems on human decision making. As with our work in the other difficult area of assessing software reliability, we aim to retain for decision makers the advantages of rigorous probabilistic and statistical reasoning, whilst properly identifying the limitations of such methods and the remaining uncertainties.
Diversity for fault tolerance
In many fault-tolerant systems, redundancy for preventing system failures needs to be supplemented by diversity between the redundant parts (different software designs, for instance), to avoid common weaknesses that would lead to common failures. CSR's work in this area, which started with the application of software design diversity, has included a series of projects for the nuclear industry and addressed both how to achieve useful diversity and how to assess the resulting reduction in common mode failures. The mathematical methods developed have proved to be useful in a broad range of applications, including the assessment of socio-technical systems.
The practical impact of this research was flagged in the 2014 U.K. Research Excellence Framework assessment exercise as "particularly impressive".
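The central mathematical point can be shown in a few lines: if the probability that a version fails varies across demands (some demands are "harder" than others), then even independently developed versions fail together more often than a naive independence assumption predicts. The demand profile and difficulty values below are purely illustrative:

```python
# Demand "difficulty" model (in the style of Eckhardt & Lee): theta[x]
# is the probability that a randomly developed version fails on a
# demand of type x. All numbers here are illustrative assumptions.
profile = [0.5, 0.3, 0.2]        # probability of each demand type
theta   = [0.001, 0.01, 0.2]     # failure probability per demand type

# probability that a single version fails on a random demand
p_single = sum(p * t for p, t in zip(profile, theta))

# what naive independence between two versions would predict
p_indep = p_single ** 2

# what independently *developed* versions actually give: both still
# find the same demands hard, so P(both fail) = E[theta(x)^2]
p_both = sum(p * t * t for p, t in zip(profile, theta))

# Jensen's inequality guarantees E[theta^2] >= (E[theta])^2
assert p_both >= p_indep
```

In this toy profile the joint failure probability exceeds the independence prediction severalfold, which is why assessing the actual reduction in common-mode failures, rather than assuming independence, has been a central theme of this research.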
Software Engineering Methods
Software engineering has long been notorious for methods and tools advertised on the strength of vague reasoning and fads rather than real evidence and rigorous argument. CSR was a founding member of the movement pushing for a more scientific approach to support practitioners. Our work has included empirical studies and probabilistic modelling, with useful results on how to judge testing methods from the specific viewpoint of improving delivered reliability.
Critical Infrastructure Protection (CIIP)
Public life, the economy and society as a whole depend to a very large extent on the proper functioning of large critical complex infrastructures (LCCIs) such as energy supply and telecommunications. The extensive use of information and communication technologies (ICT) has pervaded these infrastructures, rendering them more intelligent, but also increasingly interconnected, complex and interdependent, and therefore more vulnerable and failure-prone.
One of the key challenges for ensuring the protection and resilience of LCCIs is to identify and understand the dependencies that exist between them. CSR has been involved in ongoing research on interdependency analysis at national and international levels, aiming to explore the occurrence, nature and effects of possible LCCI dependencies. To that end we have been contributing to the development and refinement of integrated models and simulations with enhanced visualisation capabilities that help identify, and estimate the consequences of, potential risks due to dependencies.
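As a deliberately simplified sketch of what such interdependency models capture (the graph and the deterministic "any-dependency-failed" rule below are illustrative assumptions, not an actual CSR model), failures can be propagated over a dependency graph to find which infrastructures a single outage could drag down:

```python
def cascade(dependencies, initially_failed):
    """Propagate failures through an infrastructure dependency graph.
    `dependencies` maps each infrastructure to the set of infrastructures
    it depends on; here a node fails as soon as any of its dependencies
    has failed. Real interdependency models are probabilistic and far
    richer; this only shows the structural idea."""
    failed = set(initially_failed)
    changed = True
    while changed:
        changed = False
        for node, deps in dependencies.items():
            if node not in failed and deps & failed:
                failed.add(node)
                changed = True
    return failed

# hypothetical dependencies between four infrastructures
deps = {
    "power":   set(),
    "telecom": {"power"},
    "water":   {"power"},
    "banking": {"telecom", "power"},
}
affected = cascade(deps, {"power"})
```

Even this crude rule makes the asymmetry visible: a power outage cascades to every other sector in the example, whereas a water failure propagates nowhere, which is the kind of insight the integrated models and simulations aim to quantify.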
Computer-aided decision making
Research in human factors has long documented that computer tools designed to aid human decision making may not always lead to the intended results. Crucially, these tools have been known to induce human failures that would not have occurred without computer support. CSR has used an interdisciplinary approach to address this problem. A case study on computer-aided breast cancer screening led to the discovery of hitherto unreported behavioural patterns and helped to develop new general cognitive explanations and methods for addressing automation bias.
Security Assessment
There should be no need to emphasise the growing importance of IT security for our society. The IT infrastructure is vulnerable to attack, and there is no short- or medium-term prospect of eliminating the vulnerabilities with certainty by avoiding all errors in design, configuration and use. CSR has collaborated with various academic, industry and governmental institutions, in the UK, the EU and the US, to address how better and more rigorous experimentation and quantitative security modelling can inform decisions on securing networks and systems and on choosing security protection tools.