Abstract
The primary objective of this paper is to highlight the important role of measurement in both developing and assessing safety-critical software systems. The ideal approach to measuring the safety of such systems is to record carefully the times of occurrences of safety-related failures during actual operation. Unfortunately, this is of little use to assessors who need to certify systems in advance of operation. Moreover, even this extremely onerous measurement obligation does not work when there are ultra-high reliability requirements; in such cases we are unlikely to observe sufficiently long failure-free operational periods. So when we have to assess the safety of either a system that is not yet operational, or a system with ultra-high reliability requirements, we have to try something else. In general, we try to make a ‘safety case’ that takes account of many different sources and types of evidence. This may include evidence from testing, evidence about the ‘quality’ of the internal structure of the software, or evidence about the ‘quality’ of the development process. Although many potential types of information could be based on rigorous measurement, more often than not safety assessments are primarily based on engineering judgement. After reviewing a range of measurement techniques that have recently been used in software safety assessment, we focus especially on two important areas:
- Measures related to ‘defects’ and their resolution; even where developers and testers of safety-critical systems record this information carefully, there seem to be inevitable flaws in the data. Adherence to some simple principles, such as orthogonal fault classification, can significantly improve the quality of the data and consequently its potential use in safety assessment.
- Rigorous, measurement-based approaches to combining different pieces of evidence; in particular, recent work on a) the use of Bayesian Belief Networks and b) the role of Multi-Criteria Decision Aid in dependability assessment.
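The difficulty with ultra-high reliability requirements can be made concrete. Under a simple constant-failure-rate (exponential) model, the length of a failure-free operational period bounds the failure rate one can demonstrate by observation alone. The sketch below is illustrative and not from the paper; the function name and the 99% confidence level are assumptions chosen for the example.

```python
import math

def max_demonstrable_rate(hours_failure_free: float, confidence: float = 0.99) -> float:
    """Upper confidence bound on the failure rate (per hour) after observing
    a failure-free period, assuming a constant-failure-rate (exponential)
    model. With T failure-free hours, P(no failure) = exp(-lambda * T);
    the bound is the largest lambda not rejected at the given confidence."""
    return -math.log(1.0 - confidence) / hours_failure_free

# One year of failure-free operation (~8760 h) supports a claim of only
# about 5e-4 failures per hour at 99% confidence.
one_year_bound = max_demonstrable_rate(8760)

# Demonstrating a 1e-9 per hour claim this way would need roughly
# 4.6 billion failure-free hours of operation.
hours_needed = -math.log(1 - 0.99) / 1e-9
```

This is essentially the argument, made rigorously by Littlewood and Strigini, for why direct measurement cannot validate ultra-high dependability claims, and why a safety case must draw on other kinds of evidence.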
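As a toy illustration of the Bayesian Belief Network idea, the fragment below updates a prior belief about development-process quality from a single piece of testing evidence using Bayes' rule. The node names and all probabilities are invented for this sketch; a real safety-case network would have many more nodes, with probabilities elicited from experts and data.

```python
# Hypothetical two-node belief network: 'process quality' -> 'testing outcome'.
# All numbers below are illustrative assumptions, not values from the paper.
p_quality_high = 0.5                        # prior belief that the process is high quality
p_pass_given = {"high": 0.95, "low": 0.60}  # P(tests pass | process quality)

def posterior_quality_high(tests_passed: bool) -> float:
    """Bayes' rule: revise the belief in process quality given test evidence."""
    like_high = p_pass_given["high"] if tests_passed else 1 - p_pass_given["high"]
    like_low = p_pass_given["low"] if tests_passed else 1 - p_pass_given["low"]
    numerator = like_high * p_quality_high
    return numerator / (numerator + like_low * (1 - p_quality_high))
```

The appeal of this approach for safety assessment is that disparate evidence (test results, structural measures, process measures) enters one coherent probabilistic model, rather than being combined by unaided engineering judgement.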
Copyright information
© 1997 Springer-Verlag London Limited
About this paper
Cite this paper
Fenton, N. (1997). The Role of Measurement in Software Safety Assessment. In: Shaw, R. (eds) Safety and Reliability of Software Based Systems. Springer, London. https://doi.org/10.1007/978-1-4471-0921-1_11
Publisher Name: Springer, London
Print ISBN: 978-3-540-76034-4
Online ISBN: 978-1-4471-0921-1