The Role of Measurement in Software Safety Assessment

Conference paper
In: Safety and Reliability of Software Based Systems

Abstract

The primary objective of this paper is to highlight the important role of measurement in both developing and assessing safety-critical software systems. The ideal approach to measuring the safety of such systems is to record carefully the times of occurrence of safety-related failures during actual operation. Unfortunately, this is of little use to assessors who need to certify systems in advance of operation. Moreover, even this extremely onerous measurement obligation does not work when there are ultra-high reliability requirements; in such cases we are unlikely to observe sufficiently long failure-free operational periods (a worked illustration of this limit follows the list below). So when we have to assess the safety of either a system that is not yet operational, or a system with ultra-high reliability requirements, we have to try something else. In general, we try to make a ‘safety case’ that takes account of many different sources and types of evidence. This may include evidence from testing; evidence about the ‘quality’ of the internal structure of the software; or evidence about the ‘quality’ of the development process. Although many potential types of information could be based on rigorous measurement, more often than not safety assessments are primarily based on engineering judgement. After reviewing a range of measurement techniques that have recently been used in software safety assessment, we focus especially on two important areas:

  • Measures related to ‘defects’ and their resolution: even where developers and testers of safety-critical systems record this information carefully, there seem to be inevitable flaws in the data. Adherence to some simple principles, such as orthogonal fault classifications, can significantly improve the quality of the data and consequently its potential use in safety assessment (a sketch of such a classification appears after this list).

  • Rigorous, measurement-based approaches to combining different pieces of evidence: in particular, recent work on a) the use of Bayesian Belief Networks and b) the role of Multi-Criteria Decision Aid in dependability assessment (both illustrated after this list).
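
The scale of the ultra-high reliability problem can be made concrete. Below is a minimal sketch, not taken from the paper, under the simplifying assumption of a constant failure rate with exponential inter-failure times: given zero failures observed in t hours, the classical one-sided upper confidence bound on the failure rate is -ln(1 - c)/t, so we can solve for the failure-free exposure needed to claim a target rate with confidence c.

```python
import math

def required_failure_free_hours(target_rate: float, confidence: float) -> float:
    """Hours of failure-free operation needed before the classical upper
    confidence bound on a constant failure rate drops below target_rate.

    With zero failures in t hours, lambda <= -ln(1 - confidence) / t,
    so solve that inequality for t."""
    return -math.log(1.0 - confidence) / target_rate

# An ultra-high reliability target of 1e-9 failures/hour at 99% confidence:
hours = required_failure_free_hours(1e-9, 0.99)
print(f"{hours:.3e} hours (~{hours / 8766:.0f} years)")
# ~4.6e9 hours, i.e. hundreds of thousands of years of failure-free operation
```

This is why the abstract notes that even careful operational measurement cannot, by itself, validate ultra-high reliability requirements in advance of (or even during) realistic periods of operation.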
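
On the first bullet: the field and category names below are invented for illustration (loosely inspired by IBM-style Orthogonal Defect Classification, not the paper's own scheme). The point is structural: each defect takes exactly one value on each classification axis, so aggregated counts are unambiguous and never double-count.

```python
from dataclasses import dataclass
from enum import Enum

class DefectType(Enum):
    """One orthogonal axis: what kind of fault it was (exactly one applies)."""
    ASSIGNMENT = "assignment"   # wrong value or initialisation
    CHECKING = "checking"       # missing or wrong validation
    ALGORITHM = "algorithm"     # wrong logic behind a correct interface
    INTERFACE = "interface"     # wrong interaction between modules
    TIMING = "timing"           # serialisation or synchronisation error

class DetectionPhase(Enum):
    """A second orthogonal axis: where the defect was found."""
    REVIEW = "review"
    UNIT_TEST = "unit_test"
    SYSTEM_TEST = "system_test"
    OPERATION = "operation"

@dataclass(frozen=True)
class DefectRecord:
    defect_id: str
    defect_type: DefectType      # exactly one value on this axis
    detected_in: DetectionPhase  # exactly one value on this axis
    safety_related: bool         # flags records that feed the safety case

record = DefectRecord("D-0042", DefectType.CHECKING,
                      DetectionPhase.SYSTEM_TEST, safety_related=True)
```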
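
On the second bullet, part a): a Bayesian Belief Network propagates probabilities through a graph of dependent variables. The fragment below is a deliberately tiny sketch of the underlying calculation, with invented probabilities, for a single hypothesis node and two evidence nodes assumed conditionally independent given the hypothesis; it is not the paper's model.

```python
def posterior(prior: float, likelihoods: list[tuple[float, float]]) -> float:
    """P(H | all evidence), assuming the evidence items are conditionally
    independent given H. Each tuple is (P(e | H), P(e | not H))."""
    p_h, p_not_h = prior, 1.0 - prior
    for p_e_given_h, p_e_given_not_h in likelihoods:
        p_h *= p_e_given_h
        p_not_h *= p_e_given_not_h
    return p_h / (p_h + p_not_h)

# H = "the system meets its safety target". Evidence: statistical tests
# passed, and the development process was assessed as high quality.
belief = posterior(
    prior=0.5,
    likelihoods=[(0.95, 0.40),   # P(tests pass | H) vs P(tests pass | not H)
                 (0.80, 0.30)],  # P(good process | H) vs P(good process | not H)
)
print(f"P(safety target met | evidence) = {belief:.3f}")  # ~0.864
```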
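
On part b): Multi-Criteria Decision Aid is a family of techniques; the simplest member, sketched below with invented criteria, scores and weights, is weighted-sum aggregation of normalised criterion scores to rank candidate systems. Outranking methods such as ELECTRE refine this by dropping the assumption that criteria trade off linearly.

```python
# Invented criteria and weights; scores are normalised to 0..1, higher is better.
WEIGHTS = {"test_evidence": 0.5, "process_quality": 0.3, "structural_quality": 0.2}

candidates = {
    "System A": {"test_evidence": 0.9, "process_quality": 0.6, "structural_quality": 0.7},
    "System B": {"test_evidence": 0.7, "process_quality": 0.9, "structural_quality": 0.8},
}

def weighted_score(scores: dict[str, float]) -> float:
    """Weighted sum of criterion scores."""
    return sum(WEIGHTS[c] * s for c, s in scores.items())

for name, scores in sorted(candidates.items(),
                           key=lambda kv: weighted_score(kv[1]), reverse=True):
    print(f"{name}: {weighted_score(scores):.2f}")
# Prints System B (0.78) ahead of System A (0.77)
```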

Copyright information

© 1997 Springer-Verlag London Limited

About this paper

Cite this paper

Fenton, N. (1997). The Role of Measurement in Software Safety Assessment. In: Shaw, R. (ed.) Safety and Reliability of Software Based Systems. Springer, London. https://doi.org/10.1007/978-1-4471-0921-1_11

  • DOI: https://doi.org/10.1007/978-1-4471-0921-1_11

  • Publisher Name: Springer, London

  • Print ISBN: 978-3-540-76034-4

  • Online ISBN: 978-1-4471-0921-1
