Comput. & Elect. Engng Vol. 11, No. 2/3, pp. 145-149, 1984. 0045-7906/84 $3.00+.00. Printed in the U.S.A. © 1985 Pergamon Press Ltd.

SOFTWARE RELIABILITY--THEORY AND PRACTICE

PEI HSIA
Computer Science and Engineering Department, University of Texas at Arlington, Arlington, TX 76019, U.S.A.

Abstract--Software reliability has been a very controversial research topic in software engineering for over 15 years. Several different models have been proposed and studied by researchers. Nevertheless, none of the existing models has been successfully used in real projects for reliability forecasting. This paper presents a brief review of the status of software reliability research and points out new research directions that may benefit the software development process in a more positive and direct way.

INTRODUCTION

Software is becoming an increasingly important component of computer systems. This is due to the past overemphasis on hardware components and to the gradual awareness of software complexity and the unwieldy problems it induces. A major concern once software is developed is, "Does it work reliably?" The term software reliability[1, 2] is used to convey an idea similar to, albeit not exactly the same as, hardware reliability. However, the more one looks into software reliability, the less similar it is to its hardware namesake.

The most significant difference between software and hardware reliability is that material wear and tear contributes nothing to the former, but it is a major factor in the latter. Hence, it is natural to consider redundancy and replacement schemes to improve hardware reliability, but none of them works for software reliability[3, 4]. Early studies of software reliability did borrow extensively from hardware reliability models[1, 2, 5-7]. As the distinction between the two became clear, new approaches steered away from patterning themselves after hardware reliability models.

The term "software reliability" is intended to convey the idea of a quantitative measurement about a software to reflect the degree to which it can be trusted to suc- cessfully perform a certain function. Nevertheless, it is only recently that the definition of this term gravitated to a commonly accepted form. We list the past definitions of software reliability:

1. a measure of the number of errors or "bugs" encountered in a program[8],
2. the degree to which a program performs as expected[9],
3. the probability that the program performs successfully according to specifications for a given time period[10],
4. the probability that the system performs its assigned function under specified environmental conditions for a given period of time[4].

From this progression one can see how a commonly accepted definition gradually precipitated into a consensus.

Serious research in software reliability[1, 2, 6, 7, 11, 12] started in the early 1970s.

Basic models include the reliability growth model[12, 13], the probabilistic model[2, 6, 11, 14], the error seeding model[15, 16], and the complexity model[8, 17, 18].

The software reliability growth model is a Bayesian reliability growth model that includes certain features designed to reproduce special properties of the growth in reliability of a program. One feature of the model is the requirement that the programmer try to improve the reliability of the program by detecting and correcting any errors whenever a failure occurs. The object of the model is to create a repair rule that simulates the effect of the programmer's action upon the program.
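As a concrete illustration, the Littlewood-Verrall model[12] is the canonical model of this type. The sketch below states its standard form, which is not spelled out in this paper; the choice of the scale function psi(i) is left to the analyst.

```latex
% Littlewood-Verrall Bayesian reliability growth model (standard form).
% T_i : time between the (i-1)-th and the i-th failure.
% The failure rate Lambda_i is itself random; its prior shifts toward
% smaller rates as repairs accumulate, i.e. psi(i) is increasing in i.
\begin{align}
  T_i \mid (\Lambda_i = \lambda_i) &\sim \mathrm{Exponential}(\lambda_i), \\
  \Lambda_i &\sim \mathrm{Gamma}\bigl(\alpha,\ \psi(i)\bigr), \qquad
  \psi(i) = \beta_0 + \beta_1 i \ \ \text{(one common choice)}.
\end{align}
```

The repair rule is thus expressed probabilistically: each fix does not guarantee improvement, but it makes smaller failure rates more likely.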

The probabilistic model basically assumes that the errors existing in a program always appear in a probabilistic manner. It further assumes that the detection rate is directly proportional to the number of remaining errors. Several models are in this class, and each makes further, different assumptions.
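One representative member of this class is the Jelinski-Moranda model[1]. The hazard-rate form below is the standard statement of that model, reproduced only as an illustration; the paper itself does not give the formula.

```latex
% Jelinski-Moranda model: after the (i-1)-th fix, the hazard rate is
% proportional to the number of errors still remaining in the program.
% N   : total number of initial errors (unknown, to be estimated),
% phi : proportionality constant per remaining error,
% T_i : time between the (i-1)-th and the i-th failure.
\begin{align}
  z(t \mid i) &= \phi\,\bigl(N - (i - 1)\bigr), \\
  T_i &\sim \mathrm{Exponential}\bigl(\phi\,(N - i + 1)\bigr), \\
  R_i(t) &= \exp\bigl[-\phi\,(N - i + 1)\,t\bigr].
\end{align}
```

Other members of the class differ mainly in how the failure rate is assumed to change as errors are removed.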

The error seeding model is a simple, intuitive model. It hypothesizes that the more errors a program contains, the less reliable it is. This model proposes to estimate the number of remaining errors by first "seeding" some foreign errors into a program and then having the programmer(s) debug the program. The proportion of seeded errors uncovered, together with the number of indigenous errors uncovered in the same effort, can then reveal the number of errors remaining in the program.
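The usual estimator behind this scheme is the one used in Mills' seeding approach[15]; the formula below is that standard estimator, stated here for illustration rather than quoted from the paper.

```latex
% Mills-style seeding estimate of the indigenous error content.
% S : number of errors deliberately seeded into the program,
% s : number of seeded errors found during debugging,
% n : number of indigenous (original) errors found in the same period.
% If seeded and indigenous errors are equally easy to find, the
% maximum-likelihood estimate of the total indigenous error count is
\begin{equation}
  \hat{N} \;=\; \left\lfloor \frac{n\,S}{s} \right\rfloor ,
  \qquad \text{errors remaining} \;\approx\; \hat{N} - n .
\end{equation}
```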

The complexity model estimates software reliability based on the internal characteristics of a program. It assumes that a quantitative relationship exists between software complexity measures, or program attributes, and software reliability.
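Halstead's software science[18] is one family of such complexity measures. The quantities below are Halstead's standard definitions, included only to make the idea of "program attributes" concrete; the paper does not commit to any particular measure.

```latex
% Halstead's basic software-science measures (standard definitions).
% eta_1, eta_2 : numbers of distinct operators and distinct operands,
% N_1,  N_2    : total occurrences of operators and operands.
\begin{align}
  \text{vocabulary } \eta &= \eta_1 + \eta_2, &
  \text{length } N &= N_1 + N_2, \\
  \text{volume } V &= N \log_2 \eta, &
  \text{difficulty } D &= \frac{\eta_1}{2}\cdot\frac{N_2}{\eta_2}.
\end{align}
```

A complexity model would then posit some functional relationship between measures such as V or D and the error content or reliability of the program.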

RELIABILITY AND CORRECTNESS

Correctness of software refers to whether the program transforms the inputs to the expected outputs according to its specification. It is obvious that a program can only be correct or incorrect; however, its reliability can range from 0% to 100%. It is worth noting that "reliable software is not necessarily correct, because 1. Correct software can fail (error in specification) and 2. Incorrect software need not fail."[19] This statement intends to bring out the differences between reliability and correctness, but overemphasizing it is a grave mistake. The measurements of correctness and reliability are different: the former assumes only two discrete values; the latter, a continuum. However, any software that is not 100% reliable must have some problem, either in its specifications or in its source code. The improvement involves fixing the problem, which is called "debugging." Unreliable software may be correct due to mistakes in specifications, but the original intent is to have it 100% reliable; therefore, the error in the specifications needs to be corrected.

The point is that even though reliability is not the same as correctness, both depend on software errors. Software correctness indicates that the software is free of errors. Reliability is a measure denoting the probability that the program performs its assigned function for a given period of time. The term "assigned function" may not be fully and correctly specified beforehand; however, it is immediately noticeable if the program does not perform as expected. Note that correctness is a theoretical concern, but reliability is a practical one.
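Written out as a formula (the expression below is the standard one implied by this definition, not one given in the paper), reliability over a mission of length t is

```latex
% Reliability as a function of mission time, in the usual notation.
% T    : random time to the next failure under the specified conditions,
% z(x) : failure (hazard) rate at time x.
\begin{equation}
  R(t) \;=\; \Pr\{T > t\}
       \;=\; \exp\!\left(-\int_0^{t} z(x)\,dx\right).
\end{equation}
```

Correctness, by contrast, is the binary statement that the program meets its specification for every legal input; it has no time parameter.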

Most reliability models make one or more of the following assumptions:

1. The number of software errors contributes inversely to its reliability.
2. The occurrence of error(s) causes software to fail.
3. Errors in software reveal themselves in a probabilistic manner.
4. The fewer the errors in a program, the more reliable it is, and the more difficult it is to uncover them.
5. All errors are equal in terms of their likelihood to turn up and their impact severity.

RELIABILITY APPROACHES

All of the reliability research mentioned above consists of purely phenomenological studies. They attempt to find a model that best describes the relationship between product characteristics and the observed failure occurrences. The product characteristics may be the testing history, the past performance, the total estimated errors in the program, the complexity of the program, the past performance of similar software, etc. A phenomenological study usually provides little insight into the production process: it observes from the outside and correlates only the observable factors, as the toy sketch below illustrates. There is no commonly accepted software reliability model[20-24]. Even if we had a widely accepted reliability model that accurately predicts reliability, it could not provide any insight for directly constructing more reliable software. Nevertheless, it can achieve that goal in an indirect way, through further studies that correlate the factors in the production process to the reliability measure.
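As a toy illustration of such an outside-in correlation study, one might regress observed defect counts against a product characteristic such as a per-module complexity score. The data, variable names, and the linear model below are illustrative assumptions only; they are not taken from the paper.

```python
# Toy phenomenological study: correlate a product characteristic
# (a complexity score per module) with observed defect counts.


def least_squares(xs, ys):
    """Ordinary least-squares fit y = a + b*x, plus the Pearson correlation r."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    syy = sum((y - my) ** 2 for y in ys)
    b = sxy / sxx
    a = my - b * mx
    r = sxy / (sxx * syy) ** 0.5
    return a, b, r


# Hypothetical per-module measurements: (complexity score, defects found).
complexity = [4, 7, 9, 12, 15, 21, 25, 30]
defects = [1, 1, 2, 2, 3, 5, 6, 8]

a, b, r = least_squares(complexity, defects)
print(f"defects ~ {a:.2f} + {b:.2f} * complexity   (r = {r:.2f})")
# A fit like this describes the finished product from the outside;
# it says nothing about how the development process produced the defects.
```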

There are two different approaches to reliability modeling: the product-oriented view and the process-oriented view. Product-oriented reliability modeling formulates the reliability measurement from product characteristics. All of the existing reliability models are product-oriented models. They calculate reliability measurements or predictions starting from the end (or close to the end) of the software development process. Therefore, they do not include such factors as the method of production. This type of model cannot be used directly to improve software reliability, since it does not consider developmental features.

This paper points out that a new look is necessary to shift the focus of software reliability studies to the process-oriented approach. Development of a process-oriented reliability model can and will contribute to software reliability improvement much more significantly than product-oriented models can. Two important reasons encourage the study of process-oriented reliability.

1. Reliability engineering, as defined in [25], is "a design adjunct that assists in establishing a product design with high inherent reliability." It has an active and positive goal to improve reliability. Existing models cannot achieve this goal because they are all product-oriented models and merely serve as passive formulas. None can provide insight to improve the software development process and, hence, the quality of its product.

2. Programming environments[26] are gradually becoming a practical field, and several usable environments are being developed and applied[27, 28]. The previously unmanageable programming activities, the lack of uniform programmer discipline, and the nonexistence of programming methodology and tools played havoc with anyone attempting to study process-oriented reliability models. However, all of these major problems are gradually being brought under control in a programming environment, where a specified methodology, selected tools, and required disciplines are guaranteed to be followed precisely. All management information is up to date and can be retrieved immediately. This allows researchers to look into process-oriented reliability models with new confidence and hope, an opportunity that did not previously exist. Some preliminary work with promising results is reported in [29].

ENVIRONMENT AND RELIABILITY

"A good workman is known by his tools." This saying is true and precise for individual works. When works demand a team of workers, the saying can be misleading; i.e. sharp tools alone do not make quality products; they must be coupled with guidance and disciplined usage by the workmen.

Therefore, the term "programming environment" used here emphasizes that tools should be used at the proper stage in the software development process. The "bag-of-tools" approach is abhorred, even though many researchers subscribe to that approach in today's programming environment research[26, 30-32]. They want to provide as many tools as possible to all users, who then must choose the best tool to use at any time. However, it should be pointed out that simply lending an expert carpenter's tools to another carpenter will not make the latter an expert. Hence, tools alone do not an expert make.

If a programmer were directed by the computer as to which tools to use at a particular juncture, he or she would be better off than if left with all the tools at his or her disposal at all times. Better yet, if the computer can suggest a limited choice of tools (e.g., a syntax checker or a load-and-go compiler for a program), programmers will appreciate the guidance even more.

A programming environment is, therefore, defined as an integration of programming discipline, project management, and development support into one unified system for the purpose of

1. guiding programmers through a specific development process;
2. providing up-to-date visibility of the ongoing project; and
3. enabling the proper tools/techniques to be applied at the proper place throughout the software development process[33].

Several prototypes have been built and tested[33]. They are not yet used in large projects. However, their success so far points to a promising solution to the broader problems of software development and maintenance.

By incorporating a specific programming environment into a reliability study, one may develop some preliminary process-oriented reliability models. This approach is now feasible because many previously uncontrollable factors are now tamed. Any effort to study software reliability in this direction can be very fruitful.

INDUSTRY PRACTICE

After reviewing the status of software reliability research, it is obvious that there is no commonly accepted measurement of software reliability in use today. To obtain high reliability there are only two solutions: emphasize the amount of testing required of the product, or emphasize the specific discipline called for in the software development process. The first solution is based on the belief that the more errors one uncovers and removes from the software, the better its reliability will be. The second solution is translated from an implicit understanding that a better process always produces a better product. Generally, it is easier to apply the former solution because (1) the testing period is shorter than the entire software development life-cycle; and (2) software management has long been a headache for the industry. It is several orders of magnitude more difficult to ensure an orderly software production process than it is to simply spend more time and resources in the testing phase.

The prevalent industry practice of attempting to improve software reliability through concentrated testing has been severely criticized and challenged in the research community. It is commonly agreed in industry that the reliability of any software is always better if its first error is never found than if its "last" error has been fixed[19]. This affirms even more strongly that process-oriented reliability models, pursued in conjunction with integrated programming environment research, are the best approach to software reliability studies.

REFERENCES

1. Z. Jelinski and P. B. Moranda, in Statistical Computer Performance Evaluation (Edited by W. Freiberger), pp. 465-484. Academic Press, New York (1972).
2. M. L. Shooman, Operational testing and software reliability estimation during program development. Record of the 1973 IEEE Symposium on Computer Software Reliability, pp. 51-57. IEEE, New York (1973).
3. B. Shneiderman, Software Psychology. Winthrop (1980).
4. R. S. Pressman, Software Engineering: A Practitioner's Approach. McGraw-Hill, New York (1982).
5. A. N. Sukert, A software reliability modeling study. RADC-TR-76-247, Rome Air Development Center, Griffiss Air Force Base, New York (NTIS AD/A-030437) (1976).
6. M. Shooman, in Statistical Computer Performance Evaluation (Edited by W. Freiberger), pp. 485-502. Academic Press, New York (1972).
7. G. J. Schick and R. W. Wolverton, Assessment of software reliability. Proc. 11th Annual Meeting German Oper. Res. Soc., Hamburg, Germany, September 6-8 (1972).
8. G. J. Myers, Reliable Software Through Composite Design. Petrocelli/Charter (1975).
9. C. A. Ziegler, Programming System Methodologies. Prentice-Hall, Englewood Cliffs, New Jersey (1983).
10. M. Shooman, Software Engineering: Design, Reliability, and Management. McGraw-Hill, New York (1983).
11. J. D. Musa, A theory of software reliability and its application. IEEE Trans. Software Engng SE-1(3), 312-327 (1975).
12. B. Littlewood and J. L. Verrall, A Bayesian reliability growth model for software reliability. Conf. Rec. 1973 IEEE Symp. Comput. Software Reliability, pp. 70-76 (1973).
13. B. Littlewood, in Proceedings of the Symposium on Computer Software Engineering, pp. 281-300. Polytechnic Press, New York (1976).
14. M. L. Shooman, in Proceedings of the Second International Conference on Software Engineering, pp. 268-280. IEEE, New York (1976).
15. H. D. Mills, On the statistical validation of computer programming. FSC-72-6015, IBM Federal Systems Div., Gaithersburg, Maryland (1972).
16. B. Rudner, Seeding/tagging estimation of software errors: models and estimates. RADC-TR-77-15, Polytechnic Institute of New York (1977) (NTIS AD/A-036655).
17. G. J. Myers, Software Reliability: Principles and Practices, Chap. 18. Wiley-Interscience, New York (1976).
18. M. Halstead, Elements of Software Science. Elsevier, New York (1977).
19. D. B. Wortman (Editor), Notes from a workshop on the attainment of reliable software. Technical Report CSRG-41, University of Toronto (1974).
20. B. Littlewood, Comments on Moranda's critique (1979).
21. P. B. Moranda, Prediction of software reliability during debugging. Proc. 1975 Annual Reliability and Maintainability Symp., pp. 327-332 (1975).
22. A. L. Goel, A summary of the discussion on "An analysis of competing software reliability models." IEEE Trans. Software Engng SE-6, 501-502 (1980).
23. P. B. Moranda, An analysis of competing software reliability models. IEEE Trans. Software Engng SE-4, 104-120 (1978).
24. B. Littlewood, Theories of software reliability. IEEE Trans. Software Engng SE-6, 480-500 (1980).
25. R. R. Landers, Reliability and Product Assurance. Prentice-Hall, Englewood Cliffs, New Jersey (1963).
26. A. I. Wasserman, Tutorial: Software Development Environment. IEEE Computer Society Press (1981).
27. P. Hsia, Software engineering environment. Technical Report CSE TR 84-001, University of Texas at Arlington (January 1984).
28. A. N. Habermann, An overview of the Gandalf Project. Computer Science Research Review 1978-1979, Carnegie-Mellon University (1979).
29. P. Hsia and F. E. Petry, A framework for discipline in programming. IEEE Trans. Software Engng SE-6(2), 226-232 (1980).
30. H. Hunke (Editor), Software Engineering Environments. North-Holland (1981).
31. B. W. Kernighan and J. R. Mashey, The UNIX programming environment. IEEE Computer 14(4), 1-4 (1981).
32. E. G. Shapiro et al., PASES: A Programming Environment for PASCAL. Schlumberger Programming Environment Workshop (June 1980); also in ACM SIGPLAN Notices 16(8), 50-57 (August 1981).
33. P. Hsia, Programming environment and software productivity. Technical Report CSE TR 84-002, University of Texas at Arlington (February 1984).