SENG 637: Dependability, Reliability & Testing of Software Systems
Executing Test (Chapter 7)

Department of Electrical & Computer Engineering, University of Calgary
B.H. Far ([email protected])
http://www.enel.ucalgary.ca/People/far/Lectures/SENG637/
SRE: Process (Review)

The 5 steps in the SRE process:
1) Define necessary reliability
2) Develop operational profiles
3) Prepare for test
4) Execute test
5) Apply failure data to guide decisions
Chapter 7, Section 1
Allocating Test Time & Resources
Contents

- Allocating test time: how should test time be allocated among system components, based on test type (feature test, regression test, load test) and operational modes?
- Invoking the test: in what order should the tests be carried out? How many test cases and runs should be invoked?
- Identifying failures that occur: what deviations could be discovered? How should they be documented?
Test Time Allocation

Allocate test time:
1) Among subsystems to be tested
2) Among feature, regression and load test for each subsystem under reliability growth test
3) Among operational modes for each subsystem under load test
1. Test Time: Systems /1

[Figure: the product and its environment -- developed components, acquired components, operating system, hardware, and the interface to other systems -- each receiving a share of test time.]

- The various systems involved each get their share of the test time.
- Allocate time to the interface to other systems based on estimated risk.
- Allocate time to the rest in the same proportion that test cases were allocated.
1. Test Time: Systems /2

Example: total test time is 320 h.

[Figure: interface to other systems (40 h); subsystem components (200 h); operating system (80 h).]

- Interface to other systems: 40 h, based on estimated risk.
- The rest follows the test case proportions:
  - Associated system components: 71.4% (200 h)
  - Operating system component: 28.6% (80 h)
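The arithmetic on this slide can be captured in a small helper. This is only a sketch: the function name and the rounding are mine; the 320 h total, the 40 h risk-based interface share, and the 71.4% / 28.6% proportions come from the example.

```python
def allocate_test_time(total_hours, interface_hours, proportions):
    """Give the systems interface its risk-based share first, then split the
    remainder in the same proportions as the test cases were allocated."""
    remaining = total_hours - interface_hours
    return {name: round(remaining * p) for name, p in proportions.items()}

allocation = allocate_test_time(
    total_hours=320,
    interface_hours=40,  # risk-based share for the interface to other systems
    proportions={"subsystem components": 0.714, "operating system": 0.286},
)
# allocation -> {"subsystem components": 200, "operating system": 80}
```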
2. Test Time: Reliability Growth

For each subsystem's components during reliability growth test:
- Allocate time to feature test for all new test cases (first release).
- Allocate time to regression test for all new test cases (subsequent releases).
- In this way, testing of all critical new operations is guaranteed.
- The remaining time goes to load test.
3. Test Time: Load Test

Allocate time for load test based on the proportion of occurrences in the operational modes.

Example:

Operational mode | Proportion of transactions
Peak hours       | 0.1
Prime hours      | 0.7
Off hours        | 0.2

Operational mode | Test time (h)
                 | Interface | Product | OS
Peak hours       | 4         | 18      | 8
Prime hours      | 28        | 126     | 56
Off hours        | 8         | 36      | 16
Total (h)        | 40        | 180     | 80

The remaining 20 hours go to feature and regression tests.
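Each cell of the table above is just the mode's occurrence proportion times the component's load-test hours. A sketch reproducing the numbers (the variable names are mine; the figures come from the example):

```python
mode_proportions = {"Peak hours": 0.1, "Prime hours": 0.7, "Off hours": 0.2}
load_test_hours = {"Interface": 40, "Product": 180, "OS": 80}  # load-test share per component

# Hours per (operational mode, component) cell of the allocation table.
table = {mode: {comp: round(p * hours) for comp, hours in load_test_hours.items()}
         for mode, p in mode_proportions.items()}
# table["Prime hours"] -> {"Interface": 28, "Product": 126, "OS": 56}
```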
Test Invocation /1

Sequence of system test:
1) Acquired components: certification test only
2) Developed product: feature test and then load test for a new product; feature test and then regression test for subsequent releases
3) Other systems interface: load test only

It is possible to change this sequence or to test different systems in parallel.
Test Invocation /2

- In reliability growth test, perform feature test first and then load test. Then conduct regression test after each build that has a significant change.
- Invocation of test cases should occur at random times.
- In feature test, select in random order from the set of all new test cases plus the regression test cases of the previous release.
- In load test, invoke each operational mode for its allocated proportion of time.
- The number of test cases invoked is determined by the time the operational mode runs and its occurrence rate.
Test Case Selection /1

- Selection should be with replacement for test cases in load test, and without replacement in feature or regression test.
- In feature or regression test, repeated runs are much less likely to differ, because the indirect input variables are tightly controlled.
- In load test, the resulting runs will almost always differ, because the indirect input variables can vary, exploring the failure behavior of different runs.
- Repetitions are inefficient. But in load test, the number of possible runs is so large that the probability of wasting test resources by repeating many runs is infinitesimal.
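In code, the distinction is sampling with versus without replacement. A minimal sketch, assuming four hypothetical operations with made-up occurrence probabilities:

```python
import random

random.seed(7)  # fixed seed so the sketch is repeatable
operations = ["A", "B", "C", "D"]   # hypothetical operations
weights = [0.4, 0.3, 0.2, 0.1]      # hypothetical occurrence probabilities

# Load test: WITH replacement -- an operation can be selected again,
# so a second fault hiding behind the same operation can still be exposed.
load_runs = random.choices(operations, weights=weights, k=12)

# Feature/regression test: WITHOUT replacement -- each test case is invoked
# exactly once, in random order.
feature_order = random.sample(operations, k=len(operations))
```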
Test Case Selection /2

- Therefore, in load test, after each test, replace the element in the population, allowing reselection.
- Why? Operations may be associated with multiple faults. If selection is performed without replacement, the operation can be selected only once, reducing the probability of finding another bug.
Repetition /1

- In invoking test cases, repeat a run in certain special circumstances:
  - To collect more data to improve failure identification, either in the sense of more failures or of better describing known failures
  - To verify failure resolution (fault removal)
- Usually you will require some number n of runs per operation to achieve reasonable confidence that the operation is failure-free.
- How do you select the number of runs n? The number n depends on the nature of the operation and the code that implements it.
Repetition /2

How to select the number of runs n:
- If the occurrence probability of the operation is very small and the operation is non-critical, the risk of remaining fault(s) is usually acceptable.
- If the occurrence probability is very small and the operation is critical, the number of runs for that operation must be increased.
- If the occurrence probability is not very small, the total amount of testing planned may be increased.
Random Run Selection /1

The occurrence probabilities with which the operations are selected for execution should be stationary, i.e., reasonably unchanged with time.

Example (from Musa's book):

Operation | Occurrence probability
A         | 0.7
B         | 0.3

Set 1: ABAABAAABA
Set 2: AAAAAAABBB
Random Run Selection /2

For Set 1, the deviations from the expected occurrence probability are reasonably small, and the selection can be considered stationary. For Set 2 it cannot.

Order of operations | Probability of operation A after each run
                    | Set 1 | Set 2
1                   | 1     | 1
2                   | 0.5   | 1
3                   | 0.67  | 1
4                   | 0.75  | 1
5                   | 0.6   | 1
6                   | 0.67  | 1
7                   | 0.71  | 1
8                   | 0.75  | 0.88
9                   | 0.67  | 0.78
10                  | 0.7   | 0.7

Non-stationary probabilities generally occur because the tester reorders the selection of operations from what would occur at random.

Example from Musa's book.
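The table values are simply the running proportion of A-selections after each run. A quick check (the helper's name is mine; the sequences are the two sets from the slide):

```python
def running_proportion(sequence, op="A"):
    """Proportion of `op` selections after each run, rounded as in the table."""
    count, out = 0, []
    for i, selected in enumerate(sequence, start=1):
        count += (selected == op)
        out.append(round(count / i, 2))
    return out

set1 = running_proportion("ABAABAAABA")
# -> [1.0, 0.5, 0.67, 0.75, 0.6, 0.67, 0.71, 0.75, 0.67, 0.7]
set2 = running_proportion("AAAAAAABBB")
# -> [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.88, 0.78, 0.7]
```

Set 1 hovers around the expected 0.7 from early on; Set 2 stays at 1.0 for seven runs, which is what makes it non-stationary.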
How to Write a Test Suite?

There are many tools that help write test cases and test suites:
- JUnit: helps write test cases for Java code faster and more efficiently.
- HttpUnit: emulates the relevant portions of browser behaviour, including form submission, JavaScript, cookies, etc., and allows Java test code to examine the pages returned from the server.
- Jakarta Cactus: a simple test framework for unit testing server-side Java code, e.g., servlets, etc.
Test Tools /1

Type of tool | Description
Test-Procedure Generators | Generate test procedures from requirements/design/object models
Code (Test) Coverage Analyzers and Code Instrumentors | Identify untested code and support dynamic testing
Memory-Leak Detection | Verify that an application is properly managing its memory resources
Metrics-Reporting Tools | Read source code and display metrics information, such as complexity of data flow, data structure, and control flow. Can provide metrics about code size in terms of numbers of modules, operands, operators, and LOC.

Source: Effective Software Testing: 50 Specific Ways to Improve Your Testing, by Elfriede Dustin
Test Tools /2

Type of tool | Description
Usability-Measurement Tools | User profiling, task analysis, prototyping, and user walk-throughs
Test-Data Generators | Generate test data
Test-Management Tools | Provide test-management functions such as test-procedure documentation, storage, and traceability
Network-Testing Tools | Monitoring, measuring, testing, and diagnosing performance across an entire network

Source: Effective Software Testing: 50 Specific Ways to Improve Your Testing, by Elfriede Dustin
Test Tools /3

Type of tool | Description
GUI-Testing Tools (Capture/Playback) | Automate GUI tests by recording user interactions with online systems, so they may be replayed automatically
Load, Performance, and Stress Testing Tools | Load/performance and stress testing
Specialized Tools | Architecture-specific tools that provide specialized testing of specific architectures or technologies, such as embedded systems

Source: Effective Software Testing: 50 Specific Ways to Improve Your Testing, by Elfriede Dustin
Chapter 7, Section 3
Failure Identification & Documentation
Identifying System Failures

To identify (and document) system failures:
1) Analyze the test output for deviations
2) Determine which deviations are failures
3) Establish when the failures occurred
4) Assign failure severity classes, which will be used in prioritizing failure resolution
5) Document the failures
1. Analyze Test Output /1

A deviation is any departure of system behavior in execution from expected behavior.

How to analyze?
1. Analyze Test Output /2

1. Standard types of deviations can be detected using debugging tools, such as GDB. Examples:
- Illegal memory references
- Deviant return code values
- Deadlocks
- Process crashes
- Hangs
- etc.
1. Analyze Test Output /3

2. Insert assertions manually into the code to set flags that permit programmer-defined deviations to be detected by debugging tools.
- When inserting assertions, it is important to document the deviations they detect, because this information is frequently lost.
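A minimal Python sketch of programmer-defined deviations flagged by assertions. The function and its business rules are invented for illustration; the point is that each assertion documents, in its message, the deviation it detects:

```python
def apply_discount(price, rate):
    # Documented deviation: a discount rate outside [0, 1] is a deviation,
    # even if the arithmetic below would still "work".
    assert 0.0 <= rate <= 1.0, f"deviation: discount rate {rate} outside [0, 1]"
    discounted = price * (1 - rate)
    # Documented deviation: the result must never exceed the original price.
    assert discounted <= price, "deviation: discount increased the price"
    return discounted

assert apply_discount(100.0, 0.2) == 80.0  # normal run: no deviation flagged
```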
1. Analyze Test Output /4

3. Examine the output automatically with built-in audit code or an external checker (instrumentation).
- It may require considerable labor to specify which output variables must be checked and what their correct values are.
- When using an output checker, you can save preparation time by specifying acceptable ranges of output variables instead of computing precise values.
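A range-based output checker can be sketched in a few lines. The variable names and ranges are hypothetical; the idea is that a range specification replaces the labor of computing precise expected values:

```python
def check_output(actual, expected_ranges):
    """Return the names of output variables that fall outside their
    acceptable ranges -- each one is a deviation to investigate."""
    deviations = []
    for name, (low, high) in expected_ranges.items():
        if not (low <= actual[name] <= high):
            deviations.append(name)
    return deviations

ranges = {"response_ms": (0, 200), "rows_returned": (1, 500)}
assert check_output({"response_ms": 120, "rows_returned": 42}, ranges) == []
assert check_output({"response_ms": 950, "rows_returned": 42}, ranges) == ["response_ms"]
```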
1. Analyze Test Output /5

4. Manually inspect test results for deviations that are difficult to specify in advance and are not amenable to automatic detection.
- In theory, there is no need to inspect manually when the feature test is adequate.
- Deviations are sometimes missed in load test because they cannot be detected automatically and not enough effort is devoted to detecting them manually.
2. Deviations & Failures /1

- Manual analysis of deviations is necessary to determine whether they violate customer requirements and hence represent failures.
- Failures of higher severity can be detected directly, without analyzing deviations. Examples:
  - Generic failures (for example, process crashes)
  - Project-specific failures (for example, incomplete transactions)
2. Deviations & Failures /2

Incidents, troubles, modifications, and change reports do not necessarily indicate failures. For example:
- An event or incident reported by a user may not be a software failure.
- It may represent a failure of the human component of a system (possibly due to insufficient training).
- It may be a desire for a new feature.
- There may be a documentation problem (such as lack of clarity), etc.
2. Deviations & Failures /3

Does a failure exist if the system behavior doesn't violate any written requirement?
- Yes, if there is general dissatisfaction with the behavior in the "user community".
- User community dissatisfaction implies a considerable consensus in the expectations of users, not simply the desires of a very small number of unreasonable users.
3. Failure Occurrence Time /1

- In establishing when a failure occurred, use the same measure (natural or time units) used in setting the failure intensity objective.
- In certification test, measure the units at the point at which the failure occurred.
- In reliability growth test, measure the failures that occur in an interval, such as 5 failures in 1000 transactions. This format is usually called "grouped failure data."
3. Failure Occurrence Time /2

[Table from Musa's book: usage data for a system with a maximum of 40 users and a utilization rate of 80%.]

- If one failure happens at 10:00 AM and another at 2:00 PM, the approximate execution time interval between them will be 2.64 hours.
- Remember that an execution hour may be different from a calendar hour.

Example from Musa's book.
4. Assign Severity Class

- Map the detected failures to the failure severity classes that were determined earlier.
- Document the failures and their severity classes.
- Start fixing the code for the failures of higher severity.
5. Documenting Failure /1

How well a failure is documented affects how likely it is to be fixed.

Types of failure reports:
- Coding error
- Design issue
- Suggestion
- Documentation problem
- Query
- Hardware and/or other systems interactions
5. Documenting Failure /2

- Write a 1-2 line problem summary. The summary should be unique to the problem.
- Explain how to reproduce the problem in a minimum number of steps.
- Describe the failure severity (fatal, serious, minor).
- Write problem reports immediately as the failures happen.
- Include the other information (report number, failure number, release version, etc.) necessary for keeping track of the failure.
5. Documenting Failure /3

External problem tracking report:

Report no.:                          Date:
Program:          Release:           Version:
Report type: Coding error / Design issue / Suggestion / Documentation / Query / Hardware
Problem summary:
Reproducible: Yes / No
Problem and how to reproduce it:
Suggested fix (optional):
Reported by:                         (date)
5. Documenting Failure /4

Internal problem tracking report:

Status: open / closed
Severity class: (low) 1 2 3 4 (high)
Priority: (low) 1 2 3 4 (high)
Resolution: Pending / Fixed / Irreproducible / Deferred / As designed / Can't be fixed / Withdrawn / Need more info / Disagree
Resolved by:                         (date)
Tested by:                           (date)
Treated as deferred: Yes / No
Reporting

[Chart: test cases completed (%) per week for Test Dept. X.]

                       week1   week2   week3   week4
Test Cases Passed        200     175     150      50
Test Cases Failed         50      60      70      25
Test Cases Planned      1000     900     875     850
Test Cases Completed     250     485     705     780
Completed %              25%     54%     81%     92%
Pass Rate                80%     74%     68%     67%
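The derived rows can be reproduced from the raw weekly counts. This sketch assumes (consistently with the numbers in the table) that "completed" is the cumulative passed + failed, "completed %" is completed over that week's planned total, and "pass rate" is passed / (passed + failed) for the week:

```python
passed  = [200, 175, 150, 50]
failed  = [50, 60, 70, 25]
planned = [1000, 900, 875, 850]

# Cumulative test cases completed so far.
completed, running = [], 0
for p, f in zip(passed, failed):
    running += p + f
    completed.append(running)

completed_pct = [round(100 * c / n) for c, n in zip(completed, planned)]
pass_rate     = [round(100 * p / (p + f)) for p, f in zip(passed, failed)]
# completed     -> [250, 485, 705, 780]
# completed_pct -> [25, 54, 81, 92]
# pass_rate     -> [80, 74, 68, 67]
```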
Report: Time Between Failures

[Chart: time between failures (hours, 0-1000) plotted against the i-th failure (1-91).]
Report: MTTF

[Chart: mean time to failure (hours, 0-900) plotted against the i-th failure (1-91), with the raw failure times and means computed over averaging windows AV = 2, 5, and 10.]
Chapter 7, Section 4
Various Software Testing Techniques
Levels of Testing

- Level 0: Testing = debugging. No distinction between wrong program code and wrong program design.
- Level 1: The purpose of testing is to show that the software works (using a prototype, etc.).
- Level 2: The purpose of testing is to show that the software does not work (hacking).
- Level 3: The purpose of testing is to reduce risk (commercial software, reliability testing).
Basic Testing Methods

Two widely recognized testing methods:
- White box testing: reveals problems with the internal structure of a program.
- Black box testing: assesses how well a program meets its requirements.
1. White Box Testing

- Checks the internal structure of a program
- Requires detailed knowledge of the structure
- A common goal is path coverage: how many of the possible execution paths are actually tested?
- Effectiveness is often measured by the fraction of code exercised by test cases
2. Black Box Testing

- Checks how well a program meets its requirements
- Assumes that the requirements are already validated
- Looks for missing or incorrect functionality
- Exercises the system with input for which the expected output is known
- Various methods: performance, stress, reliability, security testing
Testing Levels

Testing occurs throughout the software lifecycle:

[Figure: lifecycle phases (feasibility study, product definition, high-level design, detailed design, implementation) paired with testing levels: (1) unit testing, (2) integration and system testing, (3) evaluation and acceptance, (4) use -- with installation and regression (reliability growth) testing after release.]
1. Unit Testing

- White box testing of a module in isolation from others, in a "controlled test environment"
- A unit is a function or a small library
- Small enough to test thoroughly
- Exercises one unit in isolation from the others
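The deck's test-suite examples (JUnit, etc.) target Java; the same idea in Python's built-in unittest looks like this. The unit under test, word_count, is invented for illustration:

```python
import unittest

def word_count(text):
    """Unit under test: number of whitespace-separated words."""
    return len(text.split())

class WordCountTest(unittest.TestCase):
    # Each test exercises the unit in isolation, in a controlled environment.
    def test_simple_sentence(self):
        self.assertEqual(word_count("executing a test"), 3)

    def test_empty_string(self):
        self.assertEqual(word_count(""), 0)

# Run the suite programmatically through the standard test runner.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(WordCountTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```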
2. Integration Testing

- Units are combined, and the module is exercised
- The focus is on the interfaces between units
- White box, with some black box
- Three main approaches: top-down, bottom-up, big bang
Integration Testing: Top-Down

- The control program is tested first
- Modules are integrated one at a time
- The major emphasis is on interface testing
- Interface errors are discovered early
- Forms a basic early prototype
- Test stubs are needed
- Errors in low levels are found late
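The role of a test stub in top-down integration can be sketched as follows. The order-processing names are hypothetical; the point is that the control module is exercised first, with a canned stand-in underneath it until the real lower-level module is integrated:

```python
def billing_stub(order):
    """Test stub: stands in for the not-yet-integrated billing module
    and returns a canned answer."""
    return 0.0

def process_order(order, compute_charge):
    """Control module under test; the lower-level module is injected,
    so the interface between the two is what gets exercised."""
    charge = compute_charge(order)
    return {"order_id": order["id"], "charge": charge, "status": "accepted"}

# Top-down: the control module runs against the stub before billing exists.
result = process_order({"id": 17}, compute_charge=billing_stub)
```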
Integration Testing: Bottom-Up

- Modules are integrated in clusters as desired
- Shows feasibility of modules early on
- Emphasis on functionality and performance
- Usually, test stubs are not needed
- Errors in critical modules are found early
- Many modules must be integrated before a working program is available
- Interface errors are discovered late
Integration Testing: 'Big Bang'

- Units are completed and tested independently, then integrated all at once
- Quick and cheap (no stubs or drivers)
- Errors: discovered later, more are found, and they are more expensive to repair
- The most commonly used approach
Big Bang

[Chart: development progress (% coded) across the sequential activities -- requirements, design, code, integration, test -- showing "late design breakage" once integration begins, pushing completion past the original target date on the project schedule.]
3. External Function Test

- A black box test
- Verifies that the system correctly implements the specified functions
- Sometimes known as an alpha test
- In-house testers mimic the end use of the system
4. System Test

- A more robust version of the external function test
- The difference is the test platform: the environment reflects end use, including hardware, database size, system complexity, and external factors
- Can more accurately test nonfunctional system requirements (performance, security, etc.)
5. Acceptance Testing

- Also known as beta testing
- The completed system is tested by end users
- More realistic test usage than the 'system' phase
- Validates the system against user expectations
- Determines if the system is ready for deployment
6. Installation Testing

- The testing of full, partial, or upgrade install/uninstall processes
- Not well documented in the literature
7. Regression Testing

- Tests modified software
- Verifies that changes are correct and do not adversely affect other system components
- Selects the existing test cases that are deemed necessary to validate the modification
- With bug fixes, four things can happen: fix a bug; add a new bug; damage the program structure; damage the program integrity. Three of them are unwanted.
Post-Release Activities

Do we conduct "testing" post-release? Post-release testing is usually called quality assessment, and includes techniques such as:
- Reliability-Availability-Maintenance (RAM)
- Fault Tree Analysis (FTA)
- Failure Modes and Effect Analysis (FMEA)
- HAZOP
- etc.

SENG635 (Winter 2007) [email protected]
Quality Assessment

- Pre-release: testing, certification, verification, validation
- Post-release: quality assessment, Reliability-Availability-Maintenance (RAM) analysis

Ref: Design for Electrical & Comp. Engineers, J.E. Salt et al., Wiley
Test Completion Criteria

Q. When is testing enough?
- Test phase time or resources are exhausted
- All black-box test cases are run
- White-box test coverage targets are met
- The rate of fault discovery goes below a target value
- A target percentage of all faults in the system are found
- The measured reliability of the system achieves its target value
Testing vs. Inspection

Inspections are strict and close examinations conducted on specifications, design, code, test, and other artifacts.

Inspections                                           | Testing
Allow for defect detection, prevention, and isolation | Allows for defect detection
Start early in the life cycle                         | Starts later in the life cycle

- Inspections are up to 20 times more efficient than testing
- Code reading detects twice as many defects/hour as testing
- 80% of development errors are usually found by inspections
- Inspections have resulted in a 10x reduction in the cost of finding errors
Inspections or Testing?

Q. Can inspection replace testing?
No. Inspections could replace testing if and only if all information gleaned through testing could be obtained through inspection. Testing is still needed for:
- Complex interactions in large systems
- Software reliability indicators
- Nonfunctional requirements: performance, usability, etc.
Fault Based Test Techniques

- Error-based testing: defines classes of errors, as well as inputs that will reveal any error of a particular class, if it exists.
- Fault seeding: injects faults into the software prior to test. Based on the number of these artificial faults discovered during testing, inferences are made about the number of remaining 'real' faults. For this to be valid, the seeded faults must be assumed similar to the real faults.
- Mutation testing: injects faults into code to determine optimal test inputs.
- Fault injection: evaluates the impact of changing the code or state of an executing program on the behavior of the software.
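The fault-seeding inference can be sketched with the classic proportional estimator (often attributed to Mills). The slide does not give a formula, so treat this as one common choice under the stated assumption that seeded and real faults are found at the same rate; the numbers are made up:

```python
def estimate_remaining_faults(seeded, seeded_found, real_found):
    """If seeded faults are found at the same rate as real ones, then
    total_real ~= real_found * seeded / seeded_found; the remainder is
    the estimate of real faults still in the code."""
    total_real = round(real_found * seeded / seeded_found)
    return total_real - real_found

# 20 faults seeded; testing found 10 of them plus 30 real faults.
# Half the seeded faults were found, so we estimate half the real
# faults were found too: ~60 total real faults, ~30 still remaining.
assert estimate_remaining_faults(seeded=20, seeded_found=10, real_found=30) == 30
```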
Nonfunctional Testing

- Nonfunctional requirements do not endow the system with additional functions, but rather constrain or further define how the system will perform any given function.
- The nonfunctional aspects of an application or system, such as performance, security, compatibility, and usability, can require considerable effort to test and perfect.
- Meeting nonfunctional requirements can make the difference between an application that merely performs its functions, and one that is well received by its end users and technical support personnel, and is readily maintainable by system administrators.
Nonfunctional Risks /1

1. Poor performance:
- Poor application performance may be merely an inconvenience, or it may render the application worthless to the end user.
- When evaluating the performance risk of an application, consider the need for the application to handle large quantities of users and data.
- Example: in server-based systems (Web applications), failure to evaluate performance can leave scaling capabilities unknown, and lead to inaccurate cost estimates, or a lack of cost estimates, for system growth.
Nonfunctional Risks /2

2. Incompatibility:
- The proper functioning of an application on multiple end-user system configurations is of primary concern in most software-development activities.
- Incompatibilities result in many technical-support calls and product returns.
- Example: if an application is released without verifying the compatibility of its components with all user environments, and an incompatibility is later identified, there will be no choice but to remove the third-party control and provide an alternate or custom implementation, requiring extra development and testing time.
Nonfunctional Risks /3

3. Inadequate security:
- Although security should be considered for all applications, Web-based software projects must be particularly concerned with this aspect.
- Improper attention to security can result in compromised customer data, and can even lead to legal action.
- Example: ignoring security on a Web application may leave it vulnerable to attack from malicious Internet users, which in turn can result in downtime and loss of customer data.
Nonfunctional Risks /4Nonfunctional Risks /4Nonfunctional Risks /4Nonfunctional Risks /44. Insufficient Usability: 4. Insufficient Usability: Inadequate attention to the usability aspects of an Inadequate attention to the usability aspects of an
application can lead to a poor acceptance rate among end users, based on the perception that the application is difficult to use or that it does notapplication is difficult to use, or that it does not perform the minimum required functions.
This can trigger an increase in technical-support calls, and can negatively affect user acceptance and application sales.
Example: Software that does not let users save their data at will.
Nonfunctional Risks /5
5. Concurrency:
Concurrency in a multiuser client-server architecture. In client-server systems, concurrency, or
simultaneous access to application data, is a major concern because unexpected behavior can occur.
A strategy for handling data isolation is necessary to ensure that the data remains consistent in a multiuser environment.
Example: Two users attempt to modify the same item of data.
Documenting Nonfunctional Requirements
Nonfunctional requirements are usually documented in two ways:
 A system-wide specification is created that defines nonfunctional requirements for all use-cases in the system. Example: “The user interface of the Web system must be compatible with Netscape Navigator 4.x or higher and Microsoft Internet Explorer 4.x or higher.”
 Each requirement description contains a section titled “Nonfunctional Requirements,” which documents any specific nonfunctional needs of that particular requirement that differ from the system-wide specifications.
Nonfunctional Testing Tips /1
Do not make nonfunctional testing an
afterthought.
It is usually recommended that nonfunctional
requirements are associated with their corresponding functional requirements and defined during the requirements phase, e.g., included in or attached to the use-case specification document.
Source: Effective Software Testing: 50 Specific Ways to Improve Your Testing By Elfriede Dustin
Nonfunctional Testing Tips /2
Conduct performance testing with production-sized databases.
An application may be tested with 1, 100, 500, 1,000,
5,000, 10,000, 50,000, and 100,000 records to investigate how performance changes as the data quantity grows.
This type of testing also makes it possible to find the upper bound of the application’s data capability, meaning the largest database with which the application performs acceptably.
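This kind of test can be sketched as a loop over increasing database sizes, timing a representative query at each size. Everything here is illustrative: the schema, the query, the sizes, and the acceptability threshold `MAX_ACCEPTABLE_SECONDS` are assumptions, not values from the slides.

```python
import sqlite3
import time

# Illustrative sketch: time a representative query as the record count
# grows, then report the largest size still within an assumed threshold.
SIZES = [1, 100, 500, 1000, 5000, 10000]
MAX_ACCEPTABLE_SECONDS = 0.5  # hypothetical performance requirement

def make_db(n):
    # Build an in-memory database with n synthetic records.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER, total REAL)")
    conn.executemany("INSERT INTO orders VALUES (?, ?)",
                     ((i, i * 1.5) for i in range(n)))
    return conn

results = {}
for n in SIZES:
    conn = make_db(n)
    start = time.perf_counter()
    conn.execute("SELECT COUNT(*), AVG(total) FROM orders").fetchone()
    results[n] = time.perf_counter() - start

# The largest size that still meets the threshold approximates the
# application's data-capability upper bound.
upper_bound = max(n for n in SIZES if results[n] <= MAX_ACCEPTABLE_SECONDS)
print(upper_bound)
```

In a real performance test the database would live on disk, the query mix would mirror the operational profile, and the sizes would extend to and beyond the expected production volume.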
Nonfunctional Testing Tips /3
Tailor usability tests to the intended audience.
Try several ways to get end-user input:
Subject-matter experts; focus groups; surveys; study of similar products; observation of users in action.
Prototyping: an early user-interface prototype (as a proof of concept) is useful.
Nonfunctional Testing Tips /4
Consider all aspects of security, for specific requirements and system-wide.
Each functional requirement likely has a specific set
of related security issues to be addressed in the software implementation.
With the security-related requirements properly documented, test procedures can be created to verify that the system meets them.
If the security risk associated with an application has been determined to be substantial, it is worth investigating options for outsourcing security-related testing.
Nonfunctional Testing Tips /5
Investigate the system’s implementation to plan for concurrency tests.
Concurrency is the handling of multiple users attempting to
access the same data at the same time.
An application must have a concurrency model:
 Pessimistic: the model places locks on data; no other user is allowed
to read/write until the lock is removed.
 Optimistic: the model allows other users to read the data but not to update it.
 No concurrency protection: a “last one wins” policy.
It is important to design tests to verify that the application properly handles concurrency, following the concurrency model selected for the project.
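One way such a test can exercise the optimistic model is sketched below (a hypothetical version-number scheme, not from the slides): each record carries a version, and an update succeeds only if the version the writer originally read is still current, so a lost update is detected rather than silently absorbed.

```python
# Illustrative optimistic-concurrency check: a write is rejected if the
# record changed after the writer read it ("compare on version").
class Record:
    def __init__(self, value):
        self.value = value
        self.version = 0

    def update(self, new_value, read_version):
        if read_version != self.version:
            return False  # conflict: data changed since this user read it
        self.value = new_value
        self.version += 1
        return True

rec = Record("draft")
# Two users read the same version of the record ...
v_alice = rec.version
v_bob = rec.version
# ... the first write wins; the second is rejected instead of lost.
assert rec.update("alice's edit", v_alice) is True
assert rec.update("bob's edit", v_bob) is False
print(rec.value)  # → alice's edit
```

Under a pessimistic model the equivalent test would verify that the second user is blocked from reading/writing while the first holds the lock; under “last one wins” it would document that Bob’s write silently overwrites Alice’s.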
Nonfunctional Testing Tips /6
Set up an efficient environment for compatibility testing.
 Certification test of the other components or systems: all the other components/systems that the developed
software system interacts with must pass the certification test.
 If resources are limited, one should consider ranking the possible configurations in order, from most to least common for the target application.
 Alpha and beta testing.