Contents - Integration
Definitions
Why integration testing?
Aspects of integration testing
Dependency Analysis
Integration Faults
Integration Patterns
Definitions
System: set of components
Component: can consist of other components
Increment: set of components that corresponds with a subset of requirements
Stub: partial implementation of a component
Driver: system that applies test cases to a component
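A minimal sketch of how a stub and a driver fit together (the component, stub, and driver names below are hypothetical, not from the slides):

```python
# Hypothetical component the SUT depends on, not yet integrated.
class TaxService:
    def rate(self, region):
        raise NotImplementedError

# Stub: partial implementation of that component, returning a canned answer.
class TaxServiceStub(TaxService):
    def rate(self, region):
        return 0.25

# Component under test: depends on TaxService.
class Invoice:
    def __init__(self, tax_service):
        self.tax_service = tax_service
    def total(self, net, region):
        return net * (1 + self.tax_service.rate(region))

# Driver: applies test cases to the component under test.
def driver():
    invoice = Invoice(TaxServiceStub())
    assert invoice.total(100, "EU") == 125.0
    return "pass"

print(driver())   # pass
```

The stub stands in for a missing server component; the driver stands in for a missing client.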
Why integration testing?
Antidecomposition axiom: a system-scope test does not cover everything
→ Insufficient to test only as a whole; also test components and the interactions between them.
Components must work together
→ Show that components are minimally interoperable.
Aspects of integration testing
Integration testing: search for component faults that cause intercomponent failures.
How?
– Which components?
– In what sequence?
– Which test design technique?
Aspects - scopes
Component (integration focus) | System (integration scope) | Intercomponent interfaces
Method    | Class     | Instance variables, intraclass messages
Class     | Cluster   | Interclass messages
Cluster   | Subsystem | Interclass messages, interpackage messages
Subsystem | System    | Interprocess communication, Remote Procedure Call, ORB services, OS services
Aspects - flexibility
Flexibility is important:
– Not all components are available
– Components change
Dependency Analysis
Composition and aggregation
Inheritance
Global variables
Calls to an API
Server objects
...
Integration Faults
Example: 2/3 of the faults in a large C system were interface-related
Typical interface bugs:
– Wrong parameters
– Missing, overlapping or conflicting functions
– Client violates server’s preconditions
– Thread X crashes when process Y is running
– ...
Integration Patterns
Big Bang
Bottom-up
Top-down
Collaboration
Backbone
Layer
Client/Server
Distributed Services
High-frequency
Big Bang - (Dis)Advantages
+ Requires little effort
- Few clues about fault locations
- No distinction between critical and peripheral components
- Interface faults can be hidden
Bottom-up - (Dis)Advantages
+ Early testing
+ May proceed in parallel
+ Well suited to responsibility-based design
- Driver development is the most significant cost
- Error-prone
- Highest level of control and component interoperability tested at the end
Top-Down - (Dis)Advantages
+ Early testing
+ Reduced driver development (just 1 driver)
+ Stubs are easier to code than drivers
+ Components may be developed in parallel
- Large number of stubs
- Complex test cases can require recoding of stubs
- Fragile → last-minute changes
- Late interoperability testing of all components
Collaboration - Advantages
+ Interface coverage with few test runs
+ Useful for system scope testing
+ Minimally coupled components of a collaboration
+ End-to-end coverage sooner than bottom-up
+ Minimizes driver development costs
Collaboration - Disadvantages
- Intercollaboration dependencies may be subtle
- Not all collaborations may be covered
- Scenario-based big bang
- May require many stubs
- Exercising lower-level components may be difficult
- Specified collaborations may be incomplete
Backbone
Backbone: infrastructure of a system
Applications depend on the backbone
Combines top-down, bottom-up and big bang
Known use: Windows NT
Backbone - (Dis)Advantages
+ Combines advantages of top-down and bottom-up
+ Begins on the early side of the midpoint in development
+ Development and testing may proceed in parallel
- Careful analysis of system structure and dependencies is necessary
- Requires both drivers and stubs
- Each backbone component must be adequately tested
Layer - (Dis)Advantages
Same as bottom-up or top-down integration
+ If a stack of layers is needed as a subsystem → available at the earliest possible date when using the top-down variant
- Viability of the stack untested until the lowest layer is integrated
- Mostly used in time-critical applications → performance of the stack only tested at the end
Client/Server - (Dis)Advantages
+ Avoids the problems of big bang
+ Can be sequenced according to priority of risk
+ Drivers & test cases suitable for reuse
+ Controllable, repeatable testing
- Cost of driver and stub development
- Midway or late exercise of end-to-end use cases
Distributed Services - (Dis)Advantages
+ Avoids big bang’s disadvantages
+ Provides a basis for system scope testing
+ Controllable, repeatable testing
- Largest cost in driver and stub development
- End-to-end use cases only tested midway to late in the testing cycle
- Establishing a stable testing environment is costly
High Frequency
Used in rapid incremental development
Relies on a stable increment
Must be automated in order to keep up with the pace
Procedure consists of 3 main steps
Known use: Windows NT 3.0
– 5,600,000 lines of C code
High Frequency – Developer step
Write/revise code & test suite
Desk check/review/inspect code & tests
Perform static analysis & resolve errors
Compile & build
Run test suite
If passed, commit changes
High Frequency – Build tester step
At regular intervals, the integration tester stops accepting changes and builds the system
Test suites are run:
– Smoke tests
– Newly developed tests
– As many other test suites as possible
High Frequency – Evaluation step
Evaluate results
If errors are discovered, the developer that broke the code is notified
This developer must fix the problem immediately
High Frequency - Advantages
+ Coding & testing equally important → effective bug prevention strategy
+ Errors, omissions and incorrect assumptions are revealed early
+ Recent additions cause failures → easy debugging
+ Producing a system that works → team sees tangible results early → a happy development team
High Frequency - Disadvantages
- Developing and maintaining sources and tests requires a lot of commitment
- Questionable effectiveness due to simpler test suites
- Initial cycles will not go smoothly
- Susceptible to the pesticide paradox
Contents - Application Systems
Testing Application Systems
Test Design Patterns
Implementation-specific Capabilities
Post-development Testing
Testing Application Systems
Test from requirements
Some dimensions are implementation-specific:
– Tightly coupled capabilities
– Test harness interaction
– Some system scope faults
Test strategy
First integration testing, then system scope testing per increment
3 primary goals:
– Reveal bugs only present at system scope
– Satisfy all system requirements
– “Can we ship it yet?”
Requirements
Typical sources:
– Natural-language documents
– Use cases
– User interface prototypes, layouts and models
– Features described in product literature
– ...
Extended Use Case Test
Intent
– Develop an application system test suite by modeling essential capabilities as extended use cases
Context
– Essential requirements are described by use cases
– Each use case → large number of instances/scenarios
Extended Use Case Test
“Extend” a use case with:
– Domains of operational variables
– Input/output relations
– Relative frequency
– Sequential dependencies among use cases
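Extending a use case with operational-variable domains and an input/output relation maps naturally onto a parameterized test. A sketch, in which the use case, its variables, and the `shipping_cost` operation are hypothetical:

```python
from itertools import product

# Hypothetical SUT operation behind a "place order" use case.
def shipping_cost(customer_type, order_total):
    if customer_type == "premium" or order_total >= 100:
        return 0          # free shipping
    return 5              # flat fee otherwise

# Domains of the operational variables (the "extension" of the use case),
# with boundary values included.
CUSTOMER_TYPES = ["regular", "premium"]
ORDER_TOTALS = [0, 99.99, 100, 250]

# Input/output relation, stated independently of the code under test.
def expected(customer_type, order_total):
    return 0 if customer_type == "premium" or order_total >= 100 else 5

# One test variant per combination of operational-variable values.
for ct, total in product(CUSTOMER_TYPES, ORDER_TOTALS):
    assert shipping_cost(ct, total) == expected(ct, total), (ct, total)
```

Each combination of values is one variant of the use case; the exit criterion "every variant exercised at least once" then becomes mechanical.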
Extended Use Case Test
Entry criteria
– Extended use cases developed and validated
– The SUT has undergone integration tests
Exit criteria
– All requirements exercised
– Every variant of a use case exercised at least once
Extended Use Case Test - (Dis)Advantages
+ Use cases widely used
+ Reflect customer/user point of view
+ May be used as a systematic basis for tests
- Level of abstraction for use cases unclear
- Use cases are not used to specify qualities
- Composite use cases must be flattened to be independently testable
- Does not necessarily imply (code) coverage
Covered in CRUD
Create, Read, Update, Delete
– Operations to be provided by domain objects
Intent
– Verify that all basic operations are exercised for each problem domain object in the SUT
Context
– Test suites developed from use cases cannot guarantee coverage of problem domain class logic
Covered in CRUD
Do a CRUD coverage analysis of the use case-based tests for each class
(coverage outcome) | Required | Optional | Prohibited
Exercised – pass | Validated | Surprise | Failure
Exercised – throws expected exception | Failure | Validated | Validated
Not exercised | Incomplete – Add capability test | Incomplete – Add exception test | Incomplete – Add exception test
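The coverage analysis itself is a simple cross-tabulation. A minimal sketch, assuming a hypothetical log of which (class, operation) pairs the use-case-based suite exercised:

```python
# Hypothetical log of (domain class, operation) pairs exercised by the
# use-case-based test suite.
exercised = {
    ("Customer", "create"), ("Customer", "read"), ("Customer", "update"),
    ("Order", "create"), ("Order", "read"),
}

CRUD = ["create", "read", "update", "delete"]
classes = ["Customer", "Order"]

# CRUD coverage analysis: for each class, which basic operations were
# never exercised and therefore need an added test?
gaps = {
    cls: [op for op in CRUD if (cls, op) not in exercised]
    for cls in classes
}
print(gaps)  # {'Customer': ['delete'], 'Order': ['update', 'delete']}
```

Each gap is then classified via the table above: a missing required operation gets a capability test, a prohibited one an exception test.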
Covered in CRUD
Entry criteria
– Defined use cases and problem domain classes
Exit criteria
– All basic operations exercised for all problem domain classes, when applicable
Allocate Test by Profile
Intent
– Allocate the overall testing budget to each use case in proportion to its relative frequency
Context
– Testing budgets are limited
– Choose which variants, and how many of them, to run in an extended use case test
Allocate Test by Profile
Assumes bugs are distributed evenly across use cases
– Frequently used use cases are the ones most likely to fail
– May also need to prioritize on other criteria such as complexity or criticality
Allocate Test by Profile - Example
1000 hours available for tests:
– 1 hour to run a test case
– 5% of test cases reveal bugs
– 4 hours to correct a bug
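With these figures, the expected cost of one test case is 1 h + 5% × 4 h = 1.2 h, so about 1000 / 1.2 ≈ 833 test cases fit in the budget; the operational profile then divides them over the use cases. A sketch of the arithmetic (the use cases and their frequencies are hypothetical):

```python
BUDGET_H = 1000      # available testing hours
RUN_H = 1.0          # hours to run one test case
BUG_RATE = 0.05      # fraction of test cases that reveal a bug
FIX_H = 4.0          # hours to correct one bug

# Expected cost per test case, including expected debugging time.
cost_per_test = RUN_H + BUG_RATE * FIX_H          # 1.2 hours
total_tests = int(BUDGET_H / cost_per_test)       # 833 test cases

# Hypothetical operational profile: relative frequency per use case.
profile = {"withdraw": 0.50, "deposit": 0.30, "transfer": 0.20}

# Allocate test cases in proportion to relative frequency.
allocation = {uc: round(total_tests * f) for uc, f in profile.items()}
print(total_tests, allocation)
```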
Impl.-specific Capabilities
Quality/architectural capabilities
– E.g. performance, modifiability, scalability
– Often not described by use cases
Configuration and Compatibility
Context
– Configuration combinations that must be supported
– Same behavior expected for different configurations
Strategy
– Define the compatibility and configuration requirements
– Identify the variables
– Identify allowed deviations and unspecified variables
Performance
Context
– Software systems must produce results within acceptable time intervals
Strategy
– Express a time interval requirement for the SUT
– Requirements for worst-case and average response
– Load testing and volume testing
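A minimal sketch of checking both a worst-case and an average requirement (the operation and the 50 ms deadline are hypothetical; real load tests run far more iterations under realistic load):

```python
import time

# Hypothetical operation under test.
def lookup(key):
    time.sleep(0.001)          # stand-in for real work
    return key.upper()

DEADLINE_S = 0.05              # hypothetical worst-case requirement
RUNS = 20

# Measure each run; check both the worst case and the average.
durations = []
for i in range(RUNS):
    start = time.perf_counter()
    lookup(f"item-{i}")
    durations.append(time.perf_counter() - start)

worst = max(durations)
average = sum(durations) / RUNS
assert worst <= DEADLINE_S, f"worst case {worst:.4f}s exceeds deadline"
assert average <= DEADLINE_S
```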
Integrity and Fault Tolerance
Concurrency testing
– Context
• Concurrent execution with shared resources
• Common problems: resource contention resolution, deadlock avoidance, priority inversion, race conditions
– Strategy
• Identify configurations of concurrent execution
• Run several processes and threads; simulate typical input patterns
• Attempt to cause abnormal termination
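A minimal sketch of a concurrency test over a shared resource (the counter and thread counts are hypothetical): several threads hammer a shared counter, and the test checks the invariant that no updates were lost. Removing the lock would make the read-modify-write race and the assertion fail intermittently.

```python
import threading

# Hypothetical shared resource: a counter guarded by a lock.
lock = threading.Lock()
counter = 0

def worker(increments):
    global counter
    for _ in range(increments):
        with lock:                 # protects the read-modify-write
            counter += 1

# Configuration of concurrent execution: several threads, typical load.
THREADS, INCREMENTS = 8, 10_000
threads = [threading.Thread(target=worker, args=(INCREMENTS,))
           for _ in range(THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Invariant check: no lost updates.
assert counter == THREADS * INCREMENTS, counter
print(counter)   # 80000
```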
Integrity and Fault Tolerance
Stress testing
– Context
• Analogous to vehicle crash testing
• Rate of input that exceeds design limits → fail-safe response of the SUT
– Strategy
• Design tests to cause a failure
• Reveal 2 kinds of faults:
– Lack of fail-safe behavior
– Load-sensitive bugs
• Components that catch exceptions → should be unit tested with Controlled Exception Test
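A sketch of the fail-safe half of this strategy (the bounded queue and its limit are hypothetical): input is pushed 50% past the design limit, and the test checks for a controlled rejection rather than a crash or corrupted state.

```python
# Hypothetical SUT: a bounded queue with a design limit of 100 entries.
class BoundedQueue:
    LIMIT = 100

    def __init__(self):
        self._items = []

    def put(self, item):
        if len(self._items) >= self.LIMIT:
            raise OverflowError("queue full")   # fail-safe: reject, don't crash
        self._items.append(item)

# Stress test: push input past the design limit.
q = BoundedQueue()
rejected = 0
for i in range(150):                # 50% over the design limit
    try:
        q.put(i)
    except OverflowError:
        rejected += 1

assert rejected == 50               # every excess input was refused cleanly
assert len(q._items) == BoundedQueue.LIMIT
```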
Integrity and Fault Tolerance
Restart/recovery testing
– Context
• SUT supports automatic or manual recovery from failure
• E.g. data link fails, system crash, ...
– Strategy
• Controlled exception test: simulate the failure mode
→ Run a regression test to verify recovery
Human-Computer Interaction
Context
– Capabilities of the SUT may require HCI
– Correctness, ease of use, ... of the SUT must be tested
– HCI bugs can be critical
Strategy
– Enumerate capabilities/features
– Define a pass/no pass criterion
– Run the tests
Several types of HCI testing
Usability
Security
Localization
Installation
User documentation
Operator procedure
Serviceability
Post-development Testing
After developer-administered testing:
Alpha test: by an independent group, performed within the developer’s company
Beta test: by representative end-users
Acceptance test: does the customer accept the system?
Compliance test: test compliance with standards and regulations