Testing and the UML 2.0 Testing Profile
Based on the FOKUS UML Testing Profile and
Object-Oriented Testing by James Gain
©I. Schieferdecker
Outline
• Introduction
• Testing
• Testing and UML
• The Testing Profile
• Its Relation to TTCN-3
• What it brings
Testing
• In general, testing is the process of determining whether an artifact meets a given specification. It is one of the four basic technical activities in the software development process (analysis, design, implementation, testing). It is used to validate and verify products, or the processes by which they are made. We must distinguish between the correct behaviour and the expected behaviour. Testing methods should incorporate accountability, repeatability and traceability.
Testing
• Accountability: who performs the test.
• Repeatability: can the test be repeated? It must have known inputs, outputs, and expected and actual results.
• Traceability: the ability to trace the history of each requirement from the problem description through
  • conceptual models,
  • specification models, and
  • deployment models
to the implementation, in order to reconstruct the significant events that led to the final operational software system. In other words, traceability is the ability to record information that will allow any investigation to reconstruct events, changes and reasons for change.
Verification and Validation
• There is a distinction between verification (checking that we are building the product correctly) and validation (checking that we are building the right product).
• Program testing relates more to testing the design (verification).
• Model testing relates more to testing the specification (validation).
Types of Testing
• Usability testing: tests that the system is easy to use effectively, e.g. a GUI should offer good affordance, visibility and feedback.
• Regression testing: regression testing will typically only catch 50% of the existing anomalies.
• White-box testing: tests chosen by looking at the implementation.
• Black-box testing: tests chosen by looking at the specification.
• Inspection: the importance of implementing inspection processes becomes clearer when you consider the following dramatic statistics: over 60% of software bugs occur before the code is ever written, and inspection is about 80% effective in removing anomalies.
• Performance and stress testing: e.g. the number of users supported.
Basis Paths
• A basis set is a set of linearly independent paths that can be used to construct any path through the program flow graph.
• Path testing is testing designed to execute all or selected paths through a computer program.
• Branch testing is testing designed to execute each outcome of each decision point in a computer program.
• The test paths in a basis set fulfill the requirements of branch testing and also test all of the paths that could be used to construct any arbitrary path through the graph.

The nodes in this graph are program statements and the directed edges are the flow of control.
Basis Paths
• A basis set is a set of linearly independent test paths. A path can be associated with a vector, where each element in the vector is the number of times that an edge is traversed. For example, consider a graph with 4 edges: a, b, c and d. The path ac can be represented by the vector [1 0 1 0]. Paths are combined by adding or subtracting the paths' vector representations. No path in the basis set can be formed as a combination of the other paths in the basis set, and any path through the control flow graph can be formed as a combination of paths in the basis set.

  ac = [1 0 1 0]   (over the edges [a b c d])
Basis Paths
The figure shows a simplified control flow graph. While a complete flow graph would not have two edges going to the same destination, this requirement has been relaxed to keep the number of paths to a manageable size for this example. A basis set for this graph is {ac, ad, bc}. The path bd can be constructed by the combination bc + ad - ac, as shown in this table.
  Edge | bd | bc | bc + ad | bc + ad - ac
  -----|----|----|---------|-------------
   a   |  0 |  0 |    1    |      0
   b   |  1 |  1 |    1    |      1
   c   |  0 |  1 |    1    |      0
   d   |  1 |  0 |    1    |      1
The set {ac, bd} is not a basis set, because there is no way to construct the path ad. The set {ac, ad, bd} is also a basis set. Basis sets are not unique; a flow graph can have more than one basis set.
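The vector arithmetic above can be checked mechanically. The following is a small illustrative sketch (not from the slides) that represents each path as an edge-count vector over [a, b, c, d] and verifies that bd = bc + ad - ac:

```java
// Sketch: paths as edge-count vectors over the edges [a, b, c, d].
public class BasisPaths {
    // Combine two path vectors: result = x + sign * y (sign is +1 or -1).
    static int[] combine(int[] x, int[] y, int sign) {
        int[] r = new int[x.length];
        for (int i = 0; i < x.length; i++) r[i] = x[i] + sign * y[i];
        return r;
    }

    public static void main(String[] args) {
        int[] ac = {1, 0, 1, 0};
        int[] ad = {1, 0, 0, 1};
        int[] bc = {0, 1, 1, 0};
        int[] bd = {0, 1, 0, 1};

        // bd = bc + ad - ac, exactly as in the table above
        int[] built = combine(combine(bc, ad, +1), ac, -1);
        System.out.println(java.util.Arrays.equals(built, bd)); // prints "true"
    }
}
```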
Types of Testing
• Unit testing: the first stage of developmental testing, in which units of functionality (e.g. classes) are tested in isolation.
• Integration testing: the second stage of developmental testing, in which previously tested units are tested together as a unified whole.
• System testing: the third stage of developmental testing, in which the completed system is tested against the customer's requirements.
• Acceptance testing: testing with customer involvement.
• V-model of testing: links specification aspects with implementations through various test plans. There should also be consistency verification, which checks that each deliverable is self-consistent, e.g. that there are no contradictions in a specification.

  Negotiated Statement of Requirements ---- Acceptance testing --- Delivered system
  System Design Specification ------------ Integration testing --- System Components
  Component Specification ---------------- Unit testing ---------- Source code
Testing and xUML
• UML becomes executable when used with an action language
• xUML models are finished when they execute their acceptance test correctly
• xUML models are validated specifications which offer a strong contractual basis for implementation by external contractors or in-house teams
JUnit
• The running of tests, recording of results and reporting of errors is common to all tests, and JUnit provides precisely this functionality. A unit test exercises "a unit" of production code in isolation from the full system and checks that the results are as expected. The size of "a unit" depends on the size of a set of coherent functionality, and in practice varies between a class and a package. JUnit supports writing unit tests in Java for Java code.
• The purpose is to identify bugs in the code being tested prior to integrating the code into the rest of the system, because it is far easier (i.e. quicker and cheaper) to identify and correct problems, and demonstrate that the solution works, in isolation.
• Unit tests are ideal for regression testing: if one needs to change existing code, the fact that the unit tests still pass makes one more confident that the change doesn't break anything. And the better the unit tests, the more confident one can be; ideally the tests should be updated to include the new functionality prior to making the change.
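As an illustration of the pattern JUnit automates, here is a hypothetical sketch in plain Java. The Calculator class and the manual check/run logic are inventions of this sketch; real JUnit replaces the bookkeeping below with @Test discovery and Assert methods:

```java
// Sketch of the unit-test pattern that JUnit automates.
// Calculator stands in for "a unit" of production code under test.
public class CalculatorTest {
    static class Calculator {                 // the unit under test
        int add(int a, int b) { return a + b; }
    }

    static int failures = 0;

    // Record a result instead of stopping at the first failure.
    static void check(boolean ok, String name) {
        if (!ok) { failures++; System.out.println("FAIL: " + name); }
    }

    static void testAddPositive() { check(new Calculator().add(2, 3) == 5, "addPositive"); }
    static void testAddNegative() { check(new Calculator().add(-2, -3) == -5, "addNegative"); }

    public static void main(String[] args) {
        testAddPositive();
        testAddNegative();
        System.out.println(failures == 0 ? "OK" : failures + " failure(s)"); // prints "OK"
    }
}
```

Note that a successful run needs no manual checking, and a failure reports its own diagnosis, which is exactly the automation criterion described above.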
JUnit
• Unit tests and the production code can be developed independently against the specification. Used this way, unit tests are likely to pick up misinterpretations of, or ambiguities in, the specification.
• Unit tests also feature as part of "Extreme Programming" [XP]. The idea is to develop the interface, the test harness, and the implementation, in roughly that order, although some parts of the implementation may be completed before unrelated parts of the test harness.
JUnit
• A piece of test code cannot be run in isolation; it needs to be part of a runtime environment. It is also desirable to automate the running of unit tests, such as periodically running all the test harnesses in the system to prove that nothing is broken. For this, unit tests need to meet certain criteria: a successful test shouldn't need manual checking, and a failed test should deliver adequate documentation for diagnosis.
JUnit
• JUnit tests can be organized into a hierarchy of test suites containing test cases and even other test suites. The composite behaviour of JUnit tests allows you to assemble collections of tests and automatically regression test the entire test suite in one fell swoop. You can also run the tests for any layer within the test suite hierarchy.
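The suite hierarchy described above is a composite structure: a suite and a single test share one interface, so suites can nest arbitrarily. A minimal sketch in plain Java (these are not the actual JUnit classes, just the shape of the design):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the composite structure behind JUnit suites: a suite and a
// single test case implement the same interface, so suites can contain
// test cases and other suites, and one run() call exercises everything.
public class SuiteSketch {
    interface Test { int run(); }             // returns the number of failures

    static class TestCase implements Test {
        final Runnable body;
        TestCase(Runnable body) { this.body = body; }
        public int run() {
            try { body.run(); return 0; }
            catch (AssertionError e) { return 1; }
        }
    }

    static class TestSuite implements Test {
        final List<Test> children = new ArrayList<>();
        void add(Test t) { children.add(t); }
        public int run() {                    // runs every child, nested suites included
            int failures = 0;
            for (Test t : children) failures += t.run();
            return failures;
        }
    }

    public static void main(String[] args) {
        TestSuite inner = new TestSuite();
        inner.add(new TestCase(() -> { if (1 + 1 != 2) throw new AssertionError(); }));
        TestSuite outer = new TestSuite();    // a suite containing another suite
        outer.add(inner);
        outer.add(new TestCase(() -> { throw new AssertionError("always fails"); }));
        System.out.println(outer.run());      // prints "1"
    }
}
```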
JUnit
• The fewer tests you write, the less stable your code becomes. Tests validate the stability of the software and instill confidence that changes haven't caused a ripple-effect through the software. The tests form the glue of the structural integrity of the software.
JUnit
• JUnit issues and limitations: JUnit is a Java-only solution, has little support for multi-threading, and needs to be adapted for specialized testing, e.g. Cactus for web development. It focuses on unit white-box testing. Unlike functional tests, which treat the system as a black box and ensure that the software works as a whole, unit tests are written to test the fundamental building blocks of the system from the inside out. On the other hand, XP programmers say "Here's my deliverable and the tests which validate it."
UML Test Profile
• The UML Testing Profile defines a language for designing, visualizing, specifying, analyzing, constructing and documenting the artifacts of test systems. It is a test modelling language that can be used with all major object and component technologies and applied to testing systems in various application domains. The UML Testing Profile can be used stand-alone for the handling of test artifacts, or in an integrated manner with UML for handling system and test artifacts together.
UML Test Profile
• The UML Testing Profile specifically addresses typical testing concepts in model-based development. It supports the specification and modeling of software testing infrastructures. It follows the same fundamental principles as UML in that it provides concepts for the structural aspects of testing, such as the definition of test components, test contexts and test system interfaces, and for the behavioral aspects of testing, such as the definition of test procedures, test setup, execution and evaluation. The core UML may be used to model and describe testing functionality, since test software development can be seen like any other development of functional software. However, as software testing is based on a number of special test-related concepts, these are provided by the testing profile as extensions to UML: mainly concepts for test architectures, test behaviors and test data.
UML Test Profile
• Executable versions of tests can be generated with mappings towards existing test execution environments based on JUnit or TTCN-3 (Testing and Test Control Notation), a widely accepted technique for testing in the telecommunication and data communication domain. U2TP is not yet directly mapped to ASL or OCL, because these are not 'test frameworks', although a test framework could be constructed from them.
UML Test Profile
• The UML Testing Profile extends UML with test specific concepts like test components, verdicts, defaults, etc. These concepts are grouped into concepts for test architecture, test data, test behavior and time. Being a profile, the UML testing profile seamlessly integrates into UML: it is based on the UML meta-model and reuses UML syntax.
UML Test Profile
• It has been architected with the following design principles in mind:
• UML integration: as a real UML profile. UML profiles are defined in the UML infrastructure volume of UML 2.0.
• Reuse and minimality: wherever possible, the UML Testing Profile makes direct use of UML concepts, and extends them or adds new concepts only where needed. Only those concepts are extended or added to UML which have been demonstrated, in the software, hardware and protocol testing areas, to be of central relevance to the definition of test artifacts and which are not part of UML.
UML Test Profile
• Test Architecture
• Test Behavior
• Test Data
• Time Concepts
• There are mappings to the existing frameworks TTCN-3 and JUnit. The UTP is not yet linked to xUML (executable UML) or OCL, which can both be used for testing of models.
Test Architecture of UTP
• The test architecture is a set of concepts to specify the structural aspects of a test context covering test components, the system under test, their configuration, etc.
Test Architecture of UTP
• SUT The system under test (SUT) is a part and is the system, subsystem, or component being tested. A SUT can consist of several objects. The SUT is exercised via its public interface operations and signals by the test components. No further information can be obtained from the SUT as it is a black-box.
Test Architecture of UTP
• Test Context A collection of test cases together with a test configuration on the basis of which the test cases are executed.
• Test Configuration The collection of test component objects and of connections between the test component objects and to the SUT. The test configuration defines both (1) test component objects and connections when a test case is started (the initial test configuration) and (2) the maximal number of test component objects and connections during the test execution.
Test Architecture of UTP
• Test Component: A test component is a class of a test system. Test component objects realize the behavior of a test case. A test component has a set of interfaces via which it may communicate via connections with other test components or with the SUT.
Test Architecture of UTP
• Arbiter: A property of a test case or a test context to evaluate test results and to assign the overall verdict of a test case or test context respectively. There is a default arbitration algorithm based on functional, conformance testing, which generates Pass, Fail, Inconc, and Error as verdict, where these verdicts are ordered as Pass < Inconc < Fail < Error. The arbitration algorithm can be user-defined.
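The default arbitration described here can be sketched as taking the worst local verdict under the ordering Pass &lt; Inconc &lt; Fail &lt; Error. The enum and method names below are illustrative, not defined by the profile:

```java
// Sketch of the default arbitration algorithm: verdicts are ordered
// PASS < INCONC < FAIL < ERROR, and the overall verdict of a test case
// is the worst local verdict reported by any test component.
public class ArbiterSketch {
    enum Verdict { PASS, INCONC, FAIL, ERROR }   // ordinal() follows the ordering

    static Verdict arbitrate(Verdict... locals) {
        Verdict overall = Verdict.PASS;
        for (Verdict v : locals)
            if (v.ordinal() > overall.ordinal()) overall = v;
        return overall;
    }

    public static void main(String[] args) {
        // One inconclusive component dominates any number of passes.
        System.out.println(arbitrate(Verdict.PASS, Verdict.INCONC, Verdict.PASS));
        // prints "INCONC"
    }
}
```

A user-defined arbiter would simply replace the combination rule above.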
Test Architecture of UTP
• Scheduler: A property of a test context used to control the execution of the different test components. The scheduler will keep information about which test components exist at any point in time, and it will collaborate with the arbiter to inform it when it is time to issue the final verdict. It keeps control over the creation and destruction of test components and it knows which test components take part in each test case.
Test Architecture of UTP
• Utility Part: A part of the test system representing miscellaneous components which help test components to realize their test behavior. Examples of utility parts are miscellaneous features of the test system.
Test Behavior
• Test Behavior: The set of concepts to specify test behaviors, their objectives and the evaluation of systems under test.
• Test Control A test control is a specification for the invocation of test cases within a test context. It is a technical specification of how the SUT should be tested with the given test context.
Test Behavior
• Test Case: A test case is a specification of one case to test the system, including what to test, with which input, what result to expect, and under which conditions. It is a complete technical specification of how the SUT should be tested for a given test objective. A test case is defined in terms of sequences, alternatives, loops and defaults of stimuli to, and observations from, the SUT. It implements a test objective. A test case may invoke other test cases. A test case uses an arbiter to evaluate the outcome of its test behavior. A test case is a property of a test context: it is an operation specifying how a set of cooperating test components, interacting with a system under test, realize a test objective. Both the system under test and the different test components are parts of the test context to which the test case belongs.
Test Behavior
• Test Invocation A test case can be invoked with specific parameters and within a specific context. The test invocation leads to the execution of the test case. The test invocation is denoted in the test log.
• Test Objective A test objective is a named element describing what should be tested. It is associated to a test case.
Test Behavior
• Stimulus Test data sent to the SUT in order to control it and to make assessments about the SUT when receiving the SUT reactions to these stimuli.
• Observation Test data reflecting the reactions from the SUT and used to assess the SUT reactions which are typically the result of a stimulus sent to the SUT.
Test Behavior
• Coordination Concurrent (and potentially distributed) test components have to be coordinated both functionally and in time in order to assure deterministic and repeatable test executions resulting in well-defined test verdicts. Coordination is done explicitly with normal message exchange between components or implicitly with general ordering mechanisms.
• A Default is a behavior triggered by a test observation that is not handled by the behavior of the test case per se. Defaults are executed by test components.
Test Behavior
• Verdict: Verdict is the assessment of the correctness of the SUT. Test cases yield verdicts. Verdicts can also be used to report failures in the test system. Predefined verdict values are pass, fail, inconclusive and error. Pass indicates that the test behavior gives evidence for correctness of the SUT for that specific test case. Fail describes that the purpose of the test case has been violated. Inconclusive is used for cases where neither a Pass nor a Fail can be given. An Error verdict shall be used to indicate errors (exceptions) within the test system itself. Verdicts can be user-defined. The verdict of a test case is calculated by the arbiter.
Test Behavior
• Validation Action: An action to evaluate the status of the execution of a test case by assessing the SUT observations and/or additional characteristics/parameters of the SUT. A validation action is performed by a test component and sets the local verdict of that test component.
• Log Action: An action to log information in the test log.
• Test Log: A log is an interaction resulting from the execution of a test case. It represents the different messages exchanged between the test components and the SUT, and/or the states of the involved test components. A log is associated with a verdict representing the adherence of the SUT to the test objective of the associated test case.
Test Data
• Test Data: The set of concepts to specify data used in stimuli to the SUT, observations from the SUT and for coordination between test components.
• Wildcard: Wildcards allow the user to explicitly specify whether the value is present or not, and/or whether it is of any value. Wildcards are special symbols to represent values or ranges of values. Wildcards are used instead of symbols within instance specifications. Three wildcards exist: a wildcard for any value, a wildcard for any value or no value at all (i.e. an omitted value) and a wildcard for an omitted value.
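A sketch of how such wildcards might be evaluated. Modelling an omitted value as null is an assumption of this sketch, not UTP notation:

```java
// Sketch of the three wildcards: any value (must be present), omit
// (must be absent), and any-or-omit (present or absent). An omitted
// field is modelled here as null, which is an assumption of this sketch.
public class WildcardSketch {
    enum Wildcard { ANY, OMIT, ANY_OR_OMIT }

    static boolean matches(Wildcard w, Object actual) {
        switch (w) {
            case ANY:         return actual != null;  // present, with any value
            case OMIT:        return actual == null;  // absent
            case ANY_OR_OMIT: return true;            // present or absent
            default:          return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(matches(Wildcard.ANY, "1234"));   // prints "true"
        System.out.println(matches(Wildcard.OMIT, "1234"));  // prints "false"
    }
}
```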
Test Data
• Data Pool: A data pool is a collection of data partitions or explicit values that are used by a test context, or by test components, during the evaluation of test contexts and test cases. In doing so, a data pool provides a means for supplying values or data partitions for repeated tests.
• Data Partition: A logical value for a parameter used in a stimulus or in an observation. It typically defines an equivalence class for a set of values, e.g. valid user names.
• Data Selector: An operation that defines how data values or equivalence classes are selected from a data pool or data partition.
• Coding Rule: The interfaces of a SUT use certain encodings (e.g. CORBA GIOP/IIOP, IDL, ASN.1 PER or XML), which have to be respected by the test systems. Hence, coding rules are part of a test specification.
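The pool/partition/selector relationship can be sketched as follows. The partition names and the first-value selector are illustrative assumptions, not part of the profile:

```java
import java.util.List;
import java.util.Map;

// Sketch of pool/partition/selector: a partition is a named equivalence
// class of values, a pool groups partitions, and a selector picks a
// value from a partition (here simply the first one; a real selector
// might randomize, iterate, or pick boundary values).
public class DataPoolSketch {
    static final Map<String, List<String>> pool = Map.of(
        "validUserNames",   List.of("alice", "bob"),
        "invalidUserNames", List.of("", "x".repeat(300)));

    static String select(String partition) {   // a trivial data selector
        return pool.get(partition).get(0);
    }

    public static void main(String[] args) {
        System.out.println(select("validUserNames"));   // prints "alice"
    }
}
```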
Time Concepts
• The set of concepts to specify time constraints, time observations and/or timers within test behavior specifications in order to have a time quantified test execution and/or the observation of the timed execution of test cases.
• Timezone: Timezone is a grouping mechanism for test components. Each test component belongs to a certain timezone. Test components in the same timezone have the same time, i.e. test components of the same timezone are time synchronized.
Time Concepts
• Timer Timers are mechanisms that may generate a timeout event when a specified time value occurs. This may be when a pre-specified time interval has expired relative to a given instant (usually the instant when the timer is started). Timers belong to test components. They are defined as properties of test components. A timer is started with an expiration time being the time when the timeout is to be issued. A timeout indicates the timer expiration. A timer can be stopped. The expiration time of a running timer and its current status (e.g. active/inactive) can be checked.
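A minimal sketch of such a timer, using an injected clock so the demonstration is deterministic. The class and method names are illustrative, not from the profile:

```java
// Sketch of a UTP-style timer owned by a test component: started with an
// expiration time, stoppable, and queryable for its current status.
public class TimerSketch {
    private long expiresAt = -1;               // -1 means inactive
    private final java.util.function.LongSupplier clock;

    TimerSketch(java.util.function.LongSupplier clock) { this.clock = clock; }

    void start(long durationMillis) { expiresAt = clock.getAsLong() + durationMillis; }
    void stop()                     { expiresAt = -1; }
    boolean isRunning()             { return expiresAt >= 0 && clock.getAsLong() < expiresAt; }
    boolean timedOut()              { return expiresAt >= 0 && clock.getAsLong() >= expiresAt; }

    public static void main(String[] args) {
        long[] now = {0};                      // fake clock for a deterministic demo
        TimerSketch t1 = new TimerSketch(() -> now[0]);
        t1.start(2000);                        // like t1(2.0) in the example test case
        now[0] = 1000;
        System.out.println(t1.isRunning());    // prints "true"
        now[0] = 2500;
        System.out.println(t1.timedOut());     // prints "true"
    }
}
```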
UML Test Profile and JUnit
• There is a mapping from the UML Testing Profile to JUnit. This mapping considers primarily the JUnit framework: When no trivial mapping exists to the JUnit framework, existing extensions to the framework are used as examples of how the framework has been extended to support some of the concepts included in the UML Testing Profile.
UML Test Profile to JUnit Mappings
• Verdict In JUnit, predefined verdict values are pass, fail, and error. Pass indicates that the test behavior gives evidence for correctness of the SUT for that specific Test Case. Fail describes that the purpose of the Test Case has been violated. An Error verdict shall be used to indicate errors (exceptions) within the test system itself.
• There is no such thing as an Inconclusive verdict in JUnit. Therefore, the Inconclusive verdict will be generally mapped into Fail.
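That mapping rule can be sketched as a one-line translation function (the enum names are illustrative):

```java
// Sketch of the verdict mapping described above: UTP's four verdicts
// map onto JUnit's three, with Inconclusive collapsed into Fail.
public class VerdictMapping {
    enum UtpVerdict  { PASS, INCONC, FAIL, ERROR }
    enum JUnitResult { PASS, FAIL, ERROR }

    static JUnitResult toJUnit(UtpVerdict v) {
        switch (v) {
            case PASS:   return JUnitResult.PASS;
            case INCONC: return JUnitResult.FAIL;  // no inconclusive verdict in JUnit
            case FAIL:   return JUnitResult.FAIL;
            default:     return JUnitResult.ERROR;
        }
    }

    public static void main(String[] args) {
        System.out.println(toJUnit(UtpVerdict.INCONC));  // prints "FAIL"
    }
}
```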
UML Test Profile to JUnit Mappings
• Test Context: A test context is realized in JUnit as a class inheriting from the JUnit TestCase class. Note that a concept of test context exists in the JUnit framework, but it is different from the one defined in the UML Testing Profile.
UML Test Profile to JUnit Mappings
• Arbiter: The arbiter can be realized as a property of Test Context of a type TestResult. There is a default arbitration algorithm which generates Pass, Fail, and Error as verdict, where these verdicts are ordered as Pass < Fail < Error. The arbitration algorithm can be user-defined.
UML Test Profile and TTCN-3
• TTCN-3, the Testing and Test Control Notation, is widely accepted as a standard for test system development in the telecommunication and data communication area. TTCN-3 comprises concepts suitable for all types of distributed system testing. A TTCN-3 test specification consists of four main parts:
• type definitions for test data structures
• template definitions for concrete test data
• function and test case definitions for test behavior
• control definitions for the execution of test cases
Introduction
[Diagram: unit-level tests, system-integration level tests and system-level tests, illustrated with a SWIFT settlement scenario: SWIFTNet, SWIFT Bureaus, US and EU bank networks with SSSB clients, a clearing company and OTC market makers.]
Introduction
[Diagram: heterogeneity increases from Developer to Integrator to Systems Integrator, so testing is needed throughout the process.]
Introduction: Extreme View
[Diagram: testing tight to the specification (e.g. OCL, iUML, TTCN-3) versus testing tight to development (e.g. JUnit), across Developer, Integrator and Systems Integrator.]
Introduction: Balanced View
[Diagram: testing tight to services (e.g. OCL, iUML, TTCN-3) balanced with testing tight to development (e.g. JUnit), across Developer, Integrator and Systems Integrator.]
An Answer: Model-Based View
[Diagram: UML models serve Developer, Integrator and Systems Integrator alike, supporting both testing tight to the specification (e.g. OCL, iUML, TTCN-3) and testing tight to development (e.g. JUnit). Shown are use case diagrams (actors Customer, Supervisor, Salesperson, Shipping Clerk; use cases Place Order, Check Status, Fill Orders, Establish Credit, Telephone Catalog), class diagrams (Product, Customer, Organization, Service, OrderHeader, LineItem, Account, ShoppingCart with their associations and multiplicities), state machines (ReadAmountSM with selectAmount, enterAmount, otherAmount, amount, ok, abort, aborted) and interactions (sd N with lifelines s[u]:B and s[k]:B exchanging m3()).]
UML and Testing
• UML-based test generation: Agedis, an EC IST project
• UML-based test notation: the UML Testing Profile, OMG
The Testing Profile Roots
[Diagram: the profile draws on protocol testing (like TTCN-3) and software testing (like JUnit, TET, etc.). Its notational roots are MSC-2000, UML 1.x and SDL-2000, converging in UML 2.0 and the graphical format of TTCN-3, and finally the UML Testing Profile, which contributes:]
• Test control
• Wildcards
• Defaults
• Test components
• Arbiter
• Validation actions
• Data pools
Concepts of the Testing Profile
• Test architecture: test structure, test components and test configuration
• Test data: test data and templates used in test procedures
• Test behavior: dynamic aspects of test procedures
• Test time: time-quantified definition of test procedures
Concepts beyond TTCN-3
• Unification of test cases:
  • a test case can be a composition of test cases
  • test behavior defines the execution of a test case
• Separation of test behavior and verdict handling:
  • the arbiter is a special component to evaluate the verdict
  • validation actions are used to set the verdict
• Abstract test cases which can use a set of stimulus data:
  • data partitions describe value ranges for observations and stimuli
• Test architecture with test deployment support:
  • part of the test specification is the definition of deployment requirements for a test case
Concepts beyond UML
• Defaults within test behavior:
  • concentration on the main flow of test behavior
  • a default hierarchy to handle different concerns
• Wildcards within test data:
  • flexible definition of value sets
• Timers and time constraints:
  • time-controlled test behavior
• Arbitration and verdicts:
  • assessment of test behavior
An Example: System Test
[Package diagram: the System Test package imports the ATM HW Control, Bank, Money and SWIFT Network packages.]
System Level Test
[Class diagram of the "test package" ATMTest, which imports the ATM package and contains:
• the test suite with test cases: «testSuite» ATMSuite, with properties verdict : Verdict, amount : IMoney, targetBank : SwiftId, targetAccount : String and sourceAccount : String, and the test cases «testCase» +validWiring() : Verdict, «testCase» +invalidPIN() : Verdict and «testCase» -authorizeCard() : Verdict;
• the test components: «testComponent» BankEmulator (with the accounts property, the IBank interface and the «interface» IAccount) and «testComponent» HWEmulator (with pinOk : Boolean, enteredPIN : String, message : String and t1 : Timer, the port hwCom, and the interfaces IATM and IHardware);
• miscellaneous utility parts.]
Test Configuration
[Composite structure diagram of the «testSuite» class ATMSuite: the SUT property «sut» atm : BankATM is connected via atmPort to the test component property hwe : HWEmulator and via bankCom to be : BankEmulator; current : CardData is a utility property; the coding rule coding "Encrypted" applies to the bankCom connection.]
Test Control (Execution of Test Suite)
[Interaction sd ATMSuite within the «testSuite» ATMSuite: the test case behaviors are referred to by reference, first verdict = validWiring (ref), then, guarded by the alternatives [verdict == pass] and [verdict == fail], verdict = invalidPIN (ref).]
A Test Case
[Interaction sd invalidPIN, with SUT and test component lifelines: the test component hwe, the SUT «sut» atm, and the utility current. A data partition declares: {readOnly} Integer invalidPIN; { current.isPinCorrect(invalidPIN) == false }. The behavior: storeCardData(current); display("Enter PIN"), supervised by the timer t1(2.0); isPinCorrect(invalidPIN) returning isPinCorrect : false, within the time constraint {0 .. 3}; display("Invalid PIN"); isPinCorrect(invalidPIN) again returning isPinCorrect : false; display("Enter PIN again"); finally, the arbitrated verdict is set by «validationAction» pass.]
A Test Case with Default (Extract)
[Interaction sd validWiring, with lifelines for hwe (connected via atmPort), the SUT «sut» atm and be (connected via bankCom); the default defaultDisplayDefault is applied on the hwe lifeline. The behavior first references authorizeCard, then exchanges: selectOperation(wireMoney) / selectOperation : true; display("Enter SWIFT and account numbers"); wireMoney(amount, targetBank, targetAccount); findAccount(current) with sourceAccount = findAccount; checkCredentials(sourceAccount) / checkCredentials : true; getTransactionInfo(account=targetBank, bic=targetAccount); display("Deposit money"); acceptMoney() with amount = acceptMoney; wireMoney : true; display("Transaction Accepted"); and finally the «validationAction» pass.]
Defaults
[Applying a component-specific default: the test component «testComponent» HWEmulator (pinOk : Boolean, enteredPIN : String, message : String, t1 : Timer; port hwCom; interfaces IATM and IHardware) applies the default HWEmulator::hweDefault.]

Defining a component-specific default:

  «default» statemachine hweDefault
    t1 / setverdict(fail);
    display(msg) / if (msg == "Connection lost") then setverdict(inconc); else setverdict(fail);
    ejectCard / setverdict(fail)

Defining an event-specific default:

  sd DisplayDefault (on self)
    alt display(*)  «validationAction» inconc
    alt *           «validationAction» fail
The Mappings
• To enable the direct execution of U2TP specifications by reusing existing test infrastructures
• Mappings to:
  • the JUnit test framework
    • an open-source test technology for Java
    • black-box tests on unit level; only selected aspects of U2TP can be mapped
  • the Testing and Test Control Notation TTCN-3
    • a generic test technology by ETSI/ITU-T
    • black-box/grey-box tests on unit, component, integration and system level; almost all concepts can be mapped
Example for Mapping to TTCN-3
[The test component «testComponent» HWEmulator of the ATMTest package (pinOk : Boolean, enteredPIN : String, message : String, t1 : Timer; port hwCom; interfaces IATM and IHardware) maps to:]

  ...
  type port hwCom_PType procedure {...}
  ...
  type component HWEmulator_CType {
    port atmPort_PType hwCom;
    var boolean pinOk;
    var charstring enteredPIN;
    var charstring message_;
    timer t1;
  }
Example for Mapping to TTCN-3
[The test case behavior sd invalidPIN shown earlier maps to:]

  function invalidPIN_hwe ... {
    ...
    hwCom.call(storeCardData:{current}, nowait);
    t1.start(2.0);
    hwCom.getreply(display_:{"Enter PIN"});
    t1.stop;
    hwCom.call(isPinCorrect:{invalidPIN}, 3.0) {
      [] hwCom.getreply(isPinCorrect:{?} value false) {}
    }
    hwCom.getreply(display_:{"Invalid PIN"});
    hwCom.getreply(display_:{"Enter PIN again"});
    setverdict(pass);
  }
Summary of Testing Profile
• UML Testing Profile provides specification means for test artifacts of systems from various domains
• Enhances UML with concepts like test configuration, test components, SUT, verdict and default
• Seamlessly integrates into UML: being based on UML metamodel, using UML syntax
• Direct support for test design
• Integration with the system development process
Summary of Testing (slides 1-71)

• Testing is the process of determining whether an artefact meets a given specification. It is one of the four basic technical activities in the software development process (analysis, design, implementation, testing). It is used to validate and verify products, or the processes by which they are made. We must distinguish between the correct behaviour and the expected behaviour. Testing methods should incorporate accountability, repeatability and traceability.
• There are many approaches to testing (slides 6-7).
Testing Objectives

• Many strategies and tools are associated with object-oriented testing:
  • Analysis and Design Testing
  • Class Tests
  • Integration Tests
  • System Tests
  • Validation Tests

[Figure: analysis → design → code → test]
A Broader View of Testing

• The nature of OO systems influences both testing strategy and methods
• Will re-use mean less need for testing? NO
• In object-oriented systems the view of testing is broadened to encompass analysis and design
• "It can be argued that the review of OO analysis and design models is especially useful because the same semantic constructs (e.g., classes, attributes, operations, messages) appear at the analysis, design, and code level."
• Allows early circumvention of later problems
Object-Oriented Testing

• Analysis and Design:
  • Testing begins by evaluating the OOA and OOD models
  • These models cannot be executed, so conventional testing is impossible
  • Use formal technical reviews of correctness, completeness and consistency
• Programming:
  • OO code testing differs from conventional methods:
    • The concept of the 'unit' broadens due to class encapsulation
    • Integration focuses on classes and their execution across a 'thread' or in the context of a usage scenario
    • Validation uses conventional black-box methods
• Test case design draws on conventional methods, but also encompasses special features
Criteria for Completion of Testing

• When are we done testing?
1. Testing is never done; the burden simply shifts from you to the customer
2. Testing is done when you run out of time or money
3. Statistical model:
  • Assume that errors decay logarithmically with testing time
  • Measure the number of errors in a unit period
  • Fit these measurements to a logarithmic curve
  • Can then say: "with our experimentally valid statistical model we have done sufficient testing to say that with 95% confidence the probability of 1000 CPU hours of failure-free operation is at least 0.995"
• More research needs to be done into how to answer this question
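The curve-fitting step of the statistical model can be sketched in code. The illustration below (Python) is a minimal version only: the observed error counts and the stopping threshold are made up, and a simple log-linear least-squares fit of an exponential decay stands in for a full software-reliability model.

```python
import math

def fit_decay(errors_per_period):
    """Least-squares fit of log(errors) = a - b*t, i.e. errors ~ exp(a) * exp(-b*t)."""
    pts = [(t, math.log(e)) for t, e in enumerate(errors_per_period) if e > 0]
    n = len(pts)
    mean_t = sum(t for t, _ in pts) / n
    mean_y = sum(y for _, y in pts) / n
    b = -sum((t - mean_t) * (y - mean_y) for t, y in pts) / \
        sum((t - mean_t) ** 2 for t, _ in pts)
    a = mean_y + b * mean_t
    return a, b  # predicted errors at period t: exp(a - b*t)

def periods_until(errors_per_period, threshold):
    """First future period at which the fitted model predicts fewer errors than `threshold`."""
    a, b = fit_decay(errors_per_period)
    t = len(errors_per_period)
    while math.exp(a - b * t) >= threshold:
        t += 1
    return t

# Hypothetical error counts observed in six successive test periods:
observed = [34, 21, 13, 8, 5, 3]
print(periods_until(observed, 1.0))  # period at which < 1 error/period is predicted
```

A real project would use a validated reliability-growth model and confidence intervals, as the slide's quoted claim suggests, rather than a bare point estimate.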
Strategic Issues

• Issues to address for a successful software testing strategy:
  • Specify product requirements in a quantifiable manner long before testing commences, e.g. portability, maintainability, usability
  • State testing objectives explicitly, e.g. mean time to failure, test coverage
  • Understand the users of the software and develop a profile for each user category; use cases do this
  • Develop a testing plan that emphasizes "rapid cycle testing": get quick feedback from a series of small incremental tests
  • Build robust software that is designed to test itself: exception handling and automated testing
  • Conduct formal technical reviews to assess the test strategy and the test cases themselves ("who watches the watchers?")
  • Develop a continuous improvement approach to the testing process
Testing Analysis and Design

• Syntactic correctness:
  • Is UML notation used correctly?
• Semantic correctness:
  • Does the model reflect the real-world problem?
  • Is UML used as intended by its designers?
• Testing for consistency:
  • Are different views of the system in agreement?
  • An inconsistent model has representations in one part that are not correctly reflected in other portions of the model
Testing the Class Model

1. Revisit the CRC model and the class model. Check that all collaborations are properly represented in both
2. Inspect the description of each CRC index card to determine if a delegated responsibility is part of the collaborator's definition
  • Example: in a point-of-sale system, the "read credit card" responsibility of a credit sale class is satisfied by a credit card collaborator
3. Invert the connection to ensure that each collaborator that is asked for a service is receiving requests from a reasonable source
  • Example: a credit card being asked for a purchase amount (a problem)
Final Steps in Testing the Class Model

4. Using the inverted connections examined in step 3, determine whether other classes might be required or whether responsibilities are properly grouped among the classes
5. Determine whether widely requested responsibilities might be combined into a single responsibility
  • Example: "read credit card" and "get authorization" could easily be grouped into "validate credit request"
6. Steps 1 to 5 are applied iteratively and repeatedly
Testing OO Code

[Figure: nested levels of testing: class tests, integration tests, validation tests, system tests]
[1] Class Testing

• The smallest testable unit is the encapsulated class
• A single operation needs to be tested as part of a class hierarchy because its context of use may differ subtly
• Class testing is the equivalent of unit testing in conventional software
• Approach:
  • Methods within the class are tested
  • The state behavior of the class is examined
• Unlike conventional unit testing, which focuses on input-process-output, class testing focuses on designing sequences of methods to exercise the states of a class
• But white-box methods can still be applied
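The sequence-oriented approach can be sketched as follows. The class under test (a tiny Stack, chosen here purely for illustration) has two observable states, empty and non-empty, and the test drives it through a sequence of methods rather than checking a single input-process-output triple.

```python
class Stack:
    """A tiny class under test; its state is 'empty' or 'non-empty'."""
    def __init__(self):
        self._items = []
    def push(self, x):
        self._items.append(x)
    def pop(self):
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()
    def is_empty(self):
        return len(self._items) == 0

def test_state_sequence():
    """Drive the object through a method sequence and check each state reached."""
    s = Stack()
    assert s.is_empty()            # start state: empty
    s.push(1); s.push(2)           # empty -> non-empty
    assert not s.is_empty()
    assert s.pop() == 2            # LIFO behaviour within the non-empty state
    assert s.pop() == 1            # non-empty -> empty again
    assert s.is_empty()
    try:
        s.pop()                    # popping in the empty state must fail
        assert False, "expected IndexError"
    except IndexError:
        pass

test_state_sequence()
```

Each assertion checks the state the sequence should have reached, which is exactly what input-process-output unit tests of individual methods cannot capture.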
Class Test Case Design

1. Each test case should be uniquely identified and should be explicitly associated with the class to be tested
2. The purpose of the test should be stated
3. A list of testing steps should be developed for each test and should contain:
  a. A list of specified states for the object that is to be tested
  b. A list of messages and operations that will be exercised as a consequence of the test
  c. A list of exceptions that may occur as the object is tested
  d. A list of external conditions (i.e., changes in the environment external to the software that must exist in order to properly conduct the test)
  e. Supplementary information that will aid in understanding or implementing the test
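The items above can be captured as a structured record. The sketch below (Python) is one possible encoding; the Account class, the TC-ACC-001 identifier and the InsufficientFunds exception are hypothetical examples, not part of the slides.

```python
from dataclasses import dataclass, field

@dataclass
class ClassTestCase:
    test_id: str                       # 1. unique identifier
    class_under_test: str              # 1. explicit association with the class
    purpose: str                       # 2. purpose of the test
    specified_states: list = field(default_factory=list)    # 3a. object states
    messages: list = field(default_factory=list)            # 3b. operations exercised
    expected_exceptions: list = field(default_factory=list) # 3c. exceptions
    external_conditions: list = field(default_factory=list) # 3d. environment
    notes: str = ""                                         # 3e. supplementary info

tc = ClassTestCase(
    test_id="TC-ACC-001",
    class_under_test="Account",
    purpose="Withdrawing more than the balance raises an error",
    specified_states=["open", "balance == 100"],
    messages=["withdraw(200)"],
    expected_exceptions=["InsufficientFunds"],
    external_conditions=["bank host reachable"],
    notes="Covers the overdraft boundary",
)
print(tc.test_id, tc.class_under_test)
```

Keeping test cases in such records makes the accountability and traceability requirements from the introduction easy to satisfy mechanically.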
Challenges of Class Testing

• Class:
  • A class should be the basic unit for testing; this is harder to test than a function. In general, the OO testing space is much greater. Objects must be tested in all their possible states, and every transition of a state diagram should be tested. We are concerned about the receiver, the sender and, to some extent, the arguments of messages. Flow-of-control path analysis is more difficult in OO: there may be many possible paths depending on the class of the receiver, sender or arguments.
Challenges of Class Testing

• For a particular class, unit testing should consist of five parts:
  • Interface testing, in which test data are chosen to check that the flow of information into and out of an object of the class is correct
  • Boundary-condition testing, in which test data are chosen to check that methods perform correctly at the extremities of input ranges
  • Local-data testing, in which test data are chosen to check that methods manipulate attributes correctly, so that an object's state remains consistent
  • Control-structure testing, in which test data are chosen to check that algorithms are coded correctly and to execute each statement in the code at least once
  • Error-handler testing, in which test data are chosen to check that the object performs correctly in exceptional situations
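Two of the five parts, boundary-condition testing and error-handler testing, are sketched below. The PinPad class and its 4-to-6-digit rule are invented for the example.

```python
class PinPad:
    """Hypothetical class: accepts PINs of 4 to 6 digits."""
    def accept(self, pin: str) -> bool:
        if not pin.isdigit():
            raise ValueError("PIN must be numeric")
        return 4 <= len(pin) <= 6

pad = PinPad()

# Boundary-condition testing: extremities of the valid length range.
assert pad.accept("1234")           # lower bound
assert pad.accept("123456")         # upper bound
assert not pad.accept("123")        # just below the lower bound
assert not pad.accept("1234567")    # just above the upper bound

# Error-handler testing: exceptional input must raise, not return.
try:
    pad.accept("12ab")
    assert False, "expected ValueError"
except ValueError:
    pass
```

Interface, local-data and control-structure testing would add checks on the data flowing in and out, on attribute consistency, and on statement coverage of each algorithm, respectively.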
Challenges of Class Testing

• Encapsulation:
  • Encapsulation hides from the clients of a class the details of how the class is implemented, so the tester may not have access to the state of the object. This is fine for black-box testing, but not for white-box testing. It is difficult to obtain a snapshot of a class without building extra methods that display the class's state. This can be made a little easier by using a tool like JUnit.
• White-box tests:
  • Basis path, condition, data flow and loop tests can all be applied to individual methods within a class, but they don't test interactions between methods
Challenges of Class Testing

• Inheritance:
  • Inherited functionality must be tested in the new subclass: in defining a subclass we have changed the context in which messages inherited from the superclass may be executed. Because of dynamic binding, non-overridden as well as overridden methods must be tested. When an object of class A interacts with an object of class B, it may in fact be interacting with an object of any subclass of B, which may not even have been written when class A was written and tested. Abstract classes were never intended to be instantiated, so they can only be tested with extra, specially written test code. In OO languages a test harness can be used that allows the programmer to subclass the supplied testing classes in the test harness to create application-specific test classes. This means the testing does not interfere with the code of the SUT.
• In summary, each new context of use (subclass) requires re-testing, because a method may be implemented differently (polymorphism). Other unaltered methods within the subclass may use the redefined method and need to be re-tested.
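The "re-run the superclass tests in each new context" idea can be sketched without any framework. The Account/BonusAccount classes and the bonus rule below are invented; the point is that the same test function is applied to the superclass and to every subclass that redefines behaviour.

```python
class Account:
    """Base class under test."""
    def __init__(self):
        self.balance = 0
    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("amount must be positive")
        self.balance += amount

class BonusAccount(Account):
    """Subclass: deposit is overridden, so inherited behaviour must be re-tested."""
    def deposit(self, amount):
        super().deposit(amount)
        if amount >= 100:
            self.balance += 5   # hypothetical bonus rule changes the context

def check_deposit_contract(account):
    """Superclass test, reusable against any subclass (the test-harness idea)."""
    start = account.balance
    account.deposit(10)
    assert account.balance >= start + 10   # contract: balance grows by at least amount
    try:
        account.deposit(0)
        assert False, "expected ValueError"
    except ValueError:
        pass

# The same test is re-run in each new context of use:
check_deposit_contract(Account())
check_deposit_contract(BonusAccount())
```

In a framework like JUnit the same effect is achieved by subclassing the superclass's TestCase so every inherited test runs against the subclass.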
Challenges of Class Testing

• Polymorphism:
  • Polymorphism is the ability to send the same message to instances of different classes which do not necessarily have a common superclass, i.e. polymorphism without inheritance. It is possible that the methods are defined differently in different sub-hierarchies, i.e. the subclass methods do not share the same superclass definition. It is therefore necessary to test class hierarchies for the different implementations of the method.
  • A method may take a collection as an argument, which could be a set, bag, sorted collection or ordered collection. These heterogeneous collections may affect the method's behaviour: the method may encounter a collection type that it cannot handle.
Challenges of Integration Testing
• Consider a base class, PlaneFigure, from which the classes Triangle, Circle and Line are all derived.
• We might define an abstract method draw in PlaneFigure to indicate that derived classes should be able to render themselves to the screen. Triangle, Circle and Line will implement draw as they wish.
• Furthermore, suppose classes A, B and C each have a design(PlaneFigure aFigure) method which invokes aFigure.draw.
Challenges of Integration Testing
• The dynamic binding of the draw method means that, at run time, any of three different draw methods could be invoked (one for each of Triangle, Circle and Line) for each of the classes A, B and C, so that nine possible run-time combinations exist. This is illustrated in Figure 2.2.
Challenges of Integration Testing
An arrow represents invocation; the arrow between class A and aFigure.draw means that aFigure.draw is invoked from within a method of A, perhaps indirectly. Similarly, aTriangle.draw is invoked from aFigure.draw.
Challenges of Integration Testing
• Should we test all of them? Conceivably, errors in the Triangle draw method could be uncovered only when the calling object is an object of class C, but not when called by A or B. In this case, it is only by testing the combination of C with Triangle that we could discover the error. When errors are dependent in this way, the best testing strategy is to test all combinations; in this simple case, we could test all nine combinations and so be sure of testing the erroneous combination. This is theoretically the only strategy for ensuring that all combinations of invoking and target classes are tested together, and so of being sure of testing dependent errors.
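The all-combinations strategy from this example can be enumerated mechanically. In the sketch below, draw returns a string as a stand-in for rendering to the screen, and design simply delegates to the dynamically bound draw; both simplifications are mine.

```python
from itertools import product

class PlaneFigure:
    def draw(self):
        raise NotImplementedError   # abstract: derived classes render themselves

class Triangle(PlaneFigure):
    def draw(self): return "triangle"

class Circle(PlaneFigure):
    def draw(self): return "circle"

class Line(PlaneFigure):
    def draw(self): return "line"

class A:
    def design(self, fig): return "A:" + fig.draw()
class B:
    def design(self, fig): return "B:" + fig.draw()
class C:
    def design(self, fig): return "C:" + fig.draw()

# Exhaustively test all nine caller x figure combinations, since an error in
# Triangle.draw might only surface when the caller is C:
results = []
for caller_cls, fig_cls in product([A, B, C], [Triangle, Circle, Line]):
    out = caller_cls().design(fig_cls())
    assert ":" in out               # each combination must produce a drawn figure
    results.append(out)

assert len(results) == 9            # 3 callers x 3 dynamically bound draw methods
```

With real hierarchies the combination count grows multiplicatively, which is why all-combinations testing is usually feasible only for small sets of dependent classes.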
Random Class Testing

1. Identify methods applicable to a class
2. Define constraints on their use, e.g. the class must always be initialized first
3. Identify a minimum test sequence: an operation sequence that defines the minimum life history of the class
4. Generate a variety of random (but valid) test sequences; this exercises more complex class instance life histories
Random Class Testing

• Example:
  1. An account class in a banking application has open, setup, deposit, withdraw, balance, summarize and close methods
  2. The account must be opened first and closed on completion
  3. Minimum test sequence: open – setup – deposit – withdraw – close
  4. Template: open – setup – deposit – [deposit | withdraw | balance | summarize]* – withdraw – close. Generate random test sequences using this template
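The template above can be turned into a generator of random but valid life histories. This is a sketch: the Account class is a minimal stand-in (the balance method is renamed balance_query here to avoid clashing with the balance attribute), and the sequence length bound is arbitrary.

```python
import random

class Account:
    """Hypothetical banking account with the methods named in the example."""
    def __init__(self):
        self.state, self.balance = "new", 0
    def open(self): self.state = "open"
    def setup(self): assert self.state == "open"
    def deposit(self, amt=10): assert self.state == "open"; self.balance += amt
    def withdraw(self, amt=5): assert self.state == "open"; self.balance -= amt
    def balance_query(self): return self.balance
    def summarize(self): return f"balance={self.balance}"
    def close(self): self.state = "closed"

def random_sequence(rng, max_middle=6):
    """open - setup - deposit - [deposit|withdraw|balance|summarize]* - withdraw - close"""
    middle = [rng.choice(["deposit", "withdraw", "balance_query", "summarize"])
              for _ in range(rng.randint(0, max_middle))]
    return ["open", "setup", "deposit"] + middle + ["withdraw", "close"]

def run_sequence(seq):
    """Drive a fresh Account through the named methods in order."""
    acc = Account()
    for name in seq:
        getattr(acc, name)()
    return acc

rng = random.Random(42)                 # seeded so failures are repeatable
for _ in range(20):
    acc = run_sequence(random_sequence(rng))
    assert acc.state == "closed"        # every valid life history ends closed
```

Seeding the random generator preserves repeatability, one of the testing-method requirements from the introduction.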
Partition Class Testing

• Reduces the number of test cases required (similar to equivalence partitioning)
• State-based partitioning:
  • Categorize and test methods separately based on their ability to change the state of a class
  • Example: deposit and withdraw change state but balance does not
• Attribute-based partitioning:
  • Categorize and test operations based on the attributes that they use
  • Example: attributes balance and creditLimit can define partitions
• Category-based partitioning:
  • Categorize and test operations based on the generic function each performs
  • Example: initialization (open, setup), computation (deposit, withdraw), queries (balance, summarize), termination (close)
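Category-based partitioning can be sketched as data: group the account methods by generic function and derive one minimal sequence per partition instead of one per method combination. The partition table below just restates the slide's example; the sequence-derivation rule is an illustrative assumption.

```python
# Category-based partitions for the Account class from the example:
PARTITIONS = {
    "initialization": ["open", "setup"],
    "computation":    ["deposit", "withdraw"],
    "queries":        ["balance", "summarize"],
    "termination":    ["close"],
}

def sequences_per_partition(partitions):
    """One minimal test sequence per partition: initialize, exercise the
    partition's methods, terminate. Initialization/termination are covered by
    the bare minimum life history."""
    seqs = {}
    for name, methods in partitions.items():
        if name in ("initialization", "termination"):
            seqs[name] = ["open", "setup", "close"]
        else:
            seqs[name] = ["open", "setup"] + methods + ["close"]
    return seqs

seqs = sequences_per_partition(PARTITIONS)
for name, seq in seqs.items():
    print(name, "->", seq)
```

Four sequences now cover all seven methods, which is the test-case reduction the partitioning is meant to buy.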
[2] Integration Testing

• OO does not have a hierarchical control structure, so conventional top-down and bottom-up integration tests have little meaning
• Integration applies three different incremental strategies:
  • Thread-based testing: integrates classes required to respond to one input or event
  • Use-based testing: integrates classes required by one use case
  • Cluster testing: integrates classes required to demonstrate one collaboration
Random Integration Testing

• Multiple-class random testing:
  1. For each client class, use the list of class methods to generate a series of random test sequences. The methods will send messages to other server classes
  2. For each message that is generated, determine the collaborating class and the corresponding method in the server object
  3. For each method in the server object (that has been invoked by messages sent from the client object), determine the messages that it transmits
  4. For each of these messages, determine the next level of methods that are invoked and incorporate these into the test sequence
Behavioral Integration Testing

• Derive tests from the object-behavioural analysis model
• Each state in a state diagram should be visited in a "breadth-first" fashion:
  • Each test case should exercise a single transition
  • When a new transition is being tested, only previously tested transitions are used
  • Each test case is designed around causing a specific transition
• Example:
  • A credit card can move between undefined, defined, submitted and approved states
  • The first test case must test the transition out of the start state undefined and not any of the later transitions
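The breadth-first derivation of test cases can be made concrete. The transition table below encodes the credit-card example; the event names (define, submit, approve) are invented labels for the transitions the slide implies.

```python
from collections import deque

# Transitions of the credit-card state machine from the example:
TRANSITIONS = {
    ("undefined", "define"):  "defined",
    ("defined",   "submit"):  "submitted",
    ("submitted", "approve"): "approved",
}

def breadth_first_test_cases(transitions, start="undefined"):
    """Each test case exercises exactly one new transition; the path used to
    reach it consists only of transitions already tested (breadth-first from
    the start state)."""
    cases, frontier, known_path = [], deque([start]), {start: []}
    while frontier:
        state = frontier.popleft()
        for (src, event), dst in transitions.items():
            if src == state and dst not in known_path:
                path = known_path[state] + [event]
                cases.append(path)            # one test case per new transition
                known_path[dst] = path
                frontier.append(dst)
    return cases

cases = breadth_first_test_cases(TRANSITIONS)
for c in cases:
    print(c)
```

The first generated case exercises only the transition out of undefined, exactly as the slide requires; later cases reuse already tested transitions as their prefix.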
[3] Validation Testing

• Are we building the right product? Validation succeeds when the software functions in a manner that can reasonably be expected by the customer
• Focus on user-visible actions and user-recognizable outputs
• Details of class connections disappear at this level, and the focus moves to:
  • Use-case scenarios from the software requirements spec
  • Black-box testing to create a deficiency list
  • Acceptance tests through alpha (at the developer's site) and beta (at the customer's site) testing with actual customers
[4] System Testing

• Software may be part of a larger system. This often leads to "finger pointing" by other system development teams
• Finger-pointing defence:
  1. Design error-handling paths that test external information
  2. Conduct a series of tests that simulate bad data
  3. Record the results of tests to use as evidence
• Types of system testing:
  • Recovery testing: how well and how quickly does the system recover from faults
  • Security testing: verify that protection mechanisms built into the system will protect it from unauthorized access (hackers, disgruntled employees, fraudsters)
  • Stress testing: place an abnormal load on the system
  • Performance testing: investigate the run-time performance within the context of an integrated system
Automated Testing

• CPPUnit on SourceForge.net
• Differentiates between:
  • Errors (unanticipated problems, usually caught by exceptions)
  • Failures (anticipated problems, checked for with assertions)
• Basic unit of testing:
  • CPPUNIT_ASSERT(Bool) examines an expression
• CPPUnit has a variety of test classes (e.g. TestFixture). The approach is to inherit from them and override particular methods
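The error/failure distinction that CppUnit (and JUnit) make can be reproduced in a few lines. This is a Python sketch of the idea, not CppUnit itself: a failed assertion is an anticipated failure, while any other exception is an unanticipated error.

```python
def run_test(test_fn):
    """Return 'pass', 'failure' (anticipated: a failed assertion),
    or 'error' (unanticipated: any other exception)."""
    try:
        test_fn()
        return "pass"
    except AssertionError:
        return "failure"      # an assertion we wrote checked for this outcome
    except Exception:
        return "error"        # an unanticipated problem surfaced as an exception

def ok():        assert 1 + 1 == 2
def failing():   assert 1 + 1 == 3
def erroring():  return 1 / 0     # raises ZeroDivisionError

print(run_test(ok), run_test(failing), run_test(erroring))
```

Frameworks report the two categories separately because a failure points at the requirement being tested, while an error points at a defect the test author did not anticipate.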
Testing Summary

• Testing is integrated with and affects all stages of the software engineering lifecycle
• Strategies: a bottom-up approach (class, integration, validation and system-level testing)
• Techniques:
  • White box (look into technical internal details)
  • Black box (view the external behaviour)
  • Debugging (a systematic cause-elimination approach is best)

[Figure: analysis → design → code → test]
Testing Summary

• Verification is the process of checking that each system description is self-consistent, and that different system descriptions are consistent and complete with respect to each other
• Verifying: building the product right
• For verification in the UML use:
  • Class, interaction and statechart diagrams

[Figure: analysis → design → code → test]
Testing Summary

• Validation is the process of checking that each system description is consistent with the customer's requirements
• Validating: building the right product
• For validation in the UML use:
  • Use case diagrams and activity diagrams to capture user processes or workflows

[Figure: analysis → design → code → test]