An Introduction to Software Testing &
Test Management.
Prepared by: Anuraj S.L (Lead QA Engineer)
Let’s Start with a simple Game
SOFTWARE IS A SKIN THAT SURROUNDS OUR CIVILIZATION
Some Typical Project Blues…
Project kick-off • Bug fixing just started • First bug fixed • Regression testing • Fixed regression too • Requirement changes, what to do? • When will testing stop? • Next increment, please • Quality... God, let me sleep!
Tester vs Developer
What is Quality?
What is Software Testing?
Why testing is necessary?
Who does the testing?
What has to be tested?
When is testing done?
How often to test?
What are Testing Standards?
Introduction & Fundamentals
Quality is “fitness for use” - (Joseph Juran)
Quality is “conformance to requirements” - (Philip B. Crosby)
Quality of a product or service is its ability to satisfy the needs and expectations of the customer.
What is Quality?
Quality - the most important factor affecting an organization’s long-term performance.
Quality - the way to achieve improved productivity and competitiveness in any organization.
Quality - saves. It does not cost.
Quality - is the solution to the problem, not a problem.
Quality Principles
TESTING DEFINITIONS
Testing is the process of exercising or evaluating a system or system component by manual or automated means to verify that it satisfies specified requirements - (IEEE 83a)
Process: Sequence of steps performed for a given purpose. (IEEE)
Software Process: A set of activities, methods, practices, and transformations that people use to develop and maintain software and associated products.(SEI-CMM)
WHAT IS SOFTWARE TESTING ?
• Testing is like making love... it never lasts as long as you wanted it to. • Testing is like making love... once it's over you're the only one interested in discussing how it went. • Testing is like making love... well, I'm sure you get the idea.
Testing is a matter of taking what's been delivered and investigating, for example:
• whether the feature contains everything that the spec said it should
• whether the feature contains anything that the spec didn’t say it should
• whether the feature violates any of the implicit rules
• whether the feature is what the user would want
WHY DOES TESTING MATTER?
NIST report, “The Economic Impacts of Inadequate Infrastructure for Software Testing” (2002)
Inadequate software testing costs the US alone between $22 and $59 billion annually. Better approaches could cut this amount in half.
Major failures: Ariane 5 explosion, Mars Polar Lander, Intel’s Pentium FDIV bug.
Insufficient testing of safety-critical software can cost lives: Therac-25 radiation machine, 3 dead.
Huge losses due to web application failures:
• Financial services: $6.5 million per hour (just in the USA!)
• Credit card sales applications: $2.4 million per hour (in the USA)
• In Dec 2006, Amazon.com’s BOGO offer turned into a double discount
• Symantec says that most security vulnerabilities are due to faulty software
WE WANT OUR PROGRAMS TO BE RELIABLE
Testing is how, in most cases, we find out if they are.
Ariane 5: an exception-handling bug forced self-destruct on the maiden flight (a 64-bit to 16-bit conversion; about $370 million lost).
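The Ariane 5 failure came from converting a 64-bit value into a 16-bit signed integer without a range check. A minimal Python sketch of the same class of bug (the function name and values are illustrative, not taken from the actual flight software):

```python
import struct

def to_int16(value: int) -> bytes:
    # Pack a value into a signed 16-bit field; values outside
    # -32768..32767 cannot be represented and raise an exception.
    return struct.pack('<h', value)

to_int16(32767)        # fits: the largest signed 16-bit value

try:
    to_int16(70000)    # does not fit: like Ariane 5's horizontal-bias
                       # value, it overflows the 16-bit target type
except struct.error as exc:
    print("conversion failed:", exc)
```

An unhandled exception of exactly this kind shut down the rocket’s inertial reference system.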
BASIC QUESTIONS ON TESTING
Why to test? Testing is essential to make sure the software works properly and does the work it is meant to perform.
What to test? Any working product which forms part of the software application has to be tested.
Both data and programs must be tested.
How often to test? When a program (source code) is modified or newly developed, it has to be tested.
Who tests? Programmer, Tester and Customer
OBJECTIVES OF TESTING
Provide confidence in the system
Identify areas of weakness
Establish the degree of quality
Establish the extent to which the requirements have been met, i.e. what the users asked for is what they got, not what someone else thought they wanted
To provide an understanding of the overall system
To prove it is both usable and operable
To check if the system is “ Fit for purpose”.
To check if the system does what it is expected to do.
OBJECTIVES OF TESTER
Find bugs as early as possible and make sure they get fixed.
To understand the application well.
Study the functionality in detail to find where the bugs are likely to occur.
Study the code to ensure that each and every line of code is tested.
Create test cases in such a way that testing is done to uncover the hidden bugs, and also ensure that the software is usable and reliable.
WHO TESTS THE SOFTWARE?
Developer
Understands the system, but will test it “gently” and is driven by “delivery”
Independent tester
Must learn about the system, but will attempt to break it, and is driven by quality
TERMS: VERIFICATION AND VALIDATION
Software testing is actually one element of a broader topic that is often referred to as Verification and Validation (V&V).
Verification --> refers to the set of activities that ensure that software correctly implements a specific function.
Validation -> refers to a different set of activities that ensure that the software that has been built is traceable to customer requirements.
In Short:
Verification: “Are we building the product right?”
Validation: “Are we building the right product?”
ERROR, BUG, FAULT & FAILURE
Error : a human action that produces an incorrect result, thereby introducing a fault.
Bug : the presence of an error at the time of execution of the software.
Fault : a state of the software caused by an error.
Failure : a deviation of the software from its expected result; it is an event.
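A small, hypothetical Python example makes the chain concrete: a human error while typing introduces a fault, which only turns into an observable failure for certain inputs:

```python
# Error: the programmer meant "strictly below 18" but typed "<=".
# Fault: the faulty boundary condition now sits in the code.
def is_minor(age: int) -> bool:
    """Intended behaviour: True only for ages below 18."""
    return age <= 18   # fault introduced by the human error

# For most inputs the fault stays hidden -- no failure is observed:
assert is_minor(10) is True
assert is_minor(30) is False

# Failure: at the boundary the software deviates from its expected result.
print(is_minor(18))   # prints True, but the expected result is False
```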
SOFTWARE TESTING LIFECYCLE – PHASES
Testing Life Cycle:
Project initiation → System study → Test plan → Design test cases → Execute test cases (manual/automated) → Report defects → Regression test → Analysis → Summary reports
Testing Cycle starts with the study of requirements. Understanding of the requirements is very essential for testing the product.
Requirements study
As per IEEE 829, the test plan structure is as follows:
• Test plan identifier
• Introduction
• Test items
• Features to be tested
• Features not to be tested
• Approach
• Item pass/fail criteria
• Suspension criteria and resumption requirements
• Test deliverables
• Testing tasks
• Environmental needs
• Responsibilities
• Staffing and training needs
• Schedule
• Risks and contingencies
• Approvals
Reasons for writing a test plan:
• It guides our thinking
• Forces us to confront the challenges that await us
• Focuses us on important topics
• Serves as a vehicle for communicating with other members of the project team: testers, peers, managers and other stakeholders
• Becomes a record of previous discussions and agreements between the testers and the rest of the project team
Test Case Design and Development
• Component identification
• Test specification design
• Test specification review
Test Execution
• Code review
• Test execution and evaluation
• Performance and simulation
Test Closure
• Test summary report
• Project de-brief
• Project documentation
Test Progress Monitoring and Documenting
Test documenting & monitoring serve various purposes during the project:
• Give feedback on how the testing work is going, allowing opportunities to guide and improve the testing and the project
• Provide visibility about the test results
• Measure the status of the testing, test coverage and test items against the exit criteria to determine whether the test work is done
• Gather data for use in estimating future test efforts
Test progress monitoring techniques vary considerably depending on:
• The preferences of the testers and stakeholders
• The needs and goals of the project
• Regulatory requirements, time and money constraints and other factors
Test progress monitoring is about gathering detailed test data.
The Universal Test Procedure
“Try it and see if it works.”
• Learn about it, model it, speculate about it
• Configure it, operate it
• Know what to look for; see what’s there
• Understand the requirements
• Identify problems; distinguish bad problems from not-so-bad problems
• Models, evaluation, coverage
TESTING METHODOLOGIES
Black box testing
• No knowledge of the internal program design or code required
• Tests are based on requirements and functionality
White box testing
• Knowledge of the internal program design and code required
• Tests are based on coverage of code statements, branches, paths and conditions
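The difference is easiest to see on a concrete example. In this hypothetical Python sketch, the black-box tests are derived purely from the stated requirement, while the white-box tests are derived from the code’s branches and boundaries:

```python
def shipping_fee(order_total: float) -> float:
    """Requirement: orders of 50.0 or more ship free; otherwise the fee is 4.99."""
    if order_total >= 50.0:
        return 0.0
    return 4.99

# Black-box tests: written from the requirement alone, without reading the code.
assert shipping_fee(100.0) == 0.0
assert shipping_fee(10.0) == 4.99

# White-box tests: written from the code, covering both branches of the "if"
# and exercising the exact boundary its condition creates.
assert shipping_fee(50.0) == 0.0     # branch taken exactly at the boundary
assert shipping_fee(49.99) == 4.99   # branch not taken just below it
```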
DIFFERENT LEVELS OF TESTING
Testing Levels
• Unit testing
• Integration testing
• System testing
Unit testing
Tests each module individually. Follows a white box approach (the logic of the program). Done by programmers, not by testers.
Unit testing
Objectives
• To test the function of a program or unit of code such as a program or module
• To test internal logic
• To verify internal design
• To test path & condition coverage
• To test exception conditions & error handling
When: after modules are coded
Input: internal application design, master test plan, unit test plan
Output: unit test report
Who: developer
Methods: white box testing techniques
Tools: debuggers, code re-structurers, code analyzers, path/statement coverage tools
Education: testing methodology, effective use of tools
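A minimal unit test in Python’s standard `unittest` framework, covering the objectives above: internal logic, boundary conditions, and error handling. The `discount` function is a hypothetical unit under test, not from any real project:

```python
import unittest

# Hypothetical unit under test: one function, tested in isolation.
def discount(price: float, percent: float) -> float:
    """Apply a percentage discount; reject out-of-range percentages."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class DiscountTest(unittest.TestCase):
    def test_normal_path(self):                 # internal logic
        self.assertEqual(discount(200.0, 25), 150.0)

    def test_boundaries(self):                  # path & condition coverage
        self.assertEqual(discount(100.0, 0), 100.0)
        self.assertEqual(discount(100.0, 100), 0.0)

    def test_error_handling(self):              # exception conditions
        with self.assertRaises(ValueError):
            discount(100.0, -5)

# Run the suite programmatically (e.g. from a build script):
suite = unittest.defaultTestLoader.loadTestsFromTestCase(DiscountTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

In practice such tests are collected by a runner (`python -m unittest`) rather than invoked by hand.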
Integration Testing
Testing of combined parts of an application to determine their functional correctness.
‘Parts’ can be: code modules, individual applications, or client/server applications on a network.
Integration testing
Objectives: to technically verify proper interfacing between modules, and within sub-systems
When: after modules are unit tested
Input: internal & external application design, master test plan, integration test plan
Output: integration test report
Who: developers
Methods: white and black box techniques, problem/configuration management
Tools: debuggers, code re-structurers, code analyzers
Education: testing methodology, effective use of tools
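A sketch of the idea in Python (both modules are hypothetical): each module may pass its own unit tests, and the integration test verifies the interface between them, i.e. that what one produces is what the other expects:

```python
# Two hypothetical modules, each already unit tested in isolation.
def parse_order(line: str) -> dict:
    """Parsing module: turns 'sku,qty' text into a structured order."""
    sku, qty = line.split(",")
    return {"sku": sku.strip(), "qty": int(qty)}

def price_order(order: dict, catalog: dict) -> float:
    """Pricing module: looks the SKU up and multiplies by quantity."""
    return catalog[order["sku"]] * order["qty"]

# Integration test: exercises the *interface* between the modules.
# A mismatch (e.g. parse_order returning "SKU" while price_order
# expects "sku") would pass unit tests but fail here.
catalog = {"A42": 3.50}
total = price_order(parse_order("A42, 2"), catalog)
assert total == 7.0
```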
System Testing
Objectives:
• To ensure that the system functions together with all the components of its environment as a total system
• To ensure that the system releases can be deployed in the current environment
• To verify that the system components perform control functions
• To perform inter-system tests
• To demonstrate that the system performs both functionally and operationally as specified
• To perform appropriate types of tests relating to transaction flow, installation, reliability, regression, etc.
When: after integration testing
Input: detailed requirements & external application design, master test plan, system test plan
Output: system test report
Who: development team and users
Methods: problem/configuration management
Tools: recommended set of tools
Education: testing methodology, effective use of tools
Types Of System Testing
Alpha Testing - It is carried out by the test team within the developing organization.
Beta Testing - It is performed by a selected group of friendly customers.
Acceptance Testing - It is performed by the customer to determine whether to accept or reject the delivery of the system.
Performance Testing - It is carried out to check whether the system meets the nonfunctional requirements identified in the SRS document.
Different Types Of Testing
Functional testing
Black box type testing geared to functional requirements of an application. Done by testers.
System testing
Black box type testing that is based on overall requirements specifications; covering all combined parts of the system.
End-to-end testing
Similar to system testing; involves testing of a complete application environment in a situation that mimics real-world use.
Sanity testing Initial effort to determine if a new software version is performing well enough to accept it for a major testing effort.
Regression testing Re-testing after fixes or modifications of the software or its environment.
Acceptance testing Final testing based on specifications of the end-user or customer
Load testing Testing an application under heavy loads, e.g. testing a web site under a range of loads to determine when the system’s response time degrades or fails.
Stress testing Testing under unusually heavy loads, heavy repetition of certain actions or inputs, input of large numerical values, large complex queries to a database, etc. The term is often used interchangeably with ‘load’ and ‘performance’ testing.
Performance testing Testing how well an application complies to performance requirements.
Install/uninstall testing Testing of full, partial or upgrade install/uninstall process.
Recovery testing Testing how well a system recovers from crashes, HW failures or other problems.
Compatibility testing Testing how well software performs in a particular HW/SW/OS/NW environment.
Exploratory testing / ad-hoc testing Informal SW testing that is not based on formal test plans or test cases; testers learn the SW in totality as they test it.
Comparison testing Comparing SW strengths and weaknesses to competing products.
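Regression testing, in particular, amounts to keeping every old check and re-running it after each change, so that a fix cannot silently break behavior that used to work. A hypothetical Python sketch (the function and the fixed bug are illustrative):

```python
def slugify(title: str) -> str:
    # v1 behaviour: lowercase, spaces become hyphens.
    # v2 fix: also trim surrounding whitespace (hypothetical bug report).
    return title.strip().lower().replace(" ", "-")

def run_regression_suite():
    # Tests written for v1 -- they must still pass after the v2 fix:
    assert slugify("Hello World") == "hello-world"
    assert slugify("ABC") == "abc"
    # Test added together with the fix -- pins down the corrected behaviour:
    assert slugify("  Hello World  ") == "hello-world"

run_regression_suite()
print("regression suite passed")
```

The suite only grows: each fixed defect contributes a test that guards against its reappearance.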
Entry Criteria
Various steps to be performed before the start of a test, i.e. pre-requisites. E.g.:
• Timely environment set-up
• Starting the web server/app server
• Successful implementation of the latest build, etc.
Exit Criteria
Contains tasks like:
• Bringing down the system/server
• Restoring the system to the pre-test environment
• Database refresh, etc.
TESTING STANDARDS
External Standards
Familiarity with and adoption of industry test standards from organizations.
Internal Standards
Development and enforcement of the test standards that testers must meet.
What are IEEE/ISO standards?
IEEE and ISO are organizations which set international standards.
ISO/IEC/IEEE 29119 Software Testing is an internationally agreed set of standards for software testing that can be used within any software development life cycle or organization.
List of documents
Here is a list of some important documents provided by IEEE, which should be used and maintained regularly:
• Software requirement specification document
• Test design document
• Test plan document
• Test case document
• Test procedure document
• Test strategy
• Bug report document
• Test data document
• Weekly status report document
• User acceptance report document
• Risk assessment report
• Test analysis
• Test summary reports
Other standards…..
ISO – International Organization for Standardization
Six Sigma – Zero Defect Orientation
SPICE – Software Process Improvement and Capability Determination
TESTS MAKE CHANGE SMOOTH
small change → “Has anything broken?” → small change → “Has anything broken?” → small change → “Has anything broken?”
AGILE DEV PROCESS
Sample Test Strategy
Each sprint:
• Participate in sprint planning; estimate tasks (QA input)
• Write test scenarios using stories (QA): high-level test scenarios before coding begins, to guide dev
• Unit/integration tests (Dev): automated tests at the code level, run every night or after new code is added; bugs found are fixed immediately
• Integration/feature tests (QA): performed when a feature is finished being implemented; errors found are logged in the main project bug system and prioritized. After all prioritized bugs are fixed, the feature is ready for system test at the end of the project.
• System/regression tests (QA): performed when all features are finished being implemented and moved to the UAT server; errors found are logged in the main project bug system and prioritized.
• Reviews (stories, requirements, tests): increase collaboration and communication
Test Strategy - outlined
• Specification: new features are updated in the backlog; development starts accordingly.
• Development: each feature, once done, is deployed to the Dev/Int environment.
• Dev/Int environment: each feature (Feature 1, Feature 2, Feature 3, ...) is tested and bugs, if any, are logged. Bugs are fixed in the dev environment and re-tested.
• UAT environment: all the features and bug fixes are deployed here. System testing / automated regression / load testing can be performed.
• Release: release notes are sent if the exit criteria are met.
The smoke suite is scheduled to run every night to ensure system stability.
Working as a Test Leader
• Involved in the planning, monitoring and control of the testing activities and tasks
• Devises the test objectives, organizational test policies, test strategies and test plans
• Estimates the testing activities and negotiates with management to acquire the necessary resources
• Recognizes when test automation is appropriate; plans the effort, selects the tools and ensures training of the team
• Consults with other groups (e.g. programmers) to help them with their testing
• Leads, guides and monitors the analysis, design, implementation and execution of the test cases, test procedures and test suites
• Makes sure the test environment is put into place before test execution and managed during test execution
• Schedules the tests for execution and monitors, measures, controls and reports on the test progress, the product quality status and the test results, adapting the test plan as needed
• Adjusts the testing activities as and when required
• Finally, writes summary reports on test status
Working as a Tester
• Analyze and review requirements specifications and contribute to test plans
• Involved in identifying test conditions and creating test designs, test cases, test procedure specifications and test data
• Help in automating the tests
• Set up the test environments or assist system administration and network management staff in doing so
• Execute and log the tests, evaluate the results and document problems found
• Monitor the testing and the test environment, often using tools
• Gather performance metrics
• Review each other’s work, including test specifications, defect reports and test results
Software Testing Principles
• Principle #1: Complete testing is impossible.
• Principle #2: Software testing is not simple. Reasons:
  – Quality testing requires testers to understand a system/product completely
  – Quality testing needs an adequate test set and efficient testing methods
  – Very tight schedules and a lack of test tools
•Principle #3: Testing is risk-based.
•Principle #4: Testing must be well planned and documented.
• Principle #5: Quality software testing depends on:
  – Good understanding of the software product and the related application domain
  – Cost-effective testing methodology, coverage, test methods, and tools
  – Good engineers with creativity and solid software testing experience
Software Testing Myths
- We can test a program completely. In other words, we test a program exhaustively.
- We can find all program errors as long as test engineers do a good job.
- We can test a program by trying all possible inputs and states of a program.
- A good test suite must include a great number of test cases.
- Good test cases always are complicated ones.
- Software test automation can replace test engineers to perform good software testing.
- Software testing is simple and easy. Anyone can do it. No training is needed.
- Documentation of the software testing process is a waste of time and not necessary.
Testing Levels Based on Test Process Maturity
Level 0 : There’s no difference between testing and debugging
Level 1 : The purpose of testing is to show correctness
Level 2 : The purpose of testing is to show that the software doesn’t work
Level 3 : The purpose of testing is not to prove anything specific, but to reduce the risk of using the software
Level 4 : Testing is a mental discipline that helps all IT professionals develop higher quality software
Level 0 Thinking
Testing is the same as debugging
Does not distinguish between incorrect behavior and mistakes in the program
Does not help develop software that is reliable or safe
Level 1 Thinking
Purpose is to show correctness
Correctness is impossible to achieve
What do we know if there are no failures?
– Good software or bad tests?
Test engineers have no:
– strict goal
– real stopping rule
– formal test technique
Test managers are powerless
Level 2 Thinking
Purpose is to show failures
Looking for failures is a negative activity
Puts testers and developers into an adversarial relationship
What if there are no failures?
This describes most software companies.
How can we move to a team approach ??
Level 3 Thinking
Testing can only show the presence of failures
Whenever we use software, we incur some risk
Risk may be small and consequences unimportant
Risk may be great and the consequences catastrophic
Testers and developers work together to reduce risk
This describes a few “enlightened” software companies
Level 4 Thinking
“A mental discipline that increases quality”
Testing is only one way to increase quality
Test engineers can become technical leaders of the project
Primary responsibility to measure and improve software quality
Their expertise should help the developers
This is the way “traditional” engineering works
Are we able to achieve these points?
• Do we get enough time for test case preparation? (Y/N)
• Do we have space for test estimation? (Y/N)
• Can we work in a stable environment? (Y/N)
• Do we have space for doing sanity testing? (Y/N)
• Do we have space for performing smoke testing? (Y/N)
QUESTIONS ???