
Guidelines for Software Testing




Guidelines for Software Testing

June, 2002


Table of Contents

1. Purpose
2. Scope
3. Audience
4. Organization Of The Manual
5. Abbreviations
6. Introduction to Software Testing
7. Testing Principles
8. Testing Philosophies
9. Automated Software Testing
10 Code Coverage
  10.1 General Guidelines for Usage of Coverage
11 System Testing
  11.1 Objective
  11.2 Processes to be followed in the activities
  11.3 System Test Team
  11.4 Hypothetical Estimate of when the errors might be found
  11.5 Input
  11.6 Deliverables
  11.7 Various Methods of System Testing
    11.7.1 Functional Testing
    11.7.2 Security Testing
    11.7.3 Performance Testing
    11.7.4 Stress Testing
    11.7.5 Reliability Testing
    11.7.6 Usability Testing
    11.7.7 Environment Testing
    11.7.8 Storage Testing
    11.7.9 Installation Testing
    11.7.10 Recovery Testing
    11.7.11 Volume Testing
    11.7.12 Error Guessing
    11.7.13 Data Compatibility Testing
    11.7.14 User Interface Testing
    11.7.15 Acceptance Testing
    11.7.16 Limit Testing
    11.7.17 Error Exit Testing
    11.7.18 Consistency Testing
    11.7.19 Help Information Testing
    11.7.20 Manual Procedure Testing


    11.7.21 User Information Testing

12 Testing GUI Applications
  12.1 Introduction
    12.1.1 GUIs as Universal Client
  12.2 GUI Test Strategy
    12.2.1 Test Principles Applied to GUIs
  12.3 Types of GUI Errors
  12.4 Four Stages of GUI Testing
  12.5 Types of GUI Test
    12.5.1 Checklist Testing
    12.5.2 Navigation Testing
    12.5.3 Application Testing
    12.5.4 Desktop Integration Testing
    12.5.5 Synchronisation Testing
  12.6 Non-functional Testing of GUI
    12.6.1 Soak Testing
    12.6.2 Compatibility Testing
    12.6.3 Platform/Environment Testing
  12.7 Automating GUI Tests
    12.7.1 Justifying Automation
    12.7.2 Automating GUI Tests
    12.7.3 Criteria for the Selection of GUI Tool
    12.7.4 Points to be Considered While Designing GUI Test Suite
  12.8 Examples of GUI Tests
13 Client / Server Testing
  13.1 Testing Issues
  13.2 C/S Testing Tactics
14 Web Testing
  14.1 Standards of WEB Testing
    14.1.1 Frames
    14.1.2 Gratuitous Use of Bleeding-Edge Technology
    14.1.3 Scrolling Text, Marquees & Constantly Running Animations
    14.1.4 Long Scrolling Pages
    14.1.5 Complex URLs
    14.1.6 Orphan Pages
    14.1.7 Non-standard Link Colors
    14.1.8 Outdated Information
    14.1.9 Lack of Navigation Support
    14.1.10 Overly Long Download Times
  14.2 Testing of User-Friendliness
    14.2.1 Use Familiar, Natural Language
    14.2.2 Checklist of User-friendliness
  14.3 Testing of User Interface


    14.3.1 Visual Appeal
    14.3.2 Grammatical and Spelling Errors in the Content
  14.4 Server Load Testing
  14.5 Database Testing
    14.5.1 Relevance of Search Results
    14.5.2 Query Response Time
    14.5.3 Data Integrity
    14.5.4 Data Validity
    14.5.5 Recovery of Data
  14.6 Security Testing
    14.6.1 Network Security
    14.6.2 Payment Transaction Security
  14.7 Software Performance Testing
    14.7.1 Correct Data Capture
    14.7.2 Completeness of Transaction
    14.7.3 Gateway Compatibility
  14.8 Web Testing Methods
    14.8.1 Stress Testing
    14.8.2 Regression Testing
    14.8.3 Acceptance Testing
15 Guidelines to Prepare Test Plan
  15.1 Preparing Test Strategy
  15.2 Standard Sections of a Test Plan
16 Amendment History
17 Guideline for Test Specifications
References
Appendix – 1: List of Testing Tools
Appendix – 2: Sample System Test Plan
Appendix – 3: Sample Test Plan for Web Testing
  Sample Test Cases for Login Page
GLOSSARY


1. Purpose:
The purpose of this document is to define the methodology for software testing and to guide and facilitate the use of that methodology in testing software products. The focus is on reducing software Quality Control costs by setting up streamlined processes.

2. Scope:
This document provides the guidelines for testing processes. All testing projects executed in a software development company should follow these guidelines for their processes.

3. Audience:
The target audience of this document comprises Project Managers, Project Leaders, Test Personnel, and any new person joining this activity with a basic understanding of Software Engineering.

4. Organization Of The Manual:
This guideline covers various aspects of the testing process. It covers the topics given below:

Section  Topic Covered                       Brief Description of the Topic
1        Introduction to S/W Testing        A brief description of S/W testing and its background.
2        Testing principles and myths       Describes various testing principles and the underlying misconceptions.
3        Testing philosophies               Describes the general steps to be followed in the process.
4        Automated S/W testing              Covers principles of automating s/w testing and the usage of various testing tools.
5        Code coverage                      Deals with the usage of the code coverage method in testing.
6        System testing                     Gives an overall idea of system testing, including the various methods of system testing.
7        Testing GUI applications           Gives guidelines on GUI testing.
8        Client/server testing              Gives a brief description of the principles of client/server testing.
9        Web testing                        Gives ideas for testing web-based applications.
10       Guidelines to prepare test plan    Covers the different sections of a test plan and its contents in general.
11       Guidelines for test specification  Gives guidelines to prepare test specifications.
12       Appendix                           Gives a list of testing tools and a sample system test plan.

5. Abbreviations:
GUI – Graphical User Interface
SUT – Software Under Test
DUT – Device Under Test
C/S – Client Server


6. Introduction to Software Testing

Testing is a process to detect the difference between the observed and stated behaviour of software. Due to the fallibility of its human designers and its own abstract, complex nature, software development must be accompanied by quality assurance activities. It is not unusual for developers to spend 40% of the total project time on testing. For life-critical software (e.g. flight control, reactor monitoring), testing can cost 3 to 5 times as much as all other activities combined. The destructive nature of testing requires that the developer discard preconceived notions of the correctness of his/her developed software.

Testing should systematically uncover different classes of errors in a minimum amount of time and with a minimum amount of effort. A secondary benefit of testing is that it demonstrates that the software appears to be working as stated in the specifications. The data collected through testing can also provide an indication of the software’s reliability and quality. But testing cannot show the absence of defects; it can only show that defects are present. In practice, testing is used more to confirm quality than to achieve it.

7. Testing Principles

Some of the Principles of Testing in different phases of testing are:

Testing Project Management: Testing is the process of executing a program with the intent of finding errors. Do not plan a testing effort under the assumption that no errors will be found.

Preparation of test cases:

- Study project priorities while deciding on the testing activities, e.g. for an on-line system, pay more attention to response time.
- Test carefully the features used frequently.
- A good test case is one that has a high probability of detecting an as-yet-undiscovered error. A successful test case is one that detects an as-yet-undiscovered error.
- A necessary part of a test case is a definition of the expected output or result.
- Test cases must be written for invalid and unexpected, as well as valid and expected, input conditions.
- Avoid throwaway test cases unless the program is truly a throwaway program.

Testing:

- A programmer should avoid attempting to test his or her own program. A programming organisation should not test its own programs.
- Thoroughly inspect the results of each test.
- Examining a program to check that it does what it is supposed to do is only half of the battle. The other half is seeing whether the program does what it is NOT supposed to do.
- The probability of the existence of more errors in a section of a program is proportional to the number of errors already found in that section.
- Testing is an extremely creative and intellectually challenging task.
- Tools should be used for better control over testing and to improve productivity.

Software Testing Myths :

A test process that complements object-oriented design and programming can significantly increase reuse, quality, and productivity. Establishing such a process usually means dealing with some common misperceptions (myths) about testing software. This section is about these perceptions. We’ll explore these myths and their assumptions and then explain why each myth is at odds with reality.


Myth 1: Testing is unnecessary. With iterative and incremental development we obviate the need for a separate test activity, which was really only necessary in the first place because conventional programming languages made it so easy to make mistakes.

Reality: Human error is as likely as ever.

Myth 2: Testing gets in the way. The idea of testing to find faults is fundamentally wrong; all we need to do is keep “improving” our good ideas. The simple act of expression is sufficient to create trustworthy classes. Testing is a destructive, rote process and isn’t a good use of a developer’s creative abilities and technical skills.

Reality: Testing can be a complementary, integral part of development.

Myth 3: Testing is a structured/waterfall idea; it can’t be consistent with incremental object-oriented development. Objects evolve; they aren’t just designed, thrown over the wall for coding, and over another wall for testing. What’s more, if you test each class in a system separately, then you have to do “big-bang” integration, which is an especially bad idea with object-oriented systems.

Reality: Testing can be incremental and iterative. While the iterative and incremental nature of object-oriented development is inconsistent with a simple, sequential test process (test each unit, then integration test all of them, then do a system test), this does not mean that testing is irrelevant. The boundary that defines the scope of unit and integration testing is different for object-oriented development. Tests can be designed and exercised at many points in the process. Thus “design a little, code a little” becomes “design a little, code a little, test a little.”

Myth 4: Testing is trivial. Testing is simply poking around until you run out of time. All we need to do is start the app, try each use-case, and try some garbage input. Testing is neither serious nor challenging work; hasn’t most of it already been automated?

Reality: Hunches about testing completeness are notoriously optimistic. Adequate testing requires a sophisticated understanding of the system under test. You need to be able to develop an abstract view of the dynamics of control flow, data flow, and state space from a formal model of the system requirements. You need to be able to define the expected results for any input and state you select as a test case. This is interesting work for which little automation is available.

Myth 5: Automated GUI testing is sufficient. If a system is automatically exercised by trying permutations of GUI commands supplied by a command playback tool, the underlying application objects will be sufficiently tested.

Reality: GUI-based tests may be little more than automated testing-by-poking-around. While there are many useful capture/playback products to choose from, the number of hours a script runs has no direct or necessary correlation with the extent to which the system under test has been exercised. It is quite possible to retest the same application logic over and over, resulting in inflated confidence. Further, GUI test tools are typically of little use for objects in embedded systems.

Myth 6: If programmers were more careful, testing would be unnecessary. Extra effort, extra pressure, or extra incentive can eliminate programming errors. Bugs are simply an indication of poor work habits. These poor work habits could be avoided if we’d use a better management strategy.

Reality: Many bugs only surface during integration. There are many interactions among components that cannot be easily foreseen until all or most components of a system are integrated and exercised. So, even if we could eliminate all individual sources of error, integration errors are highly likely. Static methods cannot reveal interaction errors with the target or transient performance problems in hard real-time systems.


Myth 7: Testing is inconsistent with a commitment to quality. Testing assumes faults have escaped the design and programming process. This assumption is really just an excuse for sloppy development. All bugs are due to errors that could be avoided if different developer behaviour could be induced. This perception is often a restatement of the preceding sloppy-programmer myth.

Reality: Reliable software cannot be obtained without testing. Testing activities can begin and proceed in parallel with concept definition, OOA, OOD, and programming. When testing is correctly interleaved with development, it adds considerable value to the entire development process. The necessity of testing is not an indictment of anything more than the difficulty of building large systems.

Myth 8: Testing is too expensive; we don’t have time. To test beyond testing-by-poking-around takes too much time and costs too much. Test tools are an unnecessary luxury, since all we need are a few good pokes. Besides, projects always slip, and testing time gets squeezed anyway.

Reality: Pay me now, or pay me much more later. The cost of finding and correcting errors is always higher as the time between fault injection and detection increases. The lowest cost results when you prevent errors. If a fault goes unnoticed, it can easily take hours or days of debugging to diagnose, locate, and correct after the component is in widespread use.

Myth 9: Testing is the same (as it is with conventional software). The only kind of testing that matters is “black-box” system testing, where we define the externally observable behaviour to be produced from a given input. We don’t need to use any information about the implementation to select these tests and their test inputs.

Reality: OO code structure matters. Effective testing is guided by information about likely sources of error. The combination of polymorphism, inheritance, and encapsulation is unique to object-oriented languages, presenting opportunities for error that do not exist in conventional languages. Our testing strategy should help us look for these new kinds of errors and offer criteria to help decide when we’ve done enough looking. Since the “fundamental paradigm shift” often touted for object-oriented development has led to some new points of view and representations, our techniques for extracting test cases from these representations must also change.

8. Testing Philosophies

The following steps should be followed for a testing project. The Project Manager shall start preparation of the test plan: identifying the requirements, planning the test set-up, and dividing the various tasks among the product testing team. A sample System Test Plan is given in Appendix – 2.


1. Identify the requirements for training of the team members and draw up a plan to impart it. Get the plan sanctioned by the Group Head. If the training can be arranged with internal resources, the Project Co-ordinator can arrange it and forward the training details to the Head – Training Department for record purposes. In case the training has to be arranged with external resources, inform the Head – Training Department for implementation.

2. Obtain the test cases and checklists. If the customer has not given any test cases, then the team members will go through the user manual and other relevant documents and develop the test plan and test cases, along with the checklists to be used. The project co-ordinator will get the test plan and test cases reviewed and approved by using peer review or any suitable method. The Team Leader/Group Head should approve the test plan and test cases.

3. The test cases obtained from the client shall be considered the originals. The testers should make a copy of their allocated test cases and maintain the originals as a master copy; similarly, they should work on copies of the checklists to be used. Review the checklists and test cases for completeness. If required, test cases may be added or modified; these modifications have to be reviewed and approved.

4. Install the software and verify proper installation. If installation fails, inform the customer and despatch the product testing report. If required, return other materials, like the original CDs of the product. Wait for the next build from the customer.

5. If the installation is proper, then prepare the database for storing the defects and apply the test cases. Execute the testing as planned: first test the basic features and functions of the software, then go on to integration testing, simulation and stress testing. The testers should maintain the following records in everyday testing:

- Actual test records on the test checklist (recording of pass/fail against the test cases)
- Detailed Defect Report

6. The Project Co-ordinators should maintain the Weekly Project Status Report (to be sent to Client representative for status updating).

7. If any project has a particular record to maintain as per the client’s choice, it should be maintained in addition to the records suggested here.

8. Consolidate the defects found at each stage of testing and prepare a defect summary report along with the Product Testing report stating the extent to which the product meets its stated specifications.

9. At the end of the project, close the product testing, get the report signed by the Group Head, and send it to the customer.

9. Automated Software Testing


Automated Software Testing, as conducted with today’s automated test tools, is a development activity that includes programming responsibilities similar to those of the SUT developer. Automated test tools generate code comprising test scripts while exercising a user interface. This code can be modified and reused as automated test scripts for other applications, in less time.

Software testing tools make the job easier. Many standard tools are now available in the market, ready to serve the purpose. Alternatively, one can develop a customized test tool to serve a particular requirement, or use a standard tool and then ‘instruct’ it to get the desired service.

Usage of tools should be decided at the planning stage of the project, and the testing activity should start from the Requirement Specification stage. Though an early start of testing is recommended for any project, irrespective of tool usage, it is a must for projects that use tools. The reason lies in the preparation of test cases: test cases for System Testing should be prepared once the SRS is baselined, and then modified as the design specification, the code and, of course, the actual software evolve.

This process also requires a well-defined test plan, and it should be prepared right after the project plan. Most of the standard tools work with the ‘capture/playback’ method. The tool has a capture facility: while using the tool, the tester would first run the tool, start its capture facility, and then run the SUT according to a test case. With its capture facility, the tool records all the steps performed by the user, including keystrokes, mouse activity and selected output. The user can then play back the recorded steps, automatically driving the application and validating the results by comparing them to the previously saved baseline.

When the tool records the steps, it automatically generates a piece of code known as a test script. The tester can then modify the code to make the script perform some desired activity, or write his own test scripts from scratch. A number of test cases can be combined into a test suite, and the tester can schedule test suites to run at night or at any off time, unattended. The result log is automatically generated and stored for review. A sketch of how a recorded script can be reshaped into reusable code is given below.
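The fragment below is a minimal, tool-neutral sketch of this workflow. Real capture/playback tools each emit code in their own scripting language; the RECORDED_STEPS list, the driver object and its click/type_text/state_of methods are all hypothetical stand-ins, used only to show how a flat recording can be edited into a reusable, parameterised test script.

```python
# Hypothetical stand-in for the code a capture/playback tool might emit.
# The driver object and its methods are assumptions, not a real tool API.

RECORDED_STEPS = [                       # the raw recording: a flat event list
    ("click",  "menu:File/Open", None),
    ("type",   "field:FileName", "orders_2002.dat"),
    ("click",  "button:OK",      None),
    ("verify", "window:Orders",  "visible"),
]

def play(steps, driver):
    """Replay recorded steps through a (hypothetical) GUI driver."""
    for action, target, value in steps:
        if action == "click":
            driver.click(target)
        elif action == "type":
            driver.type_text(target, value)
        elif action == "verify":
            assert driver.state_of(target) == value, f"{target} != {value}"

def open_file_test(driver, filename):
    """The same recording, edited by the tester into a reusable test."""
    steps = [
        ("click",  "menu:File/Open", None),
        ("type",   "field:FileName", filename),   # parameterised input
        ("click",  "button:OK",      None),
        ("verify", "window:Orders",  "visible"),
    ]
    play(steps, driver)
```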

Here is the basic paradigm for GUI-based automated regression testing:

a. Design a test case, then run it.
b. If the program fails the test, write a bug report. Restart after the bug is fixed.
c. If the program passes the test, automate it. Run the test again (either from a script or with the aid of a capture utility). Capture the screen output at the end of the test. Save the test case and the output.
d. Next time, run the test case and compare its output to the saved output. If the outputs match, the program passes the test. (A minimal sketch of this comparison step follows.)

Benefits of Automated Testing
The use of automated testing can improve all areas of testing, including test procedure development, test execution, and test results analysis. It also supports all test phases, including unit, integration, regression, system, acceptance, performance and stress testing.

Some of the Tools
Various types of test tools are available for use throughout the life-cycle phases, supporting the automated testing process. A sample list of software tools by life-cycle phase is given in Appendix – 1.

Problems with Automated Testing
There are some popular myths:

- People can use these tools to quickly create extensive test suites.
- The tools are easy to use, and maintenance of the test suites is not a problem.
- A manager can save money and time, and can ship software sooner, by using one of these tools to replace human testers.

Actually, there are many pitfalls in automated testing, and proper planning is needed for its implementation.


A few known pitfalls are:

a. Automation is neither cheap nor quick: it usually takes 3 to 10 times as long to create, verify and minimally document [1] an automated test as to run the test manually. Many tests will be worth automating, but for all the tests that are run only once or twice, it is not worthwhile.
b. These tests are not in themselves powerful.
c. In practice, many test groups automate only the easy-to-run tests.
d. The slightest change in the UI can make the scripts invalid.

Suggested strategies for success:

a. Reset management expectations about the timing of benefits from automation.
b. Recognize that test automation development is software development: automation of software testing is just like all the other automation efforts that software developers engage in, except that this time the testers are writing the automation code. Within an application dedicated to testing a program, every test case is a feature, and every aspect of the underlying application is data.
c. Use a data-driven architecture.
d. Use a framework-based architecture.
e. Recognize staffing realities.
f. Consider using other types of automation.

10 Code Coverage

A perfectly effective test suite would find every bug. Since we don’t know how many bugs there are, we can’t measure how closely test suites approach perfection. Consequently, we use an index as an approximate measure of test suite quality: since we can’t measure what we want, we measure something related.

With coverage, we estimate test suite quality by examining how thoroughly the tests exercise the code:

- Is every if statement taken in both the true and false directions?
- Is every case taken? What about the default case?
- Is every while loop executed more than once? Does some test force the while loop to be skipped? Is every loop executed exactly once? Do the tests probe off-by-one errors?

(A toy example probing these questions follows.)
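To make these questions concrete, here is a toy function (not from the guideline) together with tests that drive its while loop zero, one and many times and take its if statement in both directions:

```python
def first_negative_index(values):
    i = 0
    while i < len(values):        # loop skipped / run once / run many times?
        if values[i] < 0:         # if taken in both directions?
            return i
        i += 1
    return -1

assert first_negative_index([]) == -1          # while loop skipped entirely
assert first_negative_index([-5]) == 0         # loop body executed exactly once
assert first_negative_index([1, 2, -3]) == 2   # loop executed many times
assert first_negative_index([1, 2, 3]) == -1   # if never true; probes off-by-one
```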


The main technique for demonstrating that the testing has been thorough is called test coverage analysis. Simply stated, the idea is to create, in some systematic fashion, a large and comprehensive list of tasks and check that each task is covered in the testing phase. Coverage can help in monitoring the quality of testing, assist in creating tests for areas that have not been tested before, and help with forming small yet comprehensive regression suites.

Coverage, in general, can be divided into two types: code-based or functional. Code-based coverage concentrates on measuring syntactic properties of the execution, for example that each statement was executed, or each branch was taken. This makes program-based coverage a generic method which is usually easy to measure, and for which many tools are available; examples include program-based coverage tools for C, C++ and Java. Functional coverage, on the other hand, focuses on the functionality of the program, and is used to check that every aspect of the functionality is tested. Therefore, functional coverage is design and implementation specific, and is more costly to measure.

A few general code coverage categories are:

- Control-flow Coverage
- Block Coverage
- Data-flow Coverage

A few other types and their alternate names are listed in the table below.

Table: Types of verification coverage

  Coverage type         Alternate names
  Statement execution   Line, statement, block, basic block, segment
  Decision              Branch, all edges
  Expression            Condition, condition-decision, all edges, multiple condition
  Path                  Predicate, basis path
  Event                 (None)
  Toggle                (None)
  Variable              (None)
  State machine         State value, state transition, state scoring, variable transition, FSM


Code based coverage, usually just called coverage, is a technique that measures the execution of tests against the source code of the program. For example, one can measure whether all the statements of the program have been executed. The main uses of program based coverage are assessing the quality of the testing, finding missing requirements in the test plan and constructing regression suites.

A number of standards, as well as internal company policies, require the testing program to achieve some level of coverage, under some model. For example, one of the requirements of the ABC standard is 100% statement coverage.

Nowadays, many software coverage tools are available. Almost all coverage tools implement the statement and branch coverage models. Many tools also implement multi-condition coverage, a model that checks that each part of a condition (e.g. A or B and C) has impact (illustrated below). Fewer tools implement the more complex models such as define-use, mutation, and path coverage variants.
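As an illustration of multi-condition coverage (in the modified condition/decision style), the sketch below reads the example condition from the text as A or (B and C) and shows pairs of test cases in which exactly one operand changes and flips the outcome, demonstrating that each operand has impact:

```python
def decision(a, b, c):
    return a or (b and c)   # the condition "A or B and C" from the text

# Each pair differs in exactly one operand and flips the result,
# showing that A, B and C each independently affect the outcome.
assert decision(True,  False, False) != decision(False, False, False)  # A has impact
assert decision(False, True,  True)  != decision(False, False, True)   # B has impact
assert decision(False, True,  True)  != decision(False, True,  False)  # C has impact
```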

The main advantage of code-based coverage tools is their simplicity of use. The tools come ready for the testing environment, no special preparations are needed in the programs, and understanding the feedback from the tool is straightforward. The main disadvantage of code coverage tools is that they do not “understand” the application domain; therefore, it is very hard to tune the tools to areas which the user thinks are of significance. These “defects” can be worked around by the use of simple scripting languages like VB / Perl / Tcl-tk.

10.1 General Guidelines for Usage of Coverage

Coverage should not be used if the resources required for it can be better spent elsewhere. This is the case when the budget is very tight and there is not enough time even to finish the test plan; in such a case, designing new tests is not useful, as not all the old tests will be run. Coverage should be used only if there is a full commitment to make use of the data collected: measuring coverage merely in order to report a coverage percentage is practically worthless. Coverage points out parts of the application that have not been tested and guides test generation to those parts. Moreover, it is very important to try to reach full coverage, or at least to set high coverage goals, since many bugs hide in hard-to-reach places.

Coverage is a very useful criterion for test selection for regression suites. Whenever a small set of tests is needed, the test suite should be selected so that it covers as many requirements or coverage tasks as possible; a greedy sketch of such a selection is given below.
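One common way to do this is a greedy selection: repeatedly pick the test that covers the most still-uncovered tasks. The sketch below assumes a mapping from test names to the coverage tasks they exercise; the data is invented for illustration.

```python
def select_regression_suite(coverage_map):
    """Greedy selection: coverage_map maps test name -> set of coverage tasks."""
    uncovered = set().union(*coverage_map.values())
    suite = []
    while uncovered:
        best = max(coverage_map, key=lambda t: len(coverage_map[t] & uncovered))
        gained = coverage_map[best] & uncovered
        if not gained:                 # remaining tests add nothing new
            break
        suite.append(best)
        uncovered -= gained
    return suite

tests = {
    "login_ok":   {"T1", "T2", "T3"},
    "login_fail": {"T3", "T4"},
    "report_all": {"T4", "T5", "T6", "T7"},
}
# Two tests already cover all seven tasks, so login_fail is not selected.
assert select_regression_suite(tests) == ["report_all", "login_ok"]
```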

When coverage and reviews are used for the same project, reviews can put less emphasis on things that coverage is likely to find. For example, a review for dead code is unnecessary if statement coverage is used, and manually checking that some values of a variable can be attained is not needed if the appropriate functional coverage model is used.

Coverage should not be used to judge if the “desirable” features are implemented.

11 System Testing

A system is the big component. System testing is aimed at revealing bugs that cannot be attributed to a single component as such: bugs arising from inconsistencies between components or from the planned interactions between components. It deals with issues and behaviour that can only be exposed by testing the entire, integrated system, e.g. performance, security, etc.

11.1 Objective


The purpose of system testing is to show that the product is inconsistent with its original objectives. System testing is oriented toward a distinct class of errors and is measured with respect to a distinct type of documentation in the development process. It could very well be partially overlapped in time with other testing processes. Care must be taken that no component or class of error is missed, as this is the last phase of testing.

System Testing of software is divided into four major types:

- Functional System Testing;
- Regression Testing;
- Performance Testing; and
- Sanity Testing.

The system testing will be designed to test each functional group of software modules in a sequence that is expected in production. In each functional testing area, the following will be tested at a minimum:

- Initial inputs;
- Program modifications and functionality (as applicable);
- Table and ledger updates; and
- Error conditions.

System test cases are designed by analyzing the objectives and then formulated by analyzing the user documentation.

Different categories of test cases are given below:

- Facility Testing
- Volume Testing
- Stress Testing
- Usability Testing
- Security Testing
- Performance Testing
- Storage Testing
- Configuration Testing
- Compatibility/Conversion Testing
- Installability Testing
- Reliability Testing
- Recovery Testing
- Serviceability Testing
- Documentation Testing
- Procedure Testing

Time to Plan and Test: Test planning starts with the preparation of the Software Requirement Specification. Testing starts after the completion of unit testing and integration testing.

Responsibilities:

Project Manager / Project Leader: They are responsible for the following activities:

- Preparation of the test plan
- Obtaining existing test cases and checklists
- Getting the test plan and test cases reviewed and approved, by peer review or any other suitable method
- Communication with the client
- Project tracking and reporting


Test Engineers: Their responsibilities include:

- Preparation of test cases
- Actual testing
- Actual test records on the checklist
- Detailed defect reports

11.2 Processes to be followed in the activities

- Develop and review the system test plan and test results;
- Provide training for the system testers;
- Designate a “final authority” to provide written sign-off and approvals of all deliverables in each implementation area. Once the person designated as the final authority approves the deliverables in writing, they will be considered final and the Project Team will proceed with migration to the user acceptance testing environment;
- Execute the system tests, including all cycles and tests identified in the plans;
- Resolve issues that arise during testing using a formal issue resolution process; and
- Document test results.

11.3 System Test Team

The system test team may comprise:

- A few professional system-test experts
- A representative end user or two
- A human-factors engineer
- Key original analysts or designers of the program

Perhaps the most economical way of conducting a system test is to subcontract it to a separate company.

11.4 Hypothetical Estimate of when the errors might be found

                 Coding and logic-design errors    Design errors
  Module Test    65%                               0%
  Function Test  30%                               60%
  System Test    3%                                35%
  Total          98%                               95%

11.5 Input

- Software Requirement Specification
- Design Document
- Project Plan
- Existing test cases / checklists, if any
- User documents
- Unit and Integration test results

11.6 Deliverables:

- System Test Plan;
- Scripting;
- Test Cases;
- System Test Results;
- System Tested Software.


11.7 Various Methods of System Testing

11.7.1 Functional Testing

A functional test exercises a system application with regard to functional requirements with the intent of discovering non-conformance with end-user requirements. This technique is central to most software test programs. Its primary objective is to assess whether the application does what it is supposed to do in accordance with specified requirements.

Test development considerations for functional tests include concentrating on test procedures that execute the functionality of the system based upon the project’s requirements. One significant test development consideration arises when several test engineers will be performing test development and execution simultaneously. When these test engineers are working independently and sharing the same data or database, a method needs to be implemented to ensure that test engineer A does not modify or affect the data being manipulated by test engineer B, potentially invalidating B’s test results. Also, automated test procedures should be organized in such a way that effort is not duplicated. One lightweight isolation convention is sketched below.
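One such convention, sketched below under invented names, is to prefix every record a test run creates with an identifier for the engineer or run, and to assert only against records carrying one's own prefix:

```python
import getpass

# Prefix derived from the engineer's login name, e.g. "TST_ALICE_".
RUN_PREFIX = f"TST_{getpass.getuser().upper()}_"

def make_customer_id(seq: int) -> str:
    """Customer IDs created by this engineer's tests, e.g. TST_ALICE_0001."""
    return f"{RUN_PREFIX}{seq:04d}"

def own_records(rows):
    """Filter a shared table down to the rows this test run created."""
    return [r for r in rows if str(r.get("customer_id", "")).startswith(RUN_PREFIX)]
```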

The steps of functional testing are:

- Decompose and analyze the functional design specification.
- Partition the functionality into logical components, and for each component make a list of the detailed functions.
- For each function, use the analytical black-box methods to determine inputs and outputs.
- Develop functional test cases.
- Develop a function coverage matrix (a small sketch of such a matrix follows the list).
- Execute the test cases and measure logic coverage.
- Develop additional functional tests, as indicated by the combined logic coverage of function and system testing.
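A function coverage matrix can be as simple as a grid of functions against test cases, marked by hand or from tool logs. The functions and test-case names below are invented for illustration:

```python
# Rows: detailed functions from the decomposition. Columns: TC01..TC03.
matrix = {
    "open account":  (True,  False, False),
    "deposit":       (True,  True,  False),
    "withdraw":      (False, True,  True),
    "close account": (False, False, False),   # a gap: no test covers it yet
}

uncovered = [name for name, cells in matrix.items() if not any(cells)]
print("Functions with no covering test:", uncovered)   # ['close account']
```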

11.7.2 Security Testing

Security Testing attempts to verify that protection mechanisms built into a system will, in fact, protect it from improper penetration. Security tests involve checks to verify the proper performance of system access and data access mechanisms. Test procedures are devised that attempt to subvert the program’s security checks. The test engineer uses security tests to validate security levels and access limits, and thereby verify compliance with specified security requirements and any applicable security regulations.

Objective of Security Testing :

a) To check that the system is password protected and that users are granted only the necessary system privileges.

b) To deliberately attempt to break the security mechanism by:
- accessing the files of another user
- breaking into the system authorization files
- accessing a resource when it is locked

(A hedged sketch of such tests follows.)
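In the sketch below, the client and lock_manager objects and their methods are placeholders for whatever access layer the system under test actually exposes; the only substance is the pattern: attempt the forbidden operation, and fail the test if it succeeds.

```python
def test_cannot_read_other_users_files(client_as_user_a):
    """Attempt to access another user's files; expect refusal."""
    try:
        client_as_user_a.read_file(owner="user_b", path="private/salary.txt")
    except PermissionError:
        return                      # expected: access denied
    raise AssertionError("user A read user B's file: security check failed")

def test_locked_resource_is_refused(client, lock_manager):
    """Attempt to access a resource while it is locked; expect refusal."""
    lock_manager.lock("ledger")
    try:
        client.open_resource("ledger")
    except PermissionError:
        return                      # expected: resource locked
    raise AssertionError("locked resource was opened: locking not enforced")
```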

11.7.3 Performance Testing

Performance Testing is designed to test the runtime performance of software within the context of an integrated system. It should be done throughout all steps in the testing process; even at the unit level, the performance of an individual module may be assessed. Performance testing verifies that the system application meets specific performance efficiency objectives. It can measure and report on such data as I/O rates, total number of I/O actions, average database query response time, and CPU utilization rates. The same tools used in stress testing can generally be used in performance testing to allow for automatic checks of performance efficiency.

To conduct performance testing, the following performance objectives need to be defined:


- How many transactions per second need to be processed?
- How is a transaction defined?
- How many concurrent and total users are possible?
- Which protocols are supported?
- With which external data sources or systems does the application interact?

Many automated performance test tools permit virtual user testing, in which the test engineer can simulate tens, hundreds or even thousands of users executing various test scripts. A minimal multi-threaded sketch of the idea is given below.
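The sketch shows only the bare mechanics, with a sleep standing in for the real scripted transaction: one thread per virtual user, with response times collected for averaging.

```python
import threading
import time

def run_test_script(user_id: int) -> None:
    time.sleep(0.01)                      # stand-in for one scripted transaction

def simulate_users(n_users: int) -> float:
    """Run n_users 'virtual users' concurrently; return average response time."""
    timings = []
    def one_user(uid):
        start = time.perf_counter()
        run_test_script(uid)
        timings.append(time.perf_counter() - start)   # list.append is thread-safe
    threads = [threading.Thread(target=one_user, args=(u,)) for u in range(n_users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(timings) / len(timings)

print(f"average response time: {simulate_users(50):.4f}s")
```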

11.7.4 Stress Testing

In stress testing, the system is subjected to extreme and maximum loads to find out whether and where the system breaks and to identify what breaks first. The system is asked to process a huge amount of data or perform many function calls within a short period of time. It is important to identify the weak points of the system. System requirements should define these thresholds and describe the system’s response to an overload. Stress testing should then verify that it works properly when subjected to an overload.

Examples of stress testing include running a client application continuously for many hours, or simulating a multi-user environment. Typical types of errors uncovered include memory leakage, performance problems, locking problems, concurrency problems, excess consumption of system resources, and exhaustion of disk space. Stress tools typically monitor resource usage, including usage of global memory, DOS memory, free file handles, and disk space, and can identify trends in resource usage so as to detect problem areas, such as memory leaks and excess consumption of system resources and disk space.

11.7.5 Reliability Testing

The goal of all types of testing is the improvement of the eventual reliability of the program, but if the program’s objectives contain specific statements about reliability, specific reliability tests might be devised. For the objective of building highly reliable systems, the test effort should be initiated during the development cycle’s requirements definition phase, when requirements are developed & refined.

11.7.6 Usability Testing

Usability Testing involves having the users work with the product and observing their responses to it. It should be done as early as possible in the development life cycle, with the real customer involved as early as possible. The existence of the functional design specification is a prerequisite for starting. Usability testing is the process of attempting to identify discrepancies between the user interface of a product and the human engineering requirements of its potential users. It collects information on specific issues from the intended users, and is often an evaluation of a product’s presentation rather than its functionality. Usability characteristics which can be tested include: accessibility, responsiveness, efficiency, and comprehensibility.

11.7.7 Environment Testing

The testing activities here basically involve testing the environment set-up activities, as well as the calibration of the test tools to match the specific environment. When checking the set-up activities, one needs to test the set-up script (if any) and the integration and validation of resources: hardware, software, network resources, and databases. The objective should be to ensure the complete functionality of the production application and to enable performance analysis. It is also necessary to check for stress testing requirements, where the use of multiple workstations is required to run multiple test procedures simultaneously.


11.7.8 Storage Testing

Products do have some storage specifications, for instance the amounts of main and secondary storage used by the program and the sizes of required temporary or spill files. Checks should be made to monitor memory and backing storage occupancy and to take the necessary measurements.

11.7.9 Installation Testing

Installation testing involves the testing of the installation procedures. Its purpose is not to find software errors but to find installation errors, i.e. to locate any errors made during the installation process. Installation tests should be developed by the organization that produced the system, delivered as part of the system, and run after the system is installed. Among other things, the test cases might check that a compatible set of options has been created with the necessary contents, and that the hardware configuration is appropriate.

11.7.10 Recovery Testing

The system must have recovery objectives stating how it is to recover from hardware failures and data errors. Such failures can be injected into the system to analyze the system’s reaction. A system must be fault tolerant; system failures must be corrected within a specified period of time.

11.7.11 Volume Testing

This involves subjecting the program to heavy volumes of data. For instance, a compiler would be fed an absurdly large source program to compile. A linkage editor might be fed a program containing thousands of modules. If a program is supposed to handle files spanning multiple volumes, enough data are created to cause the program to switch from one volume to another. Thus, the purpose of volume testing is to show that the program cannot handle the volume of data specified in its objectives.

11.7.12 Error Guessing

Error Guessing is an ad hoc approach, based on intuition and experience, to identify tests likely to expose errors. The basic idea is to make a list of possible errors or error-prone situations and then develop tests based on the list. For instance, the presence of the value 0 in a program’s input or output is an error-prone situation; therefore, one might write test cases for which particular input values have a 0 value and for which particular output values are forced to 0. Also, where a variable number of inputs or outputs can be present, the cases of “none” and “one” are error-prone situations. Another idea is to identify test cases associated with assumptions that the programmer might have made when reading the specification, i.e. things that were omitted from the specification.

Thus, some items to try are:

- Empty or null lists/strings
- Zero instances/occurrences
- Blank or null characters in strings
- Negative numbers

(These suspect inputs are captured in the sketch below.)
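The list translates directly into a reusable set of suspect inputs. In the sketch below, validate_name is a stand-in for whatever routine is under test:

```python
SUSPECT_INPUTS = [
    "",       # empty string
    None,     # null
    " ",      # blank character
    "\0",     # null character in a string
    [],       # empty list / zero occurrences
    0,        # zero
    -1,       # negative number
]

def probe(validate_name):
    """Apply each suspect value; record values the routine crashes on
    (a clean, deliberate rejection should not raise an unhandled error)."""
    failures = []
    for value in SUSPECT_INPUTS:
        try:
            validate_name(value)
        except Exception as exc:
            failures.append((value, exc))
    return failures
```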

11.7.13 Data Compatibility Testing


Many programs developed are often replacements for some deficient system, either a data processing or manual system. Programs often have specific objectives concerning their compatibility with, and conversion procedures from, the existing system. Thus the objective of Compatibility testing is to determine whether the compatibility objectives of the program have been met & whether conversion procedures work.

11.7.14 User Interface testing

The user interface is checked against the design or requirement specification, and is tested as per the user manual, on-line help and SRS. The test cases should be built for interface style, help facilities and the error handling protocol. Also, issues like the number of actions required per task and whether they are easy to remember and invoke, how self-explanatory and clear the icons are, and how easy it is to learn the basic system operations, need to be evaluated while conducting User Interface Testing.

11.7.15 Acceptance Testing

The acceptance test phase includes testing performed for or by end users of the software product. Its purpose is to ensure that end users are satisfied with the functionality and performance of the software system. The acceptance test phase begins only after the successful conclusion of system testing. Commercial software products do not generally undergo customer acceptance testing, but often allow a large number of users to receive an early copy of the software so that they can provide feedback as part of a beta test.

11.7.16 Limit testing

Limit Testing implies testing with values beyond the specified limits, e.g. for memory, number of users, number of open files, etc. The test cases should focus on testing with out-of-range values, i.e. values that exceed the limits laid down in the specifications as those the system can handle. Such cases should be included within every stage of the testing life cycle.

11.7.17 Error Exit Testing

This testing considers whether the software, in case of a system error, displays appropriate system error messages and thereafter provides a clear exit. All possibilities of a system error cropping up should be tested, except those causing abnormal termination.

11.7.18 Consistency testing

The system developed should have consistency throughout with respect to both data and modules. Interrelated modules should use the same set of data and retrieve/write the data to the same common place, thus reflecting uniformity over the entire system. Test cases should therefore be developed involving sample data and methods that provide insight into whether the system ensures consistency or not.

11.7.19 Help Information Testing

Help information should be adequate and provide useful information. The contents should cover all significant areas of the system on which the users might require help. The flow of the help information should be sequential, and the links embedded in the document must be relevant and must even be tested as to whether they actually provide the link or not. The contents should also be correct, and tested for clarity and completeness.


11.7.20 Manual procedure testing

In this method of testing, the system is tested for manual device requirements and handling; e.g. this could relate to tape loading or the manual switching of a device. The system should recognise all such devices, successfully perform loading/unloading, and work smoothly with them. In addition, any prescribed human procedures, such as procedures to be performed by the system operator, database administrator, or terminal user, should be tested during the system test.

11.7.21 User information Testing

User information testing is concerned with the adequacy & correctness of the user documentation. It should be determined whether the user manual gives a proper representation of the system. The documentation should also be tested for clarity & for whether it is easy to locate any information related to the system.


12 Testing GUI Applications

12.1 Introduction

The most obvious characteristic of GUI applications is the fact that the GUI allows multiple windows to be displayed at the same time. Displayed windows are 'owned' by applications and, of course, there may be more than one application active at the same time. Access to features of the system is provided through mechanisms such as menu bars, buttons and keyboard shortcuts. GUIs free the user to access system functionality in their preferred way. Users have permanent access to all features and may use the mouse, the keyboard or a combination of both to have a more natural dialogue with the system.

12.1.1 GUIs as universal client

GUIs have become the established alternative to traditional forms-based user interfaces. GUIs are the assumed user interface for virtually all systems development using modern technologies.

12.2 GUI Test Strategy

12.2.1 Test Principles Applied to GUIs

The approach concentrates on GUI errors and uses the GUI to exercise tests, so it is very much oriented toward black-box testing.

Focus on errors to reduce the scope of tests
We intend to categorise errors into types and design tests to detect each type of error in turn. In this way, we can focus the testing and eliminate duplication.

Separation of concerns (divide and conquer)
By focusing on particular types of error and designing test cases to detect those errors, we can break up the complex problem into a number of simpler ones.

Test design techniques where appropriate
Traditional black box test techniques that we would use to test forms-based applications are still appropriate.

Layered and staged tests
Organise the test types into a series of test stages. We implement integration tests of components and test the integrated application last. In this way, we can build the testing up in trusted layers.

Test automation... wherever possible
Automation most often fails because of over-ambition. By splitting the test process into stages, we can seek out opportunities to make use of automation where appropriate, rather than trying to use automation everywhere.

12.3 Types of GUI errors

We can list some of the multifarious errors that can occur in a client/server-based application that we might reasonably expect to be able to test for using the GUI. Many of these errors relate to the GUI, others relate to the underlying functionality or interfaces between the GUI application and other client/server components.

- Data validation
- Incorrect field defaults
- Mishandling of server process failures
- Mandatory fields not mandatory
- Wrong fields retrieved by queries
- Incorrect search criteria
- Field order
- Multiple database rows returned where a single row was expected
- Currency of data on screens
- Window object/DB field correspondence
- Correct window modality?


- Window system commands not available/don't work
- Control state alignment with state of data in window?
- Focus on objects needing it?
- Menu options align with state of data or application mode?
- Action of menu commands aligns with state of data in window
- Synchronisation of window object content
- State of controls aligns with state of data in window?

By targeting different categories of errors in this list, we can derive a set of test types, each focusing on a single category of errors, that together provide coverage across all error types.

12.4 Four Stages of GUI Testing

The four stages are summarised in Table 2 below. We can map the four test stages to traditional test stages as follows:

- Low level – maps to a unit test stage.
- Application – maps to either a unit test or functional system test stage.
- Integration – maps to a functional system test stage.
- Non-functional – maps to a non-functional system test stage.

The mappings described above are approximate. Clearly there are occasions when some ‘GUI integration testing’ can be performed as part of a unit test. The test types in ‘GUI application testing’ are equally suitable in unit or system testing. In applying the proposed GUI test types, the objective of each test stage, the capabilities of developers and testers, the availability of test environment and tools all need to be taken into consideration before deciding whether and where each GUI test type is implemented in your test process.

The GUI test types alone do not constitute a complete set of tests to be applied to a system. We have not included any code-based or structural testing, nor have we considered the need to conduct other integration tests or non-functional tests of performance, reliability and so on. Your test strategy should address all these issues.

Table 2 – GUI test stages and test types:

- Low Level: Checklist testing, Navigation
- Application: Equivalence Partitioning, Boundary Values, Decision Tables, State Transition Testing
- Integration: Desktop Integration, C/S Communications, Synchronization
- Non-Functional: Soak testing, Compatibility testing, Platform/environment

12.5 Types of GUI Test

12.5.1 Checklist Testing

Checklists are a straightforward way of documenting simple re-usable tests. The types of checks that are best documented in this way are:

- Programming/GUI standards covering standard features such as: window size, positioning, type (modal/non-modal), and standard system commands/buttons (close, minimise, maximise etc.).
- Application standards or conventions such as: standard OK, Cancel and Continue buttons; appearance, colour, size and location; consistent use of buttons or controls; object/field labelling using standard/consistent text.


12.5.2 Navigation Testing

In the context of a GUI, we can view navigation tests as a form of integration testing. To conduct meaningful navigation tests the following are required to be in place:

- An application backbone with at least the required menu options and call mechanisms to call the window under test.
- Windows that can invoke the window under test.
- Windows that are called by the window under test.

Obviously, if any of the above components are not available, stubs and/or drivers will be necessary to implement navigation tests. If we assume all required components are available, what tests should we implement? We can split the task into steps:

- For every window, identify all the legitimate calls to the window that the application should allow, and create test cases for each call.
- Identify all the legitimate calls from the window to other features that the application should allow, and create test cases for each call.
- Identify reversible calls, i.e. where closing a called window should return to the 'calling' window, and create a test case for each.
- Identify irreversible calls, i.e. where the calling window closes before the called window appears.

There may be multiple ways of executing a call to another window i.e. menus, buttons, keyboard commands. In this circumstance, consider creating one test case for each valid path by each available means of navigation. Note that navigation tests reflect only a part of the full integration testing that should be undertaken. These tests constitute the ‘visible’ integration testing of the GUI components that a ‘black box’ tester should undertake.
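A possible way to enumerate these cases mechanically is sketched below. The window call map is hypothetical and would in practice be built from the design specification.

```python
# Hypothetical call map: which windows each window may legitimately open.
CALLS = {
    "MainMenu": ["CustomerSearch", "OrderEntry"],
    "CustomerSearch": ["CustomerDetail"],
    "CustomerDetail": [],
    "OrderEntry": ["OrderConfirm"],
    "OrderConfirm": [],
}

def navigation_cases(calls):
    # One test case per legitimate call, plus one for the reversible
    # return path (closing the called window restores the caller).
    for caller, callees in calls.items():
        for callee in callees:
            yield f"open {callee} from {caller}"
            yield f"close {callee} and verify focus returns to {caller}"

for case in navigation_cases(CALLS):
    print(case)
```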

12.5.3 Application Testing

Application testing is the testing that would normally be undertaken on a forms-based application. This testing focuses very much on the behaviour of the objects within windows. Some guidelines for their use with GUI windows are presented in the table below:

- Equivalence Partitions and Boundary Value Analysis: input validation; simple rule-based processing.
- Decision Tables: complex logic or rule-based processing.
- State-transition testing: applications with modes or states where processing behaviour is affected; windows where there are dependencies between objects in the window.
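As an illustration of boundary value analysis applied to input validation, the sketch below assumes a hypothetical field specified to accept values from 1 to 100.

```python
import pytest

def accept_age(age):
    # Hypothetical validation routine under test: valid range is 1..100.
    return 1 <= age <= 100

# Boundary values and their immediate neighbours on both sides.
@pytest.mark.parametrize("value,expected", [
    (0, False), (1, True), (2, True),        # lower boundary
    (99, True), (100, True), (101, False),   # upper boundary
])
def test_age_boundaries(value, expected):
    assert accept_age(value) == expected
```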

12.5.4 Desktop Integration Testing

Client/server systems assume a 'component based' architecture, so they often treat other products on the desktop – such as a word processor, spreadsheet, electronic mail or Internet-based applications – as components, and make use of features of these products by calling them directly or through specialist middleware. We define desktop integration as the integration and testing of a client application with these other components. The tester needs to know what interfaces exist, what mechanisms are used by these interfaces and how the interfaces can be exercised by using the application user interface.


To derive a list of test cases the tester needs to ask a series of questions for each known interface:

- Is there a dialogue between the application and the interfacing product (i.e. a sequence of stages with different message types to test individually), or is it a direct call made once only?
- Is information passed in both directions across the interface?
- Is the call to the interfacing product context sensitive?
- Are there different message types? If so, how can these be varied?

12.5.5 Synchronisation Testing

There may be circumstances in the application under test where there are dependencies between different features. Examples of synchronisation are when:

- The application has different modes – if a particular window is open, then certain menu options become available (or unavailable).
- The data in the database changes, and these changes are notified to the application by an unsolicited event that updates displayed windows.
- Data on a visible window is changed, making data on another displayed window inconsistent.

In some circumstances, there may be reciprocity between windows. For example, changes on window A trigger changes in window B and the reverse effect also applies (changes in window B trigger changes on window A).

In the case of displayed data, there may be other windows that display the same or similar data which either cannot be displayed simultaneously, or should not change for some reason. These situations should be considered also. To derive synchronisation test cases:

- Prepare one test case for every window object affected by a change or unsolicited event, and one test case for reciprocal situations.
- Prepare one test case for every window object that must not be affected – but might be.
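The sketch below models this derivation rule with a toy observer pattern; the Window class is purely illustrative. A change in one 'window' must reach every dependent window, and a window that must not be affected is checked as well.

```python
class Window:
    # Illustrative stand-in for a GUI window displaying shared data.
    def __init__(self, name):
        self.name, self.value, self.listeners = name, None, []

    def update(self, value):
        self.value = value
        for w in self.listeners:      # propagate to dependent windows
            w.value = value

a, b, c = Window("A"), Window("B"), Window("C")
a.listeners.append(b)                 # B displays the same data as A

a.update("new total")
assert b.value == "new total"         # affected window was synchronised
assert c.value is None                # unrelated window must not change
```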

12.6 Non-functional Testing of GUI

The tests described in the previous sections are functional tests. These tests are adequate for demonstrating that the software meets its requirements and does not fail. However, GUI applications have non-functional modes of failure as well. We propose three additional GUI test types (which are likely to be automated).

12.6.1 Soak Testing

Soak tests exercise system transactions continuously for an extended period in order to flush out memory-leak problems. These tests are normally conducted using an automated tool. Selected transactions are repeatedly executed and machine resources on the client (or the server) are monitored to identify resources that are being allocated but not returned by the application code.
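A minimal soak-test sketch follows. It assumes the third-party psutil package for resource monitoring, and run_transaction is a hypothetical placeholder for the transaction driver.

```python
import os
import psutil  # assumed third-party dependency: pip install psutil

def run_transaction():
    # Placeholder: drive one transaction here, e.g. via a GUI test tool.
    pass

def soak(iterations=10_000, allowed_growth=50 * 1024 * 1024):
    proc = psutil.Process(os.getpid())
    baseline = proc.memory_info().rss          # resident memory at start
    for i in range(1, iterations + 1):
        run_transaction()
        if i % 1000 == 0:
            growth = proc.memory_info().rss - baseline
            print(f"iteration {i}: +{growth // 1024} KiB")
            assert growth < allowed_growth, "possible memory leak"

if __name__ == "__main__":
    soak()
```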

12.6.2 Compatibility Testing

Compatibility tests are (usually) automated tests that aim to demonstrate that resources shared with other desktop products are not locked unnecessarily, causing the system under test or the other products to fail. These tests normally execute a selected set of transactions in the system under test and then switch to exercising other desktop products in turn, repeatedly, over an extended period.

12.6.3 Platform/Environment Testing


In some environments, the platform upon which the developed GUI application is deployed may not be under the control of the developers. PC end-users may have a variety of hardware types such as 486 and Pentium machines, various video drivers, and Microsoft Windows 3.1, 95 or NT. Applications may be designed to operate on a variety of platforms, so you may have to execute tests on these various configurations to ensure that, when the software is implemented, it continues to function as designed. In this circumstance, the testing requirement is for a repeatable regression test to be executed on a variety of platforms and configurations. Again, the requirement for automated support is clear, so we would normally use a tool to execute these tests on each of the platforms and configurations as required.

12.7 Automating GUI Tests

12.7.1 Justifying Automation

Automating test execution is normally justified based on the need to conduct functional regression tests. In organisations currently performing regression test manually, this case is easy to make - the tool will save testers time. However, most organisations do not conduct formal regression tests, and often compensate for this ‘sub-consciously’ by starting to test late in the project or by executing tests in which there is a large amount of duplication.

In this situation, buying a tool to perform regression tests will not save time, because no time is being spent on regression testing in the first place. In organisations where development follows a RAD approach or where development is chaotic, regression testing is difficult to implement at all - software products may never be stable enough for a regression test to mature and be of value. Usually, the cost of developing and maintaining automated tests exceeds the value of finding regression errors.

We propose that by adopting a systematic approach to testing GUIs and using tools selectively for specific types of tests, tools can be used to find errors during the early test stages. That is, we can use tools to find errors pro-actively rather than repeating tests that didn’t find bugs first time round to search for regression errors late in a project.

12.7.2 Automating GUI Tests

Throughout the discussion of the various test types in the previous chapter, we have assumed that by designing tests with specific goals in mind, we will be in a better position to make successful choices on whether we automate tests or continue to execute them manually. Based on our experience of preparing automated tests and helping client organisations to implement GUI test running tools we offer some general recommendations concerning GUI test automation below.

Pareto law
We expect 80% of the benefit to derive from the automation of 20% of the tests. Don't waste time scripting low-volume complex scripts at the expense of high-volume simple ones.

Hybrid approach
Consider using the tools to perform navigation and data entry prior to manual test execution. Consider using the tool for test running, but perform comparisons manually or 'off-line'.

Coded scripts
These work best for navigation and checklist-type scripts. Use them where loops and case statements in code leverage simple scripts. They are relatively easy to maintain as regression tests.

Recorded Scripts


These need to be customised to make them repeatable, and are sensitive to changes in the user interface.

Test integration
Automated scripts need to be integrated into some form of test harness. Proprietary test harnesses are usually crude, so custom-built harnesses are required.

Migrating manual test scripts
Manual scripts document automated scripts. Delay migration of manual scripts until the software is stable, and then reuse them for regression tests.

Non-functional tests
Any script can be reused for soak tests, but it must exercise the functionality of concern. Tests of interfaces to desktop products and server processes are high on the list of tests to automate. Instrument these scripts to take response-time measurements and re-use them for performance testing.

The list below sets out a test automation regime that fits the GUI test process; the manual-versus-automated split is a rough guideline and provides a broad indication of which tests to select for automation.

- Checklist testing: manual execution of tests of application conventions; automated execution of tests of object states, menus and standard features.
- Navigation: automated execution.
- Equivalence Partitioning, Boundary Values, Decision Tables, State Transition Testing: automated execution of large numbers of simple tests of the same functionality or process, e.g. the 256 combinations indicated by a decision table; manual execution of low-volume or complex tests.
- Desktop Integration, C/S Communications: automated execution of repeated tests of simple transactions; manual tests of complex interactions.
- Synchronisation: manual execution.
- Soak testing, Compatibility testing, Platform/environment: automated execution.


12.7.3 Criteria for the Selection of a GUI Tool

- Cross-platform availability
- Support for the underlying test methodology, e.g. bitmap comparison, record/playback etc.
- Functionality
- Ease of use
- Support for distributed testing
- Style and power of the scripting language
- Option to test
- Script development environment
- Non-standard window handling capability
- Availability of technical support
- Low price

12.7.4 Points to be considered while designing a GUI test suite:

- Structure the test suite, as far as possible, so that no test suite depends on the success of a previous test suite.
- Build the capability to recover from errors into verification events.
- Start each test case from a known baseline (data state and window state).
- Avoid low-level GUI testing methodologies such as bitmap comparison, as these tests may cause false test failures.

12.8 Examples of GUI Tests:

- Test each toolbar and menu item for navigation using the mouse and keyboard.
- Test window navigation using the mouse and keyboard.
- Test to make sure that proper format masks are used. For example, all drop-down boxes should be properly sorted, and date entries should be properly formatted.
- Test to make sure that the colors, fonts, and font widths are to standard for the field prompts and displayed text.
- Test that the colour of the field prompts and field background is to standard in read-only mode.
- Test to make sure that vertical or horizontal scroll bars do not appear unless required.
- Test to make sure that the various controls on the window are aligned correctly.
- Test to make sure that the window is resizable.
- Check the spellings of all the text displayed in the window, such as the window caption, status bar options, field prompts, pop-up text, and error messages.
- Test to make sure that all character or alphanumeric fields are left justified and that numeric fields are right justified.
- Check for the display of defaults, if there are any.
- In the case of multiple windows, check to make sure that they all have the same look and feel.
- Check to make sure that all shortcut keys are defined and work correctly.
- Check the tab order. It should be from top left to bottom right. Also, read-only/disabled fields should be excluded from the TAB sequence.
- Check to make sure that the cursor is positioned on the first input field when the window is opened.
- Make sure that if any default button is specified, it works properly.
- Check for proper functioning of ALT+TAB.
- Assure that each menu command has an alternative hot-key sequence and that it works correctly.
- Check that there are no duplicate hot keys defined on the window.
- Validate the behaviour of each control, such as push button, radio button, list box, and so on.
- Test to make sure that the window is modal. This will prevent the user from accessing other functions when this window is active.
- Test to make sure that multiple windows can be opened at the same time.
- Make sure that there is a Help menu.
- Check to make sure that the command buttons are greyed out when not in use.

13 Client / Server Testing

In general, testing of client/server software occurs at three different levels:


i. Individual client applications are tested in a 'disconnected' mode; the operation of the server and the underlying network is not considered.

ii. The client software and associated server applications are tested in concert, but network operations are not explicitly exercised.

iii. The complete c/s architecture is tested.

A few common testing approaches are:

Application function tests: the application is tested in stand-alone fashion in an attempt to uncover errors in its operation.

Server tests: the co-ordination and data management functions of the server are tested. Server performance (overall response time and data throughput) is also considered.

Database tests: the accuracy and integrity of data stored by the server is tested. Transactions posted by client applications are examined to ensure that data are properly stored, updated and retrieved. Archiving is also tested.

Transaction tests: a series of tests are created to ensure that each class of transactions is processed according to requirements. Tests focus on the correctness of processing and also on performance issues.

Network communications tests: these verify that communication among the nodes of the network occurs correctly and that message passing, transactions and related network traffic occur without error. Network security tests may also be conducted as part of these tests.

13.1 Testing Issues

The distributed nature of client/server systems poses a set of unique problems for software testers, with the following areas in focus:

- Client GUI considerations
- Target environment and platform diversity considerations
- Distributed database considerations
- Distributed processing considerations
- Non-robust target environment
- Non-linear performance relationships

The strategy and tactics associated with c/s testing must be designed in a manner that allows each of these issues to be addressed.

13.2 C/S Testing Tactics

Object-oriented testing techniques can be used even if the system is not implemented with object-oriented technology. The replicated data and processes can be organised into classes of objects that share the same set of properties. Once test cases have been derived for a class of objects, those test cases should be broadly applicable for all instances of the class. The OO point of view is particularly valuable when the GUI of the c/s system is under test, since the GUI is inherently object oriented.

The performance of c/s systems must also be tested, due to the following issues:

- Large volumes of network traffic caused by 'intelligent clients'
- Increased numbers of architectural layers
- Delays between distributed processes communicating across networks
- The increased number of suppliers of architectural components who must be dealt with


The execution of a performance test must be automated. The main tools for the test process are:

- Test database creation/maintenance – creates the large volume of data on the database.
- Load generation – tools can be of two types: either a test running tool which drives the client application, or a test driver which simulates client workstations.
- Application running tool – a test running tool which drives the application under test and records response-time measurements.
- Resource monitoring – utilities which can monitor and log client and server system resources, network traffic and database activity.

14 Web Testing

While many of the traditional concepts of software testing still hold true, Web and e-Business applications have a different risk profile to other, more mature environments. Gone are the days of measuring release cycles in months or years; instead, Web applications now have release cycles often measured in days or even hours! A typical Web tester now has to deal with shorter release cycles, constantly changing technology, fewer mature testing tools, and an anticipated user base that may run into millions on the first day of a site’s launch.

The most crucial aspect of Web site testing is the test environment. Web site testing is challenging; breaking up the testing tasks based on each of the tiers of the Windows DNA architecture helps to reduce the complexity of the testing task.

14.1 Standards of WEB Testing

This is the first and a very important phase of Web testing, and it should be clearly mentioned in your Test Plan. Whenever you test a Web site, make sure that the site follows accepted standards. The following problems should be avoided and should not be present in a standard Web site:

14.1.1 Frames

Splitting a page into frames is very confusing for users since frames break the fundamental user model of the web page. All of a sudden, you cannot bookmark the current page and return to it, URLs stop working, and printouts become difficult. Even worse, the predictability of user actions goes out the door: who knows what information will appear where when you click on a link?

14.1.2 Gratuitous Use of Bleeding-Edge Technology

Don't try to attract users to your site by bragging about your use of the latest web technology. The site may attract a few nerds, but mainstream users will care more about useful content and the site's ability to offer good customer service. Using the latest and greatest before it is even out of beta is a sure way to discourage users: if their system crashes while visiting your site, you can bet that many of them will not be back. Unless you are in the business of selling Internet products or services, it is better to wait until some experience has been gained with the appropriate ways of using new techniques.


14.1.3 Scrolling Text, Marquees & Constantly Running Animations

Never include page elements that move incessantly. Moving images have an overpowering effect on the human peripheral vision. Give your user some peace and quiet to actually read the text!

14.1.4 Long Scrolling Pages

Only 10% of users scroll beyond the information that is visible on the screen when a page comes up. Test that all critical content and navigation options appear in the top part of the page.

14.1.5 Complex URLs

It is always found that users actually try to decode the URLs of pages to infer the structure of web sites. Users do this because of the horrifying lack of support for navigation and sense of location in current web browsers. Thus, a URL should contain human-readable directory and file names that reflect the nature of the information space.

Also, users sometimes need to type in a URL, so try to minimize the risk of typing errors by using short names with all lower-case characters and no special characters (many people don't know how to type a ~).
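A small sketch of an automated check for this guideline follows; the rule it enforces (lower-case letters, digits, dots, slashes and hyphens only) is one plausible reading of the advice above, and the URLs are placeholders.

```python
import re

def url_is_typable(url):
    # Accept only lower-case letters, digits, dots, slashes and hyphens
    # in the part after the scheme.
    path = url.split("://", 1)[-1]
    return re.fullmatch(r"[a-z0-9./-]+", path) is not None

for url in ["http://example.com/products/books",
            "http://example.com/Cgi-Bin/page?id=42&~x"]:
    verdict = "OK" if url_is_typable(url) else "hard to read or type"
    print(f"{url}: {verdict}")
```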

14.1.6 Orphan Pages

Make sure that all pages include a clear indication of what web site they belong to, since users may access pages directly without coming in through your home page. For the same reason, every page should have a link up to your home page as well as some indication of where it fits within the structure of your information space.

14.1.7 Non-standard Link Colors

By standard, links to pages that have not been seen by the user are blue; links to previously seen pages are purple or red. Don't mess with these colors, since the ability to see which links have been followed is one of the few navigational aids that is standard in most web browsers. Consistency is key to teaching users what the link colors mean.

14.1.8 Outdated Information

Many old pages keep their relevance and should be linked into the new pages. Of course, some pages are better off being removed completely from the server after their expiration date.

14.1.9 Lack of Navigation Support

Don't assume that users know as much about your site as you do. They always have difficulty finding information, so they need support in the form of a strong sense of structure and place. The Web site must have a complete site map so that users know where they are and can easily reach a different page with its help. The site must also contain a good search feature, since even the best navigation support will never be enough.

14.1.10 Overly Long Download Times

If the Web site contains a link for download, make sure that the download time does not exceed 10 seconds. Traditional human factors guidelines indicate 10 seconds as the maximum response time before users lose interest. On the web, users have been trained to endure so much suffering that it may be acceptable to increase this limit to 15 seconds for a few pages.
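The following sketch times page downloads against the 10-second guideline using only the standard library; the URLs are placeholders for your own pages.

```python
import time
import urllib.request

LIMIT_SECONDS = 10

def timed_fetch(url):
    # Download the full response body and return the elapsed time.
    start = time.monotonic()
    with urllib.request.urlopen(url) as response:
        response.read()
    return time.monotonic() - start

for url in ["http://example.com/", "http://example.com/download"]:
    elapsed = timed_fetch(url)
    status = "OK" if elapsed <= LIMIT_SECONDS else "TOO SLOW"
    print(f"{url}: {elapsed:.1f}s {status}")
```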


14.2 Testing for User-Friendliness

This is the second phase of testing. As the name suggests, this is testing for user-friendliness: how friendly is the Web site to the end user? Not long ago, "user-friendly" software was any application that had a menu or allowed a user to correct an input error. Today, usability engineering is a distinct professional discipline in its own right, where researchers and practitioners strive to develop and implement techniques for making software systems more user-friendly.

In the meantime, the sustained growth of the World Wide Web has resulted in the creation of literally millions of Web sites -- only a small percentage of which are user-friendly. Fortunately, many of the principles from usability engineering can be easily applied (or adapted) to Web development.

The site should be very user-friendly, so that any end user can easily work with it, and there should be a proper guide to using the site. Identify which types of users are going to use the site, and with which type of connection (modem, leased line etc.), and test accordingly.

Building a user-friendly Web site is a worthwhile endeavor in its own right. After all, satisfied users are the key to a truly successful Web site. But there are also certain fringe benefits that go along with genuine user-friendliness. User-friendly Web sites should also be:

- Browser-friendly
- Bandwidth-friendly
- Server-friendly

Apart from this, the loading of the site should be tested properly, because speed is the single biggest determinant of user satisfaction. Small, fast-loading pages make for more, and happier, visitors at your Web site. Large, slow-loading pages simply invite your visitors to browse elsewhere.

User-friendly Web sites share some common stylistic characteristics:

14.2.1 Use familiar, natural language

A user-friendly Web site understands who its intended users are, and it targets them directly, so the objective of Web testing should be the user's perspective. Using your users' language, without apology or pretense, helps them feel "at home" at your Web site.

Always remember that you are your users' host on their virtual tour of your Web space. Be conversational. Be polite. Use complete sentences. And don't nag them about their choice of Web browser.

14.2.2 Checklist of User-friendliness:

Prepare the following checklist to test for a user-friendly Web site:

A. Clarity of Communication

- Does the site convey a clear sense of its intended audience?
- Does it use language in a way that is familiar to and comfortable for its readers?
- Is it conversational in its tone?

B. Accessibility

- Is load time appropriate to content, even on a slow dial-in connection?
- Is it accessible to readers with physical impairments?
- Is there an easily discoverable means of communicating with the author or administrator?

C. Consistency


- Does the site have a consistent, clearly recognizable "look-&-feel"?
- Does it make effective use of repeating visual themes to unify the site?
- Is it visually consistent even without graphics?

D. Navigation

- Does the site use (approximately) standard link colors?
- Are the links obvious in their intent and destination?
- Is there a convenient, obvious way to maneuver among related pages and between different sections?

E. Design & maintenance

- Does the site make effective use of hyperlinks to tie related items together?
- Are there dead links?
- Is page length appropriate to site content?

F. Visual Presentation

- Is the site moderate in its use of color?
- Does it avoid juxtaposing text and animations?
- Does it provide feedback whenever possible?

14.3 Testing of User Interface

The site should look good and give a good feeling to the user, and every screen should appear properly. The site should have:

14.3.1 Visual Appeal

The visual appearance of a Web site is important in maintaining repeat visits. Although the home page of a Web site is the "breadwinner," catalog pages cannot be ignored. Regardless of the developer's choice of color, font, or graphics, the tester needs to test the appearance of the site thoroughly and try to bring out problem areas.

Tests required to check the visual appeal of a site are described below.

14.3.1.1 Browser compatibility for font style

There are a number of different fonts available in HTML editors these days. However, many of these fonts may not display on all browsers, especially on older versions, or they may display as unreadable characters. Therefore, it is important to test font display across browsers and versions for compatibility.

14.3.1.2 Consistency of font size

Test for consistency of font size throughout the Web site. The standard font size for the Web is 18 to 24 for headers and 10 to 14 for body text.

14.3.1.3 Colors

Color testing is again important in Web testing. Test the combinations of foreground and background colors on all pages of the Web site.

14.3.1.4 Graphics


Fewer graphics on a Web page make for faster downloads. As far as possible, thumbnails should replace photographs. The tester must test the download time of graphics-intensive pages, and of any download links.

14.3.2 Grammatical and Spelling Errors in the Content

The home page requires special attention because it is the first page that the site visitor sees.

Use the spelling checker to check the spelling throughout the site. Sometimes there are errors that may not be checked by the spelling checker, such as "there" and "their."

Finally, make sure to proofread the entire site to check the grammar.

14.3.2.1 Checklist of User-Interface:

Prepare the following checklist to test a Web site's user interface:

- Test the general look and feel of the entire window.
- Test the complete functionality of the control panel of the entire window (minimizing, maximizing, closing, double-clicking the mouse on the control panel, etc. should all work properly).
- Test the spellings of all the text displayed in the window, such as the window caption, status bar options, field prompts, pop-up text, and error messages.
- Test the colors, fonts, and font widths of the entire window; these should be standard for the field prompts and displayed text.
- Test each toolbar and menu item for navigation using the mouse and keyboard.
- Test window navigation using the mouse and keyboard.
- Test to make sure that proper format masks are used. For example, all drop-down boxes should be properly sorted, and date entries should be properly formatted.
- Test that the color of the field prompts and field background is to standard in read-only mode.
- Test the vertical/horizontal scroll bars. These should appear only if required.
- Test the various controls on the window. The controls should be aligned properly.
- Test the resizing of the window.
- Test the alignment of the fields. All character or alphanumeric fields should be left aligned and all numeric fields should be right aligned.
- Check for the display of defaults, if there are any.
- Test all the shortcut keys. These should all be well defined and work properly.
- Test the hotkeys of the entire window. Every menu command should have a properly defined hotkey.
- Test for the duplication of hotkeys on the same window.
- Test that Alt+Tab is working properly.
- Test that Alt+F4 is working properly.
- Test the tab order. It should be from top left to bottom right. Also, read-only/disabled fields should be excluded from the TAB sequence.
- Test the positioning of the cursor. The cursor should be positioned on the first input field (if any) when the window is opened.
- Make sure that if any default button is specified, it works properly.
- Test and validate the behaviour of each control, such as push button, radio button, list box etc.
- Test to make sure that the window is modal. This will prevent the user from accessing other functions when this window is active.
- Test to make sure that multiple windows can be opened at the same time.
- Make sure that there is a Help menu.
- Check to make sure that the command buttons are greyed out when not in use.


14.4 Server Load Testing

Web sites that rely on a heavy volume of trading on the Internet need to make sure that their Web servers have very high uptime. To prevent breakdowns and to offload traffic from a server at peak times, entrepreneurs must invest in additional Web servers. The power of a Web server to handle a heavy load at peak hours depends on the network speed and on the server's processing power, memory, and storage space. The hardware component of the Web server is most vulnerable at peak hours.

A server's capacity is measured by the number of simultaneous users it can successfully handle. Excessive load on the Web server causes its performance to degrade dramatically until the load is reduced. The objective of load testing is to determine the optimum number of simultaneous users the server can support.
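One simple way to approximate this is sketched below: simulated users are ramped up with a thread pool while the average response time and failure count are recorded at each step. The target URL is a placeholder.

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://example.com/"   # placeholder: point at the server under test

def one_user():
    # One simulated user: fetch the page and return the response time,
    # or None if the request failed.
    start = time.monotonic()
    try:
        urllib.request.urlopen(URL, timeout=10).read()
        return time.monotonic() - start
    except Exception:
        return None

for users in (10, 50, 100, 200):
    with ThreadPoolExecutor(max_workers=users) as pool:
        results = list(pool.map(lambda _: one_user(), range(users)))
    times = [t for t in results if t is not None]
    failures = results.count(None)
    avg = sum(times) / len(times) if times else float("nan")
    print(f"{users} users: avg {avg:.2f}s, {failures} failures")
```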

14.5 Database Testing

Most Web sites typically store user profiles, catalogs, shopping carts, and order information in the database. Since the database stores a lot of information about the site and its users, it must be tested thoroughly. The purpose of database testing is to determine how well the database meets requirements.

The following are the main reasons to test a database:

14.5.1 Relevance of Search Results

The Search option is one of the most frequently used functions of online databases. Users generally use Search results to go directly to the page they want instead of navigating step by step, saving time and effort.

The Search option on many Web sites does not work properly, which annoys users. Make sure that the Search option on your Web site works properly and displays the proper results.

A team of people who are not part of the development team should carry out testing for Search relevance. This team assumes the role of the online customer and tries out random Search options with different keywords. The Search results are recorded by their percentage of relevance to the keyword. At the end of the testing process, the team comes up with a series of recommendations, which can be incorporated into the database Search options.

14.5.2 Query Response Time

The query response time is essential in online transactions. The turnaround time for responding to queries in a database must be short. The results from this testing may help to identify problems, such as bottlenecks in the network, specific queries, the database structure, or the hardware.
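As a self-contained illustration of timing a query against a response-time budget, the sketch below uses sqlite3; in practice you would run the same measurement against the production DBMS, and the table, data volume and budget are all assumptions.

```python
import sqlite3
import time

# Build a throwaway database with a realistic volume of rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE catalog (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO catalog (name) VALUES (?)",
                 [(f"item-{i}",) for i in range(100_000)])

# Time the query under test.
start = time.monotonic()
rows = conn.execute(
    "SELECT * FROM catalog WHERE name LIKE ?", ("item-99%",)).fetchall()
elapsed = time.monotonic() - start

print(f"{len(rows)} rows in {elapsed * 1000:.1f} ms")
assert elapsed < 0.5, "query exceeded the agreed response-time budget"
```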

14.5.3 Data integrity

A database stores important data: catalogs, pricing, shipping tables, tax tables, orders, and customer information. Testing must verify the correctness of the stored data. Testing should therefore be performed on a regular basis, because data changes over time.

Checklist for Data Integrity

Prepare the following checklist for the proper testing of Data Integrity of a Web Site:

From the list of functionality provided by the development team, test the creation, modification, and deletion of data in tables.


- Test to make sure that sets of radio buttons represent a fixed set of values.
- Check what happens when a blank value is retrieved from the database.
- Test to make sure that when a particular set of data is saved to the database, each value gets saved fully; in other words, truncation of strings and rounding of numeric values do not occur.
- Test whether default values are saved in the database if the user input is not specified.
- Test compatibility with old data. In addition, old hardware, old versions of the operating system, and interfaces with other software need to be tested.
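A round-trip sketch for the truncation and rounding checks above follows, using sqlite3 so the example is self-contained; the table and values are illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (note TEXT, amount REAL)")

# Save a long string and a precise numeric value...
saved_note, saved_amount = "x" * 500, 1234.56789
conn.execute("INSERT INTO orders VALUES (?, ?)", (saved_note, saved_amount))

# ...then read them back and confirm nothing was lost on the way.
note, amount = conn.execute("SELECT note, amount FROM orders").fetchone()
assert note == saved_note, "string was truncated on save"
assert abs(amount - saved_amount) < 1e-9, "numeric value was rounded"
print("round-trip check passed")
```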

14.5.4 Data Validity

Errors caused by incorrect data entry, called data validation errors, are probably the most common data-related errors, and they are also the most difficult to detect in the system. These errors are typically caused when a large volume of data is entered in a short time frame. For example, $67 can be entered as $76 by mistake. The data entered is therefore invalid.

You can reduce data validity errors by using data validation rules in the data fields. For example, if the date field in a database uses the MM/DD/YYYY format, a developer can incorporate a validation rule such that MM does not exceed 12 and DD does not exceed 31.

In many cases, simple field validation rules are unable to detect data validity errors. Here, queries can be used to validate data fields. For example, a query can be written to compare the sum of the numbers in the database data field with the original sum of numbers from the source. A difference between the figures indicates an error in at least one data element.
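The sketch below illustrates both tactics: a field-level rule for the MM/DD/YYYY format, and a control-total comparison that catches the $67-entered-as-$76 slip described above. The amounts are illustrative.

```python
import re

def valid_date(text):
    # Field-level rule for the MM/DD/YYYY format described above.
    m = re.fullmatch(r"(\d{2})/(\d{2})/(\d{4})", text)
    if not m:
        return False
    mm, dd = int(m.group(1)), int(m.group(2))
    return 1 <= mm <= 12 and 1 <= dd <= 31

assert valid_date("12/31/2002")
assert not valid_date("13/01/2002")    # MM must not exceed 12

# Control-total check: compare the sum stored in the database with the
# sum computed from the source documents; a difference flags an error.
source_amounts = [67, 120, 5]
stored_amounts = [76, 120, 5]          # "$67 entered as $76" slips through
if sum(stored_amounts) != sum(source_amounts):
    print("control totals differ: at least one data element is wrong")
```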

14.5.5 Recovery of Data

Another test that is performed on database software is the recovery-of-data test. This test involves forcing the system to fail in a variety of ways to ensure that the system recovers from faults and resumes processing within a pre-defined period of time. Verify that the system is fault-tolerant, meaning that processing faults do not halt the overall functioning of the system, and that data recovery and restart are correct in the case of auto-recovery. If recovery requires human intervention, the mean time to repair the database must be within pre-defined acceptable limits.

14.6 Security Testing

Gaining the confidence of online customers is extremely important to Web site success. Building that confidence is not an easy task and requires a lot of time and effort, so entrepreneurs must plan confidence-building measures. Ensuring the security of transactions over the Internet builds customer confidence.

The main technique in security testing is to attempt to violate built-in security controls. This technique ensures that the protection mechanisms in the system secure it from improper penetration.

For example, the tester may overwhelm the system with continuous requests, thereby denying service to others; purposely cause system errors and attempt to penetrate the system during recovery; or browse through insecure data to find the key to system entry.

There are two distinct areas of concern in Web site security:

14.6.1 Network Security

Unauthorized users can wreak havoc on a Web site by accessing confidential information or by damaging the data on the server. This kind of security lapse is due to insufficient network security measures. The network operating systems, together with the firewall, take care of the security over the network.


The network operating system must be configured to allow only authentic users to access the network. Also, firewalls must be installed and configured, ensuring that the transfer of data is restricted to a single controlled point on the network. This effectively prevents hackers from accessing the network.

For example, suppose a hacker accesses an unsecured FTP port (say, port 21) of a Web server. Using this port as an entry point to the network, the hacker can access data on the server and may also be able to access any machine connected to this server. Security testing will indicate these vulnerable areas and will also help to configure the network settings for better security.
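A very small sketch of such a check follows: it probes a list of ports and flags any that are open but not on the allowed list. The host address is a placeholder from the RFC 5737 test range; probe only systems you are authorised to test.

```python
import socket

HOST = "192.0.2.10"        # placeholder address from the RFC 5737 test range
ALLOWED = {80, 443}        # ports that are expected to be open

for port in (21, 25, 80, 443):
    with socket.socket() as s:
        s.settimeout(1)
        is_open = s.connect_ex((HOST, port)) == 0   # 0 means connected
    if is_open and port not in ALLOWED:
        print(f"port {port} is open but should be closed")
```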

14.6.2 Payment Transaction Security

Secure transactions create customer confidence. That's because when customers purchase goods over the Internet, they can be apprehensive about giving Credit Card information. Therefore, security measures should be communicated to the customer.

Two things need to be tested to ensure that the customer's credit card information is safe:

i. Testing should ensure that the credit card information is transmitted and stored securely.
ii. Testing should verify that strong encryption software is used to store the credit card information, and that only limited, authorized access is allowed to this information.

14.7 Software Performance Testing

Software performance testing aims to ensure that the software performs in accordance with operational specifications for response time, processing costs, storage use, and printed output.

All interfaces are fully tested. This includes verifying the facilities and equipment, and checking to make sure that the communication lines are performing satisfactorily. The following should be tested for the software performance of a Web site:

14.7.1 Correct Data Capture

Correct data capture refers to the use of CGI scripts or ASP to capture data from the Web client. This includes forms, credit card numbers, and payment details. Any error in capturing this data will result in incorrect processing of the customers' orders.

14.7.2 Completeness of Transaction

Transaction completeness is the most important aspect of a Web site transaction. Any error in this phase of operation can invite legal action, because the affected party may be at risk of losing money due to an incomplete transaction.

14.7.3 Gateway Compatibility

The payment gateway consists of software installed on Web servers to facilitate payment transactions. The gateway software captures Credit Card details from the customer and then verifies the validity of the Credit Card with the transaction clearinghouse.

Gateways are complex and can create compatibility problems, which in turn make Web site transactions unreliable. The entrepreneur therefore needs to consult experienced developers before investing in a payment gateway, and before launching the site, online pilot testing must be done to test the reliability of the gateway.


14.8 Web Testing Methods

The following methods can be used for the Web testing:

14.8.1 Stress Testing

Running the system in a high-stress mode creates high demands on resources and stress tests the system. Some systems are designed to handle a specified volume of load.

For example, a bank transaction processing system may be designed to process up to 100 transactions per second; an operating system may be designed to handle up to 200 separate terminals.

Tests must be designed to ensure that the system can process the expected load. This usually involves planning a series of tests where the load is gradually increased to reflect the expected usage pattern.

Stress tests steadily increase the load on the system beyond the maximum design load until the system fails. This type of testing has a dual function:

i) It tests the failure behavior of the system. Circumstances may arise through an unexpected combination of events where the load placed on the system exceeds the maximum anticipated load. Stress testing determines if overloading the system results in loss of data or user service.

ii) It stresses the system and may bring to light defects that would not normally manifest themselves.

Stress testing is particularly relevant to Web site systems with Web databases. These systems often exhibit severe degradation when the network is swamped with operating system calls.

14.8.2 Regression Testing

Regression testing refers to re-testing previously tested components/functionality of the system to ensure that they function properly even after a change has been made to parts of the system.

As defects are discovered in a component, modifications should be made to correct them. This may require other components in the testing process to be re-tested.

Component and system errors can present themselves later in the testing process. The process is iterative because information is fed back from later stages to earlier parts of the process. Repairing program defects may introduce new defects; therefore, the testing process should be repeated after the system is modified.

Guidelines to follow for Regression testing:

i) Test any modifications to the system to ensure that no new problems are introduced and that the operational performance is not degraded due to the modifications.

ii) Any changes to the system after the completion of any phase of testing or after the final testing of the system must be subjected to a thorough Regression test. This is to ensure that the effects of the changes are transparent to other areas of the system and other systems that interface with the system.

iii) The project team must create test data based on predefined specifications. The original test data should come from other levels of testing and then it should be modified along with test cases.
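A minimal baseline-and-compare sketch of this idea follows; system_under_test is a hypothetical stand-in for the function being maintained, and the case list and file name are assumptions.

```python
import json

def system_under_test(x):
    # Hypothetical function being maintained; replace with the real one.
    return x * 2

BASELINE = "baseline.json"

def record_baseline(cases):
    # Run once on the accepted version to capture the expected results.
    results = {str(c): system_under_test(c) for c in cases}
    with open(BASELINE, "w") as f:
        json.dump(results, f)

def regression_check(cases):
    # Re-run after every modification and compare with the baseline.
    with open(BASELINE) as f:
        expected = json.load(f)
    for c in cases:
        assert system_under_test(c) == expected[str(c)], f"regression at {c}"

cases = [0, 1, -5, 100]
record_baseline(cases)
regression_check(cases)
print("no regressions detected")
```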


14.8.3 Acceptance Testing

Acceptance testing is performed on a collection of business functions in a production environment, and after the completion of functional testing. This is the final stage in the testing process before the system is accepted for operational use. It involves testing the system with data supplied by the customer or the site visitor rather than the simulated data developed as part of the testing process.

Acceptance testing often reveals errors and omissions in the system requirements definition. The requirements may not reflect the actual facilities and performance required by the user. Acceptance testing may demonstrate that the system does not exhibit the anticipated performance and functionality. This test confirms that the system is ready for production.

Running a pilot for a select set of customers helps in Acceptance testing for an e-commerce site. A survey is conducted among these site visitors on different aspects of the Web site, such as user friendliness, convenience, visual appeal, relevance, and responsiveness.

A sample of test plan and test cases on Web Testing has been added in Appendix – 3.

15 Guidelines to prepare Test Plan

15.1 Preparing Test Strategy

The test strategy should be:

Risk based – the amount and rigour of testing at each stage corresponds to the risk of failure due to errors.

Layered and staged – aligned with the development process, testing aims to detect different errors at each stage.

Prepared early – to allow testing to be planned, resourced and scheduled, and to identify any tool requirements.

Capable of automation – throughout the life cycle there are opportunities for automated support, particularly in the areas of requirements testing, test execution and test management.

Since the strategy is aimed at addressing the risks of a client/server development, knowledge of the risks of these projects is required.

15.2 Standard Sections of a Test Plan

A standard test plan contains different sections. Those sections with their explanations are given below:

1. Introduction

Set goals and expectations of the testing effort. Summarise the software items and software features to be tested. The purpose of each item and its history may be included.

References to the following documents, if they exist, are required in the highest-level test plan:

- Project authorisation
- Project plan
- Relevant policies
- Relevant standards


In multilevel test plans, each lower level plan must reference the next higher level plan.

1.1 Purpose

Describe the purpose of the test plan. Multiple components can be incorporated into one test plan.

1.2 System Overview

This section provides an overview of the project and identifies critical and high-risk functions of the system.

1.3 Test Team Resources

The composition of the ABC Project test team is outlined within the test team profile depicted in Table 9.1. This table identifies the test team positions on the project together with the names of the personnel who will fill these positions. The duties to be performed by each person are described, and the skills of the individuals filling the positions are documented. The last two columns reflect the years of experience for each test team member with regard to total test program experience as well as years of experience with the designated test management tool for the project.

Table 9.1 Test Team Profile

The table's columns are: Position, Name, Duties/Skills, Test Experience (years), and Test Tool Experience (years).

2. Test Environment

2.1 Test Environment

The test environment mirrors the production environment. This section describes the hardware and software configurations that compose the system test environment. The hardware must be sufficient to ensure complete functionality of the software. Also, it should support performance analysis aimed at demonstrating field performance.

2.2 Automated Tools

List any testing tools you may want to use.

2.3 Test Data

Working in conjunction with the database group, the test team will create the test database. The test database will be populated with unclassified production data.

3. Test Program

3.1 Scope of testing

List the features to be tested, such as particular fields and expected values. Identify the software features to be tested and the test-design specification associated with each feature.


3.2 Areas beyond the scope

Identify all features, and significant combinations of features, that will not be tested, and the reasons why.

3.3 Test Plan Identifier

Specify a unique identifier to be used when referring to this document.

Naming Convention for Test Identifier

ABCXXYYnn

ABC – first 3 letters of the project name

XX – type-of-testing code, e.g.:
- System Testing – ST
- Integration Testing – IT
- Unit Testing – UT
- Functional Testing – FT

YY – a particular document code/abbreviation within the type of testing, e.g.:
- Test Plan – TP
- Test Case – TC
- Test Result – TR
- Test Specifications – TS
- Defect Reports – DR

nn – version serial number

Example: Identifier ARCSTTP1.0 – Document: System Test Plan – Remark: MS-Word document.
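As a small sketch, an identifier following this convention can be composed (and checked) mechanically; the helper function is illustrative, not part of the convention itself.

```python
TEST_TYPES = {"ST": "System Testing", "IT": "Integration Testing",
              "UT": "Unit Testing", "FT": "Functional Testing"}
DOC_TYPES = {"TP": "Test Plan", "TC": "Test Case", "TR": "Test Result",
             "TS": "Test Specifications", "DR": "Defect Reports"}

def make_identifier(project, test_type, doc_type, version):
    # Compose ABCXXYYnn from its parts, rejecting unknown codes.
    assert test_type in TEST_TYPES and doc_type in DOC_TYPES
    return f"{project[:3].upper()}{test_type}{doc_type}{version}"

print(make_identifier("ARC", "ST", "TP", "1.0"))   # -> ARCSTTP1.0
```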

3.4 Test Item

Identify the test items including their version/revision level. Also specify characteristics of their transmittal media which impact hardware requirements or indicate the need for logical or physical transformations before testing can begin. (An example would be that the code must be placed on a production server or migrated to a special testing environment separate from the development environment.)

Supply references to the following item documentation, if it exists:

- Requirements specification
- Design specification
- User guide
- Operations guide
- Installation guide

Reference any incident reports relating to the test items.

Items that are to be specifically excluded from testing may be identified.


3.5 Test Schedule

Include test milestones identified in the software project schedule as well as all item transmittal events. Define any additional test milestones needed. Estimate the time required for each testing task. Specify the schedule for each testing task and test milestone. For each testing resource (that is, facilities, tools, and staff), specify its periods of use.

3.6 Test Approach

Describe the general approach to testing software features and how this approach will ensure that these features are adequately tested. Specify the major activities, techniques, and tools, which will be used to test the described features. The description should include such detail as identification of the major testing tasks and estimation of the time required to do each one.

3.6.1 Test Coverage

(Determine the adequacy of the test plan.) Indicate whether branch coverage or multiple-condition coverage is required, and confirm that all conditions are covered by test cases.
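As an illustration only, branch coverage can be measured with an off-the-shelf analyzer; the sketch below uses the open-source Python tool coverage.py, which these guidelines do not mandate, and my_module_tests is a hypothetical module containing the tests:

import coverage

cov = coverage.Coverage(branch=True)   # record branch (decision) coverage too
cov.start()
import my_module_tests                 # hypothetical module containing the tests
my_module_tests.run_all()
cov.stop()
cov.report(show_missing=True)          # lists statements and branches never exercised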

3.6.2 Fixes and Regression Testing

3.6.3 Preparation of Test Specifications

This section contains the criteria and references used to develop the test specifications. The type of testing (black box or white box) is to be mentioned.

3.6.4 Pass/Fail Criteria

Specify the criteria to be used to determine whether each test item has passed or failed testing. If no basis exists for passing or failing test items, explain how such a basis could be created and what steps will be taken to do so.

3.6.5 Suspension Criteria and Resumption Requirements

Specify the criteria used to suspend all or a portion of the testing activity on the test items associated with this plan. Specify the testing activities that must be repeated when testing resumes.

3.6.6 Defect Tracking

To track defects, a defect workflow process has to be implemented.

3.6.7 Constraints

Identify significant constraints on testing such as test item availability, testing resource availability, and deadlines.

3.6.8 Entry and Exit Criteria

3.6.9 Test Deliverables

Identify all documents relating to the testing effort. These should include the following documents: Test Plan, Test-Design Specifications, Test-Case Specifications, Test-Procedure Specifications, Test-Item Transmittal Reports, Test Logs, Test-Incident Reports, and Test-Summary Reports.

Also identify test input/output data, test drivers, testing tools, etc.


3.6.10 Dependencies and Risk

Identify the high-risk assumptions of the test plan. Specify contingency plans for each.

3.6.11 Approvals

Specify the names and titles of all persons who must approve this plan. Provide space for the signatures and dates.

16 Amendment History

V1.0 – First Release

17 Guideline for Test Specifications

An overall plan for integration of the software and a description of the specific tests are documented in the test specification. This document contains a test plan and a test procedure, is a work product of the software process, and becomes part of the software configuration. A history of actual test results, problems, or peculiarities is recorded in this document. Each entry in the specification records the following fields:

Sr. | Test Items | Test Condition | Expected Results | Observed Results | Remarks
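One possible way (our sketch, not prescribed here) to capture such an entry as a machine-checkable record, with fields mirroring the column headings above:

from dataclasses import dataclass

@dataclass
class TestSpecEntry:
    sr: int                    # serial number of the entry
    test_item: str             # item under test
    test_condition: str        # condition exercised by the test
    expected_result: str       # result demanded by the specification
    observed_result: str = ""  # recorded during execution
    remarks: str = ""          # problems or peculiarities, per the history

entry = TestSpecEntry(1, "Login screen", "blank login name",
                      "login is rejected with an error message")
entry.observed_result = "login is rejected with an error message"
print("PASS" if entry.observed_result == entry.expected_result else "FAIL")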

References

1. Kaner, Cem (1997). Improving the Maintainability of Automated Test Suites. www.kaner.com/lawst1.htm
2. Myers, Glenford J. (1979). The Art of Software Testing. John Wiley & Sons.
3. Pressman, Roger S. (5th ed.). Software Engineering: A Practitioner's Approach. McGraw-Hill.
4. Beizer, Boris (1995). Black-Box Testing: Techniques for Functional Testing of Software and Systems. John Wiley & Sons.
5. Dustin, Elfriede; Rashka, Jeff; Paul, John (1999). Automated Software Testing: Introduction, Management, and Performance. Addison-Wesley.


Appendix – 1

List of Testing Tools

Business Analysis Phase

- Business Modeling Tools: allow for recording definitions of user needs and automating the rapid construction of flexible, graphical, client-server applications. Examples: Oracle Designer 2000, Rational Rose.
- Configuration Management Tools: allow for baselining important data repositories. Examples: Rational ClearCase, PVCS.
- Defect Tracking Tools: manage system life-cycle defects. Examples: TestTrack, Census, PVCS Tracker, Spyder.
- Technical Review Management: facilitates communication while automating the technical review/inspection process. Example: ReviewPro (Software Development Technologies).
- Documentation Generators: automate document generation. Example: Rational SoDA.

Requirements Definition Phase

- Requirements Management Tools: manage and organize requirements; allow for test procedure design and test progress reporting. Examples: Rational RequisitePro, QSS DOORS.
- Requirements Verifiers: verify syntax, semantics, and testability. Example: Aonix Validator/Req.
- Use Case Generators: allow for the creation of use cases. Example: Rational Rose.

Analysis and Design Phase

- Database Design Tools: provide a solution for developing second-generation enterprise client-server systems. Examples: Oracle Developer 2000, Erwin, Popkins, Terrain by Cayenne.
- Application Design Tools: help define software architecture; allow for object-oriented analysis, modeling, design, and construction. Examples: Rational Rose, Oracle Developer 2000, Popkins, Platinum, Object Team by Cayenne.
- Structure Charts, Flowcharts, and Sequence Diagrams: help manage processes. Example: Micrografx FlowCharter 7.
- Test Procedure Generators: generate test procedures from requirements, design, or data and object models. Examples: Aonix Validator, StP/T from IDE, Rational TestStudio.

Programming Phase

- Syntax Checkers/Debuggers: allow for syntax checking and debugging; usually come with a built-in programming language compiler. Examples: miscellaneous language compilers (C, C++, VB, PowerBuilder).
- Memory Leak and Runtime Error Detection Tools: detect runtime errors and memory leaks. Example: Rational Purify.
- Source Code Testing Tools: verify maintainability, portability, complexity, cyclomatic complexity, and standards compliance. Examples: CodeCheck from Abraxas Software, Visual Quality from McCabe & Associates.
- Static and Dynamic Analyzers: depict the quality and structure of code. Examples: LDRA Testbed, Discover.
- Various Code Implementation Tools: depending on the application, support code generation, among other things. Examples: PowerJ, JBuilder, SilverStream, Symantec Café.
- Unit Test Tools: automate the unit testing process. Example: MTE from IntegriSoft.

Metrics Tools

- Code (Test) Coverage Analyzers or Code Instrumentors: identify untested code and support dynamic testing. Examples: STW/Coverage, Software Research TCAT, Rational PureCoverage, IntegriSoft Hindsight, EZCover.
- Usability Measurements: provide usability testing as conducted in usability labs. Example: ErgoLight.

Other Testing Life-Cycle Support Tools

- Test Data Generators: generate test data. Examples: TestBytes, Rational Performance Studio.
- Prototyping Tools: allow for prototyping of applications, using programming languages such as Visual Basic or tools such as Access 97. Examples: VB, PowerBuilder.
- File Compare Utilities: allow for searching for discrepancies between files that should be identical in content. Examples: often part of capture/playback tools, such as Rational TeamTest, GMR Technologies' D2K/PLUS, and Software Research's EXDIFF.
- Simulation Tools: simulate the application to measure scalability, among other tasks. Example: OPNET.

Testing Phase

- Test Management Tools: allow for test management. Examples: Rational Suite TestStudio, TestDirector from Mercury Interactive.
- Network Testing Tools: allow for monitoring, measuring, testing, and diagnosis of performance across the entire network. Examples: NETClarity, Applied Computer Technology ITF.
- GUI Testing Tools (Capture/Playback): allow for automated GUI tests; capture/playback tools record interactions with online systems so they may be replayed automatically. Examples: Rational Suite TestStudio, Visual Test, Mercury Interactive's WinRunner, Segue's Silk, STW/Regression from Software Research, Auto Scriptor Inferno, Automated Test Facility from Softbridge, QARun from Compuware.
- Non-GUI Test Drivers: allow for automated execution of tests for products without a graphical user interface.
- Load/Performance Testing Tools: allow for load/performance and stress testing. Example: Rational Performance Studio.
- Web Testing Tools: allow for testing of Web applications. Examples: Segue's Silk, ParaSoft's Jtest (for Java).
- Environment Testing Tools: various testing tools are on the market for various testing environments. Examples: Mercury Interactive's XRunner, Rational's Prevue-X.


Appendix - 2

Sample System Test Plan

1.0 Introduction

1.1 Purpose

This test plan for ABC version 1.0 supports the following objectives:

1. To detail the activities required to prepare for and conduct the system test.
2. To communicate to all responsible parties the tasks they are to perform and the schedule to be followed in performing the tests.
3. To define the sources of the information used to prepare the plan.
4. To define the test tools and environment needed to conduct the system test.

1.2 Background

ABC is an integrated set of software tools developed to extract raw information and data flow information from C programs and then identify objects, patterns, and finite state machines.

1.3 Test Team Resources

The composition of the ABC Project test team is outlined in the test team profile depicted in Table 2A. This table identifies the test team positions on the project together with the names of the personnel who will fill these positions. The duties to be performed by each person are described, and the skills of the individuals filling the positions are documented. The last two columns reflect each test team member's years of total test program experience and years of experience with the designated test management tool for the project.


Table 2A Test Team Profile

Position | Name | Duties/Skills | Test Experience (years) | Test Tool Experience (years)
Test manager | Mr. X | Responsible for the test program, customer interface, recruiting, test tool introduction, and staff supervision. Skills: MS Project, C, test tool experience. | 12 | 1
Test lead | Miss A | Performs staff supervision, cost/progress status reporting, test planning/design/development and execution. Skills: TeamTest, SQL, SQA Basic, UNIX, MS Access, C/C++, SQL Server. | 5 | 3
Test engineer | Mr. D | Performs test planning/design/development and execution. Skills: test tool experience, C. | 2 | 5
Test engineer | Miss T | Responsible for the test tool environment, network and middleware testing. Performs all other test activities. Skills: CNE, UNIX, C/C++. | 1 | -
Junior test engineer | Miss J | Performs test planning/design/development and execution. Skills: C/C++. | - | -

2.0 Test Environment

2.1 Hardware & Software

Hardware

The tests will be conducted on one of the machines licensed to run REFINE/C.

Software

REFINE/C, ABC modules, etc.

2.2 Automated Tools

One tool we will use is DGL, a test case generator. We will build a C grammar to generate random C code. While the generated code will not be syntactically correct in all cases, it will give us good ideas for use in stress testing our code. Thus, the main purpose of DGL will be to generate arbitrary code that gives the test team ideas for tests that might otherwise not be considered.
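The toy sketch below is our own illustration of the same grammar-driven idea; it is not DGL itself and uses a deliberately tiny C-statement grammar:

import random

# Toy grammar: each nonterminal maps to a list of alternative expansions.
GRAMMAR = {
    "stmt": [["if (", "expr", ") ", "stmt"],
             ["while (", "expr", ") ", "stmt"],
             ["{ ", "stmt", " ", "stmt", " }"],
             ["expr", ";"]],
    "expr": [["id", " + ", "id"], ["id"]],
    "id":   [["a"], ["b"], ["c"]],
}

def expand(symbol, depth=0):
    # Recursively expand a symbol, bounding recursion so generation terminates.
    if symbol not in GRAMMAR:
        return symbol                    # terminal text, emit as-is
    if depth > 4:
        return "x;" if symbol == "stmt" else "a"
    return "".join(expand(part, depth + 1)
                   for part in random.choice(GRAMMAR[symbol]))

for _ in range(3):
    print(expand("stmt"))   # e.g. "while (a + b) { c; a; }"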


2.3 Test Data

The test team will work with the development team to generate test data and create a test database. The customer may also be contacted.

3.0 Test Program

3.1 Scope of Testing

This test plan covers a complete "black box" or functional test of the associated program modules (below). We assume the correctness of the REFINE/C grammar and parser. We also assume that "white box" or "glass box" testing will be done prior to these tests. We will not explicitly test the interfaces between modules.

3.2 Area Beyond the Scope

There is no need to test security, recovery, or performance for this system.

3.3 System Test Plan Identifier

Identifier | Document | Remark
ABC-STTP1.0 | System Test Plan | MS-Word Document

3.4 Test Items

Program Modules

The program modules are detailed below. The design documents and the references mentioned below will provide the basis for defining correct operation.

Design Documents

These are links to the program module design documents. The indentation shows the dependencies between the modules: modules at the same level do not depend upon each other, while an indented module depends upon the modules at the outer levels. All modules depend either directly or indirectly on the Interface to REFINE and REFINE/C. Control Dependency Graphs and Reaching Definitions depend upon Control Flow Graphs but are independent of each other, and so on.

Interface to REFINE and REFINE/C
    Control Flow Graphs
        Control Dependency Graphs
        Reaching Definitions
            Data Dependency Graphs
    Canonicalize Variables
        Variable Dependency Graphs
    Cohesion Processing
    Slice Module
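As a hedged aside, a bottom-up integration-test order can be derived mechanically from such a dependency graph; the sketch below encodes only the dependencies stated explicitly in the text:

from graphlib import TopologicalSorter  # Python 3.9+

deps = {
    "Control Flow Graphs":       {"Interface to REFINE and REFINE/C"},
    "Control Dependency Graphs": {"Control Flow Graphs"},
    "Reaching Definitions":      {"Control Flow Graphs"},
}
# static_order() lists each module after everything it depends on, giving
# an order in which the modules can be integrated and tested bottom-up.
print(list(TopologicalSorter(deps).static_order()))
# e.g. ['Interface to REFINE and REFINE/C', 'Control Flow Graphs',
#       'Control Dependency Graphs', 'Reaching Definitions']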


Test Documents

This set of links [1] points to the root of each individual module's test document tree.

Interface to REFINE and REFINE/C
Control Flow Graphs
Control Dependency Graphs
Reaching Definitions
Data Dependency Graphs
Canonicalize Variables
Variable Dependency Graphs
Cohesion Processing
Slice Module

3.5 Test Schedule

A detailed test schedule (portion of schedule) is given below:

Task ID | Task Description | Duration | Start | Finish | Responsibility
1 | Develop test responsibilities | 1d | 11/25 | 11/25 | PM
2 | Develop review and reporting methods | 1d | 11/26 | 11/26 | PL
3 | Develop management of test sessions | 1d | 11/27 | 11/27 | PM/PL
4 | Verify change-control activities | 1d | 11/27 | 11/27 | PL
5 | Develop issue/problem reporting standards | 1d | 11/30 | 11/30 | PL
6 | Develop test procedures | 59d | 12/12 | 2/12 | PL
7 | Develop functional/usability test procedures | 55d | 12/12 | 2/8 | PL
8 | Develop security test procedures | 15d | 12/22 | 1/7 | PL
9 | Develop stress/volume test procedures | 16d | 1/7 | 1/23 | PL
10 | Develop performance test procedures | 14d | 1/23 | 1/27 | PL

3.6 Test Approach

The test personnel will use the design document references in conjunction with the ANSI C grammar by Jutta Degener to devise a comprehensive set of test cases. The aim will be to have a representative sample of every possible construct that the module should handle. For example, in testing the Control Flow Graph module, we would want cases containing various combinations of iteration-statements, jump-statements, and selection-statements.

3.6.1 Fixes and Regression Testing

The complete set of test cases developed for a particular module will be rerun after program changes to correct errors found in that module during the course of testing.

3.6.2 Comprehensiveness

Using the C grammar as a basis for generating the test cases should result in a comprehensive set of test cases. We will not necessarily try to exhaustively cover all permutations of the grammar, but will strive for a representative sample of the permutations.

[1] The links mentioned above redirect the reader to pre-existing test documents. Since this is a sample test plan, it is beyond the scope of this document to include actual test documents; the links only illustrate that one can refer to existing test documents via hyperlinks.


3.6.3 Pass/Fail Criteria

The initial run of tests on any given module will be verified by one of the test team personnel. After these tests are verified as correct, they will be archived and used as an oracle for automatic verification for additional or regression testing. As an example, when testing the Control Flow Graph module, an output is deemed correct if the module outputs the correct set of nodes and edges for a particular input C program fragment.
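A minimal sketch of this archive-and-compare scheme (file locations and the run_module() driver are illustrative assumptions, not part of the plan):

from pathlib import Path

def check_against_oracle(test_name, actual_output, oracle_dir=Path("oracle")):
    # Compare a run's output with the archived, manually verified output.
    golden = oracle_dir / (test_name + ".out")
    if not golden.exists():
        # First run: archive the output once a tester has verified it.
        oracle_dir.mkdir(parents=True, exist_ok=True)
        golden.write_text(actual_output)
        return True
    return actual_output == golden.read_text()

# Regression check for the Control Flow Graph module (hypothetical driver):
# output = run_module("cfg", "fragment_001.c")
# assert check_against_oracle("cfg_fragment_001", output)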

3.6.4 Suspension Criteria and Resumption Requirements

N/A

3.6.5 Defect Tracking

To track defects, <Name of the Defect Tracking System> will be used as the tracking tool. The status of a bug recorded in <Name of the Defect Tracking System> will follow that tool's standard status identification terms and methodology throughout the bug's lifecycle. For example, the status could be any of the following:

1. OpenForDev: Bug identified and reported to the development team.
2. OpenForQA: Bug fixed and sent back to QA for verification.
3. Fixed: Bug fix verified by QA.
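A minimal sketch of this workflow as a small state machine follows; the reopen transition (OpenForQA back to OpenForDev when verification fails) is our assumption, and no particular tracking tool is implied:

# Allowed status transitions; None marks a newly raised defect.
ALLOWED = {
    None:         {"OpenForDev"},           # reported to the development team
    "OpenForDev": {"OpenForQA"},            # fix delivered, back to QA
    "OpenForQA":  {"Fixed", "OpenForDev"},  # verified, or reopened (assumed)
    "Fixed":      set(),                    # terminal state
}

class Defect:
    def __init__(self, defect_id):
        self.defect_id = defect_id
        self.status = None

    def move_to(self, new_status):
        # Reject any status change the workflow does not allow.
        if new_status not in ALLOWED[self.status]:
            raise ValueError(f"{self.defect_id}: {self.status} -> {new_status}")
        self.status = new_status

d = Defect("ABCSTDR1.0-042")   # hypothetical defect identifier
for step in ("OpenForDev", "OpenForQA", "Fixed"):
    d.move_to(step)
print(d.status)                # Fixed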

3.6.6 Constraints

We need to plug in our deadline(s) here. Testing deadlines are not firm at this time.

3.6.7 Test Deliverables

These will be detailed at the module test plan level.

We have decided that, because the vast majority of the test cases will be submitted by the test team personnel, there is no need for a Test Item Transmittal Report. If any test cases are submitted by outside parties, we will handle them as change requests. This means that the test team must approve the reason for the test, and the specific test itself, before it is placed into the appropriate module's test suite.

3.6.8 Dependencies and Risk

Risk | Effect | Resolution | Chances of Occurrence
Attrition rate | | | Very High/High/Medium/Low/Very Low
Hardware sharing with other projects | | | Very High/High/Medium/Low/Very Low
Non-availability of H/W & S/W in time | | | Very High/High/Medium/Low/Very Low

3.6.9 Approvals

Test Manager/Date

Development Project Manager/Date

Quality Assurance Manager/Date


Appendix - 3

Sample Test Plan for Web Testing

PRE-DELIVERY TEST PLAN
Plan ID: TPA100
Date: 28-Dec-00

Project: ABC 1.00

AMENDMENT HISTORY

Amendment ID | Version | Date | Amendment | Author
01 | 1.00 | 28-Dec-2000 | Document created | Sanjay

Brief Description of the Product:

This project is an e-commerce order-placement site. The client is a US-based organisation, "ABC Cables", which deals in various types of cable. Cables are available in standard sizes, and the length of a cable can also be customised. The site is restricted to users in the US only.

<Name of the Company> has developed a dynamic site that displays the product range from the database and allows a user to select items and place orders over the Internet.

The testing is being planned on special request from the Project Manager.

Test Objectives

The objective is to conduct black box testing of the system from the user's perspective.

Test Scope

1. Testing of all the features submitted by the Project Manager is to be done thoroughly. All these documents are available in the baseline library under the SRS folder of ABC Cables.

2. Since the project has been developed and used with Internet Explorer, the testing should be done under Netscape. The test cases should cover:
   - Verification of the maximum number of items (150 items) selected by all the available methods.
   - Normal (below 150 pounds of selected-item weight), boundary (exactly 150 pounds) and extreme (above 150 pounds) shopping conditions, to be tested on the live site (see the boundary-value sketch after this list).
   - Extreme conditions on the local site: a large quantity (9999) of an item, a large invoice value (9999999999.99), a large number of items (100) in the shopping cart, and a large number of operations (approx. 50, other than adding items) on the shopping cart.

3. Coverage of the system (based on the working model and specification document):
   - Menu options – 100%.
   - Functionality – 100%, based on the specification document submitted by the Project Manager.
   - User interface – 75% (mainly covering the general look & feel, screen appearance and pop-up menu of each type of page).


   - Navigation – 30% (mainly covering switching from one page to another through search (15% of items) and links (15% of items), and movement within the page).
   - Security – 75%, covering in detail login and logout for registered users (at least one of each type) and some invalid conditions.
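The weight conditions in item 2 above follow classic boundary-value analysis; a minimal sketch, with probe values chosen by us around the stated 150-pound limit:

WEIGHT_LIMIT_POUNDS = 150          # limit stated in the test scope

def boundary_cases(limit):
    # Classic below / at / above probes around a numeric limit.
    return [("normal", limit - 1), ("boundary", limit), ("extreme", limit + 1)]

for label, pounds in boundary_cases(WEIGHT_LIMIT_POUNDS):
    print(f"{label}: place an order totalling {pounds} pounds and record the result")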

Test Environment

- Pentium-based PC and 486 DX.
- Netscape <Ver> and Internet Explorer <Ver>.
- Modem – <Detail of the Modem>; for example: US Robotics, speed 28,800, fax modem with V.34 & V.32bis.
- Internet site: http://000.00.000.00
- Local site: http://ABC_ntserver/Abc.com

Stop Criteria

- All the test cases have been executed at least once.
- All the critical defects are addressed and verified.
- All the unresolved defects are analysed and marked with the necessary comments/status.
- In the last iteration of testing, no critical defects are reported.
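A hedged sketch of how these criteria might be checked mechanically from the tracking data; the record fields are our assumptions, not any tool's schema:

def stop_criteria_met(test_cases, defects, last_iteration_critical_count):
    # Each criterion above, translated into a simple check over the records.
    every_case_run = all(case["runs"] >= 1 for case in test_cases)
    criticals_verified = all(d["verified"] for d in defects if d["critical"])
    unresolved_annotated = all(d["comments"] for d in defects if not d["resolved"])
    return (every_case_run and criticals_verified and unresolved_annotated
            and last_iteration_critical_count == 0)

cases = [{"runs": 1}, {"runs": 2}]
bugs = [{"critical": True, "verified": True, "resolved": True, "comments": "fixed"}]
print(stop_criteria_met(cases, bugs, 0))   # True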

Test Process

- The testing team will prepare the test case list and test cases based on the documents provided by the development team.
- Testing shall be done based on these test cases, and a test report will be prepared.
- The bugs encountered shall be reported simultaneously using <Name of the Defect Tracking System>.
- The decision for project acceptance or rejection will be based on the feedback from the Project Manager.
- The verification of the fixed defects shall be done after the release of fresh software (if required).
- In the case of any defect that does not allow a test case to be tested completely, no further testing will be done on that test case. During verification of the fixed defect, complete testing of the test case will be repeated.
- The testing team will maintain the status and criticality of each reported defect.
- The process of defect finding and verification shall be iterated until the stop criteria are satisfied.

Human Resources

All the team members of the Testing Group <Name of the Team Members> will be involved in testing. However, depending on other tasks and resource availability, reallocation may be done.

Reporting

After the completion of each test cycle, the Testing Head will submit the defect report and state whether the software is accepted or rejected.

Training Requirement

Domain area / application knowledge: The Project Manager has given proper training.

Operational: Acquired by working on the site, since it is an Internet site.


Sample Test cases For Login Page

Topic: Login

Functionality: Login

Reference(s): Nil.

The data set should cover normal, boundary and extreme cases of data for each field on the screen concerned.

1. The testing should be done for the following valid conditions at least once:
   - Log in as a privileged user (with all 7 types), add 5 randomly selected items, and verify the cost of each item against the percentage of discount allowed to that category of user.
   - Verify that after successful login, control goes to the screen displaying the catalogue of ultra spec cables.
   - Search at least 2 items with no further sub-levels (after successful login).
   - Search at least 3 items with further sub-levels (after successful login).
   - Verify that clicking on an item category displays that category in detail, i.e. shows the contents or items available under that category.

2. The testing should be done for the following invalid conditions:
   - Try to log in with a non-existing login name.
   - Try to log in without entering a login name (left blank).

3. Testing for the user interface issues of the screen should be done covering the following points:
   - Make sure that control(s), caption(s), text, etc. are clearly visible and look fine.
   - Make sure that Alt+Tab works for switching between different open applications.
   - Make sure that the pop-up menu is context sensitive.
   - Make sure that the heading, sub-heading and normal text are clearly identifiable by font size, attribute and color.
   - Make sure that the company's logo is clearly visible.

Testing Information

Environment:            Iteration #:
Start Date:             End Date:
Tester Name:            Status:


Sample Test cases for First page

Topic: General

Functionality: All

Reference(s): Nil.

The data set should cover normal, boundary and extreme cases of data for each field on the screen concerned.

1. The testing should be done for the following valid conditions at least once:

   1.1. About Us:
        - Make sure that there are no spelling mistakes.

   1.2. Ordering Information:
        - Make sure that there are no spelling mistakes.
        - Make sure that adequate information is given.

   1.3. Terms and Conditions:
        - Make sure that there are no spelling mistakes.
        - Make sure that adequate information is given.
        - Make sure that all 16 hypertext links are functioning properly.

2. The testing should be done for the following invalid conditions:
   - Try to edit any information directly.

3. Testing for the user interface issues of the screen should be done covering the following points:
   - Make sure that controls, text and captions are clearly visible.
   - Make sure that Alt+Tab is working fine.
   - Make sure that the heading, subheading and normal text are clearly identified by font size.
   - Make sure that the 'catch word' is clearly identified by attribute and/or color.
   - Make sure that hypertext is clearly identified by its font color, whether or not it has been opened.
   - Make sure that the logo is clearly visible and looks fine.
   - Make sure that the pop-up menu is context sensitive.

Testing Information

Environment:            Iteration #:
Start Date:             End Date:
Tester Name:            Status:


Sample Test Case for User Registration Page

Topic: Register Me

Functionality: Register

Reference(s): Nil

The data set should cover normal, boundary and extreme cases of data for each field on the screen concerned.

1. The testing should be done for the following valid conditions:

   1.1. Prepare 8 test data sets fulfilling at least the following criteria:
        - with only mandatory fields
        - with optional fields
        - with maximum-length data in all fields
        - with minimum-length data in all fields
        - with at least 8 states selected from the combo box with their respective zip codes: NJ (08817-20), NY (11100-01, 11104), AL (12200-10), CT (06030-34), OH (43803-04) for addresses in the US
        - with 'ship to' the same as 'bill to'
        - with 'ship to' different from 'bill to'
        - with at least 1 entry with all fields
   1.2. Use 'Register Me' with at least 6 different combinations.
   1.3. Move to Home at least 2 times and make sure that the home page is opened.
   1.4. Search at least 2 different part IDs.

2. The testing should be done for the following invalid conditions (see the sketch after this list):
   - Try to enter an invalid e-mail ID (without '@' in the address).
   - Try to exceed the maximum allowable value in all fields.
   - Try to go below the minimum allowable value in all fields.
   - Try to enter zero-length mandatory fields.
   - Try to enter an invalid time.
   - Try to edit the state field.
   - Try to enter values in the 'Ship to' fields when 'same as bill to' is selected.
   - Try to enter values for an address outside the US.
   - Try to search for a non-existing/invalid part ID.

3. Testing for the user interface issues of the screen should be done covering the following points:
   - As soon as the screen is opened, make sure that the cursor is positioned at the first enterable field.
   - The unrelated options should be disabled.
   - Alt+Down should list the entries of a combo box.
   - All the message boxes should have relevant and correct messages.
   - Make sure that buttons are enabled/disabled or displayed according to the screen functionality.
   - Make sure that control(s), caption(s), text, etc. are clearly visible and look fine.


   - Make sure that Alt+Tab works for switching between different open applications.
   - Make sure that the pop-up menu is context sensitive.
   - Cut, copy and paste with shortcut keys (like Ctrl+C, Ctrl+V, etc.) should work properly on every input screen as per Windows norms.
   - Pasting of any text into a date field should not be allowed.
   - The look and feel of all the controls, like text boxes, date boxes, etc., should be normal.
   - Check that a scroll bar appears when long text is entered in an editable control.
   - Make sure that the screens are invoked through all their available options.
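The sketch below (referenced from item 2 above) shows the invalid e-mail conditions as a small data-driven test; is_valid_email() is a hypothetical stand-in for the site's real validator:

import re

def is_valid_email(address):
    # Stand-in for the site's validator: require text@text.text, no spaces.
    return re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", address) is not None

# Invalid inputs drawn from the conditions in item 2 above.
invalid_addresses = [
    "userexample.com",   # no '@' in the address
    "",                  # zero-length mandatory field
    "user@host",         # no domain suffix (our extra assumption)
]
for address in invalid_addresses:
    assert not is_valid_email(address), "should be rejected: " + address
print("all invalid e-mail cases rejected")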

Testing Information

Environment:            Iteration #:
Start Date:             End Date:
Tester Name:            Status:


Sample Test Case for Search Functionality of the Page

Topic: Search

Functionality: Search

Reference(s): Nil.

The data set should cover normal, boundary and extreme cases of data for each field on the screen concerned.

1. The testing should be done for the following valid conditions at least once:
   - Make sure that the search option opens the appropriate page (the page containing the searched item) and that the cursor is positioned on the quantity field.
   - Search at least 40 items covering each group.
   - Try to search at least 20 items from different pages.
   - Try to search at least 10 items without login (as a casual user).

2. The testing should be done for the following invalid conditions:
   - Try to search for a non-existing/invalid part_id.
   - Try to search with an empty part_id.

Testing Information

Environment:            Iteration #:
Start Date:             End Date:
Tester Name:            Status:


GLOSSARY:

The Tester - The test engineer who tests manually, or the Test Supervisor who starts the automated test scripts.

The Test Bed - This comprises the hardware, software, test scripts, test plan, test cases, etc.

Module or Unit testing: It is the verification effort on the smallest unit of software design – the software component or module.

Integration testing: It is a systematic technique for constructing the program structure while at the same time conducting tests to uncover errors associated with interfacing.

System testing: It is a series of different tests whose primary purpose is to fully exercise the computer-based system to verify that system elements have been properly integrated and perform allocated functions.

Regression testing: It is the re-execution of some subset of tests that have already been conducted to ensure that changes have not propagated unintended side effects.

Acceptance testing: This test is conducted to enable the customer to validate all requirements.

Black & white box testing: White box testing is a test-case design method that uses the control structure of the procedural design to derive test cases that:
- exercise all independent paths within a module at least once;
- exercise all logical decisions on their true and false sides;
- execute all loops at their boundaries and within their operational bounds;
- exercise internal data structures to ensure their validity.

Black box testing derives sets of input conditions that will fully exercise all the functional requirements of a program.
