
Types of Testing


Have you ever wondered why software testing is categorized into different types? What are the benefits of dividing it this way? One of the main purposes of software testing is to identify defects in the software. A defect can be defined as a variance from a requirement or from user expectation. Based on this simple definition, it is easy to categorize defects. For example:

o If the system is not functioning properly, it is a functional defect.
o If the system is not performing well, it is a performance defect.
o If the system is not usable, it is a usability defect.
o If the system is not secure, it is a security defect, and so on.

Identifying these different defects requires different skill sets, different techniques and different types of test cases. Testing is divided into types to reflect what kind of defects each set of activities can uncover. This division also helps management run these activities effectively. It is also very rare to find someone skilled in every type of testing, so the division helps in staffing the team with the right resources.

Understanding this categorization, and the different types themselves, will enable you to spot more defects, which in turn improves quality and makes you a more effective software tester.

There are many ways in which software testing can be categorized. Some of them are described below.

Categorization of testing based on knowledge of the system:

o Black Box Testing
o White Box Testing
o Gray Box Testing

Categorization of testing based on when it is executed in the Software Development Life Cycle:

o Unit Testing
o Integration Testing
o System Testing
o User Acceptance Testing

Testing can also be categorized based on its purpose, which divides it further into Functional Testing and Non-Functional Testing.

Functional Testing


In functional testing, the focus of the testing activities is on the functional aspects of the system: test cases are written to check the expected output. Functional testing is normally performed in all test phases, from unit testing to system testing.

The following types of test activities are normally performed under functional testing:

o Installation Testing
o Regression Testing
o Upgrade and backward compatibility testing
o Accessibility testing
o Internationalization and localization testing
o API Testing

Non-Functional Testing

In non-functional testing, the focus of the testing activities is on non-functional aspects of the system, i.e. its behavior and the user experience it provides. Non-functional testing is normally carried out during the System Testing phase only.

o Performance, Load and Stress Testing
o Usability Testing
o Security Testing

Testing can also be categorized based on how it is executed. Execution can take the form of verification (static analysis) or validation (dynamic analysis). Verification and validation can each be categorized further according to how they are done.

Verification

In very simple terms, verification is the human examination or review of a work product. It ranges from informal to formal, can be applied in various phases of the SDLC, and can take the form of a formal inspection, a walkthrough or buddy-checking.

Validation

Validation, or dynamic analysis, is the activity you perform most frequently as a tester. Whether you are doing black box testing, non-functional testing or any other type of testing, chances are that you are performing validation. Validation is associated with execution, whether of individual test cases or of testing in general. There are many ways in which testing can be executed, for example:

o Manual Scripted Testing
o Exploratory Testing
o Automated Testing
o Model Based Testing

Black Box Testing

This is probably the type of testing most of us practice, and the one used most widely. It is also the type of testing closest to the customer's experience. In black box testing, the system is treated as a closed system, and the test engineer assumes nothing about how it was built. If you are executing black box test cases, one thing you need to make sure of is that you do not make any assumptions about the system based on your knowledge of it. Assumptions formed from system knowledge can harm the testing effort and increase the chances of missing critical test cases.

The only inputs for the test engineer in this type of testing are the requirements document and the behavior of the system learned by working with it. The purpose of black box testing is to:

o Make sure the system works in accordance with the system requirements.
o Make sure the system meets user expectations.

To make sure this purpose is met, various techniques can be used for selecting test data, such as:

o Boundary value analysis
o Equivalence partitioning
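As a sketch of these two techniques, consider a hypothetical system rule that accepts applicants aged 18 to 65 inclusive (the function name and the rule are invented for illustration):

```python
# Hypothetical system under test: accepts applicants aged 18 to 65 inclusive.
def is_eligible(age):
    return 18 <= age <= 65

# Equivalence partitioning: one representative value per partition.
partitions = {
    "below range": (10, False),
    "in range": (40, True),
    "above range": (80, False),
}

# Boundary value analysis: values on and immediately around each boundary.
boundaries = [(17, False), (18, True), (19, True),
              (64, True), (65, True), (66, False)]

for age, expected in list(partitions.values()) + boundaries:
    assert is_eligible(age) == expected, f"failed for age {age}"
```

Equivalence partitioning keeps the number of cases small, while the boundary values target the off-by-one mistakes that tend to cluster at the edges of a range.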

The activities within every testing type can be divided into verification and validation. Within black box testing, the following activities need verification techniques:

o Review of the requirements and functional specification
o Review of the test plan and test cases
o Review of the test data

Test case execution falls under validation.

White Box Testing

White box testing is very different in nature from black box testing. In black box testing, the focus of all activities is on the functionality of the system, not on what is happening inside it. The purpose of white box testing is to:

o Make sure the functionality is proper
o Gather information on code coverage

White box testing is primarily the development team's job, but test engineers have also started helping in this effort by contributing to unit test cases, generating data for unit tests, and so on.

White box testing can be performed at various levels, from unit testing up to system testing. The only distinction between black box and white box testing is system knowledge: in white box testing you select or execute test cases based on knowledge of the code and architecture of the system under test.

Even if you are executing an integration or system test, if you choose data specifically so that one particular code path is exercised, it falls under white box testing.

There are different types of coverage that can be targeted in white box testing:

o Statement coverage
o Function coverage
o Decision coverage
o Decision and statement coverage
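A small sketch shows how statement and decision coverage differ (the function is invented for illustration):

```python
def classify(x):
    result = "non-negative"
    if x < 0:
        result = "negative"
    return result

# A single test with x = -1 executes every statement (full statement
# coverage), but it only exercises the True outcome of the decision.
assert classify(-1) == "negative"

# Decision coverage additionally requires the False outcome of x < 0.
assert classify(1) == "non-negative"
```

The first call alone satisfies statement coverage, yet a bug on the untaken branch would go unnoticed; that is why decision coverage is the stronger target.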

Condition Testing


The first improvement to white box techniques is to ensure that Boolean controlling expressions are adequately tested, a process known as condition testing.

Condition testing ensures that a controlling expression has been adequately exercised while the software is under test, by constructing a constraint set for every expression and then ensuring that every member of the constraint set is included in the values presented to the expression. This may require additional test runs to be included in the test plan.

To introduce the concept of constraint sets, consider the simplest possible Boolean condition: a single Boolean variable, possibly negated. Such conditions take forms such as:


if DateValid then
while not DateValid loop

The constraint set for both of these expressions is {t, f}, which indicates that to adequately test them each should be tested twice, with DateValid taking the values True and False.

Perhaps the next simplest Boolean condition is a simple relational expression of the form value operator value, where the operator is one of: is equal to (=), is not equal to (/=), is greater than (>), is less than (<), is greater than or equal to (>=) and is less than or equal to (<=). Note that negating the single Boolean variable above had no effect on its constraint set, and that the six relational operators can be divided into three pairs of operators and their negations: = is the negation of /=, > is the negation of <=, and < is the negation of >=. Thus the constraint set for a relational expression can be expressed as {=, >, <}, which indicates that to adequately test a relational expression it must be tested three times: with the two values equal, with the first value less than the second, and with the first value greater than the second.
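As a minimal sketch, the three members of the constraint set {=, >, <} for a relational expression such as a <= b can be exercised like this (the function and the values are invented):

```python
# A relational controlling expression of the form "value operator value".
def condition(a, b):
    return a <= b

# Constraint set {=, >, <}: equal values, first greater, first less.
cases = {"=": (5, 5), ">": (7, 5), "<": (3, 5)}
outcomes = {name: condition(a, b) for name, (a, b) in cases.items()}

# The three runs exercise both outcomes of the expression and the boundary.
assert outcomes == {"=": True, ">": False, "<": True}
```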

More complex controlling expressions involve the Boolean operators and, or and xor, which combine two Boolean values. To construct a constraint set for a simple Boolean expression of the form BooleanValue operator BooleanValue, all possible combinations of True and False have to be considered, giving the constraint set {{t,t} {t,f} {f,t} {f,f}}. If both BooleanValues are simple or negated Boolean variables, then no further development of this set is required. However, if one or both of the BooleanValues are relational expressions, then the constraint set for the relational expression has to be combined with this set. The combination rests on noting that the equality condition is equivalent to true and that both inequality conditions are equivalent to false. Thus every true condition in the constraint set is replaced with '=', and every false condition is replaced twice, once with '>' and once with '<'.

Thus, if only the left-hand BooleanValue is a relational expression, the constraint set would be {{=,t} {=,f} {>,t} {<,t} {>,f} {<,f}}. And if both BooleanValues are relational expressions, this becomes {{=,=} {=,>} {=,<} {>,=} {<,=} {>,>} {>,<} {<,>} {<,<}}.

An increase in the complexity of the Boolean expression, by the addition of more operators, introduces implicit or explicit bracketing of the order of evaluation, which is reflected in the constraint set and increases the number of terms in it. For example, a Boolean expression of the form:

BooleanValue1 operator1 BooleanValue2 operator2 BooleanValue3

has the implicit bracketing:

(BooleanValue1 operator1 BooleanValue2) operator2 BooleanValue3

The constraint set for the complete expression would be {{e1,t} {e1,f}}, where e1 is the constraint set of the bracketed sub-expression; expanding e1 gives {{t,t,t} {t,f,t} {f,t,t} {f,f,t} {t,t,f} {t,f,f} {f,t,f} {f,f,f}}. If any of the BooleanValues are themselves relational expressions, this increases the number of terms further. In this example the worst case, with all three values being relational expressions, produces a total of 27 terms in the constraint set, implying that 27 tests are required to adequately test the expression. As the number of Boolean operators increases, the number of terms in a constraint set increases exponentially, and comprehensive testing of the expression becomes more complicated and less likely. It is this consideration which led to the advice, initially given in Section 1, to keep Boolean controlling expressions as simple as possible; one way to do this is to use Boolean variables, rather than expressions, within such control conditions.
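The counting rule above can be checked mechanically: each true term contributes one relational constraint (=) and each false term contributes two (> and <), so n relational operands yield 3^n terms. A small sketch, with invented names:

```python
from itertools import product

# 't' stands for a true operand, 'f' for a false one. A relational
# operand replaces 't' with '=' and 'f' with each of '>' and '<'.
def expand(term):
    return ('=',) if term == 't' else ('>', '<')

def constraint_set(n_operands):
    """All constraint terms when n relational operands are combined
    by Boolean operators: Boolean combinations first, then expansion."""
    result = []
    for combo in product('tf', repeat=n_operands):
        result.extend(product(*(expand(term) for term in combo)))
    return result

# Two relational operands: the 4 Boolean combinations expand to 9 terms.
assert len(constraint_set(2)) == 9
# Three relational operands: the worst case of 27 terms quoted above.
assert len(constraint_set(3)) == 27
```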

Data Lifecycle Testing

Keeping controlling expressions simple can be argued simply to move the complexity from the control expressions into other parts of the subprogram, and an effective testing strategy should recognise this and account for it. One approach to this consideration is known as data lifecycle testing. It is based on the observation that a variable is at some stage created, may subsequently have its value changed or used in a controlling expression several times, and is eventually destroyed. If only locally declared Boolean variables used in control conditions are considered, then an examination of the source code will indicate where the variable is created, where it is given a value, where its value is used as part of a control expression, and where it is destroyed.

This approach to testing requires all feasible lifecycles of the variable to be covered while the module is under test. In the case of a Boolean variable, this should include the possibility of the variable being given the values True and False at each place where it is assigned. A typical outline sketch of a possible lifecycle of a controlling Boolean variable within a subprogram might be as follows:


~~~ SomeSubProgram( ~~~ ) is
   ControlVar : BOOLEAN := FALSE;
begin -- SomeSubProgram
   ~~~
   while not ControlVar loop
      ~~~
      ControlVar := SomeExpression;
   end loop;
   ~~~
end SomeSubProgram;

In this sketch, ~~~ indicates the parts of the subprogram which are not relevant to the lifecycle. In this example there are two places where the variable ControlVar is given a value: the declaration where it is created, and the assignment within the loop. Additionally, there is one place where it is used in a control expression. There are two possible lifecycles to consider. The first can be characterised as { f, t }, indicating that the variable is created with the value False and given the value True on the first iteration of the loop. The other can be characterised as { f, f, ..., t }, which differs from the first in that the variable is given the value False on the first iteration, possibly followed by further iterations in which it is again given the value False, before being given the value True on the last iteration.

Other conceivable lifecycles, such as { f } or { f, t, f, ..., t }, can be shown from a consideration of the source code not to be possible. The first is impossible because the initial value False ensures that the loop iterates, so the variable experiences at least two values during its lifecycle. The second is impossible because as soon as the variable is given the value True within the loop, the loop, and subsequently the subprogram, terminates, causing the variable to be destroyed.
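The Ada sketch above can be mirrored in a few lines to make the two feasible lifecycles concrete (the function and its argument are invented; the argument stands in for the successive results of SomeExpression and must end with True):

```python
def some_subprogram(expression_values):
    """Mirror of the Ada sketch: a Boolean control variable is created
    False, reassigned on each iteration, and destroyed when the loop
    (and the subprogram) exits. Returns the recorded lifecycle."""
    lifecycle = []
    control_var = False              # created with the value False
    lifecycle.append(control_var)
    values = iter(expression_values)
    while not control_var:
        control_var = next(values)   # reassigned inside the loop
        lifecycle.append(control_var)
    return lifecycle                 # destroyed on subprogram exit

# Lifecycle { f, t }: True on the first iteration.
assert some_subprogram([True]) == [False, True]

# Lifecycle { f, f, ..., t }: False assignments before the final True.
assert some_subprogram([False, False, True]) == [False, False, False, True]
```

Covering both assertions exercises every feasible lifecycle of the control variable, which is exactly what the technique asks for.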

This quick look only indicates the basis of the variable lifecycle approach to white box testing. It should also indicate that this is one of the most laborious, and thus most expensive and difficult, techniques to apply. As it is expensive and does not add a great deal to the testing considerations which have already been discussed, it is not widely used.


Loop Testing

The final white box consideration to be introduced is the testing of loops, which have been shown to be the most common cause of faults in subprograms. If a loop, definite or indefinite, is intended to iterate n times, then the test plan should include the following considerations and possible faults:

o That the loop might iterate zero times
o That the loop might iterate once
o That the loop might iterate twice
o That the loop might iterate several times
o That the loop might iterate n - 1 times
o That the loop might iterate n times
o That the loop might iterate n + 1 times
o That the loop might iterate infinitely

All feasible possibilities should be exercised while the software is under test. The last possibility, an infinite loop, is a very noticeable and common fault. All loops should be constructed in such a way that they are guaranteed to terminate at some stage. However, that does not guarantee that they terminate after the correct number of iterations; a loop which iterates one time too many or one time too few is probably the most common loop fault. Of these possibilities, an additional iteration may, with luck, cause a CONSTRAINT_ERROR exception to be raised, announcing its presence. Otherwise, the n - 1 and n + 1 loop faults can be very difficult to detect and correct.
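A sketch of such a loop-boundary test plan for a loop intended to iterate n times (the function and data are invented; in Python the extra iteration here raises IndexError rather than Ada's CONSTRAINT_ERROR, which at least announces the fault):

```python
def sum_first(items, n):
    """Intended to sum the first n items; the loop iterates n times."""
    total = 0
    for i in range(n):
        total += items[i]
    return total

items = list(range(1, 11))   # 1..10, so n = 10 in this test
n = len(items)

# Exercise the iteration counts named above: zero, one, two,
# several, n - 1 and n.
for count in (0, 1, 2, 5, n - 1, n):
    assert sum_first(items, count) == sum(items[:count])

# n + 1 iterations: the out-of-range access announces the fault.
try:
    sum_first(items, n + 1)
    assert False, "expected the extra iteration to fail"
except IndexError:
    pass
```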

A loop executing zero times may be part of the design, in which case it should be explicitly tested that it does so when required and does not when not required. Otherwise, if a loop that should never execute zero times does so, this too can be a very subtle fault to locate. The additional considerations, once, twice and several times, are included to increase confidence that the loop is operating correctly.

The next consideration is the testing of nested loops. One approach is to combine the test considerations of the innermost loop with those of the outer loops. As there are 7 considerations for a simple loop, this gives 49 considerations for two levels of nesting and 343 for a triple-nested loop, which is clearly not a feasible proposition.

What is possible is to start by testing the innermost loop, with all other loops set to iterate the minimum number of times. Once the innermost loop has been tested, it should be configured to iterate the minimum number of times and the next loop out tested. Testing of nested loops can continue in this manner, effectively testing each nested loop in sequence rather than in combination, so that the number of required tests increases arithmetically (7, 14, 21, ...) rather than geometrically (7, 49, 343, ...).
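The arithmetic-versus-geometric growth described above is easy to check:

```python
PER_LOOP = 7  # considerations per simple loop, as listed above

def combined_tests(levels):
    """Testing nested loops in combination multiplies the cases."""
    return PER_LOOP ** levels

def sequential_tests(levels):
    """Testing each nested loop in sequence only adds them."""
    return PER_LOOP * levels

assert [combined_tests(d) for d in (1, 2, 3)] == [7, 49, 343]
assert [sequential_tests(d) for d in (1, 2, 3)] == [7, 14, 21]
```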


User Acceptance Testing

Acceptance testing is the formal testing conducted to determine whether a software system satisfies its acceptance criteria, and to enable the buyer to determine whether or not to accept the system.

Acceptance testing is designed to determine whether the software is fit for use. Apart from the functionality of the application, other factors related to the business environment also play an important role.

User acceptance testing is different from system testing. System testing is invariably performed by the development team, which includes developers and testers. User acceptance testing, on the other hand, should be carried out by end users. It can take the form of:

o Alpha Testing - tests conducted at the development site by end users. The environment can be controlled to some extent in this case.

o Beta Testing - tests conducted at the customer site, where the development team has no control over the test environment.

In both cases, the testing may be assisted by software testers.

It is very important to define acceptance criteria with the buyer during the various phases of the SDLC. A well-defined acceptance plan helps the development and QE teams by identifying the user's needs during software development. The acceptance test plan must be created or reviewed by the customer. The development team and the customer should work together to:

o Identify interim and final products for acceptance, along with acceptance criteria and a schedule.
o Plan how, and by whom, each acceptance activity will be performed.
o Schedule adequate time for the buyer's staff to examine and review the product.
o Prepare the acceptance plan.
o Perform formal acceptance testing at delivery.
o Make a decision based on the results of acceptance testing.

Entry Criteria

o System testing is complete and the defects identified are either fixed or documented.
o The acceptance plan is prepared and resources have been identified.
o A test environment for acceptance testing is available.

Exit Criteria

o An acceptance decision has been made for the software.
o In case of any caveats, the development team has been notified.


Gray Box Testing

Gray box testing is a combination of black box and white box testing. Its intention is to find defects rooted in bad design or bad implementation of the system.

In gray box testing, the test engineer is equipped with knowledge of the system and designs test cases or test data based on that knowledge.

For example, consider a hypothetical case in which you have to test a web application. Its functionality is very simple: you enter personal details such as your email address and field of interest into a web form and submit it. The server receives the details and, based on the field of interest, picks some articles and mails them to the given email address. Email validation happens on the client side, using JavaScript.

In this case, in the absence of implementation details, you might test the web form with valid and invalid email IDs and different fields of interest to make sure the functionality is intact.

But if you know the implementation details, you know that the system makes the following assumptions:

o The server will never receive an invalid email ID.
o The server will never send mail to an invalid ID.
o The server will never receive a failure notification for this mail.

So as part of gray box testing, in the above example you would have a test case for clients on which JavaScript is disabled. This can happen for any number of reasons, and when it does, validation cannot take place on the client side. The assumptions made by the system are then violated, and:

o The server will receive an invalid email ID.
o The server will send mail to the invalid ID.
o The server will receive a failure notification.

This should give you the concept of gray box testing, and show how the implementation details of a system can be used to create different test cases or data points.
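A minimal sketch of the server-side check this gray box test is probing for. The handler, the field names and the validation rule are all invented for illustration; a real application would use a proper validation library:

```python
import re

# Deliberately simple illustrative email pattern, not RFC-complete.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def handle_submission(email, field_of_interest):
    """Hypothetical server-side form handler."""
    if not EMAIL_RE.match(email):
        # Defensive server-side check; the flawed design above assumed
        # client-side JavaScript had already rejected bad addresses.
        return {"status": "rejected", "reason": "invalid email"}
    return {"status": "queued", "to": email, "topic": field_of_interest}

# Black box case: valid input behaves as specified.
assert handle_submission("user@example.com", "testing")["status"] == "queued"

# Gray box case: with client-side JavaScript disabled, an invalid
# address reaches the server, so the server must reject it itself.
assert handle_submission("not-an-email", "testing")["status"] == "rejected"
```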

Do you have a better example? Share with us.

Unit Testing

I have heard some people say that unit testing is done primarily by developers, and that test engineers need not know about it. That is not the case: unit testing is as important for test engineers as it is for developers.

Developers will probably write the unit test cases, but your understanding of the framework and of unit testing can certainly help them design the flow of the unit test cases, generate test data and produce good reports. All of these activities will ultimately help you in the long run, as product quality will increase significantly because of the effort you put into unit testing.

So if you are a test engineer who thinks unit testing is also worth learning about, read on.

Unit testing is the method of making sure that the smallest unit of your software works properly in isolation. You can define this smallest unit in your own terms; it could be a Java class or a library. A more formal definition, and one of the earliest papers on unit testing, by Kent Beck, can be found here.

Kent Beck wrote the JUnit framework along with Erich Gamma, and it became extremely useful to the Java community. JUnit made it very easy for developers to write, organize and execute unit test cases for their Java code. The success of JUnit inspired people working in other languages to come up with unit testing frameworks for the languages of their choice. There are now probably more than 25 unit testing frameworks based on the xUnit architecture, covering all the major languages. Information on the available unit test frameworks is present here.
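As a sketch of the xUnit style, here is a minimal example using Python's built-in unittest module, itself an xUnit-family framework (the function under test is invented):

```python
import unittest

# The smallest "unit" here: one function tested in isolation.
def word_count(text):
    return len(text.split())

class WordCountTest(unittest.TestCase):
    def test_empty_string(self):
        self.assertEqual(word_count(""), 0)

    def test_multiple_words(self):
        self.assertEqual(word_count("unit testing matters"), 3)

# Load and run the test case programmatically, as an xUnit runner would.
result = unittest.TextTestRunner().run(
    unittest.defaultTestLoader.loadTestsFromTestCase(WordCountTest))
assert result.wasSuccessful()
```

The same pattern (a TestCase class, test methods, assertions, a runner) is what JUnit popularized for Java and what the other xUnit frameworks reproduce in their own languages.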

Information about JUnit and JTestCase is also present here, because these are the most widely used unit testing frameworks. If you want me to add more information about any particular unit testing framework, contact us.


Regression Testing

As someone once said, change is the only constant in this world. This holds true for software as well: existing software is either being changed or being removed from the shelves. The changes can be for any reason: there may be critical defects that need to be fixed, or enhancements that have to happen in order to remain competitive.

Regression testing is done to ensure that enhancements, defect fixes or any other changes made to the software have not broken any existing functionality.

Regression testing is very important because most organizations now use iterative development. In iterative development, shorter cycles are used, with some functionality added in every cycle. In this scenario, it makes sense to run regression testing in every cycle, to make sure the new features have not broken any existing functionality.

Whenever there are changes in the system, regression testing is performed. It can cover the complete product or be selective. Normally, a full regression cycle is executed at the end of the testing cycle, and partial regression cycles are executed between test cycles. During a regression cycle it becomes very important to select the proper test cases to get the maximum benefit. Test cases for regression testing should be selected based on knowledge of:

o What defect fixes, enhancements or other changes have gone into the system?
o What is the impact of these changes on other aspects of the system?
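A sketch of such impact-based selection. The module names, test case IDs and the coverage mapping are invented; a real impact analysis would derive the mapping from coverage data or a traceability matrix:

```python
# Hypothetical mapping from modules to the test cases that cover them.
coverage_map = {
    "login": ["TC-01", "TC-02"],
    "payment": ["TC-03", "TC-04", "TC-05"],
    "reports": ["TC-06"],
}

def select_regression_cases(changed_modules):
    """Pick only the test cases impacted by the change set."""
    selected = []
    for module in changed_modules:
        for case in coverage_map.get(module, []):
            if case not in selected:
                selected.append(case)
    return selected

# A fix in the payment module selects its cases, not the full suite.
assert select_regression_cases(["payment"]) == ["TC-03", "TC-04", "TC-05"]
```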

The focus of regression testing is always on the impact of changes to the system. In most organizations, the priority of a regression defect is very high, and the product's exit criteria normally require zero regression defects.

Regression testing should always happen after sanity or smoke testing. Sanity or smoke testing can be defined as the type of testing which makes sure the software is in a testable state. Normally, a sanity test suite contains very basic, core test cases. These test cases decide the quality of the build, and any failure in the sanity suite should result in the build being rejected by the test team.

Regression testing is a continuous process, and it happens after every release. Test cases are added to the regression suite after every release and are executed repeatedly for every release.

Because the test cases of a regression suite are executed for every release, they are perfect candidates for automation. Test automation is discussed separately in the Test Automation section of this website.

Integration Testing

The objective of integration testing is to make sure that the interaction of two or more components produces results that satisfy the functional requirements. In integration testing, test cases are developed with the express purpose of exercising the interfaces between components.

Integration testing can also be treated as testing the assumptions of fellow programmers. During the coding phase, lots of assumptions are made about how data will be received from, and passed to, other components. These assumptions are not tested during unit testing; it is a purpose of integration testing to make sure they are valid. There are many reasons for integration to go wrong, for example:

o Interface misuse - a calling component calls another component and makes an error in its use of the interface, for example by passing parameters in the wrong sequence.

o Interface misunderstanding - a calling component makes incorrect assumptions about the behavior of the component it calls.
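A minimal sketch of interface misuse, with invented component names: one component expects (amount, account_id), and a caller swaps the arguments. An integration test exercising the interface catches the mistake:

```python
def transfer(amount, account_id):
    """Hypothetical component interface: (amount, account_id)."""
    if not isinstance(account_id, str):
        raise TypeError("account_id must be a string")
    return {"account": account_id, "amount": amount}

def checkout_correct(account_id, amount):
    # Calls the interface with arguments in the documented order.
    return transfer(amount, account_id)

def checkout_misuse(account_id, amount):
    # Interface misuse: parameters passed in the wrong sequence.
    return transfer(account_id, amount)

# The integration test exercises the interface between the components.
assert checkout_correct("ACC-1", 100) == {"account": "ACC-1", "amount": 100}
try:
    checkout_misuse("ACC-1", 100)
    assert False, "expected the swapped parameters to be rejected"
except TypeError:
    pass
```

Note that a unit test of transfer alone would never reveal the misuse; only exercising the caller and callee together does.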

Integration testing can be performed in several different ways, based on where you start testing and in which direction you progress:

o Big Bang Integration Testing
o Top Down Integration Testing
o Bottom Up Integration Testing
o Hybrid Integration Testing

Top-down testing can proceed in a depth-first or a breadth-first manner. For depth-first integration, each module is tested in increasing detail, replacing more and more levels of detail with actual code rather than stubs. Breadth-first integration instead proceeds by refining all the modules at the same level of control throughout the application. In practice, a combination of the two techniques is used.

Entry Criteria

The main entry criterion for integration testing is the completion of unit testing. If individual units have not been properly tested for their functionality, integration testing should not be started.

Exit Criteria

Integration testing is complete when all the interfaces where components interact with each other have been covered. It is important to cover negative cases as well, because components may make assumptions about the data they receive.

System Testing

System testing is probably the most important phase of the complete testing cycle. It starts after the completion of the earlier phases: unit, component and integration testing. Non-functional testing also comes into the picture during this phase: performance, load, stress and scalability testing are all performed here.

By definition, system testing is conducted on the complete, integrated system in a replicated production environment. System testing evaluates the system's compliance with both its functional and its non-functional requirements.

It is very important to understand that not many new test cases are written for system testing. Test cases for system testing are derived from the architecture and design of the system, from the input of end users and from user stories. It does not make sense to exercise exhaustive testing in this phase, as most functional defects should have been caught and corrected during the earlier phases.

Utmost care is exercised over defects uncovered during the system testing phase, and proper impact analysis should be done before fixing a defect. Sometimes, if the business permits, defects are simply documented and mentioned as known limitations instead of being fixed.

Progress through system testing also instills and builds confidence in the product teams, as this is the first phase in which the product is tested in a production-like environment.

The system testing phase also prepares the team for more user-centric testing, i.e. user acceptance testing.


Entry Criteria

o Unit, component and integration tests are complete.
o Defects identified during those test phases have been resolved and closed by the QE team.
o Teams have sufficient tools and resources to mimic the production environment.

Exit Criteria

o Test case execution reports show that functional and non-functional requirements are met.
o Defects found during system testing are either fixed after thorough impact analysis or documented as known limitations.

User Acceptance Testing Acceptance testing is the formal testing conducted to determine whether a software system satisfies its acceptance criteria, and to enable the buyer to decide whether or not to accept the system.

Acceptance testing is designed to determine whether the software is fit for use. Apart from the functionality of the application, other factors related to the business environment also play an important role.

User acceptance testing differs from system testing. System testing is invariably performed by the development team, which includes developers and testers. User acceptance testing, on the other hand, should be carried out by end users. It can take the form of

Alpha Testing - tests are conducted at the development site by end users. The environment can be controlled to some extent in this case.

Beta Testing - tests are conducted at the customer's site, and the development team has no control over the test environment.

In both cases, the testing might be assisted by software testers.

It is very important to define acceptance criteria with the buyer during the various phases of the SDLC. A well-defined acceptance plan helps the development and QE teams by identifying the user's needs during software development. The acceptance test plan must be created or reviewed by the customer. The development team and the customer should work together to:

- Identify interim and final products for acceptance, the acceptance criteria and the schedule.
- Plan how, and by whom, each acceptance activity will be performed.
- Schedule adequate time for the buyer's staff to examine and review the product.
- Prepare the acceptance plan.
- Perform formal acceptance testing at delivery.
- Make a decision based on the results of acceptance testing.


Entry Criteria
- System testing is complete, and the defects identified are either fixed or documented.
- The acceptance plan is prepared and resources have been identified.
- The test environment for acceptance testing is available.

Exit Criteria
- The acceptance decision is made for the software.
- In case of any caveats, the development team is notified.

Regression Testing As someone once said, change is the only constant in this world. This holds true for software as well: existing software is either being changed or removed from the shelves. These changes can happen for any reason: there may be critical defects that need to be fixed, or enhancements that must be made in order to remain competitive.

Regression testing is done to ensure that enhancements, defect fixes or any other changes made to the software have not broken any existing functionality.

Regression testing is very important because most teams these days use iterative development. In iterative development, shorter cycles are used, with some functionality added in every cycle. In this scenario, it makes sense to run regression testing in every cycle to make sure that new features do not break existing functionality.

Whenever there are changes in the system, regression testing is performed. It can cover the complete product or be selective. Normally, a full regression cycle is executed at the end of the testing cycle, and partial regression cycles are executed between test cycles. During a regression cycle it becomes very important to select the proper test cases to get the maximum benefit. Test cases for regression testing should be selected based on knowledge of:

- What defect fixes, enhancements or other changes have gone into the system?
- What is the impact of these changes on other aspects of the system?

The focus of regression testing is always on the impact of changes to the system. In most organizations, the priority of a regression defect is very high, and the product's exit criteria normally require zero regression defects.
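As a hedged illustration, change-based test selection can be sketched as a lookup from the changed modules, through an impact map, to the test cases that exercise the affected areas. All module names, mappings and test names below are hypothetical:

```python
# Sketch of change-impact-based regression test selection.
# The module names, impact map and test index are illustrative only.

def select_regression_tests(changed_modules, impact_map, test_index):
    """Return the test cases covering the changed modules and
    every module they are known to impact."""
    affected = set(changed_modules)
    for module in changed_modules:
        affected.update(impact_map.get(module, []))
    selected = set()
    for module in affected:
        selected.update(test_index.get(module, []))
    return selected

# Hypothetical system knowledge:
impact_map = {"billing": ["invoicing", "reports"]}  # billing changes ripple here
test_index = {
    "billing":   ["test_billing_totals"],
    "invoicing": ["test_invoice_layout"],
    "reports":   ["test_monthly_report"],
    "login":     ["test_login_flow"],
}

tests = select_regression_tests(["billing"], impact_map, test_index)
# Billing, invoicing and reports tests are selected; login tests are skipped.
```

Real test-selection tools derive the impact map from code dependencies or coverage data rather than maintaining it by hand, but the selection logic follows this shape.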


Regression testing should always happen after sanity or smoke testing. Sanity/smoke testing can be defined as the type of testing that makes sure the software is in a testable state. Normally, a sanity test suite contains very basic, core test cases. These test cases decide the quality of the build, and any failure in the sanity suite should result in rejection of the build by the test team.
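The build-gating role of a sanity suite can be sketched as follows. This is a minimal illustration, and the individual checks are placeholders for whatever "basic and core" means for a given product:

```python
# Sketch of a sanity/smoke gate: run the core checks and
# accept or reject the build. The checks here are placeholders.

def check_app_starts():
    return True  # e.g. the service comes up and answers a health ping

def check_login_works():
    return True  # e.g. a known user can authenticate

SANITY_SUITE = [check_app_starts, check_login_works]

def build_is_testable(suite=SANITY_SUITE):
    """Any single failure in the sanity suite rejects the build."""
    return all(check() for check in suite)

verdict = "accept build" if build_is_testable() else "reject build"
```

The key design point is the all-or-nothing verdict: a sanity suite is not meant to measure quality in detail, only to decide whether deeper testing is worth starting.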

Regression testing is a continuous process, and it happens after every release. Test cases are added to the regression suite after every release and are executed repeatedly for each release.

Because the test cases of the regression suite are executed for every release, they are perfect candidates for automation. Test automation is discussed separately in the Test Automation section of this website.

Security Testing

Security testing is very important in today's world because of the way computers and the Internet have affected individuals and organizations. Today, it is very difficult to imagine the world without the Internet and modern communication systems, which increase the efficiency of individuals and organizations many times over.

Since everyone, from individuals to organizations, uses the Internet or communication systems to pass information, do business and transfer money, it is critical for service providers to make sure that information and networks are secured against intruders.

The primary purpose of security testing is to identify vulnerabilities and subsequently repair them. Typically, security testing is conducted after the system has been developed, installed and made operational. Unlike other types of testing, network security testing is performed on a periodic basis to make sure that all vulnerabilities in the system are identified.

Network security testing can be further classified into the following types:

- Network Scanning
- Vulnerability Scanning
- Password Cracking
- Log Review
- File Integrity Checkers
- Virus Detection
- War Dialing
- Penetration Testing
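Of the techniques listed, a file integrity checker is easy to illustrate: record a cryptographic baseline of each monitored file, then flag any file whose digest later deviates from it. A minimal sketch using Python's standard hashlib follows; the file paths and contents are made up for illustration, and the files are modeled as an in-memory dict rather than read from disk:

```python
# Minimal file-integrity-checker sketch: baseline SHA-256 digests,
# then report files whose current digest no longer matches.
import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def baseline(files: dict) -> dict:
    """files maps path -> contents; returns path -> digest."""
    return {path: digest(data) for path, data in files.items()}

def changed_files(files: dict, known: dict) -> list:
    """Return paths whose current digest differs from the baseline."""
    return [path for path, data in files.items()
            if digest(data) != known.get(path)]

# Illustrative contents, not real system files:
files = {"/etc/passwd": b"root:x:0:0", "/bin/ls": b"ELF..."}
known = baseline(files)

files["/bin/ls"] = b"ELF...tampered"  # an intruder modifies a binary
# changed_files(files, known) now reports "/bin/ls"
```

Production tools such as Tripwire or AIDE work on this same compare-against-baseline principle, with the added problem of protecting the baseline database itself from tampering.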


None of these tests alone provides a complete picture of network security; you will need to perform a combination of these techniques to ascertain the state of your network security.

Apart from network security testing, you should also take care of application security testing. Intruders can target specific applications for unauthorized access or other malicious purposes. This becomes even more critical for web applications because of their visibility and accessibility through the Internet. Web application security testing is covered in a different section.
