

Manual Testing

Quality: Quality is defined as satisfying all the client requirements of a project/product and delivering the application on time, without any defects.

Quality Assurance: Quality Assurance is the process of verifying that all the standards and processes of a company are followed so that the right product is delivered to the client. It is process-oriented and aims to prevent defects.

Quality Control: Quality Control is the process of verifying the product itself against the client requirements in order to detect defects. It is product-oriented, whereas Quality Assurance is process-oriented.

Software: Software is a set of programs, logic, and related data that gives instructions to the system on what to do and how to do it. (Or) A set of executable programs is called software. We have 2 types of software: 1. Product 2. Project

1. Product: A product is developed for a segment of customers. There is no end to the development and testing activities for a product, because the product is released into the market version by version. E.g.: Operating systems, MS Office, Photoshop, processors, etc.

2. Project: A project is developed for specific customers only. There is an end to the development and testing activities for a project, because once the client requirements are satisfied, we stop the development and testing activities. E.g.: A manufacturing application, a hospital management application, etc.

In each and every project/product development we have 2 different teams: 1. Development Team 2. Testing Team

1. Development Team: The responsibility of the Development Team is to develop the application (project/product) according to the client requirements.
2. Testing Team: The responsibility of the Testing Team is to test the developed application (project/product) using different testing types & techniques, according to the client requirements.

Testing: Testing is verification & validation performed to ensure that a defect-free application/product is delivered to the client. (Or) Software Testing: Testing is a process of executing a program with the intent of finding errors. (Or) Performing testing on a software application or product is called Software Testing.

We have 2 different types of testing under Software Testing: 1. Manual Testing 2. Automation Testing


1. Manual Testing: Performing testing on the application/product with human interaction is called Manual Testing.

2. Automation Testing: Performing testing on the application/product with the help of third-party tools like QTP, Selenium, LoadRunner, etc. is called Automation Testing.

In both Manual Testing & Automation Testing we perform the same testing on the application; only the way the testing is performed differs.

Why is testing required in software? (Or) What is meant by quality software?
1. To maintain the quality of the application.
2. To identify defects and fix them before the application is released, so the application reaches the clients without any defects.
3. To meet the client requirements (functionality).
4. To meet the client expectations (performance, usability, compatibility, security, etc.).
5. To release on time.

What are the testing objectives?
1. To find out the difference between the customer's expected values & the actual values.
2. To find the errors that were not identified by the development team while developing the application.
3. To check whether the application is working according to the company standards or not.

Client: A client is a person who provides the requirements to the company for developing their business application.

Company: A company is an organization that develops the application according to the client requirements.

End User: An end user is a person who uses the application in its final stage. E.g.: Infosys has developed an online application for SBI Bank. Here SBI Bank is the client of Infosys, and the end users are the customers of SBI Bank.

Difference between defect, bug & error?
Defect: While executing the test cases, if we find any issue, that issue is called a defect.

Bug: Once the developer accepts our defect, it is called a bug.

Error: An error may be a program error or a syntax error.

Software Bidding
A proposal to develop the software is called software bidding.

Software Development Life Cycle (SDLC)
SDLC is the process we follow to complete the software project/product development activities; it includes both development & testing activities.


1. Requirements:

Requirements is the first phase of the SDLC. Once the project has been confirmed between the client & the company, the client provides the requirements to the company. The client always provides the requirements from their business perspective. In this phase a BA (Business Analyst) is involved in collecting the requirements from the client. To collect the requirements, the BA follows the approaches below:

1. Questionnaire 2. KT (Knowledge Transfer) 3. Walkthrough 4. Inspection

1. Questionnaire: Using this approach, the BA collects the requirements from the client by asking the client questions.

2. KT (Knowledge Transfer): In this approach, the client provides KT sessions to the BA to help the BA understand the requirements.

3. Walkthrough: In this approach, the BA goes through the requirement documents provided by the client and understands the requirements.

4. Inspection: In this approach, the BA collects the requirements by inspecting the client's business location directly.

2. Analysis: In the Analysis phase, the BA analyzes the requirements and converts the requirement documents into an understandable format called the Use Case/BRS doc.

BRS doc: The BRS doc is divided into 2 docs: 1. SRS 2. FRS

SRS doc: The SRS doc contains details about the software & hardware requirements.
FRS doc: The FRS doc contains details about the functionality of the project.
Use Case doc: The use case doc is in Word document format. One use case doc contains one flow of requirements.

3. Design: In the Design phase, system architects design the architecture of the application.

There are 2 types of designs:
1. HLD (High Level Design): It defines the overall architecture of the application, including all the modules in the application.


2. LLD (Low Level Design): It defines the architecture of the individual modules, including all the sub-modules & screens of the application. * Most projects use UML for designing the architecture of the application.

4. Coding: In the Coding phase, the development team writes the code for the functionality of the individual modules. After all the individual modules are completed, the development team integrates all the modules and makes them a single application.

5. Testing: In the Testing phase, the testing team performs testing on the application based on the client requirements. While testing the application, the testing team executes the test cases using different types of testing & techniques.

6. Release/Maintenance/Production: In this phase, technical people deploy the application into the production environment.

Types of Testing
We have 2 types of testing: 1. Functional Testing 2. Non-Functional Testing

1. Functional Testing: Testing the application against the business requirements. Functional testing is done using the functional specifications provided by the client, or using the design specifications (like use cases) provided by the design team.

Unit Testing
In unit testing, the development team performs testing on the individual modules of the project using a set of white-box testing techniques. It is also called Module/Component Testing.
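As an illustrative sketch of what a unit test looks like in practice (the discount function and its business rule below are hypothetical, not taken from any real project), a developer might write:

```python
import unittest

def calculate_discount(price, percent):
    """Module under test: apply a percentage discount to a price."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class DiscountUnitTest(unittest.TestCase):
    def test_typical_discount(self):
        # White-box check of the normal calculation path.
        self.assertEqual(calculate_discount(200.0, 10), 180.0)

    def test_invalid_percent_rejected(self):
        # White-box check of the error path.
        with self.assertRaises(ValueError):
            calculate_discount(200.0, 150)

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(DiscountUnitTest))
```

Each test exercises one path through the single module in isolation, before any integration with other modules happens.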

Integration Testing (Top-Down, Bottom-Up Testing)
In integration testing, the development team & testing team perform testing on the application. During integration testing we verify whether the data flow between all the modules in the application is working or not, because every individual module may work fine on its own, yet after all the modules are integrated into a single application, that application may or may not work.

Approaches in Integration Testing:
1. Top-Down Approach
2. Bottom-Up Approach
3. Sandwich Approach
4. Big Bang Approach

1. Top-Down Approach
If all the main modules are developed but some of the sub-modules are not yet developed, the programmers need to create temporary programs called stubs, which act as the missing sub-modules.

2. Bottom-Up Approach
If all the sub-modules are developed but some of the main modules are not yet developed, the programmers need to create temporary programs called drivers, which act as the missing main modules.

3. Sandwich Approach
If some of the main modules & some of the sub-modules are not yet developed, the programmers need to create temporary programs called drivers & stubs, which act as the missing main modules & sub-modules.

4. Big Bang Approach
Whenever all the modules are developed, integrating all the modules and making them a single application is called the big bang approach.
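A minimal sketch of a stub and a driver, assuming a made-up checkout/tax example (none of these function names come from a real system): the stub stands in for a missing sub-module so the finished main module can be tested (top-down), and the driver is a throwaway caller that exercises a finished sub-module before its real parent exists (bottom-up).

```python
def tax_stub(amount):
    """Stub: temporary stand-in for the unfinished tax sub-module.
    It returns a fixed, known value so the main module can be tested."""
    return 10.0

def checkout_total(amount, tax_fn):
    """Main module under test; tax_fn is the (real or stubbed) sub-module."""
    return amount + tax_fn(amount)

def driver_for_tax(tax_fn):
    """Driver: throwaway caller that exercises the tax sub-module
    with a few inputs, standing in for the unfinished main module."""
    return [tax_fn(a) for a in (0, 100, 250)]

# Top-down: test the main module with the stubbed sub-module.
total = checkout_total(100.0, tax_stub)

# Bottom-up: test the sub-module through the driver.
tax_outputs = driver_for_tax(tax_stub)
```

Once the real sub-module or main module is ready, the stub or driver is thrown away and replaced by the real code.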

Types of Testing

Sanity Testing
After receiving the initial build, test engineers validate whether the major functionalities of the application are working or not. If the major functionalities of the application are working fine, then we can perform further testing of the application. If the major functionalities are not working fine, we can't move on to further testing, so we reject the build.

Smoke Testing
Validating the major functionality of the application by the development team in the development environment is called smoke testing.
Note: Definition-wise, sanity & smoke testing are different, but in practice they are treated the same.

Usability Testing/User Interface Testing/GUI Testing
Verifying the user-friendliness of the application in terms of colors, logos, fonts, alignments, etc. is called usability testing. It covers:
- Ease of use (understandable for users to operate)
- Look & feel (pleasant or attractive screens)
- Speed of interface (fewer events to complete a task)

Usability Test Scenarios:
- Web page content should be correct, without any spelling or grammatical errors.
- All fonts should be as per the requirements.
- All text should be properly aligned.
- All error messages should be correct, without any spelling or grammatical errors, and each error message should match its field label.
- Tooltip text should be there for every field.
- All fields should be properly aligned.
- Enough space should be provided between field labels, columns, rows, and error messages.
- All buttons should be in a standard format and size.
- A Home link should be there on every single page.
- Disabled fields should be grayed out.
- Check for broken links and images.
- A confirmation message should be displayed for any kind of update and delete operation.
- Check the site at different resolutions (640 x 480, 800 x 600, etc.).


- Check that the end user can run the system without frustration.
- Check that the Tab key works properly.
- The scroll bar should appear only if required.
- If there is an error message on submit, the information filled in by the user should still be there.
- A title should be displayed on each web page.
- All fields (textbox, dropdown, radio button, etc.) and buttons should be accessible by keyboard shortcuts, and the user should be able to perform all operations using the keyboard.
- Check that the dropdown data is not truncated due to the field size, and also check whether the data is hardcoded or managed via an administrator.

Functional Testing
Validating the overall functionality of the application, including the major functionality, with respect to the client's business requirements is called functional testing.

Functionality or requirements testing has the following coverage:
- Control flow or behavioral coverage (object properties checking).
- Input domain coverage (correctness of size and type of every I/O object).
- Error handling coverage (preventing negative navigation).
- Calculations coverage (correctness of output values).
- Backend coverage (data validation & data integrity of database tables).
- Service levels (order of functionality or services).
- Successful functionality (combination of all of the above).

Behavioral Testing/GUI Testing/Control Flow Testing
In control flow testing, we validate each & every object in the application to check whether the screens respond correctly or not while operating on those objects.

Input Domain Testing: In input domain testing, we use the boundary value analysis & equivalence class partitioning techniques to validate the size & type of each & every input object in the application.

Boundary Value analysis: Boundary values are used for testing the size and range of an object.


Equivalence Class Partitions: Equivalence classes are used for testing the type of the object.
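A small sketch of both techniques, assuming a hypothetical age field whose requirement is "accept integers from 18 to 60" (the field and range are made up for illustration): boundary value analysis picks values at and just beyond each limit, while equivalence class partitioning picks one representative per class.

```python
def boundary_values(lo, hi):
    """Boundary value analysis: at, just inside, and just outside each limit."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

def accepts_age(age):
    """Field under test: hypothetical age field that must accept 18..60."""
    return isinstance(age, int) and 18 <= age <= 60

# Boundary value analysis for the 18..60 range.
bva_results = {v: accepts_age(v) for v in boundary_values(18, 60)}

# Equivalence class partitions: one representative value per class.
valid_rep = 30          # valid class: any value in 18..60
invalid_low_rep = 5     # invalid class: below the range
invalid_high_rep = 99   # invalid class: above the range
invalid_type_rep = "thirty"  # invalid class: wrong type
```

The boundaries (17, 18, 19, 59, 60, 61) are where range defects most often hide, and one value per partition is enough because all members of a class should behave the same.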

Error Handling Testing
In error handling testing, we validate each & every screen in the application by giving invalid data to the objects, to check whether we get error messages or not when the user gives invalid data.

Calculation Coverage Testing
In calculation testing, we validate the calculation part of the application by giving valid data to the objects, to check whether we get the correct output values or not when the user gives valid data.

Functional Test Scenarios:

- Test that all the mandatory fields are validated.
- Test that the system does not display an error message for optional fields.
- Test that numeric fields do not accept alphabets and that a proper error message is displayed.
- Test for negative numbers, if they are allowed, in numeric fields.
- Test the max length of every field to ensure the data is not truncated.
- Test that a pop-up message ("This field is limited to 500 characters") is displayed if the data reaches the maximum size of the field.
- Test that a confirmation message is displayed for update and delete operations.
- Test that amount values are displayed in currency format.
- Test the timeout functionality.
- Test the functionality of the buttons available.
- Test that the Privacy Policy & FAQ are clearly defined and available for users.
- Test that if any functionality fails, the user is redirected to a custom error page.
- Test that all uploaded documents open properly.
- Test that the user is able to download the uploaded files.
- Test the email functionality of the system.
- Test that the JavaScript works properly in different browsers (IE, Firefox, Chrome, Safari and Opera).
- Test what happens if a user deletes cookies while on the site.
- Test what happens if a user deletes cookies after visiting a site.
- Test that all the data inside combo/list boxes is arranged in chronological order.
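The first few scenarios (mandatory fields, optional fields, numeric-only fields) can be sketched as a small validation check; the field names and error messages here are illustrative only, not from any real application.

```python
def validate_form(fields):
    """Functional check sketch: mandatory-field and numeric-field rules.
    'name' is mandatory; 'phone' is optional but must be numeric if given."""
    errors = {}
    if not fields.get("name", "").strip():
        errors["name"] = "Name is mandatory"
    phone = fields.get("phone", "")
    if phone and not phone.isdigit():
        errors["phone"] = "Phone must be numeric"
    return errors

# Valid submission: no errors expected for mandatory or optional fields.
ok = validate_form({"name": "Asha", "phone": "9876543210"})

# Invalid submission: missing mandatory field and alphabets in a numeric field.
bad = validate_form({"name": "", "phone": "98x"})
```

A functional test then asserts that the valid data produces no error messages and that each invalid field produces the proper one.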

Recovery Testing
In this testing we verify how much time the application takes to come back from an abnormal state to the normal state.

Examples of recovery testing:

1. While an application is running, suddenly restart the computer, and afterwards check the validity of the application's data integrity.


2. While an application is receiving data from a network, unplug the connecting cable. After some time, plug the cable back in and analyze the application's ability to continue receiving data from the point at which the network connection disappeared.

3. Restart the system while a browser has a definite number of sessions. Afterwards, check that the browser is able to recover all of them.

Compatibility Testing
In compatibility testing, we verify whether the application works in different browsers, different types of OS, different types of system software, etc.

Forward compatibility --> the application is ready to run, but the operating system does not support it.

Backward compatibility --> the operating system supports it, but the application has some internal coding problems that prevent it from running on the operating system.

Compatibility Test Scenarios:

- Test the website in different browsers (IE, Firefox, Chrome, Safari and Opera) and ensure the website is displayed properly.
- Test that the HTML version being used is compatible with the appropriate browser versions.
- Test that images display correctly in different browsers.
- Test that the fonts are usable in different browsers.
- Test that the JavaScript code is usable in different browsers.
- Test the animated GIFs across different browsers.

Tool for Compatibility Testing:
Spoon.net: Spoon.net provides access to thousands of applications (browsers) without any installs. This tool helps you test your application on different browsers on one single machine.

Configuration Testing
Validating the application on different system configurations (RAM, processor, HDD, etc.) is called configuration testing.
Note: Compatibility testing is suggested for projects; configuration testing is suggested for products.

Certification Testing: Certification testing is also related to compatibility testing; however, the product is certified as fit for use. Certification is applicable to hardware, operating systems, or browsers, e.g. hardware/laptops are certified for Windows 7, etc. (Or) We certify whether the product is compatible or not with the appropriate software/hardware devices.

Re-testing
Re-testing is testing the application again to check whether the bugs are fixed or not. Re-testing is done based on the failed test cases of the previous build. (Or) Re-execution of a test with multiple test data to validate a function; e.g., to validate multiplication, test engineers use different combinations of input in terms of min, max, -ve, +ve, zero, int, float, etc.
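The multiplication example above can be sketched as a data-driven re-test: the same function is re-executed against several input combinations (zero, negative, float, and a large "max-like" value); the data table here is illustrative.

```python
def multiply(a, b):
    """Function under re-test after a supposed bug fix."""
    return a * b

# Re-testing data: (input a, input b, expected result) combinations
# covering zero, negative, float, and large values, as the text describes.
test_data = [
    (0, 5, 0),               # zero
    (-3, 4, -12),            # negative
    (2.5, 4, 10.0),          # float
    (10**9, 2, 2 * 10**9),   # large "max-like" value
]

failures = [(a, b) for a, b, expected in test_data if multiply(a, b) != expected]
```

An empty `failures` list means the function passed re-testing for every data combination.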


Regression Testing: During this testing, the testing team performs regression testing on the modified build (application) to verify whether the fixed defect impacts the other functionality of the application or not. Regression testing is done based on the passed test cases of the previous build.

Any dependent modules may also cause side effects.

When should we do regression testing?
Regression testing is typically carried out in the following cases:
1) When new functionality is introduced, regression testing is conducted to find out whether the existing functionality is broken or not.
2) Whenever there are bug fixes, we need to do regression testing to verify that the other functionality is not affected by the bug fixes.
3) After code refactoring, regression testing needs to be carried out to make sure that no functionality has regressed due to the refactoring.
4) After merging branches or code, regression testing needs to be done, as during a merge there are more chances of functionality breakage.
Regression testing plays an important role in software testing.

Performance Testing
Testing how much load the server can take while running the current application, in terms of load, stress & volume testing. During performance testing we use 3 techniques.
Load Testing: Validating the performance of the application with the client's expected number of users is called load testing. It is testing an application with various numbers of concurrent users to verify the response time of the application as the number of users increases. Running the SUT under the customer-expected configuration and customer-expected load (number of users) to calculate the "speed in processing" is called load testing.

Stress Testing: Validating the performance of the application by increasing the number of users beyond the client's expectation, up to the maximum level, and identifying the breakpoint of the application. By giving different sets of load continuously for some time, we find out how stable the application is and where it crashes. Running the SUT under the customer-expected configuration and beyond it to check reliability is called stress testing.

Soak Testing/Endurance Testing
Running the SUT under the customer-expected configuration and customer-expected load continuously to identify memory leaks. It measures how long the application keeps working after a reasonable load is applied continuously (longevity/durability); this is called endurance/soak testing.

Volume Testing: Testing an application with various volumes of data to verify where the application breaks.
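A toy sketch of the load-testing idea, assuming a stand-in request handler instead of a real server call (a real load test would drive the actual application with a tool like JMeter): fire one request per simulated concurrent user and measure the total elapsed time.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(_):
    """Stand-in for a server call; sleeps briefly to simulate response time."""
    time.sleep(0.01)
    return "ok"

def load_test(concurrent_users):
    """Send one request per simulated concurrent user and time the whole run."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        responses = list(pool.map(handle_request, range(concurrent_users)))
    return time.perf_counter() - start, responses

# Customer-expected load of 20 concurrent users (an assumed number).
elapsed, responses = load_test(20)
```

Raising `concurrent_users` step by step until response times degrade or requests fail is, in miniature, what stress testing does to find the breakpoint.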


General test scenarios for performance testing:
- To determine the performance, stability, and scalability of an application under different load conditions.
- To determine whether the current architecture can support the application at peak user levels.
- To determine which configuration sizing provides the best performance level.
- To identify application and infrastructure bottlenecks.
- To determine whether the new version of the software has adversely impacted response time.
- To evaluate the product and/or hardware to determine whether it can handle the projected load volumes.

How do we do performance testing? By manual testing or by automation?
Practically it is not possible to do performance testing manually, because of drawbacks like:
- More resources would be required.
- Simultaneous actions are not possible.
- Proper system monitoring is not available.
- It is not easy to perform repetitive tasks.

Hence, to overcome the above problems, we should use a performance testing tool. Below is a list of some popular tools:
- Apache JMeter
- LoadRunner
- Borland SilkPerformer
- Rational Performance Tester
- WAPT
- NeoLoad

Security Testing
Validating the security of the application in terms of authentication & authorization.
Authentication: Verifying whether the system accepts a valid/right user or not.
Authorization: Verifying whether the system provides the right information to the right users or not.

Test Scenarios for Security Testing:

- Verify that web pages which contain important data (passwords, credit card numbers, secret answers to security questions, etc.) are submitted via HTTPS (SSL).
- Verify that important information like passwords and credit card numbers is displayed in encrypted format.
- Verify that password rules are implemented on all authentication pages (Registration, Forgot Password, Change Password).
- Verify that if the password is changed, the user is not able to log in with the old password.
- Verify that error messages do not display any important information.
- Verify that if the user is logged out of the system or the user session has expired, the user is not able to navigate the site.
- Verify access to secured and non-secured web pages directly, without login.
- Verify that the "View Source" option is disabled and not visible to the user.
- Verify that the user account gets locked out if the user enters the wrong password several times.
- Verify that cookies do not store passwords.


- Verify that if any functionality is not working, the system does not display any application, server, or database information; instead, it displays the custom error page.
- Verify against SQL injection attacks.
- Verify the user roles and their rights. For example, a requestor should not be able to access the admin page.
- Verify that important operations are written to log files and that the information is traceable.
- Verify that the session values in the address bar are in encrypted format.
- Verify that the cookie information is stored in encrypted format.
- Verify the application against brute-force attacks.
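The SQL injection scenario above can be demonstrated with a minimal sketch (the users table and credentials are made up): a login query built by string concatenation lets the classic `' OR '1'='1` input bypass the password check, while a parameterized query does not.

```python
import sqlite3

# Tiny in-memory database standing in for the application's back end.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('admin', 'secret')")

def login_unsafe(name, password):
    """Vulnerable: user input is concatenated straight into the SQL string."""
    query = ("SELECT * FROM users WHERE name = '%s' AND password = '%s'"
             % (name, password))
    return conn.execute(query).fetchone() is not None

def login_safe(name, password):
    """Parameterized query: the input cannot change the SQL structure."""
    return conn.execute(
        "SELECT * FROM users WHERE name = ? AND password = ?",
        (name, password)).fetchone() is not None

injection = "' OR '1'='1"
unsafe_result = login_unsafe("admin", injection)  # attack bypasses the password
safe_result = login_safe("admin", injection)      # attack is rejected
```

A security tester feeds such injection strings into every input field and verifies the application behaves like `login_safe`, never like `login_unsafe`.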

Database Testing
Validating the database of the application is called database testing. Whatever we perform in the front-end application should be reflected in the back-end database, and whatever we perform in the back-end database should be reflected in the front-end application.

To perform database testing, the tester should be aware of the points below:
- The tester should understand the functional requirements, business logic, application flow, and database design thoroughly.
- The tester should figure out the tables, triggers, stored procedures, views, and cursors used for the application.
- The tester should understand the logic of the triggers, stored procedures, views, and cursors created.
- The tester should figure out the tables that get affected when insert, update, and delete (DML) operations are performed through the web or desktop application.
With the help of the above points, the tester can easily write the test scenarios for database testing.

Test Scenarios for Database Testing:
- Verify the database name: the database name should match the specifications.
- Verify the tables, columns, column types, and defaults: all should match the specifications.
- Verify whether a column allows nulls or not.
- Verify the primary and foreign keys of each table.
- Verify the stored procedures:
  - Test whether the stored procedure is installed or not.
  - Verify the stored procedure name.
  - Verify the parameter names, types, and number of parameters.
  - Test whether the parameters are required or not.
  - Test the stored procedure by deleting some parameters.
  - Test that when the output is zero, zero records are affected.
  - Test the stored procedure by writing simple SQL queries.
  - Test whether the stored procedure returns the values.
  - Test the stored procedure with sample input data.
- Verify the behavior of each flag in the table.
- Verify that the data gets properly saved into the database after each page submission.
- Verify the data when DML (update, delete, and insert) operations are performed.
- Check the length of every field: the field length in the back end and front end must be the same.
- Verify the database names of QA, UAT, and production. The names should be unique.
- Verify the encrypted data in the database.


- Verify the database size. Also test the response time of each query executed.
- Verify that the data displayed on the front end is the same as in the back end.
- Verify the data validity by inserting invalid data into the database.
- Verify the triggers.
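Two of the scenarios above, "data gets properly saved after each page submission" and "whether the column allows a null or not", can be sketched against an in-memory SQLite table (the customers table and the registration function are illustrative):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")

def submit_registration(name):
    """Stand-in for the front-end page submission the application performs."""
    db.execute("INSERT INTO customers (name) VALUES (?)", (name,))
    db.commit()

# After the simulated submission, verify the row landed in the back end.
submit_registration("Ravi")
row = db.execute("SELECT name FROM customers WHERE name = ?",
                 ("Ravi",)).fetchone()

# NOT NULL check: inserting a null name must be rejected by the database.
try:
    db.execute("INSERT INTO customers (name) VALUES (NULL)")
    null_rejected = False
except sqlite3.IntegrityError:
    null_rejected = True
```

The same pattern (perform the front-end action, then query the back end directly) covers most of the DML scenarios in the list.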

Test Data
A data value which we use to test the application is called test data. We use test data in input domain testing, re-testing & regression testing.

Positive Testing
Performing testing on the application with +ve test data is called positive testing.

Negative Testing
Performing testing on the application with -ve test data is called negative testing.
E.g.: Performing testing on a login screen with a valid UID & password is positive testing. If we perform testing on the same screen with an invalid UID & invalid password, an invalid UID & valid password, or a valid UID & invalid password, etc., that is negative testing.
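The login example can be sketched with one positive case and the three negative combinations from the text (the credentials below are illustrative placeholders):

```python
VALID_UID, VALID_PWD = "user1", "Pass@123"  # illustrative stored credentials

def login(uid, pwd):
    """Login screen stand-in: succeeds only for the valid uid & pwd pair."""
    return uid == VALID_UID and pwd == VALID_PWD

# Positive test data: valid uid & valid pwd -> must succeed.
positive = login("user1", "Pass@123")

# Negative test data: every other combination -> must fail.
negative_cases = [
    ("baduser", "wrongpwd"),  # invalid uid & invalid pwd
    ("baduser", "Pass@123"),  # invalid uid & valid pwd
    ("user1", "wrongpwd"),    # valid uid & invalid pwd
]
negative = [login(u, p) for u, p in negative_cases]
```

Positive testing confirms the happy path works; negative testing confirms the application rejects every invalid combination.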

α-Testing (Alpha Testing)
Performing the testing on the application directly by the client in the developer's environment is called α-testing. (Or) We invite selected customers/clients to our location and ask them to take a look at the software, see how it works, etc., and get feedback from them; this is called alpha testing. Alpha testing happens only once, not many times. The main purpose of alpha testing is to get feedback from the clients, customers, or users about how they feel about the software. This feedback is analyzed by the Product Manager, Delivery Manager, and team to change certain things before the release.

β-Testing (Beta Testing)
Performing the testing on the application directly by client-like people is called β-testing. (Or) In beta testing we distribute the software to selected users, customers, or clients and ask for their feedback. Alpha testing is done before beta testing.

Installation Testing
During this test, the testing team validates whether the application build, along with the supporting software, installs at the customer's site on similarly configured systems. During this test, the testing team observes the factors below:
- Setup program execution to start the installation
- Easy interface
- Amount of disk space occupied after installation

Parallel Testing/Comparative Testing
The testing team compares our application with various versions (or) similar applications to identify the weaknesses & strengths of the application.


Acceptance Testing
After completion of system testing, project management concentrates on acceptance testing to collect feedback from real customers & model customers. In acceptance testing, developers & testers are also involved to convince the customers. There are two ways of doing acceptance testing: α-testing & β-testing.

Release Testing
After completion of acceptance testing & the resulting modifications, project management concentrates on the software release. The project manager forms a release team with a few developers, a few testers, a few hardware engineers, and one delivery manager as head. This team goes to the customer's site and starts the software installation there. The release team observes the following factors during the complete installation:
- Overall installation
- Input device handling
- Co-existence with the OS
- Co-existence with other software to share resources
After completing the above observations at the customer's site, the release team provides training to the people at the customer's site and then comes back to the organization.

Adhoc Testing

Adhoc testing is an informal testing type with the aim of breaking the system. This testing is usually an unplanned activity. It doesn't follow any test design techniques to create test cases; in fact, it does not create test cases at all! It is primarily performed when the testers' knowledge of the system under test is very high. Testers randomly test the application without any test cases or any business requirement doc. Adhoc testing can be performed with the testing technique called error guessing. Error guessing can be done by people who have enough experience of the system to "guess" the most likely sources of errors.

Types of adhoc testing:
1. Buddy Testing
2. Exploratory Testing
3. Pair Testing
4. Monkey Testing

Buddy Testing: Due to lack of time to complete the application, testers join with developers to continue development and testing in parallel from the early stages of development. (Or) Two buddies mutually work on identifying defects in the same module; usually one buddy is from the development team and the other from the testing team. Buddy testing helps the testers develop better test cases, and the development team can also make design changes early. This testing usually happens after unit testing is complete.

Exploratory Testing: Due to lack of documentation, testers prepare scenarios and cases for their modules based on past experience, discussions with others, operating the SUT screens, etc.

Pair Testing: Two testers are assigned modules, share ideas, and work on the same machine to find defects. One person executes the tests and the other takes notes on the findings; the roles during testing are tester and scribe. (Or) Due to lack of skills, a junior tester joins a senior tester to share knowledge during testing.

Buddy testing is a combination of unit and system testing performed together by developers and testers, whereas pair testing is done only by testers with different knowledge levels (experienced and non-experienced), who share their ideas and views.

Monkey Testing: Randomly test the product or application without test cases, with the goal of breaking the system.

Advantages of Adhoc Testing: Adhoc testing saves a lot of time, as it doesn't require elaborate test planning, documentation, or test case design. It checks the completeness of testing and can find more defects than planned testing.

Disadvantages of Adhoc Testing: This testing requires no documentation, planning, or process to be followed. Since it aims at finding defects through a random approach, without any documentation, defects will not be mapped to test cases. Hence it is sometimes very difficult to reproduce defects, as there are no test steps or requirements mapped to them.

Domain Testing: Domain testing is a software testing technique whose objective is to select and execute test cases for the critical functionality of the software. Domain testing does not intend to run all the existing test cases.

End-to-end Testing: End-to-end testing is performed by the testing team; its focus is to test complete end-to-end flows, e.g., right from order creation till reporting, or from order creation till item return. End-to-end testing usually focuses on mimicking real-life scenarios and usage, and involves testing the information flow across applications.

Parallel Testing / Competitive Testing: Perform testing on the application by comparing it with previous versions of the application or with other competitive products in the market, to find the weaknesses and strengths of the application.

Agile Testing: Due to sudden changes in requirements, the testing team changes the corresponding scenarios and cases, and then performs retesting and regression testing on the modified software.

User Acceptance Testing (UAT): User acceptance testing is a must for any project; it is performed by the clients/end users of the software. UAT allows SMEs (subject matter experts) from the client side to test the software with their actual business or real-world scenarios and to check whether the software meets their business requirements.

Testing methodologies: There are 3 different types of testing methodologies.
1. Black box Testing
2. White box Testing
3. Grey box Testing

1. Black box Testing: Performing testing on the application without structural (coding) knowledge is called black box testing. It is done by the testing team, because testers do not require coding knowledge; however, application knowledge is required to test the application.

2. White box Testing: Performing testing on the application with structural or source code knowledge is called white box testing. It is done by the development team, because the developers have the required structural knowledge.

3. Grey box Testing: Performing testing on the application with both structural and application knowledge is called grey box testing. It is performed by a person who has structural knowledge as well as application testing knowledge.

Software Testing Life Cycle (STLC): STLC is the process we follow to complete the software testing activities in a project. It is part of SDLC, because SDLC is the combination of both development and testing activities, whereas STLC covers only the testing activities.


Test Initiation: In the test initiation phase, the QA Manager prepares the test strategy document, also known as the test methodology or test approach. The QA Manager also forms the team for testing the application.

Test Plan: After getting the test strategy document from the QA Manager, the test lead, along with senior members of the project, designs the test plan document based on the SRS, project plan, and test strategy documents.

Test Plan document: The test plan document is in the form of a Word document. It is the route map for testing the application: it defines what to test, how to test, who will test, and when to test.

Contents in Test Plan document

1. Author: It defines who designed this document (Test lead along with senior members)

2. Reviewed by: It defines who has reviewed this document (QA Manager).

3. Description: It defines the brief description about the project.

4. Test Items: List of all modules in the project

5. Features to be tested: Lists the testable features in the current release; we are going to test only those features.

6. Features not to be tested: Lists the features that are not testable in the current release; we are not going to test those features.

7. Entry Criteria: It defines when to start testing. We start testing after development of the application is complete and the application is available in the testing environment.

8. Exit Criteria: It defines when to close the testing activity: after all test cases have been executed at least once and all defects have been closed.
9. Suspension Criteria: It defines when to suspend or stop the testing activity. We stop testing based on the factors below: environment issues, application crashes, exception errors, or a major defect detected in the build (blocker).
10. Resumption Criteria: It defines when to continue the testing activity. We continue testing when the blocker defect is fixed, the application crashes are cleared, and the exception errors are fixed.
11. Test Deliverables: It defines the list of documents to be prepared by the testers in the testing environment. E.g.: test scenarios, test cases, automation test scripts, defect reports & status reports.
12. Testing types to be used: It defines which testing types we use to perform testing in the project. E.g.: sanity testing, smoke testing, adhoc testing, exploratory testing, functional testing, user interface testing, etc.
13. Tools to be used: It defines which automation tools we use in the project. E.g.: QTP, Selenium, LoadRunner, etc.
14. Environment needs: It defines which software is used for developing the application and the minimum hardware requirements for this application.
15. Roles & Responsibilities: It defines the work allocated to every member of the team, module-wise.

Test Case Design: In this phase, the testing team designs the test cases based on the client requirements. Once we get the requirements in the form of a use case document and a BRS document, we start designing the test cases.

Test case: A test case is a sequential, elaborated, executable form of the requirements.

To design the test cases we follow different testing techniques:
1. BVA (Boundary Value Analysis)
2. ECP (Equivalence Class Partitioning)
3. Error Guessing

1. BVA (Boundary Value Analysis): It defines the range of data we use to perform the testing; test values are taken at and around the boundaries of that range.

Use case: 1. Validate UID (User Id)
BRS: UID should accept only 4-15 characters
BVA:
Min = 4 => Pass
Max = 15 => Pass
Min+1 => 4+1 = 5 => Pass
Max+1 => 15+1 = 16 => Fail
Min-1 => 4-1 = 3 => Fail
Max-1 => 15-1 = 14 => Pass

Use case: 2. Validate PWD (Password)
BRS: PWD should accept only 4-7 characters
BVA:
Min = 4 => Pass
Max = 7 => Pass
Min+1 => 4+1 = 5 => Pass
Max+1 => 7+1 = 8 => Fail
Min-1 => 4-1 = 3 => Fail
Max-1 => 7-1 = 6 => Pass

E.g.: Prepare BVA for a local mobile number which starts with '9':
Max = 9999999999 => Pass
Min = 9000000000 => Pass
Max+1 = 10000000000 => Fail
Min+1 = 9000000001 => Pass
Max-1 = 9999999998 => Pass
Min-1 = 8999999999 => Fail
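The BVA tables above follow a fixed pattern (min, max, min±1, max±1), so the values can be generated mechanically. The Python sketch below expresses that pattern; the function name `bva_values` and the output layout are illustrative, not part of any standard.

```python
def bva_values(min_len, max_len):
    """Boundary Value Analysis for an accepted range [min_len, max_len].

    Returns each boundary test value together with whether it is
    expected to pass (inside the range) or fail (outside it)."""
    return {
        "min":   (min_len,     True),
        "max":   (max_len,     True),
        "min+1": (min_len + 1, True),
        "max+1": (max_len + 1, False),
        "min-1": (min_len - 1, False),
        "max-1": (max_len - 1, True),
    }

# UID example from the table above: should accept only 4-15 characters
for label, (value, ok) in bva_values(4, 15).items():
    print(f"{label}: length {value} => {'Pass' if ok else 'Fail'}")
```

Running this with (4, 7) reproduces the PWD table in the same way.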

2. ECP (Equivalence Class Partitioning): It defines whether the data we use to perform testing on the application is valid or invalid. There are 2 types of equivalence classes:
1. Valid
2. Invalid
E.g.: To validate the user name, it should contain only alphabets, e.g. "Naresh".

Valid: a-z (small letters), A-Z (capital letters)
Invalid: numbers (0-9), special characters (*, @, $, (, &, ^, %)

E.g.: Prepare the ECP for mobile number “8099163726”

Valid: 0-9
Invalid: a-z, A-Z, space, special characters

E.g.: Prepare the ECP for PAN card Number “PPR9468K”

Valid: 0-9, A-Z
Invalid: a-z, space, special characters
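The ECP tables above map directly onto character-class checks. The following Python sketch expresses the three examples (user name, mobile number, PAN number) as regular-expression validators; the function names are illustrative choices, not part of the document.

```python
import re

def is_valid_username(name):
    """ECP for user name: valid class = alphabets only (a-z, A-Z)."""
    return bool(re.fullmatch(r"[A-Za-z]+", name))

def is_valid_mobile(number):
    """ECP for a 10-digit mobile number: valid class = digits 0-9 only."""
    return bool(re.fullmatch(r"[0-9]{10}", number))

def is_valid_pan(pan):
    """ECP for PAN card number: valid classes = digits 0-9 and capitals A-Z."""
    return bool(re.fullmatch(r"[A-Z0-9]+", pan))

print(is_valid_username("Naresh"))      # valid class only
print(is_valid_mobile("8099163726"))    # valid class only
print(is_valid_pan("PPR9468K"))         # valid classes only
```

Each invalid class in the tables (numbers in a name, letters in a mobile number, etc.) produces a failing input for the corresponding validator.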

3. Error Guessing: Error guessing is experience-based testing. Based on previous experience, the tester guesses where errors are likely in the application and designs test cases for them.

Test Case Execution: In this phase, the testing team executes the test cases. Once the application has been deployed into the test environment, the testing team performs testing on the application by executing the test cases. During execution we compare the expected result of each test case step with the actual result in the application: if the expected result matches the actual result, we mark the step's status as Pass; if it does not match, we mark it as Fail. If we identify any mismatch between the expected and actual results, we report that mismatch as a defect to the development team.

Defects: In this phase, both the testing team and the development team are involved in reporting defects, fixing defects, retesting defects, and closing defects.

Defect Life Cycle / Bug Life Cycle
Defect: A mismatch between the expected result and the actual result is called a defect.

New: When the tester finds a new defect and posts it for the first time, the status is New.

Assigned: After the tester has posted the defect, the test lead validates whether the defect is correct. If it is, the lead assigns the bug to the corresponding development lead, and the status changes from New to Assigned.

Open: After the test lead assigns the defect, the developer starts analyzing and working on the defect fix; the developer sets the status to Open.
Fixed: When the developer makes the necessary code changes and fixes the defect, the developer sets the status to Fixed.

Verified: After the developer fixes the bug, the tester tests it again. If the bug is no longer present in the software, the tester approves that the bug is fixed and changes the status to Verified.

Reopened: If the bug still exists even after being fixed by the developer, the tester changes the status to Reopened, and the bug goes through the life cycle once again.

Closed: Once the bug is fixed, it is tested by the tester. If the tester feels the bug no longer exists in the application, the tester changes the status to Closed. This state means the bug is fixed, tested, and approved.


Duplicate: If the bug is reported twice, or two bugs describe the same issue, then one bug's status is changed to Duplicate.

Rejected: If the developer feels the bug is not genuine, he rejects it, and the status of the bug is changed to Rejected.

Deferred: A bug changed to the Deferred state is expected to be fixed in a later release. There are many reasons for moving a bug to this state: the bug's priority may be low, there may be a lack of time before the release, or the bug may not have a major effect on the software.

Not a bug: The status is set to Not a bug when there is no change in the functionality of the application. For example, if the customer asks for a change in the look and feel of the application, such as changing the color of some text, it is not a bug but just a change in the application's appearance.
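The status transitions described above can be modelled as a small state machine. The sketch below is a simplified interpretation: the exact set of allowed transitions varies between defect-tracking tools, so treat the TRANSITIONS table as an assumption drawn from the descriptions above, not a fixed standard.

```python
from enum import Enum, auto

class Status(Enum):
    NEW = auto(); ASSIGNED = auto(); OPEN = auto(); FIXED = auto()
    VERIFIED = auto(); REOPENED = auto(); CLOSED = auto()
    DUPLICATE = auto(); REJECTED = auto(); DEFERRED = auto()

# Allowed transitions (assumed from the life-cycle descriptions above)
TRANSITIONS = {
    Status.NEW:      {Status.ASSIGNED, Status.REJECTED, Status.DUPLICATE, Status.DEFERRED},
    Status.ASSIGNED: {Status.OPEN},
    Status.OPEN:     {Status.FIXED, Status.REJECTED, Status.DEFERRED},
    Status.FIXED:    {Status.VERIFIED, Status.REOPENED},
    Status.VERIFIED: {Status.CLOSED},
    Status.REOPENED: {Status.OPEN},   # the bug goes through the cycle again
}

def move(current, new):
    """Apply a status change, refusing transitions the cycle does not allow."""
    if new not in TRANSITIONS.get(current, set()):
        raise ValueError(f"Illegal transition: {current.name} -> {new.name}")
    return new

# The happy path: New -> Assigned -> Open -> Fixed -> Verified -> Closed
status = Status.NEW
for nxt in (Status.ASSIGNED, Status.OPEN, Status.FIXED,
            Status.VERIFIED, Status.CLOSED):
    status = move(status, nxt)
print(status.name)  # CLOSED
```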

Regression Testing: During this phase, the testing team performs regression testing on the modified application to verify whether the fixed defect impacts other functionality of the application. Most projects use automation tools such as QTP, Selenium, SilkTest, etc. to perform regression testing.

Test Closure: When all the test cases have been executed and all defects are closed, the QA Manager signs off the testing activity and the technical team deploys the application into the production environment.

Adhoc testing strategies: From the software testing principles, exhaustive testing is impossible; for this reason organizations follow optimal test strategies for software. Due to certain risks, some organizations follow adhoc testing.

a) Monkey Testing: Due to lack of time, the testing team conducts testing on only some modules of the SUT instead of all modules.

b) Buddy Testing: Due to lack of time, testers join with developers to continue development and testing in parallel from the early stages of development.

c) Exploratory Testing: Due to lack of documentation, testers prepare scenarios and cases for their modules based on past experience, discussions with others, internet browsing, operating the SUT screens, conferences with customer-site people, etc.

d) Pair Testing: Due to lack of skills, junior testers join with senior testers to share their knowledge during testing.

e) Agile Testing: Due to sudden changes in requirements, the testing team changes the corresponding scenarios and cases, and then performs retesting and regression testing on the modified software.

f) Debugging: To improve the skills of the testers, developers can release software with known bugs. If the testers find those bugs, the testing team is good; otherwise the team needs some training.

Responsibilities of Manual Tester

1. Understand all requirements of a project or product.
2. Understand testing requirements related to the current project/product.
3. Assist the test lead during test planning.
4. Write test scenarios for responsible modules and responsible testing topics by using black box techniques.
5. Follow IEEE 829 standards while documenting test cases with the required test data and test environment.
6. Involve in test case review meetings along with the test lead.
7. Involve in smoke testing on the initial software build.
8. Execute test cases using test data on the SUT in the test environment to detect defects.
9. Involve in defect reporting and tracking.
10. Conduct retesting, sanity testing, and regression testing on every modified SUT to close bugs.
11. Be strong in SQL to connect to the SUT database while testing.
12. Involve in final regression testing on the final SUT during test closure.
13. Join with the developers in acceptance testing to collect feedback on the software from real customers/model customers.
14. Assist the test lead in RTM preparation before signoff from the project/product.

When is a tester said to be a good tester? A good tester is one who finds more defects and reports defects that are accepted by the developers.

Requirement Traceability Matrix
This document is used for tracking the requirements and checking the coverage of scenarios and test cases.

By default, this matrix has the columns requirement id, scenario id, and test case id. The requirement ids are filled in by the test lead, who shares the document with the entire testing team so they can update the scenario ids and test case ids. Once the entire team has updated their scenario ids and test case ids, the test lead verifies that every requirement is covered by a test case. If he finds any requirement that is not covered (the scenario id and test case id columns are empty for it), he assigns that requirement to somebody in the team to write the scenarios and test cases.

Requirement Id | Scenario Id | Test Case Id
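The test lead's coverage check described above (find requirements whose scenario id or test case id column is empty) can be sketched in a few lines of Python; the column names and the REQ/SC/TC ids here are made-up examples.

```python
# A tiny RTM with one uncovered requirement (illustrative data)
rtm = [
    {"req_id": "REQ-001", "scenario_id": "SC-01", "test_case_id": "TC-01"},
    {"req_id": "REQ-002", "scenario_id": "",      "test_case_id": ""},
    {"req_id": "REQ-003", "scenario_id": "SC-03", "test_case_id": "TC-07"},
]

# A requirement is uncovered if either the scenario id or the
# test case id column is still empty.
uncovered = [row["req_id"] for row in rtm
             if not row["scenario_id"] or not row["test_case_id"]]
print(uncovered)  # ['REQ-002']
```

Uncovered requirements would then be assigned to a team member, exactly as the test lead does manually.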

Difference between Severity & Priority with examples: Severity and priority are assigned to a particular bug to indicate its importance.


Severity: Severity describes how much the defect impacts the functionality of the application.
Types of Severity:
1. Blocker
2. Critical
3. Major
4. Minor
1) Blocker: The application is not working or major functionality is completely broken. The tester cannot do further testing; the tester is blocked.

2) Critical: Some part of the functionality is broken, the tester cannot test that part of the functionality, and there is no workaround.

3) Major: These are logical defects which do not block any functionality. This type usually contains functional defects and major UI defects.

4) Minor: Mostly UI defects and minor usability defects, i.e., defects which do not harm the application under test.

Priority: A term that indicates the importance of the defect and when it should be addressed or fixed.

1) High: The defect has high business value; the end user cannot work unless it is fixed. In this case the priority should be High, meaning an immediate fix of the defect.

2) Medium: The end user can work using a workaround, but some functionality cannot be used, and that functionality is not regularly used by the user.

3) Low: No or very little impact on the end user.

Difference between Severity and Priority:

1) Severity is defined by QA, whereas priority is defined by the Dev/Delivery manager.

2) Severity is driven by functionality, whereas priority is driven by business value.

3) Severity describes how the defect impacts the functionality of the product or software under test, whereas priority indicates the importance of the defect and when it should be addressed or fixed.

Examples

High Priority and High Severity:
Major functionality failures, such as login not working, or crashes in the basic workflow of the software, are the best examples of high priority and high severity.
1) The application crashes while opening.


2) Website home page failed to load.

 High Priority and Low Severity:

1) A spelling mistake in menu names, a client's name, or any important name that is highlighted to the end user. A very common mistake people make while giving examples is to cite a misspelled logo; that is a wrong example, because it comes under high priority and high severity. The logo and company name are the identity of the company or organization, so how could that be low severity? The tester should be judgmental while assigning severity to a defect.

Low Priority and High Severity:

1) A crash in the application when end users perform some unusual or invalid steps.

This is all about severity and priority; let me know if anyone has questions on it.

How to test cookies in web application?

A cookie is an encrypted text file used to store small pieces of information on the user's or client's machine. It is generated by the web server and sent to the browser. There are 6 ways to test cookies:

1. Typically, a website should work as expected even when cookies are disabled. To test this, disable cookies in the browser settings and try to access the website or web application; it should still work as expected.

2. Configure the browser to prompt for cookie acceptance or rejection. Try several times by accepting and rejecting cookies and observe the behavior.

3. Edit the cookie file in any text editor and modify some parameters so that the cookie gets corrupted; then access the website and observe its behavior.

4. Remove all cookies from their defined location (usually the Temp folder) and check the behavior of the pages.

5. Check cookie behavior on different browsers.

6. Open the cookies in a text editor and make sure that all sensitive information, such as user passwords and session ids, is encrypted properly.
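Point 6 above (sensitive information must not appear in plain text) can be partially automated. The sketch below scans a set of cookie name/value pairs for known secrets; the cookie names and secret values are invented for illustration, and a real check would read the browser's actual cookie store.

```python
def find_plaintext_secrets(cookie_pairs, known_secrets):
    """Return the names of cookies whose value contains a known secret
    in plain text (i.e., the value was not encrypted or hashed)."""
    findings = []
    for name, value in cookie_pairs.items():
        for secret in known_secrets:
            if secret in value:
                findings.append(name)
    return findings

# Illustrative cookies: 'auth' leaks the test account's password
cookies = {"theme": "dark", "auth": "pwd=hunter2"}
print(find_plaintext_secrets(cookies, {"hunter2"}))  # ['auth']
```

An empty result means none of the known secrets appeared in plain text; it does not prove the values are properly encrypted, so manual inspection is still needed.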

What is a test case? What are the contents of a test case? A test case is a set of steps which gives a certain output with defined input conditions. Typically a test case contains test steps, and at the end of the test steps the tester compares the actual result with the expected result.

What are the items of a test case template? The following parameters should be considered while writing a test case:
1. Test Case Number: This number should be unique.
2. Pre-Condition: Pre-conditions are necessary to make sure we can get the desired output; they list all the data needed to start test execution. Sometimes no pre-condition is needed.
3. Description: The description, or test steps, is a very important parameter of test case writing; here the tester writes the exact test steps needed to get the desired output.
4. Expected Result: The expected output at the end of the test steps is written here.
5. Actual Result: The actual result the tester gets after executing the test case goes here; it helps when we execute the same test case again.
6. Status (Pass/Fail): The decision-making status of whether the test case passed, failed, or was blocked after test execution.
7. Remarks: If the tester wants to note something specific, this item holds such comments or remarks.
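The template above can be represented as a simple data structure. The following Python sketch models the seven items and the pass/fail comparison between the expected and actual results; the field names follow the template, but the `execute` helper is an illustrative addition.

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    number: str                  # 1. unique test case number
    description: list            # 3. ordered test steps
    expected: str                # 4. expected result
    precondition: str = ""       # 2. data needed before execution
    actual: str = ""             # 5. filled in during execution
    status: str = "Not Run"      # 6. Pass / Fail / Blocked
    remarks: str = ""            # 7. free-form comments

    def execute(self, actual_result):
        """Record the actual result and compare it with the expected one."""
        self.actual = actual_result
        self.status = "Pass" if self.actual == self.expected else "Fail"
        return self.status

tc = TestCase(
    number="TC-001",
    precondition="User is registered",
    description=["Open login page", "Enter valid UID and PWD", "Click Login"],
    expected="Home page is displayed",
)
print(tc.execute("Home page is displayed"))  # Pass
```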
