Manual testing concepts course 1

  • Published on 26-May-2015

Transcript

1. Why Do We Test Software?
Testing is conducted to ensure that you develop a product that will prove to be useful to the end user. The primary objectives of testing are to assure that:
- The system meets the user's needs ... has the right system been built?
- The user requirements are built as specified ... has the system been built right?
Other, secondary objectives of testing are to:
- Instill confidence in the system, through user involvement
- Ensure the system will work from both a functional and a performance viewpoint
- Ensure that the interfaces between systems work
- Establish exactly what the system does (and does not do) so that the user does not receive any "surprises" at implementation time
- Identify problem areas where the system deliverables do not meet the agreed-to specifications
- Improve the development processes that cause errors

2. Testing is the systematic search for defects in all project deliverables. Testing is a process of verifying and/or validating an output against a set of expectations and observing the variances.

3. Verification, Validation and Variances
Verification ensures that the system complies with an organization's standards and processes, relying on reviews or other non-executable methods: did we build the system right?
Validation ensures that the system operates according to plan by executing the system functions through a series of tests that can be observed and evaluated: did we build the right system?
Variances are deviations of the output of a process from the expected outcome. These variances are often referred to as defects.

4. Testing is a Quality Control Activity
Quality has two working definitions:
- Producer's viewpoint: the quality of the product meets the requirements.
- Customer's viewpoint: the quality of the product is fit for use or meets the customer's needs.

5. Quality Assurance and Quality Control
Quality assurance is a planned and systematic set of activities necessary to provide adequate confidence that products and services will conform to specified requirements and meet user needs; it covers, among other things, estimation processes and testing processes and standards.
Quality control is the process by which product quality is compared with applicable standards, and the action taken when nonconformance is detected. Quality control activities focus on identifying defects in the actual products produced.

6. Testing Approaches
Static testing is a detailed examination of a work product's characteristics against an expected set of attributes, experiences, and standards. Some representative examples of static testing are:
- Requirements walkthroughs or sign-off reviews
- Design or code inspections
- Test plan inspections
- Test case reviews
Dynamic testing is the process of verification or validation by exercising (or operating) the work product under scrutiny and observing its behavior in response to changing inputs or environments; the work product is executed to test the behavior of its logic and its response to inputs.

7. Project Life Cycle Models
Popular life cycle models:
- V&V life cycle model
- Waterfall life cycle model

8. V&V Life Cycle Model
[V-model diagram: requirements and design levels on the verification side (business requirements, system requirements, solution requirements, application/function requirements, component requirements, detailed design) descend to build/code, then ascend through the corresponding test levels on the validation side (unit test, integration test at component level, system test at application level, systems integration test at solution level, user acceptance test (UAT), operability test). Surrounding activities include requirements development & management, IT infrastructure development and maintenance, interface control, application development and maintenance, business process re-engineering, and roll-out, training and implementation.]

9. Levels of Testing

10. Requirements Testing
Requirements testing involves the verification and validation of requirements through static and dynamic tests.
Objectives: To verify that the stated requirements meet the business needs of the end user before the external design is started; to evaluate the requirements for testability
When: After requirements have been stated
Input: Detailed requirements
Output: Verified requirements
Who: Users & developers
Methods: Static testing techniques; checklists
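Parts of slide 10's checklist-based static testing can be mechanized. The sketch below is a toy illustration, not a standard tool: the sample requirement texts, the checklist rules, and the list of vague words are all invented for the example.

```python
# Toy static check: flag requirements that fail simple checklist rules.
# The rules and sample requirements are illustrative only.

VAGUE_WORDS = {"fast", "user-friendly", "easy", "flexible", "robust"}

def checklist_findings(req_id: str, text: str) -> list[str]:
    findings = []
    if not req_id.strip():
        findings.append("missing requirement ID")
    # A requirement phrased with vague words is hard to test.
    vague = VAGUE_WORDS & {w.strip(".,").lower() for w in text.split()}
    if vague:
        findings.append(f"untestable wording: {sorted(vague)}")
    return findings

requirements = [
    ("REQ-001", "The system shall respond to a login request within 2 seconds."),
    ("REQ-002", "The reporting screen shall be fast and user-friendly."),
]

for req_id, text in requirements:
    for finding in checklist_findings(req_id, text):
        print(f"{req_id}: {finding}")
```

Here REQ-002 is flagged for untestable wording, which is exactly the kind of defect a requirements walkthrough aims to catch before design starts.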
11. Design Testing
Design testing involves the verification and validation of the system design through static and dynamic tests. The validation testing of the external design is done during user acceptance testing, and the validation testing of the internal design is covered during unit, integration and system testing.
Objectives: To verify that the system design meets the agreed-to business and technical requirements before system construction begins; to identify missed requirements
When: After the external design is completed; after the internal design is completed
Input: External application design; internal application design
Output: Verified external design; verified internal design
Who: Business analysts & developers
Methods: Static testing techniques; checklists

12. Unit Testing
Unit-level testing is the initial testing of new and changed code in a module. It verifies the program specifications against the internal logic of the program or module and validates the logic (a minimal code sketch follows slide 15 below).
Objectives: To test the function of a program or unit of code, such as a program or module; to test internal logic; to verify internal design; to test path and condition coverage; to test exception conditions and error handling
When: After modules are coded
Input: Detail design or technical design; unit test plan
Output: Unit test report
Who: Developers
Methods: Debugging; code analyzers; path/statement coverage tools

13. Integration Testing
Integration-level tests verify proper execution of application components and do not require that the application under test interface with other applications. Communication between modules within the sub-system is tested in a controlled and isolated environment within the project.
Objectives: To technically verify proper interfacing between modules and within sub-systems
When: After modules are unit tested
Input: Detail design; compound requirements; integration test plan
Output: Integration test report
Who: Developers
Methods: White-box and black-box techniques

14. System Testing
System-level tests verify proper execution of the entire application's components, including interfaces to other applications. Both functional and structural types of tests are performed to verify that the system is functionally and operationally sound.
Objectives: To verify that the system components perform control functions; to perform inter-system tests; to demonstrate that the system performs both functionally and operationally as specified; to perform appropriate types of tests relating to transaction flow, installation, reliability, regression, etc.
When: After integration testing
Input: Detailed requirements; external application design; master test plan; system test plan
Output: System test report
Who: System testers

15. Systems Integration Testing
Systems integration testing is a test level which verifies the integration of all applications, including interfaces internal and external to the organization, with their hardware, software and infrastructure components in a production-like environment.
Objectives: To test the co-existence of products and applications that are required to perform together in a production-like operational environment (hardware, software, network); to ensure that the system functions together with all the components of its environment as a total system; to ensure that system releases can be deployed in the current environment
When: After system testing
Input: Master test plan; systems integration test plan
Output: Systems integration test report
Who: System testers
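As promised under slide 12, here is a minimal unit-test sketch using Python's built-in unittest module. The function under test (a hypothetical order-discount rule) and its expected values are invented for illustration; the point is the shape of a unit test: exercise the internal logic, including the error-handling path.

```python
import unittest

def discount(amount: float) -> float:
    """Hypothetical unit under test: 10% off orders of 100 or more."""
    if amount < 0:
        raise ValueError("amount must be non-negative")
    return amount * 0.9 if amount >= 100 else amount

class DiscountTest(unittest.TestCase):
    def test_no_discount_below_threshold(self):
        self.assertEqual(discount(99.0), 99.0)

    def test_discount_at_threshold(self):
        self.assertEqual(discount(100.0), 90.0)

    def test_exception_path(self):
        # Slide 12: unit tests also cover exception conditions.
        with self.assertRaises(ValueError):
            discount(-1.0)

if __name__ == "__main__":
    unittest.main()
```

Note how each test exercises one path or condition of the unit, which is what the path and condition coverage objective on slide 12 refers to.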
16. User Acceptance Testing
User acceptance tests (UAT) verify that the system meets user requirements as specified.
Objectives: To verify that the system meets the user requirements
When: After system testing / systems integration testing
Input: Business needs; detailed requirements; master test plan; user acceptance test plan
Output: User acceptance test report
Who: Customer

17. Test Estimation
There are three estimation techniques:
- Top-down estimation
- Expert judgment
- Bottom-up estimation

18. Estimation Techniques
Top-down estimation is used in the initial stages of the project and is based on similar projects; past data plays an important role in this form of estimation. The function points model is one example: function points (FP) measure size in terms of the amount of functionality in a system.
Expert judgment: if someone has experience in certain types of projects, their expertise can be used to estimate the cost that will be incurred in implementing the project.
Bottom-up estimation: this cost estimate can be developed only when the project is defined in a baseline. The WBS (work breakdown structure) must be defined and the scope must be fixed. The tasks can then be broken down to the lowest level and a cost attached to each; these costs are then rolled up to the top-level baselines, giving the cost estimate.

19. Test Strategy
A test strategy defines:
- Who (in generic terms) will conduct the testing
- The methods, processes and standards used to define, manage and conduct all levels of testing of the application
- Which levels of testing are in scope and out of scope
- Which types of projects the application will support
- Test focus
- Test environment strategy
- Test data strategy
- Test tools
- Metrics
- OATS

20. Test Plan (MTP)
A test plan is a document prescribing the approach to be taken for the intended testing activities. The plan typically identifies the items to be tested, the test objectives, the testing to be performed, test schedules, entry/exit criteria, personnel requirements, reporting requirements, evaluation criteria, and any risks requiring contingency planning. It covers:
- What will be tested
- How testing will be performed
- What resources are needed
- The test scope, focus areas and objectives
- The test responsibilities
- The test strategy for the levels and types of test for this release
- The entry and exit criteria
- Any risks, issues, assumptions and test dependencies
- The test schedule and major milestones

21. Test Techniques
White-box testing: evaluation techniques that are executed with knowledge of the implementation of the program. The objective of white-box testing is to test the program's statements, code paths, conditions, or data-flow paths; it looks at how it is done, not what is done, and identifies all decisions, conditions and paths.
Black-box testing: evaluation techniques that are executed without knowledge of the program's implementation. The tests are based on an analysis of the specification of the component, without reference to its internal workings; the focus is on what is done, not how it is done.
Equivalence partitioning: a method for developing test cases by analyzing each possible class of values. In equivalence partitioning you can select any element from a valid equivalence class or an invalid equivalence class.

22. Boundary Value Analysis
Boundary value analysis is one of the most useful test case design methods and is a refinement of equivalence partitioning. Boundary conditions are those situations directly on, above, and beneath the edges of input equivalence classes and output equivalence classes. In boundary analysis, one or more elements must be selected to test each edge (a combined sketch of both techniques follows below).
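As a combined sketch of slides 21 and 22, assume a hypothetical input field that accepts an integer age from 18 to 60 (the rule and its limits are invented for illustration). Equivalence partitioning picks one representative per class, and boundary value analysis adds the values on and adjacent to each edge:

```python
# Hypothetical rule under test: a valid age is an integer in [18, 60].
LOW, HIGH = 18, 60

def is_valid_age(age: int) -> bool:
    return LOW <= age <= HIGH

# Equivalence partitioning: one representative from each class.
ep_cases = {
    "valid class (18-60)": 35,
    "invalid class (< 18)": 10,
    "invalid class (> 60)": 75,
}

# Boundary value analysis: values on, just below, and just above each edge.
bva_cases = [LOW - 1, LOW, LOW + 1, HIGH - 1, HIGH, HIGH + 1]

for label, age in ep_cases.items():
    print(f"EP  {label}: is_valid_age({age}) -> {is_valid_age(age)}")
for age in bva_cases:
    print(f"BVA edge case: is_valid_age({age}) -> {is_valid_age(age)}")
```

Equivalence partitioning keeps the test set small (one value per class), while boundary value analysis concentrates extra cases where off-by-one defects are most likely, at 17/18/19 and 59/60/61 here.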
Error guessing: based on past experience, test data can be created to anticipate the errors that will most often occur. Using experience and knowledge of the application, invalid data representing common mistakes a user might be expected to make can be entered, to verify that the system will handle these types of errors.
[Test case template]

23. Types of Tests
Functional testing: the purpose of functional testing is to ensure that the user functional requirements and specifications are met. Test conditions are generated to evaluate the correctness of the application.

24. Types of Tests (continued)
Structural testing: designed to verify that the system is structurally sound and can perform the intended tasks.
Audit and controls testing: verifies the adequacy and effectiveness of controls and ensures the capability to prove the completeness of data processing results.
Installation testing: the purpose of this testing is to ensure that all required components are in the installation package, that the installation procedure is user-friendly and easy to use, and that the installation documentation is complete and accurate.

25. Types of Tests (continued)
Inter-system testing: interface or inter-system testing ensures that the interconnections between applications function correctly.
Parallel testing: parallel testing compares the results of processing the same data in both the old and new systems. It is useful when a new application replaces an existing system.
Regression testing: regression testing verifies that no unwanted changes were introduced to one part of the system as a result of making changes to another part of the system.
Usability testing: the purpose of usability testing is to ensure that the final product is usable in a practical, day-to-day fashion.

26. Types of Tests (continued)
Ad-hoc testing: testing carried out using no recognized test case design technique. The testing draws on the tester's knowledge of the application, and the system is tested randomly, without any test cases, specifications or requirements.
Smoke testing: smoke testing is done when a build of the application is first deployed, to check whether the most crucial functions of a program are working. It is a main-functionality-oriented test: a routine health check of a build before taking it into in-depth testing (see the sketch below).
Sanity testing: a sanity test is used to determine whether a small section of the application is still working after a minor change; it focuses on one or a few areas of functionality, once a new build with minor revisions is obtained.
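Slide 26's smoke test is often automated as a short script that exercises only the most crucial functions of a fresh build and stops at the first failure. A minimal sketch, assuming a hypothetical service with health and login endpoints (the URL, paths and checks are invented for illustration):

```python
# Hypothetical smoke test: a quick health check of a new build before
# in-depth testing. Endpoints and expectations are illustrative only.
import sys
import urllib.request

BASE_URL = "http://localhost:8080"  # assumed deployment under test

def check(name: str, path: str) -> bool:
    try:
        with urllib.request.urlopen(BASE_URL + path, timeout=5) as resp:
            ok = resp.status == 200
    except OSError:
        ok = False
    print(f"{'PASS' if ok else 'FAIL'}: {name}")
    return ok

crucial_checks = [("service is up", "/health"), ("login page loads", "/login")]

# all() short-circuits, so the run stops at the first failing check.
if not all(check(name, path) for name, path in crucial_checks):
    sys.exit(1)  # reject the build: do not proceed to in-depth testing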
27. More Types of Tests
Backup and recovery testing: recovery is the ability of an application to be restarted after failure. The process usually involves backing up to a point in the processing cycle where the integrity of the system is assured and then re-processing the transactions past the original point of failure.
Contingency testing: verifies that an application and its databases, networks, and operating processes can all be migrated smoothly to another site.
Performance testing: designed to test whether the system meets the desired level of performance in a production-like environment.
Security testing: security testing is required to ensure that confidential information in the system, and in other affected systems, is protected against loss, corruption, or misuse, whether by deliberate or accidental action.
Stress/volume testing: stress testing is defined as the processing of a large number of transactions through the system in a defined period of time, verifying that the production system can process large volumes of transactions within the expected timeframe and that the system architecture and construction are capable of processing large volumes of data.

28. RTM / Test Coverage Matrix
A requirements traceability matrix (RTM) is a worksheet used to plan and cross-check that all requirements and functions are covered adequately by test cases.

29. Defect Life Cycle

30. Defect Statuses
New: when a bug is found for the first time, the software tester communicates it to his/her team leader (test lead) to confirm that it is a valid bug. After getting confirmation from the test lead, the tester logs the bug and the status New is assigned to it.
Open: once the developer starts working on the bug, he/she changes its status to Open to indicate that he/she is working on a solution.
Fixed: once the developer makes the necessary changes in the code and verifies them, he/she marks the bug as Fixed and passes it to the development lead, who passes it on to the testing team.
Retest: after the bug is fixed, it is passed back to the testing team to be retested, and the status Retest is assigned to it.
Closed: after the bug is assigned the status Retest, it is tested again. If the problem is solved, the tester closes it and marks it with the status Closed.

31. Defect Statuses (continued)
Reopen: if, after retesting, the system behaves in the same way or the same bug arises once again, the tester reopens the bug and sends it back to the developer, marking its status as Reopen.
Rejected: if the system test lead finds that the system is working according to the specifications, or the bug is invalid as per the explanation from development, he/she rejects the bug and marks its status as Rejected.
Deferred: in some cases a particular bug has low importance and its fix can be postponed; in that case it is marked with the status Deferred.
(A state-machine sketch of these transitions follows below.)
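The statuses in slides 29-31 form a small state machine. Below is a minimal sketch of the allowed transitions; the transition table is one straightforward reading of the slides (e.g. where Rejected and Deferred can be entered from), not the workflow of any particular defect-tracking tool.

```python
# Defect life cycle from slides 30-31 as a transition table.
ALLOWED = {
    "New":      {"Open", "Rejected", "Deferred"},
    "Open":     {"Fixed", "Rejected", "Deferred"},
    "Fixed":    {"Retest"},
    "Retest":   {"Closed", "Reopen"},
    "Reopen":   {"Open"},          # goes back to the developer
    "Rejected": set(),
    "Deferred": {"Open"},          # may be picked up in a later release
    "Closed":   set(),
}

def move(status: str, new_status: str) -> str:
    if new_status not in ALLOWED[status]:
        raise ValueError(f"illegal transition {status} -> {new_status}")
    return new_status

# A defect that fails retesting and is fixed on the second attempt:
status = "New"
for step in ["Open", "Fixed", "Retest", "Reopen",
             "Open", "Fixed", "Retest", "Closed"]:
    status = move(status, step)
    print(status)
```

Encoding the life cycle as data like this makes illegal moves (say, New straight to Closed) fail loudly instead of silently corrupting the defect report.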
32. Severity & Priority
Severity determines the defect's effect on the application; severity is assigned by testers.
Priority determines the urgency of repair; priority is assigned by the test lead, project manager or senior testers.

33. Severity Levels
Severity 1: a critical problem for which there is no bypass, and testing cannot continue. A problem is considered severity 1 when:
- It causes the system to stop (e.g. no response to any command, no user display, etc.)
- The system cannot be restarted after a crash
- It disables user access to the system
- It causes function aborts, or causes data loss or corruption
- It corrupts or damages the operating system, hardware or any associated applications
Severity 2: a high-severity problem where some testing can continue. A problem is considered severity 2 if:
- It affects the operation of more than two functions
- The operation of the function permanently degrades performance outside the required parameters specified
Severity 3: a medium-severity problem which does not prevent testing from continuing. A problem is considered severity 3 if:
- It causes the function to abort under incorrect or malicious use
- It affects a subset of a function but does not prevent the function from being performed
- It is unclear documentation which causes user misunderstanding but does not cause function aborts, data loss or data corruption

34. Severity 4 and Priority Levels
Severity 4: a low-severity problem which does not prevent testing from continuing. A problem is considered incidental if:
- It is a spelling or grammatical error
- It is a cosmetic or layout flaw
Priority 1: the problem has a major impact on the test team, and testing may not continue. The priority of the fix is urgent, and immediate action is required.
Priority 2: the problem is preventing testing of an entire component or function; testing of other components or functions may be able to continue. The priority is high, and prompt action is required.

35. Priority Levels (continued) and Examples
Priority 3: the test cases for a particular functional matrix (or sub-component) cannot be tested, but testing can continue in other areas. The priority is medium.
Priority 4: some of the functions tested using a test case will not work as expected, but testing can continue. The priority is low.
Examples:
1. High severity & low priority: consider an application which generates banking-related reports weekly, monthly, quarterly and yearly by doing some calculations. A fault in calculating the yearly report is a high-severity fault but low priority, because it can be fixed in the next release as a change request.
2. High severity & high priority: in the same example, a fault in calculating the weekly report is a high-severity and high-priority fault, because it will block the functionality of the application within a week. It should be fixed urgently.

36. Examples (continued)
3. Low severity & high priority: a spelling mistake or content issue on the homepage of a website which gets lakhs of hits daily. Though this fault does not affect the website or other functionality, considering the status and popularity of the website in the competitive market it is a high-priority fault.
4. Low severity & low priority: a spelling mistake on pages of a website which get very few hits throughout the month can be considered low severity and low priority.
(A closing sketch encoding these four combinations follows after slide 38.)

37. Test Summary Report
The test summary (system test) report answers:
- What was tested?
- What are the results?
- What are the recommendations?

38. Testing Methodology
The basic structure that has to be followed for developing any software or project, including the testing process, is called the testing methodology.
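As a closing sketch, the four severity/priority combinations from the examples in slides 35-36 can be recorded as structured data. The defect descriptions come from the slides; the field layout is just one possible template, and the slides themselves grade severity and priority on levels 1-4 rather than High/Low.

```python
# Severity is the defect's effect on the application (set by testers);
# priority is the urgency of repair (set by the test lead or manager).
from dataclasses import dataclass

@dataclass
class Defect:
    summary: str
    severity: str  # High / Low here; the slides use levels 1-4
    priority: str  # High / Low here; the slides use levels 1-4

examples = [
    Defect("Fault calculating the yearly banking report", "High", "Low"),
    Defect("Fault calculating the weekly banking report", "High", "High"),
    Defect("Spelling mistake on a high-traffic homepage", "Low", "High"),
    Defect("Spelling mistake on a rarely visited page", "Low", "Low"),
]

for d in examples:
    print(f"severity={d.severity:4}  priority={d.priority:4}  {d.summary}")
```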
