Manual testing concepts course 1


<ul><li> 1. Why Do We Test Software? Testing is conducted to ensure that you develop a product that will prove to be useful to the end user. The primary objectives of testing are to assure that: the system meets the user's needs (has the right system been built?); the user requirements are built as specified (has the system been built right?). Other, secondary objectives of testing are to: instill confidence in the system through user involvement; ensure the system will work from both a functional and a performance viewpoint; ensure that the interfaces between systems work; establish exactly what the system does (and does not do) so that the user does not receive any "surprises" at implementation time; identify problem areas where the system deliverables do not meet the agreed-to specifications; improve the development processes that cause errors.</li></ul><p> 2. Testing is the systematic search for defects in all project deliverables. Testing is a process of verifying and/or validating an output against a set of expectations and observing the variances.
3. Verification: ensures that the system complies with an organization's standards and processes, relying on reviews or other non-executable methods. Did we build the system right? Validation: ensures that the system operates according to plan by executing the system functions through a series of tests that can be observed and evaluated. Did we build the right system? Variances: deviations of the output of a process from the expected outcome. These variances are often referred to as defects.
4. Testing is a Quality Control activity. Quality has two working definitions. Producer's viewpoint: the quality of the product meets the requirements. Customer's viewpoint: the quality of the product is fit for use, or meets the customer's needs.
5. Quality Assurance: a planned and systematic set of activities necessary to provide adequate confidence that products and services will conform to specified requirements and meet user needs; it covers, for example, estimation processes and testing processes and standards. Quality Control: the process by which product quality is compared with applicable standards, and the action taken when nonconformance is detected. Quality control activities focus on identifying defects in the actual products produced.
6. Testing Approaches. Static testing: a detailed examination of a work product's characteristics against an expected set of attributes, experiences, and standards. Some representative examples of static testing are: requirements walkthroughs or sign-off reviews; design or code inspections; test plan inspections; test case reviews. Dynamic testing: the process of verification or validation by exercising (or operating) a work product under scrutiny and observing its behavior in response to changing inputs or environments; the work product is executed to test the behavior of its logic and its response to inputs.
7. Project Life Cycle Models. Popular life cycle models: the V &amp; V life cycle model and the waterfall life cycle model.
8. [V &amp; V life cycle diagram: requirement levels on the verification side (business, system, solution, component, and application/function requirements, down to detailed design and build/code) map to corresponding test levels on the validation side (unit test, integration test at component level, system test at application level, systems integration test at solution level, user acceptance test, and operability test).]
9. Levels of Testing
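The left-to-right pairing in the V &amp; V diagram above can be sketched as a simple lookup. This is a hedged illustration: the exact pairings are one reading of the diagram, not a definitive mapping.

```python
# Each specification level on the verification (left) side of the V model is
# paired with the test level on the validation (right) side that checks it.
# Names follow the diagram above; the pairing itself is an interpretation.
V_MODEL_PAIRS = {
    "Business Requirements": "User Acceptance Test (UAT)",
    "Solution Requirements": "Systems Integration Test (Solution Level)",
    "Application / Function Requirements": "System Test (Application Level)",
    "Component Requirements": "Integration Test (Component Level)",
    "Detailed Design": "Unit Test",
}

def validating_test_level(spec_level: str) -> str:
    """Return the test level that validates a given specification level."""
    return V_MODEL_PAIRS[spec_level]
```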
10. Requirements Testing. Requirements testing involves the verification and validation of requirements through static and dynamic tests. Objectives: to verify that the stated requirements meet the business needs of the end user before the external design is started; to evaluate the requirements for testability. When: after the requirements have been stated. Input: detailed requirements. Output: verified requirements. Who: users &amp; developers. Methods: static testing techniques, checklists.
11. Design Testing. Design testing involves the verification and validation of the system design through static and dynamic tests. The validation testing of the external design is done during user acceptance testing, and the validation testing of the internal design is covered during unit, integration, and system testing. Objectives: to verify that the system design meets the agreed-to business and technical requirements before system construction begins; to identify missed requirements. When: after the external design is completed; after the internal design is completed. Input: external application design, internal application design. Output: verified external design, verified internal design. Who: business analysts &amp; developers. Methods: static testing techniques, checklists.
12. Unit Testing. Unit-level testing is the initial testing of new and changed code in a module. It verifies the program specifications against the internal logic of the program or module and validates the logic. Objectives: to test the function of a program or unit of code, such as a program or module; to test internal logic; to verify the internal design; to test path &amp; condition coverage; to test exception conditions &amp; error handling. When: after modules are coded. Input: detail design or technical design &amp; unit test plan. Output: unit test report. Who: developers. Methods: debugging, code analyzers, path/statement coverage tools.
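The unit-testing objectives above (internal logic, path/condition coverage, exception handling) can be sketched with a minimal example. The function under test and its rules are invented purely for illustration.

```python
def classify_age(age: int) -> str:
    """Unit under test: maps an age to a category (hypothetical module logic)."""
    if age < 0:
        raise ValueError("age cannot be negative")
    if age < 18:
        return "minor"
    return "adult"

def test_classify_age() -> None:
    # One test per code path, giving path/condition coverage of the unit.
    assert classify_age(10) == "minor"
    assert classify_age(30) == "adult"
    # Exception conditions & error handling.
    try:
        classify_age(-1)
        raise AssertionError("expected ValueError")
    except ValueError:
        pass

test_classify_age()
```

Each branch of the unit gets at least one test, which is exactly the path and condition coverage the slide calls for.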
13. Integration Testing. Integration-level tests verify proper execution of application components and do not require that the application under test interface with other applications. Communication between modules within the sub-system is tested in a controlled and isolated environment within the project. Objectives: to technically verify proper interfacing between modules and within sub-systems. When: after modules are unit tested. Input: detail design &amp; compound requirements &amp; integration test plan. Output: integration test report. Who: developers. Methods: white-box and black-box techniques.
14. System Testing. System-level tests verify proper execution of the entire application's components, including interfaces to other applications. Both functional and structural types of tests are performed to verify that the system is functionally and operationally sound. Objectives: to verify that the system components perform control functions; to perform inter-system tests; to demonstrate that the system performs both functionally and operationally as specified; to perform appropriate types of tests relating to transaction flow, installation, reliability, regression, etc. When: after integration testing. Input: detailed requirements &amp; external application design, master test plan &amp; system test plan. Output: system test report. Who: system testers.
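The module-to-module focus of integration testing (slide 13) can be illustrated with a small sketch. Both modules are invented for illustration; the point is that each unit has already passed unit testing, and what is now under test is the hand-off between them.

```python
def parse_order(raw: str) -> dict:
    """Module A: parses 'item,qty' text into a structured order."""
    item, qty = raw.split(",")
    return {"item": item.strip(), "qty": int(qty)}

def price_order(order: dict, unit_price: float) -> float:
    """Module B: consumes module A's output to compute an order total."""
    return order["qty"] * unit_price

def test_parse_then_price_integration() -> None:
    # The interface is the test subject: B must accept exactly what A emits,
    # with no external applications involved (a controlled, isolated test).
    order = parse_order("widget, 3")
    assert price_order(order, 2.5) == 7.5

test_parse_then_price_integration()
```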
15. Systems Integration Testing. Systems integration testing is a test level which verifies the integration of all applications, including interfaces internal and external to the organization, with their hardware, software, and infrastructure components in a production-like environment. Objectives: to test the co-existence of products and applications that are required to perform together in the production-like operational environment (hardware, software, network); to ensure that the system functions together with all the components of its environment as a total system; to ensure that system releases can be deployed in the current environment. When: after system testing. Input: master test plan, systems integration test plan. Output: systems integration test report. Who: system testers.
16. User Acceptance Testing. User acceptance tests (UAT) verify that the system meets user requirements as specified. Objectives: to verify that the system meets the user requirements. When: after system testing / systems integration testing. Input: business needs &amp; detailed requirements, master test plan, user acceptance test plan. Output: user acceptance test report. Who: customer.
17. Test Estimation. There are three estimation techniques: top-down estimation, expert judgment, and bottom-up estimation.
18. Top-down estimation is used in the initial stages of the project and is based on similar projects; past data plays an important role in this form of estimation. Function Points model: function points (FP) measure size in terms of the amount of functionality in a system. Expert judgment: if someone has experience in certain types of projects, their expertise can be used to estimate the cost that will be incurred in implementing the project. Bottom-up estimation: this cost estimate can be developed only when the project is defined, as in a baseline. The WBS (Work Breakdown Structure) must be defined and the scope must be fixed.
The tasks can then be broken down to the lowest level and a cost attached to each; these can then be added up to the top baselines, thereby giving the cost estimate.
19. Test Strategy. A test strategy covers: who (in generic terms) will conduct the testing; the methods, processes, and standards used to define, manage, and conduct all levels of testing of the application; which levels of testing are in scope / out of scope; which types of projects the application will support; test focus; test environment strategy; test data strategy; test tools; metrics; OATS.
20. Test Plan (MTP). A document prescribing the approach to be taken for the intended testing activities. The plan typically identifies the items to be tested, the test objectives, the testing to be performed, test schedules, entry/exit criteria, personnel requirements, reporting requirements, evaluation criteria, and any risks requiring contingency planning: what will be tested; how testing will be performed; what resources are needed; the test scope, focus areas, and objectives; the test responsibilities; the test strategy for the levels and types of test for this release; the entry and exit criteria; any risks, issues, assumptions, and test dependencies; the test schedule and major milestones.
21. Test Techniques. White-box testing: evaluation techniques that are executed with knowledge of the implementation of the program. The objective of white-box testing is to test the program's statements, code paths, conditions, or data flow paths: how it is done, not what is done; identify all decisions, conditions, and paths. Black-box testing: evaluation techniques that are executed without knowledge of the program's implementation. The tests are based on an analysis of the specification of the component, without reference to its internal workings: what is done, not how it is done. Equivalence partitioning: equivalence partitioning is a method for developing test cases by analyzing each possible class of values.
In equivalence partitioning you can select any element from the valid equivalence class or from an invalid equivalence class.
22. Boundary Value Analysis. Boundary value analysis is one of the most useful test case design methods and is a refinement of equivalence partitioning. Boundary conditions are those situations directly on, above, and beneath the edges of input equivalence classes and output equivalence classes. In boundary analysis, one or more elements must be selected to test each edge. Error guessing: based on past experience, test data can be created to anticipate those errors that will most often occur. Using experience and knowledge of the application, invalid data representing common mistakes a user might be expected to make can be entered to verify that the system will handle these types of errors. (Test case template.)
23. Types of Tests. Functional testing: the purpose of functional testing is to ensure that the user functional requirements and specifications are met. Test conditions are generated to evaluate the correctness of the application.
24. Structural testing: designed to verify that the system is structurally sound and can perform the intended tasks. Audit and controls testing: verifies the adequacy and effectiveness of controls and ensures the capability to prove the completeness of data processing results. Installation testing: the purpose of this testing is to ensure that all required components are in the installation package, that the installation procedure is user-friendly and easy to use, and that the installation documentation is complete and accurate.
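The two techniques above, equivalence partitioning and boundary value analysis, can be sketched for a hypothetical input rule, "quantity must be an integer from 1 to 100". The rule and the system under test are invented for illustration.

```python
# System under test (invented): accepts quantities from 1 to 100 inclusive.
VALID_RANGE = range(1, 101)

def accepts_quantity(qty: int) -> bool:
    return qty in VALID_RANGE

# Equivalence partitioning: one representative element per class.
valid_class = [50]            # any element of the valid class 1..100
invalid_classes = [-5, 150]   # below-range and above-range invalid classes

# Boundary value analysis: values directly on, just inside, and just
# beyond each edge of the valid input class.
boundary_values = [0, 1, 2, 99, 100, 101]

for qty in valid_class:
    assert accepts_quantity(qty)
for qty in invalid_classes:
    assert not accepts_quantity(qty)
assert [accepts_quantity(q) for q in boundary_values] == [
    False, True, True, True, True, False,
]
```

Note how boundary analysis finds the classic off-by-one defects (a check written as `qty > 1` or `qty < 100`) that a mid-partition value like 50 would never expose.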
25. Inter-system testing: interface or inter-system testing ensures that the interconnections between applications function correctly. Parallel testing: parallel testing compares the results of processing the same data in both the old and the new system; it is useful when a new application replaces an existing system. Regression testing: regression testing verifies that no unwanted changes were introduced to one part of the system as a result of making changes to another part of the system. Usability testing: the purpose of usability testing is to ensure that the final product is usable in a practical, day-to-day fashion.
26. Ad-hoc testing: testing carried out using no recognized test case design technique. Here the testing is done using the tester's knowledge of the application, and the tester exercises the system randomly, without any test cases, specifications, or requirements. Smoke testing: smoke testing is done when a build of the application is first deployed; it is conducted to ensure that the most crucial functions of a program are working. It is a main-functionality-oriented test: a normal health check on a build of an application before taking it into in-depth testing. Sanity testing: a sanity test is used to determine that a small section of the application is still working after a minor change; it focuses on one or a few areas of functionality and is run once a new build is obtained with minor revisions.
27. Backup and recovery testing: recovery is the ability of an application to be restarted after failure.
The process usually involves backing up to a point in the processing cycle where the integrity of the system is assured, and then re-processing the transactions past the original point of failure. Contingency testing: verifies that an application and its databases, networks, and operating processes can all be migrated smoothly to another site. Performance testing: performance testing is designed to test whether the system meets the desired level of performance in a production-like environment. Security testing: security testing of an application system is required to ensure that confidential information in the system, and in other affected systems, is protected against loss, corruption, or misuse, whether by deliberate or accidental action. Stress / volume testing: stress testing is defined as the processing of a large number of transactions through the system in a defined period of time, verifying that the production system can process large volumes of transactions within the expected timeframe and that the system architecture and construction are capable of processing large volumes of data.
28. RTM / Test Coverage Matrix. A requirements traceability matrix (RTM) is a worksheet used to plan and cross-check to ensure that all requirements and functions are covered adequately by test cases.
29. Defect Life Cycle.
30. New: when a bug is found for the first time, the software tester communicates it to his/her team leader (Test Leader) in order...</p>
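<p>The defect life cycle named above can be sketched as a state-transition table. Only the "New" state appears in the transcript before it cuts off; the remaining states and transitions are common conventions, assumed here purely for illustration.</p>

```python
# Hedged sketch of a typical defect life cycle. State names beyond "New"
# are assumptions (common practice), not taken from the slides.
DEFECT_TRANSITIONS = {
    "New": {"Open", "Rejected"},     # leader triages the reported bug
    "Open": {"Fixed"},               # developer resolves the defect
    "Fixed": {"Retest"},             # tester re-runs the failing case
    "Retest": {"Closed", "Reopened"},
    "Reopened": {"Open"},
    "Rejected": set(),               # terminal states
    "Closed": set(),
}

def is_valid_transition(current: str, nxt: str) -> bool:
    """Check whether a defect may move from `current` to `nxt`."""
    return nxt in DEFECT_TRANSITIONS.get(current, set())
```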