SDLC Final


  • 8/4/2019 Sdlc Final

    1/29

    QAfast

SDLC: Basics of the Software / System Development Life Cycle

Software Development Life Cycle (SDLC) is a conceptual model used in project management that describes the different phases involved in a system development project, from an initial feasibility study through maintenance of the completed application. Software development is a complex process comprising many stages. Each stage requires a great deal of paperwork and documentation in addition to the development and planning work. SDLC mainly goes through the following phases:

    Requirements:

    Designs:

Coding & Implementation:

    Development:

    Testing:

Software Release:

Maintenance:

Requirements — Business Analysts: Sales and marketing people have first-hand knowledge of their customers' requirements, which is captured in a Marketing Requirement Document (MRD), also known as an SRD / SRS / FRS.

Design (Technical Design & Functional Design) — Architects, Engineers + Senior Software Developers: Based upon these requirements, senior software developers create the architecture for the product along with functional and technical design specifications. Then the development process starts. (During the design phase, testers work with developers to determine what aspects of a design are testable and under what parameters those tests should work.)

Development Process — Programmers, Stakeholders + QA Team: After the initial development phase, software testing begins, and many times it is done in parallel with the development process, defining what needs to be tested and the necessary testing requirements. Stakeholders: Application Managers, Project Managers, Business Analysts, Database Designers, Developers, and End-Users.

Testing and Debugging — QA Team: First of all, a QA tester must have intimate knowledge of what the application is required to do. Without understanding the system's functionality, it will be difficult or impossible to create tests.

Documentation — Programmers + QA Team: Documentation is also part of the development process because a product cannot be brought to market without manuals. Once development and testing are done, the software is released and the support cycle begins.

Software Release — QA Team + End-Users + Third-party SMEs: This phase may include bug fixes and new releases.

Maintenance — Business Analysts + End-Users + Programmers + QA Team: Dealing with newly discovered problems or new requirements can take far more time than the initial development of the software.

Requirements (Requirements Gathering / Requirements Analysis): Requirements are traditionally the foundation for test planning, based on the following activities:

    Requirement Analysis

    Competitive analysis

    Feasibility studies

    Risk Management.

Requirement gathering is usually the first part of any software product. This stage starts when you are thinking about developing software. In this phase, you meet customers or prospective customers, analyzing market requirements and features that are in demand. You also find


    Syed Kamrul


out if there is a real need in the market for the software product you are trying to develop. In this stage, marketing and sales people, or people who have direct contact with the customers, do most of the work. These people talk to customers and try to understand what they need. A comprehensive understanding of the customers' needs and writing down the features of the proposed software product are the keys to success in this phase. This phase is actually the base for the whole development effort.

MRD (Marketing Requirement Document)
SRD (System Requirements Document)
SRS (System Requirements Specifications)
FRS (Functional Requirements Specifications)

Design (Analysis and Design): The technical architecture used by the application is described by presenting the various hardware, software and networking components, and their interfaces.

Design has two parts:

    Functional Design

    Technical Design.

Functional Design: A functional design assures that each modular part of a computer program has only one responsibility and performs that responsibility with a minimum of side effects on other parts. Example modules:

Customer Module
Order Module
Payment Module
Class Module

    Report Module
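The single-responsibility idea above can be sketched in code. The following Python example is illustrative only (the class and method names are invented, not taken from any real system): each module owns exactly one concern, and the order module asks the customer module for data rather than duplicating it, so changes in one module have minimal side effects on the other.

```python
class CustomerModule:
    """Owns customer records only."""
    def __init__(self):
        self._customers = {}

    def add_customer(self, customer_id, name):
        self._customers[customer_id] = name

    def get_name(self, customer_id):
        return self._customers[customer_id]


class OrderModule:
    """Owns orders only; it asks CustomerModule for customer data
    instead of keeping its own copy."""
    def __init__(self, customers):
        self._customers = customers
        self._orders = []

    def place_order(self, customer_id, item):
        # Validate against the customer module's single source of truth.
        name = self._customers.get_name(customer_id)
        self._orders.append((customer_id, name, item))
        return len(self._orders)


customers = CustomerModule()
customers.add_customer(1, "Ada")
orders = OrderModule(customers)
order_count = orders.place_order(1, "widget")
```

Because OrderModule only talks to CustomerModule through one small method, replacing the customer storage (say, with a database) would not ripple into the order logic.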

Technical Design: In this part, technical aspects are taken care of, e.g. the selection of an operating system: DOS / Windows / Windows NT / Windows 2000 / HP-UX / Sun Solaris / Linux / Red Hat / IBM AIX, etc.

The creating-architecture-and-design-documents phase begins when you have all of the requirements collected and arranged; it is then the turn of the technical architecture team, consisting of highly qualified technical specialists, to create the architecture of the product. The architecture defines the different components of the product and how they interact with each other. In many cases the architecture also defines the technologies used to build the product. While creating the architecture documents of the project, the team also needs to consider the timelines of the project. This refers to the target date when the product is required to be on the market. Many excellent products fail because they are either too early or too late to market. The marketing and sales people usually decide a suitable time frame to bring the product to market. Based on the timeline, the architecture team may drop some features of the product if it is not possible to bring the full-featured product to market within the required time limits.

Once the components of the product have been decided and their functionality defined, interfaces are designed for these components to work together. In most cases, no component works in isolation; each one has to coordinate with other components of the product. Interfaces are the rules and regulations that define how these components will interact with each other. There may be major problems down the road if these interfaces are not designed properly and in a detailed way. Different people will work on different components of any large software development project, and if they don't fully understand how a particular component will communicate with others, integration becomes a major problem. For some products, new hardware is also required to make use of technology advancements. The architects of the product also need to consider this aspect of the product.

After defining the architecture, software components and their interfaces, the next phase of development is the creation of design documents. At the architecture level, a component is defined as a black box that provides certain functionality. At the design-documents stage, you have to define what is in that black box. Senior software developers usually create design documents, and these documents define individual software components down to the level of functions and procedures. The design document is the last document completed before development of the software begins. These design documents are passed on to software developers and they start coding. Architecture documents and MRDs typically need to stay in sync, as sales and marketing will work from MRDs while engineering works from engineering documents.

What is Software Testing? The goal of software testing is to execute the code with the intent of finding errors and to verify that the finished product performs as expected.

Ensure that the finished product meets the SRS

    Verify that Performance Requirements have been met

    Stress testing confirms the Application can handle the required load

    Certify that the System functions on required Hardware and software platforms

Intentionally attempt to make things go wrong to determine if things happen when they shouldn't, or things don't happen when they should. It is oriented to DETECTION.
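A minimal sketch of this detection-oriented mindset, using a hypothetical withdraw function (not from any real system): the test deliberately drives the code into a failure case and checks both that the expected thing happens and that the wrong thing does not.

```python
def withdraw(balance, amount):
    """Hypothetical function under test."""
    if amount <= 0:
        raise ValueError("amount must be positive")
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

# Normal path: the expected thing happens.
remaining = withdraw(100, 30)

# Failure path: intentionally make things go wrong and
# confirm the failure we expect actually occurs.
try:
    withdraw(100, 500)
    overdraft_detected = False   # wrong: the overdraft slipped through
except ValueError:
    overdraft_detected = True    # right: the error was detected
```

If `overdraft_detected` were False, the test would have found a defect: something happened (a successful overdraft) that should not have.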

What is Quality Assurance? Software QA involves the entire software development PROCESS: monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and properly dealt with. It is oriented to PREVENTION.


What is an error? A software bug is an error, flaw, mistake, failure, or fault in a computer program that prevents it from behaving as intended. An application that is poorly documented, inefficient, or awkward to use is an application with bugs.

Types of errors:

Functional error: Can be caused by ambiguous requirement specifications.
System error: Operating systems, database systems, communications software, printer drivers.
Communication error: A bad NIC, a poorly connected cable.
Logic error: If the code does not perform as the requirements dictate, the result is a logic error.
User Interface error: Objects that don't function as expected are certainly included in the rolls of UI errors.
Data error: Incorrectly defined data types for input.
Coding error: Syntax errors.
Testing error: Changes to the environment cause testing errors.
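As a small illustration of a logic error (the requirement and function names here are invented): suppose the requirement says customers aged 18 or older qualify, but the code uses a strict comparison. The code runs without crashing, yet disagrees with the requirement at the boundary value.

```python
def qualifies_buggy(age):
    # Logic error: the requirement says 18 *or older*,
    # but this strict comparison excludes exactly 18.
    return age > 18

def qualifies_fixed(age):
    # Matches the requirement.
    return age >= 18

boundary_bug = qualifies_buggy(18)   # disagrees with the requirement
boundary_ok = qualifies_fixed(18)    # agrees with the requirement
```

A test at the boundary value (age 18) is what catches this class of defect; testing only ages 17 and 25 would let it slip through.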

    What is Manual Testing?

    To verify whether or not the AUT performs the designated tasks, you manually execute a series of tests.

    What is Automated Testing?

Testing employing software tools which execute tests without manual intervention; it can be applied to GUI, performance, API, etc. testing.

The use of software to control the execution of tests, the comparison of actual outcomes to predicted outcomes, the setting up of test preconditions, and other test control and test reporting functions.

Data-Driven Testing: Testing in which the action of a test case is parameterized by externally defined data values, maintained as a file or spreadsheet. It is a common technique in automated testing: to verify whether or not the AUT performs the designated tasks, you automate a series of tests that run without human intervention.
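A minimal data-driven sketch in Python (the discount function and the data values are hypothetical): the test logic is written once, and the test action is parameterized by externally maintained rows, here an inline CSV standing in for a real file or spreadsheet.

```python
import csv
import io

def discount(total):
    """Hypothetical function under test: 10% off orders of 100 or more."""
    return total * 0.9 if total >= 100 else total

# In real data-driven testing this would live in a separate
# file or spreadsheet maintained outside the test code.
TEST_DATA = """total,expected
50,50
100,90
200,180
"""

# One loop drives every test case; adding a case means
# adding a row of data, not writing new test code.
results = []
for row in csv.DictReader(io.StringIO(TEST_DATA)):
    actual = discount(float(row["total"]))
    results.append(actual == float(row["expected"]))

all_passed = all(results)
```

The same pattern scales to hundreds of rows, which is why data-driven testing pairs so naturally with automation.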

What is Automation technology? Automation is a technology that makes it possible to access software objects inside one application from other applications. These objects can easily be created and manipulated using a scripting or programming language such as VBScript or VC++. Automation enables you to control the functionality of an application programmatically.

Manual vs. Automated Testing: Manual testing can no longer keep pace in this dynamically changing environment. Some writers believe that test automation is so expensive relative to its value that it should be used sparingly. Others, such as advocates of agile development, recommend automating 100% of all tests. A challenge with automation is that automated testing requires automated test oracles (an oracle is a mechanism or principle by which a problem in the software can be recognized). Such tools have value in load testing software (by signing on to an application with hundreds or thousands of instances simultaneously), or in checking for intermittent errors in software. The success of automated software testing depends on complete and comprehensive test planning. Software development strategies such as test-driven development are highly compatible with the idea of devoting a large part of an organization's testing resources to automated testing. Many large software organizations perform automated testing.

    Benefits of Automated Testing

Fast: Runs tests significantly faster than human users.

Consistent: Automated tests provide the same test results every time. Therefore, you use automated tests for regression testing.

Reliable: Tests perform precisely the same operations each time they are run, thereby eliminating human error.

Repeatable: You can test how the Web site or application reacts after repeated execution of the same operations.

Ease of Use with Depth of Function: The Keyword View and visual references to the screens being tested provide rich options for verifying expected results and enhancing test capabilities without scripting.

Data-Driven: You can drive execution paths and iterations with parameterized data in Excel-like data sheets.

Programmable: You can program sophisticated tests that bring out hidden information.

Comprehensive: You can build a suite of tests that covers every feature in your Web site or application (in practice, a suite that covers many application features).

Reusable: The tests that you automate can be reused during different phases of the testing


process. You can reuse tests on different versions of a Web site or application, even if the user interface changes.

Documented: Automatically generated step documentation in plain English improves test clarity and user transferability.

Productivity: Automated testing enables you to run tests unattended. Therefore, human resources that are otherwise involved in the testing process can be used to accomplish other tasks, and the overall testing productivity of your organization increases.

    Drawbacks of Manual Testing

Time-consuming: Manual testing is tedious and time-consuming.

Heavy investment: Manual testing requires a heavy investment in human resources.

Time constraint: Time constraints often make it impossible to manually test every feature thoroughly before the application is released. This leaves you wondering whether serious bugs have gone undetected.

METHODOLOGY: Software Development Methods

Sequential (Waterfall model, V-model): A static, sequential flow is followed in the Waterfall method.
Iterative (incremental and evolutionary development): A dynamic, spiral or incremental flow is followed in the RUP method. Structured development is a combination of both.

Rapid Application Development (RAD) & Joint Application Development (JAD): These are paradigms in which development is done in sessions attended jointly by software developers and user representatives. The fundamental idea of RAD and JAD is to jointly design the screens in the system. The developers then quickly code minimal functionality on those screens and present them to the users.

Test Development / Test Planning: These phases are accomplished using documents and graphical representations, such as: Test Strategy, Test Procedures, Functional Specifications, Test Plan, Test Scenarios, Test Bed creation, Test Cases, Test Scripts, etc.

Test planning: test objects (e.g. the functions to be tested) are identified.
Test case investigation: test cases for these objects are created and described in fine detail.
Test data creation: test data substantiates the test cases.
Test script creation: test scripts make the tests executable.

Test Bed: An execution environment configured for testing. It may consist of specific hardware, an OS, a network topology, the configuration of the product under test, and other application or system software. The Test Plan for a project should enumerate the test bed(s) to be used.

Test Script: Commonly used to refer to the instructions for a particular test that will be carried out by an automated test tool.

Test Scenario: Definition of a set of test cases or test scripts and the sequence in which they are to be executed.

Test Specification: A document specifying the test approach for a software feature or combination of features, and the inputs, predicted results and execution conditions for the associated tests.

Test Suite: A collection of tests used to validate the behavior of a product. The scope of a Test Suite varies from organization to organization. There may be several Test Suites for a particular product, for example. In most cases, however, a Test Suite is a high-level concept, grouping together hundreds or thousands of tests related by what they are intended to test.

Written test cases are usually collected into test suites; the most common term for a collection of test cases is a test suite. The test suite often also contains more detailed instructions or goals for each collection of test cases. It definitely contains a section where the tester identifies the system configuration used during testing. A group of test cases may also contain prerequisite states or steps, and descriptions of the following tests. Collections of test cases are sometimes incorrectly termed a test plan. They may also be called a test script, or even a test scenario.
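In Python's unittest module, for example, test cases can be grouped into a suite and run as one collection. This sketch uses trivial invented test cases just to show the grouping; a real suite would hold the product's actual tests.

```python
import io
import unittest

class TestLogin(unittest.TestCase):
    def test_accepts_known_user(self):
        self.assertIn("alice", {"alice", "bob"})

class TestCheckout(unittest.TestCase):
    def test_total_is_sum_of_items(self):
        self.assertEqual(sum([10, 15]), 25)

# Collect individual test cases into one suite...
suite = unittest.TestSuite()
suite.addTests(unittest.defaultTestLoader.loadTestsFromTestCase(TestLogin))
suite.addTests(unittest.defaultTestLoader.loadTestsFromTestCase(TestCheckout))

# ...and run the whole collection together, getting one result object.
result = unittest.TextTestRunner(stream=io.StringIO(), verbosity=0).run(suite)
```

The runner's result object reports on the suite as a whole (tests run, failures, errors), which is what makes a suite more than a loose pile of cases.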


An executable test suite is a test suite that is ready to be executed. This usually means that there exists a test harness integrated with the suite, such that the test suite and the test harness together can work at a sufficiently detailed level to correctly communicate with the system under test (SUT). The counterpart of an executable test suite is an abstract test suite. Often, however, the terms test suite and test plan are used with roughly the same meanings as executable and abstract test suite, respectively.

Implementation & Coding: The software developers take the design documents and development tools (editors, compilers, debuggers, etc.) and start writing software. This is usually the longest phase in the product life cycle. Each developer has to write his/her own code and collaborate with other developers to make sure that different components can interoperate with each other. A revision control system such as CVS (Concurrent Versions System) is needed in this phase. There are a few other open source revision control systems as well as commercial options. The version control system provides a central repository to store individual files. A typical software project may contain anywhere from hundreds to thousands of files. In large and complex projects, someone also needs to decide the directory hierarchy so that files are stored in appropriate locations. During the development cycle, multiple persons may modify files. If everyone is not following the rules, this may easily break the whole compilation and build process. For example, duplicate definitions of the same variables may cause problems. Similarly, if included files are not written properly, you can easily cause the creation of loops. Other problems pop up when multiple files are included in a single file with conflicting definitions of variables and functions. Coding guidelines should also be defined by architects or senior software developers. For example, if the software is intended to be ported to some other platform as well, it should be written to a standard such as ANSI.

During the implementation process, developers must write enough comments inside the code so that if anybody starts working on the code later on, he/she is able to understand what has already been written. Writing good comments is very important, as all other documents, no matter how good they are, will be lost eventually. Ten years after the initial work, you may find that the only information available is what is present inside the code in the form of comments.

Development tools also play an important role in this phase of the project. Good development tools save a lot of time for the developers, as well as saving money in terms of improved productivity. The most important development tools for time saving are editors and debuggers. A good editor helps a developer write code quickly. A good debugger helps make the written code operational in a short period of time. Before starting the coding process, you should spend some time choosing good development tools.

Review meetings during the development phase also prove helpful. Potential problems are caught earlier in development. These meetings are also helpful for keeping track of whether the product is on time, or whether more effort is needed to complete it in the required time frame. Sometimes you may also need to make some changes in the design of some components because of new requirements from the marketing and sales people. Review meetings are a great tool to convey these new requirements. Again, architecture documents and MRDs are kept in sync with any changes/problems encountered during development.

Testing and Quality Assurance (also discussed above): Testing is probably the most important phase for long-term support as well as for the reputation of the company. If you don't control the quality of the software, it will not be able to compete with other products on the market. If software crashes at the customer site, your customer loses productivity as well as money, and you lose credibility. Sometimes these losses are huge. Unhappy customers will not buy your other products and will not refer other customers to you. You can avoid this situation by doing extensive testing. This testing is referred to as Quality Assurance, or QA, in most of the software world.

Usually, testing starts as soon as the initial parts of the software are available. There are multiple types of testing, and these are explained later in this section. Each of them has its own importance.

    Release of Software:

NOTE: The SDLC begins when you are thinking about developing software and ends when the software is no longer in use.

Methodology: Process Models. A decades-long goal has been to find repeatable, predictable processes or methodologies that improve productivity and quality. Some try to systematize or formalize the seemingly unruly task of writing software. Others apply project management techniques to writing software. Without project management, software projects can easily be delivered late or over budget. With large numbers of software projects not meeting their expectations in terms of functionality, cost, or delivery schedule, effective project management is proving difficult.

    Waterfall processes

The traditional methodology for delivering software projects has been, for years, the Waterfall method. If project requirements are static or have been clearly stated from the beginning, then this method is useful. The best-known and oldest process is the waterfall model, where developers (roughly) follow these steps in order: they state requirements, analyze them, design a solution approach, architect a software framework for that solution, develop code, test (perhaps unit tests then system tests), deploy, and maintain. After each step is finished, the process proceeds to the next step, just as builders don't revise the foundation of a house after the framing has been erected. There is a misconception that the process has no provision for correcting errors in early steps (for example, in the requirements); in fact, this is where the domain of requirements management comes in, which includes change control. This approach is used in high-risk projects, particularly large defense contracts. The problems in waterfall do not arise from "immature engineering practices, particularly in requirements analysis and requirements management." Studies of the failure rate of the DOD-STD-2167 specification, which enforced waterfall, have shown that the more closely a project follows its process, specifically in up-front requirements gathering, the more likely the project is to release features that are not used in their current form.


More often, the supposed stages are part of a joint review between customer and supplier; the supplier can, in fact, develop at risk and evolve the design, but must sell off the design at a key milestone called the Critical Design Review. This shifts engineering burdens from engineers to customers, who may have other skills.

The waterfall model is a sequential software development model (a process for the creation of software) in which development is seen as flowing steadily downwards (like a waterfall) through the phases of requirements analysis, design, implementation, testing (validation), integration, and maintenance. The origin of the term "waterfall" is often cited to be an article published in 1970 by W. W. Royce; ironically, Royce himself advocated an iterative approach to software development and did not even use the term "waterfall". Royce originally described what is now known as the waterfall model as an example of a method that he argued "is risky and invites failure".

In 1970 Royce proposed what is now popularly referred to as the waterfall model as an initial concept, a model which he argued was flawed (Royce 1970). His paper then explored how the initial model could be developed into an iterative model, with feedback from each phase influencing previous phases, similar to many methods widely used and highly regarded today. Ironically, it is only the initial model that received notice; his own criticism of this initial model has been largely ignored. The "waterfall model" quickly came to refer not to Royce's final, iterative design, but rather to his purely sequentially ordered model. This article will use this popular meaning of the phrase waterfall model. For an iterative model similar to Royce's final vision, see the spiral model.

Despite Royce's intentions for the waterfall model to be modified into an iterative model, use of the "waterfall model" as a purely sequential process is still popular, and, for some, the phrase "waterfall model" has since come to refer to any approach to software creation which is seen as inflexible and non-iterative. Those who use the phrase waterfall model pejoratively for non-iterative models they dislike usually see the waterfall model itself as naive and unsuitable for a "real world" process.

    Usage of the waterfall model

    The unmodified "waterfall model". Progress flows from the top to the bottom, like a waterfall.

    The V-Model.

Agile Testing: Testing practice for projects using agile methodologies, treating development as the customer of testing and emphasizing a test-first design paradigm. See also Test-Driven Development.


Agile software development processes are built on the foundation of iterative development. To that foundation they add a lighter, more people-centric viewpoint than traditional approaches. Agile processes use feedback, rather than planning, as their primary control mechanism. The feedback is driven by regular tests and releases of the evolving software. Agile processes seem to be more efficient than older methodologies, using less programmer time to produce more functional, higher quality software, but they have the drawback, from a business perspective, that they do not provide long-term planning capability. In essence, they say that they will provide the most bang for the buck, but won't say exactly when that bang will be.

KEY 1: User Stories occur at each level of activity.
KEY 2: Development and release occur in a module-by-module fashion.
KEY 3: Constant communication between the client and within the development team.
KEY 4: Team members are physically present in the same room.
KEY 5: No code is integrated into the repository without unit testing.
KEY 6: Change management, along with refactoring of code, is an intrinsic part of the process.
KEY 7: Continuous integration occurs after each release.
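The unit-testing key (no code integrated into the repository without unit testing) can be sketched as a simple gate that runs a module's unit tests and only allows integration when they all pass. The module and test names below are invented for illustration; a real gate would run the full test suite for the changed code.

```python
import io
import unittest

class TestNewModule(unittest.TestCase):
    """Stand-in unit tests for the module awaiting integration."""
    def test_addition(self):
        self.assertEqual(1 + 1, 2)

    def test_string_upper(self):
        self.assertEqual("abc".upper(), "ABC")

def may_integrate():
    """Gate: run the module's unit tests; integrate only if all pass."""
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestNewModule)
    result = unittest.TextTestRunner(stream=io.StringIO(), verbosity=0).run(suite)
    return result.wasSuccessful()

allowed = may_integrate()
```

In a continuous-integration setup, this check runs automatically on every commit, so failing code never reaches the shared repository.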

An alternative approach is the Iterative Development Life Cycle (sometimes referred to as the Iterative Life Cycle). Analysis is done just the same as with the less frequently used Waterfall method.

Iterative processes: Iterative development prescribes the construction of initially small but ever-larger portions of a software project, to help all those involved uncover important issues early, before problems or faulty assumptions can lead to disaster. Iterative processes are preferred by commercial developers because they offer the potential of reaching the design goals of a customer who does not know how to define what they want.

    However, once analysis is done, each requirement is prioritized as follows:

    High - These are mission critical requirements that absolutely have to be done in the first release.

    Medium - These are requirements that are important but can be worked around until implemented.

    Low - These are requirements that are nice-to-have but not critical to the operation of the software.

    Once priorities have been established, the releases are planned. The first release (Release 1.0) will contain just the High priority items and should take about 1 to 3 months to deliver.
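As a sketch, this priority-driven release planning amounts to a simple filter over the requirements list (all requirement names below are hypothetical, not from the original document):

```python
# Hypothetical requirements, each tagged with a priority.
requirements = [
    ("User login", "High"),
    ("Export to PDF", "Medium"),
    ("Dark mode", "Low"),
    ("Process payments", "High"),
]

def plan_release(reqs, priority):
    """Select the requirements slated for a release of the given priority."""
    return [name for name, prio in reqs if prio == priority]

# Release 1.0 contains only the mission-critical (High) items:
release_1_0 = plan_release(requirements, "High")
print(release_1_0)
```

Medium and Low items would then be scheduled into later iterative releases using the same selection.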

    Below are the advantages of the Iterative Life Cycle:

    The Design phase goes much faster, as designs are only done on the items in the current release (Release 1.0, for example).

    Coding and Testing go much faster because there are fewer items to code and test.

    If major design flaws are found, re-work is much faster since the functional areas have been greatly reduced.

    The client gets into production in less than 3 months, allowing them to begin earning revenue or reducing expenses quicker with their product.

    If market conditions change for the client, changes can be incorporated in the next iterative release, allowing the software to be much more nimble.

    7

    Syed Kamrul


    As the software is implemented, the client can make recommendations for the next iteration due to experiences learned in the past iteration.

    Our experience has found that you should space iterations at least 2 to 3 months apart. If iterations are closer than that, you spend too much time on convergence and the project timeframe expands. During the coding phase, code reviews must be done weekly to ensure that the developers are delivering to specification, and all source code is put under source control. Also, full installation routines are to be used for each iterative release, as would be done in production.

    Six Sigma is a methodology to manage process variations that uses data and statistical analysis to measure and improve a company's operational performance. It works by identifying and eliminating defects in manufacturing and service-related processes. The maximum permissible defect rate is 3.4 defects per million opportunities. However, Six Sigma is manufacturing-oriented and needs further research on its relevance to software development.

    What is Six Sigma?

    Six Sigma is the latest reincarnation of what used to be known as Total Quality Management or Continuous Improvement. The basic idea is simple: while most companies are organized by functions such as sales, marketing, operations, etc., the actual work process oftentimes crosses these functional boundaries. And while each function aims at improving the efficiency with which it performs its part of the overall process, the effectiveness of the work viewed from the perspective of the customer is less than stunning.

    Examples of poorly designed and managed processes are everywhere: from handling customer complaints in call centers that require multiple transfers, to poor coordination between engineering and manufacturing resulting in products that require extensive rework and testing. Thus, almost every business process can be improved.

    Six Sigma provides a framework for understanding business processes, identifying the root causes of problems, and making targeted changes to improve the performance against business and customer needs. The Six Sigma tools such as Pareto Charts, Process Maps, and Regression Analysis are used in a structured way to ensure that problems are systematically addressed, typically following a five-step method known as DMAIC:

    Define the problem, the process, and the customer needs.
    Measure the performance of the underlying process.
    Analyze the data to identify root causes.
    Improve the process by finding solutions for those causes.
    Control the performance once the improvements are in place.

    The term Six Sigma describes the aspiration: a process performing at Six Sigma levels results in fewer than four defects per million opportunities (or a quality level of 99.9996%), with a defect defined as anything that does not meet customer requirements. While that level of performance is often not required, the underlying process is flexible enough to help companies address a wide range of process problems, such as cycle times, product and service defects, customer complaints, etc.
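The defect arithmetic behind these sigma levels is straightforward. The sketch below (with made-up defect counts) computes defects per million opportunities (DPMO) and the corresponding quality level:

```python
def dpmo(defects, units, opportunities_per_unit):
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

def quality_level(dpmo_value):
    """Percentage of defect-free opportunities for a given DPMO."""
    return 100.0 * (1 - dpmo_value / 1_000_000)

# Hypothetical data: 17 defects across 5,000 units, each unit having
# 10 opportunities for a defect.
rate = dpmo(17, 5_000, 10)
print(rate, quality_level(rate))

# The Six Sigma target of 3.4 DPMO corresponds to a 99.99966% quality level:
print(quality_level(3.4))
```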

    Six Sigma projects can make a substantial contribution to the bottom line by reducing process cost, increasing revenue, improving cash flow, eliminating the need for capital expenditure, etc. Equally important are three "soft" benefits:

    Improves employee morale and effectiveness: Over time, even processes that were designed well in the beginning become obsolete, cumbersome, and complex. As the business grows, so does complexity. Employees react to process problems with ad-hoc fixes, which in the long run become counterproductive. (A classic example is the many signatures required for approving small capital expenditures, which are often a reaction to isolated cases of abuse.) Providing a common methodology and framework to address those issues helps employees become more productive and effective. Failure to address these issues can result in lower morale and, in some instances, loss of talent.

    Sharpens the focus on customer needs: Six Sigma measures defects from the customer perspective. By integrating customer requirements into the problem-solving process, the methodology helps safeguard against shortsighted cost reductions that do not make life better for the customer.

    Manages by fact and strategic alignment: Six Sigma provides a very effective framework for developing and using dashboards and scorecards that align with the strategy of the business. Instead of each department using a different set of measures that are often at odds with one another, the Six Sigma methodology helps to align metrics with the strategic objectives of the business and to ensure that everybody in the organization has a clear line of sight between their role and the objectives of the business.


    Iterative, Spiral, or RUP Methodology: RUP recommends creating at least two test cases for each requirement. One of them should perform Positive testing of the requirement and the other should perform Negative testing.

    Rational Unified Process (RUP)

    RUP is an iterative development process, consisting of four phases and nine disciplines. The four phases are carried out sequentially, and the nine disciplines are carried out iteratively throughout the four phases. This methodology is best suited for developing systems with object- and/or component-based technologies.

    Phases of Rational Unified Process: The four phases are

    Inception

    Technical Design - Elaboration

    Construction - Coding

    Transition

    In addition to the above four phases, post-project support is also included in the HyTech Professionals project development life cycle.


    Phase 1 - Project Initiation (Inception): This is the exploratory phase of the project. The project objective and description are described at this stage. The purpose of this phase is to collect and understand business requirements, detail the project plan, and agree upon a high-level statement of work. This phase identifies the project's primary objectives, assumptions, constraints, deliverables, and acceptance criteria.

    Phase 2 - Technical Design (Elaboration): In this phase, the architecture of the system is designed. The goal is to translate requirements and specifications into a technical solution to produce a Technical Design. High-Level Design deals with the overall architecture and framework of the project (e.g., the project is decomposed into modules/functions/entities/classes), whereas Low-Level Design deals with more details of the project, incorporating pseudo code and definitions of all technical interfaces of the project.

    Phase 3- Construction: In this phase, the complete development of the system based upon the baselined architecture takes place. Here,emphasis is laid on optimizing costs, schedules and quality.

    Phase 4 - Transition: This phase includes testing of the product in preparation for release, fine-tuning the product, and taking care of issues like configuration, installation, and usability. The focus of this phase is to ensure that the software is available for its end users.

    Phase 5 - Post Project Support: This is a specific timeframe in which HyTech Professionals would provide support and fixes for bugs encountered in the functioning of the system. We implement a Change Request approach in the support phase.

    Extreme Programming (XP) is the best-known agile process. In XP, the phases are carried out in extremely small (or "continuous") steps compared to the older, "batch" processes. The (intentionally incomplete) first pass through the steps might take a day or a week, rather than the months or years of each complete step in the Waterfall model. First, one writes automated tests, to provide concrete goals for development. Next is coding (by a pair of programmers), which is complete when all the tests pass and the programmers can't think of any more tests that are needed. Design and architecture emerge out of refactoring, and come after coding. Design is done by the same people who do the coding. (Only the last feature - merging design and code - is common to all the other agile processes.) The incomplete but functional system is deployed or demonstrated for (some subset of) the users (at least one of whom is on the development team). At this point, the practitioners start again on writing tests for the next most important part of the system.

    Test Driven Development (TDD) is a useful output of the Agile camp but raises a conundrum. TDD requires that a unit test be written for a class before the class is written. Therefore, the class first has to be "discovered" and then defined in sufficient detail to allow the write-test-once-and-code-until-class-passes model that TDD actually uses. This is actually counter to Agile approaches, particularly (so-called) Agile Modeling, where developers are still encouraged to code early, with light design. Obviously, to get the claimed benefits of TDD, a full design down to classes and, say, responsibilities (captured using, for example, Design By Contract) is necessary. This counts towards iterative development with a design locked down, but not iterative design - as heavy refactoring and re-engineering negate the usefulness of TDD.
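The test-first cycle that TDD prescribes can be illustrated with a minimal sketch (the `Account` class and its tests are hypothetical, not from any real project): the tests exist before the class, and coding stops once every test passes.

```python
# TDD sketch: the two test functions below are written FIRST; the
# Account class is then coded only until both tests pass.

def test_deposit_increases_balance():
    acct = Account()
    acct.deposit(50)
    assert acct.balance == 50

def test_rejects_nonpositive_deposit():
    try:
        Account().deposit(0)
    except ValueError:
        return  # expected: invalid deposits are refused
    raise AssertionError("ValueError was not raised")

# Code written second, driven by the tests above:
class Account:
    def __init__(self):
        self.balance = 0

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self.balance += amount

test_deposit_increases_balance()
test_rejects_nonpositive_deposit()
print("all tests pass")
```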

    Types of Operating Systems:

    Client operating systems: examples


    DOS
    Windows 95
    Windows 98
    Windows ME
    Windows XP

    Server-based operating systems: The server holds all the information; clients always look to the server to get information. Examples of server operating systems are:

    Windows NT
    Windows 2000
    Windows 2003
    HP UNIX
    Sun Solaris
    SCO UNIX
    Red Hat
    IBM AIX

    Database Selection: In this part, the team decides which database is used for data management. Examples of databases are Oracle, Sybase, Informix, Access, and SQL Server.

    Database Management System: It manages the data in such a way that we can store and retrieve our required data very quickly, e.g.:

    Oracle: Runs on Microsoft operating systems and UNIX.
    Sybase and Informix: Run on the UNIX operating system.
    SQL Server and MS Access: Run on Microsoft operating systems.

    Maintenance: Based on feedback from users, enhancements are made to the software.

    Testing Phases: Actual testing activity can be broken down into three different and equally essential phases:

    Unit Testing

    Integration testing

    System testing

    In practice, testing goes beyond these three phases.

    Developer-side testing, or White Box testing, is based on knowledge of the internal logic of an application's code. Tests are based on coverage of code statements, branches, paths, and conditions.

    Unit testing: The most 'micro' scale of testing; used to test particular functions or code modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. Not always easily done unless the application has a well-designed architecture with tight code; may require developing test driver modules or test harnesses.

    Unit testing is testing one part or one component of the product. The developer usually does this when he/she has completed writing code for that part of the product. This makes sure that the component is doing what it is intended to do. This also saves a lot of time for software testers as well as developers by eliminating many cycles of software being passed back and forth between the developer and the tester. When a developer is confident that a particular part of the software is ready, he/she can write test cases to test the functionality of this part of the software. The component is then forwarded to the software testing people, who run the test cases to make sure that the unit is working properly.
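A minimal unit-test sketch, assuming a hypothetical pricing helper as the unit under test:

```python
def apply_discount(price, percent):
    """Unit under test (hypothetical): price after a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Unit tests run once the developer considers the component ready:
assert apply_discount(200.0, 25) == 150.0   # normal discount
assert apply_discount(99.99, 0) == 99.99    # zero discount leaves price
try:
    apply_discount(100.0, 150)              # out-of-range percent
    raise AssertionError("invalid percent was not rejected")
except ValueError:
    pass  # expected
print("unit tests pass")
```

In a real project these checks would live in a test framework such as unittest or pytest rather than as inline assertions.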

    Boundary Testing: Tests which focus on the boundary or limit conditions of the software being tested. (Some of these tests are stress tests.)
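For instance, boundary testing of a hypothetical field that accepts ages 18 through 65 would exercise the values at and immediately around each limit:

```python
def is_valid_age(age):
    """Hypothetical validator: accepts ages 18 through 65 inclusive."""
    return 18 <= age <= 65

# Boundary cases: each limit, plus the values just inside and outside it.
boundary_cases = {17: False, 18: True, 19: True, 64: True, 65: True, 66: False}
for age, expected in boundary_cases.items():
    assert is_valid_age(age) == expected, f"boundary failure at {age}"
print("all boundary cases pass")
```

Off-by-one defects cluster exactly at these edges, which is why each limit gets three probes.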

    Component Testing: Testing of a minimal software item for which a separate specification is available. See Unit Testing.


    Developer-side & Tester-side Testing:

    White-box testing is testing that verifies that specific lines of code work as defined. (Also referred to as clear-box testing.)

    White-box testing is actually more: covering any element in the code. This can be lines, but also branches, conditions, combinations of conditions, variables and their use, and interfaces. White-box testing is connected with coverage criteria; examples are statement coverage, branch coverage, etc. Coverage can be monitored by automated tools, so-called test coverage analysis tools. Such tools instrument the code and count what is executed during testing, and how often.

    There is no white-box coverage criterion which guarantees that the software will be error-free.

    It is most popular to check coverage during module, unit, or component testing, but white-box testing is not a synonym for unit, module, or component testing.
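A branch-coverage sketch: the hypothetical function below has two decision points, and the three inputs shown take each branch both ways (a coverage tool such as coverage.py could confirm the resulting figures):

```python
def classify(n):
    """Hypothetical function with two decision points."""
    if n < 0:
        return "negative"
    if n == 0:
        return "zero"
    return "positive"

# Three inputs achieve full branch coverage: across the set, each `if`
# is evaluated both True and False.
assert classify(-5) == "negative"   # first branch True
assert classify(0) == "zero"        # first False, second True
assert classify(7) == "positive"    # both branches False
print("full branch coverage reached")
```

Note that testing only `classify(-5)` and `classify(7)` would still reach 100% of the return statements' behavior for those inputs while leaving the `n == 0` branch untested, which is why branch coverage is a stronger criterion than a casual set of spot checks.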

    Gray Box Testing: A combination of Black Box and White Box testing methodologies: testing a piece of software against its specification but using some knowledge of its internal workings.

    Integration Testing: How do you perform Integration Testing?

    Integration Testing: Testing of combined parts of an application to determine if they function together correctly. Usually performed after unit and functional testing. This type of testing is especially relevant to client/server and distributed systems.

    To perform Integration testing, first, all Unit Testing has to be completed. Upon completion of Unit Testing, Integration testing begins. Integration testing is Black Box testing. The purpose of Integration testing is to ensure that distinct components of the application still work together, with the express purpose of exercising the interfaces between the components. This activity is carried out by the test team.

    Integration testing is considered complete when actual results and expected results are either in line, or differences are explainable or acceptable based on client input.

    There are basically two types of Integration testing methods:

    Incremental

    Non- Incremental

    Incremental testing can have two sub-types, viz:

    1. Top-Down Incremental: No test drivers needed, but extensive test stubs may be needed. Increasingly more complex. Starts with the interface.

    2. Bottom-Up Incremental

    Top-Down Integration: Modules are integrated by moving downward through the control hierarchy, beginning with the main control module. Modules subordinate to the main control module are incorporated into the structure in either a depth-first or breadth-first manner.

    The top-down strategy sounds relatively uncomplicated, but in actual practice, logistical problems can arise. The most common of these problems occurs when processing at low levels in the hierarchy is required to adequately test upper levels. Stubs replace low-level modules at the beginning of top-down testing; therefore, no significant data can flow upward in the program structure.

    Top-Down Testing: An approach to integration testing where the component at the top of the component hierarchy is tested first, with lower-level components being simulated by stubs. Tested components are then used to test lower-level components. The process is repeated until the lowest-level components have been tested.

    Bottom-Up Integration: As its name implies, begins construction and testing with atomic modules (i.e., modules at the lowest levels in the program structure). Because modules are integrated from the bottom up, processing required for the modules subordinate to a given level is always available, and the need for stubs is eliminated.

    Bottom-Up Testing: An approach to integration testing where the lowest-level components are tested first, then used to facilitate the testing of higher-level components. The process is repeated until the component at the top of the hierarchy is tested.
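A top-down sketch of stub-based integration, with all class names hypothetical: the high-level module is exercised before the real data-access layer exists, using a stub that returns canned data.

```python
class DatabaseStub:
    """Stands in for the not-yet-integrated data-access module."""
    def fetch_sales(self):
        return [100, 250, 50]  # canned data instead of a real query

class ReportGenerator:
    """High-level control module, tested first in top-down integration."""
    def __init__(self, db):
        self.db = db  # the real module or a stub is injected here

    def total_sales(self):
        return sum(self.db.fetch_sales())

# The top-level module is exercised before the lower level exists:
report = ReportGenerator(DatabaseStub())
assert report.total_sales() == 400
print("top-level module verified against stub")
```

When the real database module is ready, it replaces `DatabaseStub` without changing `ReportGenerator`, and the same test is re-run against the integrated pair.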


    'Big bang' approach: There is often a tendency to attempt non-incremental integration: that is, to construct the program using a 'big-bang' approach. All modules are combined in advance, and the entire program is tested as a whole. And chaos usually results! A set of errors is encountered. Correction is difficult because isolation of causes is complicated by the vast expanse of the entire program. Once these errors are corrected, new ones appear and the process continues in a seemingly endless loop.

    Test components individually, then integrate them all and test.

    Top-down integration testing: The high-level control routines are tested first, possibly with the middle-level control structures present only as stubs. Subprogram stubs were presented in Section 2 as incomplete subprograms which are only present to allow the higher-level control routines to be tested. Thus a menu-driven program may have the major menu options initially present only as stubs, which merely announce that they have been successfully called, in order to allow the high-level menu driver to be tested.

    Top-down testing can proceed in a depth-first or a breadth-first manner. For depth-first integration, each module is tested in increasing detail, replacing more and more levels of detail with actual code rather than stubs. Alternatively, breadth-first would proceed by refining all the modules at the same level of control throughout the application. In practice, a combination of the two techniques would be used. At the initial stages all the modules might be only partly functional, possibly being implemented only to deal with non-erroneous data. These would be tested in a breadth-first manner, but over a period of time each would be replaced with successive refinements which were closer to the full functionality. This allows depth-first testing of a module to be performed simultaneously with breadth-first testing of all the modules.

    The other major category of integration testing is bottom-up integration testing, where an individual module is tested from a test harness. Once a set of individual modules have been tested, they are then combined into a collection of modules, known as builds, which are then tested by a second test harness. This process can continue until the build consists of the entire application.

    In practice, a combination of top-down and bottom-up testing would be used. In a large software project being developed by a number of sub-teams, or a smaller project where different modules are built by individuals, the sub-teams or individuals would conduct bottom-up testing of the modules they were constructing before releasing them to an integration team, which would assemble them together for top-down testing.

    Regression Testing: Regression testing is testing that ensures previously tested behaviors still work as expected after changes have been made to an application. Regression testing is a two-phase testing:

    Full Functionality Tests

    Bug Regression Test

    Full Functionality Tests: These tests are used for comprehensive re-testing of software to validate that all functionality and features of previous builds (or releases) have maintained their integrity.

    Bug Regression Test: Re-testing after fixes or modifications of the software or its environment. Regression bugs occur as an unintended consequence of program changes.
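A bug-regression sketch, assuming a hypothetical bug number and function: once a defect is fixed, a test pinning the fix joins the suite so the bug cannot silently return.

```python
def parse_quantity(text):
    """Hypothetical function: Bug #1042 (made up) was that surrounding
    whitespace crashed parsing; .strip() is the fix."""
    return int(text.strip())

def test_regression_bug_1042():
    # Pins the fix; re-run on every subsequent build.
    assert parse_quantity(" 7 ") == 7

def test_existing_behavior_still_works():
    # Full-functionality regression: a previously passing case still passes.
    assert parse_quantity("12") == 12

test_regression_bug_1042()
test_existing_behavior_still_works()
print("regression suite passed")
```

Accumulating one such test per fixed bug is what turns a bug-fix verification into a durable regression suite.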

    Black Box, Functional, or Tester-side Testing: Testing the features and operational behavior of a product to ensure they correspond to its specifications. Testing that ignores the internal mechanism of a system or component and focuses solely on the outputs generated in response to selected inputs and execution conditions.

    Negative Testing: Testing aimed at showing software does not work. Also known as "test to fail". See also Positive Testing.

    Positive Testing: Testing aimed at showing software works. Also known as "test to pass". See also Negative Testing.

    Black Box Testing: Testing based on an analysis of the specification of a piece of software without reference to its internal workings. The goal is to test how well the component conforms to the published requirements for the component. Testing that verifies that, given input A, the component or system being tested gives you expected results B.

    Black box testing focuses on Functional Requirements. It complements White Box Testing. It attempts to find:

    incorrect or missing functions
    interface errors
    errors in data structures or external database access
    performance errors
    initialization and termination errors
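Positive and negative black-box checks can be sketched against an assumed specification; the validator below and its rules are hypothetical:

```python
import re

def is_valid_username(name):
    """Assumed spec: 3-12 characters, letters and digits only."""
    return bool(re.fullmatch(r"[A-Za-z0-9]{3,12}", name))

# Positive tests ("test to pass"): valid inputs must be accepted.
assert is_valid_username("alice")
assert is_valid_username("Bob42")

# Negative tests ("test to fail"): invalid inputs must be rejected.
assert not is_valid_username("ab")               # too short
assert not is_valid_username("toolongusername")  # too long
assert not is_valid_username("bad name!")        # illegal characters
print("black-box checks passed")
```

Only the specification and the observable input/output behavior are used here; the tests would be unchanged if the validator's internals were rewritten.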

    Functional testing enables you to verify the following features of the AUT (Application Under Test): Accuracy

    Reliability

    Predictability

    Security

    Functional testing is carried out to make sure that the software is doing exactly what it is supposed to do. This type of testing is a must before any software is released to customers. Functional testing is done by people whose primary job is software testing, not the developers themselves. In small software projects where a company can't afford dedicated testers, other developers may do functional testing also. The key point to keep in mind is that the person who wrote a software component should not be the person who tests it. A developer will tend to test the software the way he/she has written it, and may easily miss any problems in the software.


    Exploratory Testing: Testing while exploring the application, in contrast to a process in which the tests are all designed first and then run later. In Exploratory testing, test cases are not defined in advance; they are defined and executed on the fly, while the testers learn about the application. We chose the exploratory approach because it is the best way to test an application quickly when starting from scratch. With this procedure you will walk through the product and find out what it is and what type of testing it needs. It has much in common with informal approaches to testing that go by names like ad hoc testing, guerrilla testing, or intuitive testing. However, unlike traditional informal testing, Exploratory testing consists of specific tasks, objectives, and deliverables that make it a systematic process. In operational terms, Exploratory testing is an interactive process of concurrent product exploration, test design, and test execution. The outcome of Exploratory testing is a set of notes about the product, failures found, and a concise record of how the product was tested. The elements of Exploratory testing are:

    Product Exploration: Discover and record the purposes and functions of the product, types of data processed, and areas of potential instability. Your ability to perform exploration depends upon your general understanding of technology, the information you have about the product and its intended users, and the amount of time you have to do the work.

    Test Design: Determine strategies of operating, observing, and evaluating the product.

    Test Execution: Operate the product, observe its behavior, and use that information to form hypotheses about how the product works.

    Heuristics: Heuristics are guidelines or rules of thumb that help you decide what to do. This procedure employs a number of heuristics that help you decide what should be tested and how to test it.

    Reviewable Results: Exploratory testing is a result-oriented process. It is finished once you have produced deliverables that meet the specified requirements. It is especially important for the test results to be reviewable and defensible for certification. As the tester, you must be prepared to explain any aspect of your work to the Test Manager, and show how it meets the requirements documented in the procedure.

    This is done by trained testers.

    Why explore? Harry Robinson describes exploratory testers as the leading edge of the testing effort, building models (maps) used by later testers.

    Brian Marick uses an example of unexpected interactions to justify exploratory testing.

    End-to-End Testing: Similar to system testing; the 'macro' end of the test scale; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

    Sanity Testing: Typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is crashing systems every 5 minutes, bogging down systems to a crawl, or destroying databases, the software may not be in a 'sane' enough condition to warrant further testing in its current state.

    Sanity Testing: A brief test of the major functional elements of a piece of software to determine if it is basically operational. See also Smoke Testing.

    Monkey Testing: Testing a system or an application on the fly, i.e., just a few tests here and there to ensure the system or application does not crash.

    Smoke Testing: A quick-and-dirty test that the major functions of a piece of software work. Originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch on fire.

    Ad Hoc Testing: A testing phase where the tester tries to 'break' the system by randomly trying the system's functionality. Can include negative testing as well. See also Monkey Testing.

    Gorilla Testing: Testing one particular module or functionality heavily.

    Accessibility Testing: Verifying that a product is accessible to people with disabilities (deaf, blind, mentally disabled, etc.).

    Recovery Testing: Confirms that the application under test recovers from expected or unexpected events without loss of data or functionality. Events can include shortage of disk space, unexpected loss of communication, or power-out conditions.

    System Testing (functional and non-functional): Testing that attempts to discover defects that are properties of the entire system rather than of its individual components. Based on overall requirements specifications; covers all integrated parts of a system. System testing is a testing process in which you find and fix any known problems to prepare your application for user testing. It begins after Unit testing and Integration testing are done. The focus of system testing is to determine whether the system is both accurate and complete.


    Functional Verification Testing, Stress Testing, Performance testing, Configuration testing, Recovery testing, and Security testing are performed once the System Testing phase begins.

    Validation testing is a concern which overlaps with integration testing. Ensuring that the application fulfils its specification is a major criterion for the construction of an integration test. Validation testing also overlaps to a large extent with system testing, where the application is tested with respect to its typical working environment. Consequently, for many processes no clear division between validation and system testing can be made.

    Non-Functional Testing:

Boundary testing: Tests that focus on the limit conditions of the software being tested.

Volume Testing: Testing which confirms that any values that may become large over time (such as accumulated counts, logs, and data files) can be accommodated by the program and will not cause the program to stop working or degrade its operation in any manner. Volume testing is a subset of Stress Testing.

Volume testing is where you test your database with an amount of data that is at least equal to what you expect to have to handle in production. (Ideally, the amount should be greater than what you really expect.) It can also mean determining how many transactions or database accesses an application can handle during a defined period of time, but that is better called Load Testing.

Data should include high volumes in every respect: for example, especially many links to other data, and especially long texts in free-text fields.

This testing should also try to check what happens when the data storage is fragmented. (This may result in exhaustion of resources, in lower performance, or in failures due to timeouts.)
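The limit-condition idea behind boundary testing can be sketched in a few lines. The `validate_age` function and its 0–130 range below are hypothetical, invented only for the example; the point is that test values cluster at, just below, and just above each limit.

```python
def validate_age(age):
    """Hypothetical validator: accepts ages 0..130 inclusive."""
    return 0 <= age <= 130

# Boundary tests focus on the edges of the valid range,
# not on "typical" values in the middle.
assert validate_age(0) is True        # lower boundary
assert validate_age(-1) is False      # just below lower boundary
assert validate_age(130) is True      # upper boundary
assert validate_age(131) is False     # just above upper boundary
```

A mid-range value like 35 tells you little; off-by-one defects live at the edges, which is why each limit gets three probes.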

Load testing, Performance testing, Stress testing, Volume testing,

Compatibility & Migration testing, Data Conversion testing, Security / Penetration testing,

Network Security testing

Security testing: Testing which confirms that the program can restrict access to authorized personnel and that the authorized personnel can access the functions available to their security level.
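As a rough illustration of that definition, a security test asserts both directions: authorized access works, and access beyond the security level is refused. The roles, actions, and permission table below are invented for the example, not taken from any particular system.

```python
# Hypothetical role-to-permission table for the sketch.
PERMISSIONS = {
    "viewer": {"read"},
    "editor": {"read", "write"},
    "admin":  {"read", "write", "delete"},
}

def can_access(role, action):
    """Return True if the role's security level covers the action."""
    return action in PERMISSIONS.get(role, set())

# Positive checks: authorized personnel reach their functions...
assert can_access("editor", "write")
assert can_access("admin", "delete")
# ...and negative checks: access beyond the security level is refused.
assert not can_access("viewer", "write")
assert not can_access("guest", "read")   # unknown role gets nothing
```

Note that the negative assertions are the ones most often forgotten; a security test suite without refusal cases only proves the happy path.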

Stress Testing: Stress testing is testing that ensures the system performs as expected under a high volume of transactions, a high number of users, and so on.

Load testing is a subset of Stress Testing. Load testing is testing that is performed with a representative number of transactions. To do it right, you should direct the transactional test load toward a test database that already contains a representative volume and distribution of data. In other words, a good database to use for load testing is one you've already validated through Volume testing.

Things to consider during load testing: Everybody...

    ...logs into the network at the same time.

    ...gets into the application at the same time.

    ...runs the same test scripts.

    ...hits save simultaneously.

    ...does a heavy refresh simultaneously.

    ...tries to edit the same record simultaneously.

    ...runs a variety of queries simultaneously.

    ...starts a difficult search simultaneously.

    ...tries to list many items simultaneously.
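The "everybody hits save simultaneously" scenario in the checklist above can be simulated in miniature with a thread barrier: all simulated users block until every one of them is ready, then act at the same instant. The user count and the in-memory store below are stand-ins for a real client pool and database, chosen only for the sketch.

```python
import threading

N_USERS = 20                      # hypothetical number of simulated users
store = []                        # stand-in for the shared record being saved
lock = threading.Lock()
barrier = threading.Barrier(N_USERS)

def user_session(user_id):
    barrier.wait()                # all users proceed at the same moment
    with lock:                    # the system under test must serialize safely
        store.append(user_id)

threads = [threading.Thread(target=user_session, args=(i,))
           for i in range(N_USERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Every simulated save must have been recorded exactly once.
assert len(store) == N_USERS
```

Real load tools drive far more virtual users against the actual application, but the barrier pattern is the essence of every "simultaneously" item in the list: synchronize first, then fire together.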

End Users & Testers side: Pre-UAT Alpha testing, Beta testing & Acceptance testing

Alpha Testing: Testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by end-users or others, not by programmers or testers.

Beta testing: Testing when development and testing are essentially completed and final bugs and problems need to be found before final release. Typically, it is done by end-users or others, not by programmers or testers.

Pilot Testing: Pilot testing is a testing process equivalent to Beta Testing that organizations use to test applications they have developed for internal use.

Acceptance testing: Final testing based on specifications of the end-user or customer, or based on use by end-users/customers over some limited period of time. Testing conducted to enable a user/customer to determine whether to accept a software product. Normally performed to validate that the software meets a set of agreed acceptance criteria.



Alpha and beta testing. This is where the software is released to the actual end users. An initial release, the alpha release, might be made to selected users who would be expected to report bugs and other detailed observations back to the production team.

Once the application has passed through the alpha phase, a beta release, possibly incorporating changes necessitated by the alpha phase, can be made to a larger, more representative set of users, before the final release is made to all users.

    Alpha is often done on a product under development, while Beta is reserved for "feature-complete" testing. Under releasepressures, many organizations have loosened this rule.

Beta testing: This testing is performed when development and testing are essentially completed, just to make sure that bugs and problems are fixed and the software is acceptable before final release.

Beta testing is a similar process to alpha testing, except the software product should be less buggy. Alpha is generally done in-house, while Beta is done with a wider scope of selected clients outside of the organization. Typically, this testing is done by end users or others, not by testers or developers.

End-User testing / UAT: User Acceptance Testing: A formal product evaluation performed by a customer as a condition of purchase.

User Acceptance Testing (UAT) is a process to obtain confirmation by a Subject Matter Expert (SME), preferably the owner or client of the object under test, through trial or review, that the modification or addition meets mutually agreed-upon requirements. In software development, UAT is one of the final stages of a project and will often occur before a client or customer accepts a new system.

Users of the system will perform these tests which, ideally, developers have derived from the client's contract or the user requirements specification.

Test designers will draw up formal tests and devise a range of severity levels. The test designer may or may not be the creator of the formal test cases for the same system. The focus is on a final verification of the required business function and flow of the system, emulating real-world usage of the system. The idea is that if the software works as intended and without issues during a simulation of normal use, it will work just the same in production. These tests are not focused on fleshing out simple problems (spelling mistakes, cosmetic problems) or show stoppers (major problems like the software crashing, the software not running, etc.). Developers should have worked out these issues during unit testing and integration testing.

    The results of these tests will give confidence to the customers of how the system will perform in production.

What are the two most important testing techniques?

    The two most important testing techniques are:

    Front-end testing.

    Back-end testing.

Difference between Smoke Testing, Sanity Testing, Monkey Testing, and Regression Testing:

Smoke testing is a shallow and wide approach to the application: you test all areas of the application without getting too deep. Sanity testing, by contrast, is usually narrow and in-depth: it tests only a few areas, but all aspects of those areas.

A Smoke test is scripted, either using a written set of tests or an automated test, whereas a Sanity test is usually unscripted. A Sanity test is used to determine that a small section of the application still works after a minor change. A Monkey test is also unscripted, like a Sanity test, but this sort of test is explained with an analogy: think of a room full of monkeys, each with a typewriter placed in front of it. Given enough time, you could get the sonnets of Shakespeare out of them. This is based on the idea that random activity can create order or cover all options. This is definitely not a good policy; Regression testing is a better policy, instead. A Sanity test tends to be unscripted, but you could use a subset of your existing test cases to verify a small part of your application. This is not a Regression test, where all areas of the application are tested using a subset of the test cases.

A Regression test is a more complete Smoke test or BVT (Build Verification Test). A Sanity test is a narrow Regression test that focuses on one or a few areas of functionality.
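One way to picture the wide-vs-narrow distinction above is as selections over a tagged test inventory: a smoke run takes the shallow subset across all areas, while a sanity run narrows to one changed area. The test cases, areas, and tags below are hypothetical examples.

```python
# Hypothetical test inventory: each case carries an area and tags.
TEST_CASES = [
    {"name": "login_loads",   "area": "auth",    "tags": {"smoke"}},
    {"name": "login_lockout", "area": "auth",    "tags": {"regression"}},
    {"name": "report_loads",  "area": "reports", "tags": {"smoke"}},
    {"name": "report_totals", "area": "reports", "tags": {"regression"}},
]

def select(tag=None, area=None):
    """Pick test-case names matching an optional tag and/or area."""
    return [c["name"] for c in TEST_CASES
            if (tag is None or tag in c["tags"])
            and (area is None or c["area"] == area)]

smoke_run = select(tag="smoke")        # wide but shallow: one case per area
sanity_run = select(area="reports")    # narrow but deep: all cases, one area
full_regression = select()             # everything
```

The same inventory serves all three runs; only the selection criterion changes, which is why a sanity check is often described as a narrow regression test.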

    What is the difference between Test Plan, Test case, and Test Script?


Test Plan: A software project test plan is a document that describes the objectives, scope, approach, and focus of a software testing effort. The process of preparing a test plan is a useful way to think through the efforts needed to validate the acceptability of a software product. The completed document will help people outside the test group understand the 'why' and 'how' of product validation. It should be thorough enough to be useful, but not so thorough that no one outside the test group will read it. The following are some of the items that might be included in a test plan, depending on the particular project:

7 items needed for a successful system test plan

This word processor template is a system test plan template with complete instructions for completing and tailoring it to your company and/or your projects. It provides instruction as well as examples and documentation on:

Determining a software test strategy: What are the important capabilities? What capabilities are used most frequently? What capabilities are most likely to have defects?

Determining software testing resources and scheduling them

Deciding which, if any, of the tests can be automated

Determining what will be tested during each phase of testing: Component, Alpha, Beta, Final acceptance

Determining the use cases: What roles do the end users play? What do they do most often with your software?

Determining what types of software system testing are most appropriate and applicable for this project: GUI, Stress, Error handling, Critical path, Performance, Configuration, Recovery, Security


A test plan is a systematic approach to testing a system such as a machine or software. The plan typically contains a detailed understanding of what the eventual workflow will be.

    Test plan template, IEEE 829 format


    Outline

1. Test Plan Identifier
2. References
3. Introduction
4. Test Items
5. Software Risk Issues
6. Features to be Tested
7. Features not to be Tested
8. Approach
9. Item Pass/Fail Criteria
10. Entry & Exit Criteria
11. Suspension Criteria and Resumption Requirements
12. Test Deliverables
13. Remaining Test Tasks
14. Environmental Needs
15. Staffing and Training Needs
16. Responsibilities
17. Schedule
18. Planning Risks and Contingencies
19. Approvals
20. Glossary

    Test plan identifier

The test plan identifier number is used to identify this test plan, its level, and the level of software that it is related to. Preferably the test plan level will be the same as the related software level. The number also identifies our test plan as a smoke test plan. This is to assist in coordinating software and testware versions within configuration management.

    References

    List all documents that support this test plan.

    Documents that are referenced include:

    Project Plan

    System Requirements specifications

    High Level design document

    Detail design document

    Development and Test process standards

    Methodology guidelines and examples

    Corporate standards and guidelines

Introduction

Objective of the plan

Scope of the plan

Describe these in relation to the Software Project Plan that this test plan relates to. Other items may include resource and budget constraints, the scope of the testing effort, how testing relates to other evaluation activities (Analysis & Reviews), and possibly the process to be used for change control and communication and coordination of key activities. As this is the "Executive Summary", keep information brief and to the point.

    18

    Syed Kamrul

    http://en.wikipedia.org/wiki/Budgethttp://en.wikipedia.org/wiki/Budgethttp://en.wikipedia.org/wiki/Analysishttp://en.wikipedia.org/wiki/Analysishttp://en.wikipedia.org/wiki/Reviewhttp://en.wikipedia.org/wiki/Reviewhttp://en.wikipedia.org/wiki/Budgethttp://en.wikipedia.org/wiki/Budgethttp://en.wikipedia.org/wiki/Budgethttp://en.wikipedia.org/wiki/Analysishttp://en.wikipedia.org/wiki/Reviewhttp://en.wikipedia.org/wiki/Reviewhttp://en.wikipedia.org/wiki/Reviewhttp://en.wikipedia.org/wiki/Budgethttp://en.wikipedia.org/wiki/Analysishttp://en.wikipedia.org/wiki/Reviewhttp://en.wikipedia.org/wiki/Budgethttp://en.wikipedia.org/wiki/Analysishttp://en.wikipedia.org/wiki/Review
  • 8/4/2019 Sdlc Final

    19/29

    Test items (functions)

These are things you intend to test within the scope of this test plan: essentially, something you will test, a list of what is to be tested. This can be developed from the software application inventories as well as other sources of documentation and information.

This can be controlled by a local Configuration Management (CM) process if you have one. This information includes version numbers and configuration requirements where needed (especially if multiple versions of the product are supported). It may also include key delivery schedule issues for critical elements.

Remember, what you are testing is what you intend to deliver to the Client.

    This section can be oriented to the level of the test plan. For higher levels it may be by application or functional area, for lower levels itmay be by program, unit, module or build.

    Software risk issues

    Identify what software is to be tested and what the critical areas are, such as:

    1. Delivery of a third party product

2. New version of interfacing software
3. Ability to use and understand a new package/tool, etc.

    4. Extremely complex functions5. Modifications to components with a past history of failure

    6. Poorly documented modules or change requests

    There are some inherent software risks such as complexity; these need to be identified.

1. Safety
2. Multiple interfaces
3. Impacts on Client

4. Government regulations and rules

    Another key area of risk is a misunderstanding of the original requirements. This can occur at the management, user and developerlevels. Be aware of vague or unclear requirements and requirements that cannot be tested.

The past history of defects (bugs) discovered during Unit testing will help identify potential areas within the software that are risky. If the unit testing discovered a large number of defects, or a tendency towards defects in a particular area of the software, this is an indication of potential future problems. It is the nature of defects to cluster and clump together. If an area was defect-ridden earlier, it will most likely continue to be defect-prone.

One good approach to define where the risks are is to have several brainstorming sessions.

Start with ideas such as "What worries me about this project/application?"

    Features to be tested

This is a listing of what is to be tested from the user's viewpoint of what the system does. This is not a technical description of the software, but a USER'S view of the functions.

    Set the level of risk for each feature. Use a simple rating scale such as (H, M, L): High, Medium and Low. These types of levels areunderstandable to a User. You should be prepared to discuss why a particular level was chosen.

Sections 4 and 6 are very similar; the only true difference is the point of view. Section 4 is a technical description including version numbers and other technical information, while Section 6 is from the user's viewpoint. Users do not understand technical software terminology; they understand functions and processes as they relate to their jobs.
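The H/M/L risk rating can be recorded as simply as a table keyed by user-facing feature names, which later drives how much testing each feature receives. The features below are invented examples, not from any real plan.

```python
# Hypothetical feature inventory with High/Medium/Low risk ratings,
# described in the user's terms rather than technical terms.
FEATURES = {
    "Submit timesheet":     "H",   # core business function
    "Print monthly report": "M",
    "Change color theme":   "L",   # cosmetic, stable, rarely changed
}

def features_at_risk(level):
    """List features carrying a given risk rating, for test prioritization."""
    return sorted(f for f, r in FEATURES.items() if r == level)
```

Sorting high-risk features to the front of the schedule is one common use of such a table when test time is limited.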

    Features not to be tested

This is a listing of what is 'not' to be tested, from both the user's viewpoint of what the system does and a configuration management/version control view. This is not a technical description of the software, but a user's view of the functions.


Identify why the feature is not to be tested; there can be any number of reasons.

    Not to be included in this release of the Software.

Low risk: has been used before and was considered stable.

    Will be released but not tested or documented as a functional part of the release of this version of the software.

Sections 6 and 7 are directly related to Sections 5 and 17. What will and will not be tested are directly affected by the levels of acceptable risk within the project, and what does not get tested affects the level of risk of the project.

    Approach (strategy)

This is your overall test strategy for this test plan; it should be appropriate to the level of the plan (master, acceptance, etc.) and should be in agreement with all higher and lower levels of plans. Overall rules and processes should be identified.

    Are any special tools to be used and what are they?

    Will the tool require special training?

What metrics will be collected?

    Which level is each metric to be collected at?

    How is Configuration Management to be handled?

    How many different configurations will be tested?

    Hardware

    Software

Combinations of HW, SW and other vendor packages

What levels of regression testing will be done, and how much at each test level?

    Will regression testing be based on severity of defects detected?

    How will elements in the requirements and design that do not make sense or are untestable be processed?

    If this is a master test plan the overall project testing approach and coverage requirements must also be identified.

    Specify if there are special requirements for the testing.

    Only the full component will be tested.

    A specified segment of grouping of features/components must be tested together.

    Other information that may be useful in setting the approach are:

MTBF (Mean Time Between Failures): if this is a valid measurement for the test involved and if the data is available.

SRE (Software Reliability Engineering): if this methodology is in use and if the information is available.

    How will meetings and other organizational processes be handled?
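Where MTBF is used as an input to the approach, the arithmetic behind it is simple: total observed operating time divided by the number of failures in that window. The figures below are made up for illustration.

```python
# Hypothetical observation window: hours at which failures occurred,
# over 800 hours of total observed operation.
failure_times = [100, 250, 400, 700]
total_observed_hours = 800

# MTBF = total operating time / number of failures.
mtbf = total_observed_hours / len(failure_times)   # 200.0 hours
```

A rising MTBF across test cycles is one sign the software is stabilizing; the test plan only needs this number if the failure data is actually being collected.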

    Item pass/fail criteria

Suspension criteria / Resumption criteria

    Test deliverables

    What is to be delivered as part of this plan?

    Test plan document.

    Test cases.

    Test design specifications.

    Tools and their outputs.

    Simulators.

    Static and dynamic generators.


Error logs and execution logs.

    Problem reports and corrective actions.

One thing that is not a test deliverable is the software itself; that is listed under test items and is delivered by development.

    Remaining test tasks

If this is a multi-phase process, or if the application is to be released in increments, there may be parts of the application that this plan does not address. These areas need to be identified to avoid any confusion should defects be reported back on those future functions. This will also allow the users and testers to avoid incomplete functions and prevent waste of resources chasing non-defects.

If the project is being developed as a multi-party process, this plan may only cover a portion of the total functions/features. This status needs to be identified so that those other areas have plans developed for them, and to avoid wasting resources tracking defects that do not relate to this plan.

    When a third party is developing the software, this section may contain descriptions of those test tasks belonging to both the internalgroups and the external groups.

    Environmental needs

    Are there any special requirements for this test plan, such as:

    Special hardware such as simulators, static generators etc.

How will test data be provided? Are there special collection requirements or specific ranges of data that must be provided?

    How much testing will be done on each component of a multi-part feature?

Special power requirements.

    Specific versions of other supporting software.

    Restricted use of the system during testing.

    Staffing and training needs

    Training on the application/system.

    Training for any test tools to be used.

    Section 4