Exist Quality Assurance & Testing Services


Exist offers a wide range of software testing services -- from functional testing of a simple web app to a dedicated QA team that manages all of your software testing needs.


Page 1: Exist Quality Assurance & Testing Services

[email protected]

www.exist.com

Outsourced Software Testing Services

Every company wants to ship its products faster, more cost-effectively, and with fewer defects. We're here to help. Exist QA specialists will work closely with your team so that you can succeed without breaking your software or your budget.

Depending on the requirements of the project, we can provide:

Testing in Agile Software Development

Exist enables our clients to respond quickly to strategic business requirements and adapt rapidly to changes as they come. Agility is at the core of our software development model.

Throughout our history, we have worked successfully with customers in creating software using agile development methodologies.

We believe the core value of agile software development is the ability to adapt to changing project needs and requirements. We employ test-driven development to produce interim, user-visible results quickly and regularly, enabling a powerful feedback loop.

Enterprise-Level Tooling & Methodology

Level up with proven, best-practice test processes, tools, and frameworks. Exist QA and software testing experts can help with:

- Lifecycle QA: Create a methodology so that software testing becomes part of the development strategy. Start planning with testing involved on day 1, and construct test plans even before the first line of code is written.
- Testing Tools: Adopt the best testing tools for your needs.

Some of the testing we provide:

- Functional testing
- Load testing
- Security testing
- Data completeness
- Platform compatibility
- Performance testing
- Stress testing
- Acceptance testing

Some of the tools we use:

- Selenium RC
- Selenium IDE
- JMeter
- Cucumber

Consult with our team today about your testing requirements.

Let’s talk: [email protected].


Our Testing Process

Each project undergoes a thorough testing process, managed by a dedicated QA team. The information presented below defines the software quality assurance framework applied to projects undertaken by Exist. It describes the testing strategy and approach to testing that our QA team will use to validate the quality of the product prior to any release.

Test Environment Preparation. To achieve the highest level of software quality prior to any release, we prepare the test environment at the outset. This involves:

- Technical analysis based on client materials
- Creation of a test plan, including: thorough analysis of project specifications and goals, setup and configuration instructions for testing environments, and a detailed timetable for testing with project milestones (refer to Test Plan Development)
- Determining testing approaches and methods
- Ensuring all members have the same level of understanding of the client's specifications and requirements
- Proper setup of hardware and software requirements
- Ensuring all team members are fully trained in the use of the bug tracking system and in correct methods of bug reporting

Testing Proper. We employ a mix of manual and automated testing for each project, and we can apply various testing methods depending on your requirements.

Bug Reporting and Tracking. Our ability to collaborate regardless of location stems from our use of a web-based project management tool called the Development Engineering Network (DEN), which offers our distributed teams, as well as our customers, a unified on-demand view of all aspects of the software project life cycle.

This project management tool, based on the open source application Redmine, gives us and our customers the ability to:

- monitor and track a project's progress
- communicate instantly and seamlessly
- assess velocity of development and team efficiency
- identify and resolve bottlenecks and issues during the early part of development, reducing overruns in the long term
- optimize resources
- determine best practices
- maintain documentation of a project

Below are our best practice methodologies for bug reporting and tracking.

- New bugs are reported in DEN
- Each bug is verified more than once using the same environment
- Issue information must include a summary, detailed steps to replicate, a screenshot if possible, the environment in which it was encountered, a priority, and an assignee
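The required fields above can be sketched as a simple validation helper. This is a hypothetical structure for illustration, not Exist's actual DEN schema; the field names are assumptions:

```python
from dataclasses import dataclass

# Fields the bug-reporting guidelines above treat as mandatory.
REQUIRED_FIELDS = ("summary", "steps_to_replicate", "environment",
                   "priority", "assignee")

@dataclass
class BugReport:
    """Illustrative shape of a bug report; screenshot is optional."""
    summary: str = ""
    steps_to_replicate: str = ""
    environment: str = ""
    priority: str = ""
    assignee: str = ""
    screenshot: str = ""  # optional per the guidelines above

def missing_fields(report: BugReport) -> list[str]:
    """Return the required fields that were left empty."""
    return [f for f in REQUIRED_FIELDS if not getattr(report, f)]
```

A report missing its environment or replication steps would be flagged before submission rather than bounced back by a reviewer.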

Our iterative engineering model helps you deliver software faster. Meet with our team today to discuss your project.

Contact: [email protected].


Bug Reporting and Tracking (continued)

Steps to create an issue in DEN:

1. Choose the specific project
2. Choose a tracker type (the most commonly used is bug)
3. Enter a subject: the general description of the issue
4. Choose the priority of the issue (blocker, major, minor, trivial)
5. Assign the issue to the most appropriate person
6. Indicate the environment in which the issue was encountered
7. Provide detailed steps on how to replicate the issue in the description field
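Since DEN is based on Redmine, an issue like the one above could in principle also be filed through Redmine's REST API. The sketch below only builds the request payload; the project id, priority id, and assignee id are placeholder values, and the actual server URL and API key would be supplied by the DEN administrator:

```python
import json

def build_issue_payload(project_id, subject, description,
                        priority_id, assigned_to_id, tracker_id=1):
    """Build a Redmine-style issue payload (tracker 1 is commonly 'Bug')."""
    return {
        "issue": {
            "project_id": project_id,
            "tracker_id": tracker_id,
            "subject": subject,
            "description": description,
            "priority_id": priority_id,
            "assigned_to_id": assigned_to_id,
        }
    }

payload = build_issue_payload(
    project_id="storefront",                       # hypothetical project
    subject="Checkout button unresponsive on Safari",
    description="Steps to replicate:\n1. Add item to cart\n2. Click Checkout",
    priority_id=4,                                 # e.g. 'major' in a custom enumeration
    assigned_to_id=42,                             # hypothetical user id
)

if __name__ == "__main__":
    # Actual submission would be an authenticated POST, e.g. with `requests`:
    #   requests.post(f"{DEN_URL}/issues.json", json=payload,
    #                 headers={"X-Redmine-API-Key": API_KEY})
    print(json.dumps(payload, indent=2))
```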

Submit Defect

- The Tester logs the defect into the defect tracking system; the defect is in 'Submitted' or 'New' status, and the related parties who will receive the mail are set
- The Tester should set the severity level and priority according to the related definitions
- The Tester should describe the defect in as much detail as possible to help the reviewer understand and reproduce it
- If the Tester / Test Lead needs to update items or add more information about the defect, he or she can modify it; the status remains 'Submitted' or 'New'

Retesting Defect

- The Tester should check 'Resolved' defects and retest them as soon as possible
- If the Tester finds the defect unresolved, the Tester should set its status to 'Assigned' and record the reason in the defect notes
- If the Tester has validated that a defect has been fixed, the Tester should update the defect to 'Closed' and record the pass reason in the comment area
- If the Tester / Test Lead finds a new issue that is the same as a 'Closed' defect, the Tester should set the defect status to 'Re-open'

Defect Status

- New – Defect is first reported and submitted, or reopened to New status because it could not pass the related acceptance criteria
- Rejected – After analysis, the reported defect is confirmed to be invalid by the Tester / Test Lead / Developer
- Assigned – After analysis, the defect is accepted by the development team and assigned to a party for resolution
- In-progress – Developer has started the development work on the defect
- Resolved – Developer has verified that the defect has been resolved by the developer or the QA Team
- Verified – Defect has been internally validated (by the development leader or a member); the updated application has been deployed to the QA testing environment or the defect has been verified by the QA team
- On-Hold – Defect resolution is on hold pending confirmation of issues and assignment
- Reopen – Defect is restarted from On-Hold, Rejected, or Closed status and is assigned to a party for resolution / follow-up
- Closed – Defect has been validated and has passed the acceptance criteria
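The defect lifecycle above can be sketched as a small transition table. The transition set below is an illustrative reading of the status definitions, not an exact export of Exist's workflow configuration:

```python
# Which target statuses each current status may move to (illustrative).
ALLOWED = {
    "New":         {"Rejected", "Assigned"},
    "Assigned":    {"In-progress"},
    "In-progress": {"Resolved", "On-Hold"},
    "Resolved":    {"Verified", "Assigned"},   # failed retest goes back to Assigned
    "Verified":    {"Closed", "Reopen"},
    "On-Hold":     {"In-progress", "Reopen"},
    "Rejected":    {"Reopen"},
    "Closed":      {"Reopen"},
    "Reopen":      {"Assigned"},
}

def can_transition(current: str, target: str) -> bool:
    """Return True if the workflow permits moving a defect to `target`."""
    return target in ALLOWED.get(current, set())
```

Encoding the workflow this way lets a tracking tool reject illegal moves (for example, a 'Closed' defect can only be reopened, never edited straight back to 'In-progress').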

Exist has a dedicated testing team working with Ace Metrix's development team.

Ace Metrix is the new standard in television analytics. Television advertising represents a CMO's biggest risk and largest expense, yet has some of the least effective measurement tools. Ace Metrix solves this by bringing digital technology, analytics, and speed to TV. They measure creative effectiveness for every ad, every single time, in real time, making media dollars work harder.

“Given the complexity of our software, Exist team has evolved quickly and demonstrated a good understanding of our tools which has allowed them to deliver high quality results. Exist records results into our tracking systems with excellent detail which enables our engineering team to diagnose the issues quickly.”

-- Greg Falzon, VP of Product Engineering, Ace Metrix


Bug Reporting and Tracking (continued)

Defect Severity. Defect severity is an indicator of how damaging a defect is. We have defined four severity levels:

- Blocker Defect – The system is inoperable; the defect prevents a function on the site from being used and no work-around exists. The tester or the user cannot move forward with the test.
- Major Defect – A problem that prevents a function from being used, but a work-around is possible. The tester can continue with other testing.
- Minor Defect – A problem that makes a function difficult to use, but no special work-around is required.
- Trivial Defect – A problem that does not affect actual function, but the behavior is not right.
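The four levels above form an ordered scale, which a tracking tool can encode directly. A minimal sketch (the enum values are an assumption for illustration):

```python
from enum import IntEnum

class Severity(IntEnum):
    """The four severity levels above, ordered by how damaging they are."""
    TRIVIAL = 1
    MINOR = 2
    MAJOR = 3
    BLOCKER = 4

def blocks_testing(severity: Severity) -> bool:
    """Only a blocker stops the tester from moving forward with the test."""
    return severity is Severity.BLOCKER
```

Using an ordered enum also lets reports sort and filter defects by severity without string comparisons.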

Test Plan Development

Step 1 – Establish Test Objectives

Step 1.1 – Identify Test Objectives. Test objectives are identified using the requirements document, wireframes, and user stories as reference materials. The test objectives should be a reflection of the test requirements.

Output: Statement of Test Objectives

Step 1.2 – Define Completion Criteria. A completion criterion is the standard by which a test objective is measured. The test team must be able to determine when a test objective has been satisfied, so one or more completion criteria must be specified for each test objective. QA must check that each requirement, and how it is validated, is documented. Important test metrics that should be calculated and reported are the percentage of test requirements that have been covered by test cases and the percentage that have been successfully validated.

Output: Statement of Objective Completion Criteria

Step 1.3 – Prioritize Test Objectives. Test objectives are prioritized on a scale from low to high.

Output: Prioritized Test Objectives
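The two coverage metrics named in Step 1.2 can be sketched as a small helper. Representing requirements as sets of ids is an illustrative choice, not a mandated format:

```python
def coverage_metrics(requirements, covered, validated):
    """Percentage of requirements covered by test cases, and percentage
    successfully validated. All arguments are sets of requirement ids."""
    total = len(requirements)
    if total == 0:
        return {"covered_pct": 0.0, "validated_pct": 0.0}
    return {
        "covered_pct": 100.0 * len(covered & requirements) / total,
        "validated_pct": 100.0 * len(validated & requirements) / total,
    }

# Hypothetical project: 4 requirements, 3 covered by test cases, 2 validated.
reqs = {"REQ-1", "REQ-2", "REQ-3", "REQ-4"}
m = coverage_metrics(reqs,
                     covered={"REQ-1", "REQ-2", "REQ-3"},
                     validated={"REQ-1", "REQ-2"})
# m["covered_pct"] is 75.0 and m["validated_pct"] is 50.0
```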

Step 2 – Construct Test Plans. The purpose of the test plan is to specify the WHO, WHAT, WHEN, and WHY of the test design, test construction, test execution, and test analysis steps. The test plan also describes the test environment and required test resources, and must provide measurable goals by which the product owner can gauge testing. The test plan is an operational document that is the basis for testing: it describes test strategies and test cases, and must be treated as an evolving document. In addition, the test plan must be designed with test automation in mind.

Inputs: Requirements document, Software Design Description document

Step 2.1 – Construct the System Test Plan. Identify the business scenarios to be tested. Users will employ the application system to conduct business day in and day out, according to daily, weekly, monthly, and/or yearly business cycles. This task identifies the business processes that can be translated into scripted test scenarios. A system test scenario is a set of test scripts which reflect the user's behavior in a typical business situation.

Step 2.2 – Construct the Integration Test Plan. The workability of each module must be considered when creating the integration test plan. The focus is to create a plan where the integrated modules are tested on whether they do what they should do, and do not do what they should not do.

Step 3 – Design and Construct Test Cases. The purpose of this step is to apply test case design techniques to design and build a set of intelligent test data. The data must address the system as completely as possible, but it must also focus on high-risk areas and on the system/data components where weaknesses are traditionally found (system boundaries, input value boundaries, output value boundaries, etc.).
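The boundary focus above is the classic boundary value analysis technique, which can be sketched in a few lines. The exact value set generated is a design choice, not a fixed rule:

```python
def boundary_values(lo, hi):
    """Boundary value analysis for an accepted input range [lo, hi]:
    test just inside, on, and just outside each boundary."""
    return sorted({lo - 1, lo, lo + 1, hi - 1, hi, hi + 1})

# e.g. a quantity field that accepts 1..100
cases = boundary_values(1, 100)
# cases is [0, 1, 2, 99, 100, 101]: the two values outside the range
# must be rejected, the four inside must be accepted.
```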

Would you benefit from testing earlier in the software development life cycle?

Let’s talk: [email protected].

Exist is helping Where2GetIt ensure that quality is built into their system as it continues to scale.

Where2GetIt was founded in 1997 and has since grown into an industry-leading provider of location-based digital marketing solutions powering more than 500 brands. Serving more than 500,000 brick-and-mortar locations, Where2GetIt has channel strength that reaches millions of consumers around the world.


Test Plan Development (continued)

The test data set will be a compromise between economics and need. It is not economically feasible to test every possible situation, so a representative sampling of test conditions will be present in the test data. Created test cases are stored in the test case repository tool, TestLink, or in GoogleDocs for easier collaboration on results.

Step 3.1 – Specify the Test Case Design Strategies. This step identifies which test case design approaches will be used at which levels of testing.

Step 3.2 – Design the Test Cases. This task involves applying the test case design techniques to identify the test data values that will be constructed. The test team must obtain the functional requirements document, user stories from the Project Manager, and wireframes in order to create high-level test cases that will be regularly updated as the project progresses. The test team is responsible for designing the System Testing test cases. The test case descriptions must be documented manually and stored in GoogleDocs.

Step 3.3 – Construct the Test Data. This is the construction of the actual physical data sets that will satisfy the test cases designed in Step 3.2. The medium in which the data are constructed will be determined at the time of construction.

Step 4 – Execute Integration Tests. The purpose of integration testing is to prove that the software modules work together properly. It should prove that the integrated modules do what they are intended to do, and that they do not do things they are not intended to do.

Step 4.1 – Approve the Test Environment. The purpose of this step is to verify that the required test environment is in place before testing starts. The test team must ensure that a QA staging environment is available where the build for testing will be deployed, with machine specifications that match (or closely match) the actual production machine specifications. If a project does not require a staging environment, the testers' local machines, where the new builds are deployed, will suffice.

Step 4.2 – Execute Integration Tests. This task is the responsibility of the test team. Its focus is to prove that the integrated software modules do what they should do, and do not do what they should not do. This test is conducted in a formal manner: the testers use integrated test cases that have predicted outputs, and the test results are recorded in structured test logs. The structured test logs and test scripts drive the integration testing process.

Step 4.3 – Retest Problem Areas. This task is cyclic in nature. Retesting continues until pre-specified stopping criteria are met.

Step 5 – Execute System Tests. The purpose of the system test is to use the system in a "controlled" test environment, but to do so as the user would use the system in the production environment. The system test should prove that the complete system will do what it is supposed to do, and that it will not do anything that it is not supposed to do.

Step 6 – Execute Regression Tests. The primary purpose of regression testing is to prove that system enhancements and routine tuning and maintenance do not affect the original functionality of the system. The secondary purpose is to prove that the enhancement/maintenance changes do what they are intended to do, and do not do anything that they are not intended to do. We use Selenium to design and build automated test scripts. The scripts can then be enhanced and replayed for each subsequent regression test.

We use Selenium for automated testing to:

- Save time by dramatically speeding up testing of web apps, running multiple tests in parallel
- Perform frequent regression testing
- Provide instant feedback to enable improved collaboration
- Run virtually limitless iterations of test case execution
- Produce customized reporting of application defects
- Discover defects missed by manual testing
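A minimal sketch of running Selenium checks in parallel with a thread pool. The URLs, the headless Chrome setup, and the title check are all illustrative; real regression suites are project-specific:

```python
from concurrent.futures import ThreadPoolExecutor

# Pages to smoke-check on each regression run (hypothetical list).
PAGES = [
    "https://example.com/",
    "https://example.com/login",
    "https://example.com/checkout",
]

def check_title(url):
    """Load `url` in a headless browser and report whether a title is set."""
    # Imported here so the module stays importable without Selenium installed.
    from selenium import webdriver
    options = webdriver.ChromeOptions()
    options.add_argument("--headless=new")
    driver = webdriver.Chrome(options=options)
    try:
        driver.get(url)
        return url, bool(driver.title)
    finally:
        driver.quit()

def run_regression(pages=PAGES, workers=3):
    """Run the checks in parallel, one browser session per worker."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(check_title, pages))

if __name__ == "__main__":
    for url, ok in run_regression().items():
        print(("PASS" if ok else "FAIL"), url)
```

Each worker owns its own WebDriver instance, since a Selenium session is not safe to share across threads.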

Do you require test automation?

Contact: [email protected].


Test Case Development

Test cases should be written comprehensively so that they can be used by new team members tasked to execute the testing. Each test case falls into one of the following levels, in order to avoid duplication of effort:

- Level 1: We write test cases based on the available specification and user documentation.
- Level 2: We write test cases based on the actual functional and system flow of the application.
- Level 3: Automation of the project. We minimize the tester's interaction with the system so we can focus on testing newly updated functionality rather than remaining busy with regression testing.

Test cases are written in the QA team’s test case repository tool, GoogleDocs. The test group is responsible for creating the test cases prior to any test execution. The following should be considered when writing test cases:

- Write the test case first. Before actual testing commences, QA must ensure that test cases are available and written based on the levels mentioned above.
- Read the specification carefully. Missing a point in the spec is one of the most common errors, and one that cannot be checked in any way other than by hand.
- Test the simple stuff. Focusing only on the difficult logical portions of the program is a mistake; most bugs are simple things that are obvious once tested.
- Test the error cases, the rarer cases, and the boundary conditions. Think carefully to ensure that every error condition and all the odd boundary conditions are tried and tested.

It is the responsibility of the test group to follow up with the Project Manager for any functional specifications, wireframes, user stories, or other documents that can aid in creating high-level test cases. The test cases will evolve from high-level to more specific as the documents are provided to the test group.

Build a dedicated software testing team with us today

Whether you require independent testing services or are looking to get started with testing involved on day 1 -- we can help.

Have questions? Need a quote? Drop an email to [email protected] or call 632-9106010 / 1-310-728-2142 local 5304.

For additional info, visit www.exist.com/software-testing.