Christina Hartman
Professor Matthew Chrenka
CISS 493
21 July 2016

Quality Control Designed to Improve User Acceptance


Benjamin Franklin once said that "the bitterness of poor quality remains long after the sweetness of low price is forgotten" (Palamarchuk). Although Franklin coined the phrase before the introduction of the first computer, the sentiment applies to software quality. Poor software quality can affect a business's productivity, which can mean a loss of money, reputation, and time. Due to the potential impact, it is important to incorporate quality control into both the functional and non-functional testing in your test development plans. To do this, you must understand quality control strategies and how functional and non-functional testing plays a part in the overall quality of the system.

Objectives
- Discover quality control perspectives
- Discuss non-functional requirements testing as it applies to quality control
- Discuss functional requirements testing as it applies to quality control
- Conclusion


Quality Control Perspectives

Years ago, the industry recognized the need for quality control within all areas of the software production process. With this revolution came the concept of five different perspectives. These perspectives drive an environment of quality built into all software development, testing, and implementation aspects; the five perspectives are:


Five Quality Control Perspectives
- Transcendental View: I'll know quality when I see it.
- User View: All requirements were met; therefore, quality was received.
- Manufacturing View: All specifications were met; therefore, quality was produced.
- Product View: The internal system works appropriately; therefore, the external system contains quality.
- Value-Based View: Our customers are willing to pay for the product; therefore, the product contains quality.

Transcendental View. The transcendental view defines quality as knowing when quality is present or not present. In this view the user must see it to know it. This makes quality difficult to define, especially in technical specifications (Kshirasagar and Tripathy). While this view may be hard to pin down, quality control practices and the other views, used as a measuring stick, will aid you in knowing quality when you see it.

User View. The user view describes quality as the fulfillment of user requirements and needs (Kshirasagar and Tripathy). This perspective is easier to define and is highlighted in non-functional testing areas such as compliance testing.

Manufacturing View. The manufacturing view interprets quality from the technical specifications outlined (Kshirasagar and Tripathy). This view could be seen not only from the developer's perspective but also from the quality assurance team's. The developer would need to follow the design and implementation of the system specifications document, while the quality assurance tester would validate that the specifications were met. At that point, quality would be considered met.

Product View. The product view defines quality as the internal quality of the system determining the quality of its external qualities (Kshirasagar and Tripathy). Internal components, which customers don't typically see, potentially affect the way the user interacts with the application. For example, the database design may impact the user's ease of use when viewing data from the product. Another example: when a web service is down, the user views it as the system's fault, regardless of the application's control over the web service; thus, the product is seen to have poor quality.

Value-Based View. The value-based view gauges the quality of a product based on "the amount a customer is willing to pay for it" (Kshirasagar and Tripathy). This view could be representative of the Apple versus Microsoft debate, where people are willing to pay more for Apple products because they are viewed as better products. Apple prided itself on delivering products that exceed customers' expectations and on protecting the customer from unknown sources. Microsoft, on the other hand, wanted to provide a product at a more competitive price, thus introducing third-party solutions.

These views should be included in all non-functional and functional testing plans. By defining the perspective in use, stakeholders will understand the definition of quality and can determine when it is met. This practice provides a definable attribute for the overall quality when verifying and validating quality for non-functional and functional requirements.


Non-Functional Requirements

Non-functional requirements determine how the system should behave. These requirements outline the characteristics of the system that define how the business or users expect the system to perform its functional requirements. For example, a non-functional requirement for a pen might be that the pen must write at least one thousand characters before running out of ink; the functional requirement might be that the pen contains ink that can write on paper. Within functional and non-functional testing there are several different areas you may consider for inclusion in your test case development. Specifically within non-functional testing, you should consider load testing, performance testing, usability testing, and compliance testing at a minimum.


Non-Functional Requirements: Load Testing

Load Testing. Load testing seeks to analyze the system's ability to perform under different or extreme loads, and it varies with each system's design and architecture. These differences mean that one cannot perform the same set of defined test cases on every system. The analyst must gather information and identify several factors prior to building the appropriate set of test cases. This process includes six steps:


Six Processes of Load Testing
- Analysis of Data
- Identifying Key Scenarios
- Identifying Test Data
- Designing Load Tests
- Simulate Load and Monitoring Results
- Analyze Results and Addressing Issues

Analysis of Data gathers existing data to determine when trends indicate a spike or drop in system usage or a significant increase or decrease in data storage. The analyst should also determine where interfaces of data transmission occur (Nagy and Chis 1404). Performing analysis of data requires research and documentation skills; poor documentation and analysis skills could result in poor trend analysis.

Identifying Key Scenarios requires interpretation of the data gathered in the previous step. Here you utilize the data to determine which test scenarios will verify and validate the quality and ability of the system to perform given flexible constraints (Nagy and Chis 1404). When vetting the scenarios, consider the business requirements and acceptance criteria for proper system functionality.

Identifying Test Data compiles the data that will drive the test case scenarios mentioned above. This data should be carefully considered, as it will affect the outcome of the scenarios. For instance, if your scenario was to determine whether the system could handle X number of users (where X is considered a high volume) simultaneously logging in, you would want to ensure your test data didn't contain invalid user IDs or passwords. Invalid user IDs and passwords could skew the results, depending on how your system architecture and database are set up (Nagy and Chis 1404). Understanding the intended outcome of your scenario will guide you on what data to select.

Designing Load Tests builds the actual test by combining the scenarios with the test data identified previously (Nagy and Chis 1404). As you combine them, pay particular attention to whether the selected scenarios and data provide the intended result when combined.

Simulate Load and Monitoring Results performs the actual load tests and screens the process to ensure the tests are running appropriately (Nagy and Chis 1404). During this step, you'll need to monitor the results to identify trends of valued or inferior performance. You'll want to document these results, as they will provide a benchmark for future testing.

Analyze Results and Addressing Issues attempts to verify and validate the quality of the product produced by evaluating the results of the load tests performed. To evaluate, collect the information identified while monitoring the simulation, create graphs, and prepare a "load testing analysis report" (Nagy and Chis 1404) that combines the information for prioritization and severity. This information can provide important benchmarks for the product going forward.

Implementation of these six steps allows the analyst to formulate a quantitative approach to quality control. Poor load testing can result in unwanted perceptions of quality within the product, reducing the value-based perception of the product. If poor product quality isn't identified prior to release, the result is lost revenue due to poor user acceptance. The benchmarks outlined during load testing can also assist in the benchmarking and testing process for performance testing.
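The six steps above can be sketched end to end. The harness below is a minimal, hypothetical illustration (the login routine, user pool, worker count, and latency are all invented, not from the source): it designs a load, simulates it concurrently, and summarizes the results for analysis.

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical valid-credential pool identified during the test data step.
VALID_USERS = {f"user{i}": "s3cret" for i in range(100)}

def login(user_id, password):
    """Stand-in for the system under test; sleeps briefly to mimic latency."""
    time.sleep(0.001)
    return VALID_USERS.get(user_id) == password

def run_load_test(n_users):
    """Design the load, simulate it concurrently, and collect timings."""
    def attempt(uid):
        t0 = time.perf_counter()
        ok = login(uid, "s3cret")
        return ok, time.perf_counter() - t0

    with ThreadPoolExecutor(max_workers=20) as pool:
        results = list(pool.map(attempt, (f"user{i}" for i in range(n_users))))

    durations = [d for _, d in results]
    # The analysis step: summarize into a simple load testing report.
    return {
        "attempts": n_users,
        "successes": sum(ok for ok, _ in results),
        "mean_seconds": statistics.mean(durations),
        "max_seconds": max(durations),
    }

report = run_load_test(100)
print(report["successes"], "of", report["attempts"], "logins succeeded")
```

The report dictionary plays the role of the "load testing analysis report": saved across runs, it becomes the benchmark later performance testing compares against.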


Non-Functional Requirements: Performance Testing

Performance Testing. Performance testing provides an understanding of the system's ability to provide reliable, accurate, sustainable, and scalable results under given conditions. There are typically four areas of performance testing, which provide gauges for quantitative software quality.

Four Gauges of Performance Testing
- Performance Regression Testing
- Performance Optimization and Tuning Testing
- Performance Benchmarking Testing
- Scalability Testing

Performance Regression Testing provides verification and validation after code changes have been implemented in the software package. Selecting specific test cases that ensure the product performs the same intended result as the original test cases gives stakeholders a feasible, quantitative perspective that ensures quality control (Liu 70). Using the same load test previously mentioned: if X users were able to log into the system successfully before the enhancements to the login page, the same number of users (or more) should be able to authenticate afterward. If the same number of users can simultaneously authenticate, then quality was met. This process and set of test cases provide measurable verification that the newly introduced code meets the requirements outlined prior to the upgrade.
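A regression check of this kind can be expressed as a comparison of current timings against stored baselines. The scenario names, timings, and tolerance below are hypothetical, invented for illustration:

```python
# Hypothetical baseline timings from a prior release vs. results after a change.
baseline = {"login": 0.120, "search": 0.450, "checkout": 0.800}   # seconds
current  = {"login": 0.115, "search": 0.470, "checkout": 0.790}

TOLERANCE = 0.10  # allow up to 10% degradation before flagging a regression

def find_regressions(baseline, current, tolerance):
    """Flag any scenario whose timing degraded beyond the tolerance."""
    regressions = {}
    for scenario, base_time in baseline.items():
        delta = (current[scenario] - base_time) / base_time
        if delta > tolerance:
            regressions[scenario] = round(delta, 3)
    return regressions

# search degraded ~4.4%, within the 10% tolerance, so nothing is flagged.
print(find_regressions(baseline, current, TOLERANCE))
```

Tightening the tolerance (say to 1%) would flag the search scenario, which is exactly the measurable, quantitative signal the text describes.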

Performance Optimization and Tuning Testing builds and performs the actual test cases for performance testing. In this area you need to adequately assess the production environment to determine what networking and architecture specifications will support your performance test cases. You will need to thoroughly document the hardware utilized for performance testing (being cautious not to use outdated equipment, as this could skew future testing results). Once you've documented the hardware, you'll need to run initial test cases to develop your baseline. Once your baseline is established, you'll need to vary your test cases to produce a variety of outcomes and determine where bottlenecks occur. As you tune performance, you may need to re-establish your baseline so future test cases are not skewed (Liu 73).

Performance Benchmarking Testing seeks to ensure that benchmarks are met with the introduction of the software. Benchmarks set the standard for the product and provide historical comparison in future releases.

Scalability Testing seeks to measure the limits of your system by increasing transaction usage in an incremental approach. In this testing you determine how the system responds to additional data sets and transaction volumes. When you perform scalability testing, you might seek to find memory leaks or to improve database indexing. Figure 1 shows how, after the database indexes were increased, the system displayed improved response by creating objects more quickly than in previous test cases (Liu).
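The indexing effect described above can be demonstrated in miniature: a Python dict plays the role of a database index over a table that would otherwise be scanned linearly. The table size, schema, and lookup pattern are invented for illustration:

```python
import time

# A table of 50,000 hypothetical rows; the dict acts as an index on "id".
rows = [{"id": i, "name": f"item{i}"} for i in range(50_000)]
index = {row["id"]: row for row in rows}

def scan_lookup(target_id):
    """Unindexed access: scan the table until the row is found."""
    return next(r for r in rows if r["id"] == target_id)

def indexed_lookup(target_id):
    """Indexed access: constant-time hash lookup."""
    return index[target_id]

def time_lookups(fn, ids):
    t0 = time.perf_counter()
    for i in ids:
        fn(i)
    return time.perf_counter() - t0

ids = list(range(0, 50_000, 500))   # 100 lookups spread across the table
slow = time_lookups(scan_lookup, ids)
fast = time_lookups(indexed_lookup, ids)
print(f"scan: {slow:.4f}s  indexed: {fast:.4f}s")
```

Plotting these timings as the table grows would reproduce the shape of lines B and C in Figure 1: degraded scalability without the index, improved scalability with it.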


Scalability Testing (Figure 1)
- Line A identifies the original baseline.
- Line B represents the degraded scalability.
- Line C displays improved scalability after indexing the database.

Regular performance testing will result in improved perceptions of quality control. If the system continuously meets or exceeds prior benchmarks, the system will be considered of good quality from the transcendental and user perspectives. Another way to improve the user-view perception is through proper design of usability test cases.


Non-Functional Requirements: Usability Testing

Usability testing seeks to verify and validate that users can perform actions within the system without interference, or "do what he or she wants to do the way he or she expects to be able to do it, without hindrance, hesitation, or questions" (Rubin and Chisnell). Usability testing should be considered during all phases of the system development cycle, from design to implementation. Design without regard to usability could result in customer frustration and loss of profitability. There are several types of usability tests:


Four Types of Usability Testing
- Exploratory or Formative Study
- Assessment or Summative Test
- Validation and Verification Test
- Comparison Test

Exploratory or Formative Study seeks to examine the value of the original design. During this process, it is important to understand the skill level of your users. You will need to assess their ability to immediately grasp the product features, define its value to the user, and determine whether the user can infer the next step in the process. Rubin and Chisnell refer to the exploratory study as the skeleton of the system (Rubin and Chisnell).

Assessment or Summative Test determines how well a user can perform a task, rather than evaluating just a design. This level of testing typically occurs once a product is developed and visible to the user. For this reason, it can be more quantitative (easier to define) because it focuses on actual system behavior (Rubin and Chisnell). System behavior plays a vital role in the perception of quality from the product view and the user's view.

Validation and Verification Test ascertains whether the product meets system, industry, or company standards of usability. These tests seek to ensure the baseline product contains the same value as the original release and that the product isn't released with degraded value. Validations and verifications must be deemed reasonable by the entire team (Rubin and Chisnell). For example, if you previously discovered that automatically populating the slashes in date fields improved users' understanding, then you would want to validate that new date fields auto-populate the slashes.
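A validation test of that date-field behavior might look like the sketch below. The `auto_slash_date` helper is hypothetical, standing in for the field's formatting logic; it is not from the original presentation.

```python
def auto_slash_date(typed):
    """Insert slashes into a run of digits typed as MMDDYYYY -> MM/DD/YYYY.

    Hypothetical formatting helper illustrating the behavior under validation.
    """
    digits = "".join(ch for ch in typed if ch.isdigit())
    if len(digits) != 8:
        raise ValueError("expected 8 digits (MMDDYYYY)")
    return f"{digits[:2]}/{digits[2:4]}/{digits[4:]}"

# Validation: new date fields must still auto-populate the slashes.
assert auto_slash_date("07212016") == "07/21/2016"
# Already-formatted input should pass through unchanged.
assert auto_slash_date("07/21/2016") == "07/21/2016"
print(auto_slash_date("07212016"))
```

Re-running this check on every release is exactly the kind of verification the paragraph above calls for: it guards against the product shipping with degraded usability value.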

Comparison Test discovers which design concept or design implementation provides a more user-friendly environment. To perform this type of test, you would evaluate two different concepts or implementations and compare the advantages and disadvantages of both (Rubin and Chisnell). For example, the sign-in for Target.com and Kohls.com differs greatly. While both reference an account, Target's implementation provides a clear visual cue, while Kohl's implementation is more difficult to find. You can see the comparison in figure 2.


Comparison of Kohls and Target Account Access

Non-Functional Requirements: Compliance Testing

Compliance testing determines whether the user's expectations are met. It validates whether the system has met the definition of done outlined by the business in the requirements.

Compliance Testing Path
- Identify Business Requirements
- Build Test Cases with Specific Data Sets
- Definition of Done or Acceptance Criteria
- Confirm User Acceptance

To complete this testing, you'll need to identify the business requirements and build test cases with specific data sets against each requirement. During this testing, you'll need to ensure the definition of done or acceptance criteria was met. One way to confirm acceptance criteria is met is to ensure the acceptance criteria are properly documented, understood, and agreed to prior to coding and testing. Without the definition of done or acceptance criteria, compliance testing can become difficult because you can't anticipate the client's satisfaction. Focus on the "three major objectives of user acceptance": confirm, identify, and determine (Kshirasagar and Tripathy 451). Confirm that testing results in desired behavior, identify and resolve any non-compliance issues, and determine whether your system meets standards for release. In order to confirm whether your product meets the user view of quality, you'll need to validate that the acceptance test plan outlines the business's expectations so the business can perform the final user acceptance testing.
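The confirm/identify/determine objectives can be captured in a small report over acceptance criteria. The requirement names and pass/fail outcomes below are invented for illustration:

```python
# Hypothetical acceptance criteria mapped to test outcomes (requirement -> passed?).
acceptance_criteria = {
    "REQ-1: users can authenticate": True,
    "REQ-2: report exports to CSV": True,
    "REQ-3: audit log captures edits": False,
}

def compliance_report(criteria):
    """Confirm desired behavior, identify non-compliance, determine release readiness."""
    failures = [req for req, passed in criteria.items() if not passed]
    return {
        "confirmed": len(criteria) - len(failures),       # confirm
        "non_compliant": failures,                        # identify
        "release_ready": not failures,                    # determine
    }

report = compliance_report(acceptance_criteria)
print(report["release_ready"])  # False: REQ-3 has not met its definition of done
```

The point of the sketch is the mapping: each business requirement gets an explicit, agreed-upon pass condition before testing begins, so "done" is never a matter of opinion at release time.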

Non-functional testing provides a way for an analyst to evaluate the quality of how the system performs. However, in order to consider a complete testing strategy, you must consider functional test strategies.


Functional Requirements

Functional testing consists of testing areas of the system that determine compliance with laws, policy, and business practices (business rules, authentication rules, reporting, etc.). Functional requirements are commonly described as 'what' the system should do. During functional testing you will test the boundaries, data transitions, business rules, and decision points implemented throughout the system. To test these areas, you should include unit testing, integration testing, control flow testing, data flow testing, and security testing in your quality control plans.


Functional Requirements: Unit Testing

Unit testing evaluates the code level of a software solution: functions, procedures, methods, objects, or classes. Here you test a unit of the system in isolation from its integration with other pieces of the software, confining the testing to a single zone of the system. You can select specific data inputs to verify and validate that a function produces the intended result for the desired outcome. You will need to test both positive and negative results to verify the system executes the code as expected. By isolating issues now, you save resources and time, because finding the root cause of a problem within the integrated system consumes more resources. Unit testing can be performed at two levels: static and dynamic (Kshirasagar and Tripathy 52).


Two Types of Unit Testing
- Static Testing: readiness, completeness, maintainability, readability, complexity
- Dynamic Testing: stubs, drivers

Static Testing inspects the code produced prior to executing it and integrating it with the system. This can be performed through inspections and walkthroughs. Inspections analyze the specific code as a team, while walkthroughs are simulated scenarios that evaluate quality for readiness, completeness, maintainability, readability, and complexity (Kshirasagar and Tripathy).

Dynamic Testing removes certain aspects of the environment and utilizes stubs and drivers to execute the code in a series of simulations. The driver typically contains the ability to determine whether the test case passed or failed (Kshirasagar and Tripathy). Creating stubs and drivers isolates any discoveries to a single unit of the system.

While testing these pieces, it is important to understand that the infrastructure (the database, web services, etc.) may not be set up at the point at which you are ready to test, so these tests are typically performed with mock objects that emulate the structure (Smith). By doing this, you eliminate the complexity and isolate any issues to the code. You'll then need to perform integration testing, which links the units and the system together.
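A unit test built on mock objects, as described above, might look like the sketch below; `lookup_balance` and its database interface are hypothetical stand-ins, not from the source.

```python
import unittest
from unittest import mock

def lookup_balance(account_id, db):
    """Unit under test: formats a balance fetched through a database layer."""
    row = db.fetch(account_id)
    if row is None:
        raise KeyError(account_id)
    return f"${row['balance']:.2f}"

class LookupBalanceTest(unittest.TestCase):
    def test_formats_balance(self):
        # The mock stands in for a database that may not be set up yet.
        db = mock.Mock()
        db.fetch.return_value = {"balance": 42.5}
        self.assertEqual(lookup_balance("A1", db), "$42.50")
        db.fetch.assert_called_once_with("A1")

    def test_missing_account_raises(self):
        # Negative case: the unit must fail cleanly on a missing row.
        db = mock.Mock()
        db.fetch.return_value = None
        with self.assertRaises(KeyError):
            lookup_balance("missing", db)

suite = unittest.TestLoader().loadTestsFromTestCase(LookupBalanceTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("unit tests passed:", result.wasSuccessful())
```

Note that both the positive and negative paths are exercised, and any failure is isolated to this one unit because the mock removes the real infrastructure from the test.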


Functional Requirements: Integration Testing

Integration testing performs verification and validation of a system component (a collection of two or more units) working as a product; it tests the interfaces between the units. Data flow and control flow testing will improve the overall design of the integration testing. Being mindful of these testing practices means you'll need to implement the lowest-level components first and work through the data and control testing, building up to the highest-level components (Microsoft). Within integration testing there are three classes:


Three Classes of Integration Testing
- Procedure Call Interface
- Shared Memory Interface
- Message Passing Interface

Procedure Call Interface refers to testing interfaces where a component calls a procedure from a different component (Kshirasagar and Tripathy). You can determine the procedure calls from control flow graphs or unified modeling language diagrams.

Shared Memory Interface refers to the testing of system components that share a block of memory, where one module writes the data and another reads it (Kshirasagar and Tripathy). Shared memory interfaces can be identified from control flow graphs and data flow graphs.

Message Passing Interface testing examines components' ability to "prepare a field by initializing the fields of the data structure and sending the message to another module" (Kshirasagar and Tripathy); in other words, transmitting messages from one module to another.
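A message passing interface between two units can be tested at the interface alone, as in this sketch. The order and billing modules, their price table, and the JSON message format are all invented for illustration:

```python
import json

def order_module_send(item, quantity):
    """Producer: initializes the fields of the message and serializes it."""
    return json.dumps({"item": item, "quantity": quantity})

def billing_module_receive(message):
    """Consumer: parses the message and computes a total from a price table."""
    prices = {"pen": 1.25, "notebook": 3.50}
    data = json.loads(message)
    return round(prices[data["item"]] * data["quantity"], 2)

# Integration test: exercise the interface between the two units,
# not the internals of either one.
msg = order_module_send("pen", 4)
total = billing_module_receive(msg)
print(total)  # 5.0
```

Each unit would already have passed its own unit tests; what this test adds is confidence that the message one module prepares is the message the other module expects.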

By implementing the low-level to high-level approach in combination with the three integration testing techniques, you will minimize the stubs and drivers mentioned earlier, and early releases become more feasible. Before performing integration testing, consider control flow testing and data flow testing.


Functional Requirements: Control Flow Testing

Control flow testing seeks to determine whether the software structure has test coverage for each base case and alternative case. The amount of coverage varies with each project and system size. To determine the control flow test cases, you will need to utilize the control flow graph (CFG), which outlines the paths within a unit (in some instances the compiler can produce the CFG). You will need to evaluate the feasibility of complete coverage; in some instances, "100% coverage may be difficult to achieve" (Kshirasagar and Tripathy 90).
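Deriving one test case per path through a unit's control flow graph might look like the sketch below; the `classify` function and its three paths are hypothetical:

```python
def classify(value, threshold):
    """Unit under test: two decision points yield three control flow paths."""
    if value < 0:
        label = "negative"
    elif value < threshold:
        label = "low"
    else:
        label = "high"
    return label

# One test case per path through the control flow graph:
# (input value, threshold, expected label)
path_cases = [
    (-5, 10, "negative"),   # path through the first branch
    (3, 10, "low"),         # path through the elif branch
    (10, 10, "high"),       # path through the else branch
]
covered = all(classify(v, t) == expected for v, t, expected in path_cases)
print("all paths covered:", covered)
```

For a unit this small, 100% path coverage is trivial; the feasibility question in the text arises when loops and nested branches multiply the number of paths.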


Example of Control Flow Graph Notation

http://ecomputernotes.com/software-engineering/testing-techniques


Functional Requirements: Data Flow Testing

Data flow testing evaluates the data and the transformation processes that can occur when moving data through the program. To follow these paths, one must understand the control flow testing performed as well; these tests are commonly performed together but produce different results (Kshirasagar and Tripathy 112). You must consider your control paths to determine how the system will modify the data and whether the data flow progressed appropriately.
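The def-use pairs at the heart of data flow testing can be computed from a trace of definitions and uses. The miniature factorial-style program below, and its def/use trace, are invented for illustration:

```python
# A trace of (kind, variable, line) events for a small hypothetical program
# that computes a factorial: each "def" writes a variable, each "use" reads it.
program = [
    ("def", "n", 1),     # line 1: n = input
    ("def", "acc", 2),   # line 2: acc = 1
    ("use", "n", 3),     # line 3: loop bound reads n
    ("use", "acc", 4),   # line 4: acc is read ...
    ("def", "acc", 4),   # ... and redefined: acc = acc * i
    ("use", "acc", 5),   # line 5: return acc
]

def def_use_pairs(events):
    """Pair each use of a variable with the most recent definition reaching it."""
    last_def, pairs = {}, []
    for kind, var, line in events:
        if kind == "def":
            last_def[var] = line
        else:
            pairs.append((var, last_def[var], line))
    return pairs

print(def_use_pairs(program))
```

Each resulting (variable, def-line, use-line) triple is one def-use pair a data flow test must exercise, which is how this technique finds paths that plain branch coverage can miss.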


Example of a Data Flow Graph

http://modeling-languages.com/distributed-model-to-model-transformation-with-mapreduce/

The figure above shows an example of models derived from a small program calculating a number's factorial. Instructions are represented by rectangles and variables by squares. An instruction points to the set of variables it defines or writes (def) and the set of variables it reads (use). The links cfNext and dfNext refer to the next control flow and data flow instructions respectively. As can be seen in the figure, the transformation changes the topology of the model graph. The source of the transformation can be found on the tool website (Cabot).

Cabot, Jordi. "Distributed Model-to-Model Transformation with MapReduce." 26 October 2015. Modeling Languages. Web. 22 July 2016.

Functional Requirements: Security Testing

Security testing aims to find security flaws and vulnerabilities in the software. There are many areas of the system to test and many ways to test a system's security. Some approaches utilize software built specifically to find known security vulnerabilities and to combat the Open Web Application Security Project's (OWASP) Top 10 List. Others utilize specialized software in combination with fuzzing. Fuzzing aims to break the system by sending massive amounts of invalid or semi-valid data in an attempt to overload or crash it (Takanen, DeMott and Miller). This type of testing protects against cross-site scripting, denial of service (DoS) attacks, buffer overflows, and SQL injections. When performing fuzzing, there are several areas to consider.

Consider These Six Areas of Security Testing
- Network Interface
- Files and Data
- Application Programmer Interface
- Web Application
- Client-Side
- Transmission Layers

Network Interface (or server side) security testing ensures the system network isn't vulnerable to attacks. Network interface fuzzing involves "sending semi-valid applications packets to the server or client" (Takanen, DeMott and Miller 162). Fuzzing internet protocols (IP) ensures your system isn't vulnerable between clients sending information to each other or client to server communication.

Files and Data security testing attempts to insert overloaded field sizes and to exploit format constraints and file structure. By running these types of uploads simultaneously over a period of time, a production system could crash if the weakness is not discovered during quality assurance testing (Takanen, DeMott and Miller 165).

Application Programmer Interface security testing ensures that your system doesn't become exploited through overloaded field sizes, exploited data inputs, or SQL-type transactions against your interfaces.

Web Application security testing has increased in importance over the last decade as more online solutions have become available. This testing performs various quality checks on file uploads and downloads (if available), data inputs, cross-site scripting, message transmissions, etc. One of the most basic security tests on a web application is to log into the application, log out, and then click the browser's back button to see if you are still authenticated. Another example would be to log into the site and, once at the payment process, open another tab or browser to attempt to steal the session and get two items for the price of one.

Client-Side security testing attempts to exploit vulnerabilities in the user interface by testing all button sequences, limitations of text input options (like formatting constraints), any command-line options, and any import/export options (similar to the file testing mentioned earlier).

Transmission Layers security testing emulates the transmission of data across the system. To perform this security testing, you should consider the various layers within your system: application, presentation, session, transport, network, data link, and physical (Takanen, DeMott and Miller). These layers each have their own vulnerabilities and exploitations. For instance, if you send massive amounts of semi-valid packets to the data link layer, you may exploit vulnerabilities within that layer and stop the network layer as well.
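A minimal fuzzing harness in the spirit described above might look like this sketch. `parse_record` is a hypothetical parser under test; the harness feeds it random semi-valid input, and any exception other than the parser's documented `ValueError` counts as a finding.

```python
import random
import string

def parse_record(text):
    """Hypothetical parser under test: expects 'name:age' with age 0-150."""
    name, _, age = text.partition(":")
    if not name or not age.isdigit():
        raise ValueError("malformed record")
    age = int(age)
    if age > 150:
        raise ValueError("age out of range")
    return name, age

def fuzz(trials=1_000, seed=42):
    """Feed random printable junk; record any crash that is not a clean rejection."""
    rng = random.Random(seed)
    findings = []
    for _ in range(trials):
        junk = "".join(rng.choice(string.printable)
                       for _ in range(rng.randint(0, 40)))
        try:
            parse_record(junk)
        except ValueError:
            pass                      # expected rejection of bad input
        except Exception as exc:      # unexpected crash: the flaw fuzzing hunts for
            findings.append((junk, repr(exc)))
    return findings

print("crashes found:", len(fuzz()))
```

Real fuzzers generate semi-valid input guided by the protocol or file format rather than pure randomness, but the harness shape is the same: flood the interface, treat clean rejections as passes, and log everything else for triage.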

The various types of attacks mentioned earlier represent a large threat to companies. They can cause loss of revenue, loss of processing (such as patient care, depending on the industry), and loss of trust in the system (which typically equates to perceived system quality). For example, the social media site Snapchat had an unknown vulnerability that led to the release of private phone numbers and bogus account creation (Worstall). The exploit could be viewed as a quality concern, steering the consumer base away from the product.

Functional testing provides validation that the system meets requirements by performing what it is supposed to do. There are many more types of functional tests that could be considered; however, the types above provide a beneficial approach to test case development and planning.


Conclusion

Quality control, by designing test cases for functional and non-functional requirements, improves the likelihood that your product is acceptable to users. By covering many areas of the system in combination with the functional and non-functional requirements throughout the testing plan, you improve the quality of your product through different viewpoints. The transcendental view improves because the product performs well. The user view accepts because you've validated the product features. The manufacturing view (in this case the programmers') sees the product as high quality because the specifications were verified. The product view increases because you've applied internal and external verification of features and security in your quality control testing. And finally, the value-based view improves because you've validated that the product meets or exceeds competitor standards, increasing the likelihood that your product will be highly marketable. When you integrate quality control throughout the system development lifecycle (from development to implementation), you continuously verify and validate quality in every feature of the product. In short, quality is about verification and validation of your product.
