
Page 1: Software Testing

Software Testing

CS 470

Page 2: Software Testing

Testing

• Goal is to find faults
• What kind of faults?
  – Algorithmic
  – Computation and Precision
  – Documentation
  – Overload
  – Capacity
  – Timing
  – Performance
  – Recovery
  – System
  – Standards

Page 3: Software Testing

Algorithmic Fault

• Code’s logic does not produce the proper output for a given input
  – Some examples:
    • Branch under wrong conditions
    • Fail to set loop invariant
    • Fail to initialize variables
    • Comparing inappropriate data types
  – Syntax faults may help find algorithmic faults
    • E.g. missing { } around a loop – require braces even for loops with one statement (see the sketch below)
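
A minimal C sketch of the missing-brace case above (illustrative code, not from the slides): the indentation suggests two statements in the loop body, but without braces only the first one is repeated.

#include <stdio.h>

/* The programmer intended to sum and count inside the loop, but
 * without braces only the first statement is part of the loop body. */
int main(void)
{
    int sum = 0, count = 0;

    for (int i = 1; i <= 5; i++)
        sum += i;
        count++;   /* executes once, after the loop, despite the indentation */

    printf("sum=%d count=%d\n", sum, count);   /* prints sum=15 count=1 */
    return 0;
}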

Page 4: Software Testing

Computation and Precision Faults

• Formula is implemented incorrectly

• Computation not performed to the required accuracy
  – E.g. the precision errors we saw with the Patriot Missile System (sketched below)
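
A minimal sketch of accumulated rounding error (illustrative code, not the actual Patriot software): 0.1 has no exact binary representation, so adding it repeatedly drifts away from the true elapsed time, and the drift grows with uptime.

#include <stdio.h>

int main(void)
{
    float clock_s = 0.0f;

    for (long tick = 0; tick < 1000000; tick++)
        clock_s += 0.1f;               /* each addition carries a tiny error */

    printf("accumulated: %f  expected: %f\n", clock_s, 1000000 * 0.1);
    return 0;
}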

Page 5: Software Testing

Documentation Faults

• Documentation typically derived from the program design

• Implementation may be performed differently than specified in the documentation

• Improper documentation of an API can lead to further faults as other developers use the documentation to write code

Page 6: Software Testing

Stress or Overload Faults

• Data structures filled past their capacity
  – Buffer overflow (see the sketch below)
  – Dimension of tables larger than the code can handle
  – Length of queue too great
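
A minimal sketch of the buffer-overflow case (hypothetical function, not from the slides): the buffer holds 8 bytes, but strcpy performs no length check, so a longer input writes past the end of the structure.

#include <stdio.h>
#include <string.h>

void greet(const char *name)
{
    char buf[8];

    strcpy(buf, name);                 /* overflows if name is 8+ characters */
    printf("hello %s\n", buf);

    /* A bounded copy avoids filling the structure past its capacity:
     *   snprintf(buf, sizeof(buf), "%s", name);
     */
}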

Page 7: Software Testing

Capacity Faults

• Performance becomes unacceptable as activity reaches the system limit
  – E.g., a system capable of handling 1,000,000 records becomes slower and slower as more records are added
  – May be examined in relation to disk access, number of processes, etc.

Page 8: Software Testing

Timing or Coordination Faults

• Race conditions

• Specific inter-process communication

• May be particularly difficult to detect
  – Hard to replicate a fault (see the sketch below)
  – Many states are possible within each process; even more when the processes are considered in conjunction
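
A minimal sketch of a race condition (illustrative code, not from the slides; compile with -pthread): two threads increment a shared counter without a lock, so updates interleave and are lost. The final total varies from run to run, which is exactly why such faults are hard to replicate.

#include <pthread.h>
#include <stdio.h>

static long counter = 0;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000000; i++)
        counter++;                     /* unsynchronized read-modify-write */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;

    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    printf("counter = %ld (expected 2000000)\n", counter);
    return 0;
}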

Page 9: Software Testing

Throughput or Performance Faults

• System does not perform at the speed prescribed by the requirements

Page 10: Software Testing

Recovery Faults

• System does not recover from a failure as specified

Page 11: Software Testing

Hardware and System Faults

• Problem with underlying hardware or operating system
  – Disk failure
  – Out of memory
  – Process table full

• Software doesn’t adequately handle the system fault
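
A minimal sketch of handling (rather than ignoring) system faults; the function and path are hypothetical. fopen and malloc can fail under disk or memory pressure, and unchecked return values are what turn a system fault into a crash.

#include <stdio.h>
#include <stdlib.h>

int load_record(const char *path)
{
    FILE *f = fopen(path, "r");
    if (f == NULL) {                   /* e.g. disk failure or missing file */
        perror("fopen");
        return -1;
    }

    char *buf = malloc(4096);
    if (buf == NULL) {                 /* out of memory */
        fclose(f);
        return -1;
    }

    size_t n = fread(buf, 1, 4096, f);
    printf("read %zu bytes\n", n);

    free(buf);
    fclose(f);
    return 0;
}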

Page 12: Software Testing

Standards and Procedures Faults

• Organizational standards and procedures not followed
  – E.g., always add a header comment to each method
  – Always use the letter i preceding an integer variable, s preceding a String, etc.

• May not affect the running of programs, but may foster an environment where faults are created

Page 13: Software Testing

Some other Faults used at IBM

• Interface
  – Fault in interacting with another component or driver
• Checking
  – Fault in logic that fails to validate data properly before it is used (sketched below)
• Build/Package/Merge
  – Fault that occurs because of problems in repositories and version control
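
A minimal sketch of a "checking" fault (hypothetical lookup table): the index comes from the caller and is used without validation, so an out-of-range value reads past the end of the array. The commented-out line is the missing check.

static const char *levels[3] = { "low", "medium", "high" };

const char *level_name(int idx)
{
    /* if (idx < 0 || idx >= 3) return "unknown"; */   /* validation missing */
    return levels[idx];
}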

Page 14: Software Testing

Back to Testing…

• Goal is to find the faults
• Different types of testing
  – Unit testing
  – Integration testing
  – Functional testing
  – Performance testing
  – Acceptance testing
  – Installation testing

Page 15: Software Testing

Unit Testing

• The first opportunity to test whether a particular module or component is complete
• Often performed by the developer
• Test cases can be chosen in many ways. Some ways to sample for broad test coverage:
  – Statement testing: every statement is executed at least once
  – Branch testing: every decision point is exercised at least once (see the sketch below)
  – Path testing: a variation of the above; each path through the code is tested at least once (visualize the code as a flow chart)
  – Definition-use path testing: every path from every definition of every variable to every use of that definition is exercised
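
A minimal sketch of branch testing using plain assert() (illustrative function, not from the slides): max_of_three has two decision points, and the cases are chosen so each comparison is taken both ways at least once.

#include <assert.h>

static int max_of_three(int a, int b, int c)
{
    int m = a;
    if (b > m) m = b;
    if (c > m) m = c;
    return m;
}

int main(void)
{
    assert(max_of_three(3, 2, 1) == 3);   /* both comparisons false */
    assert(max_of_three(1, 3, 2) == 3);   /* first true, second false */
    assert(max_of_three(1, 2, 3) == 3);   /* second comparison true */
    return 0;
}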

Page 16: Software Testing

Integration Testing

• Integration
  – Piecing together individual components to make a working system
  – Individual components should already have passed their unit tests

• Integration Testing
  – Tests to ensure the system functions when the components are pieced together

Page 17: Software Testing

Integration Testing

• Bottom-up
  – Low-level components first, exercised with a test harness (drivers)
• Top-Down
  – High-level components first, with stubs standing in for lower levels (see the sketch below)
• Big-Bang
  – Put them together and see if it works
• Sandwich
  – System viewed as three layers
  – Top-down in the top layer, bottom-up in the lower layer, converging in the middle
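
A minimal sketch of a stub for top-down integration (the billing example is hypothetical): the high-level total_with_tax() is exercised before the real tax lookup exists, so the stub returns a canned value; main() doubles as a simple test driver.

#include <stdio.h>

static double tax_rate_for_region(const char *region)   /* stub */
{
    (void)region;
    return 0.10;                       /* fixed answer instead of a real lookup */
}

double total_with_tax(const char *region, double subtotal)
{
    return subtotal * (1.0 + tax_rate_for_region(region));
}

int main(void)
{
    printf("%.2f\n", total_with_tax("WA", 100.0));   /* 110.00 with the stub */
    return 0;
}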

Page 18: Software Testing

Comparison of Integration Strategies

                                   Bottom-Up   Top-Down   Big-Bang   Sandwich
Integration                        Early       Early      Late       Early
Time to basic working program      Late        Early      Late       Early
Component drivers needed           Yes         No         Yes        Yes
Stubs needed                       No          Yes        No         Yes
Work parallelism at beginning      Medium      Low        High       Medium
Ability to test particular paths   Easy        Hard       Easy       Medium

Page 19: Software Testing

Acceptance Testing

• Enable customers and users to determine if the system really meets their needs and expectations
  – Benchmarks
    • Test cases under typical conditions to examine performance
  – Pilot test
    • Users exercise the system on an experimental basis
    • Alpha: performed in-house
    • Beta: performed at customer sites
  – Parallel testing
    • If replacing an old system, run both in parallel to allow users to gradually become accustomed to the new system
  – Longevity test
    • Can the software function if running for X hours?

Page 20: Software Testing

Results of Acceptance Tests

• Ideally: acceptance testing uncovers discrepancies in requirements

• Reality: by working with the system, users discover aspects of the problem of which they were not aware
  – New requirements
  – New development and features
  – In the other direction:
    • Some requirements may not be needed

Page 21: Software Testing

Installation Testing

• Installing the system at the user site

• System-specific configuration

• Co-existence test with other software
  – E.g. does it work with Office software?
  – Does anything break if the software is removed?

Page 22: Software Testing

Test Cases

• Want a broad set of test cases to cover the range of possible values and code paths

• Closed Box
  – Apply all possible inputs, compare with the expected output according to the requirements
  – Includes “out of range” inputs (see the sketch below)

• Open Box
  – View the code’s internal structure
  – Generate tests based on this structure, e.g. give values on both sides of an if-then-else test
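
A minimal sketch of closed-box test cases (hypothetical grading function): the inputs are chosen from the stated requirements alone, including boundary and "out of range" values, without looking at the implementation.

#include <assert.h>

static char letter_grade(int score)    /* spec: 0-100; 'F' below 60, 'P' otherwise */
{
    if (score < 0 || score > 100)
        return '?';                    /* out-of-range input */
    return (score >= 60) ? 'P' : 'F';
}

int main(void)
{
    assert(letter_grade(59)  == 'F');  /* just below the boundary */
    assert(letter_grade(60)  == 'P');  /* on the boundary */
    assert(letter_grade(100) == 'P');  /* upper limit */
    assert(letter_grade(-1)  == '?');  /* out of range, low */
    assert(letter_grade(101) == '?');  /* out of range, high */
    return 0;
}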

Page 23: Software Testing

Who Should Test?

• In your case, you (the developer) will test
  – But the developer is often “too close” to the code to be objective and recognize some subtle faults
  – Difficult for the developer to separate implementation structure from required function

• An independent tester or test team is preferable
  – Familiar with how to “break” software
  – Familiar with testing methods and tools

• Users should be involved throughout the process
  – Might catch missing requirements at early stages

Page 24: Software Testing

Automated Testing Tools

• Programs that will help test your code
• Static Analysis, e.g. lint
  – Code analysis
    • Checks syntax, whether a construct is fault-prone, etc. (see the sketch below)
  – Structure checker
    • Depicts logic flow, checks for structural flaws
  – Data analyzer
    • Reviews data structures, illegal data usage, improper linkage
  – Sequence checker
    • Highlights events that occur in the wrong sequence
• Dynamic Analysis, e.g. VTune
  – Monitors and collects statistics as the program runs
  – How many times a statement was executed, branches covered, memory use, etc.
  – Useful for performance analysis, to find the hotspots of the code
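
A minimal sketch of the kind of construct a static analyzer such as lint flags without ever running the program (illustrative code, not from the slides): an uninitialized variable and a signed/unsigned comparison.

#include <stdio.h>

int main(void)
{
    int count;                         /* never initialized */
    unsigned int limit = 10;

    if (count < limit)                 /* signed/unsigned comparison on a garbage value */
        printf("under limit\n");

    return 0;
}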

Page 25: Software Testing

How Much Testing?

• How do we know when to stop?
  – When all tests pass?
  – When ‘enough’ tests pass, meeting some threshold?

• Myers (1979)
  – As the number of detected faults increases, the probability of the existence of more undetected faults increases
  – Suggests that if the number of detected faults is decreasing, we may not have too many left and can stop soon

• Fault Seeding
  – Deliberately add known faults to the program
  – Use their detection rate as a measure of the remaining faults (see the sketch below)
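
A minimal sketch of the usual fault-seeding estimate (the standard Mills-style ratio; the formula is not stated on the slide): if testing finds the same fraction of seeded faults as of real ones, then estimated real faults ≈ real_found × seeded_total / seeded_found.

#include <stdio.h>

int main(void)
{
    int seeded_total = 20;             /* faults deliberately added */
    int seeded_found = 15;             /* seeded faults detected so far */
    int real_found   = 30;             /* genuine faults detected so far */

    double estimate = (double)real_found * seeded_total / seeded_found;
    printf("estimated real faults: %.0f (about %.0f still undetected)\n",
           estimate, estimate - real_found);
    return 0;
}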

Page 26: Software Testing

Fault Discovery Techniques

• Jones, 1991

Discovery Technique     Requirements   Design   Coding   Documentation
Prototyping             40             35       35       15
Requirements Review     40             15       0        5
Design Review           15             55       0        15
Code Inspection         20             40       65       25
Unit Testing            1              5        20       0

Page 27: Software Testing

Code Inspections

• Inspections can be applied to many different things by many different groups

• Inspections are a “Best Known Method” (BKM) for increasing quality
  – Developed by Michael Fagan at IBM; paper published in 1976
  – Estimates: inspections of design and code usually remove 50-90% of defects before testing
  – Very economical compared to testing

• Formal inspections are more productive than informal reviews

Page 28: Software Testing

Industry Experience

• Widely used at many companies
  – HP, Microsoft, IBM, AT&T, Bell Labs, etc. all have an established track record with inspections
  – Aetna: found 82% of errors in a program by inspections, and was able to decrease development resources by 25%
  – AT&T: reported a 90% decrease in defects after introducing inspections

• Finding defects early can save money and time
  – TRW: changes in the requirements stage cost 50-200 times less than changes during coding or maintenance
  – JPL: estimates it saved $25,000 per inspection by finding and fixing defects early on

Page 29: Software Testing

Cost of Fixing a Defect

[Chart: cost of fixing a defect by stage – Requirements, Design, Coding, Test, Acceptance, Production]

Page 30: Software Testing

Formal Inspections

• By formalizing the process, inspections become systematic and repeatable
  – Each person in the inspection process must understand their role
  – Use of checklists focuses concentration on detecting defects that have been problematic

• Metrics
  – Feedback and data-collection metrics are quantifiable
  – They feed into future inspections to improve them

• Designers and developers learn to improve their work through inspection participation

Page 31: Software Testing

More reasons to use inspections

• Inspections are measurable
• Ability to track progress
• Reduces rework and debug time
• Cannot guarantee that a deadline will be met, but can give early warning of impending problems

• Information sharing with other developers, testers

Page 32: Software Testing

Definition

• What is an inspection?
  – A formal review of a work product by peers. A standard process is followed with the purpose of detecting defects early in the development lifecycle.

• Examples of work products
  – Code, specs, web pages
  – Presentations, guides, requirements
  – Specifications, documentation

Page 33: Software Testing

When are inspections used?

• Possible anytime code or documents are complete
  – Requirements: inspect specs, plans, schedules
  – Design: inspect architecture, design documents
  – Implementation: inspect the code
  – Test: inspect test procedures, test reports

Page 34: Software Testing

Defects

• A defect is a deviation from specified or expected behavior
  – Something wrong
  – Missing information
  – Common error
  – Standards violation
  – Ambiguity
  – Inconsistency
  – Perception error
  – Design error

• Inspections are used to find defects

Page 35: Software Testing

A defect is a defect

• A defect is based on the opinion of the person doing the review
  – This means that any defect that is found IS a defect
  – Not open to debate
  – Not all defects are necessarily bugs
  – Many defects may not be “fixed” in the end

• No voting or consensus process on what is a defect

• How to fix a defect should be debated later, not when the defects are logged

Page 36: Software Testing

Other Review Methods

            Review                      Walkthrough                      Inspection
What        Present idea or proposal    Technical presentation of work   Formal review by peers
Audience    Mgmt/Tech                   Tech                             Tech
Objective   Provide info, evaluate      Explain work, may find design    Find defects early
            specs or plan; give status  or logic defects; give context

Page 37: Software Testing

Other Defect Detection Methods

            Buddy                       Testing                          Inspection
What        Developers work in pairs    Formal testing                   Formal review by peers
Audience    Tech                        Tech                             Tech
Objective   Develop, explain work,      Find defects by symptom,         Find defects where
            find defects                usability, performance           they occur

Page 38: Software Testing

Why a formal review?

• Provides a well-defined process
  – Repeatability, measurement
  – Avoids some scenarios seen with less formal processes

• “My work is perfect”
  – The point is not to criticize the author

• “I don’t have time”
  – The formal process proceeds only when everyone is prepared and has inspected the code in advance

• We won’t do inspections this semester, but you may encounter them in the future if you work on a large software project

Page 39: Software Testing

Walkthrough vs. Inspection

            Walkthrough                          Inspection
Focus       Improve the product                  Find defects
Activities  Find defects, examine alternatives,  Find defects; only defect explanation
            forum for learning, discussion       allowed; learning through defects
                                                 and the inspection
Process     Informal                             Formal
Quality     Variable; personalities can          Repeatable with a fixed process
            modify the outcome
Time        Preparation ad hoc, less formal      Preparation required, efficient
                                                 use of time

Page 40: Software Testing

What should be inspected?

• For existing code or documentation, select
– The piece most critical to the program’s operation

– Most used section

– Most costly if defects were to exist

– Most error-prone

– Least well-known

– Most frequently changed

• For new code or documentation– 20% <= inspect <= 100%

Page 41: Software Testing

Inspection Process

Page 42: Software Testing

Typical Inspection Process

Planning (45 mins) → Preparation (15-120 mins) → Log Defects (60-120 mins) → Rework → Follow-Up

Page 43: Software Testing

Roles

• Moderator
• Scribe
• Work Owner
• Inspectors

Page 44: Software Testing

Owner Planning

• Owner decides what code/documents to review
  – Should include requirements
  – Common-errors list
    • One will also be provided by the moderator
    • Owner can include more specific common errors
  – Paper copy of the code listing for everyone
    • See previous slides for what code to inspect
    • If possible, add line numbers so the code is easier to reference

• Up to the owner’s discretion as to what/how much, but generally one does not continue more than two hours at a time

Page 45: Software Testing

Preparation

• Ideally, each inspector has the materials to inspect in advance
  – Identify defects on their own to ensure independent thought
  – Note defects and questions
  – Complete a defect log
    • High / Medium / Low
  – Without this preparation, a group review might find only 10% of the defects that could otherwise be found (Fagan)

• Rules of thumb
  – 2 hours for 10 full pages of text

Page 46: Software Testing

Preparation – Our Exercise

• Too many projects / it would take too much time to perform a detailed inspection for all our projects

• Compromise – hopefully enough to give you a flavor of inspections and still be useful
  – Everyone should prepare by becoming familiar with common defects
  – Owner will provide a brief walkthrough
  – Inspectors will log defects in real time as the walkthrough is made

Page 47: Software Testing

Common Defects

• See the web page links
  – C++
  – Java

• Similar issues apply to other languages

Page 48: Software Testing

Walkthrough

• Prior to the walkthrough
  – Owner distributes the selected code and requirements documents
  – Inspectors have prepared

• Purpose
  – Provide context and explanation for the selected code to the inspectors
  – Relate the code back to the requirements

• Process
  – Owner provides a walkthrough for one “chunk” of code that logically belongs together (e.g., a method)
  – Inspectors search for defects
  – Repeat for the next chunk

Page 49: Software Testing

Walkthrough Example

• Requirement: Support authentication based upon user@host using regular expressions

 1  /*********************************************************
 2   * Returns a 1 if the user is on the ops list, and
 3   * returns a 0 if the user is not on the ops list.
 4   *********************************************************/
 5  int Authorized(char *user)
 6  {
 7      FILE *f;
 8
 9      f=fopen(OPSPATH,"r");                /* open authorized file */
10      while (fgets(tempstr,80,f)!=NULL)
11      {
12          tempstr[strlen(tempstr)-1]='\0'; /* annoying \r at end */
13          if (!fnmatch(tempstr,user,FNM_CASEFOLD)) { fclose(f); return(1); }
14      }
15      fclose(f);
16      return(0);
17  }

(Slide callouts: line 9 opens the file containing the operators; line 13 returns true if the wildcards match.)

Page 50: Software Testing

Defect Logging

• Performed by the scribe; leaves work owner free to concentrate on other tasks

• Moderator leads the meeting and facilitates the process
  – Keeps discussions to a minimum
  – Defect understanding only
  – No criticisms
  – No “rat holes”
  – Limited discussion
  – Moderator has the right to stop discussion at any time
  – May use a round-robin for each inspector

• Focus is on finding defects
  – A defect is a defect

Page 51: Software Testing

Defect Log

Severity codes: H (high), M (medium), L (low), Q (question)

#   Severity   Location   Description
1   L          7          Did not initialize variable f
2   L          7          Poor choice of variable name, f
3   M          9          Check for fopen errors
4
5
6

Page 52: Software Testing

Defect Logging

• Severity is High, Medium, Low, or Question
• Brief description should be ~7 words or less, or just enough for the owner to understand
• If possible, resolve questions: defect or not
• Also log defects found in
  – The parent document, e.g. requirements
  – The common errors list
  – Work product guidelines

• Will be up to the work owner whether or not to fix a defect

Page 53: Software Testing

Causal Analysis Meeting

• We won’t hold these, but in general they are a good idea

• Purpose
  – Brainstorming session on the root cause of specific defects
  – This meeting supports continuous process improvement
  – Initiates thinking and action about the most common or severe defects
  – Can help prevent future defects from occurring
    • Specific action items may be assigned to achieve this goal

Page 54: Software Testing

Rework

• Purpose: Address defects found during the logging process

• Rules
  – Performed by the product owner
  – All defects must be addressed
    • Does not mean they are fixed, but that sufficient analysis/action has taken place
  – All defects found in any other documents should be recorded
  – Owner should keep a work log

Page 55: Software Testing

Follow-Up

• Purpose: verify resolution of defects
  – Work product is redistributed for review
  – Inspection team can re-inspect, or assign a few inspectors to review
  – Unfixed defects are reported to the team and discussed to resolution

Page 56: Software Testing

Inspection Summary

• Systematic process due to standardized checklists and roles

• Requires preparation by all participants, the moderator, and the scribe

• Provides a way to formally measure software quality and over time can provide data on hidden defects

• Allows participants to learn from each other and improve the inspection process for the future

• Powerful technique almost no matter how it begins