
Page 1

Using error reports in SPI

Tor Stålhane

IDI / NTNU

Page 2

How to use error reports

Several SPI concepts – e.g. PMA (post mortem analysis) – rely on the participants’ memory.

Important general question: “Do the information sources provide consistent information about a project?”

Error reports from the project can be used to refresh the participants’ memory

Page 3

Info consistency

People are

• good at identifying and collecting data => which data are important

• not so good at analyzing these data => what does the data mean

But

People perform better when they can base their analysis on concrete categories

Page 4

Data collection

Most people:

• focus on the start and end of a project but forget most of what goes on in between

• put emphasis on
– single, concrete events
– personal experience

but ignore statistical data

Page 5

Using error reports – 1

Important info from error reports:

• the development activity assigned to the error => link errors to problem areas from the affinity and RCA (root cause analysis) diagrams

• the error’s correction cost => used to prioritize

A useful, concrete categorization must connect each error to an identified project activity.
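As a minimal sketch of this idea in Python – assuming a hypothetical record format with an activity and a correction-cost field, not an actual DTS export – prioritization could look like:

# Group error reports by the development activity they were assigned to,
# then rank activities by total correction cost. Record format is illustrative.
from collections import defaultdict

error_reports = [
    {"id": 1, "activity": "requirements", "correction_cost": 8.0},
    {"id": 2, "activity": "design", "correction_cost": 2.5},
    {"id": 3, "activity": "requirements", "correction_cost": 5.0},
]

cost_per_activity = defaultdict(float)
for report in error_reports:
    cost_per_activity[report["activity"]] += report["correction_cost"]

# Highest total correction cost first => candidate problem areas
for activity, cost in sorted(cost_per_activity.items(), key=lambda kv: -kv[1]):
    print(activity, cost)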

Page 6

Using error reports – 2

• Use problem causes from affinity diagrams and RCAs.

• Prioritize the problems using
– the participants’ opinion => experience
– the priorities from the error reports => data

We can then improve the process based on info that is

• concrete

• partly objective

Page 7

Only PMA

[Figure: experience (Exp.) drawn from all possible project info feeds the PMA; via RCA and affinity diagrams this yields improvement needs and opportunities, which lead to the improvement plan.]

Page 8

PMA plus data from error reports

[Figure: the same flow as on page 7, but real-world info in the form of user error reports is added. The errors are grouped into error categories (e.g. ODC), giving distributions and priorities that, together with experience, feed into the improvement needs and opportunities and the improvement plan.]

Page 9

Summing up

We should do

• PMA to get an overview of problem areas

• Error report analysis in order to get the priorities right. We can use error categories from
– the PMA
– the literature, e.g. ODC

Page 10

Error reports as a source for SPI

Tor Stålhane

Jingyue Li, Jan M.N. Kristiansen

IDI / NTNU

Page 11

Goals

Business:

How can we reduce the cost of corrective maintenance?

Research:

What are the main cost drivers for corrective maintenance?

Page 12

Company A – 1

Company A is a software company with only one product.

The product is deployed on more than 50 operating systems and hardware platforms.

The company has 700 employees – 400 are developers and testers.

Page 13

Company A – 2

Via the company’s DTS – Defect Tracking System – the company collected:

• Defect ID, priority, severity and report creation date
• Defect summary and detailed description
• Who found the defect and when
• Estimated correction time
• When the defect was corrected
• Detailed comments and work log of the person who corrected the defect
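One way to picture such a record is as a small data class. A Python sketch – the field names are illustrative assumptions, not the company’s actual schema:

# Illustrative model of a defect record in company A's DTS.
from dataclasses import dataclass
from datetime import date

@dataclass
class DefectReport:
    defect_id: str
    priority: int
    severity: int
    created: date
    summary: str
    description: str
    found_by: str
    found_on: date
    estimated_correction_time: float  # hours
    corrected_on: date
    correction_log: str  # comments and work log of the corrector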

Page 14

What we did in company A

We improved the company’s DTS based on the IBM concept of orthogonal defect classification – ODC.

Based on a study of earlier reported defects, the classification scheme was modified to fit the company’s needs.

Page 15

DTS system enhancements at company A

Added or revised attributes | Value of the attributes
Effort to fix | Time-consuming: more than one person-day of effort. Easy: less than one person-day of effort.
Qualifier | Missing; Incorrect; or Extraneous
Fixing type | Extended the ODC “type” attributes to reflect the actual defect correction activities of the company.
Root cause | Project entities, such as requirements, design, and documentation, which should be done better to prevent the defect from occurring earlier.
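The added attributes map naturally onto small enumerations. A Python sketch using the one person-day threshold from the table (the names are illustrative):

# Sketch of company A's added DTS attributes.
from enum import Enum

class EffortToFix(Enum):
    EASY = "less than one person-day"
    TIME_CONSUMING = "more than one person-day"

class Qualifier(Enum):
    MISSING = "missing"
    INCORRECT = "incorrect"
    EXTRANEOUS = "extraneous"

def classify_effort(person_days: float) -> EffortToFix:
    # One person-day is the threshold from the table above.
    return EffortToFix.EASY if person_days < 1.0 else EffortToFix.TIME_CONSUMING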

Page 16

Company B – 1

Company B is a software house that develops business-critical systems, primarily for the banking and finance sector.

Most of their projects have a fixed price and a fixed delivery date.

The company has 800 developers and testers.

Page 17

Company B – 2

Via the company’s DTS the company collected, among other things:

• Defect ID, priority, severity and report creation date

• Defect summary and detailed description

• Who found the defect and when

• Who tested the correction and how

Page 18

What we did in company B

Just as for company A, we improved the company’s DTS based on a study of earlier reported defects.

In addition, the changes enabled us to collect data that could later be used for software process improvement.

Page 19

DTS system enhancements at company B

Added or revised attributes | Value of the attributes
Effort to fix | Classify effort to reproduce and fix defects as “simple” – less than 20 minutes, “medium” – between 20 minutes and 4 hours, and “extensive” – more than 4 hours.
Defect type | Introduced a new set of attributes, adapted to the way the developers and testers wanted to classify defects.
Root cause | Project entities, such as requirements, design, development, and documentation, which can be done better to prevent the defect earlier.
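Company B’s thresholds – 20 minutes and 4 hours, from the table – can be sketched the same way:

# Sketch: classify company B's reproduce-and-fix effort, given in minutes.
def classify_effort(minutes: float) -> str:
    if minutes < 20:
        return "simple"
    if minutes <= 4 * 60:
        return "medium"
    return "extensive"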

Page 20

Data collection

In both companies, we collected defect data after the new DTS had been in use for six months. Only defect reports that had all their fields filled in were used for analysis.

This gave us:

• Company A – 810 defects

• Company B – 688 defects
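The “all fields filled in” rule is a simple filter. A Python sketch, assuming reports are key-value records where an empty field is None or an empty string:

# Keep only defect reports where every field has a value.
defect_reports = [
    {"id": "A-1", "severity": 2, "summary": "crash on save"},
    {"id": "A-2", "severity": None, "summary": ""},  # incomplete: dropped
]

def is_complete(report: dict) -> bool:
    return all(value not in (None, "") for value in report.values())

usable = [r for r in defect_reports if is_complete(r)]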

Page 21

Data analysis – 1

Our goal was to identify cost drivers for the most expensive defects. Thus, we split the defects into categories depending on reported “time to fix”.

• Company A – two groups: “easy to fix” and “time consuming”

• Company B – three groups: “simple”, “medium” and “extensive”. The “simple” and “medium” defects were combined into one group – “simple” – to be compatible with company A
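Making company B’s groups comparable with company A’s two groups is then a one-line lookup; a sketch:

# "medium" is merged into "simple" for compatibility with company A.
B_TO_COMPARABLE = {"simple": "simple", "medium": "simple", "extensive": "extensive"}
group = B_TO_COMPARABLE["medium"]  # -> "simple"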

Page 22

Data analysis – 2

We identified the most important root causes for the costly fixes through qualitative analysis.

• For both companies we had “correction type” and “cause”.

• We also found important information in the
– developer discussions (company A)
– test descriptions (company B)

Page 23

Root causes for high effort fixes – A

Number of high-effort defects in each business unit:

Reasons for the costly debugging | Core | B2C | B2B
Hard to determine the location of the defect | 20 | 37 | 4
Implemented functionality was new or needed a heavy rewrite | 13 | 29 | 2
Long clarification (discussion) of the defect | 19 | 5 | 0
The original fix introduced new defects / multiple fixes | 13 | 9 | 0
Others (documentation is incorrect or communication is bad) | 2 | 0 | 0
Reasons not clear | 3 | 16 | 0
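A table like this can be produced directly from the classified reports, e.g. by counting (reason, business unit) pairs. A Python sketch with illustrative records:

# Count high-effort defects per (reason, business unit).
from collections import Counter

high_effort_defects = [
    ("hard to locate", "Core"),
    ("hard to locate", "B2C"),
    ("new or heavy rewrite", "B2B"),
]

counts = Counter(high_effort_defects)
print(counts[("hard to locate", "Core")])  # -> 1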

Page 24

Root causes for company A

The most important root causes for costly defects:
• Hard to determine the defect’s location
• Implemented functionality was new or needed a heavy rewrite
• Long, costly discussion on whether the defect report really was a defect or just misuse of the system

Page 25

Root causes for high effort fixes – B

Number of defects in each effort group:

Root cause attribute | Simple/medium | Extensive | Sum
Functional defect | 9 | 2 | 11
Wrong test data in build | 77 | 4 | 81
Bad specification | 89 | 12 | 101
Bad test environment | 9 | 1 | 10
Development problems | 317 | 57 | 374
Sum | 501 | 76 | 577
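As a quick check of the 91% figure quoted on the next slide, assuming “high effort” means the “extensive” column: bad specification and development problems together give (12 + 57) / 76 = 69 / 76 ≈ 0.91.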

Page 26

Root causes for company B

Bad specifications and development problems account for 91% of the high-effort defects.

If we consider the sub-categories defined for these two root causes, we find that 70% of all correction costs are due to:

• Errors in business logic

• Unclear requirements

• Problems in graphical interface

Page 27

Important maintenance factors – 1

Several published studies have identified the following important maintenance cost factors:

• Maintainers’ experience with system and application domain

• System size and complexity

• The development activity where the defect is discovered

• Tool and process support

Page 28

Important maintenance factors – 2

• System size and complexity – large, complex systems make it difficult to
– analyze the system to decide where the defect stems from – company A
– decide how to fix the defect – company A
– find the right solution to implement – companies A, B

Page 29

Important maintenance factors – 3

• Maintainers’ low experience with the system and application domain causes
– need for heavy rewrites of functionality – company A
– development problems, e.g. with business logic and user interface – company B

Page 30

Important maintenance factors – 4

ISO 9126 defines maintainability as:
• Analyzability
• Changeability
• Stability
• Testability

The main high-cost maintenance factors – size and complexity – fit well into this model.

However, the model focuses on software characteristics and ignores the influence of the developers’ knowledge and experience.

Page 31

Conclusions – 1

Important data sources when analyzing maintainability are:

• Resources needed for fixing

• Defect typology, e.g. ODC

• Qualitative data such as test description and developer discussions

The effort model should be updated regularly, based on the defect profile and project environment.

Page 32

Conclusions – 2

There is no “best estimator” for corrective maintenance.

Important factors are

• Software characteristics – e.g. as defined in ISO 9126

• Staff characteristics – e.g. knowledge and experience