
Lectura-Modelos de Calidad


1 Bibliography.

Reading summary based on the following core bibliography:

1) Garvin, D. A., "What Does 'Product Quality' Really Mean?", Sloan Management Review, no. 1, pp. 25-43, 1984.

2) McCall, J. A., Richards, P. K., and Walters, G. F., "Factors in Software Quality", National Technical Information Service, Vols. 1-3, 1977.

3) Boehm, B. W., Brown, J. R., Kaspar, H., Lipow, M., McLeod, G., and Merritt, M., Characteristics of Software Quality, North Holland, 1978.

4) Boehm, B. W., Brown, J. R., and Lipow, M., "Quantitative Evaluation of Software Quality", Proceedings of the 2nd International Conference on Software Engineering, 1976.

5) ISO, International Organization for Standardization, "ISO/IEC 9126-1:2001, Software engineering - Product quality - Part 1: Quality model", 2001.

6) CMMI for Development, Version 1.3 (CMMI-DEV, V1.3), CMU/SEI-2010-TR-033 / ESC-TR-2010-033, Software Engineering Institute, 2010.

2 Reading Summary: Quality Models.

Concepts and Principles

What is Quality?

Quality is an elusive concept. Most people can recognize quality and give examples of

quality, but have trouble supplying a clear, unambiguous definition of the term. In everyday

discourse this usually isn't a problem because language is very tolerant of ambiguity.


However, before the quality goals of a project can be specified, managed, and assured, we

need a good working definition of the term quality.

This first section explores different perspectives on quality in general. These perspectives

provide a foundation for the definition of software quality in particular presented in the next

section.

The following are the most common perspectives on quality [Garvin, Crosby]:

1. Transcendent

2. User-Based

3. Conformance to Requirements*

4. Product-Based

5. Value-Based

Figure 2 shows a conceptual depiction of each one of these perspectives on quality.

______________________

*Garvin refers to conformance to requirements as the manufacturing-based view because it is the view preferred by those responsible for the production phase of a manufacturing process. The term

conformance to requirements is used here because it is more descriptive of the software development environment where products are developed rather than manufactured.

______________________

Figure 2. Perspectives on quality.


Transcendent. The transcendent view of quality equates quality with "innate excellence".

According to this perspective, quality can't be described with words, it can only be

understood through experience. The transcendent view of quality relies on perception and

experience but isn't completely subjective. For example, wine tasting experts generally

agree on the relative merits of different vintages of wine which suggests that the activity

isn't completely subjective. Robert Pirsig takes on the topic of quality in Zen and the Art of

Motorcycle Maintenance and concludes, "Quality is not objective... It doesn't reside in the

material world....Quality is not subjective...It doesn't reside merely in the mind....Quality is

neither a part of mind, nor is it a part of matter. It is a third entity which is independent of

the two." [Pirsig]

The transcendent view of quality is a poor basis from which to develop an operational

definition of quality. The most obvious problem is measurability. There is no precise way

of quantifying innate excellence. The transcendent view of quality is better suited to

products that appeal to the senses. Software is practical and utilitarian and therefore not

well suited to the transcendent view of quality.

User-Based. The user-based view of quality is certainly more down-to-earth than the

transcendent view. According to the user-based view of quality, quality is the satisfaction

of user wants or needs. If product A satisfies more user wants and needs than product B,

then product A has higher quality than product B.

There are two potential problems with this definition of quality though. The first problem is

that of designing a product that simultaneously meets the needs of a diverse group of users.

For example, the popular image editing program Photoshop is used by average consumers,

professional photographers, and high-end animation studios. It would be difficult to design

a product that simultaneously meets the needs of this diverse group of customers. What

might be satisfying to one customer might be completely unacceptable to another.

Figure 3. Quality is only one component of user satisfaction


The second problem with this perspective is more fundamental. User satisfaction is based

on an aggregate of product factors--only one of which is quality. (See figure 3.) Just

because a user is satisfied with a product, you can't conclude it is exclusively because of

high quality. A user might be satisfied with a low quality product if the other factors that

influence satisfaction are high enough to compensate for the dissatisfaction attributable to

low quality. To say that user satisfaction is a good measure of product quality is to ignore

the other factors. For example, there is a robust market for classic Jaguar automobiles from

the early 90's. This suggests that these cars are providing user satisfaction. However,

Jaguars of this vintage are notorious for electrical and mechanical problems. These cars

might be meeting the overall needs of their buyers, but even their most enthusiastic

admirers would freely admit that they are not high quality. More likely, it is a combination

of product features including their visceral beauty and style that make them appealing in

spite of their reputation for poor quality. (In fact, for some owners it's possible that the

sense of adventure that comes from driving a low quality unpredictable automobile adds to

their appeal.)

There is certainly value in having a definition of quality that defines quality relative to user

wants and needs. However, before this perspective can serve as a basis for quality

management, there has to be some way of measuring the extent to which product quality is

contributing to user satisfaction independent of other product features that might also be

influencing overall user satisfaction.

Conformance to requirements. Quality as conformance to requirements is a popular

operational definition of quality. Philip Crosby, respected author and evangelist of quality,

is probably the best-known advocate of this interpretation [Crosby, Quality is Free, p. 17].

It is also the definition used in the well-known ISO 9000 quality standard.

When quality is defined as conformance to requirements, there is no ambiguity about what

quality is. A quality product is one that conforms to specified requirements and design.

According to this definition of quality, a meal at McDonalds is high quality as long as it

conforms to the specifications of the franchise. By the same token a meal at the Four

Seasons could be considered of lesser quality if, for example, the salad fork isn't chilled.

If this definition of quality bothers you it's probably because you are equating quality with

luxury or goodness [Crosby]. Viewed in this way, a Lexus has more quality than a

Hyundai. A Rolex watch has more quality than a Timex watch. This is a common

interpretation of quality but it's incompatible with the interpretation of quality as


conformance to requirements. Quality as conformance to requirements is absolute. Quality

can't be both conformance to requirements and luxury or goodness. The source of the

confusion is in the use of the same term for two different concepts. A manufacturer may plan a product with a certain level of quality (luxury or goodness). In order to deliver on this plan, the manufacturer will need to maintain high standards of quality (conformance to requirements) during production. Quality as conformance to requirements is a good operational definition of quality because manufacturers of all types of products can specify

quality goals and control progress towards the accomplishment of these goals.

Another potentially confusing aspect of defining quality as "conformance to requirements"

is that it seems to disregard the needs of the user. Quality as conformance to requirements

doesn't discount the importance of meeting the needs of the user, it just assumes that these

needs will be completely expressed in the requirements. This puts the onus on the

requirements process to capture the true requirements with accuracy and precision.

The principal advantage of defining quality as conformance to requirements is that it

simplifies the production or implementation phase of the product life cycle. It doesn't make

it any easier to identify the right set of product features that will lead to customer

satisfaction, but it does make it easier to manage and control the production or

implementation phase of the product life cycle.

When quality is defined as satisfaction of user needs the whole life cycle is working toward

subjective hard-to-measure goals which makes the whole process hard to control. When

quality is defined as conformance to requirements, the requirements phase isn't any easier

to control, but the design and production phase are working toward an objective easier-to-

measure goal--conformance to written requirements. The advantage of defining quality as

conformance to requirements is that it makes it possible to define quality goals in specific

terms and then measure and control progress towards these goals.

Conformance to requirements is the definition of quality preferred by manufacturers because

it provides an objective easy-to-measure goal for quality during the production phase of the

product life cycle.

Product-Based. With the product-based approach to quality, differences in quality are

measured by differences in the quantity of some ingredient or attribute of the product. For

example, one way to measure the quality of fabric is by thread count. Fabrics with a higher

thread count are usually considered more desirable.


In practice, the quality of a product is often a function of multiple attributes, some more

important than others. For example, three attributes that can be used to measure the quality

of a digital camera are resolution, light sensitivity and frame rate. These aren't the only

attributes by which the quality of a digital camera can be measured and how much each

attribute contributes to overall product quality will be different for different buyers.

Camera Quality = W_Resolution * Resolution + W_Sensitivity * Light Sensitivity + W_FrameRate * Frame Rate

The challenge when defining product quality using this definition is determining the

attributes and their weights. If the choice of attributes and weights is subjective, there may

be no choice of attributes and weights that is optimal for all customers.
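The product-based view can be made concrete with a small sketch. The following Python fragment (attribute names, values, and weights are hypothetical, not taken from the reading) computes a weighted quality score and shows how the same product scores differently for buyers who weight the attributes differently.

```python
# A rough sketch of the product-based view of quality: a weighted sum of normalized
# attribute values. The attribute names, values, and weights below are hypothetical.

def product_quality(attributes, weights):
    """Return a weighted quality score in [0, 1] from normalized attribute values."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights should sum to 1"
    return sum(weights[name] * value for name, value in attributes.items())

# A digital camera rated on three normalized attributes (0 = worst, 1 = best).
camera = {"resolution": 0.8, "light_sensitivity": 0.6, "frame_rate": 0.9}

# Different buyers weight the attributes differently, so the score differs too.
studio_buyer = {"resolution": 0.6, "light_sensitivity": 0.3, "frame_rate": 0.1}
sports_buyer = {"resolution": 0.2, "light_sensitivity": 0.3, "frame_rate": 0.5}

print(product_quality(camera, studio_buyer))  # ~0.75
print(product_quality(camera, sports_buyer))  # ~0.79
```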

The section on Software Quality Models in this chapter describes various models for

defining software quality in terms of product attributes.

Value-Based. The value-based approach to quality considers value to be an aspect of

quality. Adding features to a product and controlling its defects, beyond a certain point, will

likely increase its cost. Higher cost means less value. Tying value to quality means that as

the cost of a product goes up, its perceived quality goes down. According to the value-

based approach to quality, a quality product is one that has the features the customer

desires, limited defects (if any), and is offered at a price that is acceptable to the customer.

This approach to quality explains why, when shopping for flooring, linoleum starts to look

better than tile after seeing the price of each.

In practice, this view of quality confuses product quality with desire for a product. A

customer may marvel at the quality of a pair of shoes, but upon seeing the price may lose interest in purchasing them. The quality of the shoes hasn't changed, but the customer's desire for owning a pair diminishes because they don't represent good value for the

customer.

The relationship between software quality and cost is explored in the section titled Cost of

Quality.

When planning for software quality it's important to keep in mind the hierarchical nature of

these factors [Humphrey 2005, p 136]. A relevant analogy is Abraham Maslow's theory of

human motivation. Maslow's popular theory of motivation posits that humans are driven by


a hierarchy of needs. Certain lower level needs including safety and security have to be met

before higher level needs such as those for loving and belonging can be addressed. Once a

need is satisfied, it no longer motivates and the next higher level need takes its place.

Figure 7. The primary factors that influence users'

perception of software quality change as quality improves.

There is an analogous hierarchy of factors that drive users' perceptions of software quality.

Ease of use doesn't mean much if the frequency of defects prevents the software from being

used at all. Lower level quality factors have to be addressed before higher level factors

become relevant.

Correctness. Quality starts with understanding the real problem and delivering a

correct solution. If users ask for a mortgage calculator and they are given a bond trading

system, the system won’t be used and any other quality factors don't matter.

System requirements are a mix of functional and non-functional requirements. The quality

of a software system is measured by the degree to which functional as well as non-

functional requirements are met. Correctness is the degree to which stated and implied

functional requirements are met. The word implied makes it clear that quality isn't strictly

conformance to written requirements. If a desired feature isn't implemented because it's not

included in the requirements, it's a deficiency of the requirements process that also

diminishes the overall quality of the product.


Reliability and Availability. Beyond correctness, users look for reliability and

availability. A mortgage calculator isn't effective unless it is available when needed and

able to operate reliably for the duration it is needed. The frequency and severity of defects

determines the reliability and availability of a software system. For most software

systems, zero defects is an impossible goal. If there are any defects, they should not interfere

with the normal operation of the system. This usually means no severe defects and a

limited number of minor ones.

Usability. Usability refers to the ease of use or user friendliness of an application.

Usability is the measure of effort required to learn and use the application. A mortgage

calculator is easy to use if users understand the meaning of each field and can easily

calculate the mortgage for different scenarios.

Proficiency. Proficiency addresses the needs of power users. It is a measure of how

well the software enables work to be done quickly and efficiently. A mortgage calculator

that responds quickly (performance) and has short-cuts for frequently used operations is

proficient.

Enjoyment. Products at this level achieve an emotional appeal that goes beyond

mere functionality. The best examples of companies with products at this level are: Apple,

BMW and Hermes. Imagining an enjoyable mortgage calculator requires a stretch of the

imagination, but an enjoyable mortgage calculator is one that you enjoy using because it

makes you feel good. The feeling might come from a coherent design style, superior

craftsmanship and/or attention to detail.

In addition to being hierarchical, another subtle distinction among software quality factors

is that some lead to satisfaction while others lead to dissatisfaction. Furthermore, these two

sets of factors are mostly disjoint. Lower level quality factors such as correctness,

availability and reliability have to be present in order to avoid negative feelings

(dissatisfaction), whereas high-level factors such as usability, proficiency and enjoyment are needed to create a positive impression (satisfaction). Users expect correctness and reliability; they are delighted when they get proficiency and enjoyment.


Figure 8. Quality factors causing satisfaction are not the same as those causing

dissatisfaction
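As a minimal sketch of this hierarchy, assuming a simple threshold model that is not part of the reading, the following fragment returns the factors most likely to drive a user's perception: the first unmet lower-level factor dominates, and the higher-level satisfiers only come into play once the lower levels are met.

```python
# Minimal sketch of the quality-factor hierarchy (threshold model assumed for
# illustration): lower-level factors must be met before higher-level ones matter.

HIERARCHY = ["correctness", "reliability_availability", "usability", "proficiency", "enjoyment"]
SATISFIERS = {"usability", "proficiency", "enjoyment"}

def dominant_factors(scores, threshold=0.7):
    """Return the factors currently driving the user's perception of quality."""
    for factor in HIERARCHY:
        if scores.get(factor, 0.0) < threshold:
            return [factor]          # this gap causes dissatisfaction
    return sorted(SATISFIERS)        # lower levels met; the satisfiers now matter

print(dominant_factors({"correctness": 0.9, "reliability_availability": 0.4}))
# ['reliability_availability'] -> ease of use means little while the product keeps failing
```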

Cost of Quality

Everyone is for higher quality, but the reality is that expenditures on quality improvement

initiatives are economically justified only when the expected savings from higher quality is

greater than the initial upfront investment it takes to achieve higher quality. The cost of

quality is both a metric and economic model for understanding the cost-benefit trade-offs of

pursuing higher levels of quality. It can be used at the start of a project to justify an

investment in quality improvement or at the conclusion of a project to measure the

outcomes of such expenditures.

Quality is one of the four variables of a project that has to be balanced against the other three, which are cost, schedule, and features. The following conceptual formula expresses the

general relationship between these four variables:

Cost * Schedule = Features * Quality

The formula suggests that improving quality requires some combination of higher cost,

longer schedule, or fewer features. Holding schedule and features constant, quality

correlates with cost:

Quality = constant * Cost

In practice the true relationship between quality and cost is not so simple. In some cases it's

possible to increase quality without increasing costs. To understand why, consider that cost

represents not only the money spent on quality improvement (mainly defect prevention and


early reviews), but also the money not spent on late stage testing and rework when defects

are avoided through additional quality improvement efforts. In other words:

Cost-of-Quality = (cost of defect prevention and early reviews) + (cost of late stage testing

and rework)

Increasing the first category of costs (prevention) tends to reduce the second category

(rework). When the amount saved by avoiding defects rises faster than the amount invested

in order to avoid these defects, quality is free*.

________________________

*Philip Crosby was so convinced of this dynamic that he titled one of his books Quality is Free, lest anyone miss the main point, which was

that the major costs of quality are the costs associated with poor quality. Crosby maintained that, as the voluntary costs of defect prevention are

increased, the involuntary costs of poor quality (mainly rework) decrease by an even greater amount thus making quality free [Crosby].

________________________
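As a rough sketch of this accounting (all figures are invented for illustration), quality is "free" whenever the rework avoided exceeds the extra spent on prevention and early reviews:

```python
# Illustrative sketch of the cost-of-quality trade-off. All amounts are invented.

def cost_of_quality(prevention_and_early_reviews, late_testing_and_rework):
    return prevention_and_early_reviews + late_testing_and_rework

baseline = cost_of_quality(prevention_and_early_reviews=20_000, late_testing_and_rework=120_000)
improved = cost_of_quality(prevention_and_early_reviews=50_000, late_testing_and_rework=60_000)

investment = 50_000 - 20_000   # extra spent on prevention and early reviews
savings = 120_000 - 60_000     # late testing and rework avoided

print(baseline, improved)      # 140000 110000
print(savings > investment)    # True -> the quality improvement more than paid for itself
```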

How much does quality cost? Is quality free? Is there a point of diminishing returns,

beyond which an investment in quality is no longer cost-justified? The cost of quality

model helps to answer these questions by modeling the relationship between quality and

costs*.

_____________________

*When discussing the cost of quality, quality is defined as freedom from defects. It explicitly doesn't mean luxury or features. More luxury and

features almost always means higher cost.

_____________________

Formally the cost of quality is the sum of conformance costs and non-conformance costs.

Conformance costs further breakdown into prevention and appraisal costs. Conformance

costs are the costs associated with attaining quality. These are the voluntary investment

costs for things like defect prevention and early reviews. Non-conformance costs are the

costs of dealing with defects. This includes things like debugging and rework resulting

from exposed defects. These are involuntary costs and a source of potential savings if

defects can be reduced. Non-conformance costs may be internal failure costs (cost of errors

found before the product is released), and external failure costs (cost of errors found once

the product has been released). Figure 16 shows the taxonomic breakdown of these costs

and figure 17 shows when these costs generally occur during the software life cycle.


Figure 16. Taxonomy of Software Quality Costs

Figure 17. Relative Timing of Cost of Quality Expenses

The following is a closer look at these four categories of costs. They are also summarized in

table 2 below.

Prevention costs - These are the costs associated with preventing defects from ever

occurring. If you can prevent errors from occurring you can avoid the expense of finding

and fixing them (not to mention the wasted effort spent making the error in the first place).

Prevention costs include the costs of training, early reviews, quality planning, root cause

analysis, baseline configuration management, and process improvement initiatives.

Appraisal costs - These are the costs associated with analyzing and testing the product to

make sure it conforms to requirements. The most common appraisal costs are the costs for

testing, inspections and quality control.

Internal failure costs - These are the costs associated with fixing defects found prior to

product release. This includes the cost of rework, retesting, and the administrative overhead

of pre-release defect management. It may also include the cost of updating documentation,

additional reviews and inspections.

External failure costs - These are the costs associated with fixing defects found after the

product has been released. The process of diagnosing and fixing an external failure is much

more complicated and involved than fixing an internal failure so costs are higher. There are

the regular expenses that go along with internal failures, plus additional expenses that come

with field defects. Less tangible external failure costs are the costs associated with possible


damage to reputation and the loss of future sales. External failure costs may also include the

unpredictable costs of litigation.

Category | Definition | Example
Prevention | Costs associated with preventing defects. | Training, early reviews, quality planning, tools, process improvement initiatives.
Appraisal | Costs associated with analyzing and testing the product to ensure it conforms to specifications. | Inspections, testing, audits, quality control.
Internal Failure | Costs associated with fixing defects found prior to release. | Repair, retesting, updating documentation.
External Failure | Costs associated with fixing defects found after release. | Technical support, defect reporting and tracking, field updates, loss of future sales.

Table 2. Software Quality Cost Categories

The cost of quality is the sum of all conformance and non-conformance costs. The cost of

quality is a real expense that shows up indirectly in the budget of every project or directly

when the cost of quality is being tracked. It is also an expense that can be controlled by the

project manager. In order to control the cost of quality the project manager must understand

the complex relationship between conformance costs and non-conformance costs.

There are three notable aspects of the relationship between conformance and non-

conformance costs. First, the two categories of costs tend to be inversely related. Spending

more on conformance should reduce non-conformance costs. Spending less on

conformance will likely increase non-conformance costs. Second, conformance costs are

voluntary while non-conformance costs are involuntary. The project manager decides how

much to spend on conformance, and this in turn determines how much will be spent

involuntarily on non-conformance. Third, conformance costs represent an investment that

must be justified by even larger anticipated savings in non-conformance costs. It is an

investment that usually has a point of diminishing returns beyond which spending more on

conformance won't reduce failure costs enough to justify the initial expense and risk.

The project manager is responsible for determining the optimal amount to spend on

conformance activities such that the overall cost of quality (conformance and non-

conformance costs) is minimal. Projects vary in size and budget so the amount won't be an

absolute number but rather some portion of the total budget. The optimal cost of quality is


usually expressed as the cost per good unit of product produced or as a percentage of total

development costs.

The problem of finding the optimal cost of quality is made more difficult by the fact that

the optimal balance between conformance and non-conformance costs depends on the

characteristics of the project and the maturity of the development process being used. There

isn't one precise answer, but there are models for understanding in general the forces that

affect the optimal balance in different environments. Figure 18 shows two such models.

Figure 18.a shows the traditional cost of quality model and figure 18.b shows the

contemporary one. The models depict the general relationship between costs and quality for

different types of projects.

Figure 18.a. Traditional Cost of Quality Model

Figure 18.b. Contemporary Cost of Quality Model


Both models show the costs of quality in terms of conformance and non-conformance as

quality increases (the x axis). In both models the total cost of quality is the sum of

conformance and non-conformance costs. Also in both models, non-conformance costs

drop to 0 as quality reaches 100 % or zero defects. The models differ in their assumptions

about the amount of investment needed in conformance to achieve perfect quality. In the

traditional model, a sharp increase in conformance cost is needed to achieve perfect quality.

The contemporary model shows a more modest increase in conformance costs as quality

rises, thus making it possible to have both perfect quality and minimal cost.

"Quality is free but perfection you have to pay for."

Which model is correct? They are both correct, for the project environments they depict.

The conventional cost of quality model in figure 18.a is appropriate for project

environments where the cost of failure is low and the effort required to achieve zero defects

increases rapidly as product quality approaches 100%. For example, this model accurately

depicts the cost of removing all of the errors from a large daily newspaper. The cost of an

error is minimal and the cost of removing every single error is prohibitively expensive. It

could take as much time to find the last single error as it took to find all of the other errors

combined. The economic impact of a reader finding an error isn't worth the extraordinary

effort it would take to find all the errors. To paraphrase Crosby, the traditional cost of

quality model seems to be saying that quality is free but perfection you have to pay for.

The contemporary cost of quality model in figure 18.b is more appropriate in environments

where:

1. The cost of failure is high, which justifies increased spending on conformance.

2. More effective development and appraisal methods (usually automated) are available, which reduces the cost of conformance. More effective development methods reduce the total number of errors, and more effective appraisal methods reduce the cost of finding and removing the errors that are introduced. In some cases complete testing may be both feasible and economical.

As an example, consider the development process for microprocessors. The cost of putting

a defective microprocessor into production is very high, thus justifying increased spending

on conformance. Modern production processes limit the number and types of errors

introduced during development, and automated testing tools in most cases make it


economical to perform complete testing during appraisal. This reduces the cost of

conformance.
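The difference between the two models can also be sketched numerically. In the fragment below the curve shapes are assumptions chosen only to show how the low point of the total cost of quality shifts; they are not calibrated to real project data.

```python
# Illustrative cost-of-quality curves (shapes assumed, not calibrated). Quality is a
# fraction in [0, 1); total cost = conformance cost + non-conformance cost.

def traditional_total_cost(q):
    # Conformance cost rises sharply as quality approaches 100%.
    return 10 / (1 - q) + 100 * (1 - q)

def contemporary_total_cost(q):
    # Conformance cost rises only modestly with quality.
    return 40 * q + 100 * (1 - q)

levels = [i / 100 for i in range(50, 100)]
print(min(levels, key=traditional_total_cost))   # ~0.68: optimum falls short of perfection
print(min(levels, key=contemporary_total_cost))  # 0.99: optimum near zero defects
```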

So, which model is most appropriate when the environment is software development?

Again, it depends on the specific project, but for the average software development project

the traditional cost of quality model is probably more appropriate. Not all of the trends that

are making the contemporary cost of quality model more applicable in manufacturing apply

when the product is software.

The reason for this rather pessimistic assessment is that first, software development is a

human-intensive activity. Humans are fallible, so it follows that software will always have

defects as long as humans are in the loop. Second, all but the most trivial software systems

are too complex to completely test. There will always be some inputs that haven't been

tested. The difficulty of preventing all defects and the inability to test for all inputs,

suggests that for most software products the goal of zero defects is either impossible or

prohibitively expensive. This implies that for most software projects there is a point of

diminishing returns beyond which improvements in quality are not cost-justified.

The above conclusion is mainly a theoretical result--and not a very original one at that.

Most software practitioners are well aware that for most software systems complete testing

is impractical. What would be more useful is some way of predicting the point of

diminishing returns for a software system. Looking at figure 18.a it appears that the rising

cost of conformance (principally appraisal costs) is what causes the total cost of quality

curve to reverse course and head upward. However, a rapid rise in conformance costs

doesn't by itself determine the low point in the cost of quality curve. The cost of quality is

conformance costs plus non-conformance costs. If the cost of a failure is high it can justify

high, even exponentially increasing, conformance costs. For example, figure 19 shows the

cost of achieving ultra high reliability with the primary flight control software on the Space

Shuttle. As you might expect, developing near perfect code for the Space Shuttle is an order

of magnitude more expensive than developing what might be considered very reliable

software (1 defect per KSLOC). In spite of the high conformance costs necessary to

achieve near-perfect code, the total cost of quality is still minimal because the cost of

failure is even greater. The high cost of a defect in the Space Shuttle software in terms of

human life, vehicle expense, and national pride can justify large expenditures on

conformance. In general, a rapid rise in appraisal costs alone isn't enough to trigger a point

of diminishing returns. Applications that have very high reliability requirements can justify

very high, even exponentially increasing, conformance costs.


Figure 19. Cost of ultra-high reliability in space shuttle software

[Image is from: Herb Krasner, Using the Cost of Quality Approach for Software, Crosstalk,

Nov. 1998. Original data is from: Krasner, H., "A Case History of the NASA Space Shuttle

Onboard Systems Project," SEMATECH Technology Transfer Report 94092551A-TR,

Oct. 31, 1994]

As the cost of failure rises, increased spending on conformance is justified and the point of

diminishing returns in figure 18.a shifts to the right. In other words, a high cost of failure

can justify large investments in quality improvement. When calculating the cost of failure

the non-obvious costs are often overlooked. These include:

1. The intangible costs of failure, including loss of good will and future sales.

2. The user's cost of failure. Factoring in the user's cost of failure may justify even

higher investments in conformance.

The point of diminishing returns is also shifted right by improvements in the standard

development process. With an improved development process fewer defects will be

injected and conformance activities will be more efficient and effective. This makes higher

quality more economical.

Software Quality Models

Models are common tools in software development. There are software life-cycle models

(spiral, waterfall, etc.), process improvement models (CMMI, ISO 9000, etc.), and cost

estimation models (COCOMO, SLIM, etc.). Models are used to manage complexity. They

provide an abstraction of a system or phenomenon from a particular perspective for a

particular purpose. This section introduces another type of software development model--

the software quality model. Software quality models provide a framework for expressing


and measuring software quality as a collection of desirable attributes (the product-based

perspective on quality mentioned above). Just as software life-cycle models help manage

the software development process by suggesting activities and their timing, software quality

models help manage the specification and measurement of quality in software systems by

specifying quality characteristics, their relationships, and associated metrics.

This section presents three well-known quality models:

1. McCall's quality model

2. Boehm's quality model

3. The ISO/IEC 9126 Product Quality Standard

These models can be used as-is or as a starting point for creating project-specific or

organization-specific software quality models.

All three models have a common hierarchical structure, which is illustrated in figure 9. High

level quality factors represent external quality characteristics of interest to users, customers

and managers. The models relate these high-level quality factors to lower-level subfactors

that represent attributes of the software or production process. These subfactors are more

meaningful to developers and technical staff. In order to quantify the presence of each

factor in a system, metrics are defined for all subfactors. It may also be possible to define

direct metrics for some higher level quality factors.

Figure 9. Framework for quality models

McCall's Quality Model


McCall's quality model is shown in figure 10. It groups software quality characteristics

according to the main phases in the software life cycle. The high-level categories of quality

characteristics are:

1. Product operation - quality characteristics related to using the product.

2. Product revision - quality characteristics related to maintaining the product in its

current environment.

3. Product transition - quality characteristics related to porting the product to a new

environment.

Figure 10. McCall's Quality Model

Like most software quality models the high level quality factors in McCall's model

generally represent external emergent system properties. Emergent system properties are

system characteristics not attributable to specific elements of the system but rather those

that emerge from the structure and interaction of individual system elements. Low level

quality factors in the model refer to more tangible internal properties of the system.

McCall's model links high-level quality characteristics of interest to end users to low-level

system properties that developers understand. More generally, it provides a vocabulary of

quality factors that users and managers can use when setting the quality goals of a project.

For example, end users may decide that usability is an important quality requirement. When

quality requirements are stated in terms of high-level quality factors, it's not always clear to


technical staff what specific product properties are needed in order to accomplish the

desired system qualities. McCall's quality model breaks down high-level quality factors

into their constituent components or quality criteria. These low level quality criteria are

more familiar to technical staff and represent product properties that can be built directly

into work products.

Besides suggesting quality characteristics and their relationships, McCall's model--and the

others that will be examined--also suggest metrics for measuring the degree to which

quality characteristics are present. McCall's model breaks down high-level quality factors

that are difficult to measure into low-level detailed quality criteria which are more tangible

and measurable. For each quality criterion, McCall suggests one or more metrics for

measuring the degree to which the system possesses or exhibits the associated quality

criteria. For example, storage efficiency may have the associated metrics: (1) total disk

space required, and (2) average internal memory required while the program is running.

Every quality criterion has at least one metric, but metrics may also be associated with some

high level quality factors. For example, mean time to failure (MTTF) might be used to

measure reliability.

One of the key features of McCall's model is that it provides a quantitative measure of

quality. Metric values are expressed in quantitative terms and the relationship between

quality criteria and quality factors is formally expressed with a linear equation of the form:

QF_a = c_1*m_1 + c_2*m_2 + ... + c_n*m_n

Where:

QF_a is the degree to which quality factor a is present.

c_i is a constant that denotes the relative importance of the measurement m_i to quality factor a. This constant may be determined by using regression analysis on historical data from similar projects.

m_i is a metric value associated with quality factor QF_a.

As an example, the quality criterion "self-descriptiveness", which is associated with the quality factor maintainability, might have the metric "number of modules with standard headers". If 9 out of 10 modules have a standard header, and historical data suggests that this metric has a 30% influence on the quality factor maintainability, the formula becomes:

QF_maintainability = ... + 0.30 * (9/10)


Note that the number of modules with standard headers is divided by the total number

of modules. Metric values should be normalized to remove any bias related to the size of

the product.
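A small sketch of this computation follows; the metric names and coefficients are hypothetical, and in McCall's approach the coefficients would come from regression on historical data.

```python
# Sketch of McCall's linear equation QF_a = c_1*m_1 + ... + c_n*m_n.
# Metric names and coefficients below are hypothetical.

def quality_factor(metrics, coefficients):
    """Combine normalized metric values (0..1) using their relative-importance weights."""
    return sum(coefficients[name] * metrics[name] for name in coefficients)

maintainability_metrics = {
    "modules_with_standard_headers": 9 / 10,  # normalized by the total number of modules
    "modules_with_descriptive_names": 0.7,    # hypothetical second metric
}
maintainability_coefficients = {
    "modules_with_standard_headers": 0.30,
    "modules_with_descriptive_names": 0.70,
}

print(quality_factor(maintainability_metrics, maintainability_coefficients))  # ~0.76
```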

The formal relationship between metrics and quality factors is captured in the equation's

coefficients (ci in the equation above). McCall showed that for a limited domain significant

correlation could be established between predicted quality and actual quality. However, it's not clear that a quality model calibrated for one domain is generalizable to other domains.

Like other software models calibrated from empirical data, the accuracy of McCall's model

depends heavily on tuning and using the model on projects that are similar in type.

One of the ways McCall's software quality model is distinguished from the others that are

presented here is that it defines software broadly to include not only code but also

requirements and design. A few of the quality characteristics in McCall's model are limited

to code (e.g., efficiency and instrumentation), but most are general enough to apply to code

and other pre-code artifacts such as requirements and design. For example, traceability and

completeness are two desirable characteristics of requirements and design documents.

Boehm's Quality Model

Boehm's quality model is shown in figure 11. It has the same basic categories of quality

characteristics found in McCall's model. In Boehm's model, overall system quality (called

general utility in the model) is the aggregation of portability, usability (called "as is utility"

in the model), and maintainability.

Figure 11. Boehm's Quality Model


Like McCall's software quality model, Boehm's quality model is hierarchical with metrics

defined for lower-level quality attributes. Boehm's model is similar to McCall's but does

have a few notable differences. One obvious difference is its factoring of quality

characteristics. As an example, in McCall's model maintainability and testability are at the

same level which means in McCall's model a product can be maintainable without being

testable and vice versa. In Boehm's model, testability is a subfactor of maintainability, which means an increase in testability implies an increase in maintainability. By itself this isn't necessarily an important distinction, but it does point out the difficulty of having a

definitive quality model.

Another way Boehm's model is different from McCall's model is that Boehm's model

applies only to code. Boehm's quality model includes 151 metrics which provide

measurements of 12 primitive quality characteristics. Each metric is expressed in the form

of a question. Stated in this way the metrics form a checklist of good programming

practice. Table 1 shows a partial checklist for the quality characteristic self-descriptiveness.

Self-descriptiveness is the extent to which a program or module contains enough

information for the reader to understand the function of the module and its dependencies.

Self-descriptiveness is a measure of a product's testability and understandability.

1. Does each program module contain a header block of commentary which describes program

name, purpose, modification history, and assumptions?

2. Are the functions of the modules as well as inputs/outputs adequately defined to allow module

testing?

3. Where there is module dependence, is it clearly specified by commentary, program

documentation, or inherent program structure?

4. Are variable names descriptive of the physical or functional property represented?

5. Do functions contain adequate descriptive information (i.e., comments) so that the purpose of

each is clear?

Table 1. Partial checklist for the quality characteristic self-descriptiveness in Boehm's quality model

By expressing quality metrics in the form of checklists, the metrics used for quality

measurement can also be used for code reviews and general process documentation and

improvement. The metrics developed in Boehm's original work are for Fortran programs,

making them mostly outdated by today's standards. However, the quality characteristics and

their factorings as shown in figure 11 above are just as applicable today as when they were

proposed in 1978, demonstrating that the principles of quality are timeless.
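A minimal sketch of how such checklist metrics could be scored follows; the questions paraphrase Table 1, and the equal-weight scoring rule is an assumption rather than part of Boehm's model.

```python
# Sketch of Boehm-style checklist metrics: each metric is a yes/no question, and the
# characteristic's score is taken here as the fraction answered "yes" (assumed rule).

SELF_DESCRIPTIVENESS_CHECKLIST = [
    "Does each module have a header block describing name, purpose, history, and assumptions?",
    "Are module functions and inputs/outputs defined well enough to allow module testing?",
    "Is module dependence clearly specified?",
    "Are variable names descriptive of the property represented?",
    "Do functions contain adequate comments so that the purpose of each is clear?",
]

def checklist_score(answers):
    """answers: one boolean per checklist question, in order."""
    return sum(answers) / len(answers)

print(checklist_score([True, True, False, True, True]))  # 0.8
```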


The ISO/IEC 9126 Product Quality Standard

ISO/IEC 9126 is an international standard for defining and measuring software product

quality. It comes from the International Organization for Standardization (ISO), which is also the publisher of the popular ISO 9000 series of process quality standards (ISO 9000, ISO 9001, ISO 9000-3). ISO/IEC 9126 is a software product quality standard designed to complement

the well-known process quality standards offered by the ISO.

ISO 9126 isn't remarkably different from the other software quality models discussed here,

but it is distinguished in some important ways. First, it is an internationally recognized

standard and a safe choice for any organization looking for a standard way to define and

measure product quality. In the 1980s there was a popular saying that nobody got fired for

buying an IBM computer. Today, it can be said that no one gets fired for choosing to follow

ISO standards. A universal standard provides a common language and framework for

specifying and measuring software product quality. Having a common definition of

software quality makes it easier to compare the quality of one product with another. Just as

some organizations rely on internationally recognized ISO process quality standards to

ensure a certain level of process quality, ISO 9126 can be used to ensure a certain level of

product quality. Second, the ISO has shown a willingness to revise the standard and keep it

up-to-date. Quality characteristics are generally timeless, but the metrics associated with

quality characteristics depend on current technology and product environments. Since being

introduced in 1991, the ISO 9126 standard has gone through four significant revisions or

updates.

The original version of ISO 9126 introduced in 1991 was rather sparse. The original title of

the standard, "Information technology -- Software product evaluation -- Quality

characteristics and guidelines for their use", included almost as much information as the

actual standard. As the title suggests the original standard offered quality characteristics

and a process for applying them. The standard was essentially 6 high level quality

characteristics and their definitions. The standard claimed that the six quality characteristics

were sufficient to represent any aspect of software quality. The model was limited because

it didn't offer a standard factoring or taxonomy of quality characteristics or associated

metrics. Sub-characteristics and metrics were suggested in an annex to the standard but they

weren't part of the official standard. Overall the standard identified the elements of a

software quality model for specifying and measuring software product quality, but fell short

of offering a complete and comprehensive model.


Figure 12 shows the evolution of the ISO 9126 standard since its introduction in 1991. The

original standard is now represented in two separate sets of standards. The ISO 9126-x

series represents the expanded quality model, and the ISO 14598-x series details a process

model for evaluating quality.

The new standard has evolved significantly from its roots. The two main changes are (1)

suggested quality sub-characteristics that were informative in the 1991 standard are now

normative, and (2) an expanded set of suggested metrics has been added for measuring

quality characteristics from multiple perspectives.

Figure 12. Evolution of ISO/IEC 9126

ISO 9126-1 contains the original quality model and its extensions. The quality sub-

characteristics that were specified in an annex in the original standard have been modified

slightly and are now part of the official standard. Figure 13 shows the full taxonomy of

quality characteristics in the ISO 9126-1 standard. It also illustrates the connection to the

other two related standards for external and internal metrics which are defined in ISO 9126-

2 and ISO 9126-3, respectively.


Figure 13. ISO 9126 Quality Model

The quality model in ISO 9126-1 is the foundation for the other three parts of the ISO 9126

standard which define metrics and techniques for measuring quality from three different

perspectives: internal quality, external quality, and quality in use. Figure 14 shows the

relationship between these three perspectives.

Figure 14. Quality Perspectives

ISO 9126-2 defines external metrics for measuring sub-characteristics in the base quality

model. These are measures of quality from the perspective of the external behavior of the

product at its interface. The difference between external quality metrics and quality in use


metrics is that external quality metrics measure the external behavior of the product while it

is running in a simulated environment. Quality in use metrics measure effects on actual end

users while the product is running in its production environment. The difference is

analogous to the difference between system testing a product with fabricated data and

acceptance testing with real user data in the presence of actual users.

ISO 9126-3 defines internal metrics for measuring sub-characteristics in the base quality

model. Some quality characteristics can only be measured with internal product metrics.

For example, the maintainability of a solution isn't visible from a product's external

interface. Internal quality metrics are typically measurements of static structural properties

of the implementation or intermediate forms of the implementation.

ISO 9126-4 defines quality in use metrics that measure the results of using the system in a

specific environment. Quality in use is a measure of quality from the user's perspective

while using the product in its environment. Quality in use measures the ease and extent to

which users can accomplish their goals. As figure 15 shows, quality in use metrics are

based on the smaller set of user-oriented quality characteristics: effectiveness, productivity,

safety, and satisfaction.

Figure 15. ISO 9126-4 Quality Model

The same basic quality characteristic from the ISO 9126-1 model can be measured in three

separate ways depending on the perspective taken. For example, consider the quality

characteristic efficiency. Internal efficiency can be measured by analyzing the asymptotic

complexity of the algorithms used in the implementation. In terms of efficiency, an O(n)

algorithm is more efficient or has higher internal quality than an O(n²) algorithm. External

efficiency can be measured by simulation. If a product performs well in a simulated

environment it is of high quality. From the quality in use perspective, program efficiency

indirectly influences productivity and satisfaction. The product has sufficient quality if

performance doesn't negatively impact the productivity or satisfaction of end users.
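The three perspectives on the same characteristic can be summarized in a small sketch; the metric names and values below are assumed for illustration.

```python
# Illustrative sketch: the same characteristic (efficiency) measured from the three
# ISO 9126 perspectives. Metric names and values are assumed.

efficiency = {
    "internal":       {"algorithm_complexity": "O(n)"},             # static property of the code
    "external":       {"avg_response_time_ms": 180},                # behavior in a simulated environment
    "quality_in_use": {"tasks_per_hour": 42, "satisfaction": 4.1},  # effect on real users in production
}

for perspective, metrics in efficiency.items():
    print(perspective, metrics)
```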


The ISO 9126 standard defines sub-characteristics and associated metrics but allows for

application-specific quality characteristics and metrics. It would be difficult to mandate a

specific closed set of quality characteristics or metrics because they are heavily dependent

on implementation technology and the application domain. The ISO 9126 standard also

doesn't define an absolute measure of quality because, again, the relative importance of

each characteristic is application-dependent. For example, reliability is more important for

mission critical applications; efficiency for time-critical applications; and usability for

interactive applications.

Recent updates make the ISO 9126 standard one of the most current. At the very least it

offers another taxonomy of quality characteristics and metrics to consider. The standard

will be of special interest to organizations that are already using ISO process standards.

3 Description of the ISO 9126 Model

3.1.1 Functionality

1. Suitability: The capability of the software product to provide an appropriate set of functions for specified user tasks and objectives.

2. Accuracy: The capability of the software product to provide the right or agreed-upon results or effects with the needed degree of precision.

3. Interoperability: The capability of the software product to interact with one or more specified systems.

4. Security: The capability of the software product to protect information and data so that unauthorized persons or systems cannot read or modify them, while access is not denied to authorized persons or systems.

5. Functionality Compliance: The capability of the software product to adhere to standards, conventions, or regulations in laws and similar prescriptions related to functionality.


3.1.2 Reliability

1. Maturity: The capability of the software product to avoid failure as a result of faults in the software.

2. Fault Tolerance: The capability of the software product to maintain a specified level of performance in cases of software faults or of infringement of its specified interfaces.

3. Recoverability: The capability of the software product to re-establish a specified level of performance and recover the data directly affected in the case of a failure.

4. Reliability Compliance: The capability of the software product to adhere to standards, conventions, or regulations relating to reliability.

3.1.3 Usability

1. Understandability: The capability of the software product to enable the user to understand whether the software is suitable, and how it can be used for particular tasks and conditions of use.

2. Learnability: The capability of the software product to enable the user to learn its application.

3. Operability: The capability of the software product to enable the user to operate and control it.

4. Attractiveness: The capability of the software product to be attractive to the user.

5. Usability Compliance: The capability of the software product to adhere to standards, conventions, style guides, or regulations relating to usability.

3.1.4 Efficiency

1. Time Behavior: The capability of the software product to provide appropriate response times, processing times, and throughput rates under stated conditions.

2. Resource Utilization: The capability of the software product to use appropriate amounts and types of resources when the software performs its function under stated conditions.

3. Efficiency Compliance: The capability of the software product to adhere to standards or conventions relating to efficiency.

3.1.5 Maintainability

1. Analyzability: The capability of the software product to be diagnosed for deficiencies or causes of failures in the software, or for the parts to be modified to be identified.

2. Changeability: The capability of the software product to enable a specified modification to be implemented.

3. Stability: The capability of the software product to avoid unexpected effects from modifications of the software.

4. Testability: The capability of the software product to enable modified software to be validated.

5. Maintainability Compliance: The capability of the software product to adhere to standards or conventions relating to maintainability.

3.1.6 Portability

1. Adaptability: The capability of the software product to be adapted to different specified environments without applying actions or means other than those provided for this purpose by the software itself.

2. Installability: The capability of the software product to be installed in a specified environment.

3. Coexistence: The capability of the software product to coexist with other independent software in a common environment, sharing common resources.

4. Replaceability: The capability of the software product to be used in place of another software product, for the same purpose, in the same environment.

5. Portability Compliance: The capability of the software product to adhere to standards or conventions relating to portability.
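The characteristics and subcharacteristics above can be collected into a simple data structure, which can be a convenient starting point for a project-specific quality checklist:

```python
# The ISO 9126-1 characteristics and subcharacteristics listed in section 3, arranged
# as a plain dictionary.

ISO_9126_1 = {
    "Functionality":   ["Suitability", "Accuracy", "Interoperability", "Security",
                        "Functionality Compliance"],
    "Reliability":     ["Maturity", "Fault Tolerance", "Recoverability",
                        "Reliability Compliance"],
    "Usability":       ["Understandability", "Learnability", "Operability",
                        "Attractiveness", "Usability Compliance"],
    "Efficiency":      ["Time Behavior", "Resource Utilization", "Efficiency Compliance"],
    "Maintainability": ["Analyzability", "Changeability", "Stability", "Testability",
                        "Maintainability Compliance"],
    "Portability":     ["Adaptability", "Installability", "Coexistence", "Replaceability",
                        "Portability Compliance"],
}

for characteristic, subcharacteristics in ISO_9126_1.items():
    print(f"{characteristic}: {', '.join(subcharacteristics)}")
```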

4 Capability Maturity Model Integration (CMMI)

Tying It All Together

Now that you have been introduced to the components of CMMI

models, you need to understand how they fit together to meet your

process improvement needs. This chapter introduces the concept of

levels and shows how the process areas are organized and used.

CMMI-DEV does not specify that a project or organization must follow

a particular process flow or that a certain number of products be

developed per day or specific performance targets be achieved. The

model does specify that a project or organization should have

processes that address development related practices. To determine

whether these processes are in place, a project or organization maps

its processes to the process areas in this model.

The mapping of processes to process areas enables the organization

to track its progress against the CMMI-DEV model as it updates or

creates processes. Do not expect that every CMMI-DEV process area

will map one to one with your organization’s or project’s processes.
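
As an illustration of this mapping exercise, the hypothetical sketch below records which of an organization's own processes cover which process areas and reports the targeted areas that remain uncovered. The process names and the choice of targeted areas are invented for the example; CMMI-DEV prescribes no particular format for this bookkeeping.

    # Hypothetical bookkeeping: which organizational processes cover which
    # CMMI-DEV process areas (process names are invented for this example).
    process_to_areas = {
        "Release planning procedure": ["PP", "REQM"],
        "Weekly status review":       ["PMC"],
        "Peer review checklist":      ["VER"],
        "Build and baseline policy":  ["CM"],
    }

    # Process areas this (hypothetical) improvement effort is targeting.
    targeted_areas = {"PP", "PMC", "REQM", "CM", "MA", "PPQA"}

    covered = {area for areas in process_to_areas.values() for area in areas}
    gaps = targeted_areas - covered

    print("Covered process areas:", sorted(covered))
    print("Not yet addressed:", sorted(gaps))   # -> ['MA', 'PPQA']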

Understanding Levels

Levels are used in CMMI-DEV to describe an evolutionary path

recommended for an organization that wants to improve the

processes it uses to develop products or services. Levels can also be

the outcome of the rating activity in appraisals [1]. Appraisals can apply

to entire organizations or to smaller groups such as a group of

projects or a division.

[1] For more information about appraisals, refer to the Appraisal Requirements for CMMI and the Standard CMMI Appraisal Method for Process Improvement Method Definition Document [SEI 2011a, SEI 2011b].


CMMI supports two improvement paths using levels. One path

enables organizations to incrementally improve processes

corresponding to an individual process area (or group of process

areas) selected by the organization. The other path enables

organizations to improve a set of related processes by incrementally

addressing successive sets of process areas.

These two improvement paths are associated with the two types of

levels: capability levels and maturity levels. These levels correspond

to two approaches to process improvement called “representations.”

The two representations are called “continuous” and “staged.” Using

the continuous representation enables you to achieve “capability

levels.” Using the staged representation enables you to achieve

“maturity levels.”

To reach a particular level, an organization must satisfy all of the

goals of the process area or set of process areas that are targeted for

improvement, regardless of whether it is a capability or a maturity

level.

Both representations provide ways to improve your processes to

achieve business objectives, and both provide the same essential

content and use the same model components.

Structures of the Continuous and Staged Representations

Figure 3.1 illustrates the structures of the continuous and staged

representations. The differences between the structures are subtle but

significant. The staged representation uses maturity levels to

characterize the overall state of the organization’s processes relative

to the model as a whole, whereas the continuous representation uses

capability levels to characterize the state of the organization’s

processes relative to an individual process area.


[Figure 3.1: Structure of the Continuous and Staged Representations. Components shown in both representations: process areas, specific goals and specific practices, generic goals and generic practices; the continuous representation adds capability levels, while the staged representation adds maturity levels.]

What may strike you as you compare these two representations is

their similarity. Both have many of the same components (e.g.,

process areas, specific goals, specific practices), and these

components have the same hierarchy and configuration.

What is not readily apparent from the high-level view in Figure 3.1 is

that the continuous representation focuses on process area capability

as measured by capability levels and the staged representation

focuses on overall maturity as measured by maturity levels. This

dimension (the capability/maturity dimension) of CMMI is used for

benchmarking and appraisal activities, as well as guiding an

organization’s improvement efforts.

Capability levels apply to an organization’s process improvement

achievement in individual process areas. These levels are a means

for incrementally improving the processes corresponding to a given

process area. The four capability levels are numbered 0 through 3.


Maturity levels apply to an organization’s process improvement

achievement across multiple process areas. These levels are a

means of improving the processes corresponding to a given set of

process areas (i.e., maturity level). The five maturity levels are

numbered 1 through 5.

Table 3.1 compares the four capability levels to the five maturity

levels. Notice that the names of two of the levels are the same in both

representations (i.e., Managed and Defined). The differences are that

there is no maturity level 0; there are no capability levels 4 and 5; and

at level 1, the names used for capability level 1 and maturity level 1

are different.

Table 3.1 Comparison of Capability and Maturity Levels

Level     Continuous Representation     Staged Representation
          (Capability Levels)           (Maturity Levels)
Level 0   Incomplete
Level 1   Performed                     Initial
Level 2   Managed                       Managed
Level 3   Defined                       Defined
Level 4                                 Quantitatively Managed
Level 5                                 Optimizing

The continuous representation is concerned with selecting both a

particular process area to improve and the desired capability level for

that process area. In this context, whether a process is performed or

incomplete is important. Therefore, the name “Incomplete” is given to

the continuous representation starting point.

The staged representation is concerned with selecting multiple

process areas to improve within a maturity level; whether individual

processes are performed or incomplete is not the primary focus.

Therefore, the name “Initial” is given to the staged representation

starting point.

Both capability levels and maturity levels provide a way to improve the

processes of an organization and measure how well organizations

can and do improve their processes. However, the associated

approach to process improvement is different.

Understanding Capability Levels

To support those who use the continuous representation, all CMMI

models reflect capability levels in their design and content.


The four capability levels, each a layer in the foundation for ongoing

process improvement, are designated by the numbers 0 through 3:

0. Incomplete

1. Performed

2. Managed

3. Defined

A capability level for a process area is achieved when all of the

generic goals are satisfied up to that level. The fact that capability

levels 2 and 3 use the same terms as generic goals 2 and 3 is

intentional because each of these generic goals and practices reflects

the meaning of the capability levels of the goals and practices. (See

the Generic Goals and Generic Practices section in Part Two for more

information about generic goals and practices.) A short description of

each capability level follows.
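
Before turning to those descriptions, the cumulative rule just stated can be made concrete: a process area sits at the highest capability level whose generic goals, together with all lower ones, are satisfied. The sketch below encodes that rule; the input format (a set of satisfied generic-goal numbers, with generic goal 1 standing for achievement of the specific goals) is an assumption made for the illustration, not notation defined by the model.

    # Cumulative rule for the continuous representation: a process area is at the
    # highest capability level n (1-3) for which generic goals GG1..GGn are all
    # satisfied; otherwise it is at level 0 (Incomplete). GG1 here stands for
    # achievement of the process area's specific goals.
    CAPABILITY_NAMES = {0: "Incomplete", 1: "Performed", 2: "Managed", 3: "Defined"}

    def capability_level(satisfied_generic_goals: set) -> int:
        level = 0
        for gg in (1, 2, 3):
            if gg in satisfied_generic_goals:
                level = gg
            else:
                break
        return level

    # Example: GG1 and GG2 satisfied, GG3 not yet -> capability level 2 (Managed).
    print(CAPABILITY_NAMES[capability_level({1, 2})])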

Capability Level 0: Incomplete

An incomplete process is a process that either is not performed or is

partially performed. One or more of the specific goals of the process

area are not satisfied and no generic goals exist for this level since

there is no reason to institutionalize a partially performed process.

Capability Level 1: Performed

A capability level 1 process is characterized as a performed process.

A performed process is a process that accomplishes the needed work

to produce work products; the specific goals of the process area are

satisfied.

Although capability level 1 results in important improvements, those

improvements can be lost over time if they are not institutionalized.

The application of institutionalization (the CMMI generic practices at

capability levels 2 and 3) helps to ensure that improvements are

maintained.

Capability Level 2: Managed

A capability level 2 process is characterized as a managed process. A

managed process is a performed process that is planned and

executed in accordance with policy; employs skilled people having

adequate resources to produce controlled outputs; involves relevant

stakeholders; is monitored, controlled, and reviewed; and is evaluated

for adherence to its process description.

The process discipline reflected by capability level 2 helps to ensure

that existing practices are retained during times of stress.

Capability Level 3: Defined

A capability level 3 process is characterized as a defined process. A

defined process is a managed process that is tailored from the


organization’s set of standard processes according to the

organization’s tailoring guidelines; has a maintained process

description; and contributes process related experiences to the

organizational process assets.

A critical distinction between capability levels 2 and 3 is the scope of

standards, process descriptions, and procedures. At capability level 2,

the standards, process descriptions, and procedures can be quite

different in each specific instance of the process (e.g., on a particular

project). At capability level 3, the standards, process descriptions, and

procedures for a project are tailored from the organization’s set of

standard processes to suit a particular project or organizational unit

and therefore are more consistent, except for the differences allowed

by the tailoring guidelines.

Another critical distinction is that at capability level 3 processes are

typically described more rigorously than at capability level 2. A defined

process clearly states the purpose, inputs, entry criteria, activities,

roles, measures, verification steps, outputs, and exit criteria. At

capability level 3, processes are managed more proactively using an

understanding of the interrelationships of the process activities and

detailed measures of the process and its work products.

Understanding Maturity Levels

To support those who use the staged representation, all CMMI

models reflect maturity levels in their design and content. A maturity

level consists of related specific and generic practices for a predefined

set of process areas that improve the organization’s overall

performance.

The maturity level of an organization provides a way to characterize

its performance. Experience has shown that organizations do their

best when they focus their process improvement efforts on a

manageable number of process areas at a time and that those areas

require increasing sophistication as the organization improves.

A maturity level is a defined evolutionary plateau for organizational

process improvement. Each maturity level matures an important

subset of the organization’s processes, preparing it to move to the

next maturity level. The maturity levels are measured by the

achievement of the specific and generic goals associated with each

predefined set of process areas.

The five maturity levels, each a layer in the foundation for ongoing

process improvement, are designated by the numbers 1 through 5:

1. Initial

2. Managed

3. Defined

4. Quantitatively Managed


5. Optimizing

Remember that maturity levels 2 and 3 use the same terms as

capability levels 2 and 3. This consistency of terminology was

intentional because the concepts of maturity levels and capability

levels are complementary. Maturity levels are used to characterize

organizational improvement relative to a set of process areas, and

capability levels characterize organizational improvement relative to

an individual process area.

Maturity Level 1: Initial

At maturity level 1, processes are usually ad hoc and chaotic. The

organization usually does not provide a stable environment to support

processes. Success in these organizations depends on the

competence and heroics of the people in the organization and not on

the use of proven processes. In spite of this chaos, maturity level 1

organizations often produce products and services that work, but they

frequently exceed the budget and schedule documented in their

plans.

Maturity level 1 organizations are characterized by a tendency to

overcommit, abandon their processes in a time of crisis, and be

unable to repeat their successes.

Maturity Level 2: Managed

At maturity level 2, the projects have ensured that processes are

planned and executed in accordance with policy; the projects employ

skilled people who have adequate resources to produce controlled

outputs; involve relevant stakeholders; are monitored, controlled, and

reviewed; and are evaluated for adherence to their process

descriptions. The process discipline reflected by maturity level 2 helps

to ensure that existing practices are retained during times of stress.

When these practices are in place, projects are performed and

managed according to their documented plans.

Also at maturity level 2, the status of the work products is visible to

management at defined points (e.g., at major milestones, at the

completion of major tasks). Commitments are established among

relevant stakeholders and are revised as needed. Work products are

appropriately controlled. The work products and services satisfy their

specified process descriptions, standards, and procedures.

Maturity Level 3: Defined

At maturity level 3, processes are well characterized and understood,

and are described in standards, procedures, tools, and methods. The

organization’s set of standard processes, which is the basis for

maturity level 3, is established and improved over time. These

standard processes are used to establish consistency across the

organization. Projects establish their defined processes by tailoring

the organization’s set of standard processes according to tailoring


guidelines. (See the definition of “organization’s set of standard

processes” in the glossary.)

A critical distinction between maturity levels 2 and 3 is the scope of

standards, process descriptions, and procedures. At maturity level 2,

the standards, process descriptions, and procedures can be quite

different in each specific instance of the process (e.g., on a particular

project). At maturity level 3, the standards, process descriptions, and

procedures for a project are tailored from the organization’s set of

standard processes to suit a particular project or organizational unit

and therefore are more consistent except for the differences allowed

by the tailoring guidelines.

Another critical distinction is that at maturity level 3, processes are

typically described more rigorously than at maturity level 2. A defined

process clearly states the purpose, inputs, entry criteria, activities,

roles, measures, verification steps, outputs, and exit criteria. At

maturity level 3, processes are managed more proactively using an

understanding of the interrelationships of process activities and

detailed measures of the process, its work products, and its services.

At maturity level 3, the organization further improves its processes

that are related to the maturity level 2 process areas. Generic

practices associated with generic goal 3 that were not addressed at

maturity level 2 are applied to achieve maturity level 3.

Maturity Level 4: Quantitatively Managed

At maturity level 4, the organization and projects establish quantitative

objectives for quality and process performance and use them as

criteria in managing projects. Quantitative objectives are based on the

needs of the customer, end users, organization, and process

implementers. Quality and process performance is understood in

statistical terms and is managed throughout the life of projects.

For selected subprocesses, specific measures of process

performance are collected and statistically analyzed. When selecting

subprocesses for analyses, it is critical to understand the relationships

between different subprocesses and their impact on achieving the

objectives for quality and process performance. Such an approach

helps to ensure that subprocess monitoring using statistical and other

quantitative techniques is applied where it has the most overall

value to the business. Process performance baselines and models

can be used to help set quality and process performance objectives

that help achieve business objectives.

A critical distinction between maturity levels 3 and 4 is the

predictability of process performance. At maturity level 4, the

performance of projects and selected subprocesses is controlled

using statistical and other quantitative techniques, and predictions are

based, in part, on a statistical analysis of fine-grained process data.
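
As a purely illustrative example of the kind of quantitative technique this level relies on, the sketch below computes a simple process performance baseline (mean and three-sigma control limits) for one subprocess measure and flags an out-of-control observation. The sample data, the chosen measure, and the use of three-sigma limits are assumptions made for the sketch, not requirements of the model.

    # Illustrative process performance baseline for one subprocess measure
    # (say, peer review effort in hours per inspected page). Sample data and
    # three-sigma limits are assumptions for this sketch, not model requirements.
    from statistics import mean, stdev

    samples = [1.8, 2.1, 1.9, 2.4, 2.0, 2.2, 1.7, 2.3]

    baseline_mean = mean(samples)
    baseline_sd = stdev(samples)
    ucl = baseline_mean + 3 * baseline_sd            # upper control limit
    lcl = max(0.0, baseline_mean - 3 * baseline_sd)  # lower control limit

    print(f"baseline mean={baseline_mean:.2f}  UCL={ucl:.2f}  LCL={lcl:.2f}")

    new_observation = 3.4
    if not lcl <= new_observation <= ucl:
        print("Out-of-control signal: look for a special cause of variation.")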


Maturity Level 5: Optimizing

At maturity level 5, an organization continually improves its processes

based on a quantitative understanding of its business objectives and

performance needs. The organization uses a quantitative approach to

understand the variation inherent in the process and the causes of

process outcomes.

Maturity level 5 focuses on continually improving process

performance through incremental and innovative process and

technological improvements. The organization’s quality and process

performance objectives are established, continually revised to reflect

changing business objectives and organizational performance, and

used as criteria in managing process improvement. The effects of

deployed process improvements are measured using statistical and

other quantitative techniques and compared to quality and process

performance objectives. The project’s defined processes, the

organization’s set of standard processes, and supporting technology

are targets of measurable improvement activities.

A critical distinction between maturity levels 4 and 5 is the focus on

managing and improving organizational performance. At maturity level

4, the organization and projects focus on understanding and

controlling performance at the subprocess level and using the results

to manage projects. At maturity level 5, the organization is concerned

with overall organizational performance using data collected from

multiple projects. Analysis of the data identifies shortfalls or gaps in

performance. These gaps are used to drive organizational process

improvement that generates measurable improvement in

performance.

Advancing Through Maturity Levels

Organizations can achieve progressive improvements in their maturity

by achieving control first at the project level and continuing to the

most advanced level—organization-wide performance management

and continuous process improvement—using both qualitative and

quantitative data to make decisions.

Since improved organizational maturity is associated with

improvement in the range of expected results that can be achieved by

an organization, maturity is one way of predicting general outcomes of

the organization’s next project. For instance, at maturity level 2, the

organization has been elevated from ad hoc to disciplined by

establishing sound project management. As the organization achieves

generic and specific goals for the set of process areas in a maturity

level, it increases its organizational maturity and reaps the benefits of

process improvement. Because each maturity level forms a necessary

foundation for the next level, trying to skip maturity levels is usually

counterproductive.

At the same time, recognize that process improvement efforts should

focus on the needs of the organization in the context of its business


environment and that process areas at higher maturity levels can

address the current and future needs of an organization or project.

For example, organizations seeking to move from maturity level 1 to

maturity level 2 are frequently encouraged to establish a process

group, which is addressed by the Organizational Process Focus

process area at maturity level 3. Although a process group is not a

necessary characteristic of a maturity level 2 organization, it can be a

useful part of the organization’s approach to achieving maturity level

2.

This situation is sometimes characterized as establishing a maturity

level 1 process group to bootstrap the maturity level 1 organization to

maturity level 2. Maturity level 1 process improvement activities may

depend primarily on the insight and competence of the process group

until an infrastructure to support more disciplined and widespread

improvement is in place.

Organizations can institute process improvements anytime they

choose, even before they are prepared to advance to the maturity

level at which the specific practice is recommended. In such

situations, however, organizations should understand that the success

of these improvements is at risk because the foundation for their

successful institutionalization has not been completed. Processes

without the proper foundation can fail at the point they are needed

most—under stress.

A defined process that is characteristic of a maturity level 3

organization can be placed at great risk if maturity level 2

management practices are deficient. For example, management may

commit to a poorly planned schedule or fail to control changes to

baselined requirements. Similarly, many organizations prematurely

collect the detailed data characteristic of maturity level 4 only to find

the data uninterpretable because of inconsistencies in processes and

measurement definitions.

Another example of using processes associated with higher maturity

level process areas is in the building of products. Certainly, we would

expect maturity level 1 organizations to perform requirements

analysis, design, product integration, and verification. However, these

activities are not described until maturity level 3, where they are

defined as coherent, well-integrated engineering processes. The

maturity level 3 engineering process complements a maturing project

management capability put in place so that the engineering

improvements are not lost by an ad hoc management process.

Table 3.2 provides a list of CMMI-DEV process areas and their

associated categories and maturity levels.


Table 3.2 Process Areas, Categories, and Maturity Levels

Process Area                                        Category              Maturity Level
Causal Analysis and Resolution (CAR)                Support               5
Configuration Management (CM)                       Support               2
Decision Analysis and Resolution (DAR)              Support               3
Integrated Project Management (IPM)                 Project Management    3
Measurement and Analysis (MA)                       Support               2
Organizational Process Definition (OPD)             Process Management    3
Organizational Process Focus (OPF)                  Process Management    3
Organizational Performance Management (OPM)         Process Management    5
Organizational Process Performance (OPP)            Process Management    4
Organizational Training (OT)                        Process Management    3
Product Integration (PI)                            Engineering           3
Project Monitoring and Control (PMC)                Project Management    2
Project Planning (PP)                               Project Management    2
Process and Product Quality Assurance (PPQA)        Support               2
Quantitative Project Management (QPM)               Project Management    4
Requirements Development (RD)                       Engineering           3
Requirements Management (REQM)                      Project Management    2
Risk Management (RSKM)                              Project Management    3
Supplier Agreement Management (SAM)                 Project Management    2
Technical Solution (TS)                             Engineering           3
Validation (VAL)                                    Engineering           3
Verification (VER)                                  Engineering           3
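
To connect Table 3.2 back to the staged representation, the sketch below encodes the maturity-level assignments from the table and applies the basic staged rule: an organization reaches maturity level N only when every process area assigned to levels 2 through N satisfies its goals. The per-area ratings are invented for the example, and appraisal details such as equivalent staging are deliberately ignored.

    # Staged-representation sketch: maturity level N is reached only when every
    # process area assigned to levels 2..N in Table 3.2 satisfies its goals.
    # Ratings are invented; appraisal details (e.g., equivalent staging) are ignored.
    AREA_MATURITY_LEVEL = {
        "CM": 2, "MA": 2, "PMC": 2, "PP": 2, "PPQA": 2, "REQM": 2, "SAM": 2,
        "DAR": 3, "IPM": 3, "OPD": 3, "OPF": 3, "OT": 3, "PI": 3,
        "RD": 3, "RSKM": 3, "TS": 3, "VAL": 3, "VER": 3,
        "OPP": 4, "QPM": 4,
        "CAR": 5, "OPM": 5,
    }

    def maturity_level(satisfied_areas: set) -> int:
        level = 1                                  # maturity level 1: Initial
        for target in (2, 3, 4, 5):
            required = {a for a, ml in AREA_MATURITY_LEVEL.items() if ml <= target}
            if required <= satisfied_areas:        # all required areas satisfied
                level = target
            else:
                break
        return level

    # Example: all level 2 areas satisfied, RSKM (level 3) still open -> level 2.
    ratings = {"CM", "MA", "PMC", "PP", "PPQA", "REQM", "SAM",
               "DAR", "IPM", "OPD", "OPF", "OT", "PI", "RD", "TS", "VAL", "VER"}
    print(maturity_level(ratings))   # -> 2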