On Theory in Operations Management
Roger W. Schmenner
Morgan L. Swink
Kelley School of Business, Indiana University
January 1998
Published in the Journal of Operations Management
(Elsevier), Vol. 17, No. 1 (December 1998), pp. 97-113.
Direct correspondence to: Roger W. Schmenner
Kelley School of Business, Indiana University
801 W. Michigan Street
Indianapolis, IN 46202
Phone: 317 / 274-2544
Fax: 317 / 274-3312
E-mail: [email protected]
Acknowledgement: The authors are grateful for the comments of the referees and editors on prior versions of this paper. The first part of the paper owes much to Carl G. Hempel, the former Stuart Professor of Philosophy at Princeton, and to his book, Philosophy of Natural Science.
Keywords: operations strategy, productivity
On Theory in Operations Management
Abstract
The field of operations management has been criticized for the inadequacy of its theory. We
suggest that this criticism may be too harsh, and further, that many building blocks of theory are
prevalent in the body of existing research. This paper has two goals. The first is to suggest that
careful organization of our thinking can lead to useful, productive theories in operations management
that demonstrate all the hallmarks of the familiar theories of natural science. We discuss the nature
of scientific inquiry in general terms, and examine the implications for what should be expected from
theory in operations management.
Our second goal is to illustrate through examples how such theories and their related laws
might be developed. Two theories are proposed: the Theory of Swift, Even Flow, and the Theory of
Performance Frontiers. The Theory of Swift, Even Flow addresses the phenomenon of cross-factory
productivity differences. The Theory of Performance Frontiers addresses the multiple dimensions of
factory performance and seeks to unify prior statements regarding cumulative capabilities and trade-
offs. Implications drawn from the theories are discussed and concluding remarks suggest the
advantages of future theory development and test.
On Theory in Operations Management
Over the years, as research in operations management has become increasingly rigorous,
there have been various comments about the inadequacies of theory within the discipline (Swamidass
and Newell, 1987; Anderson, Cleveland, and Schroeder, 1989; Flynn, Sakakibara, Schroeder and
Flynn, 1990; Swink and Way, 1995). Although recognized as vital to the prospects of any company,
operations management suffers in at least some quarters because there is no recognized theory on
which it rests or for which it is famous. Finance has economic theory as its foundation and there are
various theories, mainly of psychology, that anchor aspects of organizational behavior and marketing.
Operations management lacks the intellectual rigor of these other business disciplines, or so it seems.
Unfortunately, this has afflicted the field with theory envy.
In our view, the field of operations management has been too harsh on itself. This paper has
two goals. One is to suggest that careful organization of our thinking can lead to useful, productive
theories in operations management that demonstrate all the hallmarks of the familiar theories of
natural science. Our second goal is to illustrate through two examples how such theories and their
related laws might be developed. In so doing, we advance several propositions derived from those
theories that are susceptible of test. However, the purpose of these examples is to demonstrate how
existing knowledge in operations management might be fashioned into theory, not to defend the
theories themselves. We leave that to future research.
In Section 1 of the paper, we discuss the nature of scientific inquiry in general terms. In
Section 2 we introduce a phenomenon to be explained – cross-factory productivity differences – and
discuss what traditional microeconomic theory can say about it. We advance some laws and a theory
in Sections 3 and 4 as examples of the explanatory power of operations management in dealing with
this phenomenon. With the theory as a base, Section 5 proposes some additional, testable hypotheses
and refines some understandings, and Section 6 discusses the related framework of the product-
process matrix and relates it to the proposed theory. In an effort to generalize further, we propose
another example of an operations management theory in Sections 7 and 8 and explore its implications
in Section 9. The paper ends with some concluding remarks.
1. Comments on the Nature of Scientific Inquiry
Any study of the philosophy of science reveals how silly it is to think that the “scientific
method” consists of (1) a collection of facts without a priori guesses as to their importance or
relevance, (2) an analysis of those facts without hypotheses, (3) the inductive derivation of
generalizations from them, and then (4) the further testing of the generalizations. All science
depends upon guesses and suspicions (hypotheses) to guide both the collection of data and the
analysis of that data. Without such guesses and suspicions we would be overwhelmed by irrelevant
data; we need to simplify and prioritize if our inquiries are to bear fruit. In developing the law of
gravity, Newton was not concerned with the pressure and temperature of gases because he thought
them irrelevant to characterizing gravity. Nor were Boyle and Graham concerned with the mass of
celestial objects and their distances from one another as they puzzled over the kinetic theory of gases.
There are, then, no generally applicable “rules of induction”, by which
hypotheses or theories can be mechanically derived or inferred from empirical
data. The transition from data to theory requires creative imagination.
Scientific hypotheses and theories are not derived from observed facts, but
invented in order to account for them. (Hempel, 1966, p. 15)
Proof is also elusive. Although one can disprove a hypothesis of “universal form” with a
proper counterexample, one can do nothing more than give added support, rather than proof, for any
hypothesis. Nevertheless, there can be more support for one hypothesis than for a competing one.
The more cases that support the hypothesis, the more different ways it is tested, the richer the follow-
on implications of the hypothesis, the more likely we are to hold fast to it as opposed to its rival.
Hypotheses and their tests are the basic building blocks of scientific inquiry. Their proposed
explanations are of two major types: deductive and probabilistic. Newton’s explanation of the
movement of the planets and of objects thrown in the air is an example of a deductive explanation
from empirical observation. All objects are included and a single instance contrary to Newton’s
expectations would be grounds for abandoning or, at least, seriously modifying Newton’s explanation
of gravity (as Einstein’s broader general theory of relativity later did). Mathematical (non-empirical)
explanations are of this deductive type as well; one is only concerned that the implications of the
accepted premises are logically derived.
A good deal of science, however, must content itself with inductive, probabilistic
explanations. For example, we know now that germs cause disease. Yet, not all exposures to germs
will lead to disease. All we know, at least at this stage of our understanding, is that exposure will
lead to disease with a certain probability. Tests of probabilistic explanations have been devised by
using statistics and by applying conventional standards for the agreement of the observed frequencies
and the hypothesized probabilities. We have to be content with probabilistic explanations for some
hypotheses because our understanding is incomplete. However, there are other phenomena for which
a probabilistic explanation is all that we can hope (e.g., quantum physics).
Hypotheses perforce guide data collection and analysis. We can learn a great deal from
hypotheses and the support given them, irrespective of any surrounding theory. Much good science
has been accomplished while we were as yet shamelessly ignorant of theory.
As hypotheses are supported by more and more evidence, especially evidence of different
kinds, they can often be organized into laws. Laws are the precise descriptions of observed and
supported regularities. Laws show how something to be explained can be accounted for. Moreover,
laws, like hypotheses, can exist independent of theories. Kepler’s laws of planetary motion and
Boyle’s law of gases were accepted as laws well before there were theories to explain why they work
as they do.
Laws in the social sciences are often referred to as “models of phenomena”, conveying an
incompleteness that is often appropriate (Little, 1992). While it is sometimes difficult to describe
laws in the social sciences using simple equations, much valuable knowledge has been packaged in
laws or models in ways that have improved management practice. Like many laws in natural
science, many of these managerial laws have made great impacts while rigorous theories remained
absent (e.g., the learning curve).
Theories, then, exist at a higher and deeper level than laws or hypotheses.
Theories are usually introduced when previous study of a class of
phenomena has revealed a system of uniformities that can be expressed in the
form of empirical laws. Theories then seek to explain those regularities and,
generally, to afford a deeper and more accurate understanding of the
phenomena in question. To this end, a theory construes those phenomena as
manifestations of entities and processes that lie behind or beneath them, as it
were. These are assumed to be governed by characteristic theoretical laws, or
theoretical principles, by means of which the theory then explains the empirical
uniformities that have been previously discovered, and usually also predicts
“new” regularities of similar kinds. (Hempel, 1966, p. 70)
Theories often take the form of internal principles or concepts, for which some theoretical terms are devised (e.g., atomic nucleus, orbiting electron, energy level, quanta). The theory is often
explained by using these terms. Also, there are typically some “bridging principles” that tie the
theory to feasible tests of its implications. As with hypotheses, one cannot prove a theory. Rather, a
theory is more or less satisfying to the extent to which it (1) explains a broad range of observed
phenomena, (2) provides hypotheses for subsequent test, and (3) is simple in character. In addition,
A theory will usually deepen our understanding also in a different way,
namely by showing that the previously formulated empirical laws that it is
meant to explain do not hold strictly and unexceptionally, but only
approximately and with a certain limited range of application. Thus, Newton’s
theoretical account of planetary motion shows that Kepler’s laws hold only
approximately, and it explains why this is so: the Newtonian principles imply
that the orbit of a planet moving about the sun under its gravitational influence
alone would indeed be an ellipse, but that the gravitational pull exerted on it by
other planets leads to departures from a strictly elliptical path. … (Hempel,
1966, p. 76)
Operations management can arguably be viewed as a mongrel mixture of natural and
behavioral science. So far, our examples and reasoning have concentrated on natural science. Is
behavioral science any different? The philosopher Abraham Kaplan argued in his book, The
Conduct of Inquiry: Methodology for Behavioral Science, that it is not.
I do not believe that the role of theory in behavioral science is any
different from what it is in physical or biological science. There are, of course,
differences in the number, power, and weight of the theories which have been
established concerning these different subject-matters; but everywhere, so it
seems to me, theory works in essentially the same way.
What is important is that laws propagate when they are united in a theory:
theory serves as matchmaker, midwife, and godfather all in one…. (Kaplan,
1964, p.303)
Kaplan does acknowledge that behavioral science has its own particular theoretical
problems:
It might well be said that the predicament of behavioral science is not the
absence of theory but its proliferation. The history of science is undeniably a
history of the successive replacement of poor theories by better ones, but
advances depend on the way in which each takes account of the achievement of
its predecessors. Much of the theorizing in behavioral science is not building
on what has already been established so much as laying out new foundations,
or even worse, producing only another set of blueprints. …. (Kaplan, 1964,
p.304)
And, Kaplan argues, behavioral science often has an unhealthy fixation on methodology:
Many behavioral scientists, I am afraid, look to methodology as a source of
salvation: their expectation is that if only they are willing and obedient, though
their sins are like scarlet they shall be as white as snow. Methodology is not
turned to only as and when specific methodological difficulties arise in the
course of particular inquiries; it is made all encompassing, a faith in which the
tormented inquirer can hope to be reborn to a new life. If there are such
illusions, it has been my purpose to be disillusioning. In these matters, the
performance of the ritual leaves everything unchanged, and methodological
precepts are likely to be as ineffective as moral exhortations usually are. There
are indeed techniques to be mastered, and their resources and limitations are to
be thoroughly explored. But these techniques are specific to their subject-
matters, or to distinctive problems, and the norms governing their use derive
from the contexts of their application, not from general principles of
methodology. There are behavioral scientists who, in their desperate search for
scientific status, give the impression that they don’t much care what they do if
only they do it right: substance gives way to form. And here a vicious circle is
engendered; when the outcome is seen to be empty, this is taken as pointing all
the more to the need for a better methodology. The work of the behavioral
scientist might well become methodologically sounder if only he did not try so
hard to be so scientific! (Kaplan, 1964, p. 406)
What then should we expect from an operations management theory? The preceding
discussion leads us to the following characteristics:
1. The operations management phenomenon for which explanation is sought should be
clearly defined. This clarity is enhanced by unambiguous measures of the phenomenon.
2. The description of the phenomenon will likely center on some observed regularities that
have been derived either logically or empirically.
3. There should be one or more precise statements of these regularities (laws).
Mathematical statements of the laws will naturally help the precision.
4. The theory should indicate a mechanism or tell a story that explains why the laws work
as they do and how, and in which ways, the laws may be subject to limitations. The
theory may include some special terms or concepts that aid the explanation.
5. The more powerful the theory, the more likely it will unify various laws and also
generate predictions or implications that can be tested with data. Furthermore, the
power of the theory does not necessarily rest with the methodological choice of the tests
made.
With this review of the philosophy of science as background, let us turn now to some
examples within the realm of operations management. Specifically, let us start with the examination
of a particular phenomenon and what an already well-accepted theory has to say about it.
2. A Phenomenon to be Explained and What Microeconomic Theory Can Say About It
For those of us intrigued by operations, one of the key phenomena we seek to understand is
why one operation (factory or service) is more productive than another. Much of what we research
has a direct bearing on the improvement of some existing operation. Microeconomic theory, well-
accepted and broadly applicable, does have something to say about this issue. How would it explain
differences in observed labor productivity across, say, two factories in the same industry? Here are
some of its conventional insights:
i. The across-factory differences could be the result of different inputs of labor and
capital. Thus, one factory could have:
a. a higher ratio of capital to labor than the other, and/or
b. newer, better capital as compared with the other factory, and/or
c. higher skilled labor
ii. There could be different, better technology for transforming inputs into outputs in the
one factory. In microeconomic theory, however, the nature of this better technology is
not detailed very well. It is customarily treated as the residual left from what is
explained by factor differences.
iii. There could be economies of scale enjoyed by one factory as compared to the other.
Here, again, the nature of what those economies might be is not well envisioned by the
theory.
iv. One factory might be rebounding faster from a recession than the other and thus have
more output to spread over its previously slack resources. (This is admittedly a more
specialized insight that is less applicable when factories are in the same industry and
when the phenomenon persists during any phase of the business cycle.)
These are helpful insights, but they are not altogether satisfying, as the factory itself remains largely a “black box”. A great deal more could be said of the operations in each factory. Note some of the aspects of factory operations about which microeconomic theory has relatively little to say:
the type of production process used
the existence of significant bottlenecks
variabilities in quality
variabilities in demand on the process
variabilities in the processing itself
scheduling capabilities
workforce organization, including indirect versus direct labor
workforce morale and effort
the supply chain, from initial materials suppliers through the distribution
system
Given these omissions, it is difficult to embrace microeconomic theory as a complete
explanation for the productivity differences between two factories. Too many of the details of
factory operation are ignored and the implications about technology and scale are too ambiguous to
test.
3. A Competing Explanation
The discipline of operations management has, over the years, developed a set of laws that
have a bearing on the phenomenon of differential factory productivity. We have not labeled them as
laws, but in the terminology of the philosophy of science, that is what they are. Some of these laws
are deductive and derived from mathematical foundations, and some of them are probabilistic and
derived from observed data. Consider the deductive laws first. Note that within these laws, as stated,
productivity is defined as output per unit of input resource (e.g., units produced per labor hour, or per
machine hour, or per material dollar, or per some combination, such as those used in total factor
measures).
Law of Variability: The greater the random variability, either demanded
of the process or inherent in the process itself or in the items processed, the less
productive the process is.
This law derives from queuing theory and can easily be verified by simulation (Conway et
al., 1988). The more variable the timing or the nature of the jobs to be done by the process, and the
more variable the processing steps themselves or the items processed, the less output there will be
from the process, as captured by labor productivity measures, machine productivity measures,
materials productivity measures, or total factor productivity measures.
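The effect the law describes is easy to reproduce in a few lines of simulation. The sketch below (all parameters hypothetical) feeds a single server the same average workload twice, once with perfectly even timing and once with exponentially variable timing, and compares the average flow time per job:

```python
import random

def mean_flow_time(service, interarrival, n):
    """Average flow time for a single-server queue, by the Lindley
    recursion: wait_{k+1} = max(0, wait_k + service_k - interarrival_{k+1})."""
    wait, total = 0.0, 0.0
    for _ in range(n):
        s = service()
        total += wait + s                      # flow time of this job
        wait = max(0.0, wait + s - interarrival())
    return total / n

random.seed(1)
mean_service = 8.0    # minutes of work per job (same in both scenarios)
mean_gap = 10.0       # minutes between job arrivals, on average

# Even flow: identical jobs arriving on a fixed beat.
even = mean_flow_time(lambda: mean_service, lambda: mean_gap, 50_000)

# Uneven flow: same averages, but exponentially variable timing.
uneven = mean_flow_time(lambda: random.expovariate(1 / mean_service),
                        lambda: random.expovariate(1 / mean_gap), 50_000)

print(f"even flow:   {even:5.1f} min per job")
print(f"uneven flow: {uneven:5.1f} min per job")   # markedly longer
```

With identical mean workloads, only the variability differs, yet jobs in the variable scenario spend several times longer in the system, which is the productivity penalty the law asserts.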
Law of Bottlenecks: An operation’s productivity is improved by
eliminating or by better managing its bottlenecks. If a bottleneck cannot be
eliminated in some way, say by adding capacity, productivity can be augmented by
maintaining consistent production through it, if need be with long runs and few
changeovers. Non-bottleneck operations do not require long runs and few
changeovers.
This law is most readily associated with Eliyahu Goldratt (1989) and his “theory of
constraints”, although, in the terms of the philosophy of science, his theory is perhaps better
described as a deductive law whose foundations are not empirical but are rather mathematical and
capable of verification by simulation.
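The arithmetic behind the Law of Bottlenecks can be sketched with a toy capacity calculation (station names and rates are hypothetical): the line's output is set by its slowest station, so added capacity pays off only at the bottleneck.

```python
# Hypothetical three-station line; capacities in units per hour.
capacities = {"cut": 60, "weld": 40, "paint": 55}

bottleneck = min(capacities, key=capacities.get)
line_rate = capacities[bottleneck]        # the line can do no better than 40/hr
print(bottleneck, line_rate)

# Adding capacity anywhere but the bottleneck leaves line output unchanged:
capacities["paint"] += 20
print(min(capacities.values()))           # still 40

# Relieving the bottleneck raises output for the whole line:
capacities["weld"] += 10
print(min(capacities.values()))           # now 50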
These deductive laws are complemented by several probabilistic laws that are derived from
experience. We offer three here that relate to the issue in question, productivity differences across
factories.
Law of Scientific Methods: The productivity of labor (i.e., output per
worker-hour of labor) can be augmented in most instances by applying methods
such as those identified by the Scientific Management movement.
This law, of course, dates from the time of Taylor, Gantt, and the Gilbreths. In making a
scientific study of methods they discovered a toolbox of improvements that have withstood the tests
of time in countless situations. Their work anchors much of industrial engineering. This law,
however, is an empirical one. That is how it was discovered. And, it is probabilistic in character for
it has not been observed to hold in every labor situation.
Law of Quality: Productivity can frequently be improved as quality (i.e.,
conformance to specifications, as valued by customers) is improved and as waste
declines, either by changes in product design, or by changes in materials or
processing. Various techniques of the quality movement can be responsible for
these improvements.
This is perhaps a more controversial law. It is stated as a probabilistic law that need not
hold in all situations (e.g., not in situations close to zero defects), but it has been shown to hold in
most. The evidence for this law is widespread, although much of it is anecdotal (Crosby, 1979;
Juran, 1979; Deming, 1982).
Law of Factory Focus: Factories that focus on a limited set of tasks will
be more productive than similar factories with a broader array of tasks.
This is Wick Skinner’s influential observation, drawn from his study of factories in a variety
of industries (Skinner, 1974). Its immediate acceptance by managers is evidence for its
persuasiveness as a law, and the experiences of many have supported it.
Each of these laws contributes to our understanding of why one factory is more productive
than another. Yet, none of them is so broad in character as to tie many threads together.
Theory needs to incorporate these laws but also to expand, or limit, their scope. Fortunately, the field
of operations management has already developed the relevant insight that can be advanced as theory.
Much of this theory will, of course, be very familiar, but permit us to discuss it as if one were
unacquainted with operations management.
4. The Theory of Swift, Even Flow
The Theory of Swift, Even Flow holds that the more swift and even the flow of materials
through a process, the more productive that process is. Thus, productivity for any process – be it
labor productivity, machine productivity, materials productivity, or total factor productivity – rises
with the speed by which materials flow through the process, and it falls with increases in the
variability associated with the flow, be that variability associated with the demand on the process or
with steps in the process itself.
To understand this theory one must understand several theoretical concepts:
i. The first concepts are value-added and non-value-added work. According to the theory,
all work can be divided into either value-added work or non-value-added work. Work
that transforms materials into good product is considered value-added, while work that
moves materials, catalogs them, inspects them, counts them, or reworks them is not
regarded as value-added. Anything that adds waste to the process is non-value-added,
including the classic seven wastes of Shigeo Shingo: overproduction, waiting,
transportation, unnecessary processing steps, stocks, motion, and defects (Hall, 1987, p.
26). Given this understanding, materials can move more swiftly through a process if the
non-value-added, wasteful steps of the process are either eliminated or greatly reduced.
ii. Similarly, materials can move swiftly only if there are no bottlenecks or other
impediments to flow in the way. To capture this, the theory clings to another concept,
throughput time, as a useful measure of the speed of the flow from the point where
materials for a unit of the product are first worked on until that unit is completed and
supplied to either the customer or to a finished goods warehouse. Other things equal,
the theory urges the process to reduce the clock time spent in this way (the throughput
time). Throughput time is particularly useful as a mechanism to isolate where flows
have become retarded or blocked.
iii. For materials to flow more evenly, one must narrow the variability associated with
either the demand on the process or with the process’s operations steps. Variability is
measured by the variance or standard deviation of the timing or quantities demanded or
of the time spent in various process steps. Variability is narrowed when the demands
placed on the process are even and regular. “Level” production plans are more
compatible with productivity than are production plans with irregular quantities or due
dates. Such steadiness of demand exhibits lower variances of both timing and
quantities demanded. Variability is also narrowed whenever like things are processed
together. Hence, whenever like things can be worked on together, without slowing
down the process, then productivity will increase.
The Theory of Swift, Even Flow is thus consistent with the deductive laws of variability and
of bottlenecks, and its measures of throughput time and variability are easily understood. (A variety
of other laws on the topics of variability and bottlenecks, and so stated as laws, are available in the
fine book, Factory Physics, by Wallace J. Hopp and Mark L. Spearman.)
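Both of the theory's measures can be computed directly from shop-floor records. A sketch with hypothetical job and demand data: throughput time and the share of it spent adding value gauge the swiftness of flow, while the coefficient of variation of demand gauges its evenness.

```python
import statistics

# Hypothetical records for ten jobs: minutes of value-added work, and total
# clock time from first operation to completion (throughput time).
value_added = [12, 14, 11, 13, 12, 15, 12, 11, 14, 13]
throughput  = [95, 240, 130, 410, 88, 300, 150, 120, 520, 210]

mean_tpt = statistics.mean(throughput)
# Share of throughput time spent adding value: a flow-speed diagnostic.
va_share = sum(value_added) / sum(throughput)

# Evenness: coefficient of variation of daily demand on the process.
daily_demand = [40, 42, 38, 90, 10, 41, 39]   # one erratic pair of days
cv = statistics.stdev(daily_demand) / statistics.mean(daily_demand)

print(f"mean throughput time: {mean_tpt:.0f} min")
print(f"value-added share:    {va_share:.1%}")
print(f"demand CV:            {cv:.2f}")
```

A low value-added share points to non-value-added steps retarding the flow; a high demand CV points to unevenness that "level" plans or grouping of like work would narrow.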
Importantly as well, this theory, like others, serves to explain why the probabilistic laws that
are relevant to the phenomenon work as they do. Consider these laws in turn:
1. Scientific Methods. Scientific methods are means by which non-value-added motions
and steps are removed from what labor does and by which value-added steps can be
done more quickly and with less exertion. They serve to speed up the flow of materials
without, helpfully, putting more physical stress on the workforce. Scientific methods
make little difference, according to the theory, when applied to non-value-added work.
One should thus expect to see varying success with scientific methods depending upon
the steps in the process to which they are applied. They should have their biggest
impact on bottleneck operations.
2. Quality. Among the most disruptive things to a process are snafus (temporary
bottlenecks, if you will) caused by quality problems that force rework, scrap, machine
downtime, interrupted flow of materials, and the like. Good quality is essential to the
swift, even flow of materials as it helps both to lower variability and to avoid
bottlenecks.
3. Factory Focus. Factory focus gathers like products (and, less commonly, like processes) together. It is thus a mechanism for reducing variability. Moreover, factory focus is a
means for allowing flows to surface within the process. By grouping like products
together the flows of materials for those products are exposed to view more easily and
naturally and this permits the identification of bottlenecks and of non-value-added steps
and facilitates their removal.
The Theory of Swift, Even Flow thus provides a broad explanation that unifies the variously
identified probabilistic laws of scientific methods, quality, and factory focus and shows how they
work. It is consistent, as well, with the deductive laws of variability and of bottlenecks. The Theory
of Swift, Even Flow does not nullify Microeconomic Theory, but, for this factory-specific
phenomenon, it offers a more complete explanation and thus augments that theory.
5. Further Implications
As mentioned above, theories are more satisfying to the degree to which they (1) imply
other, testable hypotheses, and (2) refine and unify existing laws or theories. Consider the latter first.
A. Comparisons with Microeconomic Theory
The Theory of Swift, Even Flow offers a variety of qualifications to microeconomic theory’s
implications for the factory. These qualifications do not argue against microeconomic theory whose
implications are far-ranging; they merely discuss instances where microeconomic theory, as it relates
to factory productivity, is limited. For example, microeconomic theory argues that labor productivity
(e.g., units per man-hour) will be augmented with the substitution of capital for labor. The Swift,
Even Flow theory argues that capital-for-labor substitution does not, of itself, imply higher
productivity; it raises productivity only if the substitution leads to faster, steadier flows. Continuous flow processes
are nearly always both more capital-intensive than other types of processes and more productive.
However, the Swift, Even Flow theory argues that it is not the capital of the continuous flow process
that is important to its high productivity, but it is rather the continuous, less variable nature of the
flow.
In a similar vein, microeconomic theory supports the inclusion of labor savings in the
justification of new capital equipment. After all, according to the theory, substituting capital for
labor leads to productivity advance. The Swift, Even Flow theory, on the other hand, dismisses labor
savings as a justification in favor of a justification based on what the new capital equipment does for
the flows in the process. Are the flows swifter and more even? Post-audits of new capital equipment
should reveal that the less successful new capital investments were also the ones that did not effect
swifter, more even flows within the process.
Microeconomic theory also argues for new, better capital investments and higher skilled
labor. Both of these policies can lead to enhanced productivity according to the Swift, Even Flow
theory. However, they do so because they either speed up the flow of materials or they reduce the
variability of the process. Microeconomic theory also prizes new technology and scale economies as
possible sources of productivity gain, although the exact nature of these is rather ambiguous. The
Swift, Even Flow theory can value technology as well, but, again, only if it leads to faster, less
variable flows of materials. The Swift, Even Flow theory does not lean one way or another with
respect to scale economies. Increasing the scale of a process is not unambiguously good if it has no
beneficial consequence on flows.
B. Other Implications
There are a number of facets of any operation for which the Swift, Even Flow theory would
predict augmented productivity. For example, the theory is much in favor of the creation of cells and
of compact layouts. Cells highlight flows and often increase the speed with which a product is made.
And, by grouping like products together, cells reduce variability. Indeed, one can view factory focus
as cells writ large.
The theory also favors reducing work-in-process inventories as they can bog down the swift
flow of materials and thus raise throughput times to high levels. (The theory offers no implications
for either raw materials or finished goods inventories.)
Several other policies are favored by the theory because they either speed flows or reduce
variation, or both. Among them: cross-training of the workforce, quicker changeovers of equipment,
smaller batches of materials to process, and regular preventive maintenance. A pull system,
according to the theory, would stand a better chance at being productive than a push system,
especially if demands on the process are steady. With a pull system, smooth flow is more assured
because upstream operations cannot act without the authorization of downstream operations and thus
cannot flood the operation with work-in-process inventory. And, because work-in-process inventory
levels are capped by the number of containers or spaces permitted, throughput times are assured to be
low. In short, the Swift, Even Flow theory is very much in tune with the just-in-time manufacturing
philosophy that owes so much to Japanese practitioners such as Taiichi Ohno. Indeed, the success of JIT provides
strong support for the theory (see also Schmenner, 1991).
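The claim that capped work-in-process assures low throughput times is an instance of Little's Law (work-in-process = output rate × throughput time; see Little, 1992, in the references). A minimal sketch, with purely illustrative numbers:

```python
# Little's Law for a stable process: WIP = output_rate * throughput_time,
# so throughput_time = WIP / output_rate.

def throughput_time(wip_units: float, output_rate: float) -> float:
    """Average time a unit spends in the process, given average WIP
    (units) and output rate (units per hour)."""
    return wip_units / output_rate

rate = 10.0  # units per hour, unchanged by the WIP policy

uncapped = throughput_time(wip_units=400, output_rate=rate)  # push system
capped = throughput_time(wip_units=40, output_rate=rate)     # kanban cap

# Capping WIP at one-tenth the level cuts throughput time tenfold
# at the same output rate.
print(uncapped, capped)  # 40.0 4.0
```

The pull system's container count fixes the numerator, which is exactly why the theory regards low throughput times as "assured."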
The theory also argues for the coordination of the supply chain. The smoother the links and
the faster the flow from initial materials to the end customer, the more productive all aspects of the
supply chain should be.
In addition to these policies, the Swift, Even Flow theory argues for the abandonment of
numerous performance measures. Measures such as machine utilization or labor efficiency (standard
hours of labor relative to actual hours) are not measures of either flow or variability. For this reason,
the theory argues that they should be abandoned as measures in favor of measures of throughput time
and variability (say, delivery performance to plan). Indeed, there is some confirmation that machine
utilization and labor efficiency are not associated very much with productivity (Schmenner, 1991;
Schmenner and Vollmann, 1994).
6. An Example of What is Not a Theory
Hempel (1965) and Bacharach (1989), among others, point out the need to distinguish
description from theory. Much of social science is about describing phenomena using categorization,
typologies, and metaphors. While these approaches often yield powerful tools for managers, they
clearly fall in the realm of description, not theory. One such example is the product-process matrix.
It is one of the most appealing notions in operations management, advanced by Hayes and
Wheelwright in two articles in back-to-back issues of the 1979 Harvard Business Review. The
product-process matrix depicts a relationship between product volume and mix – what Hayes and
Wheelwright called “product life cycle stage” -- and the character of the production process
stretching along a spectrum from job shop to continuous flow process – what Hayes and Wheelwright
called “process life cycle stage”. The arguments behind the product-process matrix assert that the
diagonal offers the best match of product volume/mix and process character, but the authors did not
dismiss the possibility for operations to exist off the diagonal.
In the second of their articles, Hayes and Wheelwright state that they “proposed ‘the
product-process matrix’ as a way of combining these concepts into a framework for describing
alternative business strategies and examining their implications for the company’s manufacturing
organization”. Thus, for Hayes and Wheelwright, the product-process matrix represents a useful
framework, but it was never a theory. Empirical work has confirmed that most, although far from all,
factories can be arrayed on or near the diagonal. One could even claim such a regularity deserves to
be recognized as a law, but it is not a theory.
Why not? First of all, the phenomenon is not defined as neatly as one would require for a
theory. In their articles, Hayes and Wheelwright do not propose a particular measure for either their
“product life cycle stages” or their “process life cycle stages”. In fact, in their second article,
they superimpose the learning curve on the matrix, although it is clear from the text of the article that
they do not want to label the horizontal axis of the product-process matrix with “cumulative output”,
nor do they want to label the vertical axis with either “labor hours per unit” or “cost per unit”. For
them, the product-process matrix is obviously more suggestive and representative than theoretically
precise, although they see the lower right portion as the most cost-effective area of the matrix.
In addition, the product-process matrix does not offer many predictions. While Hayes and
Wheelwright see the diagonal as home to the best “patches” of space within the matrix, they do not
necessarily predict ill fates for off-diagonal operations, although they do counsel caution in such
situations. The mechanisms by which off-diagonal operations either die or change are not spelled out
in detail, although it is recognized that firms are unlikely to coordinate product life cycle stage
changes with process life cycle stage changes and so must linger in off-diagonal situations from time
to time.
The product-process matrix is thus not, in its current state, a theory. Rather, it is an
insightful framework for examining product and process coordination and matching. This is not to
say that frameworks cannot be useful tools in the generation of theory. Frameworks can depict
hypotheses and laws in accessible, often visual, ways. To move toward theory, however, frameworks
need to become more precise and they need to detail the mechanisms that explain the phenomenon at
issue and to suggest some implications that can be tested. Safizadeh, et al. (1996) provide a step in
this direction using the product-process matrix. They suggest and provide some limited support for
three propositions connected to the matrix. (The propositions are as follows: (1) process choice for
the primary product line produced in a plant falls on or close to the diagonal, (2) competitive
priorities are consistent with the plant’s process choice, and (3) firms positioned on or close to the
diagonal outperform those choosing extreme off-diagonal positions.) Nevertheless, the product-
process matrix remains a framework, and not theory.
The Theory of Swift, Even Flow, however, can refine our understanding of the product-
process matrix. Consider Figure 1. If one redefines the horizontal axis of the product-process matrix
as demand variability, going from high variability to low variability (from highly customized
products with irregular demands to commodities with steady demands), and if one redefines the
vertical axis of the matrix as speed of flow, from slow to swift, then a new type of diagonal can be
seen. The lower right portion of the matrix represents those operations that combine low demand
variability with swift materials flow, a combination that the Theory of Swift, Even Flow would argue
is the most productive (most output per unit of input resource). By the same reasoning, the upper left
portion of the matrix represents those operations that are the least productive. Such a redefinition of
the product-process matrix is quite consistent with the thrust of the Hayes and Wheelwright
framework. What is more, it is a consistent implication of the Theory of Swift, Even Flow.
7. A Related Phenomenon and the Conflicting Explanations Surrounding It
The Theory of Swift, Even Flow is concerned with the productivity of plants. Productivity
is an important metric for plants, but certainly not the only one of merit. A more general
phenomenon concerns why some manufacturing plants appear to outperform their rivals in
many dimensions of performance, not only productivity, while other plants appear to face
strategic “either/or” choices about what to do. A grand debate has erupted over this matter, a debate
that pits “trade-offs” against “cumulative capabilities”.
Permit us to capture the essence of the debate by stating the rival propositions as laws and,
in the process, begin to clarify the nature of the phenomenon at issue.
Law of Trade-offs: A manufacturing plant cannot simultaneously provide the highest levels
among all competitors of product quality, flexibility, and delivery, at the lowest manufactured cost.
Law of Cumulative Capabilities: Improvements in certain manufacturing capabilities (e.g.,
quality) are basic and enable improvements to be made more easily in other manufacturing
capabilities (e.g., flexibility). The sequence that the law of cumulative capabilities is most
comfortable with is quality, delivery, cost, and flexibility. (See Hall (1987) and Hall and Nakane
(1990).)
It is our intention to argue that these two laws are not competing rivals, as many see them,
but are instead complements that are subsumed by a broader theory, the Theory of Performance
Frontiers.
The first step in resolving the apparent conflict between the laws of trade-offs and
cumulative capabilities is to understand the type of law they each represent. In the language of the
philosophy of science, the law of trade-offs is a deductive law. Indeed, it is almost axiomatic. The
law of cumulative capabilities, on the other hand, is empirical and probabilistic. This insight leads to
the first of two clarifications we wish to make:
Clarification #1: The laws of trade-offs and of cumulative capabilities are of two distinct types.
Consider trade-offs first. Skinner (1996) describes a manufacturing plant as a
technologically constrained entity. Choices among technologies define constraints on manufacturing
capabilities, thereby necessarily forcing trade-offs among various dimensions of performance in the
short term. Over time, plants that focus their resources on achieving excellence in a few selected
performance dimensions will, in those aspects of performance, necessarily outperform plants that
pursue excellence in many dimensions of performance. Technological constraints will assure the
result. The law of trade-offs explains performance differences across different plants.
Certain proponents of Japanese management and World Class Manufacturing have
challenged the law of trade-offs citing evidence that certain firms do indeed lead their competitors in
almost every dimension of performance. Moreover, some evidence suggests that performance
improvements in different dimensions are most effective if they are pursued in a certain sequence,
layered one upon another just as cones of sand might be (Ferdows and
DeMeyer, 1990). In this view, certain “trajectories” of improvement are regarded as easier and more
effective than others. Quality is seen to be a precursor to cost reduction, process dependability a
precursor to flexibility, and so on. The law of cumulative capabilities is predicated on the idea that
certain dimensions of performance facilitate other dimensions of performance. This law is an
empirical, probabilistic law that deals with improvements within a plant over time.
These insights suggest a second clarification:
Clarification #2: The law of trade-offs is reflected in comparisons across plants at a given point in
time, whereas the law of cumulative capabilities is reflected in improvement within individual
plants over time. The two laws are not in conflict.
Indeed, these two laws can be unified. We state the case for such unification in the next
section.
8. The Theory of Performance Frontiers
Performance frontiers in manufacturing have been much discussed of late. (See the special
issue of Production and Operations Management, no. 1, Spring 1996.) We build on these ideas to
put forward a theory that subsumes both the law of trade-offs and the law of cumulative capabilities.
First, better definitions are needed. The performance frontier concept has been widely used
under various names including the “production function” and the “trade-off curve”. However, we are
at a loss to find a good definition of this construct in the operations management literature.
Borrowing from economic theory, a production frontier is defined as the maximum output that can be
produced from any given set of inputs, given technical considerations (Samuelson, 1947). The
performance frontier concept results from enlarging the scope of this definition. First, the nature of
“output” is expanded to include all dimensions of manufacturing performance (e.g., cost, product
range, quality), consistent with notions of data envelopment analysis (Charnes, 1994). Second,
“technical considerations” is expanded to include all choices affecting the design and operation of the
manufacturing unit, including the sources and nature of inputs. A performance frontier is therefore
defined by the maximum performance that can be achieved by a manufacturing unit given a set of
operating choices.
Within the operations management literature, there is also some ambiguity regarding the
make-up of a performance frontier. Skinner (1996) identified technology as the source of bounds on
the dimensions of performance. Other writers (Hayes and Pisano, 1996; Clark, 1996) suggest that
performance frontiers are formed and changed by manufacturing “systems”, that is, the aggregate set
of policies used to manage quality, production planning and control, and other procedures. For
example, Clark (1996) identified just-in-time manufacturing, statistical process control, total quality
management, continuous improvement, and cross-functional integration as “advanced”
manufacturing systems.
Manufacturing strategy concepts have made clear distinctions between choices affecting
physical assets and those affecting operating policies in manufacturing, using terms such as
“structural” and “infrastructural” to classify these decisions. We suggest that the same distinctions
apply to performance frontiers. Frontiers are formed by choices in plant design and investment as
well as by choices in plant operation. There are thus two frontiers. We call one, for lack of a more
precise word, an asset frontier. The other we term an operating frontier. The asset frontier is altered
by the kinds of investments that would typically show up on the fixed asset portion of the balance
sheet, whereas the operating frontier is altered by changes in the choices that can be made, given the
set of assets that the plant management is “dealt”.
Figure 2 illustrates a performance space within which there are performance frontiers for
two hypothetical plants. Both plants are located on their operating frontiers, and both operating
frontiers are located well within the asset frontier that is shared by both plants. Thus, while these
plants employ similar technologies and physical assets, they follow different management policies.
Each plant’s performance is immediately bounded by its policies and procedures – its operating
frontier. Each plant’s operating frontier may be moved or changed, say, by the adoption of advanced
manufacturing systems. However, each plant’s performance is also ultimately bounded by an asset
frontier. When two plants utilize similar production equipment, they share the same asset frontier, as
plants A and B do.
It is important to differentiate between two types of beneficial movement within the
performance space: improvement and betterment. Improvement is defined as increased plant
performance in one or more dimensions without degradation in any other dimension. (This definition
is analogous to Pareto optimality in microeconomic theory.) Improvement can be derived by
increasing utilization or efficiency, in the sense of bringing performance up to a predetermined
standard (e.g., standard hours). Improvement trajectories bring a plant like that in position A in
Figure 3 closer to its operating frontier. Under this strict definition, improvement has only to do
with removing inefficiencies in transformation processes and nothing to do with changing the
substance of either operating policy or physical assets. Improvement can result from changes in
inputs, experience, motivation, planning, and controls. Improvement takes a plant to its operating
frontier, and there, improvement ceases.
Betterment, on the other hand, is about altering manufacturing operating policies in ways
that move or change the shape of the operating frontier. Once betterment has occurred and the
operating frontier is itself moved outward, improvement can start anew as the plant can then attempt
to achieve its full potential within its new operating frontier. Betterment occurs in a number of ways.
Recent popular examples include the adoption of JIT or TQC principles or the introduction of cells.
These programs show that performance can be dramatically changed with little change in the amount
or type of physical assets employed. Of course, the asset frontier is itself movable through radical
technology upgrades or replacement. We expect, however, that because movement of the asset
frontier normally requires large capital investments and radical changes to the physical plant, this
occurs less frequently than movements of operating frontiers.
Under a given set of operating policies and assets, a fully utilized plant faces trade-offs
among dimensions of performance. It is immediately subject to its operating frontier and ultimately
subject to its asset frontier. For example, an increase in the number of different products
manufactured at a single plant is very likely to produce an increase in average unit costs if operating
policies and physical assets remain unchanged. However, a betterment in the plant’s policies, say a
move to cells or factory focus, can change the shape or position of the operating frontier such that the
plant can absorb the increase in products without a concomitant increase in unit cost. Nevertheless,
further advances are ultimately constrained by the limits of the plant’s technological assets.
The law of trade-offs states that no single plant can provide superior performance in all
dimensions simultaneously. We would expect to find support for this law if all competitors use
similar technologies and are operating near the asset frontier. If all plants are far from the asset
frontier, however, one plant can simultaneously provide higher levels of product quality, flexibility,
and delivery at a lower manufactured cost if, through betterment, its management approaches create
an operating frontier which is superior to its competitors’. In Figure 2 the operating frontier for plant
B is superior to that of plant A.
Figure 2 illustrates the performance frontier as a curve. However, a performance frontier is
really a surface that spans many different dimensions. In keeping with the law of cumulative
capabilities, the movement of a frontier in a certain dimension might produce economies and
leveraging effects for other dimensions of performance within a plant. Changing the shape of the
frontier in one dimension probably also changes its shape in other dimensions. We posit that these
effects are also subject to certain limitations, which are captured in the following familiar laws
adapted from microeconomic theory:
Law of Diminishing Returns: As improvement (or betterment) moves a
manufacturing plant nearer and nearer to its operating frontier (or its asset frontier), more
and more resources must be expended in order to achieve each additional increment of
benefit.
Law of Diminishing Synergy: The strength of the synergistic effects predicted by
the law of cumulative capabilities diminishes as a manufacturing plant approaches its asset
frontier.
The law of diminishing returns is common to the fields of economics and operations
management. We are simply applying this concept to performance frontiers. For example, we
expect that it requires less of an expenditure of resources to move from 8% defects to 5% defects
than it does to move from 5% defects to 2% defects. Once the easiest problems causing defects are
solved, deeper and more intensive activities will be required to find and solve more difficult
problems. And, in accordance with the law of diminishing synergy, we expect that the beneficial
impact on, say, delivery reliability, of improving quality from 8% defects to 5% defects is greater
than the beneficial impact on delivery reliability of improving quality from 5% defects to 2% defects.
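As a hypothetical illustration of the law of diminishing returns (not drawn from the paper), suppose the cumulative cost of reaching a defect rate d is proportional to 1/d, a simple convex form. Then each further three-point cut in defects costs more than the last:

```python
# Hypothetical convex cost model: cumulative cost of reaching defect
# rate d (in percent) is k / d, so the marginal cost of moving from
# d_from down to d_to is k * (1/d_to - 1/d_from).

def improvement_cost(d_from: float, d_to: float, k: float = 100.0) -> float:
    """Cost of cutting the defect rate from d_from% to d_to%
    (k is an arbitrary scaling constant)."""
    return k * (1.0 / d_to - 1.0 / d_from)

first_cut = improvement_cost(8.0, 5.0)   # 8% -> 5% defects
second_cut = improvement_cost(5.0, 2.0)  # 5% -> 2% defects

# Both cuts remove three percentage points, but the second costs
# four times as much: resources are increasingly consumed by a
# single dimension as the frontier nears.
print(round(first_cut, 1), round(second_cut, 1))  # 7.5 30.0
```

The specific functional form is an assumption chosen only to exhibit convexity; any curve with rising marginal cost would tell the same story.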
The laws of diminishing returns and diminishing synergy work together with the previously
stated laws to explain the nature of differences within and among plants more completely than any of
the laws do in isolation. By considering the position of a plant’s operating frontier relative to other
plants’ operating frontiers and relative to the asset frontier, predictions can be made regarding the
applicability of the laws of trade-offs and cumulative capabilities. Specifically, the law of
cumulative capabilities should be most applicable to plants that are not tightly bound by an asset
frontier.
Consider a plant that operates far away from its operating frontier (position A in Figure 3).
The plant contains a multitude of inefficiencies; many slack resources are present. As resource
rationalization programs or increased sales resolve inefficiencies and more fully utilize critical
resources, the plant begins to bump into operating constraints that prevent improvement in one
dimension of performance without degradation in another dimension. For example, the plant may
continually increase the range of products it produces while maintaining a certain level of on-time
delivery as long as sufficient non-value added activities and inefficiencies can be detected and
eliminated. However, once the majority of these inefficiencies have been dealt with, operating
limitations will begin to cause on-time delivery performance to suffer if the product range continues
to be increased. At this point, the plant finds itself in position A1 in Figure 3. It must better its
operating policies in order to achieve higher levels of performance at the same levels of cost.
As the plant engages in betterment, it aligns its operating policies more effectively and
completely with the capabilities of its physical assets, thus shifting its operating frontier to the right.
Throughout the early portion of the plant’s journey, the law of cumulative capabilities is in effect. At
some point, however, diminishing returns make it increasingly difficult to find and resolve the
remaining problems and operating-asset misalignments. In addition, increasing amounts of resources
are needed to accomplish each incremental betterment. For example, the difficulty and expense of
advancing delivery service levels increases at an increasing rate. Soon the cost of betterment is so
high that available resources are exhausted on advances in only one dimension, leaving few resources
for advances in other dimensions.
Once the plant has resolved most inefficiencies and has nearly completely aligned its
operating policies with its assets’ capabilities, it finds itself in position A2 in Figure 3. Now the limits
of technology begin to impose themselves and the law of trade-offs again begins to take precedence
over the law of cumulative capabilities.
9. Implications of the Theory
The theory of performance frontiers provides us with a clearer understanding of when the
laws of trade-offs and cumulative capabilities apply. The theory suggests that a plant that operates
near its asset performance frontier will reap greater benefits from structural, technological changes
than other plants. These types of changes usually require a large investment of resources in order to
move the frontier or change its shape. Plants that are not near their frontiers are not likely to enjoy as
high returns on these investments because the frontier is largely irrelevant to them. Instead, a plant in
this condition would benefit more from a cumulative improvement approach aimed at improving
infrastructure and operating efficiencies (such as quality-related improvements).
Any predictions generated by the theory must take into account the relative positioning of
plants with regard to each other and to their frontiers. Consider the following:
1. Benchmarking. The theory admonishes managers to take stock of current competitive
positions and technological limits when considering improvement initiatives. Given where a plant is
within the diagram, change strategies should be very different. In this light, benchmarking takes on
additional meaning. Traditional process benchmarking is most effective when plants are up against
their operating frontiers but not up against their asset frontiers. Technology-intensive benchmarking
is needed as the asset frontier is approached.
2. Metrics. The theory also emphasizes the need to understand the limitations of physical
assets, and it suggests the need to develop metrics that characterize proximity to an asset frontier.
For example, a possible indicator of nearness to the product range / product cost frontier is the ratio
of throughput time to processing time for the plant’s products. Imagine a factory that has a low
throughput-to-processing-time ratio. (A ratio of one would represent perfectly swift, even flow; a ratio near two is
regarded by many as “world class”.) Technological and capacity constraints impose the only
limitations to increased product range at equivalent cost. Moreover, it becomes important under this
theoretical framework to assess the severity (i.e., slope) of frontiers in different dimensions. It seems
plausible, for example, that in most circumstances trade-offs between product performance quality
and product cost would be more severe than trade-offs between conformance quality and product
cost, if indeed there are trade-offs at all between conformance quality and product cost. As factory
modeling becomes more sophisticated and complete, we may be able to establish theoretical
performance frontiers deductively.
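The proposed ratio is straightforward to compute. The sketch below uses hypothetical plant data; the thresholds are the rules of thumb mentioned above:

```python
# Throughput-to-processing-time ratio as a rough indicator of how
# close a plant is to swift, even flow (a ratio of 1.0 is the ideal;
# a ratio near 2 is often called "world class").

def flow_ratio(throughput_time_hrs: float, processing_time_hrs: float) -> float:
    """Ratio of total time in the process to value-adding process time."""
    return throughput_time_hrs / processing_time_hrs

# Illustrative plant data: a product spends 120 hours in the plant
# but receives only 6 hours of actual processing.
ratio = flow_ratio(throughput_time_hrs=120.0, processing_time_hrs=6.0)
print(ratio)  # 20.0 -- far from the frontier; ample room for betterment
```

A high ratio signals slack and misalignment to work through before asset constraints bind; a ratio approaching one signals that further gains must come from moving the asset frontier itself.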
3. Variation by Industry. The theory of performance frontiers suggests that plants in more
competitive or progressive industries are less likely to gain advantage from the law of cumulative
capabilities than their counterparts in less competitive industries. As competition forces more and
more firms to adopt increasingly effective, advanced manufacturing systems and physical assets,
technological trade-offs become more and more important considerations influencing strategic
choices and competitive positioning. Thus, the law of trade-offs is likely to be more important in
continuous flow industries than in others.
4. Competitive Strategy. The theory of performance frontiers clarifies the impacts that
assets and operating practices have on competitive advantage. Armed with an understanding of a
firm’s operating position relative to both competitors and the performance frontiers, strategic
planners are better equipped to evaluate and plan manufacturing initiatives. For example, a quality
improvement initiative may well be more attractive than a new technology initiative to a firm that
considers itself far from its asset frontier.
10. Conclusion
Researchers of operations management need not fret that the field has no theory to ground
its various policy implications. Past work has developed the basic laws and the elements of theory to
the point where we can now invent theories (1) to explain cross-factory productivity differences,
and (2) to address even broader measures of across-factory performance. The theories offered here
propose several implications that are testable, such as the sporadic benefit of capital for labor
substitution and the inappropriateness of some standard performance measures, the applicability of
benchmarking, and the character of manufacturing change in selected industries. More importantly,
these example theories demonstrate how a system of existing laws can be unified by increased
precision, clarification, and insight.
We in the OM field have often been sloppy in the development of laws and theories. For
example, the debate surrounding performance trade-offs has been muddied by imprecision and
inconsistently used definitions of quality, flexibility, and so on. Clarification is needed to more
completely specify the nature, measures, and applicability of laws and the contingencies pertaining to
them.
We encourage others in the field to examine operations phenomena, to propose theories to
explain them, and to probe their implications. When our discipline can routinely assault proposed
theories, refine them when needed, and abandon them when warranted, then we will shed any legacy
of theory envy.
References
Anderson, J.C., Cleveland, G., Schroeder, R.G., 1989. Operations strategy: a literature review. Journal of Operations Management 8 (2), 1-26.
Bacharach, S.B., 1989. Organizational theories: some criteria for evaluation. Academy of Management Review 14 (4), 496-515.
Charnes, A., 1994. Data envelopment analysis: theory, methodology and application. Boston: Kluwer Academic Publishers.
Clark, K.B., 1996. Competing through manufacturing and the new manufacturing paradigm: is manufacturing strategy passé? Production and Operations Management 5 (1), 42-58.
Conway, R., Maxwell, W., McClain, J.O., Thomas, L.J., 1988. The role of work-in-process inventory in serial production lines. Operations Research 36, (2), 229-241.
Crosby, P.B., 1979. Quality Is Free. New York: New American Library.
Deming, W.E., 1982. Quality, Productivity, and Competitive Position. Cambridge, MA: MIT Center for Advanced Engineering Study.
Dubin, R., 1969. Theory Building. New York: Free Press.
Ferdows, K., DeMeyer, A., 1990. Lasting improvements in manufacturing performance. Journal of Operations Management 9 (2), 168-184.
Flynn, B.B., Sakakibara, S., Schroeder, R.G., Bates, K.A., Flynn, J.E., 1990. Empirical research methods in operations management. Journal of Operations Management 9 (2), 250-284.
Goldratt, E.M., 1989. The General Theory of Constraints. New Haven, CT: Abraham Goldratt Institute.
Hall, R.W., 1987. Attaining Manufacturing Excellence. Dow Jones-Irwin, Homewood, IL.
Hall, R.W., Nakane, J., 1990. Flexibility: Manufacturing Battlefield of the ‘90s: Attaining Manufacturing Flexibility in Japan and the United States. Association for Manufacturing Excellence, Palatine, IL.
Hayes, R. H., Pisano, G.P., 1996. Manufacturing strategy: at the intersection of two paradigm shifts. Production and Operations Management 5 (1), 25-41.
Hayes, R. H., Wheelwright, S.C., 1979. Link manufacturing process and product life cycles. Harvard Business Review 57 (1), 133-140.
Hayes, R. H., Wheelwright, S.C., 1979. The dynamics of process-product life cycles. Harvard Business Review 57 (2), 127-136.
Hempel, C.G., 1965. Aspects of Scientific Explanation. Free Press, New York, NY.
Hempel, C.G., 1966. Philosophy of Natural Science. Prentice-Hall, Inc., Englewood Cliffs, NJ.
Hill, C.W.L., 1988. Differentiation versus low cost or differentiation and low cost: a contingency framework. Academy of Management Review 13 (3), 401-412.
Hopp, W.J., Spearman, M.L., 1996. Factory Physics: foundations of manufacturing management. Chicago: Irwin.
Juran, J.M., 1979. Quality Control Handbook. 3rd ed. New York: McGraw-Hill.
Kaplan, A., 1964. The Conduct of Inquiry: Methodology for Behavioral Science. San Francisco: Chandler Publishing Company.
Little, J.D.C., 1992. Tautologies, models, and theories: can we find ‘laws’ of manufacturing? IIE Transactions 24 (3), 7-13.
Safizadeh, M.H., Ritzman, L.P., Sharma, D., Wood, C., 1996. An empirical analysis of the product-process matrix. Management Science 42 (11), 1576-1591.
Samuelson, P., 1947. Foundations of Economic Analysis. Chapter 4, Cambridge, Mass.: Harvard University Press.
Schmenner, R.W., 1991. International factory productivity gains. Journal of Operations Management 10 (2), 229-254.
Schmenner, R.W., Vollmann, T., 1994. Performance measures: gaps, false alarms, and the “usual suspects.” International Journal of Operations and Production Management 14 (12), 58-69.
Skinner, W., 1969. Manufacturing-missing link in corporate strategy. Harvard Business Review, May-June, 136-145.
Skinner, W., 1974. The focused factory. Harvard Business Review, May-June, 113-121.
Skinner, W., 1996. Manufacturing strategy on the ‘s’ curve. Production and Operations Management 5 (1), 3-14.
Swamidass, P.M., Newell, W.T., 1987. Manufacturing strategy, environmental uncertainty and performance: a path analytic model. Management Science 33 (4), 509-524.
Swink, M., Way, M., 1995. Manufacturing strategy: propositions, current research, renewed directions. International Journal of Operations and Production Management 15 (7), 4-26.
Whetten, D.A., 1989. What constitutes a theoretical contribution? Academy of Management Review 14 (4), 490-495.
White, R.E., 1986. Generic business strategies, organizational context and performance: an empirical investigation. Strategic Management Journal 7, 217-231.
White, G.P., 1996. A meta-analysis model of manufacturing capabilities. Journal of Operations Management 14 (4), 315-331.
Figure 1. A Variant on the Product-Process Matrix

[Figure: a matrix with demand variability (timing, quantities, or customization, measured as variances or standard deviations) on one axis, running from high to low, and speed of materials through the process (throughput time) on the other, running from low to high. Job shops, batch operations, assembly lines, and highly productive continuous flow processes lie along the diagonal.]

Productivity (output per unit of input resource) increases as one goes down the diagonal.
Figure 2. Operating and Asset Frontiers

[Figure: axes of cost and performance, showing an asset frontier and separate operating frontiers for firms A and B.]

Firm A is likely to operate under the laws of cumulative capabilities while firm B, due to diminishing returns on improvement, is more likely to be subject to the law of trade-offs.
Figure 3. Three Operating States for a Manufacturing Plant

[Figure: axes of cost and performance, showing plant positions A, A1, and A2 relative to an operating frontier, a "bettered" operating frontier, and an asset frontier.]

Plant A is underutilized and inefficient. Rationalizing resources and resolving inefficiencies leads to position A1, at which the plant encounters its operating performance frontier. Operating policy changes improve the frontier and move the plant to position A2, where technological and asset constraints begin to significantly affect performance.