THE METHODOLOGIES OF SYSTEM ANALYSIS AND DESIGN FOR
COMPUTER INTEGRATED MANUFACTURING (CIM)
by
SHWU-YAN CHANG SCOGGINS, B.L., M.B.A.
A DISSERTATION
IN
INDUSTRIAL ENGINEERING
Submitted to the Graduate Faculty of Texas Tech University in Partial Fulfillment of the Requirements for
the Degree of
DOCTOR OF PHILOSOPHY
Approved
December, 1986
""^^ Mr-'i
©1987
SHWU-YAN CHANG SCOGGINS
All Rights Reserved
ACKNOWLEDGEMENTS
I wish to express my deep appreciation for the invaluable assistance and direction which Dr. William M. Marcy and Dr. Kathleen A. Hennessey have given during all phases of the research. I also want to thank the other members of my committee, Dr. James R. Burns, Dr. Milton L. Smith, and Dr. Eric L. Blair, for their continuous interest, advice, and constructive criticism.
My gratitude also goes to Joe L. Selan and Cher-Kang Chua for long and tedious hours of proofreading and the drawing of figures. Lastly, I want to thank my husband, Mark, for his support, collection of references, and beneficial discussions of the problems.
ABSTRACT
This paper investigates the methodologies of system analysis and
design for a CIM system from the software engineer's point of view. The
hypotheses of this research are: 1) particular methodologies are likely
to be suitable for a specific application system, 2) a combination of
methodologies generally can make analysis and design more complete, and
3) analysis of their characteristics can be used to select a methodology
capable of providing system specifications for software development and
system implementation.
To confirm the hypotheses, nine design methodologies are chosen to
analyze five application systems. Each methodology and application
system has its own characteristics. If the hypotheses are true, it will
be possible to match the characteristics of the methodologies with
corresponding characteristics of a particular system. Also, once the
methodologies are used, they should yield information that provides a
set of usable system specifications, and lead to a successful programming environment and implementation of the system.
The nine methodologies are SD (Structured Design), MSR (Meta Stepwise Refinement), WOD (Warnier-Orr Design), TDD (Top-Down Design), MJSD
(Michael Jackson Structured Design), SADT (Structured Analysis and
Design Technique), PSL/PSA (Problem Statement Language/Analyzer), HOS
(Higher Order Software), and HIPO (Hierarchy-Input-Process-Output). The
five application systems are an overall CIM system, a shop floor control subsystem, a product design subsystem, a production planning/scheduling subsystem, and an inventory control subsystem. The characteristics of the methodologies include: system complexity, data structures, data flow, functional structures, process flow, decoupling, structure clash recognition, logical control, and data flow control. The characteristics of the application systems include: system complexity, functional structures, process flow, data structures, logical control, data flow control, cohesion, and coupling.
The contributions of this research include a technique for applying Information Technology to manufacturing information problems, and a set of rules for combining different methodologies to improve the results of analysis and design efforts.
CONTENTS
ACKNOWLEDGEMENTS ii
ABSTRACT iii
LIST OF FIGURES ix
LIST OF TABLES xi
CHAPTER
1. INTRODUCTION 1
2. LITERATURE REVIEW: SYSTEM ANALYSIS AND DESIGN TECHNIQUES 9
2.1. Stages Of System Development 10
2.2. Basic Principles Of Software Engineering 11
2.3. Basic Problems Of System Theory 12
2.4. System Analysis And Design Criteria 13
2.4.1. Measures of Complexity 14
2.4.1.1. Modularity 15
2.4.1.2. Coupling 16
2.4.1.3. Cohesion 16
2.4.2. Representation of Complexity 17
2.4.2.1. Software Metrics 17
2.4.2.2. Diagrams 20
2.4.3. Approaches to Successful System Design 22
2.4.4. System Characteristics and Design Components .... 24
2.5. Criteria for Analysis and Design Methodologies 25
2.6. Design Methodologies 26
2.6.1. Structured Design (SD) 27
2.6.2. Meta Stepwise Refinement (MSR) 28
2.6.3. Warnier-Orr Design (WOD) 30
2.6.4. Top-Down Design (TDD) 32
2.6.5. Michael Jackson Structured Design (MJSD) 35
2.6.6. Problem Statement Language/Analyzer (PSL/PSA) ... 39
2.6.7. Structured Analysis and Design Technique (SADT) . 41
2.6.8. Higher Order Software (HOS) 44
2.6.9. Hierarchy-Input-Process-Output (HIPO) 48
2.7. Summary 50
3. ANALYSIS OF COMPUTERIZED MANUFACTURING SYSTEMS 54
3.1. Basic Concepts of Flexible Manufacturing Systems (FMS) . 54
3.2. Flexible Manufacturing Systems vs Transferring Line 55
3.3. Cellular Manufacturing (CM) 56
3.4. Components of A Machining Cell 61
3.5. Cell Operation And Cell Control 66
3.6. Supporting Theories and Software 69
3.6.1. CNC Standard Formats 69
3.6.2. Group Technology (GT) and Group Scheduling 71
3.6.3. Simulation 72
3.6.4. Artificial Intelligence (AI) and Expert Systems .. 73
3.6.5. Computer-Aided Statistical Quality Control (CSQC) 75
3.6.6. Database Management 76
3.6.7. Decision Support System (DSS) 78
3.6.8. Inventory Control 80
3.7. Summary 81
4. CASE STUDY 84
4.1. Structured Design (SD) 89
4.2. Meta Stepwise Refinement (MSR) 91
4.3. Warnier-Orr Design (WOD) 94
4.4. Top-Down Design (TDD) 97
4.5. Michael Jackson System Design (MJSD) 100
4.6. Problem Statement Language/Analyzer (PSL/PSA) 104
4.7. Structured Analysis and Design Technique (SADT) 109
4.8. Higher Order Software (HOS) 109
4.9. Hierarchical Input Process Output (HIPO) 118
4.10. Conclusions 118
5. COMPARISON OF DESIGN METHODOLOGIES VS APPLICATION SYSTEMS ... 124
5.1. The Application Systems 125
5.2. The Relationship between Design Methodologies and Application Systems 131
6. CONCLUSIONS 143
6.1. The Need for Industrial Information System Standards ... 143
6.2. Problems with Information and Human Resources 145
6.3. Contributions of this Research 146
6.4. Summary of Characteristics 146
6.5. Recommendations for Further Research 148
BIBLIOGRAPHY 150
APPENDICES
A. CNC MILL, TRIAC 167
B. CNC LATHE, ORAC 172
C. PSEUDOCODES FOR FMC IN A MULTITASKING ENVIRONMENT 182
LIST OF FIGURES
1. Shop Flow Control Context Diagram 86
2. Data Flow Diagram for SD (By Yourdon, Demarco method) 90
3. Warnier-Orr Data Structure Diagram 96
4. Warnier-Orr Process Structure Diagram 98
5. Top-Down Design (TDD) Diagram 99
6. Michael Jackson Structured Design (MJSD) Data Step Diagram ... 101
7. Michael Jackson Structured Design (MJSD) Program Step Diagram 102
8. PSL/PSA System Flowchart 105
9. SADT Level A0 IDEF0 Diagram 110
10. SADT Level A2 IDEF0 Diagram 111
11. SADT Level A3 IDEF0 Diagram 112
12. SADT Level A4 IDEF0 Diagram 113
13. SADT Level A31 IDEF0 Diagram 114
14. SADT Level A32 IDEF0 Diagram 115
15. SADT Level A33 IDEF0 Diagram 116
16. SADT Level A34 IDEF0 Diagram 117
17. Higher-Order Software (HOS) Diagram 119
18. Flexible Manufacturing Cell HIPO Diagram 120
19. CNC Mill HIPO Diagram 121
20. CNC Lathe HIPO Diagram 122
21. Overall CIM System Context Diagram 126
22. Product Design Context Diagram 129
23. Production Planning Context Diagram 130
24. Inventory Control Context Diagram 132
LIST OF TABLES
1. Narrative, PSL Name, And PSL Object Type 106
2. Relationship Between Two Objects 107
3. Characteristics of Design Methodologies 133
4. Characteristics of CIM Application Systems 135
5. Design Methodologies vs. Application Systems 1 0
CHAPTER 1
INTRODUCTION
In 1976 the first Flexible Manufacturing System (FMS) was installed
in the United States, resulting in great achievements in manufacturing
development. Achievements of FMS, including reduced work-in-process, quicker change-over times, reduced stock levels, faster throughput times, better response to customer demands, consistent product quality, and lower unit cost, have caused it to be termed "the second industrial revolution".
Computer Integrated Manufacturing (CIM) is a term introduced after
FMS [177]*. Manufacturers recognize that integration has to be achieved
not only in the factory, but also in almost every department and all
functions of a manufacturing organization. Currently, few manufacturers in the United States have fully implemented CIM, but predictions indicate that expenditures on computerized factory automation systems will climb from $5 billion in 1983 to as high as $40 billion in 1995 [57].
However, the technology required to design and implement CIM is in an early stage of development. One cannot turn traditional manufacturing equipment into modern Computer Numeric Control (CNC), Direct Numeric Control (DNC), or Adaptive Control (AC) machines. If computerized machines come from different vendors, or are different models from the same vendor, there are still problems in both the hardware communication and the software interfaces. The implementation of CIM cannot be done overnight; it requires an evolution instead of a revolution.
* Note: In 1974, Dr. Joseph Harrington coined the term "Computer Integrated Manufacturing" (CIM).
Most manufacturers do not know what to do to implement a CIM
system. The lack of generic system analysis and design techniques makes
it difficult to implement CIM systems. Many institutions and individuals have contributed to the research and development of CIM in the last decade. However, the current literature does not investigate the
system analysis and design methodologies for CIM. Most of the papers either evaluate the software and hardware together for a particular system or focus on the functions of a CIM system; others concentrate on the improvement of machining; still others comment on the methodologies of modelling production, such as scheduling rules, simulation, and mathematical programming. It is almost impossible to find information about
software development for CIM, which may be attributable to CIM software
being proprietary material of the companies involved in this area of
research and development.
The difficult part of CIM is that CIM is a matter of concept
instead of technique. It is easy to train in technique but not in
concept. The general concepts of CIM are as follows:
1. Integration is combining the elements of a system to form a whole.
2. The whole is greater than the sum of its parts.
3. The challenge of integration in automated systems is greater than the challenge of integration in conventional systems.
An evaluation of methodologies of system analysis and design for a
CIM system from the software engineer's point of view is needed. Unlike
Manufacturing Automation Protocol (MAP) and Technical Office Protocol
(TOP) which attempt to develop a set of standardized specifications that
will enable a factory's computers and computerized equipment to communi
cate with each other, this dissertation is concerned with the approaches
of system analysis and the specification of software requirements.
The success of automated manufacturing systems depends on low-cost,
high-reliability software systems. Most work in Information Technology (IT) is done for long-term payoffs rather than short-term benefits; academic researchers' lack of large-scale software development experience is one of the reasons. Industrial support for short-term research is growing rapidly, both within industrial labs and through support of university work.
One of the fundamental problems in IT is the management of complexity. Many of the real problems in software or hardware engineering show up only when one tackles large problems. As the complexity of a system increases, the degree of difficulty of programming, testing, and debugging increases exponentially. Traditionally, system analysts and designers have followed qualitative guidelines. Newer techniques use mathematical computations, such as software metrics based on the number of variables involved, the number of variables and comparisons, or the number of nodes and paths.
The knowledge and theories involved in operating a CIM system are very broad and very complex. However, system analysts need not know all the technologies in detail, but they do need to know the organization structure, which functions are performed in which department, how to synchronize all the functions, what the restrictions are, and what the input and output data will be. Integration of manufacturing includes not only hardware (machines, material-handling tools, cell controllers, and computers), but also software (technologies, programs, data, and functions).
Hardware integration means that all programmable devices communicate with each other, rather than operating on a stand-alone basis.
Integration of software allows data produced in one program module to be
processed by any other program module inside the system, such as the
data in a CAD/CAM (Computer-Aided Design/Manufacturing) database that
can be used in simulation.
The objectives of this research are as follows:
. To integrate manufacturing and production technologies with
information system theories.
. To provide an organized approach to the application of the methodologies of system analysis and design in computer integrated manufacturing systems.
. To provide decision rules for choosing methodologies.
. To provide an approach leading from design of a complex system to
multitasking programming.
This research has three hypotheses. The first hypothesis states that each application system, like each methodology, has its own characteristics, and that certain methodologies are suitable for particular types of application systems. Nine analysis and design methodologies are chosen to analyze five application systems. Should a match occur between the criteria of the methodologies and the characteristics of a particular system, the first hypothesis is accepted.
The second hypothesis states that the combination of different
methodologies always provides a better analysis and design technique. A
good methodology should provide as much information as possible. But
there is no methodology that includes all types of information, so the
combination of different methodologies, according to the selection
rules, provides more information, and therefore, is a better analysis
and design technique.
The third hypothesis states that once the methodologies are used,
they should yield information that provides a full set of usable system
specifications, and lead to a successful programming environment and
implementation of the system. Prior to this research, articles which
evaluate the contribution of the methodologies to CIM programming
techniques could not be found in the literature.
In Chapter 2, the most commonly used methodologies are reclassified into nine system analysis and design methodologies. Distinctions between these methodologies are not always consistent. G. D. Bergland combined Meta Stepwise Refinement (MSR), Michael Jackson's Structured Design (MJSD), and Warnier's Logical Construction of Programs/Systems (LCP/LCS) into the same category as Structured Design (SD) [25] (in this paper, Warnier's methodology is combined with Orr's as the Warnier-Orr Design methodology). However, based on design procedures, diagramming technique, data flow control, logic control, system structure, functional decomposition, and system specifications, these methodologies can be distinguished.
The following nine methodologies are studied: Structured Design
(SD), Meta Stepwise Refinement (MSR), Warnier-Orr Design (WOD), Top-Down
Design (TDD), Michael Jackson's System Design (MJSD), Structured Analy
sis and Design Technique (SADT), Problem Statement Language/Analyzer
(PSL/PSA), Higher Order Software (HOS) and Hierarchy-Input-Process-
Output (HIPO). The United States Air Force's Integrated Computer Aided Manufacturing Definition (IDEF), derived from SADT, gives SADT more refined descriptions and is very suitable for manufacturing system analysis.
Each methodology has its weaknesses and strengths, and no one methodology is appropriate for every design problem. The NBS (National Bureau of Standards) has established IGES (Initial Graphic Exchange Specification), and now the Automated Manufacturing Research Facility (AMRF), as standards for industry. It is a natural trend to develop standards for industrial needs, and we should not be surprised if, in the future, standards for system analysis and design produce a methodology synchronizing all the standards into a complete software development technique, especially for CIM.
Chapter 3 is an overview of general CIM systems, including
components of a manufacturing cell, process control, theories in produc
tion planning and design, and Decision Support Systems (DSS). Based on
this information, further system analysis can be performed.
Because CIM is a complex system, several design methodologies, such
as SD, MSR, WOD, and MJSD, do not fit the overall CIM system analysis,
but they might be good for a smaller scope of system analysis. So, this
thesis evaluated these methodologies based on different application
systems. Five application systems are used: 1) overall CIM system, 2)
shop floor control subsystem, 3) product design subsystem, 4) production
planning (scheduling) subsystem, and 5) inventory control subsystem. In
Chapter 4, the nine methodologies are applied to only one of the subsystems, the shop floor control subsystem. The shop floor control system in this chapter is actually a Flexible Manufacturing Cell (FMC) installed in the Industrial Engineering Department, Texas Tech University. It has three functional modules: a CNC lathe, a CNC milling machine, and a robot. Each module has its own software and functions independently. In order to computerize the flexible manufacturing cell and optimize productivity, a computer was used to coordinate these three software systems.
Application systems were first broken into modules, each module
representing a complete and independent function. Only complete and
independent modules of software and hardware that already exist and
function on a stand-alone basis were used, so that the focus was on the
interface of the modules.
Chapter 5 not only summarizes the conclusions from Chapter 4, but
also expands the methodologies to the other four system/subsystems. The
characteristics of the application system are compared and contrasted to
the representation criteria of the nine methodologies. Methods are
provided to select more than one methodology for a particular type of
application system.
Because the shop floor system has real-time multitasking processes (for example, the CNC lathe, CNC mill, and robot can operate simultaneously), it must be implemented under a real-time multitasking executive. In this research, a commercial product (AMX86) was chosen. Chapter 5 also demonstrates the program design from the information given in the system analysis and design methodologies and their diagrams. Pseudocode for higher-level functions was written, leaving detailed programming to be provided as required for each unit.
Chapter 6 contains conclusions and recommendations for further research. It restates the issues in applying Information Technology to implement a CIM system and offers solutions to the problems. This research not only provides an analytical approach for applying Information Technology to manufacturing areas, but also suggests rules for combining different methodologies. It offers insight into the relationship between the methodologies and the programming technique, and provides a framework for implementing an FMC in a multitasking environment.
Further work beyond the scope of this research can be done to determine quantitatively the limits of complexity management for each methodology, and to set up system analysis and design standards and rules applicable to any type of system. More mathematical
decision-making and automated design tools are needed to improve the
accuracy, consistency, completeness, modifiability, efficiency, and
applicability of system analysis and design techniques.
CHAPTER 2
LITERATURE REVIEW: SYSTEM ANALYSIS AND
DESIGN TECHNIQUES
System analysis consists of collecting, organizing, and evaluating
facts about a system and the environment in which it operates. The
objective of system analysis is to examine all aspects of the system and
to establish a basis for designing and implementing a better system
[63].
System design essentially recognizes processes and defines the data
content of their interfaces. System design is distinct from program
design in that program design deduces the flow-of-control structure
implicit in those interfaces. The ideal capability of a methodology is
that system design tools should establish system processes in such a way
that subsequent program design cannot invalidate these processes [94].
Program design techniques are associated with resolving inconsistencies and incompatibilities between required outputs and the inputs from which they must be derived. System design, on the other hand, seeks to establish a structured statement of what is to be accomplished in terms of products, functions, information, resources, and timing. Nevertheless, some aspects of program design methodology can be seen to have counterparts in system design methodology, especially in terms of measurement and representation of complexity and design components.
System design is of critical importance, especially for large-scale projects. System techniques provide formal disciplines for increasing the probability of implementing systems characterized by high degrees of initial correctness, readability, and maintainability, and promote practices that aid in the consistent and orderly development of a total software system on schedule and within budgetary constraints. These disciplines and practices are set forth as a set of rules to be applied during system development to reduce the time spent debugging the code, to increase understanding among those who come in contact with it, and to facilitate operation and alteration of the program as the requirements or program environment evolves.
2.1. Stages of System Development
Seven phases in the system life cycle are distinguished:
Phase 1: Documentation of the existing system.
Phase 2: The logical design.
Phase 3: The physical design.
Phase 4: Programming and procedure development.
Phase 5: System test and implementation.
Phase 6: Operation.
Phase 7: Maintenance and modification.
J. D. Couger stated [63] that the amount of cost, and its allocation among the phases of system development, in the 1970s are different from those in the 1980s. The total cost of system development has increased, with the primary increase occurring in the early part of the system development cycle. In the 1970s, only 35 percent of development cost occurred in phases 1 through 3; now it is approximately 55 percent. However, better analysis and planning of systems results in lower costs in the subsequent phases. Unfortunately, improvement in system development techniques has not kept pace with the improvement in computing equipment and the increase in the complexity of systems. System analysts continued to use techniques developed during the era of first-generation computers (1940-1950). In the latter 1970s and early 1980s, the gap began to close, and techniques especially suited for the analysis and design of complex systems were developed. The fifth generation of system techniques has been developed almost in parallel with the fifth generation of hardware (in the late 1980s).
2.2. Basic Principles of Software Engineering
Software engineering is the study of software principles and their application to the development and maintenance of software systems. Software engineering includes the structured methodology and the collection of structured techniques, as well as many other software methodologies and tools. The basic principles of software engineering are defined as follows:
1. Principle of Abstraction: separate the concept from the reality.
2. Principle of Formality: follow a rigorous, methodical approach to solve a problem.
3. Divide-and-Conquer Concept: divide a problem into a set of smaller, independent problems that are easier to solve.
4. Hierarchical Ordering Concept: organize the components of a solution into a tree-like hierarchical structure. Then the solution can be constructed level by level, each new level adding more detail.
5. Principle of Hiding: hide nonessential information. Enable a module to see only the information needed for that module.
6. Principle of Localization: group logically related items close together.
7. Principle of Conceptual Integrity: follow a consistent design philosophy and architecture. It is the most important principle of software engineering.
8. Principle of Completeness: insure that the system completely meets all requirements, involving data, function, correctness, and robustness (i.e., the system's ability to recover from errors) [224].
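The hiding and hierarchical-ordering principles can be sketched in a few lines of code. The following is a minimal illustration only; the module and item names are hypothetical and are not drawn from this dissertation's case study:

```python
class InventoryStore:
    """Principle of Hiding: the internal representation is private;
    callers see only the operations they need."""
    def __init__(self):
        self._levels = {}          # hidden detail, not part of the interface

    def receive(self, item, qty):
        self._levels[item] = self._levels.get(item, 0) + qty

    def on_hand(self, item):
        return self._levels.get(item, 0)

# Hierarchical Ordering: a higher-level routine is built solely from
# the lower-level interface, never from its hidden details.
def restock_needed(store, item, reorder_point):
    return store.on_hand(item) < reorder_point

store = InventoryStore()
store.receive("gears", 12)
print(store.on_hand("gears"))              # 12
print(restock_needed(store, "gears", 20))  # True
```

Because `restock_needed` depends only on the `on_hand` interface, the internal dictionary could later be replaced (for example, by a database table) without changing any higher-level code.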
2.3. Basic Problems of System Theory
Langefors stated [127] that the conditions or functions at the outer boundary (the outer boundary is the boundary of the design; the inner boundary is the set of subsystems) must be estimated by the designer of the system. The conditions or functions at the intermediate boundary (the boundary to the other subsystems) must not be estimated by the subsystem designer. Instead, delineation of the intermediate boundary should be derived in a formal fashion, by using system properties, from decisions made at the outer boundary. Langefors aims to provide formal methods of deriving intermediate boundary conditions from outer ones.
However, there are imperceivable systems, which cause people to neglect the importance or the existence of things that they are not able to see or perceive. An imperceivable system is defined by Langefors as a system in which the number of its parts and their interrelations is so high that its overall structure cannot be safely perceived or observed at one and the same time [127]. Because imperceivability is subjective and varies from person to person, there is no means of proving it by logic; such a proof would have to be based on other propositions. In the field of system analysis and design, designers do not know of propositions which could conveniently be used as primaries in proving the correctness of the applications of the methodologies. In this kind of research, results usually are based on deductive statements (a method of formal analysis) instead of formal proofs.
An efficient way of designing an imperceivable system is to add a perceivable set of subsystems or interactions to the subsystem structure of a system, and to test each subsystem structure for feasibility, before any subsystem contained in it is designed, by giving its subsystem structure. The methodologies of system analysis and design partition a system into subsystems, specify subsystem properties and interactions, verify that all subsystems can be realized, construct system properties from the subsystem structure, and compare them with the specified system properties.
2.4. System Analysis And Design Criteria
The aim of system design is the implementation of a functioning information-handling facility. To do this, the design must be capable of translation into a set of coordinated procedures; in the case of information systems, this most frequently involves software and data design. When the application involves large volumes and/or variety in data structures, relationships, and functions, problems of complexity beyond the ability of the software engineer to solve can result. Several methods have been developed for defining and dealing with complexity in software design that can be adapted to the overall context of system design. Measures of software complexity include modularity, coupling, and cohesion; software metrics and diagrams are used to define and represent complex program structures.
2.4.1. Measures of Complexity
Several measures of complexity were developed in the 1970s, based on module size:
1. Halstead's Software Science is the most popular measure of complexity. The software science metrics derive four basic counts for a program: n1 is the number of distinct operators in a program, n2 is the number of distinct operands, N1 is the total number of operators, and N2 is the total number of operands. The length of a program (module) is N = N1 + N2.
2. McCabe's Cyclomatic Number determines complexity by counting the number of linearly independent paths through a program. In a structured program, the cyclomatic number is the number of comparisons in a module plus one (V(G) = Edges - Nodes + 2).
3. McClure's Control Variable Complexity computes complexity as the sum of the number of comparisons in the module and the number of unique variables referenced in the comparisons.
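The three measures above follow directly from their definitions. The sketch below is illustrative only: the token lists are hypothetical, and a real metrics tool would obtain them by parsing source code rather than receiving them as ready-made lists:

```python
import math

def halstead_counts(operators, operands):
    """Halstead's basic counts: n1, n2 distinct; N1, N2 total."""
    n1, n2 = len(set(operators)), len(set(operands))
    N1, N2 = len(operators), len(operands)
    return {"n1": n1, "n2": n2, "N1": N1, "N2": N2,
            "length": N1 + N2,                        # N = N1 + N2
            "volume": (N1 + N2) * math.log2(n1 + n2)}

def mccabe(comparisons):
    """Structured program: cyclomatic number = comparisons + 1."""
    return comparisons + 1

def mcclure(comparisons, comparison_vars):
    """Comparisons plus unique variables referenced in them."""
    return comparisons + len(set(comparison_vars))

# Token streams for a small hypothetical module:
ops = ["=", "if", "<", "=", "while", ">", "+", "="]
opnds = ["x", "x", "n", "y", "i", "n", "i", "i"]
print(halstead_counts(ops, opnds)["length"])   # 16
print(mccabe(2))                               # 3
print(mcclure(2, ["x", "n", "i"]))             # 5
```

Note that the three metrics can disagree: a long module with little branching scores high on Halstead's length but low on McCabe's number, which is one reason several measures are used together.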
There are several factors that influence the control of complexity,
such as modularity, coupling, and cohesion.
2.4.1.1. Modularity
Dividing a program into modules can be a very effective way of
controlling complexity. How a program is divided into modules is called
its modularization scheme. A modularization scheme may include restricting module size to a certain number of instructions, such as IBM's 50 lines of program code, Weinberg's 30 programming instructions, or J. Martin's 100 instructions. However, in addition to module size guidelines, the logical and control constructs and variables also affect
modularity.
There are two basic types of modularity: problem-oriented and
solution-oriented. A basic premise of Modular Programming (MP) is that
a large piece of work can be divided into separate compilation units so
as to limit the flow-of-complexity, permit parallel development by
several programmers and cut recompilation costs. This premise generates
wholly solution-oriented modules and relies on wholly solution-oriented
criteria of modular division. At the other extreme is a program which
responds to stimuli in a process-control application, and is wholly
problem-oriented because it is structured to reflect the time-ordering
of its input.
MP serves as a crude device for managing the intolerable complexity of the unconstrained control logic of large programs. Two rules of thumb were found to be useful and were more fully exploited in later methodologies. One is the idea of functional separateness for constituent modules; this concept is refined and extended by both Myers and Constantine [64]. The second is the notion of a central control module directing
the processing carried out by all subordinate modules. MP is thought of
as either the predetermined design or architectural design.
2.4.1.2. Coupling
Modules are connected by the control structure and by data. To
control complexity the connections between modules must be controlled
and minimized. Coupling measures the degree of independence between
modules. The less dependence between modules, the less extensive the
chain reaction that occurs because of a change in a module's logic. The
more interaction between two modules, the tighter the coupling and the
greater the complexity. It is affected by three factors: 1) the number
of data items passed between modules, 2) the amount of control data
passed between modules, 3) the number of global data elements shared by
modules.
In a system there are five types of coupling between two modules:
data coupling, stamp coupling, control coupling, common coupling and
content coupling. Data coupling is the loosest and the best type of
coupling; content coupling is the tightest and the worst type of
coupling.
Decoupling is a method of making modules more independent. Decoup
ling is best performed during system design. Each type of coupling
suggests ways to decouple modules.
2.4.1.3. Cohesion
Cohesion measures how strongly the elements within a module are related. There are seven levels of cohesion; from the strongest level to the weakest, they are: functional, sequential, communicational, procedural, temporal, logical, and coincidental. Functional and sequential cohesion are the only two types of cohesion that should be accepted
in a module. Any other type, when recognized, should be analyzed to see
if the module can be changed to be functional or sequential. This could
entail restructuring of modules and structure charts. A modifiable
system is made up of modules that have high cohesion and loose coupling.
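A brief illustration of the two extremes, using invented function names: the first routine is functionally cohesive (every statement serves one task), while the second is only logically cohesive (unrelated actions grouped behind a selector, which also forces control coupling on its callers).

```python
# Functional cohesion: every statement contributes to a single task.
def standard_deviation(xs):
    mean = sum(xs) / len(xs)
    return (sum((x - mean) ** 2 for x in xs) / len(xs)) ** 0.5

# Logical cohesion (weak): unrelated actions grouped behind a selector,
# which also forces callers into control coupling with this module.
def do_misc(action, payload):
    if action == "trim":
        return payload.strip()
    if action == "square":
        return payload * payload
    if action == "reverse":
        return payload[::-1]
```

Recognizing a routine like `do_misc` is the cue to split it into separate, functionally cohesive modules.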
2.4.2. Representation of Complexity
2.4.2.1. Software Metrics
Software metrics can be used to evaluate the structure of large-
scale systems quantitatively. Previous research in software metrics has
concentrated in one of three major areas: the lexical content of a
program, the application of information theory concepts, and the flow of
information or control among system components.
Compared to the information theory measures, the information flow
method is a completely automatable process using fairly standard data
flow analysis techniques. The major elements in the information flow
analysis can be determined directly at the design phase. The availability
of this quantitative measurement early in the system development
process allows the system structure to be corrected at the least cost.
By observing the patterns of communication among the system components,
we can define measurements for complexity, module coupling, level
interactions, and stress points. Evaluation of these critical system
qualities cannot be derived from simple lexical measures. Thus, the
information flow method leads to automatable measures of software
quality which are available early in the development process and produce
quantitative evaluations of many critical structural attributes of
large-scale systems. The most important practical test of a software
metric is its validation on real software systems.
The structure of information flow presents all of the possible
paths of information flow in the subset of the procedures. The
information flow paths are easily computable from the relations which
have been generated individually for each procedure. Current data flow
analysis techniques are sufficient to produce these relations
automatically at compile time. If the external specifications have been
completed, then the information flow analysis may be performed at the
design stage.
The complexity of a given problem solution is not necessarily the
same as the unmeasurable complexity of the problem being solved. The
complexity of a procedure depends on two factors: the complexity of the
procedure code and the complexity of the procedure's connections to its
environment. A very simple length measure of a procedure was defined
as the number of lines of text in the source code for the procedure.
The product (fan-in * fan-out) computes the total possible number of
combinations of an input source to an output destination. Here, the
fan-in of procedure A is the number of local flows into procedure A plus
the number of data structures from which procedure A retrieves
information. The fan-out of procedure A is the number of local flows
from procedure A plus the number of data structures which procedure A
updates.
The global flows and the module complexities show four areas of
potential design or implementation difficulties for a module.
1. Global flows indicate a poorly refined data structure. Redesign
of the data structure to segment it into several pieces may be a
solution to this overloading.
2. The module complexities indicate improper modularization. It is
desirable that a procedure be in one and only one module. This is
particularly important when the implementation language does not contain
a module construct and violations of the module property are not
enforceable at compile time.
3. High global flows and a low or average module complexity
indicate a third area of difficulty, namely, poor internal module
construction.
4. A low global flow and high module complexity may reveal either a
poor functional decomposition within the module or a complicated
interface with other modules.
The formula defining the complexity value of a procedure is
length * (fan-in * fan-out) ** 2
The formula calculating the number of global flows is
(write * read) + (write * read_write) +
(read_write * read) + (read_write * (read_write - 1))
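Both formulas are simple enough to compute mechanically. The sketch below is a direct transcription of them into Python (the function and parameter names are ours, chosen for illustration):

```python
def procedure_complexity(length, fan_in, fan_out):
    """Complexity value of a procedure: length * (fan-in * fan-out) ** 2."""
    return length * (fan_in * fan_out) ** 2

def global_flows(write, read, read_write):
    """Number of possible global flows through one shared data structure,
    given how many procedures only write it, only read it, or do both."""
    return (write * read + write * read_write
            + read_write * read + read_write * (read_write - 1))
```

For example, a 25-line procedure with fan-in 2 and fan-out 3 scores 25 * 6 ** 2 = 900; the quadratic term makes heavily connected procedures stand out sharply.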
The interface between modules is important because it allows the
system components to be distinguished and also serves to connect the
components of the system together. A design goal is to minimize the
connections among the modules. Content coupling refers to a direct
reference between the modules; this type of coupling is equivalent to
the direct local flows. Common coupling refers to the sharing of a
global data structure, which is equivalent to the global flows measure.
The strength of the connections between two modules is a function of
the number of procedures involved in exporting and importing information
between the modules, and the number of paths used to transmit this
information. A simple way to measure the strength of the connections
from module A to module B is
(the number of procedures exporting information from module A
+ the number of procedures importing information into module B)
* the number of information paths.
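This measure, too, transcribes directly into code; a minimal sketch with invented parameter names:

```python
def coupling_strength(exporting_procs_a, importing_procs_b, info_paths):
    """Strength of the connections from module A to module B:
    (procedures exporting from A + procedures importing into B) * paths."""
    return (exporting_procs_a + importing_procs_b) * info_paths
```

Thus two exporting procedures, three importing procedures, and four information paths give a strength of (2 + 3) * 4 = 20.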
The coupling measurements show the strength of the connections
between two modules. Coupling also indicates a measure of modifiability:
if modifications are made to a particular module, the coupling
indicates which other modules are affected and how strongly they are
connected. These measurements are useful during the design phase of a
system to indicate which modules communicate with which other modules
and the strength of that communication. During implementation or
maintenance, the coupling measurement is a tool to indicate what effect
modifying a module will have on the other components of a system.
2.4.2.2. Diagrams
There are many types of diagrams: structured diagrams, data flow
diagrams, structure charts, Warnier-Orr diagrams, Michael Jackson
diagrams, HIPO diagrams, flowcharts, pseudocode, HOS charts,
Nassi-Shneiderman charts, action diagrams, decision trees and decision
tables, data analysis diagrams, entity-relationship diagrams, data
navigation diagrams, and compound data accesses. Three species of
functional decomposition charts, IDEF0, IDEF1, and IDEF2, are usually
associated with SADT.
Each system analysis and design methodology has some diagrams to
represent the functional decomposition, data structure, or data flow.
Good, clear diagrams play an essential part in designing complex systems
and developing programs; they also help in maintenance and debugging.
The larger the program, the greater the need for precision in
diagramming. MSR does not use any kind of diagram; it simply applies
programming techniques at each level and refines one portion at a time
until the whole program is completed. PSL/PSA also does not involve any
diagramming technique; it uses only literal description to express
functional and performance requirements.
Structured diagramming combines graphic and narrative notations to
increase understandability. It can describe a system or program at
varying degrees of detail during each step of the functional
decomposition process. Flowcharts are not used because they do not give
a structured view of a program. Today's structured techniques are an
improvement over earlier techniques. However, most specifications for
complex systems are full of ambiguities, inconsistencies, and omissions.
More precise, mathematically based techniques are evolving so that the
computer can help create specifications without these problems, as in
CAI, CAD, and CAM (Computer-Aided Instruction, Design, and
Manufacturing, respectively), and CASA and CAP (Computer-Aided Systems
Analysis and Programming).
2.4.3. Approaches to Successful System Design
There are many design techniques which apply to both integrity and
flexibility. A system has a high level of integrity if it has these
characteristics: correctness, availability, survivability, auditability,
and security.
Flexibility includes the ability to modify functions, protocols, or
interfaces provided by the system or to add new functions, protocols, or
interfaces, and the ability to expand the capacity of the system, or
possibly to decrease the capacity.
Ground rules which apply both to designing for integrity and
designing for flexibility include the following:
1. Design for the maximum feasible modularity: The piece-at-a-time
approach rather than the grand-design approach should be used throughout
system design and implementation. Modularity improves integrity as well
as flexibility. A modular system is more readily understood and is less
likely than a more integrated system to have hidden relationships among
its elements which eventually cause errors or failures. Modularity is
therefore one of the most basic ground rules of good design.
2. Design for simplicity and avoid complexity: Complexity in design
or implementation makes a program or system difficult to understand;
systems which cannot be understood cannot easily be changed. Simplicity,
like modularity, contributes to both integrity and flexibility.
3. Create a set of design and implementation standards within the
organization, and observe them rigorously. Those standards include
programming-language standards, protocols, interfaces, and internal
interfaces.
4. Document everything.
Designing rules for integrity are as follows:
1. Provide automatic restart and recovery capabilities for all the
system's information processors, both host and satellite.
2. Ensure that the communications links, processors, and other
equipment have adequate diagnostic capabilities.
3. Arrange for manual operation in case of serious system failures.
4. Define which data elements or functions must be protected for
security or privacy reasons and how this protection relates to
the access control over system functions.
5. Design the system to be self-tracking, by logging all important
events.
6. Ensure that the DBMS (DataBase Management System) software to be
used includes dead-lock detection and resolution.
7. Minimize the amount of keying required in data entry.
8. Do not allow access to a database except through the standard DBMS.
Designing rules for flexibility are as follows:
1. Decouple information processing, database, and network design
and implementation. Keeping each of these areas as independent
as possible allows each to be changed without affecting any of
the others. This is modularity on a systemwide basis and has
the same effect as modularity at a lower level, such as within
an application program.
2. Decouple the design and management of the user interfaces,
especially terminal-user interfaces, from processing which uses
the input data.
3. Standardize and document all interfaces between modules and
programs.
4. Limit the size of programs and modules.
5. Minimize the degree of tight integration among the parts of the
information system; emphasize loose coupling instead.
6. Don't define limits within modules, programs, or the system.
2.4.4. System Characteristics and Design Components
There are some characteristics which can be used to represent
either a methodology or an application system.
1. Data structures and clash recognition: Data structures are the
vertical data relationships. Usually, input data structures are
separate from output data structures. A structure clash occurs when
input data structures are inconsistent with output data structures. If
a large amount of data is involved, it is easy to have a structure
clash. If a methodology is data-driven, then data structure charts
will be included.
2. Data flow analysis and control: The flow of data shows the
horizontal data relationship, or indicates the input-process-output
relationship. This relationship can be one to one (1:1), one to many
(1:N), many to one (N:1), or many to many (N:M). Problem Statement
Language/Analyzer (PSL/PSA) generates data metrics, from the input data,
which show a many to many relationship. The input-process-output
relationship is many to many for the Structured Analysis and Design
Technique/IDEF (SADT/IDEF), Higher Order Software (HOS), and
Hierarchy-Input-Process-Output (HIPO), and one to one for Structured
Design (SD).
3. Functional structures: These indicate the vertical functional
relationship. They can be hierarchy charts or structure charts; usually
structure charts include more information than hierarchy charts.
4. Process flow analysis: This is the horizontal functional
relationship. It shows the sequence of processes. Process flow charts
usually include the input and output for each function (data flow
analysis). However, data flow analysis does not always include process
flow analysis. Data flow analysis can be included in functional
structure charts, as in HOS.
5. Control mechanism: Some methodologies, such as Warnier-Orr Design
(WOD) and Top-Down Design (TDD), are good in logical control but weak in
data flow control; some are the reverse, while others are good in both.
2.5. Criteria for Analysis and Design Methodologies
Generally, a good design methodology and its diagrams should:
1. include some or all of the following information:
. process flow (horizontal functional relationship)
. functional structures (vertical functional relationship)
. data flow analysis (interface of data and functions)
. data structures (vertical data relationship)
2. provide formal guidelines to determine how to decompose
structures, what the boundaries are, and when to stop;
3. check the consistency of data input and output;
4. recognize structure clash;
5. be precise in describing system structures;
6. be able to handle complex systems;
7. be easy to use;
8. be easy to understand;
9. be easy to modify;
10. be easy to program and implement;
11. provide documentation.
The more information a diagram can show, the easier it is for the
programmer to write the code. A good diagram should be not only easy to
read and understand, but also consistent. Only when the flow of data is
shown can data consistency be checked.
The flow of data is as important as the sequence of processes. If
a diagram does not show the input and output data of a function or
process, there is a high probability of data inconsistency.
2.6. Design Methodologies
A design methodology may be defined as a set of principles which
enables the building of an exact model of events, their associated
data, and the information to be derived from them. The design
methodology needs to construct a structure, in terms of data entities,
their dependencies, relative orderings, and associated data attributes,
in which internal consistency and completeness are subject to
verification by inspection, and to which implementation and optimization
may be successively applied to transform it into executable form.
Nine methodologies are evaluated based on their design procedures,
diagramming techniques, data flow control, logic control, system
structure, functional decomposition, and system specification.
2.6.1. Structured Design (SD)
Structured Design (SD) is based on concepts originated by Larry L.
Constantine. N. Birrell and M. Ould stated that SD could also be called
Composite Design (CD) or Module Design (MD) [245]. However, because SD
and CD have different principles (strength and coupling), they are
actually distinguishable methodologies. CD does not have clear design
procedures, so it is seldom adopted.
SD is a methodology for developing logically correct, hierarchical,
structured systems. It encompasses problem definition, system analysis,
system design, data base design, programming, and project management in
a unified package. The method relies on following the flow of data
through the system to formulate the program design. Data flow is
depicted through a special notational scheme which identifies each data
transformation, transforming process, and the order of their occurrence.
The interpretation of the system specification is used to produce
the Data Flow Diagram (DFD); the DFD is used to develop the structure
chart, the structure chart to develop the data structure, and all of the
results to reinterpret the system specification. While the design
process is iterative, the order of iteration is not rigid [25, 167].
SD focuses on a higher-level view of the program, using the program
module as the basic building block. The concept of modularization was
refined by standardizing the structure of a program module, restricting
the interfaces between modules, and defining program quality metrics
[141].
SD is well suited to design problems where a well-defined data flow
can be derived from the problem specifications. Some of the
characteristics that make a data flow well defined are that input and
output are clearly distinguished from each other, and that
transformations of data are done in incremental steps; that is, single
transformations do not produce major changes in the character of the
data.
The major problem with SD is that the design is developed bottom
up, which means that it can be applied only to relatively small
problems. Also, identifying input and output flow boundaries plays an
important role in the definition of the modules and their relationships.
However, the boundaries of the modules can be moved almost arbitrarily,
leading to different system structures, and no formal guide is provided
[167].
2.6.2. Meta Stepwise Refinement (MSR)
Meta Stepwise Refinement (MSR) was authored by Henry Ledgard and
later given its name by Ben Shneiderman. It is a synthesis of Mills's
top-down notions, Wirth's stepwise refinement, and Dijkstra's level
structuring. It produces a level-structured, tree-structured program.
MSR allows the designer to assume a simple solution to a problem
and gradually build in more and more detail until the complete, detailed
solution is derived. Several refinements, all at the same level of
detail, are conjured up by the designer each time additional detail is
desired. The best of these is selected, and so on; only the best
solution is refined at each level of detail. The topmost level
represents the program in its most abstract form. In the bottommost
level, program components can be easily described in terms of the
programming language. MSR requires an exact, fixed problem definition.
In the early stages, it is programming-language independent. The
details are postponed to lower levels. Correctness is ensured at each
level.
Each level is a machine. The three components of a machine are: a
data structure set, an instruction set, and an algorithm. To begin, the
program is conceived as a dedicated virtual machine. During the
refinement process, a real machine is constructed from the virtual
machine. At each step a new machine is built. As the process continues,
the components of each successive machine become less abstract and more
like instructions and data structures from the programming language.
The process is complete when a machine can be built entirely of
components available in the programming language.
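The virtual machine idea can be suggested with a small Python sketch (the averaging problem and all names are invented for illustration): the topmost level assumes abstract machine operations, and refinement replaces them until only programming-language components remain.

```python
# Level 0 (most abstract): the program is conceived as running on a
# dedicated virtual machine with "read_scores" and "average" instructions.
def level0(machine):
    return machine["average"](machine["read_scores"]())

# Refinement: the abstract "average" instruction is rebuilt entirely from
# components of the programming language.
def average(scores):
    total = 0
    for s in scores:
        total += s
    return total / len(scores)

# The process is complete: every machine component is now ordinary code.
machine = {"read_scores": lambda: [70, 80, 90], "average": average}
```

Each refinement step can be checked for correctness in isolation, which is the property MSR relies on.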
Since the solution at any one level depends on prior (higher)
levels, and since any change in the problem statement affects prior
levels, the user's ability to produce a solution at any level is
undermined until the changes are made. One approach is to refuse
changes until the design is complete; this results in the solution and
the requirements being unsynchronized. The production of multiple
solutions is another difficulty. Coming up with fundamentally different
solutions to a problem is not a likely occurrence for an individual.
Also, how to decide which solution is best is not addressed by this
method.
This approach works best on small problems, perhaps those involving
only a single module. It is particularly useful where the problem
specifications are fixed and an elegant solution is required, as in
developing an executive for an operating system [141, 167].
2.6.3. Warnier-Orr Design (WOD)
Jean-Dominique Warnier and his group at Honeywell-Bull in Paris
developed a methodology, Logical Construction of Programs/Systems
(LCP/LCS), in the late 1950s. It is similar to Jackson's design
methodology in that it also assumes data structure is the key to
successful software design; however, it is more proceduralized in its
approach to program design than the Jackson method. In the mid 1970s,
Ken Orr modified Warnier's LCP and created Structured Program Design
(SPD). The hybrid form of LCP and SPD is the Warnier-Orr Design (WOD)
methodology [141].
WOD is a refinement of the basic Top-Down Design (TDD) approach. It
has the basic input-process-output model for a system. It treats output
as more important than input, which differs from the Yourdon-Constantine
approach and the Jackson approach. The designer begins by defining the
system output and works backwards through the basic system model to
define the process and input parts of the design, so it is not really a
top-down design. The six steps in the Warnier-Orr design procedure are:
define the process outputs, define the logical data base, perform event
analysis, develop the physical data base, design the logical process,
and design the physical process [141].
WOD is data-driven, deriving the program structure from the data
structure, just like the Jackson Methodology. Both stress that logical
design should be separated from and should precede physical design.
Structured programming provides only part of the theory which has
resulted in the development of structured systems design. Another major
element is data base management. The data base management system can
provide a great deal of data independence and thereby reduce the cost
of system maintenance.
This method appears to be suited for small, output-oriented
problems where the data are tree-structured; the latter requirement
leads to the same kind of problems as the Jackson Methodology [167]. It
has no GO TO structure. Network-like data structures are not permitted.
It cannot be used as a general-purpose design methodology, since many
data bases are not hierarchically organized.
Another problem is that the Warnier-Orr diagram includes control
logic for loop termination and condition tests only as footnotes, rather
than as an integral part of the diagram. This makes the design of the
program control structure a critical and difficult part of program
design. There are no guidelines for control logic design. The
methodology builds the logical data base, listing all the input data
items required to produce the desired program output, much earlier, in
step 2. It provides no comparable structure to list control variables,
nor are these new variables added to the logical data base. Further, it
does not include a step to check that each control variable has been
correctly initialized and reset [141].
The Warnier-Orr diagram is a form of bracketed pseudocode in which
nesting is horizontal. For larger problems with several levels of
nesting, the diagram quickly becomes many pages wide. Also, for larger
problems involving many pseudocode instructions, the diagram quickly
becomes very crowded.
At the point of defining the logical data base from the logical
output structure and the logical input structure, the methodology
becomes vague. By following from the output, the designer may overlook
requirements. When oversights occur, the designer must redraw the data
structure to include the hidden hierarchy. For very complex problems,
many requirements may not be initially apparent from an examination of
the system output. This may lead to an incorrect design.
Another problem with multiple output structures is an incompatible
hierarchy, which can be dropped from the structure. The process code is
transferred to the next lower hierarchical level, and control logic is
added in the lower level to determine when to execute the transferred
logic.
The ideal input structures derived from the problem output
structure may not be compatibly ordered with the already existing
physical input files. The user can always write another program to
convert the existing input into the ideal input; sometimes, the data
may have to be completely restructured [141].
2.6.4. Top-Down Design (TDD)
Top-Down Design (TDD) is a design strategy that breaks large,
complex problems into smaller, less complex problems, and then
decomposes each of those smaller problems into even smaller problems,
until the original problem has been expressed as some combination of
many small, solvable problems.
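As a minimal illustration of this decomposition (the payroll task and all function names are invented for the example):

```python
# Top level: the original problem, expressed as a combination of
# smaller, solvable subproblems.
def produce_paycheck(hours, rate):
    gross = compute_gross(hours, rate)
    return format_check(gross - compute_tax(gross))

# Second level: each subproblem is now small enough to solve directly.
def compute_gross(hours, rate):
    overtime = max(0, hours - 40)          # time-and-a-half past 40 hours
    return (hours - overtime) * rate + overtime * rate * 1.5

def compute_tax(gross):
    return round(gross * 0.20, 2)          # flat illustrative tax rate

def format_check(net):
    return f"PAY EXACTLY ${net:.2f}"
```

The top-level function fixes the interfaces first, so each lower-level piece can be designed, coded, and tested against a stable contract.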
The interface between modules and the interface between subsystems
are common places for bugs to occur. In the bottom-up approach, major
interfaces usually are not tested until the very end — at which point,
the discovery of an interface bug can be disastrous. By contrast, the
top-down approach tends to force important, top-level interfaces to be
exercised at an early stage in the project, so that if there are
problems, they can be resolved while there still is the time, the
energy, and the resources to deal with them. Indeed, as one goes
further and further in the project, the bugs become simpler and simpler,
and the interface problems become more and more localized.
In the radical top-down approach, one first designs the top level
of a system, then writes the code for those modules and tests them as a
version 1 system. Next, one designs the second-level modules, those a
level below the top-level modules just completed. Having designed the
second-level modules, one writes the code and tests a version 2 system,
and so forth.
The conservative approach to top-down implementation consists of
designing all the top-level modules, then the next level, then all
third-level modules, and so on until the entire design is finished.
Then the developer codes the top-level modules and implements them as
version 1. From the experience gained in implementing version 1, the
developer makes any necessary changes to the lower levels of design,
then codes and tests at the second level, and on down to the lowest
level.
There are an infinite number of compromise top-down strategies that
one can select, depending on the situation. If the user has no idea of
what he wants, or has a tendency to change his mind, one opts for the
radical approach. Committing oneself to code too early in the project
may make it difficult to improve the design later; all other things
being equal, the aim is to finish the entire design. If the deadline
is inflexible, the radical approach should be employed; otherwise, the
conservative approach. If one is required by his organization to
provide accurate, detailed estimates of schedules, manpower, and other
resources, then the conservative approach should be used.
Many projects do seem to follow the extreme conservative approach,
and while it may, in general, lead to better technical designs, it fails
for two reasons: 1) on a large project, the user is incapable of
specifying the details with any accuracy, and 2) on a large project,
users and top management are increasingly unwilling to accept two or
three years of effort with no visible, tangible output. Hence the
movement toward the radical approach [115].
In TDD, data should receive as much attention as functional
components, because the interfaces between modules must be carefully
specified. Refinement steps should be simple and explicit.
TDD is Module Programming (MP) to which a rational or functional
decomposition is added. The modules tend to be small, and the
requirement for references from one module to a lower-level module
satisfies the single-entry, single-exit requirement of Structured
Programming. TDD thus has the benefits of Structured Programming which
MP does not [94].
However, when following TDD, common functions may not be
recognized, or the development process may require too much time,
especially for a large program. TDD is preferred when the developer is
more experienced in constructing a component to match a set of
specifications, or when decisions concerning data representations must
be delayed. A combination of TDD and the bottom-up approach is often
more practical.
2.6.5. Michael Jackson Structured Design (MJSD)
As developed by Michael Jackson, the methodology incorporates the
technologies of top-down development, structured programming, and
structured walk-throughs. It is a data-driven program design technique.
The design process consists of four sequential steps: the data step, the
program step, the operations step, and the text step.
In this methodology a program is viewed as the means by which input
data are transformed into output data. The system structure must
parallel the data structures used; thus a tree chart of the system
organization reflects the data structure records. If not, then the
design is incorrect. Paralleling the structure of the input and output
ensures a quality design. Only serial files are involved. The
methodology requires that the user know how to structure data.
MJSD divides programs into simple and complex programs. A complex
program must first be divided into a sequence of simple programs, each
of which can then be designed using the Basic Design Procedure. Most
programs fail to meet the criteria for a simple program because their
data structures conflict or because they do not process a complete file
each time they are executed.
Breaking a complex problem into a set of simple programs and
viewing a program as a sequence of simple programs connected by serial
data streams simplifies the design, but can cause serious efficiency
problems when the design is implemented. Program inversion is the
technique MJSD uses to solve this problem. Program inversion allows a
program to be designed to process a complete file but to be coded so
that it can process one record at a time. It involves coding methods
for eliminating physical files and introducing subroutines to read or
write one record at a time. The structure text is not affected, nor is
the design; program inversion is not used during the design phase.
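Jackson's inversion was a coding transformation for the languages of the time; a Python generator gives a compact modern analogue of the same idea (this sketch is illustrative, not MJSD notation): a routine designed to process a complete file is coded so that a consumer can pull one record at a time, with no intermediate physical file.

```python
# Designed as "process the complete file": read a stream of raw records
# and emit cleaned ones.
def clean_records(records):
    for rec in records:
        rec = rec.strip()
        if rec:                      # drop blank records
            yield rec.upper()

# Inverted use: the consumer drives the producer one record at a time;
# the serial stream between the two programs never exists as a file.
producer = clean_records(iter(["  alpha ", "", "beta"]))
first = next(producer)               # resume just long enough for one record
```

The design (a program over a complete serial stream) is untouched; only the coding changes, which is exactly the point of inversion.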
However, how the designer arrives at a program structure that
combines all the simple programs is not explained in the methodology.
For a complex problem, the system network diagram, the data structures,
and the program structure do not obviously fit together. Double data
structures, hidden program steps, and combined program structures, a
normal part of the design of a complex program, can confuse and
frustrate even the most experienced of designers.
Several difficulties are encountered. One is the supporting
documentation, which has too few explanatory notes. Also, the various
file accessing and manipulation schemes make the design more difficult.
Whether the data are tree-structured or not, one may still end up with
an unimplementable program, because there does not appear to be a causal
link between data structure and program quality [167].
There are two major achievements. The first, a standard solution
to the backtracking problem, enables the correspondence of data
structures and program structure to be maintained in those cases where
the serial input stream prohibits making decisions at the points where
they are required. MJSD enables the essential distinction between a
good batch and a bad batch to be preserved in the structure of the
program.
The second major achievement of MJSD is the recognition of
structure clashes. Structure clashes occur when the input and output
data structures conflict. A structure clash is usually recognized
during the program step of the design process, when the designer looks
for correspondences between data structures. To resolve a structure
clash, the problem is broken into two or more simple programs by
expanding the system network diagram. The designer must back up and
begin the design process again with the expanded system network diagram.
As with backtracking, the designer is able to apply standard, simple
solutions to complex problems.
By defining programs and processes in a way which guarantees for
each a structural independence of all the others, MJSD makes program
design more nearly an automatic procedure than is the case with any of
the other methodologies. By so doing it provides one of the
prerequisites of the ideal capability, but it has not yet capitalized
on this in its system design techniques [94].
Both MJSD and SD separate the implementation phase from the design
phase. The major difference between the two is that MJSD is based on
the analysis of data structure, while SD is based on an analysis of
data flow. One is data-oriented, the other process-oriented. MJSD
advocates a static view of structures, whereas SD advocates a dynamic
view of data flow.
The important difference between MJSD and SD is that in MJSD the
need to read, update, and produce a new-subscriber master file is
clearly shown. MJSD is therefore preferred over SD because it provides
the more complete design.
With respect to data design, MJSD is superior to the other design
methodologies. However, in the areas of control-logic design and design
verification, MJSD is weak. There are no guidelines governing the
execution of loops and selection structures during the last part of the
last step in the design process, nor for checking its correctness.
Verification is an important part of the constructive method. Each
step should be performed independently and verified independently. MJSD
includes an informal verification step in each basic design step, which
is not sufficient. The designer decomposes a complex problem into
simple programs and verifies only the parts, but verifying the parts
does not mean the whole is verified.
The major weakness of MJSD is that it is not directly applicable to
most real-world problems. First, the design process assumes the
existence of a complete, correct specification. This is rarely possible
for most data processing applications. Second, the design process is
limited to simple programs. Third, the design process is oriented
toward batch-processing systems. It is not an effective design
technique for on-line systems or database systems. In general, MJSD is
more difficult to use than other structured design methodologies. The
steps are tedious to apply.
2.6.6. Problem Statement Language/Analyzer (PSL/PSA)
PSL (Problem Statement Language) was developed by the ISDOS project
at the University of Michigan. It is a natural language prescribing no
particular methodology or procedure, though it contains a set of
declarations that allow the user to define objects in the proposed
system, to define properties that each object possesses, and to connect
the objects via relationships. Each PSL description is a combination of
formal statements and text annotation.
PSA (Problem Statement Analyzer) is the implemented processor that
validates PSL statements. It generates a data base that describes the
system's requirements and performs consistency checks and completeness
analyses on the data. Many different reports can be generated by the
PSA processor. These include: data base accesses and changes, errors,
lists of objects, and relationships among the data in the data base.
PSL is mostly keyword-oriented, with a relatively simple syntax [83].
PSL contains a number of types of objects and relationships which
permit these different aspects to be described: system input/output
flow, system structure, data structure, data derivation, system size and
volume, system dynamics, system property, and project management.
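The kinds of checks PSA performs can be illustrated with a toy analyzer. The object types, names, and relationships below are a loose imitation for illustration only, not actual PSL syntax: the analyzer flags any relationship that refers to an undefined object (consistency) and any object no relationship mentions (completeness).

```python
# A miniature "development database" of objects and relationships.
objects = {"ORDER": "INPUT", "INVOICE": "OUTPUT", "BILLING": "PROCESS"}
relationships = [("BILLING", "RECEIVES", "ORDER"),
                 ("BILLING", "GENERATES", "INVOICE"),
                 ("BILLING", "UPDATES", "LEDGER")]  # LEDGER is undefined

def analyze(objects, relationships):
    # Consistency: every relationship endpoint must be a defined object.
    undefined = [(s, r, t) for s, r, t in relationships
                 if s not in objects or t not in objects]
    # Completeness: every defined object should appear somewhere.
    mentioned = {name for s, _, t in relationships for name in (s, t)}
    unused = sorted(set(objects) - mentioned)
    return undefined, unused

errors, unused = analyze(objects, relationships)
```

Real PSA performs far richer analyses over its database, but the principle is the same: the specification is data, so its quality can be checked mechanically.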
PSL/PSA incorporates three important concepts. First, all
information about the developing system is kept in a computerized
development-information database. Second, processing of this
information is done with the aid of the computer to the extent possible.
Third, specifications are given in what-terms, not how-terms.
The approach primarily assists in the communication aspect of
requirements analysis and specification. It does not incorporate any
specific method to lead one to a better understanding of the problem
being worked on. The act of formally specifying all development
information and the analysis reports produced by PSA can aid the
analyst's understanding of the problem. It was created at least partly in
response to the tendency of analysts/designers to describe a system in
programming-level details that are useless for communication with the
nontechnical user. At the same time, it provides sufficient rigor to
make it useful for design specification. It has been used in situations
ranging from commercial DP to air-defense systems [81].
PSL/PSA is too general for many specific applications. Naming
conventions and limited attributes are not checked by PSA and must be
used to keep the design manageable. Also, PSA uses large amounts of
computer memory and several megabytes of direct-access secondary
storage on large mainframe computers. However, since 1980 a
microcomputer version of PSL/PSA has been developed that requires a
minimum of 64K bytes of RAM and operates as a single-user system. The
number of reports was also reduced from 30 to 6 [83, 122].
PSL/PSA needs more precise statements about logical and procedural
information. It is also too complicated to use. It should offer more
effective and simpler data-entry and modification commands and provide
more help to the users. The system performance also needs improvement.
PSL/PSA has a number of benefits. Simple and complex analyses
become possible, and particular qualities, such as completeness,
consistency and coherence, can be more easily ensured and checked for. Also,
the implications of changes can be quickly and completely established.
2.6.7. Structured Analysis and Design Technique (SADT)
Structured Analysis and Design Technique (SADT), a trademark of
SofTech, Inc., is a manual graphical system for system analysis and
design. SADT has three basic principles. The first is the top-down
principle in a very pragmatic form: any function or data aggregate must
be decomposed into no fewer than three and no more than six parts at
the next level, because decomposing into fewer generates too many
levels and decomposing into more causes span-of-control problems.
The second principle is represented by the rectangular
transformation box with its four symmetrically disposed arrows. If the
box represents an activity, the left arrow describes the input data,
the right arrow the output data, the upper arrow the control data, and
the lower arrow the means used to effect the transformation. This is a
formalization of the change-of-state emphasis, but is decoupled entirely from any
computer representation.
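As a rough sketch, the four-arrow box can be captured as a record type, together with the first principle's 3-to-6 decomposition rule. The example activity and arrow labels are invented for illustration, not taken from SofTech's notation:

```python
from dataclasses import dataclass, field

@dataclass
class ActivityBox:
    name: str
    inputs: list = field(default_factory=list)      # left arrow
    outputs: list = field(default_factory=list)     # right arrow
    controls: list = field(default_factory=list)    # upper arrow
    mechanisms: list = field(default_factory=list)  # lower arrow

def valid_decomposition(children):
    """First SADT principle: a box decomposes into no fewer than three
    and no more than six boxes at the next level."""
    return 3 <= len(children) <= 6

mill = ActivityBox("MILL PART",
                   inputs=["raw casting"],
                   outputs=["finished part"],
                   controls=["process plan"],
                   mechanisms=["NC milling machine"])
```

The distinction between inputs (transformed by the activity) and controls (governing it without being consumed) is what the upper arrow formalizes.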
The third principle is that the system may be represented equally
by a connected set of activity boxes or a connected set of data boxes.
The activity boxes are connected by
links representing data while the data boxes are connected by links
representing activities. The methodology acknowledges that the activity
and data representations are symmetric and equally valid, but since a
system has to be implemented from its constituent activities the
activity-based approach is the dominant one. In fact, the drawing of
the corresponding database diagrams is recommended mainly as a means
of checking consistency and completeness. There is an interesting
parallel here between the activity/data ambivalence of SADT and the
transform/transaction ambivalence of SD [94].
One basic assumption made in using SADT is that requirements
definition ultimately requires user input to choose among conflicting
goals and constraints. This is difficult or impossible to automate.
While SADT is a design technique, it assumes the existence of other
automated tools to provide the on-line data bases that may be needed.
There are a few problems in using SADT: it is not automated; it is
sometimes hard to think only in functional terms; the technique lacks a
specific design methodology [83].
The United States Air Force Integrated Computer Aided Manufacturing
(ICAM) Program applies SADT, particularly for bringing computer
technology to manufacturing and demonstrating ICAM technology for
transition to industry. To facilitate this approach, the ICAM
Definition (IDEF) method was established to better understand,
communicate, and analyze manufacturing.
There are three models in IDEF. Each represents a distinct but
related view of a system and each provides for a better understanding of
that system.
1. IDEF0 is used to produce a Function Model (blueprint). These
functions are collectively referred to as decisions, actions, and
activities. In order to distinguish between functions, the model is required
to identify what objects are input to the function, output from the
function and what objects control the functions. Objects in this model
refer to data, facilities, equipment, resources, material, people,
organizations, information, etc.
2. IDEF1 is an Information Model. The IDEF1 is a dictionary, a
structured description supported by a glossary which defines, cross-
references, relates and characterizes information at a desired level of
detail necessary to support the manufacturing environment. The IDEF1
identifies entities, relations, attributes, attribute domains, and
attribute assignment constraints. The advantage of the IDEF1 approach
to describing manufacturing is that it provides an essentially invariant
structure around which data bases and application subsystems can be
designed to handle the constantly changing requirements of manufacturing
information.
3. IDEF2 is a Dynamics Model (scenario). It represents the time
dependent characteristics of manufacturing to describe and analyze the
behavior of functions and information interacting over time. The
Dynamics Model identifies activation and termination events, describes
sequences of operations, and defines conditional relations between
input and output. The Dynamics Model Entity Flow diagram is supported
by definition forms which quantify the times associated with the
diagram. The Dynamics Model comprises four submodels: the Resource
Disposition, System Control, and Facility Submodels, which support the
Entity Flow Submodel.
The ICAM system development methodology is unique because it
establishes a formal definition of the current manufacturing system prior to
the specification of the future integrated system and it uses a model
rather than a specification to accomplish this definition. Descriptive
models for management control and representational models for
specifications and designs are utilized to construct, implement and maintain an
integrated system [200].
2.6.8. Higher Order Software (HOS)
HOS initially was developed and promoted by Margaret Hamilton and
Saydean Zeldin while working on NASA projects at MIT. The method was
invented in response to the need for a formal means of defining
reliable, large scale, multiprogrammed, multiprocessor systems. Its
basic elements include a set of formal laws, a specification language,
an automated analysis of the system interfaces, layers of system
architecture produced from the analyzer output, and transparent hardware.
This design method requires rigorously defined forms of functional
decomposition with mathematical rules at each step. The decomposition
continues until blocks are reached from which executable program code
can be generated. The technique is implemented with a software graphics
tool called USE.IT, which automatically generates executable program
code and terminates the decomposition. It can generate code for very
complex systems with complex logic.
This design method is based on axioms which explicitly define a
hierarchy of software control, wherein control is a formally specified
effect of one software object on another:
. A given module controls the invocation of the set of valid
functions on its immediate, and only its immediate, lower level.
. A given module is responsible for elements of only its own
output space.
. A given module controls the access rights to a set of variables
whose values define the elements of the output space for each,
and only each, immediate lower level function.
. A given module controls the access rights to a set of variables
whose values define the elements of the input space for each, and
only each, immediate lower level function.
. A given module can reject invalid elements of its own, and only
its own, input set.
. A given module controls the ordering of each tree for the
immediate, and only the immediate, lower levels.
The most primitive form of HOS is a binary tree. Each decomposition
is of a specified type and is called a control structure. The
decomposition has to obey rules for that control structure that enforce
mathematically correct decomposition.
Each node of an HOS binary tree represents a function. A function
has one or more objects as its input and one or more objects as its
output. An object might be a data item, a list, a table, a report, a
file, or a data base, or it might be a physical entity. In keeping with
mathematical notation, the input object or objects are written on the
right-hand side of the function, and the output object or objects are
written on the left-hand side of the function.
Design may proceed in a top-down or bottom-up fashion. In many
methodologies, the requirements statements, the specifications, the
high-level design, and the detailed program design are done with
different languages. With HOS, one language is used for all of these.
Automatic checks for errors, omissions, and inconsistencies are applied
at each stage.
Three primitive control structures are used: JOIN, INCLUDE, and OR.
Other control structures can be defined as combinations of these three.
1. JOIN: The parent's function is decomposed into two sequential
functions. The input of the right-hand child is the same as
that of the parent. The output of the parent is the same as that
of the left-hand child. The output of the right-hand child is
the input to the left-hand child.
2. INCLUDE: The parent's function is decomposed into two
independent functions. Together both offspring use the input data of
the parent function, and together they produce the output data
of that function.
3. OR: One of the offspring achieves the effect of the parent, but
not both. The resulting output of each offspring is the same as
that of the parent.
4. COJOIN: A COJOIN is composed of a JOIN and an INCLUDE
structure. It is like the JOIN structure except that the left
offspring has as its input variables that are input to the parent
as well as variables that are output from the right offspring.
5. COINCLUDE: The offspring share part of the input from the
parent. Together they produce the output data of that function.
6. COOR: A condition, P(B), is tested. One of the offspring is
used, depending on whether the condition is true or false.
7. CONCUR: Similar to COINCLUDE in that both offspring produce the
output data of a function together, CONCUR allows the input of
the left-hand child to be that of the parent or other child.
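As an informal analogy only (not the formal HOS axioms or USE.IT notation), the three primitive structures can be mimicked with function combinators; the example function built from them is invented for illustration:

```python
def JOIN(left, right):
    """Parent decomposed into two sequential functions: the right child
    takes the parent's input, and its output feeds the left child."""
    return lambda x: left(right(x))

def INCLUDE(left, right):
    """Parent decomposed into two independent functions, each consuming
    part of the input pair and producing part of the output pair."""
    return lambda pair: (left(pair[0]), right(pair[1]))

def OR(pred, left, right):
    """Exactly one offspring achieves the parent's effect, chosen by a
    tested condition."""
    return lambda x: left(x) if pred(x) else right(x)

# Example: compute |x| + 1 as a small binary tree of primitives.
absolute = OR(lambda x: x >= 0, lambda x: x, lambda x: -x)
f = JOIN(lambda y: y + 1, absolute)
```

The point of the analogy is that every node's inputs and outputs are fully determined by its control structure, which is what makes mechanical verification of the tree possible.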
There are four types of leaf nodes:
1. Primitive Operation (P): This is an operation that cannot be
decomposed into other operations. It is defined rigorously with
mathematical axioms.
2. Operation Defined Elsewhere (OP): This function is further
decomposed in another control map, which may be part of the
current design or may be in a library.
3. Recursive Operation (R): This is a special node that allows the
user to build loops.
4. External Operation (XO): This function is an external program
that is not written with the HOS methodology. It may be the
manufacturer's software or previously existing user programs. The
HOS software cannot guarantee its correctness.
The diagrams are designed interactively. The designer can build and
check them quickly. However, certain operations appear cumbersome when
created with control maps. It is desirable to replace or simplify
repetitively used logic.
There are two ways of extending the power and usability of the HOS
tool. The most common is to build a library of operations, like
building a library of subroutines and callable programs. Most HOS
control maps contain blocks labeled OP or XO. The second way of
extending usefulness is to create defined structures. This is rather
like creating one's own language or dialect in HOS.
USE.IT has an automatic documentation generator that generates
documentation in U.S. Department of Defense format, with standard
paragraph numbers, and so on. The designer can put comments in any of the
blocks. A high-level system description may be associated with the
high-level blocks. The low-level blocks often need no description added
to them. A person maintaining the software works by changing the
control map, and from this both new program code and new auxiliary
documentation are generated.
HOS eliminates the need for most dynamic program testing, instead
using static testing. Dynamic program testing tests each branch and
usable combination of branches for all programs not built with a
rigorous mathematical technique. Static testing refers to verification
that the functions, data types, and control structures have been used in
accordance with rules that guarantee correctness. The verification is
performed by a computer with absolute precision. A few instances of
control paths will be checked to ensure that correct results are being
produced. Because of the computerized verification and cross-checking,
HOS is particularly valuable for creating specifications of a level of
complexity beyond the capability of one person. The more complex the
system, the greater the benefits [141].
2.6.9. Hierarchy-Input-Process-Output (HIPO)
Hierarchy-Input-Process-Output (HIPO) is a documentation technique
developed by IBM and used in a number of different situations. It is
useful for design documentation as well as stating requirements and
specifications before beginning design. It is based on two concepts:
The input-process-output model of information processing and functional
hierarchies.
HIPO is basically a graphical notation, consisting of hierarchy
charts and process charts. The hierarchy chart is similar to a
structure chart. Each box in the chart can represent a system,
subsystem, program, or program module. Its purpose is to show the
overall functional components and to refer to overview and detail HIPO diagrams. It
does not show the data flow between functional components or any control
information. Unlike a structure chart, it does not show arrows with open or filled circles.
Also, it does not give any information about the data components of the
system or program.
The second part of the HIPO notation is process charts. Process
charts have two levels: overview diagrams and detail diagrams. They are
used to describe a function in terms of its inputs, the processing to be
done, and the outputs produced. Any of the information, especially the
process part, may be described in more detail by accompanying textual
material. The process charts show the flow of data through processes;
however, they are more difficult to draw than data flow diagrams. HIPO
diagrams often require more verbiage and symbols to give the same
information as a comparable data flow diagram.
HIPO has been used in a number of DP (Data Processing) situations
and some more complex specification tasks. Its drawbacks, especially
with respect to interfaces and the description of data, limited
its acceptance. HIPO does not include any guidelines, strategies, or
procedures to guide the analyst in building a functional specification
or the designer in building a system or program design.
At the general level, a structure chart is usually preferred over a
hierarchy chart. At the detailed level, pseudocode is often preferred
over HIPO diagrams because it provides more information in a more
compact form. HIPO diagrams have no symbols for representing detailed
program structures such as conditions, case structures, and loops. HIPO
diagrams cannot represent data structures or the linkage to data models.
However, its ease of use and simplicity do make it a viable
alternative for defining functional requirements and specifying
individual functions. Showing input and output and how they relate to
procedural steps is a strength of HIPO diagrams that some other
structured techniques do not have [81, 141].
2.7. Summary
There is no way to define the range of problems that a methodology
can handle except in terms of their structural principles. What
distinguishes these categories are not differences of structure or
principles of design but the mechanisms of invocation, synchronization,
data access, etc., all of which have to do with the transfer of data
between processes rather than their existence or internal structure.
All the methodologies recognize this by not making special
provisions for these distinctions. As far as defining the range of problems
in terms of structural principles is concerned, it is impossible to make
any statement at all for MP, TDD, SD and SADT, simply because all these
are solution-oriented. Given a well-defined problem, any of these will
produce a solution, however particular characteristics of the problem
are defined. With WOD and MJSD at least a relative statement is
possible. MJSD offers backtracking and the mechanical decoupling of
clashing structures as already described. WOD has no conception of the
former and only a vestigial treatment of the latter. What MJSD makes so
central to the understanding of program structure WOD dismisses in a few
paragraphs with trivial illustrations. What in MJSD is obviously a
basis for future system design, in WOD is merely the obligatory mention
of a universal difficulty which the methodology otherwise does not take
into account.
Both MJSD and WOD also deal with the problem of multiple input
files where items in different files have to be collated. WOD has a
cumbersome approach based on the selection, arbitrarily, of one of the
files as a guide file. MJSD perceives that the essence of the collation
problem is the physical disjunction of a single iteration of entities
and therefore forces a similar program structure in a way that does not
falsely attribute structural importance to any one of them.
MJSD is more competent than WOD. WOD deals only with commercial
file-processing problems. In such problems the data may well require
considerable computation to produce the required output but the entities
are self-identifying. MJSD is capable of dealing with cases in which
both the classification and extent of the principal entities in the
files have to be computed.
Structural integrity is the most important single attribute that a
system should possess. One half of it is correctness. The structure of
a system may be correct if it corresponds directly to the data
structures it models and the existent orderings of the entities composing
them. Furthermore, this correspondence must be visible; in whatever
form the design is represented, that which is present for reasons of
language, optimization and physical input/output must be separable from
the realization of the problem structure. A system has complete
structural integrity if an outsider can deduce the problem it solves by
inspection of the design notations or compilable code and, further,
cannot tell whether it has ever been amended.
Because they are solution-oriented, TDD, SD and SADT cannot reveal
structural correctness by inspection. SD does meet the evolutionary
requirement because if the strength/cohesion and coupling rules are kept
wherever the system is amended, the patches do not show. A system
designed by SD may not have the best possible structure but it is
possible to ensure that the structural quality does not deteriorate
under repeated alteration.
SD permits the preservation of structural integrity. PSL/PSA and
MJSD both provide sufficient insight into the full effects of any
amendment and the necessitated structural changes at both system and
program levels.
The conclusions about design methodologies are that no single
method exists which would be an asset on every design problem; methods
can only contribute so much to the design effort; designers produce
designs, methods do not; a design problem, although well suited to a
particular technique, will always have some quirk which makes it unique;
designing is a technique of problem solving.
Methods are important but their successful application occurs only
in supportive environments. Specifically, the necessary management
elements must all be present and effective. The larger the system, the
more important are these technical factors. The balance between methods
and environments is a delicate one. The merging of these may very well
be the next evolutionary step [167].
The combination of Structured Systems Analysis and PSL/PSA is a
powerful tool in the development of logical models, functional
specifications, and management information systems; it also provides a
valuable methodology for use by teachers of system analysis and design.
CHAPTER 3
ANALYSIS OF COMPUTERIZED MANUFACTURING SYSTEMS
3.1. Basic Concepts of Flexible Manufacturing Systems (FMS)
In 1976 the first Flexible Manufacturing System (FMS) was installed
in the United States. The benefits of FMS include: reduced work in
progress, quicker changeover times, reduced stock levels, faster throughput
times, better response to customer demands, consistent product quality,
and lower unit cost. Later, manufacturers recognized that integration
has to be achieved not only in the factory, but also in almost every
department and all functions of a manufacturing organization, and the
term Computer Integrated Manufacturing (CIM) was introduced. In this
chapter, both the structure and the functions of a FMS are analyzed.
K. Stecke and J. Solberg gave FMS the following definition [191]:
"A Flexible Manufacturing System (FMS) is a system in which various
types of workpieces are transported, via computer controlled
material-handling systems, to versatile machines for the processing of their
individual operations, without human intervention. In general, FMS is
applicable to mid-volume manufacturing systems which produce a
variety of different part types. Unlike conventional job shops,
manual scheduling of a FMS is impractical, and would very likely result
in serious underutilization of the system's capabilities." The following
terms are also used: ABMS (Advanced Batched Manufacturing System),
VMS (Versatile Manufacturing/Variable Mission System), VMM (Variable
Mission Manufacturing), and CMPM (Computer Managed Parts Manufacturing).
Flexibility is the essential feature of FMS. Flexibility is defined
by Mandelbaum [137] as the ability to respond effectively to changing
circumstances. The measures of flexibility include part flexibility,
cell flexibility, machine flexibility, and control system flexibility.
To maximize the benefits of a FMS it is necessary to ensure that,
as the job flexibility of the system increases, the decline in
productivity is minimized and the increase in efficiency (machine
flexibility) is maximized.
3.2. Flexible Manufacturing Systems vs Transfer Line
There are two general classes of multistation automated
manufacturing systems: automatic transfer lines and FMS. In automatic
transfer lines all parts processed by the line have the same sequence
of operations. Part movement is synchronized and, at fixed intervals of
time, every part in the system is transferred to the next station. In a
FMS, by contrast, parts need not all visit the same sequence of
machines; a FMS can be regarded as equivalent to a conventional shop
with automated material handling [47].
As long as a FMS has a production range of less than a dozen parts
per hour, its advantages can be justified. For high-volume production,
transfer machines remain the most cost-effective method. In addition,
FMS is beneficial for job shop production in which the machined part
configurations can change from week to week.
In contrast, transfer machines with special stations for a family
of parts can machine faster and with better quality than systems which
must make compromises for any part defined within a cube size. Thus,
manufacturers of valves, compressors, engines, or washing machines may
be better served by a combination approach that uses transfer technology
with special flexible machining stations.
3.3. Cellular Manufacturing (CM)
Manufacturing systems are segregated into three categories based on
their physical layout: product layout, functional layout, and Cellular
Manufacturing (CM). CM is a subset and derivative of Group Technology
(GT). Each cell is designed to produce a part family. The parts within
the family normally will go from raw material to finished parts within a
single cell. Usually, the manufacturing facility cannot be completely
divided into specialized cells. Rather, a portion of the facility
remains as a large functional job shop which is termed the remainder
cell [93].
A Flexible Manufacturing Cell (FMC) provides many of the same
advantages as a full FMS, at far less cost. These cells form the lower
levels of the factory hierarchy. The cells receive their work orders
either from a supervisor console or from a center-level computer. The
cells may also receive the part programs to be used by the device
controllers from remote CAD systems. The results of each work order
are returned to the requester when the work is completed.
A cell is generally on-line and provides a return on investment
months before a more sophisticated system could. The common elements
in a cell are: the machines, a material-handling system, and a computer
or cell controller. The material-handling system is generally
interrelated with the various machines, and orchestrating the whole
operation is a cell controller [27].
A cell control system can be logically divided into two sections.
In one section are the algorithms and database that are responsible for
the actual cell management (cell controller). This section performs the
following functions: production control, resource management, part
program management, equipment monitoring, and error monitoring. In the
second section is the machine-dependent hardware and software required
to control and monitor each individual device in the cell.
The hardware and software architecture of the control system will
dictate the flexibility that can be achieved. Three possibilities will
be discussed, with designs progressing from a centralized system to a
totally distributed system. Essentially, the intelligence is
progressively driven down to the lowest level.
1. The Centralized System: In this architecture, the cell control
system is completely implemented on a single computer system, probably a
general-purpose minicomputer. This machine manages the database and all
the cell control functions. In addition, all the application and
protocol tasks required by the machines in the cell are performed there.
2. The Network System: This approach utilizes the development of
Local Area Networks (LANs), which provide high-speed, error-free
transmission, and microprocessors to distribute the control system. In this
architecture, the cell controller will be a minicomputer or possibly one
or more microcomputers. The protocol section of the machine-dependent
software is contained in Network Interface Units (NIUs), which connect
to the network. The cell controller is responsible for the cell control
functions and database management, as well as all the machine
application software. When an application wishes to control its associated
machine, it sends a message over the LAN to the protocol software in the
NIU, which transmits it to the machine.
3. The Fully Distributed System: In this control system, the cell
controller, database, and NIUs are all directly connected to a LAN to
provide a fully distributed architecture. The cell controller is res
ponsible for only those functions that control the cell. The appli
cation and protocol software that provides the intelligence to operate
the machine is placed in the NIUs. In this way, the LAN is no longer
just a communication system to a group of machines but is also a bus
between the cell control software and the virtual machines it is opera
ting. The advantage of a network-addressable database is that the
burden of obtaining the required part programs is transferred from the
cell control processor to the NIUs [205].
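The division of labor in the fully distributed design can be sketched as follows. This is an illustrative model only: the class names, command strings, and queue-based transport are assumptions standing in for a real LAN and real NIU firmware.

```python
from queue import Queue

class NIU:
    """Network Interface Unit: holds the application and protocol
    software for one machine, so the cell controller never speaks
    the machine's native protocol directly."""
    def __init__(self, machine_name):
        self.machine_name = machine_name
        self.inbox = Queue()       # messages arriving over the "LAN"
        self.machine_log = []      # stands in for the physical machine

    def service(self):
        # Protocol layer: translate a cell-level command into a
        # machine-level action (here, simply record it).
        while not self.inbox.empty():
            command = self.inbox.get()
            self.machine_log.append(f"{self.machine_name} <- {command}")

class CellController:
    """Responsible only for cell-level functions; it operates virtual
    machines by sending messages over the network to their NIUs."""
    def __init__(self, nius):
        self.nius = nius

    def dispatch(self, machine_name, command):
        self.nius[machine_name].inbox.put(command)

nius = {"lathe": NIU("lathe"), "mill": NIU("mill")}
cell = CellController(nius)
cell.dispatch("lathe", "LOAD PART 12")
cell.dispatch("mill", "RUN PROGRAM 7")
for niu in nius.values():
    niu.service()
```

The point of the sketch is that the cell controller holds no machine-dependent code at all; adding a machine means adding an NIU, not modifying the controller.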
The heart of the design is flexibility. The design philosophy is
founded on two principles: the cell and its component parts and pieces
must be modular, and the cell and its components must fit into a structured
hierarchy.
A few of the most common cell design pitfalls are listed below:
1. Flexibility vs. Productivity: The components of the cell can be
designed to maximize either of these criteria, but not both. It is
therefore important to decide from a production standpoint how flexible
the cell should be and what its productivity should be.
2. Failures: Failures can result from many causes, including equip
ment breakage, faulty incoming parts, and robot drift. It is probably
acceptable to scrap a part, but the equipment in the cell should not be
allowed to seriously damage itself.
3. Critical Items: If the robot is out of commission, then parts
will have to be fed by hand. If the supervisory computer is out of
commission, then the machine tools and robots can no longer function as
a cell. In the case of very critical components, such as the host
computer, there should always be spare parts on hand. Sensors, such as
limit switches used to determine the presence of a part, should be made
redundant so that if one fails then the desired information will still
be available from another.
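The sensor-redundancy idea above reduces to a simple voting scheme. As a minimal sketch (the reading values are invented; a real cell would poll limit switches through I/O hardware), the part-present signal survives as long as at least one switch still responds:

```python
def part_present(switch_readings):
    """Return True if any redundant limit switch reports a part.
    None models a failed (unresponsive) switch."""
    live = [r for r in switch_readings if r is not None]
    if not live:
        raise RuntimeError("all part-presence sensors have failed")
    return any(live)

# One switch has failed (None), but the information is still available:
present = part_present([None, True])
absent = part_present([False, False])
```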
4. Cell Host Extensibility: There must be a straightforward proce
dure for changing the way the cell executive sequences the machines in
the cell. The software to handle the errors should be designed to make
it straightforward to add new routines and modify old ones. In addi
tion, the hardware must allow for expansion of the software. A cell
host that seems just adequate for the cell at design time will probably
soon be bogged down with unanticipated sensor handling routines.
The functions and equipment for the cell described above are by no
means exclusive. Various alternative configurations can be used to
increase flexibility or productivity.
After a cell is built, it needs to be integrated into a larger
manufacturing system. Like the cells beneath it, the system will be a
module with a defined input and output. It will depend on the next
higher level of the hierarchy for certain services such as inventory,
engineering support, and scheduling. The system may be a very large and
versatile collection of cells, but it is not a factory. In particular,
the following operations are beyond the scope of the manufacturing
system:
1. The retrieval and initial processing of raw materials are done
outside the system.
2. Inventory and its control are handled outside the manufacturing
system.
3. Maintenance functions required in the factory are not part of
the system.
4. The CAD/CAM system will reside in computers outside the manufac
turing system.
The supervisory computer for the manufacturing system will fill the
same role for the system that the cell host computer does for the cell.
It schedules the flow of parts between the cells within the manufactu
ring system, directs the flow of information concerning the parts,
stores the CNC part programs in an on-line storage facility, maintains a
database that relates particular parts to the programs required to make
them, and supplies the part programs to the cell host when a new part is
sent to a cell. The manufacturing system computer must also provide the
cell host with its operating software. If the cell hosts need software
support each time they are booted, then the system computer will be
responsible for that also. The statistics arising from the individual
cells will need to be compiled, compressed, and analyzed by the manufac
turing system computer. The software for gracefully degrading the
system will reside in the system computer [67].
3.4. Components of A Machining Cell
1. A CNC Milling Machine: This machine should be equipped with
closed-loop control of the table. The spindle speed should be software
controllable.
2. A CNC Turret Lathe: The machine tool should be equipped with
closed-loop position control on all axes. The spindle speed should be
software controllable.
3. The Lathe Loading Robot: In general this component consists of
two arms suspended from a carriage riding in an overhead track, like an
overhead crane. One arm is usually used for loading new pieces into the
lathe and the other for unloading finished pieces. Only the movement of
the carriage in its track is continuously programmable. A more flexible
alternative to the specialized lathe loading robot is to use a 5- or 6-
axis robot. Another possibility is to mount the mill loading robot on a
track running between the mill and the lathe.
4. A Mill Loading Robot: The robot requires at least five program
mable axes to pick up a part from a flat surface and to orient it on the
bed of a milling machine.
5. Linear Table: A linear table is used to transfer parts between
the lathe loading robot and the mill loading robot.
6. Controllers: The controller must be able to send and receive
complex messages from the cell host. The controller must allow the cell
host to manage the machine tools' actions including: the execution of
controller programs, transfer of programs to and from the host, and
access to the machines' and controllers' state.
Most existing controllers provide communications channels, but few
will allow a cell host to command the running or stopping of a control
ler program. Program transfers are provided on the more advanced
controllers but require operator intervention. This is unacceptable for
an autonomous cell. Solutions to this problem generally involve either
putting a small computer between the controller and the supervisor, or
rewriting the controller software.
7. A Vision System: The system consists of stand-alone microproces
sors, combined with special-purpose hardware and software designed to
process images from one or more video cameras, acquire an image, convert
the image to digital form, and perform a variety of computational tasks.
The computational tasks range from simply recognizing a distinct object
to calculating its position and orientation within the field of view of
the camera. The software will also control, to varying degrees, the
speed, accuracy, and overall utility of the system.
The need to communicate with the cell host, and to let the cell host
control the system's actions, applies to a vision system just as it does
to a machine controller. Both of these characteristics are absent in some
of the current commercial systems. The software required to make use of
the binary or grey-scale information is often not commercially available.
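The position-and-orientation computation mentioned above can be sketched from first principles using image moments of a binary image. This is a standard technique, not the dissertation's specific system; the tiny hand-made image is illustrative only.

```python
import math

def centroid_and_orientation(image):
    """image: list of rows of 0/1 pixels. Returns (cx, cy, theta), where
    theta is the orientation of the object's principal axis in radians."""
    m00 = m10 = m01 = 0.0
    for y, row in enumerate(image):
        for x, v in enumerate(row):
            m00 += v; m10 += v * x; m01 += v * y
    cx, cy = m10 / m00, m01 / m00
    # Central second moments determine the principal-axis angle.
    mu20 = mu02 = mu11 = 0.0
    for y, row in enumerate(image):
        for x, v in enumerate(row):
            mu20 += v * (x - cx) ** 2
            mu02 += v * (y - cy) ** 2
            mu11 += v * (x - cx) * (y - cy)
    theta = 0.5 * math.atan2(2 * mu11, mu20 - mu02)
    return cx, cy, theta

# A horizontal bar of pixels: centroid in the middle, axis along x.
bar = [[0, 0, 0, 0, 0],
       [1, 1, 1, 1, 1],
       [0, 0, 0, 0, 0]]
cx, cy, theta = centroid_and_orientation(bar)
```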
8. Cell Computer: The cell host computer is the heart of this
schematic, and the capabilities of the cell from the standpoint of
control and communications depend almost entirely on the host. The
coordination of the machine actions will be the most complex function
of the host, but the host will also pass part programs to the control
lers. The part programs themselves are stored elsewhere.
In addition, the host computer for a cell should be able to provide
statistical information to any higher level computer. The type of data
the higher level computer should have access to includes the following:
machine tool operation time, the number of parts processed in the last
time period, the number of parts that came into the cell in the last
time period, the number of incoming parts that were acceptable, the
number of parts that were scrapped during the last time period, how much
the various cutting tools have been used, what failures have occurred
during processing, and details of the failures.
The correct choice of operating system, microprocessor functions
and speed, source language, and support environment can greatly influ
ence the efficacy of the development effort. The system needs to
service communication with all machine controllers, and must provide
adequate support to the system programs.
9. Sensors: The quantity and the sophistication of the sensors in
a cell determine its ability to function autonomously. For semiautonomous
operation, one requires only fairly simple sensors which may be of
several types.
10. CNC Parts Programs: The systems that create parts programs
vary in complexity from large CAD systems to simple, stand-alone pro
gramming aids. The generation of parts programs for CNC machine tools
is a straightforward problem, since most CNC machine tools are
programmed in a derivative of the APT language. The structure of APT
programs is simple and very amenable to automatic creation.
The state of the art in program generation and verification for
robots is more primitive than for CNC machine tools. For the near-term
cell, the user must anticipate cell down-time when new robot programs are
being written, debugged, and tested.
The linear table will probably be controlled by a commercially
available Programmable Controller (PC). The programs for such devices
are simple ladder diagrams input through something typically called a
"program loader". The program loader is a terminal paired with software
to convert the user input into machine language. The program loader is
designed to send its machine code directly to the programmable control
ler, but the output can also be redirected to some storage device and
later down-loaded to the linear table controller [67].
The process controllers in the cells can be implemented using a
clock-driven system, a sequentially-driven system, an interrupt-driven
system or a multitasking-based executive program.
1. In a clock-driven executive program each task is allocated a
specific time interval which is scheduled by periodic time interrupts
provided by the computer's real time clock. This approach requires a
predetermination of task time, which is difficult to provide for complex
systems such as an FMS. In addition, high-priority devices and activi
ties under a clock-driven system do not receive preferred service.
Since this may lead to inefficient resource utilization or a catastrophic
failure, a clock-driven system would probably never be implemented.
2. A sequentially driven executive program uses a software polling
system. The executive program polls the system status input ports and
the communication channels in a predetermined order. Although the order
of polling may be altered based upon input parameters, it still does not
allow for preferred service for high priority devices. An advantage of
the sequentially-driven system lies in its ease of implementation using
high-level languages.
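A sequentially driven executive reduces to a fixed polling loop. The sketch below (device names and request states are invented for illustration) shows both the simplicity of the approach and its weakness: a high-priority device must wait behind every port that precedes it in the scan.

```python
def polling_executive(ports, service, cycles=1):
    """Scan status ports in a fixed, predetermined order and service
    any port that reports activity. No port can preempt another."""
    serviced = []
    for _ in range(cycles):
        for name, has_request in ports:   # predetermined polling order
            if has_request():
                service(name)
                serviced.append(name)
    return serviced

requests = {"conveyor": True, "lathe": False, "alarm": True}
order = [(n, lambda n=n: requests[n]) for n in ("conveyor", "lathe", "alarm")]
log = []
result = polling_executive(order, log.append)
# The urgent "alarm" is still serviced after "conveyor".
```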
3. The most efficient executive program uses an interrupt-driven
system. Interrupt-driven systems require a processor which allows an
external device to physically stop the processor, have the processor
save the current program information and then branch to a predetermined
memory location. Since an interrupt results in a branch to a predeter
mined memory location, we can implement priorities among devices.
Various queueing disciplines can be implemented, but a discipline based
upon task priority is most consistent with the intent of a priority
interrupt system. A disadvantage of this approach is the considerable
software overhead required for its implementation.
4. In a multitasking system a separate task is created for each
activity to be monitored. Each task may exist in one of four states:
executing, ready, suspended or dormant. Each task is assigned a priori
ty, and among active tasks, the operating system will always have the
highest priority task executing. In practice we would create a task to
monitor each level one process controller and a task to handle each type
of high priority communication. These tasks would lie in a suspended
state until communication was initiated by the level one controller.
The system still has considerable software overhead, but with technolo
gical advances toward high speed, large memory mini- and microcomputers,
the overhead is not a limitation.
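The four task states and priority-based selection can be sketched without a real operating system. The task names below are illustrative; the scheduler simply executes the highest-priority ready task, as described above.

```python
READY, EXECUTING, SUSPENDED, DORMANT = "ready", "executing", "suspended", "dormant"

class Task:
    def __init__(self, name, priority):
        self.name, self.priority, self.state = name, priority, SUSPENDED

    def wake(self):
        # e.g. a level-one controller initiates communication
        self.state = READY

def schedule(tasks):
    """Among active (ready) tasks, pick the highest-priority one to execute."""
    ready = [t for t in tasks if t.state == READY]
    if not ready:
        return None
    chosen = max(ready, key=lambda t: t.priority)
    chosen.state = EXECUTING
    return chosen

tasks = [Task("monitor_controller_1", priority=1),
         Task("high_priority_comm", priority=9)]
tasks[0].wake()
tasks[1].wake()
running = schedule(tasks)
```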
Implementation of the system control components requires a program
structure, a data structure and decision algorithms. The necessary
decision algorithms are implemented using a multitasking program
structure. Most of the algorithm software can be kept on external
storage media. They are: part identifier assignment, part location,
dynamic part scheduler, material handling movement and control, part
transfer, operation-NC matrix, operation/machine assignment matrix, part
type/pallet type assignment matrix, part type to be input into the
system, monitoring, interpretation and system status determination,
decision implementation, and system startup/shutdown.
Data structures for the FMS process control program can be either
static or dynamic. Static information will not change once the system
is initialized in a particular configuration. Dynamic information
changes with each event occurrence within the system. The data struc
tures are part identifier, machine identifier, material handling compo
nent identifier, and pallet identifier.
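The static/dynamic split can be sketched as follows. The field names are assumptions for illustration, not the dissertation's schemas: static records are frozen once the system is initialized in a configuration, while dynamic fields change with each event occurrence.

```python
from dataclasses import dataclass

@dataclass(frozen=True)   # static: fixed once the system is configured
class MachineIdentifier:
    machine_id: int
    machine_type: str

@dataclass                # dynamic: updated with each event occurrence
class PartIdentifier:
    part_id: int
    part_type: str
    location: str = "load_station"

mill = MachineIdentifier(machine_id=3, machine_type="CNC mill")
part = PartIdentifier(part_id=101, part_type="bracket")
part.location = "machine_3"   # a part-transfer event updates dynamic data
```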
3.5. Cell Operation And Cell Control
The ability of a cell to function independently depends on its
tolerance of errors. An autonomous cell must be able to recognize bad
incoming parts, to recover from parts lost in process, and to recover
from common failures such as tool breakage and part misalignment.
Unfortunately, such errors are very difficult to recognize or tolerate
and this often forces one to limit the cell to partially autonomous
operation, meaning that human operators will verify the cell operation.
An autonomous cell should also have some rudimentary inspection
capabilities both for locating incoming parts and for final part inspec
tion. However, automated visual inspection cannot measure dimensions
such as the depth of a blind hole or the diameter of an internal
snapring groove.
Programmable coordinate measuring machines are expensive and more
difficult to interface with the rest of the cell than vision systems.
They are an attractive possibility for large manufacturing systems,
and in the future for small ones. The in-process inspection is very
useful for troubleshooting, but ties up the machine tools if it is done
extensively.
The cell operations are divided into two distinct categories of
cell-level tasks: steady state operations and periodic operations.
Error handling is treated separately.
The cell operates in a steady state during a parts run. The parts
move through the cell under the guidance of the cell host. Management
data concerning cell status is available from the cell host. The cell
host must be able to manage the actions of the mill, lathe, robots, and
vision system so that they function in parallel. This turns out to be a
difficult problem, for many of the different actions are related.
Managing a cell so that the machines can function in parallel is a topic
of current investigation; one technique for coordinating the different
activities is to use a rule-based production system. Other schemes may
prove to be appropriate for managing the parallel operations within the
cell, but most of the currently available systems are difficult to
change and do not clearly describe how the cell works.
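A rule-based production system for cell coordination can be sketched as condition-action pairs fired against the current cell state. The two rules below are invented for illustration; a real cell would carry many more, covering robots, fixtures, and the vision system.

```python
# Each rule: (condition on the state, action that updates the state).
rules = [
    (lambda s: s["table"] == "has_part" and s["mill"] == "idle",
     lambda s: s.update(table="empty", mill="loading")),
    (lambda s: s["mill"] == "loading",
     lambda s: s.update(mill="cutting")),
]

def run_production_system(state, rules, max_cycles=10):
    """Repeatedly fire the first rule whose condition matches,
    until no rule fires (the cell is quiescent)."""
    for _ in range(max_cycles):
        for condition, action in rules:
            if condition(state):
                action(state)
                break
        else:
            return state   # no rule fired
    return state

state = run_production_system({"table": "has_part", "mill": "idle"}, rules)
```

Because the behavior lives in the rule list rather than in a fixed program, such a system is easy to inspect and extend, which is precisely the property the text finds lacking in most current schemes.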
The control of the cellular manufacturing system can be divided into two
activities: cell
loading and cell scheduling. Cell loading is the determination of which
cell, among the feasible cells, the job should be assigned to. Cell
loading is similar to such topics as capacity scheduling, capacity
planning, shop loading, and work-load balancing. There are three objec
tives in cell loading: balance the load between cells, balance the
machine loads within each cell, and balance the proportion of jobs with
large processing time and jobs with small processing times between and
within cells.
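One simple way to pursue the first objective, balancing the load between cells, is a greedy assignment of each job to its least-loaded feasible cell. The job data and feasibility sets below are hypothetical, and the greedy rule is only one of many possible loading heuristics.

```python
def load_cells(jobs, feasible_cells, cell_names):
    """jobs: {job: processing_time}; feasible_cells: {job: set of cells}.
    Assign each job (largest first) to its least-loaded feasible cell."""
    load = {c: 0 for c in cell_names}
    assignment = {}
    for job, t in sorted(jobs.items(), key=lambda kv: -kv[1]):
        # sorted() breaks ties between equally loaded cells deterministically
        cell = min(sorted(feasible_cells[job]), key=lambda c: load[c])
        assignment[job] = cell
        load[cell] += t
    return assignment, load

jobs = {"J1": 8, "J2": 5, "J3": 4, "J4": 3}
feasible = {"J1": {"A", "B"}, "J2": {"A", "B"}, "J3": {"A"}, "J4": {"A", "B"}}
assignment, load = load_cells(jobs, feasible, ["A", "B"])
```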
Cell Scheduling is the internal control of the jobs within each
cell. Scheduling determines the order of the jobs onto each machine and
the precise start time and completion time of each job on each machine.
Each cell within a cellularly divided manufacturing system can be consi
dered a modified flow shop. A modified flow shop exhibits most of the
same characteristics as a job shop. The job shop can be modeled as
either a static or a dynamic system. For a static system, the jobs are
pooled and scheduled at fixed time intervals. Once the jobs are sche
duled, the schedule is set and fixed. For a dynamic system, the jobs
are scheduled as they arrive at the shop or at a machine. With this
flexibility, the queues in front of each machine can continually change.
If the job shop is modeled as a static system, the available sche
duling methodologies fall into three categories: combinatorial methods,
mathematical programming, and Monte Carlo sampling. If the cellular
manufacturing system is modeled as a dynamic system, the available
scheduling methodologies fall
into two categories: queuing theory, and heuristic sequencing rules.
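The heuristic-sequencing category can be illustrated with the classic shortest-processing-time (SPT) dispatch rule: whenever the machine frees up, run the queued job with the smallest processing time. The queue contents are invented for illustration.

```python
def spt_schedule(queue):
    """queue: {job: processing_time}. Dispatch jobs by shortest
    processing time; return the sequence and each completion time."""
    t = 0
    sequence, completion = [], {}
    for job in sorted(queue, key=queue.get):
        t += queue[job]
        sequence.append(job)
        completion[job] = t
    return sequence, completion

sequence, completion = spt_schedule({"J1": 6, "J2": 2, "J3": 4})
```

SPT is a natural choice for a dynamic system because it needs no look-ahead: on a single machine it minimizes mean flow time among the queued jobs.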
Specification of the FMS computer system varies widely depending on
system size. For controlling more than 15 machines, the systems may be
broken down into as many as five distinct levels. At the top level is
the corporate host; the second level, a factory or site host; the third
level, the FMS cell manager; the fourth level, individual functional
computers to control storage systems, transportation, tool management,
and other functions; the fifth level, the supervisory computer installed
at the machine tool ahead of the CNC and the programmable logic controller.
The CIM approach develops dated net requirements from the user's wide area
network, which incorporates MRP, cost, and master schedule functions, and
passes production information down to a cell manager level.
The only required hierarchical level common to any size system, the
cell manager, receives dispatch lists for scheduling and tracking from
simulations performed on the corporate host. If the number of machines
in the system is small, other functions such as machining, inspection,
and administration may also be handled at this level.
3.6. Supporting Theories and Software
3.6.1. CNC Standard Formats
Integrating CAD/CAM with factory equipment is often slowed at a
bottleneck created at the CNC postprocessor stage. Also, users do not
have program portability. To solve some of these problems, a standard
CNC programming format, the EIA Standard RS-494 or Binary Cutter
Location (BCD Data Exchange Format for Numerically Controlled Machine
Tools, was developed. This standard, published in August 1983 by the
Electronics Industry Association, defines a specific format for Cutter
Location (CL) data that can be easily output from most CAD/CAM systems.
The data can then be input directly into CNC controls that comply with
the RS-494 standard. This can be accomplished without conventional
postprocessors or the need for reprogramming.
With the program portability allowed by the standard, programs can
be prepared ahead of time, can be stored on 8" (203 mm) floppy disks,
and can even be moved between similar machines without being reprogrammed.
Though the RS-494 format does not allow 100% portability, it can
significantly increase flexibility.
There is no need to change the entire NC operation to implement
BCL. The two basic requirements for implementation are to begin speci
fying RS-494 compatibility on new and retrofit CNC and to provide compa
tible output from CAD/CAM or other programming systems. Some companies
are phasing into BCL usage by retrofitting existing machines with a CL
exchange processor, a black box approach which provides compatibility.
To produce a part on a numerically controlled machine tool requires
a part program. This program defines the functions to be performed,
including the cutter path, and also commands for tool changes, spindle
speeds, feed rates, coolant, and other parts of the machining process.
The program format has traditionally been known as a conventional part
program. The NC programmer creates a high-level part program rather
than a conventional program. This can be a language program like APT
(Automatically Programmed Tool) or a graphics-generated program using CAD/CAM.
The APT or CAD/CAM processor, an application software program,
converts the part program source into an intermediate form known as a
cutter location data file. It contains the same type of instruction
information as a conventional program, but is not yet formatted for a
specific machine tool. Since there is no standardization of conventio
nal programs, a unique piece of software called a postprocessor is
required to format data into a specific conventional program format. A
different postprocessor is required for each new combination of machine
tool and CNC.
Both the APT or CAD/CAM processor and the CNC must use the EIA RS-
494 standard CL data format to implement this approach. Most APT or
CAD/CAM systems vary slightly from the standard, and a small conversion
program is usually required to standardize the output [157].
3.6.2. Group Technology (GT) and Group Scheduling
W. P. Darrow mentions Group Technology (GT) [69] as a technique for
identifying and bringing together related or similar components in a
production process in order to take advantage of their similarities by
making use of the inherent economics of similar setups and flow produc
tion methods. Components are broken down into subgroups, called families,
based on part code and process routing. Components within a subgroup
have similarities in production processes. A set of machines, called a
GT cell, is allocated to each family, located close to each other, and
dedicated to the manufacture of one family of parts.
Some of the major benefits of a GT cell are reduction of transpor
tation and queueing time between operations, simplification in the
design, preparation of process sheets, and production controlled by
manufacturing within a cell. One advantage of using GT is that the
total number of machines and jobs in a GT cell is significantly smaller
in comparison to a general job or flow shop.
Group scheduling has been identified as a logical means of integra
ting GT into an existing Material Requirement Planning (MRP) system.
Group scheduling concepts can be incorporated into the material planning
function of MRP by modifying the lot sizing procedures to handle the
aggregation of family production runs. GT can also be used in the
material control function by using group scheduling techniques for
machine scheduling.
3.6.3. Simulation
Simulation of the FMS has two functions. The first function is to
determine the operating characteristics and capabilities of various
vehicle guide path layouts. Under this evaluation, data would be
gathered on vehicle utilization, efficiency of vehicle usage, and
optimum placement of control elements [105].
A comprehensive simulation effort helps to assure the efficient,
reliable operation of a major automated system. One of the most impor
tant uses for simulation is the development of a control strategy to
manage the routing of shuttle cars carrying empty racks. Simulation can
be used to study the control system so as to have the majority of empty
racks go straight back to molding. Another approach involves a time
delay to provide a "window of opportunity." The use of simulation
determines if there is enough queuing capacity in the assembly area to
live with anticipated intermittent interruptions of shuttle cars move
ment without having to include an alternate path for the vehicles.
Graphic simulation of a complex system is probably one of the best
communication media. It is particularly effective in orienting manage
ment and those responsible for running the manufacturing system. After
the system is installed and running, the model can be used to analyze
planned changes that may be required to meet new production require
ments. Once built, the model can be used for ongoing simulation in an
evolutionary way [184].
One of the most important aids in the implementation of robot
systems is interactive computer graphics software packages. The use of
these packages for simulation purposes provides significant time savings
in the layout and modeling of robot workcell components and in confirming
that the final installation will perform as intended. This capability
essentially became available with the introduction of Computer-Aided
Design (CAD) systems capable of 3D modeling. More recently, it has been
enhanced with the development of simulation packages that dynamically
model the robot in basically a real-time mode.
3.6.4. Artificial Intelligence (AI) and Expert Systems
Artificial Intelligence (AI) is the study of knowledge representa
tions and their use in language, reasoning, learning, and problem
solving. AI programs gain flexibility over conventional systems by
using a changing knowledge base rather than a fixed, preprogrammed
algorithm [181].
The most widely used AI programming languages are LISP and PROLOG
(PROgramming language based on formal LOGic). They are different from
conventional procedural language because they allow a programmer to
define a desired result without being concerned with the detailed
instructions of how it is to be computed. Sometimes called "declarative"
languages, they are well suited to special computers, called
parallel processing computers (multiple computers working together
simultaneously), as programs that can be run in any order without regard
to sequence. The languages which deal with operations, objects, words,
and ideas are therefore most adaptable to robotic functions in an
automated manufacturing system.
Expert systems are currently the most emphasized area in the field
of AI. An expert system is an intelligent computer program that uses
knowledge and inference procedures to solve problems that are difficult
enough to require significant human expertise for their solution. The
knowledge of an expert system consists of facts and heuristics. The
performance level of an expert system is primarily a function of the
size and quality of the knowledge base that it possesses.
Expert systems have been applied to Computer-Aided Design (CAD),
such as VLSI and electronic circuits, and have been proposed for CAD/CAM
applications. A CAD system could alert a designer that a design does
not conform to specifications; it might also perform functions such as
selecting the inspection points of a manufactured product based on an
analysis of its design.
To isolate a problem, the trouble-shooting system first displays a
menu of possible fault areas. When the user selects a particular fault
area, the system proceeds with a series of detailed questions. At
appropriate points during the question-and-answer session, CAD drawings
or video disk sequences are displayed on a CRT screen to assist the user
in locating various components. Finally, when the trouble-shooting
system identifies the cause of the malfunction, it generates specific
repair instructions.
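The question-and-answer session described above amounts to walking a decision tree from symptoms to a repair instruction. A toy sketch follows; the fault areas, questions, and repair advice are invented for illustration and stand in for a real knowledge base.

```python
# Decision tree: internal nodes are questions, leaves are repair advice.
fault_tree = {
    "question": "Does the spindle turn at all?",
    "yes": {"question": "Is the speed unstable?",
            "yes": "Replace the spindle speed controller board.",
            "no": "Check the tool holder for wear."},
    "no": "Check the spindle drive fuse.",
}

def diagnose(node, answers):
    """Follow the user's yes/no answers down the tree to a leaf."""
    while isinstance(node, dict):
        node = node[answers.pop(0)]
    return node

repair = diagnose(fault_tree, ["yes", "no"])
```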
Modern distributed control systems can monitor thousands of process
variables and alarms, delivering a constant stream of information to the
control room. PIPCON (Process Intelligent CONtrol) is a real-time expert
system for process control available from LISP Machine, Inc. PIPCON can
monitor up to 20,000 measurements and alarms and can assign priorities
to alarms to assist an operator in dealing efficiently with a process
interruption or fault. Other important areas being addressed by expert
systems in manufacturing include energy management systems, facilities
management systems, and factory and plant design simulation systems.
Expert systems are not problem-free. They tend to be highly memory
intensive, which may not be a terribly big problem as the cost of
hardware continues to decline. Another certainly near-term problem is
that the specialists required to set up such systems are in short supply.
Estimates suggest perhaps as few as 250 people in the entire US can
tackle serious expert system challenges. A third problem is potential
resistance to expert systems on the part of end-users and programmers.
Human nature being what it is, reluctance to accept, let alone embrace,
expert systems can be expected. Last, expert systems cannot be accurate
all of the time [148].
3.6.5. Computer-Aided Statistical Quality Control (CSQC)
Pontiac Motor Div., General Motors Corp. has developed a computer-
aided gauging information and Statistical Quality Control (SQC) program
[26]. It provides the advantages of real-time analysis through compute
rization at every process. It also eliminates the delays and errors
associated with man-made control charts. By computer, each machine's
current statistical behavior and capability, not just workpiece dimen
sions, are monitored, assuring they are meeting specifications.
The effect is that action on a given process is taken when neces
sary to prevent production of out-of-specification products. Final
inspection, or action on the output side, is past oriented, involving
detection of out-of-specification products already produced and requiring
physical sorting, scrapping, or reworking. The end result is that this
system can find a particular problem at the source, not at the end of
the production line when it is too late to take corrective action. The
Pontiac system allows the automaker to move out of the part-sorting
business and into the business of producing a higher ratio of usable
products.
With the process running in statistical control, process capability
can be assessed. If the process cannot produce parts that consistently
conform to specification, the process itself must be investigated and
action taken to identify and correct the significant faults of the
system, or specifications will need to be reassessed. In general,
capability is a measure of the variation that a process can maintain.
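The closing statement can be made concrete with the usual capability indices: for a process with standard deviation sigma running between specification limits LSL and USL, Cp compares the tolerance width with the process spread, and Cpk additionally penalizes an off-center mean. The sample numbers below are invented.

```python
def capability(mean, sigma, lsl, usl):
    """Cp = (USL - LSL) / (6 * sigma); Cpk accounts for centering.
    Cp >= 1 roughly means the spread fits inside the specification."""
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mean, mean - lsl) / (3 * sigma)
    return cp, cpk

# A well-centered process: Cp equals Cpk.
cp, cpk = capability(mean=10.0, sigma=0.1, lsl=9.4, usl=10.6)
```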
3.6.6. Database Management
A database is a collection of stored operational data used by the
application systems of some particular enterprise [212]. A database
system provides the Computer Integrated Manufacturing (CIM) with centra
lized control of its operational data which is one of its most valuable
assets. This is in contrast to some systems where each application
software package has its own private files so that the operational data
is widely dispersed, and is therefore probably difficult to control,
leading to suboptimal use of data storage and duplicated data that is
inconsistent when only one copy has been updated.
With a centralized database, not only can redundancy be reduced,
but also the data can be shared and integrated. Data requirements of new
applications may be satisfied without having to create any new stored
files, ensuring integrity of the data in the database.
Based on overall system requirements, the centralized database can
be organized and structured to provide best service for the subsystems
and ensure that all applicable standards are followed in representation
of data. Standardizing data formats is especially desirable for data
interchange or transformation between systems.
The size of a centralized manufacturing database may grow to be
quite large. This extensive integrated manufacturing database will be
composed of process plans, fixture and tooling designs, NC programs,
quality control and inspection programs, and robot programs. It is
quite possible that data processing will ultimately become a subset of
the manufacturing operation once a companywide CIM operation is built.
An ideal database for CIM should allow the users to redraw the model to
suit new tolerances without endangering design integrity. It should also
allow users to interrogate and extract information from the database to
aid in process planning and tool selection. It may also offer help by
retrieving previously created data on feeds and speeds as well as
machine tool characteristics for the postprocessing phase.
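Such a retrieval capability might be sketched as a small relational query. The table and column names below are illustrative assumptions for this sketch, not a schema from the dissertation.

```python
import sqlite3

# Illustrative feeds-and-speeds table keyed by material and process.
# All table names, column names, and values here are assumptions.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE cutting_data (
    material TEXT, process TEXT, feed_rate REAL, spindle_speed REAL)""")
conn.executemany(
    "INSERT INTO cutting_data VALUES (?, ?, ?, ?)",
    [("aluminum", "milling", 12.0, 3000.0),
     ("steel",    "milling",  4.5, 1200.0),
     ("steel",    "turning",  6.0,  900.0)])

def lookup_cutting_data(material, process):
    """Retrieve previously stored feed and speed for the postprocessing phase."""
    row = conn.execute(
        "SELECT feed_rate, spindle_speed FROM cutting_data "
        "WHERE material = ? AND process = ?", (material, process)).fetchone()
    return row  # None if no stored data exists for this combination

feed, speed = lookup_cutting_data("steel", "milling")
```

A real CIM database would of course hold far richer records (process plans, NC programs, tooling data), but the access pattern is the same: a keyed query against centrally controlled data.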
78
3.6.7. Decision Support System (DSS)
A Decision Support System (DSS) can be one of the software components
of an organization's information system. The main objective of a
decision support system is to improve the effectiveness of the decision
making process. The decision support system provides the control
capability for structured problems, and support in the decision
making process for unstructured problems. A structured problem can
be defined as a problem that has been anticipated and occurs repeatedly.
An appropriate response for such a problem can be programmed in advance.
An unstructured problem may be defined as a problem that has not been
anticipated but arises occasionally. An appropriate response for such a
problem may be sought with the help of the decision support system after
the problem has occurred [53].
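The structured/unstructured distinction can be sketched as a dispatch table: anticipated problems map to programmed responses, while anything else is routed to the decision maker with DSS support. The problem names and responses below are illustrative assumptions.

```python
# Programmed responses for structured (anticipated, recurring) problems.
# Problem names and response strings are illustrative assumptions.
PROGRAMMED_RESPONSES = {
    "tool_worn":       "change tool",
    "machine_blocked": "reroute workpiece",
    "buffer_full":     "hold upstream release",
}

def respond(problem):
    """Return a programmed response for a structured problem, or flag the
    problem as unstructured so the DSS can support the human decision maker."""
    if problem in PROGRAMMED_RESPONSES:
        return ("structured", PROGRAMMED_RESPONSES[problem])
    return ("unstructured", "alert decision maker; furnish system status data")
```

The point of the sketch is the control-flow split, not the table contents: structured problems are handled automatically, unstructured ones fall through to human decision making.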
The concept of a DSS can apply to all levels of the organization
that interact with the CIM, and to the integration of the CIM with
operations in the rest of the organization. It can offer not just an
evolutive function, but a generative function. Given its capacity for
substantial computation, the DSS can propose candidate actions and
optimize among them, thereby creating a recommended course of action for
supervisor approval. The plant wide targets and production goals will
serve as inputs to the three operational levels.
The first level consists of long-term decision making, typically
done by higher management. This involves establishing policies, production
goals, economic goals, and decisions that have long-term effects.
There are many ancillary support services, including: extended part
programming facilities, and part program verification tools. Part
design and manufacturing analysis is an effort that must be supported by
a variety of utility programs on the mainframe computers. To ensure
that the programs do what is intended, extensive part program
verification aids must also exist. These usually involve some form of graphic
analysis and tool path plotting.
The second level involves medium-term decisions, such as dividing
overall production targets into batches of parts, assigning system
resources to each batch in a manner that maximizes resource utilization,
and responding to changes in upper-level production plans or
material availability. The criteria the batching procedure should
satisfy are to minimize the number of batches required to process all
parts (this minimizes the time associated with batch changeovers), and
maximize the average utilization over all machines (this minimizes the
time required to work through an individual batch). Balancing the
workload involves minimizing differences in time required for workloads
assigned to different machines, and ensuring all the work for each batch
is assigned to some machine in the system.
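The batching criteria above amount to a load-balancing problem, for which simple heuristics exist. The sketch below uses the longest-processing-time (LPT) rule, a standard greedy heuristic and not the dissertation's own procedure, to balance job times across machines.

```python
import heapq

def balance_workload(job_times, num_machines):
    """LPT heuristic: assign each job (longest first) to the currently
    least-loaded machine, reducing differences in machine workloads."""
    # Heap entries are (current load, machine id, assigned job times);
    # the unique machine id breaks ties so lists are never compared.
    machines = [(0.0, m, []) for m in range(num_machines)]
    heapq.heapify(machines)
    for t in sorted(job_times, reverse=True):
        load, m, jobs = heapq.heappop(machines)
        jobs.append(t)
        heapq.heappush(machines, (load + t, m, jobs))
    return sorted(machines, key=lambda x: x[1])  # ordered by machine id

assignments = balance_workload([7, 5, 4, 3, 3, 2], num_machines=2)
```

For the sample job times the heuristic happens to split the 24 hours of work evenly, 12 per machine; in general LPT only approximates the balanced optimum.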
The third level involves short-term decisions. The time horizon is
typically a few minutes or hours, and the decisions involve work order
scheduling and dispatching, movement of workpieces and material handling
system, tool management, system monitoring and diagnostics, and reacting
to disruptions. Of equal importance to the decision making within each
level is communication between the levels.
Integration of the operational levels is also an important feature
to be incorporated within the DSS software to be used with the FMS.
Basically, a Production Decision Support System (PDSS) consists of three
80
major components: a database, a control support system, and an analysis
system.
The analysis system of the PDSS is a set of software modules which
are developed specifically to deal with the unstructured production
problems associated with a CIM system. Whenever an unstructured
production problem appears on the CIM shop floor, alert information is
issued via the output communication terminal to the decision maker who
is in charge of the production control activities. After identifying
the problem, the decision maker establishes the feasible alternatives
with the aid of the system status information and relevant data
furnished by the database.
The basic capabilities in a PDSS include order scheduling, order
loading, work station capacity requirements adjustments, workpiece
sequencing, and reporting manufacturing performance, order status, and
alert information. For effective control, feedback information on the
controlled system's behavior is usually required. The reports are used
to deal with the unstructured production problems of a CIM [53].
3.6.8. Inventory Control
On average, 34 percent of the current assets and 90 percent of the
working capital of a typical company in the United States are invested
in inventories. In 1979, Herron reported that for many firms, inventory
costs are approximately as large as before-tax operating profits [168].
Inventory management decision systems in the new international industrial
competition of the future can no longer be designed separately from
81
their production process contexts. Inventory management, production
planning, and corporate strategy are all closely linked together.
3.7. Summary
It is apparent that Computer Integrated Manufacturing (CIM) systems
are a fruitful area for developing improved control policies. These
systems have considerable complexity because of the requirement for
multilevel control, the inherent variability due to the possibility of
machine breakdowns, the different processing requirements of jobs (both
in terms of routing and processing times), and the difficulty and cost
of collecting reliable information. As a result it has proved difficult
to formulate and solve adequate models of their control and operation.
Since the ability to develop exact models seems to be inherently
limited by the complexity of the systems, there is a need for further
work in the development of good approximate models of these multilevel
control systems with particular focus on the interaction between the
different levels of control [47].
The selection of scheduling rules, the determination of production
parameters (such as types of parts produced, number of stations and
queues, types of machines, tools and transportation devices, types of
operations, time and cost considerations, and type of shop), and
the design of control algorithms are problems which have not been
solved. Most research has not produced general rules for further
use. The best available approach is a heuristic procedure that searches
for answers close to optimal. Before an integrated package can be
developed, these problems need to be further studied and better decision
rules need to be developed.
Future application of CIM and the technical problems to be solved
include:
1. Computerized data banks based on proven and established cutting
and forming data, with the prospect of data optimization in the future.
Given the major investment required by a CIM system, the availability of
the best process data can have a direct effect on the economics of
manufacturing.
2. Automated computerized process planning and estimating as a
means of generating consistent and efficient process routes through an
FMS and, more importantly, through the other areas of manufacture
running in parallel with the FMS. Automatic process planning is also a
prerequisite for an FMS operating under a 'real-time' scheduling system,
so that any requirements for rerouting can be responded to immediately
by the generation of a 'new' process plan.
3. Computerized loading and scheduling must consider not only the
needs of a CIM system representing only part of a manufacturing
facility, but also the whole series of processes which combine to make
finished products ready to deliver to the customer on time. Real-time
scheduling is needed not simply to make frequent changes by the hour or
minute, but rather to provide the ability to change a manufacturing or
tooling schedule quickly at a particular point in time, and to take full
account of such a change throughout the factory as a whole. This will
enable at least two of the likely benefits of CIM - reduced
Work-In-Process (WIP) and better response to customer demands -
to be realized in other non-CIM parts of the factory.
4. Linking with office-based CAD/CAM systems using common databases
from the point of component design through the preproduction
processes, thus ensuring that once the information concerning the
technical requirements of a part has been defined it can be processed and
reprocessed automatically via automated planning, scheduling and loading
systems into a series of direct manufacturing instructions in the form
of an NC program. This also means that any changes made, either for
reasons of performance, production, or cost, can be accommodated
throughout the system in the shortest possible time.
CHAPTER 4
CASE STUDY
Chapter 3 described the structure, components, and functions of
Computer Integrated Manufacturing (CIM) systems which, by nature of
their complexity, can be defined as imperceivable systems whose
analysis and design require decomposition into subsystems. Putting
bug-free modules together to form a complete system, however, does not
guarantee that the system will work successfully. By taking into account
the supporting theories and software associated with Flexible
Manufacturing Systems (FMS) and other organizational aspects, it is
possible to define a set of generic CIM system characteristics, on which
the case study in this chapter is based.
A shop floor control system is a subsystem of a CIM system. It is
chosen for study in this chapter because it is the essential and
most complex part of a CIM system. A shop floor control subsystem is
actually a group of Flexible Manufacturing Cells (FMC). The FMC
installed in the Industrial Engineering Department at Texas Tech
University includes a CNC (Computer Numerical Control) Mill, a CNC
Lathe, a programmable robot, and a microcomputer. The system analysis
and design methodologies are applied to this FMC. Appendix A presents
the information about the CNC Mill, TRIAC. Appendix B describes the CNC
Lathe, ORAC. The information includes function keys, operation modes,
machine codes, standard equipment, and extracts from program listings.
A context diagram, developed by the author, is used for preliminary
analysis to help understand the application systems. In the context
diagrams, the thick solid lines represent functional hierarchy
relationships, the light lines horizontal relationships, the thin solid
lines supporting software, and the project lines external system
relationships. Each rectangular box represents an independent function,
which can be broken down into submodules in the lower level context
diagrams. Each rounded box represents a supporting mechanism. Each
subsystem is treated as an external module.
There are eight internal modules in the shop floor control context
diagram (Figure 1): cell coordinator and controller, job allocation,
material handling, device control, safety handling, quality control,
data acquisition and processing, and user interface. Three external
modules are factory management, CAD/CAM, and scheduling. The supporting
mechanism is a cell computer. The relationship of the shop floor control
subsystem to other subsystems is shown in Figure 21 in Chapter 5.
Internal module operation and interaction with other modules are
described in detail in Chapter 3.
In the following section, nine methodologies are applied to the
shop floor control system based on the same level of analysis and design.
Some systems have concurrent processes, which cannot be identified from
the structure of systems but can be identified from the sequence of
processes. Some processes may take much longer to complete than others,
but the time constraint is seldom included in most design methodologies
FIGURE 1. Shop Floor Control Context Diagram
and diagrams. The only methodology among the nine that shows the time
factor is the IDEF2 Dynamic Model, an application of the Structured
Analysis and Design Technique (SADT). Some design methodologies, such as
MJSD, SADT, PSL/PSA, and HOS, check for inconsistency between data
inputs and outputs, which decreases the difficulty of linking modules
together to establish a complex CIM system.
In the system development cycle, the fourth step is programming.
Software implementation, testing, debugging, and maintenance present
other issues in system development. Knowing how to go from design to
programming is important for both the designer and the programmer. If
the system design does not lead to successful program implementation,
the designer should confer with the programmer and modify his design.
So, in this case study, the feasibility and applicability of programming
from the results of analysis and design is also discussed, based on the
pseudocodes written for the FMC (see Appendix C). Because machining
processes and robot movements in the computerized manufacturing system
are concurrent and time-critical, a real-time multitasking executive
must be implemented. In this case study, a commercial real-time
software system (KADAK's AMX86) was the executive.
There are five tasks in an FMC program:
1. Operator Task reads in the job specification file, machining
process file, tool file, and machine file and allows the operator to
enter proper data or make menu selections in order to produce part
programs for Mill and Lathe machines. After the part program is
produced, the operator must ensure all machines and materials are ready
to start the process.
88
2. CNC Mill Task should be able to send and receive signals to and
from the CNC Mill machine. Also, it performs the bookkeeping for
process status and machine status. Each machine should be operated
independently.
3. CNC Lathe Task performs functions similar to the CNC Mill task.
In addition, it monitors the CNC Lathe machine. The Mill task and the
Lathe task must both wait for the robot to load material into the
machines and unload parts from them. Sensors in the Mill machine and
Lathe machine check for the presence of any material or a part in the
machine. If material is present, the machine starts the process. If
not, the Mill task or Lathe task will call the Robot task to load
materials. While the Mill task or Lathe task waits for the Robot task
to load or unload materials, it is in 'sleep'. Once the Robot task
finishes loading, it 'wakes' them up. The same process occurs when
unloading parts from machines. Both the CNC Mill and CNC Lathe tasks
have the same priority. When both request the robot, they are served on
a first-come, first-served basis.
4. Robot Task should be able to send and receive signals to and
from the robot, and to monitor robot movement. It also does the
bookkeeping for robot status. When the Mill task or Lathe task calls
for the robot, it sends an argument to the Robot task, which determines
the machine from which to pick up materials or parts, and where to put
them down.
5. Safety Task is responsible for error detection, error correction
and emergency shut down of the manufacturing system. The Mill task or
Lathe task verifies input data from machines. If a fault occurs, the
89
Safety task is called and correction is performed, or the system is
shut down. The Safety task has priority over all other tasks.
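The five-task structure above maps naturally onto a multitasking executive. The sketch below imitates the load-request, sleep, and wake protocol with Python threads and a first-come, first-served queue; it is an illustrative analogue of the task interaction, not the actual AMX86 executive used in the case study, and the Operator and Safety tasks are omitted for brevity.

```python
import queue
import threading

robot_requests = queue.Queue()  # first-come, first-served robot service
log = []                        # observable trace of task activity

def machine_task(name):
    """Mill/Lathe task: request the robot to load material, then 'sleep'
    until the robot 'wakes' this task by setting its event."""
    done = threading.Event()
    robot_requests.put((name, "load", done))  # argument names the machine
    done.wait()                               # task sleeps until robot is done
    log.append(f"{name}: machining")

def robot_task():
    """Robot task: serve load requests strictly in arrival order."""
    for _ in range(2):
        name, action, done = robot_requests.get()
        log.append(f"robot: {action} {name}")
        done.set()                            # wake the waiting machine task

mill = threading.Thread(target=machine_task, args=("mill",))
lathe = threading.Thread(target=machine_task, args=("lathe",))
robot = threading.Thread(target=robot_task)
mill.start(); lathe.start(); robot.start()
mill.join(); lathe.join(); robot.join()
```

Whichever machine's request enters the queue first is served first, and each machine only begins machining after its wake event is set, mirroring the sleep/wake behavior described above.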
4.1. Structured Design (SD)
The DFD (Data Flow Diagram, see Figure 2) is the first step of
Structured Design (SD). The DFD shows how data flow through a logical
system, but it does not give control or sequence information.
The parallel lines on the DFD represent a data store, which corresponds
to a logical file. A terminator shows the origin of data (source) used by
the system and the ultimate recipient of data (sink) produced by the
system. A process specification is created for each box in the lowest-
level DFD to define how data flow in and out of the process and what
operations are performed on the data. A data dictionary defines all
data in the DFD. It can also include physical information about the
data, such as data storage devices and data access methods.
The DFD is hard to draw when there are many modules (nodes) and
paths (edges). In order to squeeze everything onto a page, it is
necessary to shrink the size of modules and text, making the diagram
hard to read, especially at higher levels of detail. As such, the DFD
should probably contain no more than six to twelve process boxes at each
level. Larger DFDs often show too much detail. If the designer does
not thoroughly understand data analysis and data modeling techniques,
inappropriate aggregation of system processes can result.
The DFD is a very valuable tool for depicting flows of data in a
complex system, but it is not adequate for checking all of the inputs
FIGURE 2. Shop Floor Control Data Flow Diagram
and outputs of data. Specifications may be inconsistent for data input
and output, a type of error that can be detected by Michael Jackson
Structured Design (MJSD).
When using the DFD, data models should also be included to give more
representative information about data structures. The data structures
are then designed bottom-up.
The DFD shows that, after part programs are downloaded from the
computer to the machines, the Mill machine and Lathe machine need to
wait for the robot to load materials. It does not show that the machines
are in 'sleep' while waiting, nor the decision rules that determine
which machine is chosen while both machines are waiting.
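The input/output consistency check discussed above can be mechanized directly from a DFD's flow lists: every input a process consumes must be produced somewhere, and every output must be consumed somewhere. A minimal sketch, with process and flow names simplified from the case study:

```python
# Each process lists the data flows it consumes ("in") and produces ("out").
# Names are simplified illustrations, not the full case-study DFD.
processes = {
    "produce part program": {"in": {"job spec", "process file"},
                             "out": {"part program"}},
    "mill control":         {"in": {"part program", "material"},
                             "out": {"finished part"}},
}
external_sources = {"job spec", "process file", "material"}
external_sinks = {"finished part"}

def unmatched_flows(processes, sources, sinks):
    """Return (inputs nobody produces, outputs nobody consumes)."""
    all_in = {f for p in processes.values() for f in p["in"]}
    all_out = {f for p in processes.values() for f in p["out"]}
    produced = sources | all_out
    consumed = sinks | all_in
    return all_in - produced, all_out - consumed
```

An empty pair of result sets means every flow on the diagram is both produced and consumed; any flow left over points at exactly the kind of input/output inconsistency the text describes.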
4.2. Meta Stepwise Refinement (MSR)
In MSR, each level of refinement is a machine. The three components of
machine Mi are a data structure set (Di), an instruction set (Ii), and
an algorithm (Ai).
The Shop Flow Control system performs the following functions: set
up operator interface, produce part programs, monitor CNC Mill machine,
monitor CNC Lathe machine, monitor robot, write process report, write
status report, and write error report.
STEP 1: The system program is divided into eight basic components
corresponding to the eight program tasks.
Machine M1

D1 (job specification file, process file, machine file, tool file)

I1 (set up operator interface, produce (Part Programs),
    monitor (equipment), write (reports))

A1: (set up operator interface,
     produce (Part Programs),
     monitor CNC Mill machine,
     monitor CNC Lathe machine,
     monitor Intelledex robot,
     write (process report),
     write (status report),
     write (error report))
STEP 2: Machine M1 is defined in terms of machine M2. This is
accomplished by refining the instruction set and/or data structure set
for M1. The decision is to refine the 'produce Part Programs'
instruction. The set D1 is also refined since this is required to
explain the new instruction set I2.
Machine M2

D2 (job specification file,
      material specification records,
      process specification records,
    process file,
      type of process records,
      type of machine records,
      type of tool records,
      force records,
      angles and diameter records,
      feed rate records,
      spindle speed records,
      finishing process records,
      precision records,
    tool file,
      type of tool records,
      process records,
      edges records,
      set up requirement records,
    machine file,
      type of machine records,
      auxiliary equipment records,
      set up requirement records,
      operation mode records,
      process records)
I2 (read (job specification file),
      find (material specification records),
      find (process specification records),
    read (process file),
      find (type of process records),
    write (part program),
      add (process number records),
      add (operation codes records),
      add (X, Y, Z positions),
      add (cutting speed records),
      add (feed rate records),
    read (tool file),
      find (process records),
    write (part program),
      add (tool number records))
A2: (set up operator interface,
     read (job specification file),
       find (material specification records),
       find (process specification records),
     read (process file),
       find (type of process records),
     num_process = 0,
     while (num_process <= job_num_process) do
       num_process = num_process + 1,
       if ((process_process = job_process) and
           (machine_process = job_process))
       then
         write (part program),
           add (process number records),
           add (operation codes records),
           add (X, Y, Z positions),
           add (cutting speed records),
           add (feed rate records),
       endif
       read (tool file),
         find (process records),
       if (tool_process = process_process)
       then
         write (part program),
           add (tool number records)
       endif
     enddo,
     monitor CNC Mill machine,
     monitor CNC Lathe machine,
     monitor Intelledex robot,
     write (process report),
     write (status report),
     write (error report))
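The part-program loop of algorithm A2 translates almost directly into executable code. The following is a loose Python rendering of that loop only; the record field names (process, machine, op_code, and so on) and the NC-style output format are illustrative assumptions based on D2, not the dissertation's actual file formats.

```python
def produce_part_program(job, process_records, tool_records):
    """Sketch of A2's inner loop: for each process required by the job,
    emit part-program records and the number of the matching tool."""
    program = ["HEADER"]
    for proc in process_records:
        # A2's test: the process matches the job and runs on its machine.
        if proc["process"] in job["processes"] and \
           proc["machine"] == job["machine"]:
            program.append(f"N{proc['number']} {proc['op_code']} "
                           f"X{proc['x']} Y{proc['y']} Z{proc['z']} "
                           f"S{proc['spindle_speed']} F{proc['feed_rate']}")
            for tool in tool_records:
                if tool["process"] == proc["process"]:
                    program.append(f"T{tool['number']}")
    return program

# Illustrative sample data for one milling process.
job = {"processes": {"face"}, "machine": "mill"}
procs = [{"process": "face", "machine": "mill", "number": 10,
          "op_code": "G01", "x": 0, "y": 0, "z": -1,
          "spindle_speed": 1200, "feed_rate": 4.5}]
tools = [{"process": "face", "number": 3}]
program = produce_part_program(job, procs, tools)
```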
STEP 3: We could further refine 'produce Part Programs', or refine
any of the other tasks. Because MSR has already reached the same level
of analysis and design as the other methodologies, the procedure is
stopped here.
Because of the nature of stepwise refinement, MSR describes how to
produce part programs very thoroughly in Step 2, but it lacks
information about the rest of the processes and the data flow. No other
methodology showed how to create part programs, mainly because that is
too detailed to be included at the same level of design as the other
methodologies. MSR has no diagram technique. Procedurally, it is
similar to a programmer starting from scratch to write a program. When
a system is complex, it is difficult to start a program description
right away. Because there is no data structure, it is easy to create
data redundancy.
4.3. Warnier-Orr Design (WOD)
There are six steps in the Warnier-Orr Design (WOD) procedure:
STEP 1 Define The Process Outputs
Finished goods
Part programs
Completed machining processes
Process reports and error reports
STEP 2 Define The Logical Data Base
Job descriptions (materials, shape, size, machining, quantity, due date)
Process description (machining processes, tools, feed rate, cutting force, spindle speed, angles, diameters, precision, finishing processes)
Machine description (type of machine, machining processes, operation modes, set up requirements, tools, auxiliary equipment)
Tools (type, machining processes, edges, set up requirements)
Part programs (detailed processes)
Status reports (machines, processes, tools, raw materials, finished goods, errors)
Completed process reports (part programs, time, date, number of finished goods, errors)
Error reports (machines, processes, items, time, date)
The Warnier-Orr data structure is presented in Figure 3.
STEP 3 Perform Event Analysis
ENTITY     ATTRIBUTES
material   type, strength, toughness, red hot, brittleness
part       material, shape, size, quantity
machine    type, functions, operation modes, tools, set up requirements, auxiliary equipment
processes  type, mode, tool, parts, accuracy, diameter, angles, force, feed rate, speed
tools      type, amount, machining processes, angles
reports    process status, production, error
STEP 4 Develop The Physical Data Base
Job descriptions (materials, shape, size, machining, quantity, due date)
Process description (type, machine, tools, feed rate, angles, cutting force, spindle speed, temperature, finishing processes)
Machine description (type, functions, operation modes, set up requirements, tools, auxiliary equipment)
Tools (type, processes, amount, angles)
STEP 5 Design The Logical Process
Job allocation --> set computer in communication mode -->
determine machining process --> set up tools for machines -->
put up materials for robot --> turn on machines --> load
programs into computer --> robot load materials to machines
FIGURE 3. Warnier-Orr Data Structure Diagram
--> start machining processes --> safety management --> error
detection --> error correction --> emergency shut off or
change tool --> continue processes --> finish processes -->
unload material from machines --> print reports
The Warnier-Orr Process diagram is presented in Figure 4.
STEP 6 Design The Physical Process
At this step, the designer adds the control logic and file-handling
procedures to the design, which are dependent on the programming
language rather than the problem [224].
WOD has a data structure chart and a functional structure chart, but no
data flow analysis. In Step 5 and Step 6, there is neither a diagram nor
guidelines for designing the logical process and the physical process.
WOD is good for small, output-oriented problems with tree-structured
data, but not for other types of databases.
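Because the logical data base of Step 2 is a hierarchy, it maps directly onto tree-structured records, which is the situation WOD handles well. A minimal sketch: the hierarchy as a nested structure, and a walk over it that derives a flat physical record layout in the spirit of Step 4. Field names follow the Step 2 listing; only two groups are shown.

```python
# Two groups from the Step 2 logical data base, modeled as a tree:
# group name -> list of fields.
logical_db = {
    "job description": ["materials", "shape", "size", "machining",
                        "quantity", "due date"],
    "machine description": ["type of machine", "machining processes",
                            "operation modes", "set up requirements",
                            "tools", "auxiliary equipment"],
}

def flatten(tree):
    """Walk the hierarchy to derive a flat (group, field) record layout,
    as Step 4's physical data base does for the logical one."""
    return [(group, field) for group, fields in tree.items()
            for field in fields]

layout = flatten(logical_db)
```

The same walk fails to capture shared or cross-linked data, which is exactly why WOD is limited to tree-structured databases.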
4.4. Top-Down Design (TDD)
Top-Down Design (TDD) has structured charts for both function and
data. At each level, there should be at most a single page of
instructions or a single-page diagram to explain the function. At the
top level, it should be possible to describe the overall design in
approximately ten or fewer lines of instructions. Usually, data
structures are designed in parallel with the procedural structure, but
sometimes the data structures are designed before the program structure.
The TDD functional structure chart is shown in Figure 5.
TDD defines functional structures and data structures. It is
similar to the Higher Order Software (HOS) diagram, but without data
FIGURE 4. Warnier-Orr Process Structure Diagram
FIGURE 5. TDD Functional Structure Chart
flow. One reason that TDD is superior to HOS is that HOS uses binary
structure diagrams, which are not well suited to showing functional
decompositions. For example, the DEVICE CONTROL function can be
decomposed in the TDD diagram into as many devices as necessary in one
level, whereas more than two levels are needed in the HOS diagram,
since only two functions are allowed at each level. A combination of
TDD and a bottom-up approach is often more practical.
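The depth penalty of binary decomposition can be made concrete: with n subfunctions, a diagram that allows n-way branching needs one level of decomposition, while a binary diagram needs roughly log2(n) levels. A small illustrative calculation:

```python
import math

def levels_needed(num_children, max_branching):
    """Levels of decomposition needed to reach num_children subfunctions
    when each node may have at most max_branching children (HOS-style
    binary diagrams force max_branching = 2)."""
    return max(1, math.ceil(math.log(num_children, max_branching)))

tdd_levels = levels_needed(6, max_branching=6)  # one level suffices
hos_levels = levels_needed(6, max_branching=2)  # binary tree needs three
```

For the six-device DEVICE CONTROL example above, the n-ary TDD chart stays at one level while the binary HOS chart needs three, which is the structural awkwardness the text points out.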
4.5. Michael Jackson Structured Design (MJSD)
Michael Jackson Structured Design (MJSD) consists of four sequential
steps:
STEP 1 Data Step: describe each input and output data stream as a
hierarchical structure (see Figure 6).
STEP 2 Program Step: combine all the data structures produced in
the first step into one hierarchical program structure (see Figure 7).
STEP 3 Operation Step: Make a list of executable operations needed
to produce the program output from the input. Then allocate each
operation on the list to a component in the program structure (see
Figure 7). Each box represents an operation.
1. Open job file.
2. Open process file.
3. Open tool file.
4. Open machine file.
5. Read job specification record: process specification and material specification.
6. Read process records: type of processes, type of machine, machining time, type of tools, angles and diameter, forces, feed rate, spindle speed, finishing process, precision.
7. Read tool records: type of tool, process, edges, set up requirements.
8. Read machine records: type of machine, process, operation mode, set up requirements, auxiliary equipment.
FIGURE 6. MJSD Data Structure Diagram
FIGURE 7. MJSD Program Structure Diagram
9. Write part program header.
10. Assign operation code.
11. Assign X, Y, Z positions.
12. Assign tool numbers.
13. Assign cutting speed.
14. Assign feed rate.
15. num_loop = 0.
16. num_loop = num_loop + 1.
17. Write process number.
18. Write operation code.
19. Write X, Y, Z position.
20. Write tool number.
21. Write cutting speed.
22. Write feed rate.
23. Write end of part program.
24. Monitor production processes.
25. Write status reports.
26. Write production reports.
27. Write error reports.
STEP 4 Text Step: Write the ordered operations, with condition
logic included, in the form of structured text, a formal version of
pseudocode.
PART PROGRAM seq
    write part program header
    open job file
    open machining file
    open tool file
    open machine file
    read job specification records
    read process records
    read machine records
    num_process = 0
    loop
        num_process = num_process + 1
        if ((process_process = job_process) and
            (machine_process = job_process))
        then
            assign process number
            assign operation codes
            assign cutting speed
            assign feed rate
            read tool records
            if (tool_process = process_process)
            then
                assign tool number
            endif
        endif
    end-of-loop
PART PROGRAM end

STATUS REPORT seq
    monitor production processes
    write status report
    if error occurs then goto err
STATUS REPORT end

PRODUCTION REPORT seq
    monitor production processes
    write production report
PRODUCTION REPORT end

err: ERROR REPORT seq
    write error report
ERROR REPORT end
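The report sequences of the text step translate almost line-for-line into a conventional language, with the 'goto err' escape becoming an exception handler. A minimal Python sketch, in which monitor() is an assumed callable returning a status string rather than any interface from the dissertation:

```python
def run_reports(monitor):
    """Text-step report sequences rendered as code. monitor() is an
    assumed callable; returning "error" stands in for a detected fault."""
    reports = []
    try:
        status = monitor()                    # STATUS REPORT seq
        if status == "error":
            raise RuntimeError("fault")       # the 'goto err' escape
        reports.append("status report")
        reports.append("production report")   # PRODUCTION REPORT seq
    except RuntimeError:
        reports.append("error report")        # ERROR REPORT seq
    return reports
```

On a normal run the status and production reports are written in sequence; on a fault, control jumps past both to the error report, exactly as the goto in the structured text dictates.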
MJSD starts from input and output data descriptions, and then proceeds
to a hierarchical program structure. Both data structure and data flow
are included, but neither the sequence of processes nor the functional
hierarchy structure is included. MJSD is strong in data structure, but
weak in control logic design and design verification. The informal
verification in each basic design step is not sufficient. The literal
description of the operation and text steps is not as clear as a data
flow diagram or a functional hierarchical structure chart. The text
step provides similar information, but not as detailed as MSR's,
especially for producing part programs.
4.6. Problem Statement Language/Analyzer (PSL/PSA)
The system flowchart of the Problem Statement Language/Analyzer (PSL/PSA) is in Figure 8. Each of the objects defined in the narrative description can be given a corresponding PSL name and object type (see Table 1). The description of the system using PSL is shown in Table 2. PSL/PSA is a computer-aided design tool which checks for data inconsistency. Four types of information, the sequence of processes, functional structure, flow of data, and data structures, are all
FIGURE 8. PSL/PSA System Flowchart
TABLE 1. Narrative, PSL Name, and PSL Object Type

NARRATIVE                        PSL NAME            PSL OBJECT TYPE
Scheduling Department            Schedule-Dept       INTERFACE
Job schedule & specification     Job-sched-spec      INPUT
Job allocation                   Job-allocation      PROCESS
Job file                         Job-file            SET
Time sheets & job requirement    Time-sheet-req      INPUT
Computer                         Computer            INTERFACE
Computer-control                 Computer-contrl     PROCESS
Operators                        Operators           INTERFACE
Operator-interface               Operator-interf     INPUT
Determine process & machine      Deter-proc-mach     PROCESS
Part programs                    Part-progm          OUTPUT
Shop-floor-control-system        Shop-floor-Ctrl     INTERFACE
Materials                        Materials           INTERFACE
Setup tools for Mill             Set-tool-Mill       PROCESS
Setup tools for Lathe            Set-tool-Lathe      PROCESS
Put up materials for robot       Put-matrl-robot     PROCESS
Mill ready signal                Mill-ready-sign     INPUT
Lathe ready signal               Lathe-ready-sign    INPUT
Robot ready signal               Robot-ready-sign    INPUT
Initial Mill                     Initial-Mill        PROCESS
Initial Lathe                    Initial-Lathe       PROCESS
Initial robot                    Initial-robot       PROCESS
Mill signal robot to load        Mill-wait-load      INPUT
Lathe signal robot to load       Lathe-wait-load     INPUT
Robot load                       Robot-load          PROCESS
Robot end of load signal         Robot-end-load      INPUT
Mill process                     Mill-process        PROCESS
Lathe process                    Lathe-process       PROCESS
Mill need to change tool         Mill-need-tool      INPUT
Lathe need to change tool        Lathe-need-tool     INPUT
Mill change tool                 Mill-change-tool    PROCESS
Lathe change tool                Lathe-change-tool   PROCESS
Mill continue signal             Mill-cont-sign      INPUT
Lathe continue signal            Lathe-cont-sign     INPUT
Mill signal robot to unload      Mill-wait-unload    INPUT
Lathe signal robot to unload     Lathe-wait-unload   INPUT
Robot unload                     Robot-unload        PROCESS
Finished goods                   Finished-goods      OUTPUT
Inventory control Dept           Inventory-Dept      INTERFACE
Process report                   Process-report      OUTPUT
Status report                    Status-report       OUTPUT
Correction request               Correct-request     INPUT
Safety process                   Safety-process      PROCESS
Error correction                 Error-correct       INPUT
Alarm signal                     Alarm               INPUT
Error report                     Error-report        OUTPUT
Shut-off                         Shut-off            PROCESS
TABLE 2. Relationship Between Two Objects

OBJECT              RELATIONSHIP    OBJECT
Schedule-Dept       GENERATES       Job-sched-spec
Job-allocation      RECEIVES        Job-sched-spec
Job-allocation      UPDATES         Job-file
Time-sheet-req      GENERATED BY    Job-allocation
Time-sheet-req      RECEIVED BY     Computer-control
Computer            RECEIVED BY     Computer-control
Computer-control    GENERATES       Operator-interf
Operator            GENERATES       Operator-interf
Operator-interf     RECEIVED BY     Deter-proc-mach
Machine-file        UPDATED BY      Deter-proc-mach
Process-file        UPDATED BY      Deter-proc-mach
Tool-file           UPDATED BY      Deter-proc-mach
Part-progm          RECEIVED BY     Shop-floor-ctrl
Deter-proc-mach     GENERATES       Prep-req
Operator            RECEIVED BY     Set-tool-mill
Operator            RECEIVED BY     Set-tool-lathe
Operator            RECEIVED BY     Put-matrl-robot
Prep-req            RECEIVED BY     Set-tool-mill
Prep-req            RECEIVED BY     Set-tool-lathe
Prep-req            RECEIVED BY     Put-matrl-robot
Set-tool-mill       GENERATES       Errors
Set-tool-lathe      GENERATES       Errors
Put-matrl-robot     GENERATES       Errors
Set-tool-mill       GENERATES       Mill-ready-sign
Set-tool-lathe      GENERATES       Lathe-ready-sign
Put-matrl-robot     GENERATES       Robot-ready-sign
Mill-ready-sign     RECEIVED BY     Initial-mill
Lathe-ready-sign    RECEIVED BY     Initial-lathe
Robot-ready-sign    RECEIVED BY     Initial-robot
Initial-mill        GENERATES       Mill-wait-load
Initial-lathe       GENERATES       Lathe-wait-load
Mill-wait-load      RECEIVED BY     Robot-load
Lathe-wait-load     RECEIVED BY     Robot-load
Robot-load          GENERATES       Robot-end-load
Robot-end-load      RECEIVED BY     Mill-process
Robot-end-load      RECEIVED BY     Lathe-process
Mill-process        GENERATES       Mill-need-tool
Lathe-process       GENERATES       Lathe-need-tool
Mill-process        GENERATES       Mill-wait-unload
Lathe-process       GENERATES       Lathe-wait-unload
Mill-need-tool      RECEIVED BY     Mill-change-tool
Lathe-need-tool     RECEIVED BY     Lathe-change-tool
Mill-change-tool    GENERATES       Mill-cont-sign
Lathe-change-tool   GENERATES       Lathe-cont-sign
Mill-cont-sign      RECEIVED BY     Mill-process
Lathe-cont-sign     RECEIVED BY     Lathe-process
Mill-wait-unload    RECEIVED BY     Robot-unload
Lathe-wait-unload   RECEIVED BY     Robot-unload
Robot-unload        GENERATES       Finished-goods
Robot-unload        GENERATES       Process-report
Finished-goods      RECEIVED BY     Inventory-Dept
Process-report      RECEIVED BY     Shop-floor-ctrl
Robot-load          GENERATES       Errors
Robot-unload        GENERATES       Errors
Mill-process        GENERATES       Errors
Lathe-process       GENERATES       Errors
Errors              RECEIVED BY     Computer-control
Computer-control    GENERATES       Correct-request
Correct-request     RECEIVED BY     Safety-process
Safety-process      GENERATES       Error-correct
Safety-process      GENERATES       Error-report
Safety-process      GENERATES       Alarm
Error-report        RECEIVED BY     Shop-floor-ctrl
Error-correct       RECEIVED BY     Computer-control
Alarm               RECEIVED BY     Shut-off
Shut-off            GENERATES       Shut-off-notice
Shut-off-notice     RECEIVED BY     Shop-floor-ctrl
included. The time factor is also considered in the system dynamics aspect. Regarding the amount of information PSL/PSA contains and its quality (consistency, coherence, accuracy, and completeness), PSL/PSA is superior to any other methodology. The system flowchart, however, requires too detailed a description, which is not necessary for high-level design.
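PSA's inconsistency checking can be illustrated in miniature. The sketch below is not PSA itself, only an analogous check: it scans relationship triples like those in Table 2 and flags data items that are generated but never received, or received but never generated.

```python
def check_consistency(triples):
    """triples: (object, relationship, object) rows as in Table 2.
    Flags items generated but never received, and vice versa."""
    generated, received = set(), set()
    for a, rel, b in triples:
        if rel == "GENERATES":
            generated.add(b)      # object a generates item b
        elif rel == "RECEIVED BY":
            received.add(a)       # item a is received by object b
    return {"never_received": generated - received,
            "never_generated": received - generated}
```

Running this over a complete relationship table would surface dangling inputs and outputs, which is the kind of data inconsistency PSL/PSA reports automatically.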
4.7. Structured Analysis and Design Technique (SADT)
The IDEF systems are an application of the Structured Analysis and Design Technique (SADT). The IDEF diagrams provide more information than any other design technique. The functional hierarchy structure, sequence of processes, and data flows are all included in IDEF0. IDEF1 provides structured analysis for entities, relations, attributes, attribute domains, and attribute assignment constraints, which are seldom mentioned in other design methodologies. IDEF2 shows the time-dependent characteristics of functions and information.

The IDEF system development methodology establishes a formal definition of the system. Because it can be utilized to construct, implement, and maintain an integrated system, it is a suitable methodology for CIM systems. Only the IDEF0 models are used in the study (Figures 9 to 16).
4.8. Higher Order Software (HOS)
The most primitive form of Higher-Order Software (HOS) is similar
in structure to TDD, but the binary tree structure causes the diagram to
be much larger. It also gives input and output for each function, and
their mathematical operations, which provides more information than
FIGURES 9 TO 16. SADT/IDEF0 Diagrams (Author: Shwu-Yan Scoggins; Project: CIM Design; Date: 07/20/86). Node titles recoverable from the diagram title blocks include Shop Flow, Operator Interface, Process Control, Safety Management, Robot Management, CNC Lathe Management, and CNC Mill Management.
TDD. Mathematical operations are determined by precise and objective definitions, which makes the diagram easier for a computer to process. HOS is an automated technique, and because of this it is likely that HOS will become more popular than most of the other methodologies in the future. The HOS control structure chart is in Figure 17.
4.9. HIPO (Hierarchy-Input-Process-Output)
HIPO has a visual table of contents, which is the same as the TDD and Warnier-Orr process diagrams. In the overview and detailed function descriptions, however, it offers neither a functional hierarchy chart nor a logic process control diagram. It simply defines all the inputs in one box, the processes in another box, and all the outputs in a third box. The data is not structured. The HIPO hierarchy chart is the same as a structure chart in TDD. The process charts are in Figures 18 to 20.
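An overview IPO chart of the kind HIPO prescribes, with all inputs in one box, all processes in a second, and all outputs in a third, and no internal structure, can be represented as three flat lists. A toy sketch, with contents abridged from Figure 18:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class IPOChart:
    """A HIPO overview chart: three unstructured boxes."""
    inputs: List[str] = field(default_factory=list)
    processes: List[str] = field(default_factory=list)
    outputs: List[str] = field(default_factory=list)

    def render(self):
        # One line per box, items separated by semicolons
        return "\n".join(
            f"{title}: " + "; ".join(items)
            for title, items in [("INPUT", self.inputs),
                                 ("PROCESS", self.processes),
                                 ("OUTPUT", self.outputs)])

fmc = IPOChart(
    inputs=["Operator", "Raw materials", "Part program"],
    processes=["Job allocation", "Start machining processes", "Print reports"],
    outputs=["Finished goods", "Process reports"])
```

The flatness of the three lists is precisely the weakness noted above: nothing in the chart relates a particular input to a particular process or output.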
4.10. Conclusions
From the above analysis of the nine design methodologies and their diagrams, based on the Flexible Manufacturing Cell (FMC), it can be concluded that no methodology includes all four types of information that a good design methodology and diagram should have. Because there are seldom enough rules or guidelines for designers to follow to subdivide a system, to stop subdividing, to check data consistency, to measure the quality of the design, or to detect errors in the early design stage, the methodologies rely to a greater or lesser extent on the experience and judgement of the designers, who may need more than
FIGURE 17. HOS Control Structure Chart
INPUT
(a) Physical: Operator; Raw materials; Printer; CNC Mill; CNC Lathe; Robot; Computer
(b) Logical: Part program; Job schedule & specifications

PROCESS
Job allocation; Set on operator mode; Determine processes and machine; Set up tools for CNC Mill; Set up tools for CNC Lathe; Put up materials for robot; Set cell computer in communication mode; Turn on CNC Mill; Turn on CNC Lathe; Turn on robot; Load materials to machine; Load part program from CNC machines or host computer; Start machining processes; Change tools; Finish machining processes; Unload materials from machines; Load part program to host computer; Error detection; Safety management; Error correction; Emergency shut-off; Print reports

OUTPUT
(a) Physical: Finished goods
(b) Logical: Part programs; Process reports; Error reports; Completed process

FIGURE 18. Flexible Manufacturing Cell HIPO Diagram
INPUT
(a) Physical: Operator; Raw materials; Computer
(b) Logical: Manual data input

PROCESS
X (X axis moves); Y (Y axis moves); Z (Z axis moves); G (G function); M (M function); FEED (Axes feeds); CLW (Clockwise circular movements); CCLW (Counterclockwise circular movements); T (Tool selection); S (System software version and EPROM test); REPEAT (up to 99 times); MIRROR X; MIRROR Y; AUX/INPUT (4 external output switches); OFFSET (Machine/program offset); SCALE (Scale movements); RESET; COMP (Tool radius compensation); FLOAT DATUM (A selected position as zero); ABS DATUM; LOAD (Load a program); EDIT; DATA LINK (To external equipment); ABS/INC (Absolute/incremental data); CASS (Cassette system); BLOCK SEARCH; TPG (Tool path graphics); INCH/MM (Data units); SPINDLE +/-/FWD/REV; CYCLE; PRINT

OUTPUT
(a) Physical: Finished goods
(b) Logical: Completed process

FIGURE 19. CNC Mill HIPO Diagram
INPUT
(a) Physical: Operator; Raw materials; Computer
(b) Logical: Manual data input

PROCESS
PTP (Point to point); CIRC (Circular interpolation); THRD (Thread cutting); DWELL (Dwell period); AUX I; AUX 0; SUB (Subroutine); INS/MM (Data units); INS/ABS (Incremental/absolute format); X Traverse; Y Traverse; Feedrate Override; Spindle Speed +/-; Reset; Emergency Stop

OUTPUT
(a) Physical: Finished goods
(b) Logical: Completed process

FIGURE 20. CNC Lathe HIPO Diagram
one methodology and one type of diagram to make the analysis and design more complete.

The IDEF systems from SADT are the most suitable for CIM system analysis and design. They incorporate MJSD's data structure and can be used for most complex systems.

Many design methodologies cannot detect inconsistency between input and output data; even some computerized tools have this problem. An ideal computerized tool should automatically check data integrity.

Comparing the amount of information derived from each methodology to the FMC (Flexible Manufacturing Cell) pseudocode provides an insight into the extent to which a methodology produces what the programmer needs.

Some methodologies provide information about the sequence of processes, but concurrent processes are seldom included, leaving the design of such processes dependent on the experience of the programmer instead of the designer. The choice of a real-time multitasking executive will make a difference to programming technique but not to system design.

If a software package is to generate code automatically from the system design, the design has to be expressed in compatible software so that it can be recognized and translated into code. PSL/PSA and HOS meet this requirement: both use interactive design software and automatically check data consistency. PSL/PSA has a very good reputation for consistency, correctness, and completeness, but it takes a great deal of CPU time and memory; for simple system designs, it is too much trouble to use the software.
CHAPTER 5
COMPARISON OF DESIGN METHODOLOGIES
VS APPLICATION SYSTEMS
In the past, research on system analysis and design focused only on the techniques of the methodologies, ignoring the characteristics of application systems and program development problems. Nevertheless, each application system and each methodology can be shown to have its own characteristics. Certain methodologies are suitable for a particular type of application system. A methodology should provide a full set of system specifications and lead to a successful programming environment and system implementation.

In Chapter 4, the nine methodologies were applied to a shop floor control subsystem at the higher level of analysis and design. The same techniques can be applied to the other subsystems and the overall Computer Integrated Manufacturing (CIM) system. The results can be compared and contrasted to determine the relationship between methodologies and application systems. Should the characteristics of the methodologies match those of a particular type of application system, the first hypothesis is confirmed; otherwise, any methodology would be equally suitable for any application system. The results of the study described in this chapter confirm the former assertion.
5.1. The Application Systems
The description of CIM application systems in Chapter 3 shows that system analysis and design can start either from the top level, gradually working down to detail, or from the bottom level, gradually aggregating submodules into higher-level modules. Since the nine methodologies are applied to the same level of each application system, and some methodologies use the bottom-up approach (such as Structured Design), no detailed analysis and design will be provided in this chapter. The four subsystems are: the shop floor control subsystem, which was already analyzed in Chapter 4; the product design subsystem; the production planning/scheduling subsystem; and the inventory control subsystem. The analyses and designs for the systems provide only the context diagrams and conclusions rather than the detailed information given in Chapter 4.
1. Overall CIM system: The context diagram of a CIM system is shown
in Figure 21. A CIM system basically includes three levels: factory,
shop floor, and information systems. At the factory level, the product
design, production planning/scheduling, simulation and inventory
control are performed. Due to the complexity of a CAD/CAM (Computer
Aided Design/Manufacturing) function, it is treated as a separate func
tion from product design.
Between each pair of functions, the relationship may be one-way or two-way. For example, in the two-way relationship, the factory management function supervises the product design function, and the product design function gives feedback to the factory management function (it is feedback because the relationship is vertical instead of horizontal).
FIGURE 21. Overall CIM System Context Diagram
In the one-way relationship, the factory management function determines how and when to use the CAD/CAM function, but the CAD/CAM function does not give feedback to the factory management function. The CAD/CAM function instead influences the product design function, which gives feedback to the factory management function. So many of the functions are directly or indirectly related to other functions, which makes the overall CIM system very complex.
The vertical relationship means that one of the functions monitors
or supervises the other, giving feedback to the original function. In
the horizontal relationship either one of the functions can influence
the other's processes or decision-making.
At the information systems level, a database system provides all the software the system requires, such as part programs, Decision Support Systems (DSS), simulation programs, Computer Aided Process Planning (CAPP), etc. Each software package may support one or several functional modules. For example, the Decision Support System (DSS) supports only the inventory control subsystem, but the simulation function may support the product design subsystem, the production planning/scheduling subsystem, and the simulation subsystem.
2. The shop floor level of the overall CIM system is the group of
Flexible Manufacturing Cells (FMCs) which were described in Chapter 4
(Figure 1).
3. The Product Design subsystem starts by receiving the product description, which comes from the production planning/scheduling subsystem. It then determines the types of materials, shapes of products, machining processes, types of machines, precisions, and tolerances. It may update existing drawings or start with new drawings. In either case it requires the assistance of a CAD/CAM subsystem. When the product drawings are finished, some person other than the designer has to examine the drawings by running a simulation or by simple visual examination. The designer will modify his drawings if necessary and then store them in the CAD/CAM subsystem. The whole process is under the control of the product design management function, which is under the supervision of the factory management function. The context diagram of the product design subsystem is shown in Figure 22.
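The review loop described above (examine by simulation or visual check, modify if necessary, then store) can be sketched as a simple iteration. The checker and reviser functions and the drawing representation below are illustrative assumptions, not part of the subsystem description:

```python
def design_review(drawing, examine, revise, max_rounds=10):
    """Iterate examine -> modify until the drawing passes,
    then return it for storage in the CAD/CAM subsystem."""
    for _ in range(max_rounds):
        if examine(drawing):          # simulation or visual examination
            return drawing            # approved: ready to store
        drawing = revise(drawing)     # designer modifies the drawing
    raise RuntimeError("drawing not approved within round limit")
```

The `max_rounds` guard simply keeps the sketch from looping forever if the reviser never satisfies the examiner.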
The consistency among all the context diagrams must be maintained.
For example, in the overall CIM system context diagram (Figure 21),
there are four external systems to the product design subsystem. The
same relationship is shown in the product design context diagram (Figure
22).
4. The Production Planning/Scheduling subsystem begins with the
occurrence of demand. Then it determines scheduling strategies (Early
Due Date, or Short Process First, etc.), part-type mix ratio, machining
process, type of machine, and produces a production schedule. The
schedule is run under simulation packages for different scheduling
strategies and part-type mix ratios to determine the feasible and
optimal solution. The production planning subsystem interacts with the
product design subsystem to obtain all the product information. It also
interacts with the inventory control subsystem to maintain reasonable
stock levels. The context diagram of the production planning/scheduling
subsystem is shown in Figure 23.
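The scheduling strategies named above can be sketched as sort rules over a job list. A minimal sketch follows; the job fields are assumptions for illustration, not from the text:

```python
def schedule(jobs, strategy):
    """Order jobs by the chosen dispatching rule.
    'EDD' - Early Due Date: earliest due date first.
    'SPF' - Short Process First: shortest processing time first."""
    keys = {"EDD": lambda j: j["due"], "SPF": lambda j: j["time"]}
    return [j["id"] for j in sorted(jobs, key=keys[strategy])]
```

A simulation run over the same job set under each rule, as the subsystem description suggests, would then compare the resulting schedules for feasibility and optimality.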
FIGURE 22. Product Design Context Diagram
FIGURE 23. Production Planning Context Diagram
5. The Inventory Control subsystem determines the control strategies (exponential estimation, smooth estimation, etc.) and sets up safety stock levels and order quantities. When stock drops to the reorder point, orders are placed. Shipping, receiving, and dispatching goods are also carried out by the inventory control subsystem. The production planning/scheduling subsystem should inform the inventory control subsystem about production plans so that the inventory control subsystem can place orders for raw materials. The Decision Support System (DSS) from the information systems level helps in decision making and strategic planning. The context diagram of the inventory control subsystem is shown in Figure 24.
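The reorder logic described above can be sketched directly. The safety-stock and demand parameters below are illustrative assumptions:

```python
def reorder_point(avg_daily_demand, lead_time_days, safety_stock):
    """Stock level at which an order must be placed: expected demand
    during the replenishment lead time plus the safety stock."""
    return avg_daily_demand * lead_time_days + safety_stock

def place_order_if_needed(on_hand, rop, order_quantity):
    """When stock drops to the reorder point, an order is placed."""
    return order_quantity if on_hand <= rop else 0
```

A DSS at the information systems level would tune the parameters (safety stock, order quantity) behind these two functions rather than change the logic itself.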
5.2. The Relationship between Design
Methodologies And Application Systems
Table 3 summarizes the representation criteria vs. the design
methodologies. The criteria are system complexity, structure represen
tations (data structures, data flow, functional structures, and process
flow), decomposition (decoupling and structure clash recognition), and
process control (logical control and data flow control).
There is no objective way to determine the maximum degree of complexity, in terms of McCabe's Cyclomatic Number, that a particular design methodology can handle. One can only refer to other studies to classify the problems a methodology can handle as small, medium, or complex. Most of the methodologies, such as Structured Design (SD), Meta Stepwise Refinement (MSR), Warnier-Orr Design (WOD), Michael Jackson Structured Design (MJSD), and Hierarchy-Input-Process-Output (HIPO), are only good
FIGURE 24. Inventory Control Context Diagram (functions shown: inventory management, determine strategies, place order, receiving raw materials, material dispatching, record keeping, shipping, receiving finished goods, recording W.I.P.)
TABLE 3. Characteristics of Design Methodologies (the table rates SD, MSR, WOD, TDD, MJSD, PSL/PSA, SADT/IDEF, HOS, and HIPO on the complexity they can handle; data structure, data flow, functional structure, and process flow representation; structure clash recognition and decoupling; and logical and data flow control)
for small problems. In the representation of data structures, SD and MJSD are bottom-up, while MSR, WOD, and TDD are top-down. PSL/PSA, SADT/IDEF, HOS, and HIPO have no data structure representation. There is no functional structure representation in SD and HIPO; the other methodologies do have functional structures.

Data flow representation, similar to the entity-relationship model, can be either one (data item) to one (data item), noted as 1:1, or one to many (1:N), or many to one (N:1), or many to many (N:M). SD and HOS are 1:1. HIPO is N:M. PSL/PSA's data metrics show that the data relationship is either N:M or 1:1. SADT possesses all four types of relationship.

SD, MJSD, PSL/PSA, and SADT/IDEF have process flow charts. WOD only verbally describes process flow. MSR, TDD, HOS, and HIPO do not represent process flow. MJSD, PSL/PSA, SADT/IDEF, and HOS can recognize structure clashes, but only MJSD can decouple data structures. WOD and TDD are strong in logical control, SD in data flow control, while PSL/PSA, SADT, and HOS are good in both logical and data flow control. When the data or process flow mechanism is weak, it can simply be treated as not having this type of control mechanism. MSR and HIPO are weak in both logical and data flow controls, making them adequate only for single-module problems.

The ease of design preparation and diagram clarity are important aspects of a methodology; because they can only be evaluated subjectively, they are not included in the methodology characteristics.
Table 4 summarizes the characteristics of each application system,
in terms of system complexity (determined by McCabe's Cyclomatic number
TABLE 4. Characteristics of CIM Application Systems (the table rates the overall CIM system, shop floor control, product design, production planning/scheduling, and inventory control subsystems on complexity, given as nodes, paths, and Cyclomatic number; representation; control; and decomposition, given as cohesion and coupling)
which is the number of paths minus the number of nodes plus 2), struc
ture representation (functional structures, process flow, and data
structures), process control (logical control and data flow control),
and decomposition (cohesion and coupling).
Based on the context diagrams (Figures 1, 21, 22, 23, and 24),
system complexity can be determined by the McCabe Cyclomatic Number. There
is a total of 19 modules for the overall CIM system, 11 for the shop floor
control subsystem, 12 for the product design subsystem, 13 for the production
planning/scheduling subsystem, and 12 for the inventory control subsystem.
Each external system/subsystem is treated as one module.
The edge which connects two functions can be one-way or
two-way. If it is two-way, it is counted as two paths; otherwise it is
one path. The total number of paths is 50 for the overall CIM system,
22 for the shop floor control subsystem, 24 for the product design subsystem,
24 for the production planning/scheduling subsystem, and 23 for the inventory
control subsystem. According to the McCabe Cyclomatic Number v(G),
v(G) = e - n + p
where e is the number of paths (edges), n is the number of modules
(vertices), and p is the number of connected components, which is usually
2 [142]. Then v(G) is 50-19+2=33 for the overall CIM system, 22-11+2=13
for the shop floor control subsystem, 24-12+2=14 for the product design
subsystem, 24-13+2=13 for the production planning/scheduling subsystem,
and 23-12+2=13 for the inventory control subsystem.
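For concreteness, the computation above can be sketched in a few lines of Python. This is an illustrative sketch, not part of the original study; the module and path counts are the ones given in the text, with two-way edges already counted as two paths and p taken as 2.

```python
# v(G) = e - n + p, with e = paths (two-way edges counted twice),
# n = modules, and p = 2 connected components, as stated in the text.

def cyclomatic_number(paths, modules, components=2):
    """McCabe's Cyclomatic Number as applied in the text."""
    return paths - modules + components

# Module and path counts reported for each application system.
systems = {
    "Overall CIM":         (50, 19),
    "Shop floor control":  (22, 11),
    "Product design":      (24, 12),
    "Production planning": (24, 13),
    "Inventory control":   (23, 12),
}

for name, (paths, modules) in systems.items():
    print(name, "v(G) =", cyclomatic_number(paths, modules))
```

Running the sketch reproduces the values derived above: 33 for the overall CIM system and 13 or 14 for each subsystem.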
The McCabe Cyclomatic Number has been chosen from among the many
quantitative measures of system complexity because it can be computed
directly from the context diagrams. McCabe suggested that software
designers restrict their software modules to a Cyclomatic complexity at
or below 10 [142]. When the complexity exceeds 10, designers are forced
to either recognize and modularize subfunctions
or redo the software. The purpose is to keep the size of the modules
manageable and to allow testing of the independent paths. The only situation
in which this limit seems unreasonable is when a large number of
independent cases follow a selection function (a large case statement),
which McCabe allowed. McCabe does not specify what range of the Cyclomatic
number represents what degree of complexity. However, the author states
that complexity is low if the Cyclomatic number is below 10, high if
above 21 (double the limit McCabe suggested for module complexity),
and medium between these values. Based on this, the overall CIM system
is very complex, and each of the four subsystems has a medium degree of
complexity. The results strongly suggest that the overall CIM system
should be decomposed into subsystems, with analysis and design beginning
from the subsystems. Once the analyses and designs for the subsystems are
finished, they can be combined into the full CIM system.
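The author's banding of the Cyclomatic number can be stated as a small function. This is an illustrative sketch; the band boundaries (10 and 21) are the ones given in the text.

```python
def complexity_level(v):
    """Classify a Cyclomatic number v(G): low below 10, high above 21,
    medium otherwise (the author's bands)."""
    if v < 10:
        return "low"
    if v > 21:
        return "high"
    return "medium"
```

Under these bands the overall CIM system (v(G) = 33) is high, and each subsystem (v(G) = 13 or 14) is medium, matching the classification above.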
The functional structure can be one of the following:
1. strong in vertical connection and weak in horizontal connection
(there are more vertical connections than horizontal connections);
in this case, the application system needs functional structure
representation.
2. strong in horizontal connection and weak in vertical connection
(there are more horizontal connections than vertical connections);
in this case, functional structure representation is not needed.
3. approximately equal strength in the vertical connection and in
the horizontal connection (the number of vertical connections
is close to the number of horizontal connections); in this case,
functional structure is needed.
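The three cases can be sketched as a small decision function. This is an illustrative Python sketch; the tolerance used to judge "approximately equal" is an assumption, since the text gives no numeric threshold.

```python
def needs_functional_structure(vertical, horizontal, tol=0.2):
    """Return True for cases 1 and 3 (functional-structure representation
    needed), False for case 2. The relative tolerance 'tol' for
    'approximately equal' is an illustrative assumption."""
    total = vertical + horizontal
    if total and abs(vertical - horizontal) / total <= tol:
        return True                   # case 3: roughly balanced
    return vertical > horizontal      # case 1 if True, case 2 if False
```

For example, a system with many more vertical than horizontal connections (case 1) or roughly equal counts (case 3) needs the representation, while a horizontally dominated system (case 2) does not.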
The overall CIM system belongs to case 3. The shop floor control
subsystem belongs to case 1. The product design, production planning,
and inventory control subsystems belong to case 2. Some methodologies
are excluded for a particular system because they lack the required
functional structures.
The process flow representation must be included in the overall CIM
system, the shop floor control subsystem, and the inventory control
subsystem because the processes in these systems are complex and
procedure-oriented. For some application systems, such as the product
design and production planning/scheduling subsystems, either functional
structure or process flow representation is sufficient. If a large volume
of data is involved in a system, data structures must be represented to
reduce the possibility of inconsistency and redundancy. However, even if
data structures are missing from the resulting design, a methodology can
be used as long as structure clashes can be recognized.
If a system is procedure-oriented, it needs logical control because
control of processes is important. If a system deals mainly with data,
it needs data flow control. The overall CIM system and the shop floor
control subsystem need both logical and data flow controls. The product
design and production planning/scheduling subsystems need strong logical
control, while the inventory control subsystem needs strong data flow
control.
Regarding coupling, little data is passed or shared between modules
in the product design subsystem and procedure dependency is low, so the
degree of coupling is low. A few data items are passed or shared between
modules in the production planning/scheduling and inventory control
subsystems, so the degree of coupling is medium. Because module
dependency is strong for the overall CIM system and the shop floor control
subsystem, and significant data is passed or shared between modules, the
degree of coupling is high for both systems. The coupling factor has
the same effect as the data flow control factor: the higher the degree
of data flow control, the higher the coupling.
The overall CIM system is strong in cohesion, with functional-level
and communicational cohesion. The shop floor control and inventory
control subsystems have procedural cohesion, which is a medium degree of
cohesion. The product design and production planning/scheduling
subsystems have temporal cohesion, which is also a medium degree of
cohesion.
Combining Tables 3 and 4 produces Table 5. Table 5 suggests which
methodology (or methodologies) is suitable for which type of
application system. Due to the complexity of an overall CIM system
and its subsystems, only the use of TDD, PSL/PSA, SADT and HOS is
considered. TDD is weak in data flow control, so it is not suitable for the
overall CIM system, shop floor control and inventory control subsystems.
However, TDD is adequate for the product design subsystem and the production
planning/scheduling subsystem because both systems need only functional
structure or process flow.

Table 5. Suitability of the Design Methodologies for the Application Systems

  Methodology  Overall CIM         Shop Floor          Product  Production        Inventory
               System              Control             Design   Planning (Sched.) Control
  HIPO         Too complex and     Too complex and     Fit      Fit               Weak in data flow
               weak in data flow   weak in data flow                              and logical control
               and logical control and logical control
  HOS          Fit                 Fit                 Fit      Fit               Fit
  SADT         Fit                 Fit                 Fit      Fit               Fit
  PSL/PSA      Fit                 Fit                 Fit      Fit               Fit
  MJSD         Too complex and     Weak in data flow   Fit      Fit               Weak in data flow
               weak in data flow   control                                        and logical control
               and logical control
  TDD          Too complex and     Weak in data flow   Fit      Fit               Weak in data flow
               weak in data flow   control                                        control
               control
  WOD          Too complex and     Weak in data flow   Fit      Fit               Weak in data flow
               weak in data flow   control                                        control
               control
  MSR          Too complex and     Too complex and     Fit      Too complex and   Weak in logical
               weak in data flow   weak in logical              weak in logical   control
               and logical control control                      control
  SD           Too complex and     Too complex and     Fit      Weak in logical   Weak in logical
               weak in logical     weak in logical              control           control
               control             control

(HIPO: Hierarchy plus Input-Process-Output; HOS: Higher Order Software;
SADT: Structured Analysis and Design Technique; PSL/PSA: Problem Statement
Language/Problem Statement Analyzer; MJSD: Michael Jackson Structured
Design; TDD: Top-Down Design; WOD: Warnier-Orr Design; MSR: Meta Stepwise
Refinement; SD: Structured Design)
PSL/PSA, SADT and HOS do not have data structures, which are a
necessity for complex systems like the overall CIM system, shop floor
control and product design subsystems. Because structure clash recognition
compensates for this inadequacy, PSL/PSA, SADT and HOS are suitable for
all five application systems.
For systems with medium complexity, some of the methodologies can
be applied. For example, SD is suitable for the product design
subsystem. WOD, TDD, MJSD, and HIPO are suitable for the product design and
production planning/scheduling subsystems. The product design subsystem
relies less on formal methodology, so any methodology can be used. For
the rest of the application systems, the reasons why some of the
methodologies are not suitable can be determined from a comparison of
Tables 3 and 4, and are indicated in Table 5.
One may assign a weight to each factor according to its importance
to analysis and design of the system. For any 'No' factor, the weight
would be zero; otherwise, an integer value. By summing the total
weights, one can determine which methodology or methodologies are most
suitable.
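This weighting scheme can be sketched as follows. The sketch is illustrative only; the factor names and weight values are assumptions for demonstration, not figures given in the text.

```python
# Hypothetical weighted-sum selection: a 'No' factor contributes zero,
# a 'Yes' factor contributes its (illustrative) integer weight.

def methodology_score(supports, weights):
    """Sum the weights of every factor the methodology provides."""
    return sum(w for factor, w in weights.items() if supports.get(factor, False))

# Illustrative weights and factor profiles (not values from the text).
weights = {"data structures": 3, "data flow control": 2,
           "logical control": 2, "process flow": 1}

sadt = {"data flow control": True, "logical control": True, "process flow": True}
hipo = {"process flow": True}

scores = {"SADT": methodology_score(sadt, weights),
          "HIPO": methodology_score(hipo, weights)}
best = max(scores, key=scores.get)  # methodology with the highest total weight
```

The methodology with the highest total weight is taken as the most suitable one for the system being analyzed.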
From the above studies, it can be concluded that there is no single
methodology which meets all the criteria of an application system.
PSL/PSA, SADT and HOS can be applied to all the systems; however, data
structure representation is missing. If two or more methodologies are
adopted at the same time to complement each other's inadequacies,
analysis and design can be more complete. For example, the data
structures in SD and MJSD (which are bottom-up), or in TDD (which is
top-down), can be effectively used with PSL/PSA, SADT and HOS. This
confirms the second hypothesis proposed in Chapter 1.
Finally, when two or more methodologies are used, the possibility
of conflicting designs should be discussed. If a methodology is applied
properly, it should not produce results incompatible with those of other
methodologies unless different concepts and data norms are used, which
is the fault of the designer rather than the methodology. If
conflicting designs do occur, the designer should reexamine the details
and correct the inconsistency.
CHAPTER 6
CONCLUSIONS
Computer Integrated Manufacturing (CIM) is a direction, not a des
tination; it is a concept, not a package; it is a management process,
not a set of technologies, systems, products, or projects. It cannot be
designed as a common package and sold to customers. Instead, CIM
planning must be provided for each system. Planning and implementation
of CIM can be quite different even in the same factory in different
shops.
CIM designers cannot take traditional manual equipment and turn
it into an automated system. The automated factory must have
computerized equipment such as Computer Numerical Control (CNC), Direct
Numerical Control (DNC), or Adaptive Control (AC) machines. These types of
equipment allow the computer to take control and integrate the whole
system. In the planning stage, designers and users must carefully
evaluate all the specifications and requirements of the system, and
make a wise choice of system hardware and software.
6.1. The Need for Industrial Information System Standards
For many small and medium size companies, purchase of all the
equipment at one time is not possible; one module at a time may be
installed, gradually expanding into a whole CIM system.
Even when equipment and software have been carefully chosen,
interfaces between hardware and hardware, software and software, and
hardware and software are difficult to handle. The documentation is always
insufficient and ambiguous. Even an experienced software engineer may
have difficulty installing the system according to the available
documentation. There are always bugs. When two systems cannot communicate
with each other, neither of the vendors can really provide help.
To solve this problem, industrial standards must be established by
manufacturers, vendors, and users with a view to the nature of
manufacturing activities. The representative standards are: MAP
(Manufacturing Automation Protocol), IGES (Initial Graphics Exchange
Standard), EDIF (Electronic Design Interchange Format), and BDI (Business
Data Interchange). The proposed MAP protocol will make it possible to plug
equipment directly into the MAP network and communicate transparently
with other devices, but such capabilities are several years away.
Although IGES does not always succeed in passing graphics files between
systems and translating them into acceptable figures upon receipt, it is
expected that IGES 3.0 will soon be officially accepted as an
international standard. EDIF is a means of communicating design information
between systems, testing and production equipment in the microelectronics
industry, whose needs are quite different from those of the automotive,
machinery, and other mechanical producers. BDI is designed to facilitate
order processing, shipping and receiving, invoicing, and payments
between separate firms [96].
6.2. Problems with Information and Human Resources
One of the difficulties in assembling individual data set modules
into a complete common database system is the inconsistency and
redundancy of data structures. Taking subtree structures as a subsystem's
data structures can eliminate data inconsistency and redundancy.
However, for an entire CIM system with so many variables involved, a data
structure for the whole system would be too complex to handle. Also, a
common database costs more, and carries a higher risk of failure of the
file medium or file device, and of lapses in data protection and/or
data security, than separate databases. A CIM system is a real-time
processing system, and the degree of difficulty in designing and
maintaining a real-time database is much higher than for a non-real-time
database. Some argue that there is no need to have a common database at
all, since only parts of the data are shared by any two subsystems or
modules. Whether there should be a common database should be determined
according to each system's requirements and capabilities.
Human resources are another problem in CIM. Many more software
engineers with knowledge of system integration techniques are needed.
Information Technology (IT) is becoming an important field in system
engineering because system engineers working in the field of CIM use
techniques from electrical and electronic engineering, mechanical
engineering, industrial engineering, computer science, information
systems, business administration and other application areas.
6.3. Contributions of This Research
This research recognizes the wide-ranging and complex scope of
issues involved in designing and implementing a CIM system, and provides
an approach for analysis and design of CIM information systems.
The results of the case study show that although no methodology
includes all four types of information (data structures, data flow,
functional structures, and process flow), a good system analysis and
design methodology should provide as much information as possible.
In Chapter 5, a comparison of the way in which each of the
methodologies represents the application systems indicates that there is no
single methodology which can describe all relevant aspects of the
application systems. Only PSL/PSA, SADT, and HOS are suitable for all five
types of application systems, but the design is difficult because data
structures are not represented. This problem can be dealt with by using
another method to recognize structure clashes, which demonstrates that a
combination of methodologies generally can make analysis and design of a
system more complete.
6.4. Summary of Characteristics
In Chapter 5, Table 3 summarizes the characteristics applied to
design methodologies in terms of complexity, structure mechanism (data
structures, data flow, functional structures, and process flow),
decomposition (decoupling and structure clash recognition), and control
mechanism (logical and data flow). Table 4 summarizes these same
characteristics applied to application systems in terms of complexity,
structure mechanism (functional structures, process flow, data
structures), control mechanism (logical and data flow), and decomposition
(cohesion and coupling). These two tables show that each application
system and methodology has its own characteristics; combining these two
tables indicates which methodologies are likely to be suitable for a
specific application system.
Based on pseudocode written for the Flexible Manufacturing Cell
studied in Chapter 4, the relationship between the information made
available by the system design and the information used in the process
of program development is evaluated. The results show that analysis
and design methodologies with a closer fit in terms of characteristics
provide more pertinent information about the system, leading to
easier program development than other methodologies. This conclusion
confirms the third hypothesis that the characteristics can be used to
select an analysis and design methodology capable of providing system
specifications for software development and implementation.
This research is the first to point out that, contrary to common
assumptions, a methodology found to be suitable for one application
system may not necessarily be suitable for another. Other contributions
of this paper include:
. it investigates the most common methodologies and reclassifies
them into nine methodologies (the classifications in the
literature are not unique).
. it integrates the technologies of manufacturing and production
in terms of information system theories.
. it provides a set of criteria for evaluation of nine
methodologies and applies them to current CIM application systems.
. it provides a method for choosing and combining methodologies for
an application system.
. it provides an approach leading from system design to program
development.
. it provides a framework for implementing a FMC in a multitasking
environment.
6.5. Recommendations for Further Research
The field of system analysis and design still lacks a
scientifically based methodology; it is still possible to reduce trial and
error when designing, installing, testing, and maintaining a CIM system,
although the assumptions made by each methodology are not provable.
Designing is problem solving, dependent on each individual's ability and
thinking behavior. Designers produce designs; methods do not. A design
problem, although well suited to a particular technique, will always
have some quirk which makes it unique.
Considerable ambiguity remains in Information Technology (IT) and
system analysis and design methodologies. Further research is suggested
in the following areas:
. clearly defining principles, procedures, and rules on how and
when to decompose a system into subsystems.
. defining rules to determine the relationship between two objects
(functions or data).
. determining the complexity that a methodology can handle.
. quantifying the criteria of methodologies as much as possible.
. automating the methodologies as much as possible, allowing the
methodologies to check data consistency and completeness.
. determining the relationship between design and program
development.
. providing industrial standards for hardware and software
interfaces to reduce incompatibility and complexity problems.
BIBLIOGRAPHY
1. Acree, Elaine S. Part And Tool Scheduling Rules For A Flexible Manufacturing System. A dissertation in the Industrial Engineering Department of Texas Tech University, December, 1983.
2. Adlard, Edward and Vogel, Steven. "The Autoplan Process Planning System", Production And Inventory Management, third quarter, 1982.
3. Akella, R., Choong, Y. and Gershwin, S. B. Performance Of Hierarchical Production Scheduling Policy, Laboratory for Information and Decision Systems, Massachusetts Institute Of Technology, February, 1984.
4. Albus, J. S., Simpson, J. A., and Hocken, R. J. "The Automated Manufacturing Research Facility Of The National Bureau Of Standards", Journal Of Manufacturing Systems, Vol 1, No. 1, 1982.
5. Alford, Mack W. "A Requirements Engineering Methodology For Real-Time Processing Requirements", IEEE Transactions On Software Engineering, January, 1977.
6. Alford, Mack W. "Software Requirements Engineering Methodology (SREM) At The Age of Four", Tutorial: Software Design Strategies, IEEE Computer Society, 1980.
7. Ammar, M. and Gershwin, S. "Reliability In Flexible Manufacturing Systems", IEEE Transactions On Software Engineering, 1979.
8. Anderson, Carol A. "AUTOFACT Returns To Anaheim", CAD/CAM Technology, Fall 1984.
9. Appleton, Daniel S. "Building A CIM Program", A Programmer Guide For CIM Implementation, CASA of SME, 1985.
10. Aral, Y., Hata, S., Imakubo, T., and Kikuchi, K. "Production Control System Of Microcomputers Hierarchical Structure For FMS", Proceedings Of 1st International Conference on FMS, October, 1982.
11. Asano, K., Oboshi, S., Takeyama, H., and Sawada, K. M. "Development of Programmable Precision Manufacturing Systems (PPMS) For Small Lot Production", Proceedings of 1st International Conference on FMS, October, 1982.
12. Asirelli, P., Degano, P., Levi, G., Martelli, A., Montanari, U., Pacini, G., Sirovich, F., and Turini, F. "A Flexible Environment for Program Development Based on A Symbolic Interpreter", IEEE Proceedings 4th International Conference on Software Engineering, IEEE, September, 1979.
13. Ayoub, Mahmoud A. and Babur M. Pulat. "A Computer-Aided Panel Layout Procedure for Process Control Jobs — LAYGEN", lEE Transactions, Vol. 17, No. 1, 1985.
14. Babb, Michael. "Factory Automation 1986: Riding On The Systems Integration Skyrocket", Control Engineering, May, 1986.
15. Bakanau, Frank E. Implementation And Maintenance Of A Parts Coding And Classification System. Coding And Classification Workshop, Arlington, Texas, June, 1975.
16. Baker, James. A. "Winning Your Case for Automation", Manufacturing Engineering, July, 1984.
17. Bamhart, Joseph. "Designing With the Help of Solids", CAD/CAM Technology, Fall 1984.
18. Barash, M. M. and Gupta, S. M. Computer-Aided Selection Of Machining Cycles And Cutting Conditions On Multistation Synchronous Machines, School of Industrial Engineering, Purdue University, August, 1977.
19. Barash, M. M., Nof, S. Y., and Solberg, J. J. "Operational Control Of Item Flow In Versatile Manufacturing Systems", International Journal of Production Research, 1970, Vol. 17, No. 5.
20. Barfield, W., Hwang, S. L., Chang, T. C , and Salvendy, G. "Integration of Humans and Computers In The Operation and Control of Flexible Manufacturing Systems", International Journal of Production Research, 1984, Vol. 22, No. 5.
21. Barnes, Bruce H. and Metzner, John R. Decision Table Languages and Systems, Academic Press, 1977.
22. Bassett, Paul. "Design Principles For Software Manufacturing Tools", ACM 1984 Annual Conference Proceedings, ACM, October, 1984.
23. Bastani, F., Ramamoorthy, C , Mok, Y., Chin, G., and Suzuki, K. "Application of A Methodology For The Development and Validation of Reliable Process Control Software", IEEE Transactions On Software Engineering, Vol. SE-7, No. 6, November 1981.
24. Bell, T. E., Bixler, D. C. and Dyer, M. E. "An Extendable Approach to Computer-Aided Software Requirements Engineering", Tutorial On Software Design Techniques, IEEE Computer Society, 1984.
25. Bergland, G. D. "Structured Design Methodologies", Tutorial: Software Design Strategies. IEEE Computer Society, 1980.
26. Bergstrom, Robin P. "Computer-aided SQC Makes Impact at Pontiac", Manufacturing Engineering, May, 1985.
27. Bergstrom, Robin P. "FMS: The Drive Toward Cells", Manufacturing Engineering. August, 1985.
28. Berry, D. M., Leveson, N. G., and Wasserman, A. I. "BASIS: A Behavioral Approach To The Specification Of Information Systems", Tutorial On Software Design Techniques. IEEE Computer Society, 1984.
29. Biermann, Alan W. and Guiho, Gerard. Computer Program Synthesis Methodologies. D. Reidel Publishing Company, 1983.
30. Birrell, N. D. and Ould, M. A. A Practical Handbook For Software Development, Cambridge University Press, 1985.
31. Black, J. T. "Cellular Manufacturing Systems Reduce Setup Time, Make Small Lot Production Economical", IE Transactions, November, 1983.
32. Booch, Grady. "Object—Oriented Design", Tutorial On Software Design Techniques, Computer Society, 1984.
33. Boothroyd, G. "Econonics Of Assembly Systems", Journal Of Manufacturing Systems, Vol 1, No. 1, 1982.
34. Booth, Grayce M. The Design of Complex Information Systems — Common Sense Methods for Success, McGraw-Hill, 1983.
35. Borowicz, Vincent F. Code Group Technology Classification System, Coding And Classification Workshop, Arlington, Texas, June, 1975.
36. Boucher, Thomas 0. and Muckstadt, John A. "Cost Estimating Methods For Evaluating The Conversion From A Functional Manufacturing Layout Group Technology", lEE Transactions, Vol. 17, No. 3, 1985.
37. Brooks, Frederick P., Jr. The Mythical Man-Month, Addison-Wesley, 1975.
38. Brown, E. "Modular Work Stations Provide Flexibility For Changing Demands In Manufacturing Setups", IE Transactions, March, 1984.
39. Bruce, Phillip and Pederson, Sam M. The Software Development Project. John Wiley & Sons, 1982.
40. Bryce, A. L. and Roberts, P. A. "Flexible Machining Systems In The U. S. A.", Proceedings Of 1st International Conference On FMS, October, 1982.
41. Bunnag, Panit and Smith, Spencer B. "A Multifactor Priority Rule For Jobshop Scheduling Using Computer Search", lEE Transactions, Vol. 17, No. 2, 1985.
42. Burgam, Patrick. "FMS Control: Covering All The Angles", CAD/CAM Technology, Summer 1984.
43. Burgam, Patrick. "Will CAPP Technology Replace The Planner?", CAD/CAM Technology, Winter 1984.
44. Buzacott, J. A. "Flexible Manufacturing Systems: A Review Of Models", TIMS/ORSA Detroit Meeting, April, 1982.
45. Buzacott, J. A. "The Fundamental Principles Of Flexibility In Manufacturing Systems", Proceedings Of 1st International Conference On FMS, October, 1982.
46. Buzacott, J. A. and Shanthikumar, J. G. "Models For Understanding Flexible Manufacturing Systems", AIIE Transactions, December 1980, Vol. 12, No.4.
47. Buzacott, J. A. "Optimal Operating Rules for Automated Manufacturing Systems", IEEE Transactions on Automatic Control, Vol. AC-27, No. 1, February, 1982.
48. Carlson, Robert C. and Rosenblatt, Meir J. "Designing a Production Line To Maximize Profit", lEE Transactions, Vol. 17, No.2, 1985.
49. Case, P. W., Correia, M., Gianopulos, W., Heller, W. R., Ofek, H., Raymond, T. C , Simek, R. L., and Stieglitz, C. B. "Design Automation In IBM", IBM Journal Research Development, Vol. 25, No. 5, September, 1981.
50. Cavagnaro, F., Manara, R., and Giuffre, 0. "A Generalized Approach To The Problem Of FMS On-Line Management", Proceedings of 1st International Conference on FMS, October 1982.
51. Chan, H. M. and Milner, D. A. "Direct Clustering Algorithm For Group Formation In Cellular Manufacture", Journal Of Manufacturing Systems, Vol. 1, No. 1, 1982.
52. Chang, Tien-Chien and Wysk, Richard A. "CAD/Generative Process Planning With TIPPS", Journal Of Manufacturing Systems, Vol. 2, No. 2, 1983.
53. Chen, P. H. and Talavage, J. "Production Decision Support System For Computerized Manufacturing Systems", Journal Of Manufacturing^ Systems, Vol. 1, No. 2, 1982.
54. Chester, Daniel L. and Yeh, Raymond T. "Software Development By Evaluation Of System Designs", Tutorial; Software Methodology, Computer Society, 1984.
55. CIM Design Rules, Society of Manufacturing Systems, 1986.
56. CIM Programmer Guides, Society of Manufacturing Systems, 1986.
57. "CIM; Unlocking The Benefits", Design News, July 7, 1986.
58. "CIM — The Foundation For Factory Automation", Production Engineering, May, 1986.
59. Claybourn, B. H. and Hewit, J. R. "Simulation Of Activity Cycles For Robot Served Manufacturing Cells", Proceedings of 1st International Conference on FMS, October, 1982.
60. C.N.R.S., Chercheur au and Dupont-Gatelmand, Catherine. "A Survey Of Flexible Manufacturing Systems", Journal of Manufacturing Systems, Vol. 1, No. 1, 1982.
61. Colburn, Tim and Giddings, Nancy. "An Automated Software Design Evaluator", ACM 1984 Annual Conference Proceedings, ACM, October, 1984.
62. Cole, John R. and Flowers, A. Dale. "An Application Of Computer Simulation To Quality Control In Manufacturing", lEE Transactions, Vol. 17, No. 3, 1985.
63. Colter, M., Couger, D., and Knapp, R. Advanced System Development/ Techniques, John Wiley & Sons, 1982.
64. Constantine, L. L., Stevens, W. P., and Myers, G. J. "Structured Design", Tutorial; Software Methodology, IEEE Computer Society, 1984.
65. Cook, Nathan H. "Computer-Managed Parts Manufacture", AIIE Transactions, Vol. 12, No. 4, December, 1980.
66. Cook, S. D. and Harkrider, F. E. "Next Level of Control", Flexible Manufacturing Systems '86 — Conference Papers, CASA of SME, March, 1986.
67. Cutkosky, M. R., Fussell, P. S., and Milligan, R., Jr. "The Design Of A Flexible Machining Cell for Small Batch Production", Journal of Manufacturing Systems, Vol. 3, No. 1.
68. Daniels, Alan and Yeates, Don. Design and Analysis of Software Systems, Petrocelli Books, 1983.
69. Darrow, William P. Group Scheduling In A Manufacturing Resource Planning Environment. A dissertation in the Industrial Engineering Department of Pennsylvania State University, August, 1980.
70. Dar-Ei, Ezey M. and Wysk, Richard A. "Job Shop Scheduling — A Systematic Approach", Journal Of Manufacturing Systems, Vol. 1, No. 1. 1982.
71. Davis, Carl G. and Vick, Charles R. "The Software Development System", IEEE Transactions on Software Engineering, Vol. SE-3, No. 1, January, 1977.
72. Deisenroth, M. P. and Galgocy, C. B. "FMS Simulation For Software Development", Flexible Manufacturing Systems '86 — Conference Papers, CASA of SME, March, 1986.
73. Dickinson, Brian. Developing Structured Systems, Bank of American National Trust and Saving Association, 1980.
74. Dickover, M. E., McGowan, C. L., and Ross, D. T. "Software Design Using SADT", Tutorial: Software Design Strategies, IEEE Computer Society, 1980.
75. Drexel, P. "Modular Flexible Assembly System 'FMS* From BOSCH". Proceedings of Ist International Conference on FMS, October, 1982.
76. Drozda, Thomas. "Unattended Manufacturing — More Than Just Talk". Manufacturing Engineering, July, 1984.
77. EiMaraghy, H. A. "Simulation And Graphical Animation Of Advanced Manufacturing Systems", Journal Of Manufacturing Systems, Vol. 1, No. 1, 1982.
78. Eversheim, W. and Herrmann, P. "Recent Trends in Flexible Automated Manufacturing", Journal of Manufacturing Systems, Vol. 1, No. 2, 1982.
79. Farber, David. Information Systems Engineering Perspectives, University of Delaware, Technical Report No. 86-04.
80. Farnum, Gregory T. "CIM — The Tools Are At Hand", Manufacturing Engineering, October, 1985.
81. Freeman, P. "Requirements Analysis And Specification: The First Step", Tutorial On Software Design Techniques, IEEE Computer Society, 1984.
82. Gane, Chris and Sarson, Trish. Structured Systems AnaU-ysis: Tools And Techniques, Improved System Technologies, Inc. 1979.
83. Gannon, J. D., Zelkowitz, M. V., and Shaw, A. C. Principles Of Software Engineering and Design. Prentice-Hall, 1979.
84. Garcia, Gary. "Manufacturing Information Management", Manufacturing Engineering. Februray, 1981.
85. Gershwin, Stanley B. and Kimemia, Joseph. "An Algorithm for the Computer Control of a Flexible Manufacturing System", IIE Transactions, December, 1983.
86. Gerwin, Donald. "Control and Evaluation in the Innovation Process: The Case of Flexible Manufacturing Systems", IEEE Transactions On Engineering Management. Vol. EM28, No.3, August, 1981.
87. Ghezzi, Carlo and Jazayeri, Mehdi. Programming Language Concepts, John Wiley & Sons, Inc., 1982.
88. Glossop, R. and Morgan, C. "Flexible Manufacturing Systems, The DNC Approach To Small Batch Robot PaintShop Working", Proceedings of 1st International Conference on FMS, October, 1982.
89. Goldbar, Joel T. and Jelinek, Mariann. "Planning for Economies of Scope", Harvard Business Review, November/December, 1983.
90. Golden, R. L., Latus, P. A., and Lowy, P. "Design Automation And The Programmable Logic Array", IBM Journal of Research Development, Vol. 24, No.1, January, 1980.
91. Ganam, Hassan. "Computer Integrated Manufacturing Architecture of FMS", Flexible Manufacturing Systems '86 — Conference Papers, CASA of SME, March, 1986.
92. Gondert, Stephen J. "Understanding The Impact Of Computer-integrated Manufacturing", Manufacturing Engineering, September, 1984.
93. Greene, Timothy J. and Sadowski, Randall P. "Cellular Manufacturing Control", Journal Of Manufacturing Systems, Vol 2, No. 2, 1983.
94. Griffiths, S. N. "Design Methodologies — A Comparison", Tutorial: Software Design Strategies, IEEE Computer Society, 1980.
95. Gustavsson, Sten-Olof. "Flexibility and Productivity in Complex Production Processes", International Journal of Production Research, 1984, Vol. 22, No. 5.
96. Hales, H. L. "The Importance Of Standards", A Programmer Guide For CIM Implementation, CASA of SME, 1985.
97. Hankins, Steven L. Balancing Job Shops With Alternative Machine Routings, Manufacturing Systems Division, Cincinnati Milacron, Inc., November, 1984.
98. Harp, Jim. "CAD/CAM: Back to Basics", Manufacturing Engineering, October, 1985.
99. Hegland, Donald E. "CAD/CAM Integration — Key To The Automatic Factory", Production Engineering, August, 1981.
100. Hegland, Donald E. "Flexible Manufacturing Productivity and Your Balance Between Adaptability", Production Engineering, May, 1981.
101. Heninger, Kathryn L. "Specifying Software Requirements for Complex Systems: New Techniques and Their Application", Tutorial On Software Design Techniques, IEEE Computer Society, 1984.
102. Henry, Sallie and Kafura, Dennis. "Software Structure Metrics Based on Information Flow", IEEE Transactions on Software Engineering, September, 1981.
103. Hershey, Ernest A., III and Teichroew, Daniel. "PSL/PSA: A Computer-Aided Technique for Structured Documentation and Analysis of Information Processing Systems", IEEE Transactions on Software Engineering, January, 1977.
104. Hildebrant, R. R. and Suri, R. "Methodology and Multi-Level Algorithm Structure for Scheduling and Real-Time Control of Flexible Manufacturing Systems", AIIE Transactions, Vol. 12, No. 4, December, 1980.
105. Kitchens, Max W. "Simulation: The Key To Automation Without Risk", CAD/CAM Technology, Fall 1984.
106. Ho, G. S. and Ramamoorthy, C. V. "A Design Methodology For User Oriented Computer Systems", Tutorial: Software Methodology, IEEE Computer Society, 1984.
107. Holland, J. R., editor. "Flexible Manufacturing Systems", Transactions Of The Society Of Manufacturing Engineers, 1984.
108. Hopkins, Albert L., Jr. "Fault Tolerance in Flexible Manufacturing System Control", Flexible Manufacturing Systems '86 — Conference Papers, CASA of SME, March, 1986.
109. Hutchinson, George K. "Advanced Batch Machining Systems", Transactions of the Numerical Society, March, 1979.
110. Hutchinson, George K. "The Economic Value of Flexible Automation", Journal of Manufacturing Systems, Vol. 1, No. 2, 1982.
111. Hutchinson, George K. "Flexibility Is Key To Economic Feasibility Of Automating Small Batch Manufacturing", IE Transactions. June, 1984.
112. Hutchinson, George K. and Wyner, Bayard F. "A Flexible Manufacturing System", IE Transactions, December, 1973.
113. Hutchinson, George K. "Production Capacity: CAM vs. Transfer Line", IE Transactions, September, 1976.
114. Hutchinson, G. K., Passler, E., Rudolph, K., and Stanek, W. "Production System Design: A Directed Graph Approach", Journal Of Manufacturing Systems, Vol. 2, No. 2, 1983.
115. IBM Corporation. "Structured Walk-Throughs: A Project Management Tool", Tutorial: Software Design Strategies, IEEE Computer Society, 1980.
116. Inaba, Hajimu and Sakakibara, Shinsuke. "Flexible Unmanned Assembly Cells With Robots", Proceedings Of 1st International Conference On FMS, October, 1982.
117. Issak, James. "Designing Systems For Real-Time Applications", Byte, April, 1984.
118. Jackson, M. A. "Constructive Methods Of Program Design", Tutorial On Software Design Techniques, IEEE Computer Society, 1984.
119. Jensen, Howard and Vairavan, K. "A Comparative Study of Software Metrics For Real Time Software", IEEE, 1982, CHI18101/82/0000/0096.
120. Johnson, R. Colin. "Special Report Automated Software Development Eliminates Application Programming", Electronics, June, 1982.
121. Kay, J. M. and Walmsley, A. J. "Computer Aids For The Optimal Design of Operationally Effective FMS", Proceedings of 1st International Conference on FMS, October, 1982.
122. Kirkham, J. A. and Thomas, R. J. "Structured System Analysis And The Problem Statement Language (PSL) As A Combined Methodology In The Teaching Of System Analysis And Design", ACM 1981 Annual Conference Proceedings, ACM, November 1981.
123. Krauskopf, Bruce. "Automation at Renault", Manufacturing Engineering, May, 1984.
124. Kumar, Vipin. "Integrating Knowledge In Problem Solving Search Procedures", ACM 1984 Annual Conference Proceedings, ACM, October, 1984.
125. Kusiak, Andrew. "Design of Flexible Manufacturing Systems", Flexible Manufacturing Systems '86 — Conference Papers, CASA of SME, March, 1986.
126. Kuttner, B. C. and Lachance, M. A. "Solving The CAD Surface Data Transfer Problem", Manufacturing Engineering, October, 1985.
127. Langefors, Borje. Theoretical Analysis Of Information Systems, Auerbach Publishers Inc. 1973.
128. Lee, Barry. Introducing Systems Analysis and Design, Vol.1, The National Computing Centre Limited, 1978.
129. Leer, P. "Top-Down Development Using A Program Design Language", Tutorial: Software Design Strategies, IEEE Computer Society, 1980.
130. Jenkins, Kenneth M. and Raedels, Alan R. "The Robot Revolution: Strategic Considerations For Managers", Production And Inventory Management, Third Quarter, 1982.
131. Lenz, J. E. and Talavage, J. J. General Computerized Manufacturing Systems Simulator, A dissertation of school of Industrial Engineering, Purdue University, August, 1977.
132. Lerro, Joseph P., Jr. "CAD/CAM System: Start Of The Productivity Revolution", Design News, 11-16-1981.
133. Leung, Lawrence C. and Tanchoco, Jose M. A. "Replacement Decision Based On Productivity Analysis — An Alternative To The MAPI Method", Journal Of Manufacturing Systems, Vol. 2, No. 2, 1983.
134. Lineback, J. Robert. "Logic Simulation Speeded With New Special Hardware", Electronics, June, 1982.
135. Lock, J. D., Sime, A. W., and Young, A. R. "A Pallet-Based Flexible Manufacturing System For Automated Small Batch Production", Proceedings of 1st International Conference on FMS, October, 1982.
136. Lukas, Michael P. and Scheib, Thomas J. "How Microprocessors Control Process Systems", Production Engineering, August, 1981.
137. Mandelbaum, M. Flexibility In Decision Making: An Exploration And Unification, A dissertation. Department of Industrial Engineering, University of Toronto, 1978.
138. Marcellus, Daniel H. Systems Programming For Small Computers. Prentice-Hall, Inc., 1984.
139. Marshall, Peter. "The Prospects for FMS in UK Industry", Proceedings of 1st International Conference on FMS, October, 1982.
140. Martin, James. System Design From Provably Correct Constructs, Prentice-Hall, 1985.
141. Martin, James and McClure, Carma. Structured Techniques For Computing, Prentice-Hall, Inc. 1985.
142. McCabe, Thomas. "A Complexity Measure", IEEE Transactions On Software Engineering, Vol. SE-2, No. 4, December, 1976.
143. McRoberts, Keith L. and Vaithianathan, Raj. "On Scheduling in A GT Environment", Journal of Manufacturing Systems, Vol. 1, No. 2, 1982.
144. McIlroy, Kenneth D. PRAGMA — The Simplistic Computer Application, PRAGMA Applications, 1982.
145. Meister, Ann E. "Ironing Out The Rough Spots Between CAD and CAM", CAD/CAM Technology, Fall 1984.
146. Mertins, K. and Spur, G. "Flexible Manufacturing Systems In Germany, Conditions And Development Trends", Proceedings of 1st International Conference on FMS, October, 1982.
147. Miller, C. P. "The Design Of Software For A Computer—Controlled Robot Paint Spraying Shop", Proceedings of 1st International Conference on FMS, October 1982.
148. Miller, Richard K. "Artificial Intelligence: A New Tool for Manufacturing", Manufacturing Engineering, April, 1985.
149. Minton, Gene. "The Case for Continuing Education in CIM", Manufacturing Engineering, August, 1985.
150. Montag, Alfred C. "Flexible Automation for High—volume Production", Manufacturing Engineering, November, 1984.
151. Morin, Thomas L. and Stecke, Kathryn. Optimality of Balanced Workloads in Flexible Manufacturing Systems, Graduate School of Business Administration, The University of Michigan, January, 1982.
152. Morley, Bradford C. "Mechanical Product Design With Computer-aided Engineering", Manufacturing Engineering, September, 1984.
153. Musselman, Kenneth J. "Computer Simulation: A Design Tool For FMS", Manufacturing Engineering, September, 1984.
154. Napierala, Elizbieta. An Investigation Of Parts Flow In The Flexible Manufacturing System, A dissertation in the Industrial Engineering Department of Texas Tech University, May, 1983.
155. Nestman, Chadwick H. and Windsor, John C. "Decision Support Systems: A Perspective for Industrial Engineers", IIE Transactions, Vol. 17, No. 1, 1985.
156. Nof, Shimon Y. and Seidmann, Abraham. "Unitary Manufacturing Cell Design With Random Product Feedback Flow", IIE Transactions, Vol. 17, No. 2, 1985.
157. Ogorek, Michael. "CNC Standard Formats", Manufacturing Engineering, January, 1985.
158. Ogorek, Michael. "Interactive Graphics and Conversational Programming", Manufacturing Engineering, January, 1985.
159. Ogorek, Michael. "Workholding in the Flexible System", Manufacturing Engineering, July, 1985.
160. ORAC CNC Teachers Manual, Denford Machine Tools Limited, West Yorkshire.
161. ORAC Programming Instruction And Maintenance Manual, Denford Machine Tools Limited, West Yorkshire.
162. Orr, Kenneth T. "Introducing Structured Systems Design", Tutorial: Software Design Strategies, IEEE Computer Society, 1980.
163. Owles, V. Arthur and Powers, Michael J. "Structured Systems Analysis Tutorial", ACM 1981 Annual Conference Proceedings, ACM, November, 1981.
164. Parnas, D. L. "On the Criteria To Be Used In Decomposing Systems Into Modules", Tutorial: Software Methodology, IEEE Computer Society, 1984.
165. Parnas, D. L. "A Technique for Module Specification with Examples", Communications of the ACM, No. 155, May, 1972.
166. Peklenik, Janez. "Report On CIRP International Seminars On Manufacturing Systems", Journal Of Manufacturing Systems, Vol 1, No. 1, 1982.
167. Peters, Lawrence J. and Tripp, Leonard L. "Comparing Software Design Methodologies", Tutorial: Software Design Strategies, IEEE Computer Society, 1980.
168. Peterson, Rein and Silver, Edward A. Decision Systems For Inventory Management And Production Planning, John Wiley & Sons, 1985.
169. Purdom, Peter B. and Palazzo, Tony. "The Citroen (CCM) Flexible Manufacturing Cell", Proceedings of 1st International Conference On FMS, October, 1982.
170. Rapps, Sandra and Weyuker, Elaine. "Data Flow Analysis Techniques For Test Data Selection", IEEE Transactions On Software Engineering, 1982.
171. Ross, Douglas T. "Structured Analysis (SA): A Language For Communicating Ideas", Tutorial On Software Design Techniques, IEEE Computer Society, 1984.
172. Ross, Douglas T. and Schoman, Kenneth E., Jr. "Structured Analysis For Requirements Definition", Tutorial On Software Design Techniques, IEEE Computer Society, 1984.
173. Schmitt, Lee E. "PC Versus CNC — Which Do You Choose?", IEEE Transactions On Industry Applications, September/October, 1984.
174. Schaefer, Thomas J. "Modular Approach To CIM; Integrate Your Factory In Pieces", Production Engineering, April, 1986.
175. Schweitzer, Paul J. and Seidman, Abraham. Part Selection Policy For A Flexible Manufacturing Cell Feeding Several Production Lines, University of Rochester, Department of Industrial Engineering and Graduate School of Management, October, 1982.
176. Seidmann, Abraham and Schweitzer, Paul J. "Real-Time On-Line Control of a FMS Cell", IIE 1983 Annual Industrial Engineering Conference Proceedings.
177. Shrensker, Warren L. "A Brief History Of CIM", A Programmer Guide For CIM Implementation, CASA of SME, 1985.
178. Slautterback, William H. "Manufacturing in the Year 2000", Manufacturing Engineering, August, 1985.
179. Smart, H. G. "Least Cost Estimating With Group Technology", Journal Of Manufacturing Systems, Vol 1, No. 1, 1982.
180. South, Robert C. "A Look At CAD Instruction", CAD/CAM Technology, Fall 1984.
181. Sowa, John F. Conceptual Structures: Information Processing In Mind And Machine, Addison-Wesley Publishing Company, 1984.
182. Stauffer, Robert N. "Commentaries on FMS Control", CAD/CAM Technology, Summer 1984.
183. Stauffer, Robert N. "General Electric's CIM System Automates Entire Business Cycle", CAD/CAM Technology, Winter 1984.
184. Stauffer, Robert N. "Graphic Simulation Answers Preproduction Productions", CAD/CAM Technology, Fall 1984.
185. Stauffer, Robert N. "Robot System Simulation", Robotics Today, June, 1984.
186. Stay, J. F. "HIPO And Integrated Program Design", Tutorial On Software Design Techniques, IEEE Computer Society, 1984.
187. Stecke, Kathryn E. Production Planning Problems For Flexible Manufacturing Systems, Purdue University, August, 1981.
188. Stecke, Kathryn E. "Formulation And Solution of Nonlinear Integer Production Planning Problems for Flexible Manufacturing Systems", Management Science, Vol. 29, No. 3, March, 1983.
189. Stecke, Kathryn E. and Solberg, James J. "Loading And Control Policies For A Flexible Manufacturing System", International Journal of Production Research, 1981, Vol. 19, No. 5.
190. Stecke, K. E. and Solberg, J. J. The Optimality Of Unbalanced Workloads and Machine Group Sizes For Flexible Manufacturing Systems, Graduate School of Business Administration, The University of Michigan, January, 1982.
191. Stecke, K. E. and Solberg, J. J. Scheduling Of Operations In A Computerized Manufacturing System, A dissertation of school Of Industrial Engineering, Purdue University, December, 1977.
192. Steinhilper, R. and Warnecke, H. "Flexible Manufacturing Systems; New Concepts; EDP - Supported Planning; Application Examples", Proceedings of 1st International Conference on FMS, October, 1982.
193. Suri, Rajan and Whitney, Cynthia K. "Decision Support Requirements In Flexible Manufacturing", Journal of Manufacturing Systems, Vol. 3, No.1.
194. Tausworthe, Robert C. Standardized Development of Computer Software — Part I & II. Prentice-Hall, 1977.
195. Teague, Lavette, Jr. and Pidgeon, Christopher. Structured Analysis Methods For Computer Information Systems, Science Research Associates, Inc., 1985.
196. Thomas, Tom. "Flexible Automated Manufacturing With Computer-Controlled Robots Under Hierarchical Control: Requirements And Benefits", Proceedings Of 1st International Conference On FMS, October, 1982.
197. Torri, Signer L. "Flexible Manufacturing System: A Modern Approach", Proceedings of 1st International Conference on FMS, October, 1982.
198. Treywin, E. T. "Automatic Inspection And Control Of Products As Part Of A Flexible System", Proceedings Of 1st International Conference on FMS, October, 1982.
199. TRIAC Manual, Denford Machine Tools Limited, West Yorkshire, February, 1985.
200. United States Air Force Wright Aeronautical Lab. Integrated Computer-Aided Manufacturing (ICAM), United States, Air Force, 1981.
201. Vaithianathan, Raj. On Scheduling In A GT Environment, A dissertation of school of Industrial Engineering, Purdue University.
202. Wang, Helen. An Experimental Analysis Of Flexible Manufacturing System (FMS). Department of M.C., Pace University, November, 1984.
203. Wang, T. L. "Distributed System Provides An Ideal Environment For Full Integration Of CAD And CAM", IE Transactions, November, 1981.
204. Warnecke, H. and Vettin, G. "Technical Investment Planning Of Flexible Manufacturing Systems — The Application Of Practice-Oriented Methods", Journal Of Manufacturing Systems, Vol. 1, No. 1, 1982.
205. Watt, Douglas G. "Networking Software Support for FMS", Manufacturing Engineering, September, 1985.
206. Wechsler, Donald B. "Process Planning In Transition", CAD/CAM Technology, Winter 1984.
207. Weck, M. "Machine Diagnostics In Automated Production", Journal Of Manufacturing Systems, Vol. 2, No. 2, 1983.
208. White, John A. A Preliminary Investigation of Group Technology Cells Using Simulation, School of Industrial and Systems Engineering, March, 1980.
209. Williamson, Donald. "3-D Simulation Smooths Out Shop Floor Materials Flow", Manufacturing Engineering, September, 1985.
210. Williamson, Donald. "CAD/CAM: Plotting the Impact of Shakeout", Manufacturing Engineering, October, 1985.
211. Young, R. E. "Software Control Strategies For Use In Implementing Flexible Manufacturing Systems", IE Transactions, November, 1981.
212. Yourdon, Edward. Techniques Of Program Structure And Design, Prentice-Hall, 1975.
213. Yourdon Inc. "Top-Down Design And Testing", Tutorial: Software Design Strategies, IEEE Computer Society, 1980.
214. Zisk, Burton. "Flexibility Is Key To Automated Material Transport System For Manufacturing Cells", IE Transactions, November, 1983.
APPENDIX A
CNC MILL, TRIAC
The TRIAC is a product of Denford. It is an extremely versatile, continuous-path, computer-based programmable numerical control unit designed to control a 3-axis milling machine via stepper motors. It can monitor 4 programmable input signals and give up to 4 programmable auxiliary outputs. The integral control unit memory typically stores 750 blocks; the magnetic tape system stores 3000 blocks.
(a). TRIAC has the data keys 0 to 9, and the following function keys:
1. X: For X axis moves.
2. Y: For Y axis moves.
3. Z: For Z axis moves.
4. ENTER: When keying in information it is echoed first on the display. After verification the data is then accepted by pressing "Enter".
5. G: For machine code G function selection.
6. M: For machine code M function selection.
7. FEED: Input key for axis feeds.
8. E.O.B.: End of block or end of line of instructions.
9. CLW: For clockwise circular movements.
10. CCLW: For anticlockwise circular movements.
11. T: Selects tool selection/setting menu.
12. S: Selects system software version and EPROM test.
13. REPEAT: Selects repeat facility up to 99 times and to 4 levels.
14. DWELL: Selects programmable dwell in the range of .1 to 9999.9 seconds.
15. MIRROR X: The X axis can be mirrored in the program.
16. MIRROR Y: The Y axis can be mirrored in the program.
17. AUX/INPUT: The Auxiliary function allows the system to have 4 relay closure outputs. The Input function allows the system to monitor 4 external switches.
18. OFFSET: Selects machine offset or program offset.
19. SCALE: Movements inside or outside the program can be scaled in the range from 0.01 to 650.
20. PROG STOP: Selects a program stop during programming.
21. RESET: The reset key is used to reset from or end the current mode, to cancel the most recent entry, or to reset from an error condition.
22. COMP: Selects tool radius compensation.
23. FLOAT DATUM: Enables a selected position to be used as zero in the X and Y axes only.
24. ABS DATUM: To move the machine to the datum position.
25. LOAD/END: To start loading or complete loading a program.
26. EDIT END: Permits full editing facilities to be used on the programmed sequence of instructions in memory and completes the edit.
27. DATA LINK: Selects a data link to external equipment, e.g. an external computer or printer.
28. ABS/INC: Selects input of data in absolute or incremental format.
29. CASS: Permits use of the cassette system.
30. BLOCK SEARCH: To enable program execution to commence from a specific point in the program.
31. TPG END: Selects the Tool Path Graphics facility.
32. INCH/MM: Selects input of data in imperial or metric units.
33. SPINDLE +: To manually increase the spindle speed.
34. SPINDLE -: To manually decrease the spindle speed.
35. SPINDLE FWD: To select forward rotation of the spindle.
36. SPINDLE REV: To select reverse rotation of the spindle.
37. OFF: To select spindle off.
38. DRIVE ON: This key supplies power to the stepper motor drives and to the main machine after the red STOP key has been released.
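The MIRROR X, MIRROR Y and SCALE keys listed above amount to simple coordinate transforms on the programmed moves. The following sketch illustrates that idea only; the function and its parameters are invented for illustration and are not Denford's implementation:

```python
def transform(point, mirror_x=False, mirror_y=False, scale=1.0):
    """Apply a TRIAC-style mirror/scale to an (x, y) target point.

    Illustrative only: MIRROR X is modelled as negating X, MIRROR Y as
    negating Y, and SCALE as multiplying both axes by one factor
    (0.01 to 650 on the controller described above).
    """
    x, y = point
    if mirror_x:
        x = -x
    if mirror_y:
        y = -y
    return (x * scale, y * scale)

# Mirror the first rapid-traverse target of the example program in X,
# then scale it by 2.
print(transform((75.0, 40.0), mirror_x=True, scale=2.0))  # → (-150.0, 80.0)
```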
(b). TRIAC has the following modes:
1. M.D.I. (Manual Data Input): This mode enables the system to be controlled from the keyboard.
2. S. STEP: Single Step.
3. CYCLE: This mode enables a user program to be executed from the current block to its end.
4. CASS: Cassette unit operating.
5. EDIT.
6. PRINT.
7. RS232C.
8. TPG: Controller is in Tool Path Graphics mode.
(c). MACHINE codes (M functions and G codes):
M functions for use outside the program:
1. M03 Spindle Forward.
2. M04 Spindle Reverse.
3. M05 Spindle Stop.
4. M06 Tool Change.
5. M20 Auxiliaries.
6. M21 Inputs.
M functions available inside the program:
1. M00 Program Stop.
2. M02 End of Program.
3. M03 Spindle Forward.
4. M04 Spindle Reverse.
5. M05 Spindle Stop.
6. M06 Tool Change.
7. M20 Auxiliaries.
8. M21 Inputs.
G codes for use outside the program:
1. G00 Rapid Traverse.
2. G01 Linear.
3. G02 Circular CLW.
4. G03 Circular CCLW.
5. G04 Dwell.
6. G21 Machine Scale.
7. G33 Thread.
8. G40 Cancel Tool Comp.
9. G41 Cutter Comp Left.
10. G42 Cutter Comp Right.
11. G55 Machine Offset.
12. G70 Imperial Units.
13. G71 Metric Units.
14. G80 Deactivate Cycle.
15. G90 Absolute Input.
16. G91 Incremental Input.
17. G98 Absolute Datum.
G codes for use inside the program:
1. G01 Linear.
2. G02 Circular CLW.
3. G03 Circular CCLW.
4. G04 Dwell.
5. G10 Mirror X.
6. G11 Cancel Mirror X.
7. G12 Mirror Y.
8. G13 Cancel Mirror Y.
9. G20 Program Scale (replaces G21).
10. G33 Thread.
11. G40 Cancel Tool Comp.
12. G41 Cutter Comp Left.
13. G42 Cutter Comp Right.
14. G54 Program Offset (replaces G55).
15. G70 Imperial Units.
16. G71 Metric Units.
17. G79 Re-enable Cycle.
18. G80 Deactivate Cycle.
19. G81 Repeat Function.
20. G82 Circular Cycle.
21. G83 Drilling Cycle.
22. G84 Rect-Lar Cycle.
23. G90 Absolute Input.
24. G91 Incremental Input.
25. G98 Absolute Datum.
26. G99 Floating Datum.
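The two G-code lists differ in a few entries: inside a program, G20 replaces G21 and G54 replaces G55, while the mirror, cycle and repeat codes exist only inside. A small sketch (illustrative only, with the sets transcribed from the lists above) can flag a code used in the wrong context:

```python
# G codes permitted outside a TRIAC program, transcribed from the list above.
OUTSIDE = {0, 1, 2, 3, 4, 21, 33, 40, 41, 42, 55, 70, 71, 80, 90, 91, 98}
# G codes permitted inside a program (G20 replaces G21, G54 replaces G55).
INSIDE = {1, 2, 3, 4, 10, 11, 12, 13, 20, 33, 40, 41, 42, 54, 70, 71,
          79, 80, 81, 82, 83, 84, 90, 91, 98, 99}

def check_g_code(g_code, inside_program):
    """Return True if the G code is legal in the given context."""
    allowed = INSIDE if inside_program else OUTSIDE
    return g_code in allowed

assert check_g_code(21, inside_program=False)      # machine scale, outside only
assert not check_g_code(21, inside_program=True)   # G20 replaces it inside
assert check_g_code(82, inside_program=True)       # circular cycle, inside only
```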
(d). Standard Equipment:
1. RS232C link to computer or printer.
2. Audio cassette deck.
3. Headphone outlet.
4. Audio instruction tape.
5. Installation, maintenance and instruction manual.
6. Spare parts list.
7. Co-axial T.V. socket for V.D.U.
8. Maintenance tools.
9. Mini cassette for program storage.
10. Automatic lubrication system.
11. Halogen Lo-Vo light.
Extra equipment:
1. Spray mist coolant.
2. Quick change tooling.
3. Printer.
4. CAD/CAM and off line computer programming for Apple, BBC and IBM.
5. Robot with F.M.S.
(e). Example:
N1 G00 RAPID TRAVERSE X 75.000 Y 40.000 Z 0.000 F 200.0
N2 G82 CIRCULAR CYCLE RADIUS 10.000 Z 3.00 F 100.0 CYCLE 3
N3 G80 DEACTIVATE CYCLE
N4 G00 RAPID TRAVERSE X 75.000 Y 70.000 Z 0.000 F 200.0
N5 G82 CIRCULAR CYCLE RADIUS 3.00 Z 3.00 F 100.0 CYCLE 3
N6 G80 DEACTIVATE CYCLE
N7 G00 RAPID TRAVERSE X 110.000 Y 88.000 Z 0.000 F 200.0
N8 G84 RECT-LAR CYCLE X 11.000 Y 42.000 Z 3.000 F 100.0 CYCLE 3
N9 G00 RAPID TRAVERSE X 145.000 Y 88.000 Z 0.000
N10 G80 DEACTIVATE CYCLE
N11 G00 RAPID TRAVERSE X 180.000 Y 88.000 Z 0.000 F 200.0
N12 G84 RECT-LAR CYCLE X 11.000 Y 28.000 Z 3.000 F 100.0 CYCLE 3
N13 G80 DEACTIVATE CYCLE
N14 G00 RAPID TRAVERSE X 180.000 Y 71.500 Z 0.000
N15 G83 DRILLING CYCLE Z 5.000 F 100.0 CYCLE 3
N16 G00 RAPID TRAVERSE X 145.000 Y 64.500 Z 0.000
N17 G00 RAPID TRAVERSE X 110.000 Y 64.500 Z 0.000
N18 G00 RAPID TRAVERSE X 110.000 Y 111.500 Z 0.000
N19 G00 RAPID TRAVERSE X 145.000 Y 111.500 Z 0.000
N20 G00 RAPID TRAVERSE X 180.000 Y 104.500 Z 0.000
N21 G80 DEACTIVATE CYCLE
N22 G00 RAPID TRAVERSE X 200.000 Y 150.000 Z 50.000
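In the example program, each canned cycle (G82 circular, G83 drilling, G84 rectangular) stays active until a G80 deactivates it; the drilling cycle, for instance, repeats at each rapid-traverse position until N21. A minimal modal-state check along those lines (a sketch, not the controller's own logic):

```python
def cycles_balanced(g_codes):
    """Check that every canned cycle (G82/G83/G84) activated in a block
    sequence is later deactivated by G80, and that G80 never appears
    with no cycle active.  Sketch only; real controllers do far more."""
    active = None
    for g in g_codes:
        if g in (82, 83, 84):
            active = g          # a canned cycle is now modal
        elif g == 80:
            if active is None:
                return False    # stray G80 with nothing to deactivate
            active = None
    return active is None       # no cycle left running at program end

# Blocks N1..N22 of the example reduce to this G-code sequence:
program = [0, 82, 80, 0, 82, 80, 0, 84, 0, 80, 0, 84, 80, 0, 83,
           0, 0, 0, 0, 0, 80, 0]
print(cycles_balanced(program))  # → True
```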
APPENDIX B
CNC LATHE, ORAC
ORAC is also a product of Denford. ORAC will accept a maximum of 9 pairs of tool offsets in its memory with a 0 (zero) tool offset. The manufacturer's repeatable accuracy on clamping is 0.01 mm.
(a). ORAC has the following function keys:
1. D: The DELETE key. Used to delete selected data.
2. E: The ENTER key. Used in conjunction with almost all programming and is used to confirm and transfer information into the memory.
3. F: To quit from the offset routine and to remove the last tool set. Failure to do so could result in the axis moving to the program datum and colliding with the work.
4. S: To delete a character with the cursor under it.
5. P: To insert a new page.
6. DELETE PAGE: To delete a page.
7. PAGE FORWARD: The PAGE FORWARD key.
8. PAGE REVERSE: The PAGE REVERSE key.
9. ↑: To move flashing cursor upwards.
10. ↓: To move flashing cursor downwards.
11. ←: To move flashing cursor to left.
12. →: To move flashing cursor to right.
13. .: Decimal point key.
14. -: Minus sign key.
15. PTP: Point to point operation. This instruction moves the tool in either the X (facing) or Z (turning) direction individually, or the X and Z directions simultaneously.
16. CIRC: Circular interpolation. This instruction enables ORAC to cut in a circular motion.
17. THRD: Thread cutting. This instruction provides a threading cycle for external and internal threads.
18. DWELL: Dwell period. This instruction allows a timed dwell period to be programmed between machining operations; the machine will remain stationary at its present position.
19. AUX I: Auxiliary inputs. This instruction allows the program to be halted between machining operations. The executing program will only proceed beyond this point if any of the 4 auxiliary inputs programmed to receive an input signal are activated.
20. AUX O: Auxiliary outputs. This instruction allows any of the 4 auxiliary output relays to be operated to control external functions.
21. CALL: Call subroutine. This instruction allows a subroutine to be called for execution.
22. SUB: Subroutine start. This instruction allows a subroutine to be constructed and allocated an identity number.
23. END SUB: End subroutine program. This instruction ends the subroutine, which consists of all functions entered between SUB and END SUB.
24. DO: Start do loop. This instruction is used for starting a repetitive sequence.
25. END DO: End do loop. This instruction defines the end of the repetitive sequence.
26. INCH/MM: Units selection, inches or millimetres. This instruction selects the units for subsequent program pages. Alternate depression of the key changes the units from inch to millimetres to inch, etc. Ensure that the units for the program are selected at either block 1 or 2.
27. INC/ABS: Incremental/absolute format. This instruction selects the program format for subsequent program pages. Alternate depression of the key changes the format from INC to ABS to INC, etc. Ensure that the format for the program is selected at either block 1 or 2.
28. PROG/DATUM: Program Datum. This instruction allows the coordinates of the program datum to be entered; these values are always taken from the centre line of the spindle on X and from the end of the workpiece on Z. Even if incremental format has been selected, the program datum should always be entered in block 3, in the units previously selected in block 1 or 2.
29. START: Starts the operating sequence when executing a program. Ensure before starting a program that the spindle is running.
30. STOP: Stops the operating sequence when executing a program. To restart the program press the square green start button. Inputs positional information for the Z0 plane and X DIA. when used
as instructed in the tool offset setting up mode. Stops the manual operating sequence and returns to either the main menu or the tool offset menu.
31. MAIN: Allows selection of manual axis control once automatic execution of a program has been stopped using the square red stop key.
32. The 'Hand' Sign: Once operating under manual operation you have three options: 1) The first feed indicated under manual operation is fast feed. This is indicated in the bottom right hand corner of the screen. The fast feedrate is a rapid move at 47 inches/min or 1200 mm/min. 2) If you depress this key the feedrate will change from fast to slow. The slow feedrate is 6 inches/min or 150 mm/min. 3) A further depression of the key will change the feedrate to step. This is a JOG of 0.01 mm or 0.0004 inches.
33. ↑ and ↓: X traverse, in and out. Press the key pointing upwards to move the tool towards the centre line of the spindle and downwards to move the tool away from the centre line.
34. ← and →: Z traverse, left and right. Press the key pointing to the left to move the tool towards the chuck and to the right to move away from the chuck.
35. FEEDRATE OVERRIDE MINUS: When operated during execution of a program, reduces the axis feed in operation at that time from its programmed value to some lower value. The override range is from 0% to 110%. The display only indicates the feedrate in units of ten.
36. FEEDRATE OVERRIDE PLUS: When operated during execution of a program, increases the axis feed in operation at that time from its programmed value to one higher value.
37. SPINDLE SPEED MINUS: When operated during the execution of a program, reduces the spindle speed from the programmed value.
38. SPINDLE SPEED PLUS: When operated during the execution of a program, increases the spindle speed from the programmed value.
39. GREEN BUTTON: Starts spindle.
40. RED BUTTON: Stops spindle.
41. EMERGENCY STOP: Stops both spindle and axis drives.
42. RESET button on the RIGHT HAND SIDE of the control panel
situated below the cassette: To return to the main menu at any
stage during a program build up or execution without loss of
memory. If depressed during the execution of the program it is
advisable to reset tool 0 (zero) again.
43. RESET button below the MANUAL CONTROLS: For resetting the limit
switches on the axes of the machine. Depress the RESET button
and hold, while at the same time depress the axis jog button to
move the axis away from the limit switch.
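The FEEDRATE OVERRIDE keys above step the programmed feed up or down within a 0% to 110% range, with the display reading in units of ten. A hedged sketch of that behaviour; the 10% step size is an assumption drawn from "units of ten", not a documented value:

```python
def step_override(current_pct, up):
    """Step a feedrate override up or down by 10% (assumed step size),
    clamped to the 0..110% range described for the ORAC panel."""
    step = 10 if up else -10
    return max(0, min(110, current_pct + step))

pct = 100
pct = step_override(pct, up=True)   # 100 -> 110
pct = step_override(pct, up=True)   # stays clamped at the 110% ceiling
print(pct)  # → 110
```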
(b). ORAC has the following modes:
1. M.D.I. (Manual Data Input): This mode enables the system to be
controlled from the keyboard.
2. S. STEP: Single Step.
3. Cassette unit operating.
4. EDIT.
5. PRINT.
6. RS232C.
(c). ISO 646 codes and addresses:
The program is composed of blocks, separated from one another by an asterisk or the letters CR (Carriage Return). Each block contains words. A Word Address system is used: words begin with the address letter. For example,
N: Block number, 3 digits.
X: Operative X axis co-ordinate point.
Z: Operative Z axis co-ordinate point.
F: Feedrate.
S: Rotational speed function.
T: Tool number.
G: Preparatory function, 2 digits.
1. G00: Positioning of the axis slides or spindle at rapid traverse rate.
2. G01: Linear interpolation.
3. G02: Circular arc interpolation following a CW (clockwise) path.
4. G03: Circular arc interpolation following a CCW (counterclockwise) path.
5. G04: Holding the tool at a given position for a programmed length of time; never for use as tool change time or swarf clearance.
6. G05: End of subroutine.
7. G06: End of Do loop.
8. G26: Auxiliary Inputs.
9. G27: Auxiliary Outputs.
10. G28: Subroutine start.
11. G50: Program datum.
12. G65: Call subroutine.
13. G70: Imperial units.
14. G71: Metric units.
15. G73: Repeat facility.
16. G90: Absolute format.
17. G91: Incremental format.
M = Miscellaneous or auxiliary functions.
1. M01: Selective stop.
2. M02: Program end.
3. M03: Spindle rotation clockwise.
4. M04: Spindle rotation counterclockwise.
5. M05: Spindle off; coolant off.
6. M06: Tool change.
7. M07: Coolant No. 1 on (e.g. mist coolant).
8. M08: Coolant No. 2 on (e.g. flood coolant).
9. M09: Coolant off.
10. M10: Clamping.
11. Mil; Clamping released.
12. M12-M18; Free.
13. Ml9; Spindle stop in specified angular orienation.
14. M20-M29; Continuous free.
15. M3O: Program end. Stop and return to start character.
16. M31; Clamping override.
17. M32-M39: Free.
18. M40-M45: Change of gear ratio, if this is required; otherwise
free.
19. M46-M47: Free.
20. M48: Cancels M49.
21. M49: Deletion of manually adjusted feedrate or rotation speed.
22. M50-M57: Free.
23. M58: Cancels M59.
24. M59: Maintains the spindle speed constant even though a G96 is
initiated.
25. M60: Workpiece change.
26. M61-M89: Free.
27. M90-M99: Continuously free.
Note: "Free" has the same meaning as for the preparatory functions;
that is, these codes are free for the control unit or machine tool
manufacturer to use as he feels appropriate.
(d). Standard Equipment:
1. RS232 link for computer or printer.
2. Quick change toolpost and holder.
3. Self-centring 3-jaw chuck.
4. Set of outside jaws.
5. Safety Guards.
6. Instruction manual and parts list.
7. Map light.
8. Stereo cassette deck.
9. 2 hi-frequency speakers and headphone outlet.
10. Audio instruction tape.
11. Solar calculator.
12. Co-axial T.V. socket for V.D.U.
13. Lathe maintenance tools.
14. Mini magnetic cassette for program storage.
15. 2 fuses.
16. Machine plug.
17. Test program with component.
18. Video instruction film.
(e). Example 1:
N01 G71
N02 G90
N03 G50 X15 Z5
N04 G00 X13 Z0.2 F1000 S800 T1 M03
N05 G91
N06 G73 I3
N07 G00 X-1 Z0
N08 G01 X0 Z-30.2 F90
N09 G00 X0.5 Z0
N10 G00 X0 Z30.2
N11 G06 Z0
N12 G90
N13 G00 X10.1 Z-29.9 F1000
N14 G01 X10 Z-30
N15 G02 X12.5 Z-32.5 I2.5 F50 S900
N16 G00 X15 Z0 F1000
N17 G01 Z5 F50 T0 M00
N18 G01 X0 Z0 M02
(f). Example 2: Printout from "ORAC"
TITLE    I.D
PAGE 01  INCH-UNITS
PAGE 02  INCREMENTAL-FORMAT..G91
PAGE 03  THREADING..G33
         IN/OUT-SIDE.DIAM 2
         ROOT-DIAMETER 1.99
         CUT.(INCR)..X 0.01
         LENGTH..Z -1
         PITCH 0.10
         STARTS 1
         TOOL-NO 1
         SPINDLE-SPEED 100
PAGE 04  DWELL..G04
         TIME.(SECS) 05
PAGE 05  THREADING..G33
         IN/OUT-SIDE.DIAM 2
         ROOT-DIAMETER 1.99
         CUT.(INCR)..X 0.01
         LENGTH..Z -1
         PITCH 0.04
         STARTS 1
         TOOL-NO 1
         SPINDLE-SPEED 100
PAGE 06  DWELL..G04
         TIME.(SECS) 05
PAGE 07  POINT-TO-POINT..G00,G01
         X 0.562
         Z 0.562
         FEEDRATE 10
         TOOL-NO 1
         SPINDLE-SPEED 1000
PAGE 08  CALL-SUBROUTINE
APPENDIX C
PSEUDOCODES FOR FMC IN A MULTITASKING
ENVIRONMENT
/*------------------------------------------------------------*/
/* C program starts from here                                 */
/*------------------------------------------------------------*/
/* Header files are used to declare variable attributes       */
/*   stdio.h  -- for standard I/O                             */
/*   gf.h     -- for Greenleaf Functions                      */
/*   amx.h    -- for AMX86 systems                            */
/*   c30.h    -- for function routines declaration            */
/*   struct.h -- data structure for C program                 */
/*   fcntl.h  -- file functions for MS-DOS                    */
/*------------------------------------------------------------*/

#include "c:stdio.h"
#include "c:gf.h"
#include "c:amx.h"
#include "c:c30.h"
#include "c:struct.h"
#include "c:fcntl.h"
/*------------------------------------------------------------*/
/* Other global variables declaration                         */
/*------------------------------------------------------------*/
static int timecd = 0;          /* current timer variable */
unsigned delay, mainw;
int cmain, cproc, cdum, cmsg;

/*------------------------------------------------------------*/
/* The following functions are in AMX86:                      */
/*   clkisp()       -- real-time clock interrupt service      */
/*   pint(int)      -- print an integer                       */
/*   prompt(char *) -- print a character string with RETURN   */
/*   pstr(char *)   -- print a character string without RETURN*/
/*   pch(char)      -- print one character                    */
/*   crash(int)     -- handle system error                    */
/*------------------------------------------------------------*/

/*------------------------------------------------------------*/
/* Main program starts from here                              */
/*------------------------------------------------------------*/
void main()
{
    extern unsigned int _top;
    int i, j;

    cls();                          /* cls() is a Greenleaf function */
    prompt("AMX86 starts from here");
    for (j = 30; j != 0; j--) {     /* delay approx. 3 sec. */
        for (i = 8000; i != 0; i--);
    }
    amxgo();                        /* start AMX86 */
}
/*------------------------------------------------------------*/
/* rrtime() -- Restart procedure for timers                   */
/*------------------------------------------------------------*/
void rrtime()
{
    static int clkintcd[16];        /* Clock ISP installed here */
    int i;

    ajmodl();
    ajiptr(UCLKV, clkisp, clkintcd);  /* install interrupt pointer */
    /* clkisp() is an AMX86 function for real-time clock interrupts */
    for (i = 1; i < 100; i++);      /* delay for installing interrupt */
    kicka();                        /* start timers */
    timecd = 0;                     /* initialize timecode */
}
/*------------------------------------------------------------*/
/* rrproc() -- Restart procedure for tasks                    */
/*------------------------------------------------------------*/
void rrproc()
{
    int i;
    static int keyintcd[16];        /* Keyboard ISP installed here */

    ajmodl();                       /* set DS to DGROUP */
    if (ajtask(TNOPMG) < 0) {
        prompt("rrproc fails to call TNOPMG");
        exit();
    }   /* schedule operator manager to occur; if it fails, exit to MS-DOS */
}
/*------------------------------------------------------------*/
/* tra() -- timer routine a; this timer is defined in a       */
/*          special place.                                    */
/*------------------------------------------------------------*/
void tra()
{
    ajmodl();                       /* set DS to DGROUP */
    kicka();                        /* defined in amx.h to restart timer 0 */
    timecd++;                       /* increment timecode */
}
/*------------------------------------------------------------*/
/* TASK 1: OPERATOR TASK (stopmg)                             */
/*------------------------------------------------------------*/
/* The Operator Task reads in the job specification file,     */
/* machining process file, tool file, and machine file, and   */
/* allows the operator to enter proper data or make menu      */
/* selections in order to produce part programs for the Mill  */
/* and Lathe machines. After the part programs are produced,  */
/* the operator must ensure all machines and materials are    */
/* ready to start the process.                                */
/*------------------------------------------------------------*/
void stopmg()
{
    int i, fh1, fh2, fh3, fh4, fh5, fh, status, choice;

    ajmodl();

    /* read (job specification file) */
    if ((fh1 = open("b:job", O_RDONLY, S_IREAD)) == -1) {
        pstr("Can't open file b:job !\n");
        exit();
    }
    /* read (machining process file) */
    if ((fh2 = open("b:proc", O_RDONLY, S_IREAD)) == -1) {
        pstr("Can't open file b:proc !\n");
        exit();
    }
    /* read (tool file) */
    if ((fh3 = open("b:tool", O_RDONLY, S_IREAD)) == -1) {
        pstr("Can't open file b:tool !\n");
        exit();
    }
    /* write (part program) for CNC Mill */
    if ((fh4 = open("b:millpart", O_WRONLY, S_IWRITE)) == -1) {
        pstr("Can't open file b:millpart !\n");
        exit();
    }
    /* write (part program) for CNC Lathe */
    if ((fh5 = open("b:lathepart", O_WRONLY, S_IWRITE)) == -1) {
        pstr("Can't open file b:lathepart !\n");
        exit();
    }
    cls();
    for (i = 1; i <= 2; i++) {
        switch (i) {
        case 1:     /* first write part program for CNC Mill */
            fh = fh4; mtype = "mill"; break;
        case 2:     /* then write part program for CNC Lathe */
            fh = fh5; mtype = "lathe"; break;
        }
        cls();
        /* find (job material records) */
        status = read(fh1, job_mater, job_mater_byte);
        /* find (job process records) */
        status = read(fh1, job_proc, job_proc_byte);
        /* find (machining process records) */
        status = read(fh2, proc_type, proc_type_byte);
        /* find (machine type records) */
        status = read(fh3, mach_type, mach_type_byte);

        no_of_proc = 0;
        while (no_of_proc <= job_no_of_proc) {
            no_of_proc++;
            if ((proc_type == job_proc) && (mach_type == mtype)) {
                /* add (process number records) */
                status = write(fh, part_num, part_num_byte);
                /* add (operation codes records) */
                status = write(fh, part_code, part_code_byte);
                /* add (X, Y, Z positions) */
                status = write(fh, part_X, part_X_byte);
                status = write(fh, part_Y, part_Y_byte);
                status = write(fh, part_Z, part_Z_byte);
                /* add (cutting speed records) */
                status = write(fh, part_speed, part_speed_byte);
                /* add (feed rate records) */
                status = write(fh, part_feed, part_feed_byte);
            }
            /* find (machining process records) */
            status = read(fh3, tool_proc, tool_proc_byte);
            if (tool_proc == process_proc) {
                /* write (part program): add (tool number records) */
                status = write(fh, part_tooln, part_tooln_byte);
            }   /* end of if */
        }   /* end of while */
    }   /* end of for loop */

    /* close all data files and part programs */
    status = close(fh1);
    status = close(fh2);
    status = close(fh3);
    status = close(fh4);
    status = close(fh5);

    /* make sure machines are ready */
    cls();
start:
    atsay(1, 0, "Please check all machines and materials.");
    atsay(2, 0, "When it is ready, press RETURN key to start process.");
    scanf("%d", &choice);
    if (choice == CR) {
        /* schedule task 2 -- stmill */
        flag_load = 1;
        if ((ajcall(TNMILL, mill_prior, &flag_load)) < 0) {
            atsay(3, 0, "Operator Manager fails to call Mill Manager");
            exit();
        }
        /* schedule task 3 -- stlath */
        flag_load = 1;
        if ((ajtask(TNLATH, lathe_prior, &flag_load)) < 0) {
            atsay(4, 0, "Operator Manager fails to call Lathe Manager");
            exit();
        }
    }
    else goto start;
}   /* end of task 1 -- stopmg */
/*------------------------------------------------------------*/
/* TASK 2: CNC MILL TASK (stmill)                             */
/*------------------------------------------------------------*/
/* Each machine should be operated independently. The         */
/* CNC Mill Task should be able to send and receive signals   */
/* to and from the CNC Mill machine. It also performs the     */
/* bookkeeping for process status and machine status.         */
/*------------------------------------------------------------*/
void stmill()
{
    int idata;

    ajmodl();
    /* if there is no material ready for processing, call the */
    /* Robot Manager to load materials                        */
    if (flag_mill_mater == 0) {
        /* flag of load: 1 for loading, 2 for unloading */
        flag_load = 1;
        /* call Robot Manager as soon as possible */
        ajcalw(TNROBO, 1, &flag_load);
        /* put Mill Manager to sleep until Robot Manager finishes */
        ajwait();
    }
    /* otherwise, send signal to Mill machine to set up the */
    /* communication line                                   */
    outp(PORTADD, BYTE);
    /* receive feedback from Mill */
    idata = inp(PORTADD);
    /* verify the input signal; if the signal has an error, */
    /* call the Safety Manager                              */
    if (f_error == 0) ajcall(TNSAFE, 0, &error);
    /* else send Mill part program 'b:millpart' to the CNC Mill */
    /* and start execution. At the end of part program          */
    /* processing, the CNC Mill sends a signal back to the      */
    /* computer, which then calls the Robot Manager to unload   */
    /* parts.                                                   */
    if (flag_mill_end == 1) {
        /* set flag of load for unloading */
        flag_load = 2;
        /* call Robot Manager as soon as possible */
        ajcalw(TNROBO, 2, &flag_load);
        /* put Mill Manager to sleep until Robot Manager finishes */
        ajwait();
    }
    /* reschedule itself for another process */
    if ((ajtask(TNMILL)) < 0) {
        atsay(3, 0, "Mill Manager fails to reschedule itself");
        exit();
    }   /* end of if */
}   /* end of task 2 -- stmill */
/*------------------------------------------------------------*/
/* TASK 3: CNC LATHE TASK (stlath)                            */
/*------------------------------------------------------------*/
/* This task performs functions similar to the CNC Mill       */
/* task. In addition, it monitors the CNC Lathe machine.      */
/*------------------------------------------------------------*/
void stlath()
{
    int idata;

    ajmodl();
    /* if there is no material ready for processing, call the */
    /* Robot Manager to load materials                        */
    if (flag_lathe_mater == 0) {
        /* flag of load: 3 for loading, 4 for unloading (Lathe) */
        flag_load = 3;
        /* call Robot Manager as soon as possible */
        ajcalw(TNROBO, 1, &flag_load);
        /* put Lathe Manager to sleep until Robot Manager finishes */
        ajwait();
    }
    /* otherwise, send signal to Lathe to set up the comm. line */
    outp(PORTADD, BYTE);
    /* receive feedback from Lathe */
    idata = inp(PORTADD);
    /* verify the input signal; if the signal has an error, */
    /* call the Safety Manager                              */
    if (f_error == 0) ajcall(TNSAFE, 0, &error);
    /* else send Lathe part program 'b:lathepart' to the CNC     */
    /* Lathe and start execution. At the end of part program     */
    /* processing, the CNC Lathe sends a signal back to the      */
    /* computer, which then calls the Robot Manager to unload    */
    /* parts.                                                    */
    if (flag_lathe_end == 1) {
        /* set flag of load for unloading */
        flag_load = 4;
        /* call Robot Manager as soon as possible */
        ajcalw(TNROBO, 2, &flag_load);
        /* put Lathe Manager to sleep until Robot Manager finishes */
        ajwait();
    }
    /* reschedule itself for another process */
    if ((ajtask(TNLATH)) < 0) {
        atsay(3, 0, "Lathe Manager fails to reschedule itself");
        exit();
    }   /* end of if */
}   /* end of task 3 -- stlath */
/*------------------------------------------------------------*/
/* TASK 4: ROBOT TASK (strobo)                                */
/*------------------------------------------------------------*/
/* This task should be able to send and receive signals to    */
/* and from the robot, and to monitor robot movement. It      */
/* also does the bookkeeping for robot status. When the Mill  */
/* task and Lathe task call for a robot, they send an         */
/* argument to the Robot task, which determines the machine   */
/* to pick up materials or parts from, and where to put them  */
/* down.                                                      */
/*------------------------------------------------------------*/
void strobo(flag)
int *flag;
{
    int idata;

    ajmodl();
    /* wake up calling tasks */
    ajwakc();
    /* send signal to Robot to set up communication line */
    outp(PORTADD, BYTE);
    /* receive feedback from robot */
    idata = inp(PORTADD);
    /* verify received signal */
    /* depending on the flag value, determine loading or */
    /* unloading and which machine to load/unload        */
    switch (*flag) {
    case 1:                     /* loading materials to Mill */
        /* move to the material storage area           */
        /* pick up material                            */
        /* move to the Mill loading position           */
        /* load material into Mill                     */
        /* return to the original position             */
        /* when loading finishes, wake up Mill Manager */
        ajwake(TNMILL);
        break;
    case 2:                     /* unloading parts from Mill */
        /* move to the Mill                              */
        /* pick up part from Mill                        */
        /* move to the part storage area                 */
        /* put down part                                 */
        /* return to the original position               */
        /* when unloading finishes, wake up Mill Manager */
        ajwake(TNMILL);
        break;
    case 3:                     /* loading materials to Lathe */
        /* move to the material storage area            */
        /* pick up material                             */
        /* move to the Lathe loading position           */
        /* load material into Lathe                     */
        /* return to the original position              */
        /* when loading finishes, wake up Lathe Manager */
        ajwake(TNLATH);
        break;
    case 4:                     /* unloading parts from Lathe */
        /* move to the Lathe                              */
        /* pick up part from Lathe                        */
        /* move to the part storage area                  */
        /* put down part                                  */
        /* return to the original position                */
        /* when unloading finishes, wake up Lathe Manager */
        ajwake(TNLATH);
        break;
    }
}   /* end of task 4 -- strobo */
/*------------------------------------------------------------*/
/* TASK 5: SAFETY TASK (stsafe)                               */
/*------------------------------------------------------------*/
/* This task is responsible for error detection, error        */
/* correction, and emergency shutdown of the manufacturing    */
/* system. The Mill task or Lathe task verifies input data    */
/* from the machines. If a fault occurs, the Safety task is   */
/* called and a correction is performed, or the system is     */
/* shut down. The Safety task has priority over all other     */
/* tasks.                                                     */
/*------------------------------------------------------------*/
void stsafe(error)
int error;
{
    ajmodl();
    /* depending on the type of error, different actions are taken */
    switch (error) {
    case 0:
        crash(0);               /* shut down the system */
        break;
    case 1:
        /* call operator to check equipment */
        break;
    case 2:
        /* other process */
        break;
    }
}   /* end of task 5 -- stsafe */