
Time and Motion



Importance & Formula of Time and Motion Study

Time and motion study plays a very important role in the manufacturing domain, where it is used to enhance production quality and utilize maximum resource capacity in minimum time. Nowadays T&M study is also used in offices, hospitals, department stores and the service industry to ensure maximum utilization of resources in the lowest possible timeline, with the management cycle customized by a T&M analyst.

Time and motion study is a combination of the motion study developed by Frank B. Gilbreth and Lillian M. Gilbreth and the time study developed by Frederick W. Taylor, beginning around 1881. Frederick W. Taylor said that “Time study is the one element in scientific management beyond all others making possible the transfer of skill from management to men”.

Time and motion study originated around a century ago, in about 1885, from observing the process of bricklaying on construction sites. One simple yet effective observation was that horizontal movements consume more time than vertical ones, so a minor change in motion meant that more construction could be done in less time. Another observation was that keeping all tools nearby and handy saves the time otherwise spent gathering them for the bricklaying; having all necessary materials such as cement and water nearby while bricklaying improved productivity and decreased the construction span for each worker.

To derive such enhancements, nowadays we use electronic tools such as stopwatches, bar codes, video cameras, 360-degree cameras, software and computers.

The formula for standard time is as follows:

ST = NT × (1 + AF)

where ST = standard time, NT = normal time, and AF = allowance factor.
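As a minimal sketch of this relationship (assuming, as is common, that the allowance factor is expressed as a fraction of normal time; some texts instead divide normal time by (1 - AF) when the allowance is defined as a fraction of the whole working day), the calculation could look like this:

```python
# Illustrative sketch: standard time from normal time and an allowance factor.
# Assumes AF is expressed as a fraction of normal time (e.g. 0.15 = 15%).

def standard_time(normal_time: float, allowance_factor: float) -> float:
    """Return standard time given normal time and an allowance factor."""
    return normal_time * (1 + allowance_factor)

# Example: 12.0 minutes of normal time with a 15% allowance.
print(standard_time(12.0, 0.15))  # 13.8 minutes
```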


Work Improvement and Measurement Study (WIMS) Program Brief

Backgrounder

Under the Labor Code, as amended by Republic Act No. 6727, the NWPC is mandated, among others, to formulate policies on wages, productivity and income and prescribe rules and guidelines for the determination of appropriate minimum wage at the regional, provincial or industry level.

The approved DOLE Rationalization Plan pursuant to Executive Order No. 366 (Directing a Strategic Review of the Operations and Organizations of the Executive Branch and Providing Options and Incentives for Government Employees who may be Affected by the Rationalization of the Functions and Agencies of the Executive Branch) transferred to the NWPC the wage-related functions of the Bureau of Working Conditions (BWC), specifically Facility Evaluation (FE) and Time and Motion Studies (TMS).

Administrative Order No. 357 series of 2010 clarifies and delineates the duties and responsibilities of the NWPC, Regional Tripartite Wages and Productivity Boards (RTWPBs), Bureau of Working Conditions (BWC) and DOLE Regional Offices (ROs) in the conduct of Facility Evaluation and Time and Motion Study functions.

The conduct of work improvement and measurement study (formerly Time and Motion Study) shall determine whether the piece rate/production standards (quota) prescribed by employers for their employees are fair and reasonable.

Objectives

To help MSMEs measure the amount or quantity of work done by employing work improvement and measurement study

To arrive at fair and reasonable piece rate/production standards

 

Target Beneficiaries

     Micro, small and medium enterprises 

Mechanics of Availment

Interested parties (MSMEs) are to send a formal request to the RTWPB in their area/region for the conduct of WIMS.


The DOLE will issue a department order on the Guidelines for the Conduct of the Work Improvement and Measurement Study which covers filing of request, documentary requirements, actions on applications, submission of report, period/duration of effectivity of the Order, posting, auditing, enforcement, repeal and separability.

To facilitate the request for and conduct of WIMS, please see the attached forms:
1. Work Measurement Request Form
2. Initial Conference Form
3. Time and Rate Standards
4. Work Measurement Monitoring Record

time and motion study  

Definition

Method for establishing employee productivity standards in which (1) a complex task is broken into small, simple steps, (2) the sequence of movements taken by the employee in performing those steps is carefully observed to detect and eliminate redundant or wasteful motion, and (3) precise time taken for each correct movement is measured. From these measurements production and delivery times and prices can be computed and incentive schemes can be devised. Generally appropriate only for repetitive tasks, time and motion studies were pioneered by the US industrial engineer Frederick Winslow Taylor (1856-1915) and developed by the husband and wife team of Frank Gilbreth (1868-1924) and Dr. Lillian Gilbreth (1878-1972). See also Taylorism.

Read more: http://www.businessdictionary.com/definition/time-and-motion-study.html#ixzz24L8QDAtG

work measurement  

Definition

Application of time and motion study and activity sampling techniques to determine the time for a qualified worker to complete a specific job at a defined level of performance. Work measurement is used in budgeting, manpower planning, scheduling, standard costing, and in designing worker incentive schemes. See also therblig.

Read more: http://www.businessdictionary.com/definition/work-measurement.html#ixzz24L8YTNyE


time-and-motion study (extract from Britannica Online)

Analysis of the time spent in going through the different motions of a job or series of jobs in the evaluation of industrial performance. Such studies were first instituted in offices and factories in the U.S. in the early 20th century. They were widely adopted as a means of improving work methods by subdividing the different operations of a job into measurable elements, and they were in turn used as aids in standardization of work and in checking the efficiency of workers and equipment.


standardization (extract from Britannica Online)

In industry, the development and application of standards that make it possible to manufacture a large volume of interchangeable parts. Standardization may focus on engineering standards, such as properties of materials, fits and tolerances, and drafting practices; or on product standards, which detail the attributes of manufactured items and are embodied in formulas, descriptions, drawings, or models. Adoption of standards makes it easier for firms to communicate with their suppliers. Standards are also used within industries to prevent conflict and duplication of effort. Governmental departments, trade associations, and technical associations help to set standards within industries; these are coordinated and promoted by organizations such as the American National Standards Institute (ANSI) and the International Organization for Standardization (ISO).

Time and motion study (from Wikipedia, the free encyclopedia)


A time and motion study (or time-motion study) is a business efficiency technique combining the Time Study work of Frederick Winslow Taylor with the Motion Study work of Frank and Lillian Gilbreth (not to be confused with their son, best known through the biographical 1950 film and book Cheaper by the Dozen). It is a major part of scientific management (Taylorism). After its first introduction, time study developed in the direction of establishing standard times, while motion study evolved into a technique for improving work methods. The two techniques became integrated and refined into a widely accepted method applicable to the improvement and upgrading of work systems. This integrated approach to work system improvement is known as methods engineering [1] and it is applied today to industrial as well as service organizations, including banks, schools and hospitals.[2]

Time and motion studies have to be used together in order to achieve rational and reasonable results. It is particularly important that effort be applied in motion study to ensure equitable results when time study is used. In fact, much of the difficulty with time study is a result of applying it without a thorough study of the motion pattern of the job. Motion study can be considered the foundation for time study. The time study measures the time required to perform a given task in accordance with a specified method and is valid only so long as the method is continued. Once a new work method is developed, the time study must be changed to agree with the new method.[3]

Contents

1 Time study
2 Criticisms
3 Motion studies
4 Taylor Vs. the Gilbreths
5 Direct time study procedure
6 See also
7 References
8 External links

Time study

Time study is a direct and continuous observation of a task, using a timekeeping device (e.g., decimal minute stopwatch, computer-assisted electronic stopwatch, and videotape camera) to record the time taken to accomplish a task[4] and it is often used when:[5]

there are repetitive work cycles of short to long duration,
a wide variety of dissimilar work is performed, or
process control elements constitute a part of the cycle.


The Industrial Engineering Terminology Standard defines time study as "a work measurement technique consisting of careful time measurement of the task with a time measuring instrument, adjusted for any observed variance from normal effort or pace and to allow adequate time for such items as foreign elements, unavoidable or machine delays, rest to overcome fatigue, and personal needs."[6]

The systems of time and motion studies are frequently assumed to be interchangeable terms, descriptive of equivalent theories. However, the underlying principles and the rationale for the establishment of each respective method are dissimilar, despite originating within the same school of thought.

The application of science to business problems, and the use of time-study methods in standard setting and the planning of work, was pioneered by Frederick Winslow Taylor.[7] Taylor liaised with factory managers and from the success of these discussions wrote several papers proposing the use of wage-contingent performance standards based on scientific time study.[8] At its most basic level time studies involved breaking down each job into component parts, timing each part and rearranging the parts into the most efficient method of working.[9] By counting and calculating, Taylor wanted to transform management, which was essentially an oral tradition, into a set of calculated and written techniques.[10][11]

Taylor and his colleagues placed emphasis on the content of a fair day’s work, and sought to maximize productivity irrespective of the physiological cost to the worker.[12] For example, Taylor thought unproductive time usage (soldiering) to be the deliberate attempt of workers to promote their best interests and to keep employers ignorant of how fast work could be carried out.[13] This instrumental view of human behavior by Taylor, prepared the path for human relations to supersede scientific management in terms of literary success and managerial application.

Criticisms

In response to Taylor’s time studies and view of human nature, many strong criticisms and reactions were recorded. Unions, for example, regarded time study as a disguised tool of management designed to standardize and intensify the pace of production. Similarly, individuals such as Gilbreth (1909), Cadbury[14] and Marshall[15] heavily criticized Taylor and pervaded his work with subjectivity. For example, Cadbury[16] in reply to Thompson[17] stated that under scientific management employee skills and initiatives are passed from the individual to management,[18] a view reiterated by Nyland.[19] In addition, Taylor’s critics condemned the lack of scientific substance in his time studies,[20] in the sense that they relied heavily on individual interpretations of what workers actually do.[21] However, the value in rationalizing production is indisputable and supported by academics such as Gantt, Ford and Munsterberg, and Taylor society members Mr C.G. Renold, Mr W.H. Jackson and Mr C.B. Thompson.[22]

Motion studies


In contrast to, and motivated by, Taylor’s time study methods, the Gilbreths proposed a technical language, allowing for the analysis of the labor process in a scientific context.[23] The Gilbreths made use of scientific insights to develop a study method based upon the analysis of work motions, consisting in part of filming the details of a worker’s activities while recording the time.[24] The films served two main purposes. One was the visual record of how work had been done, emphasising areas for improvement. Secondly, the films also served the purpose of training workers about the best way to perform their work.[25] This method allowed the Gilbreths to build on the best elements of these work flows and to create a standardized best practice.[26]

Taylor Vs. the Gilbreths

Although for Taylor, motion studies remained subordinate to time studies, the attention he paid to the motion study technique demonstrated the seriousness with which he considered the Gilbreths’ method. The split with Taylor in 1914, on the basis of attitudes to workers, meant the Gilbreths had to argue contrary to the trade unionists, government commissions and Robert Hoxie[27] who believed scientific management was unstoppable.[28] The Gilbreths were charged with the task of proving that motion study particularly, and scientific management generally, increased industrial output in ways which improved and did not detract from workers' mental and physical strength. This was no simple task given the propaganda fuelling the Hoxie report and the consequent union opposition to scientific management. In addition, the Gilbreths’ credibility and academic success continued to be hampered by Taylor, who held the view that motion studies were nothing more than a continuation of his work.

While both Taylor and the Gilbreths continue to be criticized for their respective work, it should be remembered that they were writing at a time of industrial reorganization and the emergence of large, complex organizations with new forms of technology. Furthermore, to equate scientific management merely with time and motion study and consequently labor control not only misconceives the scope of scientific management, but also misinterprets Taylor’s incentives for proposing a different style of managerial thought.[29]

Direct time study procedure

Following is the procedure developed by Mikell Groover for a direct time study:[30]

1. Define and document the standard method.
2. Divide the task into work elements.

Steps 1 and 2: These two steps are primary steps conducted prior to actual timing. They familiarize the analyst with the task and allow the analyst to attempt to improve the work procedure before defining the standard time.

3. Time the work elements to obtain the observed time for the task.
4. Evaluate the worker’s pace relative to standard performance (performance rating), to determine the normal time.


Note that steps 3 and 4 are accomplished simultaneously. During these steps, several different work cycles are timed, and each cycle performance is rated independently. Finally, the values collected at these steps are averaged to get the normalized time.

5. Apply an allowance to the normal time to compute the standard time. The allowance factors that are needed in the work are then added to compute the standard time for the task.
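A small sketch of the arithmetic behind steps 3-5, with hypothetical cycle times and ratings (the numbers and the 15% allowance are assumptions for illustration, not values from the text):

```python
# Illustrative sketch of the direct time study arithmetic (steps 3-5).
# Observed cycle times (minutes) and performance ratings are hypothetical.

observations = [
    # (observed cycle time in minutes, performance rating for that cycle)
    (2.10, 1.05),
    (2.30, 0.95),
    (2.20, 1.00),
    (2.25, 0.90),
]

# Steps 3-4: normalize each observed cycle by its rating, then average.
normal_time = sum(t * r for t, r in observations) / len(observations)

# Step 5: apply a personal/fatigue/delay (PFD) allowance to get standard time.
pfd_allowance = 0.15  # 15%, assumed for illustration
standard_time = normal_time * (1 + pfd_allowance)

print(f"Normal time:   {normal_time:.2f} min")    # 2.15 min
print(f"Standard time: {standard_time:.2f} min")  # 2.48 min
```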

See also

Ergonomics
Human factors
Methods-time measurement
Predetermined motion time system
Standard time
Evolutionary economics

References

1. Zandin 2001, Section 4, Chapter 1, p.2
2. Ben-Gal et al. 2010
3. Pigage and Tucker 1954, p. 2
4. Groover 2007
5. Salvendy 2001, Section IV.C, Chapter 54
6. IIE, ANSI 1982
7. Krenn, M 2011, ‘From Scientific Management to Homemaking: Lillian M. Gilbreth’s Contributions to the Development of Management Thought’, Management & Organisational History, vol. 6, no. 2, pp. 145-161

8. Payne, S.C., Youngcourt, S.S. & Watrous, K.M. 2006, ‘Portrayals of F.W. Taylor Across Textbooks’, Journal of Management History, vol. 12, no. 4, pp. 385-407

9. Payne, S.C., Youngcourt, S.S. & Watrous, K.M. 2006, ‘Portrayals of F.W. Taylor Across Textbooks’, Journal of Management History, vol. 12, no. 4, pp. 385-407

10. Nyland, C 1996, ‘Taylorism, John R. Commons, and the Hoxie Report’, Journal of Economic Issues, vol. 30, no. 4, pp. 985-1016

11. Gowler, D & Legge, K 1983, ‘The Meaning of Management and the Management of Meaning: A View from Social Anthropology’, Perspectives on Management, cited in Karsten, L 1996, ‘Writing and the Advent of Scientific Management: The Case of Time and Motion Studies’, Scandinavian Journal of Management, vol. 12, issue 1, pp. 41-55

12. Karsten, L 1996, ‘Writing and the Advent of Scientific Management: The Case of Time and Motion Studies’, Scandinavian Journal of Management, vol. 12, issue 1, pp. 41-55

13. Thompson, C.B. 1914, ‘The Literature of Scientific Management’, The Quarterly Journal of Economics, vol. 28, no. 3, pp. 506-557

14. Cadbury, E. 1914, ‘Some Principles of Industrial Organization: The Case For and Against Scientific Management’, Sociological Review, vol. 7, pp. 99-125


15. Marshall, A 1919, Industry and Trade, MacMillan, London, cited in Caldari, K 2007, ‘Alfred Marshall’s Critical Analysis of Scientific Management’, European Journal of the History of Economic Thought, vol. 14, no. 1, pp. 55-78

16. Cadbury, E. 1914, ‘Some Principles of Industrial Organization: The Case For and Against Scientific Management’, Sociological Review, vol. 7, pp. 99-125

17. Thompson, C.B. 1914, ‘The Literature of Scientific Management’, The Quarterly Journal of Economics, vol. 28, no. 3, pp. 506-557

18. Cadbury, E. 1914, ‘Some Principles of Industrial Organization: The Case For and Against Scientific Management’, Sociological Review, vol. 7, pp. 99-125

19. Nyland, C 1996, ‘Taylorism, John R. Commons, and the Hoxie Report’, Journal of Economic Issues, vol. 30, no. 4, pp. 985-1016

20. Caldari, K 2007, ‘Alfred Marshall’s Critical Analysis of Scientific Management’, European Journal of the History of Economic Thought, vol. 14, no. 1, pp. 55-78

21. Wrege, C.D. & Perroni, A.G. 1974, ‘Taylor’s Pig-Tale: A Historical Analysis of Frederick W. Taylor’s Pig-Iron Experiments’, Academy of Management, vol. 17, no. 1

22. Cadbury, E 1914, 'Mr. Cadbury's Reply', The Sociological Review, vol. 7, issue 4, pp. 327-331, October

23. Baumgart, A & Neuhauser, D 2009, ‘Frank and Lillian Gilbreth: Scientific Management in the Operating Room’, Quality Safety Health Care, vol. 18, pp. 413-415

24. Baumgart, A & Neuhauser, D 2009, ‘Frank and Lillian Gilbreth: Scientific Management in the Operating Room’, Quality Safety Health Care, vol. 18, pp. 413-415

25. Baumgart, A & Neuhauser, D 2009, ‘Frank and Lillian Gilbreth: Scientific Management in the Operating Room’, Quality Safety Health Care, vol. 18, pp. 413-415

26. Price, B 1989, ‘Frank and Lillian Gilbreth and the Manufacture and Marketing of Motion Study, 1908-1924’, Business and Economic History, vol. 18, no. 2

27. Hoxie, R 1915, ‘Why Organised Labour Opposes Scientific Management’, Quarterly Journal of Economics, vol. 31, no. 1, pp. 62-85

28. Nyland, C 1996, ‘Taylorism, John R. Commons, and the Hoxie Report’, Journal of Economic Issues, vol. 30, no. 4, pp. 985-1016

29. Nyland, C 1996, ‘Taylorism, John R. Commons, and the Hoxie Report’, Journal of Economic Issues, vol. 30, no. 4, pp. 985-1016

30. Groover, Mikell P. (2007). Work Systems and Methods, Measurement, and Management of Work, Pearson Education International

Standard time (manufacturing) (from Wikipedia, the free encyclopedia)

In industrial engineering, the standard time is the time required by an average skilled operator, working at a normal pace, to perform a specified task using a prescribed method[1]. It includes appropriate allowances to allow the person to recover from fatigue and, where necessary, an additional allowance to cover contingent elements which may occur but have not been observed.

Contents

1 Usage of the standard time
2 Techniques to establish a standard time
3 Method of calculation
4 Notes
5 References

Usage of the standard time

Scheduling: operations cannot be scheduled accurately unless times for all operations are known.

Staffing (or workforce planning) : the number of workers required cannot accurately be determined unless the time required to process the existing work is known.

Line balancing (or production leveling) : the correct number of workstations for optimum work flow depends on the processing time, or standard, at each workstation.

Materials requirement planning (MRP) : MRP systems cannot operate properly without accurate work standards.

System simulation : simulation models cannot accurately simulate operation unless times for all operations are known.

Wage payment : comparing expected performance with actual performance requires the use of work standards.

Cost accounting : work standards are necessary for determining not only the labor component of costs, but also the correct allocation of production costs to specific products.

Employee evaluation: in order to assess whether individual employees are performing as well as they should, a performance standard is necessary against which to measure the level of performance.

Techniques to establish a standard time

The standard time can be determined using the following techniques[2]:

1. Time study
2. Predetermined motion time systems
3. Standard data system
4. Work sampling

Method of calculation


The Standard Time is the product of three factors:

1. Observed time: the time measured to complete the task.
2. Performance rating factor: the pace the person is working at. 90% is working slower than normal, 110% is working faster than normal, 100% is normal. This factor is calculated by an experienced worker who is trained to observe and determine the rating.
3. Personal, fatigue, and delay (PFD) allowance.

The standard time can then be calculated by using[3]:

Standard time = (Observed time × Performance rating factor) × (1 + PFD allowance)

Notes

1. Zandin 2001, Section X, Chapter 5.1
2. Groover 2007
3. Groover 2007

References

Groover, M. P. (2007). Work systems: the methods, measurement and management of work, Prentice Hall, ISBN 978-0-13-140650-6

Salvendy, G. (Ed.) (2001). Handbook of Industrial Engineering: Technology and Operations Management, third edition, John Wiley & Sons, Hoboken, NJ.

Zandin, K. (Ed.) (2001). Maynard's Industrial Engineering Handbook, fifth edition, McGraw-Hill, New York, NY.

Abstract: The input-weighted average standard processing time for a multi-item machine is discussed in this paper. For practical purposes it is important to have a short average processing time. It is shown that the input-weighted average processing time does not depend on the scheduling, sequencing and lot-sizing; it is only influenced by the total input and by the individual standard processing times. Furthermore, the relationship between the individual standard processing time and the average standard processing time is investigated. It is proven that the input-weighted average standard processing time is a convex function of the standard processing time of one certain item. An interesting consequence of the convexity is the fact that a decrease of the standard processing time of an item with a standard processing time less than half of the input-weighted average standard processing time causes an increase of the input-weighted average processing time.

Document Type: Research article DOI: http://dx.doi.org/10.1080/00207540310001645129


Building a Work Process Standard - Raphael L. Vitalo, Ph.D.

Contents

Introduction

How to Build a Standard

General Guidance

Summary of Steps

About the Author

Feedback Please

Introduction

A work standard is a written description of how a process should be done. It guides consistent execution. At its best, it documents a current "best practice" and ensures that it is implemented throughout a company. At a minimum, it provides a baseline from which a better approach can be developed.

Standards are an essential requirement for any company seeking to continuously improve. All continuous improvement methods leverage learning to get better results from their business efforts. Standards provide the baseline references that are necessary for learning. A standard operating procedure supplies a stable platform for collecting performance measurements. The standard and its profile of performance yields the information people need to uncover improvement opportunities, make and measure improvements, and extract learning.

Frequently, small companies or large companies that grew rapidly have no official documented work standards. Also, we have observed that office and service work settings, whether small or large, lack documented work standards and metrics. You cannot proceed with any continuous improvement effort until you remedy this gap.

Before you proceed, however, be sure you know why you do not have standardized work processes in your business. It is not always an issue of need or "know how." The cause for the absence of standards may be rooted in your company's culture. Standards may be absent because company leadership chooses to defer to individual preferences or the privilege of rank over the achievement of a common goal. Leadership may not wish to challenge managers who are uncomfortable dealing with abstractions like standards or the discipline they require. Such managers prefer a hands-on approach with the maximum latitude to direct operations as they see fit when they see fit, and their companies choose to accommodate these preferences. Such "people issues" require a different solution than the acquisition of "know how," and this article does not address those solutions.

How to Build a Standard


We provide guidance in the Kaizen Desk Reference Standard that addresses how to build work standards and I will be referring to that guidance throughout this article. To benefit most from this article, you need access to the Kaizen Desk Reference Standard or be familiar with its contents.

In building standards, you will need skill in describing a target work process, detecting waste, and documenting procedural knowledge. Examples of a work process description and of how to detect waste are available elsewhere on this web site—see, for example, Kaizen in Action or read other Kaizen event descriptions that are posted on our Share/Learn page. For examples of documented procedural knowledge, read any chapter of the Milestone, Task, or Step chapters in the Kaizen Desk Reference Standard. Each of these chapters is structured as a procedural knowledge document.

General Guidance

Anchor your effort to build a work standard with an output (a product or service outcome). The output guides you in deciding what operations to include in your work process description. Essentially, you include only those activities that people do to produce that output.

Be certain the output is not the final product or service delivered to the customer, only a component of it. Work standards target subprocesses within a total value stream and subprocesses produce components of the final product or service outcome, not the outcome in its entirety.

Next, capture the customers' key requirements relative to this output and identify metrics that measure whether the output satisfies the customers' requirements. Use the customers' key requirements to determine whether the output the work process produces is value adding from a customer's perspective. You need to verify that the output you selected merits having a standard. You must answer the question, "Does this output address and satisfy a customer requirement sufficiently to justify us investing time in standardizing and improving how we build it?" In this step, you may uncover simple improvements you can make to the output's features that would make it even more value adding to its customers. You would standardize the process for producing this improved version of the output, not its current version1.

Once you have documented the customers' requirements and verified that your output should have a standard, use your skill in describing a work process to document the overview information for the work process and map the "typical" way it is done. See Step D1-S1. Build a Description of the Target Work Process, pages 197-213, of the Kaizen Desk Reference Standard for detailed steps.

The work process overview describes the purpose the work process accomplishes, the conditions under which it operates, and the metrics used to measure its performance. The overview also documents the inputs the process begins with including the information that triggers its start. It includes a listing of all locations where the work process is implemented and all the organizations and groups who participate in implementing it. (See example below.) Whenever an element of the overview has not been previously defined, define it using current practice.
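Purely as an illustrative sketch, the overview fields described above could be captured in a simple structured record; the field names and example values below are inferred from this description, not prescribed by the Kaizen Desk Reference Standard:

```python
# Illustrative sketch only: a simple container for work process overview
# information, with fields inferred from the description above.
from dataclasses import dataclass, field
from typing import List

@dataclass
class WorkProcessOverview:
    purpose: str                       # what the work process accomplishes
    operating_conditions: str          # conditions under which it operates
    metrics: List[str]                 # measures of its performance
    inputs: List[str]                  # inputs it begins with
    trigger: str                       # information that triggers its start
    locations: List[str] = field(default_factory=list)    # where it is implemented
    participants: List[str] = field(default_factory=list) # groups who implement it

overview = WorkProcessOverview(
    purpose="Assemble and ship a customer order",
    operating_conditions="Single shift, make-to-order",
    metrics=["cycle time", "defect rate"],
    inputs=["picked items", "packing materials"],
    trigger="Order released by the sales system",
    locations=["Plant A"],
    participants=["Shipping", "Quality"],
)
print(overview.purpose)
```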

Build the work process overview using information from all managers and performers of the work process. Resolve differences in the overview information, especially with respect to the purpose of the work process. Be sure to involve representatives from all interfacing work processes, and make sure that their expectations of the work process are incorporated in its purpose statement. Obtain approval of the overview from the work process manager, his or her manager, and the manager who oversees all the organizations that interface with the work process.

Once the overview is complete, map the operations that transform inputs into the work process's output. (See example below.) Use the overview as your reference for mapping the work process.


The sequence of operations you map must accomplish the purpose defined in the overview. Gather the information for work process map from the people who do the process using interviews, team discussions, and a walk through. Begin with the most common way the work process is done ("typical process"). Once this is documented, collect the alternate ways people produce the output and post these variations on the map of the typical approach. Apply your skills in detecting waste to the various alternative ways so that you can select the approach that minimizes waste. Use the guidance in Step D1-S2. Walk Through the Target Work Process, pages 215-239, of the Kaizen Desk Reference Standard. Either do an actual walk through or simply a "talk through" of the mapped process, as appropriate. If there is contention about which process is better, let quantitative information make the decision. Use the measurement data you currently produce to sort out which alternate approach does better. But, be reasonable in the effort you apply to settling on the initial standard. You do not need perfection as a beginning point. Once the standard is established, it becomes the "current best practice" and you apply Kaizen to continuously improve it. You want an initial standard that is your best version of the work process but, in essence, it is only a baseline. Standards, in a continuous improvement setting, are always "current best practices" and never "forever and always" best practices.


With your new standard, you need to define metrics for its output and operation and you should establish visible ways to communicate how well the work process is performing with respect to these metrics. In this way, everyone participates in seeing how the standard is working and everyone has a chance to uncover new opportunities for further improving it. The most basic metrics are:

Takt time
Cycle time
Value-added ratio
Throughput (units of output per unit of time)
Unit cost
Defect or conformity rate
Scrap rate
Rework
Machine uptime (if machines are involved)
Percent of cycle time implemented by machines
Percent of cycle time implemented by people
Safety (recordables, for example).

Tailor other metrics to measure performance on any other factors that help or harm business success.
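As a brief sketch of how two of these metrics are typically computed (the formulas follow the usual lean definitions, which are assumptions here since the article does not define them):

```python
# Illustrative sketch, assuming the usual lean definitions of these metrics.

def takt_time(available_minutes: float, units_demanded: float) -> float:
    """Takt time = available working time / customer demand."""
    return available_minutes / units_demanded

def value_added_ratio(value_added_minutes: float, total_lead_minutes: float) -> float:
    """Share of total lead time that is value-adding work."""
    return value_added_minutes / total_lead_minutes

# Hypothetical numbers: 450 available minutes per shift, 90 units demanded,
# 12 value-adding minutes inside a 60-minute lead time.
print(takt_time(450, 90))         # 5.0 minutes per unit
print(value_added_ratio(12, 60))  # 0.2
```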

Once you have mapped the work process and defined its metrics, complete your work by preparing a procedural knowledge document. This document captures the guidance a person needs to perform a process successfully. It augments the map with narrative detail that addresses each key topic a performer must know about. Also, it breaks down each of the operations in the process map, providing detailed steps and substeps (if needed) and includes tips that will prevent a person from making errors.

Before you finish, make sure you have a change management procedure in place. You need a method by which you qualify new ideas before you implement them. You also need to give each improved standard sufficient time to demonstrate its capabilities before you make further improvements; otherwise you never know what you are accomplishing.

When you roll out the standard, be sure to educate everyone to the new standard and to prepare them with any new skills they require so that everyone knows and can do the improved method correctly.


Theory of Constraints (Part 4) - Reasons for delay

In this post, I shall be giving some reasons why projects get delayed in spite of the use of tools such as Gantt charts. So here goes:

 

Protection of buffers during estimates of work - When estimating the time it will take to complete the work, generally a buffer is kept in the estimates to make sure that the work gets completed in the time that has been estimated. It is empirically seen that when there is a 50% chance of the work getting completed in a particular amount of time, to be 90% confident that the work will get completed, the time estimate has to be 2 times the amount of time estimated with a 50% confidence.

To get over this issue, there has to be a change in thinking between both the person giving the estimates and the people to whom they report. In order to give a 50% confidence estimate, the person who gives the estimates must be confident that he/she will not be penalized if the deadline is missed around 50% of the time. Hence the person to whom the estimate is sent must make sure that no penalty occurs due to the missing of the estimate.


Now once all the estimates for the individual pieces of work are taken into account, a buffer has to be kept at the end of the total estimate. Since roughly half of the tasks will overrun their 50%-confidence estimates while the other half finish early, an extra 50% of the total can be kept as a single buffer at the end of the estimate. Even then, about 25% of the time is saved compared with the previous method of padding each estimate individually.
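The arithmetic in the two paragraphs above can be checked with a small calculation; the task list and its 50%-confidence estimates are hypothetical:

```python
# Illustrative sketch of the buffer arithmetic described above.
# Hypothetical chain of tasks, each with a 50%-confidence estimate (days).
estimates_50pct = [5, 8, 3, 6, 4]

# Padding every task to a ~90%-confidence estimate (roughly 2x, per the text).
padded_per_task = sum(2 * t for t in estimates_50pct)

# Alternative: keep the 50% estimates and add one project buffer
# of 50% of their sum at the end.
with_project_buffer = sum(estimates_50pct) * 1.5

print(padded_per_task)      # 52 days
print(with_project_buffer)  # 39 days, i.e. 25% shorter than padding each task
```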

Student Syndrome - This occurs when a person allocates a due date to complete the work and then works on the project with 100% focus only when the due date is near. This is very similar to how students put off their homework till the last minute and hence the name. One can imagine how this behavior along with the previous behavior is quite dangerous to a quick delivery.

Starting the work immediately - Though this sounds directly contradictory to the above law, it is not. This delay occurs when the work starts without having all the inputs in place. Hence in best case scenario, when the particular input is required, the work stops and does not proceed until the input comes in. In the worst case, when the particular input comes in, it is found that the work done so far is useless and it is to be done all over again.

Parkinson's Law - This law states that "Work expands so as to fill the time available for its completion." For example, a piece of work has been given which is to be completed in 5 hours. In case the person doing the work is able to complete it in 3 hours, the result will not be passed on. Instead, more checks and more processing of the work take place until the 5 hours are used up. This, along with student syndrome, explains why only the delays in work are passed on and no savings of time are passed on in spite of having big buffers. As an aside, Parkinson's Law is also used to explain how bureaucracies expand irrespective of the work assigned to them until the organization they support collapses.


Multitasking - This phenomenon is often seen as a time saver and not a time waster. Unfortunately in the real world, this is one of the biggest time wasters. To explain this further, kindly check the diagram below and read on.

Consider that each color shown here is a different task. Hence as observed in the first bar, there are 3 tasks which take an equal amount of time to complete. In case the tasks are done one after another without any multitasking, it is observed that the first task completes in around 1/3rd of the total time, the second task in 2/3rd of the total time, with the last task completing at the end. The little white gaps in between are the time taken to shift from one task to another. In manufacturing industries, this is known as setup time.

Now when multitasking is used (The second bar with each of the colours split in three equal parts), it is noticed that the first task takes more than 100% of the time that it would have taken if multitasking was not done. Even the second task takes more time to get completed. The total time to complete all the tasks also increases, as the number of times shifting between tasks (The white space) multiplies by as many times as the tasks are split. This total time increases so much more when this setup time (The amount of white space during each shifting over from one task to another) also increases.
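A rough numerical sketch of this effect, with invented task lengths, slice counts and setup times, shows both the later finish of the first task and the longer overall schedule:

```python
# Illustrative sketch: finishing times with and without multitasking.
# Task lengths, number of slices, and setup (switching) time are invented.
task_duration = 10.0   # time units per task
setup_time = 1.0       # time lost each time work switches to another task
n_tasks = 3
n_slices = 3           # each task is split into 3 parts when multitasking

# Sequential: finish one task before starting the next.
seq_first_done = task_duration                                        # 10.0
seq_all_done = n_tasks * task_duration + (n_tasks - 1) * setup_time   # 32.0

# Round-robin multitasking: one slice of each task per round.
slice_len = task_duration / n_slices
slices_before_first_finishes = n_tasks * (n_slices - 1)  # slices run before task 1's last slice
multi_first_done = (slices_before_first_finishes + 1) * slice_len \
                   + slices_before_first_finishes * setup_time                    # ~29.3
multi_all_done = n_tasks * task_duration + (n_tasks * n_slices - 1) * setup_time  # 38.0

print(seq_first_done, seq_all_done)                 # 10.0 32.0
print(round(multi_first_done, 1), multi_all_done)   # 29.3 38.0
```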

Improper use of the constraint - It has been seen how critical the constraint is to produce the output in an optimal manner. So if this constraint is used improperly, the delay will increase. Here are some ways that the constraint is used improperly.

Sending bad quality inputs to the constraint. This can be avoided by having a quality check of the inputs which go to the constraint. 

Sending unimportant work to the constraint. This happens when inputs are sent to the constraint to produce an output which is not required right now, while an output that is currently required goes unprocessed because the constraint is busy with the unimportant task.

Making the constraint work on general processing. The constraint is generally a specialist in some task. In case the constraint starts working on a task which can be done by any other non-constraint (A general, non-specialist task) then there is a waste of time which could be better utilized.


This brings us to an important point of Theory of Constraints. An hour wasted in a non-constraint is no time wasted at all. An hour wasted in a constraint is an hour wasted on each and every part of the whole system.

Delay Analysis - Methodology and Mythology - Part 1, A. Farrow (Digest Issue 27) 

Delay Analysis – Methodology and Mythology Part 1 by Tony Farrow

The following article is Part I of a two part article on analysis of delays in a construction project. This first part explains the process of delay analysis and methodology and examines the ‘theoretical based’ methods of analysis.

The second part of the article will consider ‘actual based’ methods of delay analysis, will contrast the theoretical and actual based methods and discuss what factors may affect the selection of the appropriate method

Introduction

‘Has construction law changed much over the past decade?’ During a question and answer session at a recent lecture on construction claims, I was asked this question. In giving my answer, I considered the question from three perspectives: recent reforms, statute and case law.

With regard to the first point, I referred to Lord Woolf’s report Access to Justice published in 1995 and his recommendations aimed at addressing the criticisms that civil justice was too slow, costly and complex. Building on the recommendations contained in this report, the Civil Procedure Rules came into effect on 26 April 1999 and replaced the Rules of the Supreme Court (the White Book). Judicial case management lies at the very heart of these reforms and I have no doubt that anyone involved in construction disputes 10 years ago will agree that the case management of construction litigation has been revolutionised over the past decade.

I then turned to Sir Michael Latham’s report Constructing the Team (more commonly referred to as the Latham Report), published in 1994, a precursor to the Housing Grants, Construction and Regeneration Act 1996 (UK) (the Act) which introduced statutory adjudication into many areas of construction. Five years have passed and the effect of this legislation has been considerable. The number of construction disputes that now proceed along the traditional avenues of litigation and arbitration has been reducing each year since the introduction of the Act. Ask any construction lawyer what type of work they are involved in these days and the answer will probably be: ‘Another adjudication: it’s been non-stop for months’. Although it is not yet known what the long term consequences of adjudication will be, it can be stated with certainty that the introduction of adjudication has fundamentally altered the construction dispute process.

Finally, I considered recent case law, with particular emphasis on cases concerning extensions of time, which was the area of my talk. I referred to:
• Balfour Beatty Building Ltd v Chestermount Properties Ltd
• Piggott Foundations Ltd v Shepherd Construction Ltd


• John Barker Construction Ltd v London Portman Hotel Ltd
• Ascon Contracting Ltd v Alfred McAlpine Construction Isle of Man Ltd
• Henry Boot Construction (UK) Ltd v Malmaison Hotel (Manchester) Ltd
• Royal Brompton Hospital v Frederick Alexander Hammond.

From these cases, the following conclusions can be drawn.
1. Theoretical calculations of delay have been discredited on numerous occasions.
2. What actually occurred on the particular project is significant when analysing contractual entitlement.
3. Who owns the program float, as between employer and contractor, is still unclear. Simplistically one would conclude that the contractor does, but there is rarely a simple case to which to apply the rule and so the debate over ownership continues. For example, a contractor cannot allocate a general float period on a selective basis when dealing with subcontractor delays.
4. When there are two concurrent delays, one the employer’s responsibility and the other the contractor’s, the latter is entitled to an extension of time.
5. However, where the contractor is already in delay and the employer introduces an excusable event which does not further delay the works, no extension of time is due.
6. But, if the excusable delay further delays the works, the contractor is entitled to an extension of time, on the net entitlement basis.

From the perspective of extensions of time, my view is that recent case law has not brought about significant changes to construction law. Instead, the courts have clarified a number of issues, particularly in relation to the facts and contract terms of those specific cases.

Following the lecture, I reflected on how my own practice of preparing and evaluating contractors’ claims and related expert witness work had changed over the past decade. One particular feature stood out. Trett Consulting employs three times as many forensic planners (or programmers or schedule analysts) in 2001 as it did in 1991. This is significant because it indicates how the nature of contractors’ claims has changed, in particular with respect to the analysis of project delay. There was a time when this work was the domain of quantity surveyors but today, in a firm such as Trett Consulting, half my colleagues are from a planning, construction or engineering background, rather than purely commercial. This stems from the fact that delay analysis today is more involved, more analytical, more forensic and a more challenging feature of construction law than it was 10 years ago.

This change can also be attributed to developments in computer technology, in conjunction with more advanced planning and forensic software tools which are now available to the analysts involved in the investigation of project delays. Indeed, delay analysis has become a construction law subject in its own right, with American and UK publications available relating solely to this topic. The Society of Construction Law in the UK has drafted a protocol for determining extensions of time.

I discussed how delay analysis has changed with several of my colleagues. Of particular interest were the experiences of one of our forensic planners, Chris Foan. Having given evidence in a number of arbitrations, he expressed the view that the approach to delay analysis was indeed changing. However, rather than setting out what had occurred on a project and why, he felt many claims were being presented on what should have occurred or what would have occurred but for a series of events imposed upon a party. Claims were based on theoretical models, not interrogation of fact.

We debated the number of text books now available and formed the view that while the problem has never changed (that is, ‘what caused the delay and who is responsible for the time and cost consequences?’), the means of investigating and presenting the problem (that is, the delay analysis methodologies) had changed. We decided to review a selection of these methodologies and applied them to a single project scenario. The conclusion drawn from our research was that many of the techniques produced different and, in some cases, unrealistic results which were closer to fiction than fact.

From this we coined the phrase ‘ delay analysis mythology’ and this is the theme of this article. That is to say, methodology is concerned with a set of practices, rules or procedures used by those engaged in an enquiry, and mythology is fictional story telling. Are those using delay analysis methodologies as a means for investigating project delay, actually engaged in delay analysis mythology?

The following notes briefly describe the types of methodologies in use, their advantages and disadvantages, the factors influencing selection of a methodology, the means of testing the robustness of each, and most importantly, identifying how methodology can sometimes lead to mythology.

What is delay analysis?

Before considering the various different types of delay analysis methodology, it is necessary to consider what we mean by delay analysis and why we should want to use it.

Delay analysis refers to a forensic investigation into the issue of what has caused a project to run late. That is, delay to the completion of work or contract milestones caused by the time impact of events such as variations, late information, excessively inclement weather, poor performance, remedial works and the hundreds of other delay causing circumstances that arise on construction projects.

Analysts distinguish between critical and non-critical delay, the former delaying the project’s completion date and the latter affecting progress but not overall completion. Ultimately, we need to identify those events causing critical delay for evaluating extensions of time, but it can be difficult to distinguish between each type. Analysts also investigate program disruption, or disruptive delay, which concerns issues of productivity, acceleration, congestion, fatigue, morale and other consequential effects of project change. This aspect is not considered in this article.

There are three primary reasons why one might want to analyse delay: to establish lines of investigation; to demonstrate entitlement; and to present the case one is seeking to prove.

To help establish lines of investigation


An investigation of a construction project will involve consideration of a wide variety of issues. These include: where the delays occurred (which part of the project – was it in the foundations, steelwork, roof, air conditioning and so on); when did the rate of progress decline; where did late information or materials cause delay; instances of competing delays; poor productivity; insufficiency of resources; lack of design information; failure to progress; excessive rainfall and so on.

Such an investigation requires a forensic review of project records involving three stages.
1. Databasing relevant project records, such as program activities, progress schedules, actual start and finish dates, labour allocation sheets, daywork sheets, notices of delay, material deliveries, plant deliveries, weld records, chainage progress, drawing registers and so on.
2. Analysing the databased records by linking related data together where relevant (for example, drawing numbers linked to activity progress), sorting/grouping data by different variables (for example, progress by floor, by gang, by ‘system’, by trade and so on), aggregating or summing together quantities (for example, numbers of drawings per week, progress achieved per month and so on) and selecting or filtering data of particular interest.
3. Graphing the results of the analysis using barcharts, histograms, line graphs, tables and so on.
Analysis of project information in this way can help to highlight when events, delays or disruptions arose, how extensive they were, where they occurred on the project and which program activities were affected.
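As a small, hypothetical sketch of these three stages (the record layout, column names and use of pandas are assumptions; any databasing and charting tool would serve):

```python
# Illustrative sketch of databasing, analysing and graphing project records.
# Column names and data are hypothetical.
import pandas as pd

# Stage 1: database relevant records (activity progress and drawing issues).
progress = pd.DataFrame({
    "activity": ["Foundations", "Steelwork", "Roof"],
    "planned_finish_week": [6, 12, 16],
    "actual_finish_week": [8, 15, 18],
})
drawings = pd.DataFrame({
    "drawing_no": ["D-101", "D-102", "D-103"],
    "activity": ["Foundations", "Steelwork", "Steelwork"],
    "issued_week": [3, 5, 9],
})

# Stage 2: link related data, group it, and derive variances.
progress["delay_weeks"] = progress["actual_finish_week"] - progress["planned_finish_week"]
linked = progress.merge(
    drawings.groupby("activity", as_index=False)["drawing_no"].count()
            .rename(columns={"drawing_no": "drawings_issued"}),
    on="activity", how="left",
)

# Stage 3: graph or tabulate the results (a bar chart would need matplotlib,
# e.g. linked.set_index("activity")["delay_weeks"].plot(kind="bar")).
print(linked)
```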

The graphs or charts produced in this way are working documents, in that their purpose is to identify changes or variances (such as peaks and troughs in resource levels, design information, overall productivity and so on), trends (indicating where delays arose, where events and delays occurred at the same time, reductions in productivity and so on) and differences (such as illustrating that certain floors/trades/systems were not affected whereas others were).

This kind of working data analysis can be raw and involves ‘slicing and dicing’ the project records in order to discover where the effects appear to exist and where the problems probably arose. In general, such analysis does not rely on a delay methodology, but requires a free format and versatile data analysis and graphics software tools. From this investigation, the analyst hopes to identify those issues, time periods or construction elements that require a more detailed study.

To demonstrate entitlement

This is the main subject of this article. I use the phrase ‘to demonstrate entitlement’ with caution as it may imply that the delay analysis using one of the methodologies is the demonstration (that is, it discharges the party’s burden to prove the consequences of a set of events upon the progress of the works). However, this implication is mythology. The delay analysis methodologies do not provide the ultimate answer in a case concerning extensions of time. The methodologies are tools for assisting in describing or analysing complex sets of facts. It is the engineer or architect, or ultimately an arbitrator or a judge, who has to consider and weigh up all the competing evidence and form an opinion. The delay analysis exercise will assist in this process but it will only be part of the evidential matrix. That is to say, the tribunal has to weigh up the terms of the contract, relevant case law, witness evidence, contemporary records such as photographs, as well as considering analytical exercises such as delay analysis, and form its own views.


To present the case

Having interrogated the project records and analysed the delays, it is necessary to convince the opposing party. Visual aids can help this process (primarily graphs and charts) and these can be produced using IT tools used by the delay analyst.

Delay analysis methodologies

There appear to be two groups, or types, of delay analysis methodology. The first category is often referred to as entitlement based methods, but this is not an ideal description since it can be confused with contractual entitlement. For example, a contractor’s entitlement to an extension of time is derived from the terms of the contract, whereas entitlement here is derived from the results of a delay analysis methodology (that is, it is methodological entitlement). The two are clearly different.

Theoretical based methods is perhaps a better definition, since these methods rely on demonstrating the theoretical impact of the consequences of delaying events, rather than on showing what in fact occurred. Another definition of this group could be model based methods, since each methodology is based on establishing a programming model of the project and then influencing it by the application of project events or constraints. The theory based grouping includes the global and net impact methods, the as-planned but for, the as-planned impacted and the as-built but for methods.

The second group is called the actual based methods, since they seek to demonstrate what actually occurred on a project and the analyst investigates what caused the project delay. These methods include the as-planned vs as-built, the window/ snapshot and the impact/update methods.

The theoretical based methods all approach the issue of entitlement to extension of time by first focusing on the delay event and then seeking to determine what delay may have resulted. However, this is not achieved by identifying its actual impact from recorded facts, but by theoretical analysis of what the effect ought to have been. These methods tend to favour the contractor’s position because matters such as culpable delay (that is, where the contractor has a problem of its own), the effects of mitigation (that is, the employer’s delay being offset by simple corrective action by the contractor) and the programming changes actually implemented by the contractor, tend to be considered only as secondary issues, if at all.

On the other hand, the actual based methods approach the analysis by seeking to measure how actual progress differed from what was planned. They focus on how the works progressed, how activities were actually delayed and only thereafter seek to ascertain what delay event(s) caused this delay.

I would emphasise that the two groups do have common features and cannot be distinguished in absolute terms. For example, the actual based methods also rely on models and theory, but less so than the entitlement or theory based methods.

One feature that all the methodologies have in common, however, is the subjectivity involved in the entire delay analysis process. If different analysts investigated the same project, applying the same method and using the same facts, they would be unlikely to arrive at the same conclusion. This is because each analyst will have to consider and challenge a wide variety of related issues and each analyst will apply different degrees of personal experience and judgment.

These related issues include:
• the sufficiency of the planned durations;
• the sequence and logic of the planned program of work;
• the sufficiency of resources provided by the contractor to carry out the works;
• criticality, being the determination of those activities influencing progress at each stage;
• program float or risk contingency;
• the impact of revisions to the program;
• concurrency of events (two events imposed at the same time);
• concurrency of delays (two events imposed at different times but causing delay at the same time);
• contractor mitigation, that is, making up lost time by construction experience and skill;
• dominance theory (which event was the major contributor to delay); and
• acceleration, which involves increasing the rate of production by employing more resources or working longer hours.

To take a very simple case: an element of work took two men four weeks instead of two. Throughout the period the weather was dreadful. The contractor claims that the weather caused the delay. The employer asks why the contractor did not employ four men. Opinions will differ, not only between contractor and employer, but also between analysts.

The point, I believe, is this. The delay analysis methodologies each provide a set of rules for examining project delay. However, issues affecting the analysis using any methodology require subjective assessment and it is these assessments that undermine the analytical or clinical nature of the process. In addition, the rules of the methodologies can be ill defined or require judgment in applying them, and this again increases the level of subjectivity.

In summary, none of the methodologies are perfect because they all include an element of assumption, subjective assessment and theoretical projection, some more than others. For this reason, the ‘answer’ that a methodology provides is only as good as the accuracy of the base information, the assumptions inherent in the methodology and the reasonableness of the subjective decisions made by the analyst. This is important to recognise if you are employing delay analysis to assist in presenting your client’s case.

The following notes briefly discuss each of the theoretical based methods referred to earlier. Part 2 of this article discusses actual based methods and this is followed by some practical considerations of selection and use. One problem any writer on delay analysis has is attempting to describe in words something that really requires a dynamic and graphical presentation.

Theoretical based methods

Global impact method


The global impact method is a rough and ready way of indicating what the potential impact of a delay causing event has been. An example of a global approach is where the work scope (that is, the amount of work the contractor has to carry out) doubles due to variations and so the duration of the relevant activity is doubled. In the McAlpine Humberoak v McDermott International case the claimant sought to demonstrate its delay case by arguing that every hour worked on particular variations equated to an hour of delay to the project (the argument was not accepted!). Another example of a global claim arises when you take the date of a variation and measure the difference between the planned start or finish date of the relevant program activity and the date of the variation, to arrive at a period of delay to the project.
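To make the arithmetic concrete, the following is a minimal sketch of a global impact style calculation, using invented dates and a hypothetical variation rather than real project data. It simply measures the gap between a planned activity date and the date the variation was instructed.

```python
# A minimal sketch of a "global impact" style calculation on invented data:
# the claimed delay is just the gap between a planned activity date and the
# date a variation was instructed. Dates are illustrative assumptions only.
from datetime import date

planned_finish = date(2023, 3, 10)   # planned finish of the affected activity (assumed)
variation_issued = date(2023, 4, 2)  # date the variation was instructed (assumed)

claimed_delay_days = (variation_issued - planned_finish).days
print(f"Global impact claim: {claimed_delay_days} days of project delay")
# Note that the calculation ignores criticality, concurrency and contractor
# culpable delay, which is exactly why this method attracts criticism.
```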

The global approach is quick and simple but never contractually supportable and provides no cause and effect. It ignores other delays occurring at the same time and does not consider timing, concurrency or dominance of delays. It also ignores any actual delays caused by the contractor.

This method has also been repeatedly criticised by the courts because it fails to consider the fundamental issue of criticality (that is, whether the works were delayed or not) and ignores reality, as well as the contractor’s duty to mitigate. Despite the criticisms, the global method is still widely used by contractors endeavouring to demonstrate their case.

Net impact method

This is essentially the same as the global method but with the refinement that the issue of concurrency of delays is considered. For example, where there are two concurrent delays each of five days, only five days is taken as delay to the works rather than the total of 10 days, as would be the case with the global method. The advantages and disadvantages of the global impact method also apply to this approach. In summary, this method has little to commend its use.
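As an illustration of the concurrency refinement, the short sketch below (with invented delay periods) counts overlapping delay days once rather than twice, which is the only respect in which the net impact method differs from the global method.

```python
# A minimal sketch of the net impact refinement on invented data: two concurrent
# five-day delays are counted once, giving five days rather than the ten days a
# purely global addition would produce.
from datetime import date, timedelta

delays = [  # (start, end) of each alleged delay period; dates assumed for illustration
    (date(2023, 5, 1), date(2023, 5, 5)),  # delay event A: 5 days
    (date(2023, 5, 1), date(2023, 5, 5)),  # delay event B: concurrent 5 days
]

def net_impact_days(periods):
    """Count each calendar day of delay once, however many events cover it."""
    days = set()
    for start, end in periods:
        d = start
        while d <= end:
            days.add(d)
            d += timedelta(days=1)
    return len(days)

gross = sum((end - start).days + 1 for start, end in delays)
print("Global (gross) total:", gross, "days")                # 10 days
print("Net impact total:", net_impact_days(delays), "days")  # 5 days
```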

As-planned impacted method

This method is also known as the entitlement method and the POPE method (program of possible entitlement). It analyses the theoretical effect of impacting delay events onto the original baseline (that is, planned) program and projecting the completion date using the original sequence and timing of remaining activities. It can be used to show the theoretical delaying effect of the employer’s delays, or of the contractor’s delays, or of both together.

The prerequisites of this method are a baseline critical path program that represents the contractor’s intent and a schedule of delay events. The first step is an assessment of the likely critical delaying effect of each delay event in the schedule. This can be estimated using norms and experience, or be based on evidence of the actual delay experienced on the project. Second, for each delay event, the effect (such as a delayed start, delayed finish or prolonged duration) is individually impacted onto the planned program, in chronological order, and the project completion date is re-analysed. This process continues until all of the delays have been impacted.
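The mechanics can be sketched with a toy critical path model. The activities, durations and the six-day delay below are assumptions for illustration only; the point is that the delay event is impacted onto the baseline and the completion date is simply re-analysed, without any reference to actual progress.

```python
# A minimal sketch of the as-planned impacted mechanics on invented data: a small
# baseline network is analysed with a forward pass, one delay event is impacted onto
# an activity, and the projected completion is re-analysed. Names, durations and the
# delay are illustrative assumptions, not data from any real project.

baseline = {  # activity: (duration in days, list of predecessors)
    "groundworks": (10, []),
    "frame":       (15, ["groundworks"]),
    "roof":        (8,  ["frame"]),
    "fit_out":     (20, ["frame"]),
}

def completion(network):
    """Forward pass: earliest finish of the whole network."""
    early_finish = {}
    def ef(act):
        if act not in early_finish:
            dur, preds = network[act]
            early_finish[act] = max((ef(p) for p in preds), default=0) + dur
        return early_finish[act]
    return max(ef(a) for a in network)

print("Planned completion: day", completion(baseline))

# Impact a delay event: a variation is assessed as prolonging the frame by 6 days.
impacted = dict(baseline)
dur, preds = impacted["frame"]
impacted["frame"] = (dur + 6, preds)

print("Impacted completion: day", completion(impacted))
print("Theoretical delay:", completion(impacted) - completion(baseline), "days")
```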

The strength of this method is that the process avoids the need to analyse actual progress records in detail because the key elements of the methodology are the original baseline program and a schedule of delay events.

However, there are two principal weaknesses of this method. First, the original baseline program may not be a realistic model on which to base the whole analysis (because the works were probably carried out in a different sequence and at a different time from that originally planned). Second, since actual progress is not considered, this method does not demonstrate what actually caused delay to the works. If it can be shown that a delay event relied upon in the analysis could not have actually caused delay (for example, if it can be shown that alleged late information was received well in advance of the actual progress of the works), then the methodology will lose credibility.

If, as is often the case, the result of this method is a projected end date that is much later than the actual achieved end date, then the reasonableness of the analysis will be in doubt. As with the other entitlement methods, the results derived from the analysis are likely to be attacked as artificial.

As-planned but for method

For this method, the analyst impacts the planned baseline program with the assessed implications of the events a party considers it is responsible for, and the combined influence of these is analysed. The impacted completion date is then compared with the as-built completion date (that is, when the project was actually completed) and the difference is said to be how much earlier the project could have finished ‘but for’ all the other delay events (imposed by the other party) which have not been analysed. The period between the analysed date and the actual completion date is said to represent either the contractor’s entitlement to an extension of time or the employer’s entitlement to deduct liquidated damages, depending on which set of events has been analysed.

The advantage of this method is that it is reasonably quick, as there is no need to consider actual progress of the works or the timing of events, but this means it is a theoretical investigation.
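A stripped-down numerical sketch of the comparison, using invented figures, might look like this: only the delays the contractor accepts responsibility for are impacted onto the planned duration, and the remaining gap up to the as-built completion is claimed as the extension of time.

```python
# A minimal sketch of the as-planned but-for comparison on invented figures, as run
# by a contractor: its own culpable delays are impacted onto the planned duration,
# and the remaining gap to the actual completion is attributed to the other party's
# events, 'but for' which the works could have finished earlier. All values assumed.

planned_duration = 100          # planned project duration in days (assumed)
contractor_delays = [4, 3]      # critical delays the contractor accepts (assumed)
actual_duration = 130           # as-built project duration in days (assumed)

impacted_duration = planned_duration + sum(contractor_delays)
but_for_entitlement = actual_duration - impacted_duration

print("Completion 'but for' the employer's events: day", impacted_duration)
print("Claimed extension of time:", but_for_entitlement, "days")
```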

This method also relies upon the planned model for carrying out the works and ignores the fact that the actual critical path would more than likely be different. This is because a planned program is an early projection of intentions and the contractor will change the sequence and timing of activities when the works are in progress.

As-built but for method

Although this methodology adopts a model of the as-built program for analysis, and so starts off as a fact based analysis, it is still a theoretical analysis of delay. The approach is similar to the as-planned impacted method, but in reverse. The as-built program is first constructed and linked together into a critical path network. This becomes the model to be analysed. A schedule of delaying events is created, including a measure of their impact (for example, a start delay, a finish delay or an activity prolongation). The very last excusable delaying event on the critical path is removed and the model is re-analysed. The difference in the overall program duration before and after this removal is said to represent the period of critical delay caused by the particular delay item removed. For example, if the last activity in the project happens to be a five day painting variation, when this excusable delay is removed, the program is able to finish five days earlier. Thus, ‘but for’ the additional painting, the program could have finished five days earlier.

This process is then continued until all excusable delays have been removed and the model has been fully collapsed (the method is also referred to as the collapsed as-built method). The process can also be applied by removing all the excusable events together, or by grouping similar excusable events together and removing each group individually. Whatever the approach, each invariably produces a different answer! Once the analysis is underway, the model no longer represents the real as-built program but is only a simulation of what the as-built program could have been had the delay events not occurred. The model is, therefore, sometimes referred to as the simulated as-built program.

The accuracy of the analysis will depend on the quality of the information on which it is based. The greater the amount of information that can be provided in support of any assumptions made, the more credible the results. Such information will generally be gained from site records, in whatever form they exist, and the importance of accuracy, completeness and reasonable logic cannot be overstressed. However, since the method is based on the as-built program, it appears to have a thread of truth about it.
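By way of illustration, the sketch below collapses an invented as-built chain that ends with the five-day painting variation mentioned above; removing the event and re-analysing shows the program finishing five days earlier. The activity names and durations are assumptions, not data from any real project.

```python
# A minimal sketch of the collapsed as-built mechanics on invented data: the as-built
# network includes a five-day painting variation as its final activity (the excusable
# event); removing it and re-analysing shows the program collapsing by five days.

as_built = {  # activity: (as-built duration in days, predecessors) - all assumed
    "structure":          (40, []),
    "services":           (25, ["structure"]),
    "decorations":        (15, ["services"]),
    "painting_variation": (5,  ["decorations"]),  # the excusable delay event
}

def completion(network):
    """Forward pass: overall as-built (or collapsed) duration."""
    finish = {}
    def ef(act):
        if act not in finish:
            dur, preds = network[act]
            finish[act] = max((ef(p) for p in preds), default=0) + dur
        return finish[act]
    return max(ef(a) for a in network)

before = completion(as_built)

# Collapse the model by removing the excusable event (here no logic needs repairing,
# because no other activity depends on it; real networks are rarely so tidy).
collapsed = {k: v for k, v in as_built.items() if k != "painting_variation"}
after = completion(collapsed)

print("As-built duration:", before, "days")
print("Collapsed duration:", after, "days")
print("Delay attributed to the painting variation:", before - after, "days")
```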

Although the principles of this methodology are straightforward, its application is not. It is dependent on subjective logic links that were never set down and which were never agreed in any contemporaneous program. It is open to criticisms of bias on the part of the analyst. Also, removing delay events (and logic links) retrospectively does not reflect the actual way the works were progressed. Hence, this methodology does not appear to be an appropriate one for most standard forms of contract. In summary, the results of an as-built but for analysis are often impossible to defend because they do not relate to what actually happened on the project.