
Journal of Terramechanics 44 (2007) 15–22


The changing role of physical testing in vehicle development programmes

Paul Wilkinson

LTC Ltd., Aston Way, Leyland PR26 7TZ, United Kingdom

Available online 29 March 2006

Abstract

The role of physical testing in product development is changing due to the requirements for faster new product development, reduced tolerance of failures in the field and the emergence of computer aided engineering (CAE) technologies. To be used most effectively, testing must be seen as an integral part of the process for reducing risks associated with new product introductions. Quality function deployment (QFD) and failure modes and effects analysis (FMEA) can be used to establish effective test and development plans that integrate the use of virtual and physical testing. By effectively integrating virtual and physical test technologies, significant improvements in product performance can be achieved within shorter times and with reduced development and manufacturing costs. The approach was illustrated by a process of reducing in-cab noise during the design of a new truck.

Keywords: Testing; CAE; FMEA; QFD; Noise; Vibration; Virtual

1. Introduction

In the past, testing has been seen as an essential part of product development. Only by testing could weaknesses in a design be discovered before the product got into the hands of customers. However, over the last 20 years the capacity for completing detailed analysis of many aspects of a design using computer aided engineering (CAE) tools has developed to the point where "virtual" product development can be considered a realistic proposition. Many car manufacturers, for example, are already using CAE tools to enable them to go straight to producing prototype vehicles from production tooling, eliminating one stage in the traditional product development process and improving the quality of design. So, what is the future for physical testing – is it still necessary or is physical testing becoming a thing of the past?

2. Role of testing with a new product development programme

Essentially, testing has just one objective: to ensure the product will deliver a profitable return to the manufacturer. It is a kind of insurance policy, and the more that is paid, the better the cover. Do no testing and there is a high risk that the product will fail in some way, leading to either excessive warranty costs, poor sales or even costly legal action. Complete an extensive test and development plan and the risks reduce.

A product must meet the legislative requirements, whilst meeting or exceeding the customers' expectations in terms of functionality, reliability and durability – and achieve all of these whilst maintaining a sufficient gap between what a customer is willing to pay and what the product costs to manufacture if it is to deliver profits to the organisation. Testing has traditionally played a major role in finding and resolving weaknesses in a design before it goes into production and through to the customers. However, testing is a cost to the organisation and the amount of testing that can be contemplated depends on the economic equation of the particular new product development programme. A massive test programme may be justified for a high volume car, based on the warranty costs associated with any failures and the number of units across which the costs of testing can be spread. For smaller volume programmes, such as are more typical in the off-road sector, the size of the test programme needs to be very carefully tailored to the economic realities. Whilst probabilistic approaches exist to help identify the appropriate level of testing [1], most organisations use a knowledge-based approach to decide on what is necessary, based on the scale of change and the projected sales volumes.

It is very helpful to think of testing in relation to risk in this way. Too often testing is seen as something that is bolted on at the end. In reality the best approach is to consider the test programme as an integral part of the process. Many companies are now doing this and using tools such as quality function deployment (QFD) [2] and failure modes and effects analysis (FMEA) [3] to focus attention on meeting or exceeding customer requirements whilst minimising risk, and hence test requirements.

3. Tools for risk reduction

If we consider that many "new" products are actually developments of existing products then we could define the requirement for the new product design as being to establish which systems within the existing product need:

(a) enhancement to satisfy or delight the customer;
(b) modification to reduce costs (either directly or in manufacture);
(c) modification to reduce warranty costs.

QFD can be used very effectively to identify the characteristics of the product that are perceived as important to the customers and relate these customer requirements to engineering quantities that can be identified. Fig. 1 gives an example of this matrix for the comfort of a driver's seat.

Fig. 1. QFD relationship matrix "house" for driver's seat comfort, showing the relationship between customer wants and engineering characteristics, together with details of the importance of what the customer wants and the engineering characteristics.
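
To make the use of such a matrix concrete, the short sketch below scores each engineering characteristic by summing importance-weighted relationship strengths on the common 9/3/1 QFD scale. All of the customer wants, importance ratings and relationship strengths are assumed values chosen only to echo the driver's seat example of Fig. 1; they are not taken from the actual study.

# Illustrative QFD scoring sketch (all data assumed, not from the study)
customer_wants = {                      # want: importance rating (1-5)
    "Supportive over long journeys": 5,
    "Easy to adjust": 3,
    "No squeaks or rattles": 4,
}
relationships = {                       # (want, engineering characteristic): strength (9/3/1)
    ("Supportive over long journeys", "Seat transmissibility"): 9,
    ("Supportive over long journeys", "Foam stiffness"): 3,
    ("Easy to adjust", "Adjustment effort (N)"): 9,
    ("No squeaks or rattles", "Rattle test rating"): 9,
    ("No squeaks or rattles", "Foam stiffness"): 1,
}

scores = {}
for (want, characteristic), strength in relationships.items():
    scores[characteristic] = scores.get(characteristic, 0) + customer_wants[want] * strength

for characteristic, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{characteristic:25s} priority score = {score}")

The characteristics with the highest weighted scores are the ones on which benchmarking and target setting effort is then concentrated.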

Once the attributes are understood and the method of measuring them established, it is then possible to measure the out-going product against the competition and identify which aspects require enhancement.

Opportunities for cost reduction can be highlighted by detailed cost analysis of the design and also the manufacturing process. The results of such analysis are the identification of components, systems or processes that require change.

Warranty cost information is normally very well known and often the root causes of any weaknesses in the current design will be known before the new product development process even begins. But again, this information needs to be fed into the process as it implies change to components, systems or processes.

Whilst looking at these demands for change, it is always important to try and quantify the benefits of modification in financial terms – to define an economic equation for the project [4]. This economic equation would include the effects of component costs, manufacturing costs, test and development costs and potential warranty costs in relation to the date of introduction into the market, expected sales volumes and selling price. This tool can be used throughout the programme, but used early it can quickly establish the priorities for developing the design, taking into account the things that will deliver best value in terms of perceived improvement (by the customer) and financial return.
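
A minimal sketch of what such an economic equation might look like is given below. The function, its parameters and every figure in it are hypothetical; the intent is only to show how test spend, expected warranty exposure and programme margin can be traded off within a single calculation.

# Hypothetical programme-level economic equation (all figures assumed)
def project_margin(volume, price, unit_cost, dev_cost, test_cost,
                   warranty_rate, warranty_cost_per_claim):
    """Total programme margin over the projected sales volume."""
    revenue = volume * price
    production = volume * unit_cost
    warranty = volume * warranty_rate * warranty_cost_per_claim
    return revenue - production - dev_cost - test_cost - warranty

# Option A: larger test programme, lower expected warranty claim rate
option_a = project_margin(20_000, 30_000, 24_000, 4.0e6, 2.0e6, 0.02, 1_500)
# Option B: minimal testing, higher expected warranty claim rate
option_b = project_margin(20_000, 30_000, 24_000, 4.0e6, 0.5e6, 0.08, 1_500)
print(f"Option A margin: {option_a:,.0f}   Option B margin: {option_b:,.0f}")

Run early in a programme, comparisons of this kind indicate whether additional test spend is likely to be repaid by the reduction in expected warranty cost.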



4. Physical testing versus virtual testing?

Over the last 20 years the capability of computer aided simulation has grown out of all recognition. Simulations that used to require mainframe computers can now be run on a desktop PC. Equally, more complex processes can now be simulated, expanding the range of attributes that can be investigated in the virtual domain. However, there are still limitations and whilst the completely digital product development process remains a valid goal, today's reality is that simulation can be more expensive, more time consuming and less reliable than physical testing in certain circumstances.

When designing any test it is essential to start out by understanding what output is required. Is it a validation that the fatigue limit of a material is never exceeded in a particular component? Or is a thorough understanding of the sensitivity of ride comfort to suspension set-up required? There are some simple rules for deciding when it is appropriate to use CAE and when it is more appropriate to use physical testing [5]. In essence, the use of CAE supports a systems approach and enables multi-attribute optimisation that is at best cumbersome in the physical test domain. Whenever a detailed understanding of a system is required, and especially when CAD data is available, then a CAE approach is likely to be more efficient and more effective.

However, some systems with which we are very familiar are extremely difficult to model because of the physics involved. For example, analysing the fatigue behaviour of a hot exhaust system is currently an extremely difficult thing to do in simulation (because of uncertainty about material properties, crack propagation at elevated temperatures, the effect of weld geometry and material property changes at elevated temperature), but it is relatively easy to fit an exhaust to an engine and run a test to find out if the exhaust cracks. Such systems are often better developed using traditional physical testing. Equally, if the service conditions are not quantified then it is very difficult to have confidence in the results of CAE simulation. In such cases the service conditions can be measured by test, but it may be more efficient to complete the whole exercise as a physical test and keep the data for future use with CAE models.

Whilst it is often presented as a straight choice between physical testing and CAE testing, the reality is that the edges can be rather blurred. Computer aided testing (CAT) is a new term that has begun to be used to describe activities in which data from physical tests is manipulated and enhanced within a computer to yield far greater value. A perfect example of this is experimental modal analysis, where a mathematical model is fitted to a set of measured transfer functions to yield a modal model of a structure. This model can then be used to present results in the form of animated displays showing how the structure vibrates at a particular frequency. Extending the application even further, the model can be modified in software to investigate the effects of adding mass or stiffness. So is such a test a physical test or a virtual test?
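
As a simple illustration of this kind of computer aided testing, the sketch below fits a single-degree-of-freedom modal model to a synthetic "measured" frequency response function and extracts a natural frequency and damping ratio. It is not the procedure used in the work described here: a real experimental modal analysis fits many modes to many measured transfer functions, but the principle of identifying model parameters from test data is the same. All numerical values are assumed for illustration.

import numpy as np
from scipy.optimize import curve_fit

def sdof_frf(w, m, c, k):
    # Receptance magnitude |X/F| of a mass-spring-damper at circular frequency w
    return 1.0 / np.abs(k - m * w**2 + 1j * c * w)

w = np.linspace(1.0, 600.0, 500)                      # frequency axis, rad/s
rng = np.random.default_rng(0)
h_meas = sdof_frf(w, 50.0, 200.0, 2.0e6)              # assumed "true" structure
h_meas = h_meas * (1.0 + 0.02 * rng.standard_normal(w.size))   # measurement noise

# Fit the modal parameters to the measured FRF
(m_fit, c_fit, k_fit), _ = curve_fit(sdof_frf, w, h_meas, p0=[40.0, 100.0, 1.0e6])
wn = np.sqrt(k_fit / m_fit)                           # identified natural frequency, rad/s
zeta = c_fit / (2.0 * np.sqrt(k_fit * m_fit))         # identified damping ratio
print(f"fitted natural frequency = {wn / (2 * np.pi):.1f} Hz, damping ratio = {zeta:.3f}")

Once identified, such a model can be exercised in software – for example by perturbing the fitted mass or stiffness – which is exactly the blurring of physical and virtual testing described above.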

Often the most effective use of technology is when the two disciplines are brought together [6]. CAE models are best at delivering an understanding of system behaviour, interactions and sensitivity, whilst physical tests are good at identifying absolute levels of performance and the response of complex systems. These characteristics lead quite naturally to a hybrid approach whereby high quality legacy data from previous tests on carry-over systems can be included in a system model in which new elements of the design are modelled in the CAE environment. These hybrid models can deliver high levels of accuracy within shortened model creation times and with increased confidence.

The form in which a hybrid approach is used depends very strongly on the availability of information. Is there a CAD geometry or existing FE model available? Is there legacy test data from a previous product development project? Does hardware exist that could be tested? Is a model required for analysis of some other attributes, and can the optimisation of all attributes be brought together?

5. Purposes of physical testing

In this new context of physical and virtual testing working together, physical testing is conducted for a variety of reasons, which can be classified as:

• Benchmarking and Target Setting – to determine the status of the current product in comparison with the competition and set targets for the new product. This can be completed at various levels, starting from a simple comparison of performance as perceived by the customer through to a complete strip down and component level characterisation. At one level this type of testing provides information about the areas in which improvements are required. At another level, information about how competitors' products achieve their levels of performance can be generated.
• Identification of inputs for CAE models – to establish the forces, displacements, accelerations, pressures, temperatures, etc., encountered either in real-world use or during existing physical tests. This data is used by the CAE teams to provide output from simulation that can be related directly to target levels of performance.
• Identification of transfer functions of output/input for complex systems – which can be used either to develop analytical models for subsequent use within a CAE model, or to provide a look-up table of values that can be called directly from the simulation.
• CAE model validation – to provide information that establishes not only that the model predicts responses to the required level of accuracy, but that the model achieves such a result by accurately simulating the processes that deliver the overall characteristic.
• Product validation – to confirm that the product meets the applicable legal standard of performance and also achieves the company's objectives in terms of functionality, reliability and durability.
• Problem solving – to provide information required to identify and validate a solution to any problems encountered during the validation phase or even once the product has entered service.

Fig. 2. Description of where physical testing activities support the product development cycle.

Each of these functions has a place in the development process, as described in Fig. 2. By applying the correct function at the right time in the process, the overall development cycle time can be reduced, as can the total test cost and the cost of modifications.

6. Example – development of the in-cab noise of a new truck

To illustrate the process, an example will be given that shows when and how physical and virtual techniques were used in conjunction with each other to deliver an extremely successful programme.

6.1. Background

A new small truck range was being developed to replace a product that was highly successful in the UK market. A major target for the new vehicle was to win a greater share of the European market whilst maintaining its UK market share. It was established that one area where improvement was required was the in-cab noise.

6.2. Use of risk reduction tools

The customer descriptions of noise were related to engineering measurements as shown in the QFD matrix of Fig. 3. This helped identify the need to measure subjective evaluation of normal driving using a standard rating system and overall noise level during constant speed driving, supplemented by noise measured during full load acceleration.

Engineering tests were completed on the out-going model and several competitor models to establish the scale of improvement required in relation to the competition. From this work it was also possible to identify a "target vehicle" for the noise attributes. Interestingly, it was found that there were two aspects to the requirement for a "quiet" vehicle. One was that low frequency components relating to engine firing needed to be controlled (but not eliminated), whilst mid-frequency noise also needed to be significantly improved. Getting the right balance between firing frequency related noise and general engine noise was seen as key to achieving a pleasing sound quality that gave the right aural cues to re-assure the driver.

Completing an FMEA, as shown in Fig. 4, on the causes of high in-cab noise enabled a test programme to be designed which would develop an understanding of the differences between the target vehicle and the out-going model in terms of noise transmission. Because the behaviour of the out-going vehicle had not previously been characterised in detail, the FMEA showed a high level of risk associated with almost all potential transmission paths. However, the contributions from each were relatively easy to measure experimentally, so it was possible to achieve a good level of understanding of where the real risks were to be found and what would need to be done to reduce that risk. One of the key findings was the need for a "mule" vehicle on which many of the at-risk systems could be proven before going to full prototype build. Introducing this mule vehicle into the programme reduced all the risk priority numbers calculated in the FMEA to below 100 (see Table 1).
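
The risk priority numbers referred to here are simply the product of the severity, occurrence and detectability ratings (RPN = SEV x OCC x DET). A minimal sketch of the calculation is shown below, using a few of the cause ratings from Table 1 and the threshold of 100 mentioned above; the data structure and the print-out are illustrative only.

# FMEA risk priority numbers: RPN = severity x occurrence x detectability
# (severity, occurrence, detectability) ratings taken from Table 1
failure_causes = [
    ("Inadequate isolation on cab floor", 7, 4, 1),
    ("Engine mounts too stiff",           7, 6, 3),
    ("Cab mounts too stiff",              7, 5, 3),
    ("Excessive frame response",          7, 7, 4),
    ("Excessive cab response",            7, 6, 4),
    ("High wind noise",                   7, 4, 3),
]

for cause, sev, occ, det in failure_causes:
    rpn = sev * occ * det
    flag = "above threshold" if rpn > 100 else "below threshold"
    print(f"{cause:35s} RPN = {rpn:3d}  ({flag})")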

Interestingly, at the start of the project there was a general belief that the target vehicle must simply have a quieter engine. By completing the detailed benchmarking exercise it was found that the target vehicle actually had a noisier engine, but that the chassis and cab design were sufficiently superior so that the handicap of the noisier engine was overcome. Analysing how the chassis and cab differed from the out-going production model gave significant insight into changes that could be introduced to help the new model achieve its global target of being "quiet".

Fig. 3. QFD relationship matrix "house" for in-cab noise.

Fig. 4. (a) Prototype vehicle on test in LTC's semi-anechoic chassis dynamometer and (b) noise levels (out-going vehicle, target vehicle and prototype of new vehicle).

Targets for the new vehicle were derived from the benchmarking exercise in terms of overall vehicle targets and system level targets for:

• engine radiated noise and vibration;
• chassis mount stiffnesses;
• chassis vibration transfer between the engine mounts and the cab mounts;
• cab mount stiffnesses;
• cab acoustic attenuation between the engine bay, intake orifice and exhaust orifice and the interior of the cab;
• cab noise transfer functions (noise response in the cab in response to a dynamic force applied at the cab mounts);
• exhaust orifice noise;
• intake orifice noise;
• in-cab noise level and quality.

Table 1
Simple FMEA table for in-cab noise

Potential failure mode | Potential failure effects (KPOVs) | SEV | CLASS | Potential causes of failure (KPIVs) | OCC | Current process controls | DET | RPN
Constant speed noise level: noise levels above target | Programme delay and overspend | 7 | | Inadequate isolation on cab floor | 4 | Laboratory test | 1 | 28
 | | 7 | | Holes or poor sealing of the cab | 4 | Laboratory test | 1 | 28
 | | 7 | | Engine mounts too stiff | 6 | Dynamic stiffness measurement | 3 | 126
 | | 7 | | Cab mounts too stiff | 5 | Dynamic stiffness measurement | 3 | 105
 | | 7 | | Excessive frame response | 7 | FE analysis | 4 | 196
 | | 7 | | Excessive cab response | 6 | Experimental modal analysis and noise transfer function measurement | 4 | 168
 | | 7 | | High wind noise | 4 | Validation test on mule vehicle | 3 | 84
Noise during normal driving: subjectively unacceptable | Programme delay and overspend | 7 | | Inadequate isolation on cab floor | 4 | Laboratory test | 1 | 28
 | | 7 | | Holes or poor sealing of the cab | 4 | Laboratory test | 1 | 28
 | | 7 | | Engine mounts too stiff | 6 | Dynamic stiffness measurement | 3 | 126
 | | 7 | | Cab mounts too stiff | 5 | Dynamic stiffness measurement | 3 | 105
 | | 7 | | Excessive frame response | 7 | FE analysis | 4 | 196
 | | 7 | | Excessive cab response | 6 | Experimental modal analysis and NTF measurement | 4 | 168
 | | 7 | | High wind noise | 4 | Validation test on mule vehicle | 3 | 84
 | | 7 | | Clunks due to clashes | 4 | ADAMS analysis of cab and engine movement | 5 | 140

Key: SEV – severity; CLASS – highlight if severity is rated 9 or 10 (health and safety risk); OCC – occurrence; DET – detectability; RPN – risk priority number.


These targets were then assessed in terms of the likely cost to achieve them and the conflicts with other design criteria, such as package, and with other attribute requirements, such as ride and handling.

6.3. Test and development programme

By the end of the FMEA, benchmarking and target setting exercises there was a clear set of requirements for design modifications. These requirements were fed into the design process and concepts developed to support the required modifications.

The main elements of the proposed modifications were:

• softer front engine mounts;
• softer front cab mounts;
• improved stability of the chassis;
• improved sealing of the cab floor.

Areas that were identified as being at risk due to changes required for other reasons included:

• engine noise emission;
• exhaust noise emission;
• cab structural response.

Considering these features, a combination of physical and virtual testing was planned to make best use of available carry-over hardware and CAE tools.

Analysis of the system was broken down into several subsets:

• engine mount system – rigid body dynamics and vibration transmission;
• chassis response – mode shapes and forced response;
• cab mount system – rigid body dynamics and vibration force transmission into the cab;
• cab response – acoustic attenuation and conversion of force inputs into noise within the cab.

The split between physical and virtual testing is identified in Table 2. As can be seen, the majority of investigative work was completed in the virtual world, taking advantage of the ease with which parameter variations could be studied.

Table 2
Breakdown of the division of activities between physical and virtual testing

 | Physical test | Virtual (CAE) test
Engine mount system – rigid body dynamics | Not required – sufficient confidence in CAE results | Investigated using ADAMS software
Engine mount system – vibration transmission | Confirmed through testing of dynamic stiffness of the components and full vehicle validation using a mule vehicle | Investigated using ADAMS software
Chassis response | Confirmed by experimental modal analysis and rig testing of a chassis system. Validated with full vehicle testing using a mule vehicle | Investigated using NASTRAN and using ADAMS full vehicle model with imported flexible chassis element and damper model identified from physical testing
Cab mount system – rigid body dynamics | | Investigated using ADAMS software
Cab mount system – vibration force transmission | Confirmed through testing of dynamic stiffness of the components and full vehicle validation using a mule vehicle | Investigated using ADAMS full vehicle model with imported flexible chassis element and damper model identified from physical testing
Cab response – acoustic attenuation | Investigated using a rig test of the cab system | Not required – easier and quicker to test physical parts that existed at the right time
Cab response – conversion of force into noise | Investigated using a rig test of the cab system, including modal analysis and measurement of noise transfer functions | Not required – easier and quicker to test physical parts that existed at the right time
Engine noise and vibration | Validated performance on prototype engines | Completed by supplier of the engine
Exhaust noise | Validated during full vehicle testing on a mule vehicle | Completed by supplier
Cab structural response | Validated by experimental modal analysis on a prototype cab | Investigated using NASTRAN to evaluate modal response and forced response using forcing functions measured from vehicle testing on a mule vehicle
In-cab noise level and quality | Investigated using legacy data and validated through measurement on mule vehicle | Low frequency – initial predictions based on ADAMS prediction of force at the cab mounts combined with measured noise transfer functions. Mid/high frequency – not completed due to insufficient confidence in available predictive techniques
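
The low-frequency hybrid prediction noted in the final row of Table 2 can be illustrated with the short sketch below: forces predicted at the cab mounts (for example, from the ADAMS full vehicle model) are combined with noise transfer functions measured on the cab rig to give an estimate of in-cab sound pressure at the firing-related frequencies. Every number, the four-mount layout and the incoherent summation are assumptions made purely for illustration, not values from the programme.

import numpy as np

freqs = np.array([30.0, 60.0, 90.0, 120.0])      # example firing-related frequencies, Hz

# Predicted dynamic force amplitude at each of four cab mounts, N (rows: mounts)
mount_forces = np.array([
    [12.0, 8.0, 3.0, 1.5],
    [11.0, 7.5, 2.8, 1.4],
    [ 9.0, 6.0, 2.5, 1.2],
    [ 8.5, 5.5, 2.2, 1.0],
])

# Measured noise transfer functions at the driver's ear, Pa/N (rows: mounts)
ntf = np.array([
    [0.010, 0.020, 0.015, 0.012],
    [0.011, 0.018, 0.014, 0.010],
    [0.008, 0.015, 0.012, 0.009],
    [0.007, 0.014, 0.011, 0.008],
])

# Incoherent (energy) summation of the mount contributions at each frequency
p = np.sqrt(np.sum((mount_forces * ntf) ** 2, axis=0))
spl_db = 20.0 * np.log10(p / 2.0e-5)             # sound pressure level re 20 uPa
for f, spl in zip(freqs, spl_db):
    print(f"{f:5.0f} Hz : predicted in-cab SPL = {spl:5.1f} dB")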


This environment also provided the ideal opportunity to run evaluation of noise and vibration in parallel with evaluation of ride and handling performance, allowing trade-offs and opportunities to be established very early in the programme. Physical testing was used primarily to confirm the findings of CAE work.

An important element of the testing was the use of a mule vehicle to validate much of the work completed using CAE tools. Such a mule vehicle was relatively easy to create by re-using elements of the out-going vehicle and introducing prototypes of modified systems that were critical to noise and vibration. This was initially dictated by the fact that several CAE technologies were being used together for the first time on this programme, with an associated risk. However, the availability of the mule vehicle meant that much simpler models could be used with the aim of establishing mechanisms and direction, whilst relying on the physical results to provide the required evidence that absolute target levels of performance were being achieved. This helped to reduce the time needed to complete the CAE work and allowed results to be generated in time to influence the design direction very early in the programme.

6.4. Results

There were several important results of this programme.

• The risk of failure to meet the noise targets had been moved from very high to very low by the time manufacture of the prototype vehicles was started.
• When the first vehicles emerged from the prototype shop they all met the targets for in-cab noise with no further development required.
• The targets for noise level reduction of some 5 dB and improved noise quality were met with fewer noise and vibration specific components than the out-going model, as shown in Fig. 4.
• Sufficient understanding of the vehicle was gained that further mid-life improvement of noise could be achieved with confidence at very low cost.
• The overall spend on noise development was significantly lower than that of the out-going model's development.
• A process had been established and proven that could be used and developed on future programmes, with the scale of activity matched to the scale of the programme.

7. Conclusions

New technologies are increasing the effectiveness of CAE and, as organisations gain experience with these tools, the role and requirements for physical testing are changing. In simple terms, physical testing now has a young and aggressive competitor that is providing new opportunities to develop and validate designs without the encumbrance of physical hardware.

However, to view the issue as a straightforward competition between "virtual" and "physical" is to miss an enormous opportunity. Each approach should be deployed on merit and integrated to deliver the most effective product development process. Often it is reasonably easy to create a model that gives information about how a system responds to its inputs and boundary conditions, but much more difficult to achieve a faithful reproduction of every nuance of the physical system. Equally, the results from a model will only ever be as good as the assumptions made regarding load cases and boundary conditions. On the other hand, obtaining detailed information about why a system responds the way it does, or how sensitive it is to changes in design, can be very time consuming and expensive to achieve through physical testing.

The best product development processes take into account the capabilities of both physical and virtual approaches and use both together to generate better information more quickly and with least cost. Typically, physical testing may be used to:

• establish inputs to a model;
• provide information about the response of parts of a system that are easy to test but would be extremely difficult to model accurately, enabling "black box" modelling of such parts;
• confirm performance where there is a subjective element to assessment of the characteristic and the human response cannot easily be simulated;
• validate the CAE predictions.

Is physical testing becoming a thing of the past? No – but the role of testing is changing away from being the primary means of identifying and resolving problems with a design. Now testing is very much concerned with supporting CAE by providing those elements that are difficult to simulate and resolving problems that slip through the product development process undetected.

The questions that should always be asked before creating a CAE model are:

• Is there sufficient knowledge from previous designs to decide there is no need to test the design either in the virtual or physical domains?
• Is it quicker and cheaper to create a virtual model of this system than to build a physical prototype, and is there confidence that the results will be sufficiently accurate to support the decisions that will be based on them?
• Are the real-world operating conditions understood and can they be replicated in the model?
• Could a better model be developed if data from some limited physical testing were incorporated?
• Does the model need to be validated and, if so, how will this be done?

The questions that should be asked before commencing physical testing are:

• Is this test necessary, or does a combination of experience and CAE prediction give sufficient confidence?
• Is the test intended simply to record the output response of a system to a given set of input conditions, or is it necessary to gain an understanding of how the system delivers that response?
• Is it better to use a simple CAE model to gain understanding of the system and use the physical test simply as a validation?
• Does the physical test correspond to both the real-world use situation and the CAE model conditions?

Significant improvements in product performance can be achieved with reduced development time and costs, together with reduced unit manufacturing costs, by effectively combining the best of physical testing with the best of CAE simulation technologies.

Acknowledgements

The author thanks Leyland Trucks and the ADAU of ISVR for their work and support in relation to the example given in this paper.

References

[1] Seksaria D, Baker J. A probabilistic approach to evaluating financial risk and determining testing requirements for low volume new products. SAE Paper 861280; 1986.

[2] Day RG. Quality function deployment: linking a company with its customers. American Society for Quality; 1993.

[3] Stamatis DH. Failure mode and effect analysis: FMEA from theory to execution. American Society for Quality; 1995.

[4] Reinertsen DG. Managing the design factory – a product developer's toolkit. New York, NY: The Free Press; 1997.

[5] Campbell RM. Analysis – when and when not. SAE Paper 982011; 1998.

[6] Vandeurzan U. Empowering a real breakthrough in functional performance engineering. In: LMS conference for physical and virtual prototyping, Paris; 2001.

