

How to conduct a plant performance test

Performance testing after initial start-up has value well beyond the short-term goal of validating equipment guarantees - it's your only opportunity to establish the baseline performance of the overall plant and many of its major systems. Corporate bean counters may be interested in short-term results, but a good plant engineer understands that a thorough performance test will be useful for many years. Here's your guide to each facet of a performance test - plus pitfalls to avoid.

By Tina L. Toburen, PE, and Larry Jones, McHale & Associates Inc.

Completing a power plant's start-up and commissioning usually means pushing the prime contractor to wrap up the remaining punch list items and getting the new operators trained. Staffers are tired of the long hours they've put in and are looking forward to settling into a work routine.

Just when the job site is beginning to look like an operating plant, a group of engineers arrives with laptops in hand, commandeers the only spare desk in the control room, and begins to unpack boxes of precision instruments. In a fit of controlled confusion, the engineers install the instruments, find primary flow elements, and make the required connections. Wires are dragged back to the control room and terminated at a row of neatly arranged laptops. When the test begins, the test engineers stare at their monitors as if they were watching the Super Bowl and trade comments in some sort of techno-geek language. The plant performance test has begun (Figure 1).

1. Trading spaces. This is a typical setup of data acquisition computers used during a plant performance test. Courtesy: McHale & Associates

Anatomy of a test

The type and extent of plant performance testing activities are typically driven by the project specifications or the turnkey contract. They also usually are linked to a key progress payment milestone, although the value of the tests goes well beyond legalese. The typical test is designed to verify power and heat rate guarantees that are pegged to an agreed-upon set of operating conditions. Sounds simple, right? But the behind-the-scenes work to prepare for a test on which perhaps millions of dollars are at stake beyond the contract guarantees almost certainly exceeds your expectations (see box).

Long before arriving on site, the test team will have:

• Gathered site information.
• Reviewed the plant design for the adequacy and proper placement of test taps and for the type and location of primary flow elements.
• Developed plant mathematical models and test procedures.
• Met with the plant owner, contractor, and representatives of major original equipment manufacturers (OEMs) to iron out the myriad details not covered by contract specifications. Experienced owners will have made sure that the plant operations staff is included in these meetings.

Tests are normally conducted at full-load operation for a predetermined period of time. The test team collects the necessary data and runs them through the facility correction model to obtain preliminary results. Usually within a day, a preliminary test report or letter is generated to allow the owner to declare "substantial completion" and commence commercial operation. The results for fuel sample analysis (and/or ash samples) are usually available within a couple of weeks, allowing the final customer report to be finished and submitted.

The art and science of performance testing require very specialized expertise and experience that take years to develop. The sci-

Performance test economics are overpowering

Consider a 500-MW facility with a heat rate of 7,000 Btu/kWh. When operating at baseload with an 80% capacity factor, the plant will consume over 24 million mmBtu per year. At a fuel cost of $8/mmBtu, that's nearly $200 million in fuel costs for the year.

If an instrumentation or control error raises the heat rate of the facility by 0.5%, that would cost the plant an additional $1 million each year. If, on the other hand, a misreported heat rate causes the facility to be dispatched 0.5% less often, reducing the capacity factor to 79.5%, the losses in revenue at $50/MWh would amount to nearly $1.1 million for the year.

Performance tests can bring the right people together at the facility to identify losses in performance and to recapture or prevent such losses in facility profits.
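
The sidebar's arithmetic is easy to reproduce. A back-of-the-envelope sketch using the sidebar's illustrative figures (they are example assumptions, not data from any particular plant):

```python
# Back-of-the-envelope economics from the sidebar (illustrative assumptions).
HOURS_PER_YEAR = 8760

capacity_mw = 500.0
heat_rate_btu_kwh = 7000.0
capacity_factor = 0.80
fuel_cost_per_mmbtu = 8.0
energy_price_per_mwh = 50.0

# Annual generation and fuel burn.
gen_mwh = capacity_mw * capacity_factor * HOURS_PER_YEAR       # MWh/yr
fuel_mmbtu = gen_mwh * 1000 * heat_rate_btu_kwh / 1e6          # mmBtu/yr
fuel_cost = fuel_mmbtu * fuel_cost_per_mmbtu                   # $/yr

# A 0.5% heat-rate error burns 0.5% more fuel for the same output.
extra_fuel_cost = fuel_cost * 0.005

# A half-point drop in capacity factor (80% -> 79.5%) forgoes revenue.
lost_mwh = capacity_mw * 0.005 * HOURS_PER_YEAR
lost_revenue = lost_mwh * energy_price_per_mwh

print(f"Fuel burn:          {fuel_mmbtu / 1e6:.1f} million mmBtu/yr")
print(f"Fuel cost:          ${fuel_cost / 1e6:.0f} million/yr")
print(f"0.5% heat-rate hit: ${extra_fuel_cost / 1e6:.2f} million/yr")
print(f"0.5-pt CF loss:     ${lost_revenue / 1e6:.2f} million/yr")
```

Running the numbers shows why a fraction of a percent matters: roughly $1 million a year for each half-percent slip, on either the fuel side or the dispatch side.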


PERFORMANCE TESTING

ence of crunching data is defined by industry standards, but the art rests in the ability to spot data inconsistencies, subtle instrument errors, skewed control systems, and operational miscues. The experienced tester can also quickly determine how the plant must be configured for the tests and can answer questions such as: Will the steam turbine be in pressure control or at valves wide open in sliding-pressure mode? What control loops will need to be in manual or automatic during testing? At what level should the boiler or duct burners be fired?

For the novice, it's easy to miss a 0.3% error in one area and an offsetting 0.4% error in another area that together yield a poor result if they aren't resolved and accounted for. With millions of dollars on the line, the results have to be rock solid.

Mid-term exams

There are many reasons to evaluate the performance of a plant beyond meeting contract guarantees. For example, a performance test might be conducted on an old plant to verify its output and heat rate prior to an acquisition

"

Kick your building schedule ...

~taullC -.--It's a fact: Although piping systems account for

as litt le as five percent of total installed costs on

a project. installation can eat up more than 30

percent of all fie ld man hours. In fact. any misstep

in the process- whether labor shortages , hot work

permit delays. or lengthy weld times-can lead 10

making or breaking your projec t schedule. For over

8Oyears. the Victaulic grooved pipe joining method

has saved lime and money on industrial projects

across the globe . It is effiCient. proven technology

fO( uti lity and fire protecllon services that's faster,

safer and in teday's world of fast-track building

schedules. built for speed.

ClfICU l3 ON REAOtR SERVICE CARD

to conclusively determine its asset value. Other performance tests might verify capacity and heat rate for the purpose of maintaining a power purchase agreement, bidding a plant properly into a wholesale market, or confirming the performance changes produced by major maintenance or component upgrades.

Performance tests are also an integral part of a quality performance monitoring program. If conducted consistently, periodic performance tests can quantify nonrecoverable degradation and gauge the success of a facility's maintenance programs. Performance tests also can be run on individual plant components to inform maintenance planning. If a component is performing better than expected, the interval between maintenance activities can be extended. If the opposite is the case, additional inspection or repair items may be added to the next outage checklist.

Whatever the reason for a test, its conduct should be defined by industry-standard specifications such as the Performance Test Codes (PTCs) published by the American Society of Mechanical Engineers (ASME), whose web site - www.asme.org - has a complete list of available codes. Following the PTCs allows you to confidently compare today's and tomorrow's results for the same plant or equipment. Here, repeatability is the name of the game.

The PTCs don't anticipate how to test every plant configuration but, rather, set general guidelines. As a result, some interpretation of the codes' intent is always necessary. In fact, the PTCs anticipate variations in test conditions and reporting requirements in a code-compliant test. The test leader must thoroughly understand the codes and the implications of how they are applied to the plant in question. Variances must be documented, and any test anomalies must either be identified and corrected before starting the test or be accounted for in the final test report.

A performance test involves much more than just taking data and writing a report. More time is spent in planning and in post-test evaluations of the data than on the actual test. Following is a brief synopsis describing the process of developing and implementing a typical performance test. Obviously, the details of a particular plant and the requirements of its owner should be taken into account when developing a specific test agenda.

Planning for the test

The ASME PTCs are often referenced in equipment purchase and/or engineering, procurement, and construction (EPC) contracts to provide a standard means of determining compliance with performance guarantees.

POWER | September 2011


The ASME codes are developed by balanced committees of users, manufacturers, independent testing agencies, and other parties interested in following best engineering practices. They include instructions for designing and executing performance tests at both the overall plant level and the component level.

Planning a performance test begins with defining its objective(s): the validation of contractual guarantees for a new plant and/or the acquisition of baseline data for a new or old plant. As mentioned, part of planning is making sure that the plant is designed so it can be tested. Design requirements include defining the physical boundaries for the test, making sure that test ports and permanent instrumentation locations are available and accessible, and ensuring that flow metering meets PTC requirements (if applicable).

After the design of the plant is fixed, the objectives of testing must be defined and documented along with a plan for conducting the test and analyzing its results. A well-written plan will include provisions for both expected and unexpected test conditions.

Understanding guarantees and corrections

The most common performance guarantees are the power output and heat rate that the OEM or contractor agrees to deliver. Determining whether contractual obligations have been met can be tricky. For example, a plant may be guaranteed to have a capacity of 460 MW at a heat rate of 6,900 Btu/kWh - but only under a fixed set of ambient operating conditions (reference conditions). Typical reference conditions may be a humid summer day with a barometric pressure of 14.64 psia, an ambient temperature of 78F, and relative humidity of 80%.

The intent of testing is to confirm whether the plant performs as advertised under those specific conditions. But how do you verify that a plant has met its guarantees when the test must be done on a dry winter day, with a temperature of 50F and 20% relative humidity? The challenging part of performance testing is correcting the results for differences in atmospheric conditions. OEMs and contractors typically provide ambient correction factors as a set of correction curves or formulas for their individual components. But it is often up to the performance test engineers to integrate the component information into the overall performance correction curves for the facility.
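
In practice, those curves are often reduced to multiplicative correction factors applied to the measured result. A minimal sketch of the idea - the curve shapes and factor values below are invented for illustration and are not any OEM's actual corrections:

```python
# Hypothetical ambient-correction sketch. Real OEM correction curves are
# machine-specific; the example points below are invented for illustration.

def interp(x, points):
    """Piecewise-linear interpolation through sorted (x, y) pairs."""
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if x0 <= x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    raise ValueError("condition outside curve range")

# Factor = expected output at test conditions / expected output at the
# reference conditions (78F, 14.64 psia), so dividing the measurement by
# the factor removes the ambient advantage (or penalty) of test day.
temp_curve = [(20.0, 1.08), (50.0, 1.04), (78.0, 1.00), (100.0, 0.95)]  # deg F
pres_curve = [(14.00, 0.96), (14.64, 1.00), (14.90, 1.02)]              # psia

def corrected_output_mw(measured_mw, temp_f, pres_psia):
    return measured_mw / (interp(temp_f, temp_curve)
                          * interp(pres_psia, pres_curve))

# A cool winter test day produces more MW than the guarantee conditions
# would; the correction brings the measurement back to the reference basis.
print(round(corrected_output_mw(470.0, 50.0, 14.64), 1))
```

At the reference conditions both factors are 1.0 and the measurement passes through unchanged, which is a useful sanity check on any correction model.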

The reference conditions for performance guarantees are unique to every site. A simple-cycle gas turbine's ratings assume its operation under International Standardization Organization (ISO) conditions: 14.696 psia, 59F, and relative humidity of 60%. The condition of the inlet air has the biggest impact on gas turbine-based plants because the mass flow of air through the turbines (and consequently the power they can produce) is a function of pressure, temperature, and humidity. Performance guarantees for steam plants also depend on air mass flow, but to a lesser extent.

The barometric pressure reference condition is normally set to the average barometric pressure of the site. If a gas turbine plant is sited at sea level, its barometric pressure reference is 14.696 psia. For the same plant at an altitude of 5,000 feet, the reference would be 12.231 psia, and its guaranteed output would be much lower.

The relative humidity reference condition may or may not have a significant bearing on plant performance. In gas turbine plants the effect is not large (unless the inlet air is conditioned), but it still must be accounted for. The effect of humidity, however, is more pronounced on cooling towers. Very humid ambient air reduces the rate at which evapo-




ration takes place in the tower, lowering its cooling capacity. Downstream effects are an increase in steam turbine back pressure and a reduction in the turbine-generator's gross capacity.

The most important correction for gas turbine plant performance tests involves compressor inlet air temperature. Although a site's barometric pressure typically varies by no more than 10% over a year, its temperatures may range from 20F to 100F over the period. Because air temperature has a direct effect on air density, temperature variation changes a unit's available power output. For a typical heavy-duty frame gas turbine, a 3-degree change in temperature can affect its capacity by 1%. A temperature swing of 30 degrees could raise or lower power output by as much as 10%. The effect can be even more pronounced in aeroderivative engines.
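
That rule of thumb translates directly into expected megawatts. A quick sketch - the 1% per 3 degrees figure is the article's generic frame-machine approximation, not a specific machine's curve, and the 460-MW rating is the article's earlier example:

```python
# Rule of thumb from the text: roughly 1% output change per 3 degrees F of
# compressor inlet temperature change for a heavy-duty frame gas turbine.
PCT_PER_DEG_F = 1.0 / 3.0

def est_output_mw(rated_mw, delta_temp_f):
    """Estimate output after the inlet warms by delta_temp_f degrees F
    (negative values mean cooler air and therefore more output)."""
    return rated_mw * (1.0 - PCT_PER_DEG_F * delta_temp_f / 100.0)

# A 30-degree warm-up on a 460-MW plant costs roughly 10%, about 46 MW.
print(round(est_output_mw(460.0, 30.0), 1))
```

A linear approximation like this is only good for rough planning; the actual OEM curves are nonlinear and machine-specific.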

ISO-standard operating conditions or site-specific reference conditions are almost impossible to achieve during an actual test. Accordingly, plant contractors and owners often agree on a base operating condition that is more in line with normal site atmospheric conditions. For example, a gas turbine plant built in Florida might be tested at reference conditions of 14.6 psia, 78F, and 80% relative humidity. Establishing a realistic set of reference conditions increases the odds that conditions during a performance test will be close to the reference conditions. Realistic reference conditions also help ensure that the guarantee is representative of expected site output.

Establishing site-specific reference conditions also reduces the magnitude of corrections to measurements. When only small corrections are needed to relate measured performance from the actual test conditions to the reference conditions, the correction methods themselves become less prone to question, raising everyone's comfort level with the quality of the performance test results.

Beyond site ambient conditions, the PTCs define numerous other correction factors that the test designer must consider. Most are site-specific and include:

• Generator power factor.
• Compressor inlet pressure (after losses across the filter house).
• Turbine exhaust pressure (due to the presence of a selective catalytic reduction system or heat-recovery steam generator).
• Degradation/fired hours, recoverable and unrecoverable.
• Process steam flow (export and return).
• Blowdown (normally isolated during testing).
• Cooling water temperature (if using once-through cooling, or if the cooling tower is outside the test boundary).
• Condenser pressure (if the cooling water cycle is beyond the test boundary).
• Abnormal auxiliary loads (such as heat tracing or construction loads).
• Fuel supply conditions, including temperature and/or composition.

Choose the right instrumentation

Instrumentation used to record test measurements should be selected based on a pre-test uncertainty analysis (see "Understanding test uncertainty"). This analysis is important to fine-tune the instrumentation to ensure that the quality of the test meets expectations. The test instruments themselves are usually a combination of temporary units installed specifically for testing, permanently installed plant instrumentation, and utility instrumentation (billing or revenue metering). Temporary instruments are typically installed to make key measurements that have a significant impact on results and where higher accuracy is needed to reduce the uncertainty of test results. Among the advantages of using a piece of temporary instrumentation is




that it has been calibrated specifically for the performance test in question following National Institute of Standards and Technology (NIST) procedures.

Another benefit of installing temporary instrumentation is to verify the readings of permanent plant instruments. Plant instrumentation typically lacks NIST-traceable calibration or has been calibrated by technicians who are more concerned with operability than with accurate performance testing. There's a good reason for the former: Performing a code-level calibration on plant instrumentation can be more expensive than installing temporary test instrumentation. An additional benefit of a complete temporary test instrumentation setup is that the instrumentation, signal conditioning equipment, and data acquisition system are often calibrated as a complete loop, as is recommended in PTC-46 (Overall Plant Performance).

All performance instruments should be installed correctly, and any digital readings should be routed to a central location. Choosing a good performance data center is very important. A performance command center should be out of the way of site operations yet close enough to observe plant instrumentation input and operation.

Obviously, performance instrument readings should be checked against those of plant instruments, where available. This is one of the most important checks that can be made prior to a performance test. When a performance tester can get the same result from two different instruments that were

Understanding test uncertainty

Uncertainty is a measure of the quality of the test or calculation result. A pretest uncertainty analysis can be used to design a test to meet predefined uncertainty limits. A post-test uncertainty analysis should be performed to verify that those uncertainty limits were met and to determine the impact of any random scatter recorded in the test data.

Each input to the calculation must be analyzed for its impact on the final result. This impact is identified as the sensitivity of the result to that input. For example, if inlet air temperature changes by 3 degrees F, and the corrected output changes by 1%, the sensitivity is 1% per 3 degrees F, or 0.33%/degree F.

The instrumentation information is used to identify the systematic error potential for each input. For example, a precision 4-wire resistance-temperature detector can measure inlet air temperature with an accuracy of +/-0.18F, based on information provided by the manufacturer and as confirmed during periodic calibrations.

During a test run, multiple recordings are made for any given parameter, and there will be scatter in the data. The amount of scatter in the data is an indication of the random error potential for each input. For example, during a 1-hour test run, the inlet air temperature may be recorded as an average of 75F, with a standard deviation in the measurements of 0.6F.
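
Those statistics fall straight out of the recorded samples. A sketch with simulated readings - the data are made up to have roughly the example's mean and scatter, and the standard deviation of the mean shown at the end is the usual measure of the random uncertainty of an averaged reading:

```python
import math
import random
import statistics

# Simulated 1-minute inlet-air temperature readings for a 60-minute run.
# Made-up data: mean near 75F with roughly 0.6F of scatter, as in the example.
random.seed(42)
readings = [random.gauss(75.0, 0.6) for _ in range(60)]

mean_f = statistics.mean(readings)
stdev_f = statistics.stdev(readings)           # sample standard deviation
sdom_f = stdev_f / math.sqrt(len(readings))    # standard deviation of the mean

print(f"mean = {mean_f:.2f}F, stdev = {stdev_f:.2f}F, "
      f"random uncertainty of the mean = {sdom_f:.3f}F")
```

Note how averaging 60 readings shrinks the random uncertainty of the mean well below the scatter of any single reading.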

If more than one sensor is used to measure a parameter, there also will be variances between sensors based on location. These variances may be due to the variances either in the instrumentation or in the actual parameter measured. For example, if air temperature is being measured by an array of sensors, there may be effects due to ground warming or exhaust vents in the area, either of which would affect the uncertainty of the bulk average measurement. These variances will affect the average and standard deviation values for that parameter. Spatial variances are added into the systematic error potential, based on the deviation of each location from the average value for all locations.

Now that we've defined the three separate inputs to the uncertainty determination - sensitivity (A), systematic error potential/uncertainty (B), and random error potential/uncertainty (C) - it's time to put on our statistician's hats. The terms can be combined in the following equation:

Uncertainty = SQRT[(A x B)^2 + (t x A x C)^2]


The "t" value on the right side of the equation is known as the Student-t factor and is based on the number of degrees of freedom (or number of data points recorded) in the data set. For a 95% confidence interval and data taken at 1-minute intervals for a 60-minute test run, the value of "t" is 2.0. If data are taken less frequently (such as at 2-minute intervals), fewer recordings are made, and therefore either the test run must be longer (which is not recommended, because ambient conditions may change) or the value of "t" will increase.

The example given above is for a single parameter, such as inlet air temperature, and its effect on corrected output. For each correction made, the same process must be carried out to determine the sensitivity, systematic uncertainty, and random uncertainty of the corrected result on that correction parameter (such as barometric pressure and relative humidity).

Once each individual uncertainty has been identified, they can be combined to determine the overall uncertainty of the corrected result. Combining the individual uncertainties is a three-step process:

• Determine the total systematic uncertainty as the square root of the sum of the squares for all the individual systematic uncertainties.
• Determine the total random uncertainty as the square root of the sum of the squares for all the individual random uncertainties.
• Combine the total systematic uncertainty and total random uncertainty as follows: Total uncertainty = SQRT[(systematic_total)^2 + (t x random_total)^2].
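
The sidebar's recipe can be written out directly. A sketch - the numbers plugged in are the sidebar's example values, with t = 2.0 as stated for a 60-point, 95% confidence run:

```python
import math

T_FACTOR = 2.0  # Student-t: 95% confidence, 1-minute data, 60-minute run

def parameter_uncertainty(a, b, c, t=T_FACTOR):
    """Uncertainty = SQRT[(A x B)^2 + (t x A x C)^2] for one input:
    a = sensitivity, b = systematic uncertainty, c = random uncertainty."""
    return math.sqrt((a * b) ** 2 + (t * a * c) ** 2)

def total_uncertainty(systematics, randoms, t=T_FACTOR):
    """Three-step combination: root-sum-square the systematic terms, then
    the random terms, then SQRT[sys_total^2 + (t x rand_total)^2]."""
    sys_total = math.sqrt(sum(b ** 2 for b in systematics))
    rand_total = math.sqrt(sum(c ** 2 for c in randoms))
    return math.sqrt(sys_total ** 2 + (t * rand_total) ** 2)

# Sidebar example: inlet air temperature with sensitivity 0.33%/deg F,
# systematic +/-0.18F, and 0.6F of random scatter.
u = parameter_uncertainty(0.33, 0.18, 0.6)
print(f"corrected-output uncertainty from inlet temperature alone: {u:.2f}%")
```

The same `total_uncertainty` call then rolls the per-parameter results for temperature, pressure, humidity, and so on into the overall figure for the corrected result.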

The result of the analysis is an expression stated in terms of the uncertainty calculated for an individual instrument or the overall system. We might normally say, "The inlet air temperature is 75F," but when including an uncertainty analysis of a temperature measurement system, a more accurate statement would be, "We are 95% certain that the inlet air temperature is between 74.6F and 75.4F."

Once again, the value for "t" will depend on the design of the test, including the number of multiple sensors and the frequency of data recordings. Additional information on the Student-t factor as well as a discussion of how to determine uncertainty can be found in ASME PTC 19.1 (Test Uncertainty).




installed to independent test ports and calibrated separately, there's a good chance the measurement is accurate. If there's a difference between the readings that is close to or exceeds instrument error, something is likely to be amiss.

Typically, when plant guarantees are tied to corrected output and heat rate, the two most important instrument readings are measured power and fuel flow. If either is wrong, the test results will be wrong. For example, say you're testing a unit whose expected output is 460 MW. The plant instrument is accurate to within 1%, and the test instrument is even more accurate: +/-0.3%. In this case, the tester prefers to see the two readings well within 1% of each other (4.6 MW), but they still may be as far apart as 5.98 MW (1.3%) and technically be within the instruments' uncertainty.
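
That cross-check is easy to script. A sketch - the 1% and 0.3% accuracies and the 460-MW unit are the article's example, and the worst-case allowance below simply sums each meter's stated error band:

```python
def within_combined_error(plant_mw, test_mw, plant_acc=0.01, test_acc=0.003):
    """True if the two power readings agree within the sum of the meters'
    stated error bands (e.g., 1% + 0.3% of 460 MW = 5.98 MW in the article)."""
    allowance = plant_acc * plant_mw + test_acc * test_mw
    return abs(plant_mw - test_mw) <= allowance

print(within_combined_error(460.0, 458.0))  # agree comfortably: True
print(within_combined_error(460.0, 452.0))  # ~8 MW apart, beyond the band: False
```

A disagreement larger than the allowance doesn't say which meter is wrong, only that one of them needs investigating before the test proceeds.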

When setting up for a performance test, it is not uncommon to find errors in permanent plant instrumentation, control logic, or equipment installation. These errors can influence the operation of a generating unit, for example by causing over- or underfiring of a boiler or gas turbine and significantly impacting the unit's output and heat rate. In cases where the impact on actual operation continues undetected, the corrected test report values may still be in error due to corrections made based on faulty instrument readings. If these reported values are used as the basis of facility dispatch, a small error could have an enormous impact on the plant's bottom line, ranging from erroneous fuel nominations to the inability to meet a capacity commitment.

Conduct the test

The performance test should always be conducted in accordance with its approved procedure. Any deviations should be discussed and documented to make sure their impact is understood by all parties. If the test is conducted periodically, it is important to know what deviations were allowed in previous tests to understand if any changes in performance might have been due to equipment changes or simply to the setup of the test itself.

Calibrated temporary instrumentation should be installed in the predetermined locations, and calibration records for any plant or utility instrumentation should be reviewed. Check any data collection systems for proper resolution and frequency, and do preliminary test runs to verify that all systems are operating properly.

The performance test should be preceded by a walk-down of the plant to verify that all systems are configured and operating correctly. It's important to verify that plant operations are in compliance with the test procedure because equipment disposition, operating limits, and load stability affect the results. Data can then be collected for the time periods defined in the test procedure and checked for compliance with all test stability criteria. Once data have been collected and the test has been deemed complete, the results can be shared with all interested parties.

Because the short preliminary test may be the most important part of the process, be sure to allow sufficient time for it in the test plan. The preliminary test must be done during steady-state conditions following load stabilization or when the unit is operating at steady state during the emissions testing program. The preliminary test has three objectives: to verify all data systems, to make sure manual data takers are reading the correct instruments and meters, and to have the data pass a "sanity check."

After the test data have been collected, the readings should be entered into the correction model as soon as possible and checked against the test stability criteria (as defined by the test procedure). At this point, depending on the correction methods, the test director may be able to make a preliminary analysis of the results. If the numbers are way out of whack with expected values, a good director will start looking for explanations: possibly errors in the recorded data, or something in the operational setup of the unit itself. Though everyone is concerned when a unit underperforms, a unit that performs unexpectedly well may have problems that have been overlooked. For example, a unit whose corrected test results indicate a 5% capacity margin may need to have its metering checked and rectified, or it may have been mistuned and left in an overfired condition.

Although an overtuned gas turbine may produce more megawatt-hours during initial operations, the gain comes with a price: increased degradation of the unit's hot section, shortening parts life and increasing maintenance costs. The most common mistake in testing is acceptance of results that are too good. If results are bad, everyone looks for the problem. If the results are above par, everyone is happy, especially the plant owner, who seems to have gotten a "super" machine. However, there's a reason for every excursion beyond expected performance limits, for better or worse.
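The "too good" trap lends itself to a mechanical screen: flag any corrected result that deviates from the guarantee in either direction by more than the test uncertainty, not just shortfalls. The guarantee, result, and 1% uncertainty below are hypothetical:

```python
# Illustrative screen: results beyond test uncertainty in EITHER
# direction warrant investigation, including suspiciously good ones.

def sanity_flag(corrected, guarantee, uncertainty_frac):
    """Return a note when the result deviates beyond test uncertainty."""
    dev = (corrected - guarantee) / guarantee
    if abs(dev) <= uncertainty_frac:
        return "within uncertainty"
    return "investigate: %+.1f%% vs guarantee" % (dev * 100)

print(sanity_flag(499.0, 500.0, 0.01))  # small shortfall: acceptable
print(sanity_flag(525.0, 500.0, 0.01))  # 5% over guarantee: check anyway
```

Treating over-performance as a trigger for metering and tuning checks, rather than a pleasant surprise, is the point of the screen.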

If all the pretest checks are done properly, the actual performance test should be uneventful and downright boring. It should be as simple as verifying that test parameters (load, stability, etc.) are being met. This is where the really good performance testers make their work look easy. They appear to have nothing to do during the test, and that's true because they planned it that way. Having done all the "real" work beforehand, they can now focus on making sure that nothing changes during the test that may affect the stability of the data.

Analyze the results
Almost immediately after the performance test (and sometimes even before it is complete), someone is sure to ask, "Do you have the results yet?" Everyone wants to know if the unit passed. As soon as practical, the performance group should produce a preliminary report describing the test and detailing the results. Data should be reduced to test run averages and scrutinized for any spurious outliers. Redundant instrumentation should be compared, and instrumentation should be verified or calibrated after the test in accordance with the requirements of the procedure and applicable test codes.
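The data-reduction step can be sketched as below. The modified z-score (median/MAD) outlier test is a stand-in chosen here because it copes with short runs; actual procedures apply the outlier criteria of the governing ASME test code, and the fuel-flow numbers are invented:

```python
# Minimal sketch of data reduction: average each run after screening
# spurious outliers with a modified z-score (median/MAD) test. Real
# procedures use the outlier criteria in the applicable test code.
import statistics

def run_average(readings, z_limit=3.5):
    """Average a run after discarding modified-z outliers."""
    med = statistics.median(readings)
    mad = statistics.median(abs(r - med) for r in readings)
    if mad == 0:
        return statistics.fmean(readings)
    kept = [r for r in readings if 0.6745 * abs(r - med) / mad <= z_limit]
    return statistics.fmean(kept)

fuel_flow = [101.2, 100.8, 101.0, 150.0, 100.9]  # one spurious spike
print(run_average(fuel_flow))  # the 150.0 reading is discarded
```

Note that a plain 3-sigma test would miss the spike here: with only five points, one outlier inflates the standard deviation enough to hide itself, which is why a median-based screen is used in the sketch.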

The test runs should be analyzed following the methods outlined in the test procedure. Results from multiple test runs can be compared with one another for the sake of repeatability. PTC 46 (Overall Plant Performance) outlines criteria for overlap of corrected test results. For example, if there are three test runs, a quality test should demonstrate that the overlap is well within the uncertainty limits of the test.
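A simplified version of that overlap check can be written as follows. This is only in the spirit of the PTC 46 criterion, not its exact formulation, and the corrected outputs and the 1.5 MW uncertainty band are hypothetical:

```python
# Sketch of a repeatability check: the corrected result of each run,
# bracketed by the test uncertainty, should share a common overlap
# range with the other runs. A simplification of the PTC 46 criterion.

def intervals_overlap(results, uncertainty):
    """True if the +/- uncertainty bands of all runs share a common range."""
    lo = max(r - uncertainty for r in results)
    hi = min(r + uncertainty for r in results)
    return lo <= hi

runs_mw = [501.0, 499.5, 500.2]  # corrected output from three test runs
print(intervals_overlap(runs_mw, 1.5))
```

Runs whose uncertainty bands fail to overlap suggest that something changed between runs, and the cause should be found before the results are averaged.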

Once test analysts are satisfied that the results were proper, the test report can be written to communicate them. This report should describe any exceptions to the test procedure that may have been required due to the conditions of the facility during the test. In the event that the results of the performance test are not as expected, the report may also suggest potential next steps to rectify them.

For sites where the fuel analysis is not available online or in real time, a preliminary efficiency and/or heat rate value may be reported based on a fuel sample taken days or even weeks before the test. Depending on the type and source of the fuel, this preliminary analysis may differ significantly from that of the fuel burned during the test. It's important to understand that preliminary heat rate and efficiency results are often subject to significant changes. Once the fuel analyses are available for the fuel samples taken during the test, a final report can be prepared and presented to all interested parties.
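Heat rate depends directly on the fuel heating value, so a later lab analysis of the test-day samples moves the reported number one-for-one. A hedged example with invented fuel flow, heating values, and output:

```python
# Illustrative only: how a revised fuel heating value shifts heat rate.
# Flow, HHV, and output figures are hypothetical.

def heat_rate_btu_per_kwh(fuel_flow_lb_hr, hhv_btu_lb, output_kw):
    """Heat rate = heat input (Btu/hr) / electrical output (kW)."""
    return fuel_flow_lb_hr * hhv_btu_lb / output_kw

flow, kw = 220_000.0, 500_000.0
prelim = heat_rate_btu_per_kwh(flow, 15_900.0, kw)  # pre-test fuel sample
final = heat_rate_btu_per_kwh(flow, 16_100.0, kw)   # test-day lab result
print(round(prelim), round(final))  # prints 6996 7084
```

Here a roughly 1% change in the heating value moves the heat rate by nearly 90 Btu/kWh, which is why a report based on a stale fuel sample should be labeled preliminary.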

- Tina L. Toburen, PE, is manager of performance monitoring and Larry Jones is a testing consultant for McHale & Associates. Toburen can be reached at 425-557-8758 or [email protected]; Jones can be reached at 855-588-2554 or [email protected].

POWER | September 2006