
DISTRIBUTED SYSTEMS FOR NUCLEAR POWER PLANTS


Page 1: DISTRIBUTED SYSTEMS FOR NUCLEAR POWER PLANTS

AECL-7056

Atomic Energy of

Canada Limited

L'Energie Atomique

du Canada Limitee

DISTRIBUTED SYSTEMS FOR NUCLEAR POWER PLANTS

Systemes Distribues pour les Centrales Nucleaires

IAEA SPECIALISTS' MEETING. International Working Group on Nuclear Power Plant Control and Instrumentation

REUNION DES SPECIALISTES DE L'AIEA. Groupe International de Travail sur la Commande et l'Instrumentation des Centrales Nucleaires

Chalk River Nuclear Laboratories / Laboratoires nucleaires de Chalk River

Chalk River, Ontario

July 1980 / Juillet 1980


PROCEEDINGS OF THE SPECIALISTS' MEETING

on

DISTRIBUTED SYSTEMS FOR NUCLEAR POWER PLANTS

Convened by the IAEA Working Group on Nuclear Power Plant Control and Instrumentation

14-16 May 1980

at

Chalk River Nuclear Laboratories
Chalk River, Ontario

CANADA

Chairman: G. Yan

Atomic Energy of Canada Limited, Research Company
Chalk River Nuclear Laboratories

Chalk River, Ontario
July 1980

AECL-7056


TABLE OF CONTENTS

Page

LIST OF PARTICIPANTS iv

PHOTOGRAPHS viii

ACKNOWLEDGMENTS xii

Welcoming Address on behalf of AECL — E. Critoph, Vice-President and General Manager, CRNL

Welcoming Address on behalf of IAEA — G. Sitnikov, Scientific Secretary, IWG/NPPCI

Opening remarks — A. Pearson, Chairman, IWG/NPPCI

SESSION 1 - Properties of Distributed Systems

REQUIREMENTS AND CONCEPTS FOR A NUCLEAR PLANT SURVEILLANCE AND DIAGNOSTIC SYSTEM (NPSDS) 9

Paul J. Nicholson and D. Lanning

A DISTRIBUTED ARCHITECTURE IN THE CONTROL OF THE PWR 1300 MW NUCLEAR PLANTS OF ELECTRICITE DE FRANCE 20

G. Guesnier, P. Peinturier and G. Varaldi

THE DESIGN OF A REAL-TIME SOFTWARE SYSTEM FOR THE DISTRIBUTED CONTROL OF POWER STATION PLANT 32

G.C. Maples

DISTRIBUTED CONTROL AND DATA PROCESSING SYSTEM WITH A CENTRALIZED DATABASE FOR A BWR POWER PLANT 43
K. Fujii, T. Neda, A. Kawamura, K. Monta and K. Satoh

PROWAY - AN INTERNATIONAL STANDARD DATA HIGHWAY 65
R. Warren Gellie

A WESTINGHOUSE DESIGNED MICROPROCESSOR BASED DISTRIBUTED PROTECTION AND CONTROL SYSTEM 101

J. Bruno and J.B. Reid


DATA MANAGEMENT PROBLEMS WITH A DISTRIBUTED COMPUTER NETWORK ON NUCLEAR POWER STATIONS 119

I. Davis

THE USE OF DISTRIBUTED MICROPROCESSORS FOR SODIUM PREHEATING SYSTEM 129

K. Fujii, T. Satou, M. Satou and H. Okano

DISTRIBUTED SYSTEMS DESIGN USING SEPARABLE COMMUNICATIONS 141

A.C. Capel and G. Yan

THE IMPACT OF DATA HIGHWAY ARCHITECTURES ON CONTROL AND INSTRUMENTATION SYSTEM DESIGN 156

G.A. Hepburn, T. McNeil and R.A. Olmstead

SESSION 2 - Requirements and Design Considerations - I

THE DESIGN, DEVELOPMENT AND COMMISSIONING OF TWO DISTRIBUTED COMPUTER BASED BOILER CONTROL SYSTEMS 168

J.R. Johnstone, D. Collier and S.T. Pringle

CONDENSATE CLEAN UP CONTROL SYSTEM WITH DISTRIBUTED DDC 180
Y. Yoshioka, T. Tazima, O. Nakamura and S. Kobayashi

THE NORTHEAST UTILITIES GENERIC PLANT COMPUTER SYSTEM 192
K.J. Spitzner

DESIGN CONCEPTS AND EXPERIENCE IN THE APPLICATION OF DISTRIBUTED COMPUTING TO THE CONTROL OF LARGE CEGB POWER PLANT 198

J. Wallace

LEAK DETECTION SYSTEM WITH DISTRIBUTED MICROPROCESSOR IN THE PRIMARY CONTAINMENT VESSEL 214

K. Inohara, K. Yoshioka and T. Tomizawa

DISTRIBUTED TERMINAL SUPPORT IN A DATA ACQUISITION SYSTEM FOR NUCLEAR RESEARCH REACTORS 226

R.R. Shah, A.C. Capel and C.F. Pensom


SESSION 3 - Tours

SESSION 4 - Requirements and Design Considerations - II

DISTRIBUTED SYSTEMS IN THE HEAVY WATER PLANT ENVIRONMENT 240

G.E. Kean, J.V. Galpin and J.C. McCardle

CANDU CONTROL FUNCTIONS HIERARCHY FOR IMPLEMENTATION IN DISTRIBUTED SYSTEMS 265

Pierre Mercier

ON DISTRIBUTED ARCHITECTURE FOR PROTECTION SYSTEMS 282
P. Jover

THE OPERATOR'S ROLE AND SAFETY FUNCTIONS 290
W.R. Corcoran, D.J. Finnicum, F.R. Hubbard, III, C.R. Musick and P.F. Walzer

FAIL-SAFE DESIGN CRITERIA FOR COMPUTER-BASED REACTOR PROTECTION SYSTEMS 306

A.B. Keats

AN INTELLIGENT SAFETY SYSTEM CONCEPT FOR FUTURE CANDU REACTORS 318

H.W. Hinds


LIST OF PARTICIPANTS

CANADA

AMUNDS, L.O. (Vern), AECL Research Company, Chalk River Nuclear Laboratories, CHALK RIVER, Ontario K0J 1J0

BAYOUMI, Prof. M., Department of Electrical Eng., Queen's University, KINGSTON, Ontario K7L 3N6

CAPEL, A.C. (Tony), AECL Research Company, Chalk River Nuclear Laboratories, CHALK RIVER, Ontario K0J 1J0

CHANDRA, Murari, Atomic Energy Control Board, P.O. Box 1046, OTTAWA, Ontario K1P 5S9

CHOU, Q.B. (Jordan), Ontario Hydro, 700 University Avenue H7, TORONTO, Ontario M5G 1X6

DOWNIE, Colin, Atomic Energy Control Board, P.O. Box 1046, OTTAWA, Ontario K1P 5S9

EL-ZORKANY, Prof. H.I., Department of Systems Engineering & Computing Science, Carleton University, OTTAWA, Ontario K1S 5B6

FIEGUTH, Werner, AECL Engineering Company, Sheridan Park Research Community, MISSISSAUGA, Ontario L5K 1B2

CANADA (continued)

GELLIE, Dr. Warren R., National Research Council of Canada, Division of Mechanical Engineering, Building M-3, Montreal Road, OTTAWA, Ontario K1A 0R6

HEPBURN, G.A., AECL Engineering Company, Sheridan Park Research Community, MISSISSAUGA, Ontario L5K 1B2

HINDS, H.W. (Tony), AECL Research Company, Chalk River Nuclear Laboratories, CHALK RIVER, Ontario K0J 1J0

ICHIYEN, Norman, AECL Engineering Company, Sheridan Park Research Community, MISSISSAUGA, Ontario L5K 1B2

KLOCK, R.J. (Ron), AECL Research Company, Chalk River Nuclear Laboratories, CHALK RIVER, Ontario K0J 1J0

MAGAGNA, Dr. Lino, Ontario Hydro, 700 University Avenue H7, TORONTO, Ontario M5G 1X6

MARTIN, David J., Atomic Energy Control Board, P.O. Box 1046, OTTAWA, Ontario K1P 5S9

McCARDLE, Jim C., AECL Chemical Company, P.O. Box 3504, OTTAWA, Ontario K1Y 4G1


CANADA (continued)

McNEIL, Tim, AECL Engineering Company, Sheridan Park Research Community, MISSISSAUGA, Ontario L5K 1B2

MERCIER, Pierre, Hydro-Quebec, Centrale Nucleaire Gentilly-1, GENTILLY, Quebec G0X 1G0

OLMSTEAD, Roy, AECL Engineering Company, Sheridan Park Research Community, MISSISSAUGA, Ontario L5K 1B2

PEARSON, Dr. Al, AECL Research Company, Chalk River Nuclear Laboratories, CHALK RIVER, Ontario K0J 1J0

PENSOM, Croombe F., AECL Research Company, Chalk River Nuclear Laboratories, CHALK RIVER, Ontario K0J 1J0

SHAH, Ramnik R., AECL Research Company, Chalk River Nuclear Laboratories, CHALK RIVER, Ontario K0J 1J0

SNEDDEN, M.D. (Don), AECL Research Company, Chalk River Nuclear Laboratories, CHALK RIVER, Ontario K0J 1J0

TAWA, Roger H., Hydro-Quebec, 855 est, rue Ste-Catherine, MONTREAL, P.Q. H2L 4P5

WATKINS, Len M., AECL Research Company, Chalk River Nuclear Laboratories, CHALK RIVER, Ontario K0J 1J0

CANADA (continued)

YAN, Dr. George, AECL Research Company, Chalk River Nuclear Laboratories, CHALK RIVER, Ontario K0J 1J0

FRANCE

AMIEL, Andre, SINTRA, 74-76 Av. Gabriel-Peri, 92230 Gennevilliers, France

ANGLARET, Philippe, CGEE Alsthom, 13 rue Antonin Raynaud, 92309 Levallois-Perret, France

BECKERS, G., Sofinel, Tour Fiat, 1 Place Coupole, 92084 PARIS La Defense

FURET, Jacques, Ministere de l'Industrie, Service Central de Surete des Installations Nucleaires, 99 rue de Grenelle, PARIS 75700, France

GUESNIER, Guy, Electricite de France, Tour E.D.F.-G.D.F., Cedex N° 8, 92080 PARIS La Defense

JACQUOT, Jean Paul, Electricite de France, Research and Development Centre, 6 Quai Watier, CHATOU 78, France

JOVER, Pierre, Commissariat a l'Energie Atomique, S.E.S/S.A.I., Centre d'Etudes Nucleaires de Saclay, 91190 GIF-sur-YVETTE, France


IAEA

SITNIKOV, G., Scientific Secretary, IWG-NPPCI, IAEA, Karntner Ring 11, P.O. Box 590, A-1011 VIENNA, Austria

ITALY

PALAMIDESSI, Alvaro, CNEN-CRV, Via Arcoveggio 56/23, 40129 Bologna, Italy

JAPAN

SATO, Nobuhide, Nippon Atomic Industry Group Co., Ltd., 4-1 Ukishima-cho, Kawasaki-ku, KAWASAKI-SHI, Japan

SATOU, Takahisa, Nippon Atomic Industry Group Co., Ltd., 4-1 Ukishima-cho, Kawasaki-ku, KAWASAKI-SHI, Japan

YOSHIOKA, Katumi, Toshiba Electric Corp., Instrument & Automation Division, 1 Toshiba-cho, Fuchu, TOKYO 182, Japan

SWEDEN

LUNDBERG, David, Swedish State Power Board, Vattenfall/Ver, S-16287 VALLINGBY, Sweden

BERGGREN, Jonas, Swedish State Power Board, Vattenfall/Ver, S-16287 VALLINGBY, Sweden

U.K.

ENTWISTLE, Adrian G., Central Electricity Generating Board, Europa House, Bird Hall Lane, Cheadle Heath, STOCKPORT, England

HAMILTON, James, South of Scotland Electricity Board, Cathcart House, Spean Street, GLASGOW G44 4BE, Scotland

JOHNSTONE, Leslie Ross, Central Electricity Generating Board, North Eastern Region, Scientific Services Department, Beckwith Knowle, Otley Road, HARROGATE HG3 1PS, England

KEATS, A. Brian, Atomic Energy Establishment, WINFRITH, Dorchester, DORSET DT2 8DH, England

MAPLES, Graham C., Central Electricity Research Laboratories, LEATHERHEAD, Surrey, England

WALLACE, John N., Central Electricity Generating Board, South Western Region, Bridgewater Road, Bedminster Down, BRISTOL, England

WILLIAMS, David R., Central Electricity Generating Board, Laud House (Room L509), 20 Newgate Street, LONDON EC1A 7AX, England

WILSON, Ian, Head, Instrument Systems and Techniques Group, UKAEA, Room 121, Building B41, DORCHESTER, Dorset DT1 8DH, England


U.S.A.

DEYST, Dr. John, C.S. Draper Laboratory, Inc., 555 Technology Square, CAMBRIDGE, Mass. 02139

KANAZAWA, Dr. Richard, Electric Power Research Institute, P.O. Box 10412, PALO ALTO, CA 94303

KISNER, Roger, Oak Ridge National Laboratory, P.O. Box X-3500, M/S 10, OAK RIDGE, TN 37830

MADDEN, Dr. Paul, C.S. Draper Laboratory, Inc., 555 Technology Square, CAMBRIDGE, Mass. 02139

MUSICK, Charles R. (Ron), System Engineer, C-E Power Systems, Combustion Engineering, Inc., 1000 Prospect Hill Road, WINDSOR, Connecticut 06095

NICHOLSON, Paul J., President, Nicholson Nuclear Science & Engineering, Box 74, MIT Branch P.O., CAMBRIDGE, Mass. 02139

PAFFRATH, A. Wayne, Sangamo-Weston, Kennedy Drive, ARCHBALD, Pa. 18403

REID, J. Brian, Manager, Integrated Protection Systems, Westinghouse Electric Corp., P.O. Box 355, PITTSBURGH, Pa. 15230, U.S.A.

ROBERTS, Robert, Babcock & Wilcox, Research Centre, P.O. Box 1260, LYNCHBURG, Virginia 24505, U.S.A.

SIDES, William, Oak Ridge National Laboratory, P.O. Box X-3500, M/S 10, OAK RIDGE, TN 37830

U.S.A. (continued)

SMITH, John, Westinghouse Electric Corp., P.O. Box 355, PITTSBURGH, Pa. 15230

SPITZNER, Klaus J., Generation Process Computer Section, Northeast Utilities, P.O. Box 270, HARTFORD, Connecticut 06101

WEST GERMANY

GMEINER, Lothar, Institut fuer Datenverarbeitung in der Technik, Kernforschungszentrum Karlsruhe GmbH, Postfach 3640, Karlsruhe, West Germany

HAMMERSCHMIDT, Werner, Brown Boveri & Cie AG, Abt. GK/TE2, Postfach 351, D-6800 Mannheim 1, West Germany

HOTTMAN, Detlef, Interatom, Internationale Atomreaktorbau GmbH, Friedrich-Ebert-Strasse, 5060 Bergisch Gladbach 1, West Germany

KREBS, Dr. Wolf-Dieter, Kraftwerk Union AG, Resident Engineer at C-E, 1000 Prospect Hill Road, WINDSOR, Ct. 06095

SCHULZ, Dr. G., Siemens AG, Abteilung E STE 23, Siemensallee, D-7500 Karlsruhe, West Germany

WILHELM, Herbert, Brown Boveri & Cie AG, Abt. GK/TE2, Postfach 351, D-6800 Mannheim 1, West Germany


E. Critoph G. Sitnikov

A. Pearson G. Yan


A view of the Meeting

Tony Capel explaining the operation of INTRAN


A description of the REDNET data acquisition system by Croombe Pensom

A tour of the Dynamic Analysis Laboratory with Rudy Lepp as guide


IAEA SPECIALISTS' MEETING

on

DISTRIBUTED SYSTEMS FOR NUCLEAR POWER PLANTS

Chalk River Nuclear Laboratories, Chalk River, Ontario

14-16 May 1980

1. Sitnikov, G.
2. Spitzner, Klaus J.
3. Pearson, Al
4. Yan, George
5. Yoshioka, Katumi
6. Reid, J. Brian
7. Shah, Ramnik
8. Satou, Takahisa
9. Tawa, Roger
10. Wilson, Ian
11. Capel, Tony
12. Watkins, Len
13. Pensom, Croombe
14. Mercier, Pierre
15. Martin, David
16. Nicholson, Paul
17. Palamidessi, Alvaro
18. Keats, Brian
19. Gmeiner, Lothar
20. Hottman, Detlef
21. Furet, Jacques
22. Chandra, Murari
23. Jacquot, Jean Paul
24. Bayoumi, M.
25. Sato, Nobuhide
26. Wilhelm, Herbert
27. Krebs, Wolf-Dieter
28. El-Zorkany, H.I.
29. Klock, Ron
30. Sides, William
31. Chou, Jordan
32. Kisner, Roger
33. Smith, John
34. Magagna, Lino
35. Lundberg, David
36. Hamilton, James
37. Paffrath, A. Wayne
38. Roberts, Robert
39. Entwistle, Adrian
40. Maples, Graham
41. Hinds, Tony
42. Jover, Pierre
43. Ichiyen, Norman
44. Fieguth, Werner
45. Berggren, Jonas
46. Amunds, Vern
47. Kanazawa, Richard
48. Deyst, John
49. Musick, Ron
50. Madden, Paul
51. Williams, David
52. Johnstone, Leslie
53. Guesnier, Guy
54. Amiel, Andre
55. Olmstead, Roy
56. Anglaret, Philippe
57. Hepburn, Al
58. McNeil, Tim
59. Wallace, John
60. West, Rod
61. Hammerschmidt, Werner
62. Schulz, G.


ACKNOWLEDGMENTS

Many people worked hard to make this conference a success. In particular, Mrs. Yvonne Rawlingson did an excellent job of handling the administrative duties and associated details; Rod West took care of the slides and audio equipment during the meeting; Don Snedden orchestrated an entertaining program for the evening of the banquet; Vern Amunds organized an interesting list of activities for the Open Evening; Len Watkins organized the tours; and Mrs. Kathy Amunds helped with the Ladies' Program. I should like to thank them all for their efforts and co-operation.

G. Yan


IAEA SPECIALISTS' MEETING

DISTRIBUTED SYSTEMS FOR NUCLEAR POWER PLANTS

(CRNL, '80 May 14-16)

WELCOMING ADDRESS ON BEHALF OF AECL

(E. Critoph)

Good morning, ladies and gentlemen!

I have the pleasure this morning of welcoming you to Chalk River Nuclear Laboratories (CRNL).

For our part, we are rather proud of the Laboratories, and enjoy showing them off to visitors. More important, visits such as yours are essential if we are to keep in close touch with the outside world, since, as you have no doubt noticed, we are rather isolated geographically.

I hope that during your brief stay you will have an opportunity to sample some of the attractions of this beautiful part of the country. Unfortunately, at this time of year, you may have to compete with members of our abundant insect population, also intent on enjoying the great outdoors.

Let me explain very briefly where CRNL fits into the generalscheme of things.

The Laboratories were started in 1945 under the National Research Council. The large NRX research reactor facility started operation in 1947 and provided an initial focus for the various laboratory activities. CRNL quickly established a reputation in basic nuclear physics research, and fundamental underlying research has been one of the important foundations of our work ever since.

In 1952 a Crown Corporation, Atomic Energy of Canada Limited (AECL), was established, with Chalk River Laboratories as its main component, the other being Commercial Products in Ottawa. AECL was given the mission of developing nuclear energy for the benefit of Canadians. This mission was vigorously pursued and, over the next decade or so, CRNL in collaboration with Ontario Hydro and Canadian General Electric developed the CANDU concept and saw it successfully demonstrated in NPD (a 25 MWe demonstration reactor just 15 miles up the river). Subsequently, the design of commercial CANDU reactors was made the responsibility of another arm of AECL in Toronto formed for that purpose. However, CRNL has remained heavily involved in the power reactor program with prime responsibility for the R & D necessary to support the CANDU system.


Now AECL has five major arms: the Research Company (CRNL and WNRE), the Engineering Company (power reactor design), the Radiochemical Company (isotope sales, etc.), the Chemical Company (D2O production), and the International Company (CANDU foreign marketing); but CRNL is still the biggest single site. Many graduates from CRNL play key roles in these organizations as well as in other sectors of the Canadian nuclear industry.

Over the years there have been major additions to the facilities and programs at CRNL. I will not take the time to mention these now but hope you will take the opportunity while here of finding out more about those that particularly interest you.

I would like to use this occasion to acknowledge the very useful role played by the IAEA in the dissemination of technology. We at CRNL have a great deal of sympathy with their objectives in this area. It is, therefore, a particular pleasure to host this IAEA Specialists' Meeting.

The subject of the meeting, "Distributed Systems for Nuclear Power Plants", is also very appropriate. CRNL has had, for many years, a vital interest in computers for reactor control and has been a strong advocate of this application of computers. Even those of us not expert in the field realize that the latest trend towards distributed architectures could have very important ramifications, and we are anxious to keep up-to-date on the latest developments in this field.

I know you have a very full program so I'll let you get on with it. Welcome to CRNL, and thank you for coming.

******


Welcoming Address to the Specialists' Meeting on

DISTRIBUTED SYSTEMS FOR NUCLEAR POWER PLANTS

(G. Sitnikov)

Dear Chairman, (Ladies), Gentlemen,

On behalf of the International Atomic Energy Agency let me welcome you to this Specialists' Meeting on "Distributed Systems for Nuclear Power Plants".

I highly appreciate your enthusiasm and readiness to assist the Agency, in particular in such a very important subject as Control and Instrumentation for Nuclear Power Plants. This meeting is sponsored and organized within the framework of the Agency's International Working Group on Nuclear Power Plant Control and Instrumentation in cooperation with the Chalk River Nuclear Laboratories of Atomic Energy of Canada Limited.

The Agency appreciates very much the opportunity of holding this meeting here in Chalk River where, in the laboratories nearby, promising experiments and projects are being carried out with the objective of investigating and demonstrating advanced electronic systems to make the operation and control of future nuclear power plants easier and safer. On behalf of the Agency I would like to express gratitude to the Government of Canada for the invitation to host the meeting.

It gives me a special pleasure to express the highest appreciation to the Chairman of the Meeting, Dr. Yan, and to the Chairman of the International Working Group, Dr. Pearson, and to their other colleagues for their very hard and very useful organizational and preparatory work.

This meeting is one of the most important in the series of meetings organized within the framework of the IWG NPPCI, since it is devoted to the problem of the application of computers to control systems of Nuclear Power Plants. The solution of this problem depends a great deal on the development of new reliable hardware and software, as well as new types of communication systems.

I would not like to go too deeply here into the technical aspects. Still, at the beginning of the meeting it could be interesting to make a short survey of the development of computerized control systems. In their history they have already passed through three stages of development, so to say generations:


1. Information systems: computerized systems assigned to the centralized collection, storage and retrieval of source data from detecting elements, and capable of carrying out some simple treatment procedures in order to present a "post-factum" picture of NPP performance.

2. Advisory systems: computerized systems assigned to carry out the same tasks as above while also providing current analysis of NPP performance, in order to predict process development and generate advice to an operator in accordance with possible predetermined process schemes.

3. Control and monitoring systems: systems capable not only of generating advice to an operator but at the same time of carrying out real operator functions, sending control commands and monitoring their implementation. Such systems must be capable of developing correlation functions describing the real process, based on information collected during operation.

The reliability of such centralized systems appeared in general to be lower than the reliability of the reactors themselves and their main technological equipment. To improve it, it was necessary to duplicate some vital parts of the computerized systems. Moreover, such systems demanded, and still demand, the use of the most modern and very expensive computers.

If we take a look at the programme of the meeting, we could come to the conclusion that we are standing now at the edge of a new generation of control systems for Nuclear Power Plants, and, what is very interesting, not only for pressurized, boiling water and pressure tube type reactors, but for LMFBRs (for example, the paper of K. Fujii, T. Satou, M. Satou and H. Okano from Japan). Those new control systems are distributed systems. They really open a new era of computerization of Nuclear Power Plants since they have essential advantages.

They unload the central computer(s) for more sophisticated and delicate analysis of current processes, leaving the maintenance of all units to peripheral microprocessors. They distribute responsibility in this way between central and peripheral processors in a hierarchical system, tremendously increasing that system's reliability. They make it possible to free the operator from routine non-intellectual work.

Even such systems, with their promising advantages, have some definite limitations in usage and even raise some problems, which were the reasons for all of you to come here. Their solution could be assisted through international cooperation. That is the reason why our IWG NPPCI devotes so much attention to it. I call attention to only two of


the most important problems from my point of view:

1. Accumulation and use of experience of abnormal situations at NPPs.

2. Operator-computerized system communication. Operation of existing control systems is based on statistical and formalized representation of processes in the reactor and other equipment. In other words, they use experience accumulated during the whole history of NPP operation. Modern distributed systems are mostly self-educating systems, but they can be educated only through experience based on a sufficient amount of data.

Still, computerized systems cannot totally replace the operator for the control of an NPP in abnormal situations. We should accumulate information about the development of processes in such cases, perhaps simulate similar ones, and only then could we entrust the system with control of the NPP in analogous situations. Experience obtained from the TMI-2 accident, for example, gave us a lot of information to think over.

We have, however, had too few incidents and accidents which have provided information. We certainly need more and closer international cooperation to pool information. When a new model of an aircraft is designed, the first specimen is usually destroyed on the ground. We cannot afford to destroy any pilot NPP. The only way to get the information we need is through modeling processes and international accumulation of information on all abnormal situations.

It may appear strange, but with the implementation of distributed systems the role of the NPP operator is further enhanced. In spite of freeing him from some control functions, the main function belongs to him: the function of being ready to meet any dangerous abnormality of NPP operation and to make a decision. So in the process of "operator-computerized system" communication we need not only a well trained but a well educated operator, and the higher complexity increases the demands on his education.

I would very much appreciate it if you did not limit the discussion to technical aspects but also considered possible international cooperation in the above field.

In conclusion, I wish you a successful and productive meeting and express the hope that it will contribute to the implementation in practice of computerized control systems of the fourth generation.


IAEA SPECIALISTS' MEETING ON

DISTRIBUTED SYSTEMS FOR NUCLEAR POWER PLANTS

(CRNL, 1980 May 14-16)

OPENING REMARKS BY CHAIRMAN, IWG/NPPCI

(A. Pearson)

Good morning!

This is your third and final welcome to the Chalk River Nuclear Laboratories and to this, the 22nd Specialists' Meeting to be held under the auspices of the IAEA International Working Group on Nuclear Power Plant Control & Instrumentation, a group it has been my privilege to preside over for the past five years. This is the 2nd Specialists' Meeting to be held in Canada; the 1st, on In-Core Instrumentation and Failed Fuel Detection, took place in Toronto exactly six years ago.

These meetings have left behind an impressive legacy of information dealing currently and in detail with that very important aspect of nuclear power generation, control and instrumentation. I trust that this meeting will contribute its fair share.

The topic to be discussed in the next three days has had an interesting history. As far back as 1962, when the minicomputers began to appear, attention was being turned to the use of digital processors tailored to suit particular requirements, and I recall that at a conference in Sandefjord, Norway, in 1968, during a panel discussion, the idea was raised that perhaps nuclear power plant control and instrumentation could be better implemented by stand-alone mini digital computers linked in some undefined way to a supervisor.

The idea built up slowly as one watched the ever-growing use of computer networks and distributed computing capabilities around the world, but the last few years have seen an almost exponential increase in interest in the application of distributed systems in the nuclear industry.

This increased interest, it seems to me, has not been fired so much by the real needs of the nuclear industry as by the rapidity with which electronic technology is moving. We are being pushed by a technology that currently passes through several generations of development during the time it takes us to design and bring a nuclear power plant into production.


Despite the success of existing designs and the pressure to stick to them, the rate at which new components become available and others drop from the suppliers' shelves requires that we give attention to system architectures that are more tolerant of this situation.

A distributed data base, containing both on-line and archival information, made available to all systems of a nuclear power plant by means of a highly reliable communications medium, could form the basis for such an architecture. It could not only solve this problem of rapid component evolution but also provide for complete and comprehensive plant control and surveillance.

Perhaps during the next three days we may see some of the ways that this new and exciting approach can be made a reality.

I will now turn the proceedings over to Dr. George Yan, who will be your Chairman throughout the meeting.


SESSION 1 - PROPERTIES OF DISTRIBUTED SYSTEMS


REQUIREMENTS AND CONCEPTS FOR A NUCLEAR PLANT SURVEILLANCE AND DIAGNOSTICS SYSTEM (NPSDS)

P. J. Nicholson and D. D. Lanning

ABSTRACT

An advanced plant surveillance and diagnostic system has been postulated for the purpose of aiding operator response to unanticipated plant upsets. The plant operator is described as an information processor that needs plant status information represented by symbolic outputs. These will be compatible with modern visual processing techniques, such as CRTs. Preferred methods of estimating the state of the plant and verifying measurements require on-line, real-time models which are simple dynamical relationships based on energy and mass conservation laws. Implementation of on-line state estimation techniques using such models probably requires a distributed microcomputer system whose features are described.

1. INTRODUCTION

Control of U.S. nuclear power plants should evolve away from the traditional use of standard sensors that transmit by "hard wired" systems to control room readouts. In these traditional control rooms the reactor operator must mentally validate each sensor reading and quickly check the multiple readouts in order to determine, and then execute, the proper control action or sequence of actions. Although by training and experience the operators have been able to operate the nuclear plants safely and successfully for hundreds of reactor operating years, the demand on the operator's capability is substantial.

In this regard, it has been recognized for several years that more advanced technology is available that could improve the operator's capability to analyze sensor data and organize information for easier prediction of proper control actions. In fact, certain standardized automatic actions of the safety system have always been part of nuclear plant control systems due to requirements for control action in a short time relative to a human decision and response time. However, advanced control technology has not been accepted by the nuclear industry for a variety of reasons, e.g., standardization, the long time span from initial design to actual construction (ten years or more), and concern for the safety and reliability of advanced systems.

In the aftermath of Three Mile Island's Unit 2 accident, it is widely agreed that more advanced technology and methods should be incorporated into display systems, including some backfitting into presently operating plants. It is the authors' opinion that these new designs must be based on certain criteria if safety and operability are to be successfully enhanced. These criteria are listed below:

(1) A new plant-wide data acquisition and surveillance system design needs to be defined and adopted as a reference for the nuclear industry. This system, designed to meet licensing requirements, should employ redundant distributed microcomputers with modular software, multiplexing, and a highly reliable central computer to supply data to interactive video control room displays, such as CRTs.


(2) The distributed system should collect all data which characterize plant status and reduce the data base to a small set of meaningful plant "state" parameters, which are then displayed. To assure the credibility of displayed information, all measurements used in status parameter calculation require two levels of validation: inter-comparison of similar measurements and cross-comparison of diverse measurements.

(3) Both measurement validation and estimation of unmeasured state parameters require the use of simple on-line dynamic plant process models which have been previously verified and validated. They should include sensor response characteristics and be capable of execution in real time.

(4) The system should alert the plant operator to the onset of a disturbance and indicate the general area of the plant where a fault exists. As a later capability it could guide his responses by identifying correct procedures and predicting consequences of contemplated control actions.
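The two-level validation called for in criterion (2) can be sketched in outline. The following Python is purely illustrative and not part of the paper: the function names, the median-based consistency test, and all tolerance and sensor values are assumptions.

```python
def validate_redundant(readings, tolerance):
    """Level 1: flag redundant channels of the same quantity that
    disagree with the channel median by more than a tolerance."""
    ordered = sorted(readings)
    median = ordered[len(ordered) // 2]
    return [abs(r - median) <= tolerance for r in readings]


def cross_check(measured, estimate, tolerance):
    """Level 2: compare a measurement against a diverse estimate,
    e.g. a value reconstructed from an energy or mass balance."""
    return abs(measured - estimate) <= tolerance


# Three hypothetical redundant pressure channels (MPa); the third has drifted.
flags = validate_redundant([15.51, 15.49, 14.90], tolerance=0.2)
# flags -> [True, True, False]: the drifted channel is excluded from
# the status parameter calculation before anything is displayed.
```

In a real system the tolerances would themselves come from the validated plant model and sensor response characteristics, rather than being fixed constants as here.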

The above four considerations have been incorporated into the conceptual design of a Nuclear Plant Surveillance and Diagnostics System (NPSDS) as described in this paper. The concept incorporates advanced but well-founded principles, as well as proven technologies that have been utilized in control systems outside of the nuclear industry.

2. CONCEPT DESCRIPTION

2.1 Design Concept for Nuclear Plant Surveillance & Diagnostic System (NPSDS)

The proposed Nuclear Plant Surveillance and Diagnostic System (NPSDS), depicted in Figure 1, is conceived as an addition to existing or planned nuclear units. However, the dedicated display may be a candidate for incorporation into advanced control room designs. The purpose of NPSDS is to acquire and process plant data and present validated and prioritized information regarding plant status to the operator. It is specifically focused toward development of a plant "state vector", a reduced set of parameters which characterize the plant's safety and operability. This information will be of use to plant operators and supervisory personnel in monitoring operations, in achieving a consistent picture of plant status, and in detecting and guiding responses to disturbances.

To accomplish these purposes the NPSDS first acquires all relevant plant data, both directly from sensors throughout the primary and secondary sides of the plant and, as appropriate, from other plant systems such as the process computer. This initial data acquisition and first-level signal processing is carried out within the redundant distributed microcomputers as described in Section 3.3. Thus the acquired data base is digitally filtered and validated at the first level by inter-comparisons of redundant measurements where these are available. The most recent samples are then smoothed, averaged and stored for subsequent trending analysis and comparison to limits established by the stored on-line plant model. Data is then available on demand or by cue to operators via interactive video displays (CRTs, TV) at one or more locations. Both the state information and the plant data base will be structured in a hierarchical manner for display convenience.

It is expected that the plant state vector will include inferred plant parameters. NPSDS develops this information using "state estimation" procedures which employ simple dynamic on-line models of the plant processes in combination with the stored data base. The deductive on-line models serve the additional purpose of data integration and second-level measurement validation by allowing cross-comparison of diverse process measurements. Comparison of the validated measurements to model-predicted parameters allows discrepancies to be interpreted as plant upsets.

Figure 1. NPSDS Concept. (Block diagram: distributed microcomputers acquire protection (P) and control (C) signals from the reactor controls and the balance of plant, with input/output control; a semi-duplex time-division multiplexed link with a polling protocol, at about 1 Mbps per link, connects them to a fault-tolerant central computer providing storage, display generation and system test, which drives the operator/supervisory display.)

To provide the required data acquisition, processing, storage and display capability, as well as to carry out state estimation functions, the NPSDS will require a new powerful, flexible and reliable computer system. For this purpose a distributed LSI microcomputer system is postulated which will provide the requisite computational capabilities and memory capacity. NPSDS can be implemented largely with current commercially available technology. Due to the low cost of the LSI* devices, designs of this type have very attractive performance vs. cost attributes and are in wide use in the chemical industry and office equipment fields. Section 3.3 discusses this aspect in further detail. See also Ref. [1].

2.2 Performance Advantages

NPSDS has the potential to provide the operator with a powerful new tool for monitoring normal operation, for anticipating and forestalling disturbances, and for controlling and limiting their effects. These may arise from external causes, such as loss of load, from internal plant malfunctions such as equipment failure or other error (operator, maintenance, etc.), or from combinations of these faults. Typical examples from recent mishaps can be cited to show how the system's on-line process modeling and estimation capabilities, as described subsequently, could aid in avoiding disturbances. In these cases NPSDS could have provided:

*Large Scale Integrated

(1) Confirmation of pressure relief valve closure by monitoring fluid losses in the primary circuit.

(2) Indications of core margins to saturation by automatic steam table look-up, using available core temperature and pressure data.

(3) Confirmation of auxiliary feed system operability, after initiation, by comparing relevant flows, pump motor power, and discharge/suction pressures and by checking primary to secondary heat transfer rates.
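Example (2) above is essentially a table look-up with interpolation. The sketch below uses a deliberately coarse illustrative saturation table (a real system would carry full steam tables) to show the margin-to-saturation calculation:

```python
# Illustrative saturation temperatures for water (pressure in MPa,
# Tsat in deg C); rounded figures for sketching only.
SAT_TABLE = [(10.0, 311.0), (12.0, 324.7), (14.0, 336.7), (16.0, 347.4)]

def t_sat(p_mpa):
    """Linear interpolation of saturation temperature from the table."""
    for (p0, t0), (p1, t1) in zip(SAT_TABLE, SAT_TABLE[1:]):
        if p0 <= p_mpa <= p1:
            return t0 + (t1 - t0) * (p_mpa - p0) / (p1 - p0)
    raise ValueError("pressure outside table range")

def saturation_margin(p_mpa, t_hot):
    """Margin to saturation: how far the hottest measured coolant
    temperature lies below boiling at the measured pressure."""
    return t_sat(p_mpa) - t_hot

# e.g. 15.5 MPa and a 320 deg C hot-leg temperature give a margin of
# roughly 25 deg C; a shrinking margin would be trended and alarmed.
```

The pressure, temperature and table granularity are assumptions for illustration; only the margin-to-saturation idea comes from the text.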

These examples indicate that NPSDS, when fully and convincingly demonstrated and employed, could favorably affect plant safety and availability. Reaching that objective requires resolution of several critical issues discussed below.

3.0 Critical Design Issues in System Implementation

Enhancement of operator capabilities and plant performance clearly rests on a workable, coherent and self-consistent design approach. Three elements must be brought together in a harmonious whole: state estimation methodology, the underlying plant modeling, and implementation of the distributed system design to achieve the desired safety grade reliability. These critical issues are now discussed. Further research and development needs are presented in the concluding section.

3.1 Plant State Vector Estimation

The NPSDS integrates the plant data base by means of the on-line models and state estimation techniques to derive a minimum set of parameters which characterize the state of the plant. When correctly chosen, this minimum set, or "state vector", will alleviate operator information overload and also indicate plant systems exhibiting abnormal performance.

Figure 2. Conventional state estimation. (Block diagram: the control signal U(T) drives both the reactor or plant and a model started from initial condition X0; the measurement vector Y(T), formed through the measurement matrix, is compared with the model output to form a residual, which is fed back through a gain matrix to correct the model state vector X(T).)


The parameter set of immediate interest to U.S. light water reactor plants is the safety state vector, as described in NUREG-0585 [2] and elsewhere [3]. It may consist of 30 to 40 parameters of safety significance, such as margins to saturation, primary coolant inventory, gross fuel integrity, primary to secondary heat flow, etc. It is apparent that some elements of this set are either directly measurable or can be obtained by applying correlations to available measurements. Current reactor protection systems perform this latter function now to obtain DNBR or maximum linear heat rate. Still other elements are not directly measurable but must be estimated by recourse to validated process models.

In the first case (directly measurable elements) a digital filter is used to find the best fit to a time-sequential set of measurements. This procedure typically involves averaging of redundant measurements and smoothing, with recent samples weighted more heavily. In this first-level processing raw sensor data is reduced and individual sensors can be checked for reasonableness. Random errors in measurement are not serious and even a bad sensor can be quickly detected. Digital filtering of this type, readily achieved with microcomputers, plays an important role in the treatment of redundant process sensor data from the plant and is a necessary first stage in conditioning raw data used in overall plant state estimation.
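The first-level processing just described (redundant-channel averaging plus smoothing that weights recent samples more heavily) can be sketched as a simple exponentially weighted filter; the channel count, weighting factor and data are illustrative assumptions:

```python
def first_level_filter(history, alpha=0.3):
    """First-level processing sketch: average the redundant channels at
    each sample time, then exponentially smooth the resulting sequence
    so recent samples carry more weight (larger alpha = heavier)."""
    smoothed = None
    for channels in history:                 # oldest sample first
        avg = sum(channels) / len(channels)  # redundant-channel average
        smoothed = (avg if smoothed is None
                    else alpha * avg + (1 - alpha) * smoothed)
    return smoothed

# Three redundant channels sampled at 1 Hz over five seconds
est = first_level_filter([[100.1,  99.9, 100.0],
                          [100.2, 100.0, 100.1],
                          [100.4, 100.2, 100.3],
                          [100.5, 100.3, 100.4],
                          [100.6, 100.4, 100.5]])
```

An exponential filter needs only the previous smoothed value, which suits the small scratchpad memories of the distributed microcomputers described in Section 3.3.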

In the second case (indirectly derived elements) the mathematical structure of the process model must be known or hypothesized from the physics of the process. The model is then validated or 'identified' by comparison of model outputs to measurements and by adjustment of model parameters to obtain the best match. In a conventional Kalman filter implementation of this procedure, illustrated in Figure 2, differences between model and measurement are fed back through a "gain matrix" to correct the model outputs until the weighted sum squared of the residuals is minimized. The model output, the state vector, contains the additional missing variables that were to be estimated. For the present purpose the Kalman filter implementation is to be limited to the calibration phase where the model is identified or validated against sensors that have been separately checked. During normal plant operations correct sensor functioning cannot be assumed but must be verified as a step in the overall calculation of the state vector.
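In the scalar case, the conventional residual-feedback structure described above reduces to a few lines. This is a sketch with hypothetical noise parameters and data, not the plant implementation:

```python
def kalman_1d(z_seq, x0, p0, f, h, q, r):
    """Minimal scalar Kalman filter: the difference between measurement
    and model prediction (the residual) is fed back through the gain k
    to correct the model state, as in the conventional scheme."""
    x, p = x0, p0
    for z in z_seq:
        # predict with the (scalar) model x' = f*x
        x, p = f * x, f * p * f + q
        # update: the gain weighs model confidence against measurement noise
        k = p * h / (h * p * h + r)
        x = x + k * (z - h * x)      # residual correction
        p = (1 - k * h) * p
    return x

# Noisy measurements of a steady quantity near 5.0; the estimate
# converges toward 5 from a deliberately wrong initial state.
x_hat = kalman_1d([4.9, 5.2, 5.0, 4.8, 5.1], x0=0.0, p0=1.0,
                  f=1.0, h=1.0, q=0.01, r=0.25)
```

In the full system the state, gain and measurement quantities are matrices rather than scalars, but the predict/residual/correct cycle is the same.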

A reliable estimation procedure which is not strongly sensor dependent is probably achievable if careful attention is paid to the modeling component. What is needed is a model that can first be largely verified and validated prior to on-line use. Such a technique, namely deductive modeling, exists and is the basis for the NPSDS estimator.

In the application of deductive modeling to safety state vector estimation, the mathematical form of the process model is defined from first principles, such as mass and energy conservation laws. Model validation, the process of picking model constants and matching the model to plant measurements, is initiated by application of design data and steady state calibration. As a final step the plant transient data would be used to refine the initial model. Once a match is achieved the model is said to be validated or identified; thereafter, with periodic recalibration, the model can run free of the plant and be used to cross-check measurements of key parameters. The result, then, is an estimator with the general form shown in Figure 3. Current research by the authors will determine whether both sensor checks and state vector estimates can be achieved at the microcomputer level or whether assistance from the supervisory computer is required.

In estimating a typical state vector element, such as primary coolant inventory, with this technique, several steps may be needed. The measured data must first be digitally filtered to provide the estimator with the best input data. The missing state parameters can then be estimated from an overall fit of a total plant model to a wide spectrum of measurements. As part of this procedure the desired parameter, encompassed by a local process model (mass balance), will be specifically matched to detailed local measurements. The objective will be to tie the estimated parameters to as many measurements as possible, without making the result dependent on any single measurement.

Figure 3. State estimation conceptual solution. (Block diagram: the control vector U(T) drives both the reactor or plant and a previously validated plant model; the model state vector X(T), passed through the measurement matrix, is compared with plant measurements in a consistency check whose residual drives fault indication and calibration. Stated assumptions: model valid, computer OK.)

If the scheme proves to be successful, then the deviation of measurements from the model can be interpreted as a plant disturbance, and the method would appear to be a global approach toward disturbance detection. To be unambiguous the method must eliminate apparent causes of anomalous behavior arising from either sensor malfunctions or model infidelity. Individual sensor failures will, in general, be detected during the data validation at each subsystem. However the estimation process breaks down if too many sensors give bad data. Therefore sufficient redundancy and good maintenance of sensors are required to make the scheme work; these are good practices in any event.
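The interpretation of model-measurement deviations as disturbances can be sketched as a residual threshold test; the parameter names, numerical values and 3-sigma threshold below are illustrative assumptions:

```python
def detect_disturbance(measured, predicted, sigma, n_sigma=3.0):
    """Second-level cross-check sketch: compare validated measurements
    against free-running model predictions; a residual outside
    n_sigma * sigma is interpreted as a possible plant disturbance."""
    flags = {}
    for name, z in measured.items():
        residual = z - predicted[name]
        flags[name] = abs(residual) > n_sigma * sigma[name]
    return flags

# Hypothetical pressurizer level and loop flow cross-check
flags = detect_disturbance(
    measured={"pzr_level_m": 7.9, "loop_flow_kg_s": 4405.0},
    predicted={"pzr_level_m": 7.2, "loop_flow_kg_s": 4400.0},
    sigma={"pzr_level_m": 0.1, "loop_flow_kg_s": 20.0})
# the level residual (0.7 m, 7 sigma) is flagged; the flow is not
```

As the text cautions, a single flagged residual is ambiguous; in practice persistence checks and the pattern of flags across diverse measurements would be used to distinguish a sensor fault from a real upset.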

3.2 Plant Models

The foregoing discussion argued the need for prior verification of models which could then be well matched to the process (validated). Successful models are probably attainable if the norms that were developed for the Cromby plant model [4], among others, are followed. The nonlinear deductive models proposed for the NPSDS concept adhere to these norms but go further in applying them to an on-line estimator with real-time capability.

As discussed earlier, the mathematical form for the deductive models is taken from basic physics relationships. In this case, the relationships are mass and energy conservation laws applied to each of the set of control volumes which together comprise the plant. The control volumes used for initial scoping are illustrated in Figure 4 for the primary system of a generalized PWR plant and encompass the state variables of interest.

This formulation is analogous to that used in some of the safety transient codes such as RELAP/RETRAN [5], and enables the latter codes to be used to verify the accuracy of the proposed real-time code. The plant model so chosen contains 200-300 first-order differential equations which can probably be computed in real time by virtue of the enhanced processing power of the distributed system. This estimate is for a model that lumps together the characteristics of some elements of the balance of plant, such as condensate and drain pumps, and is consistent with other simple plant models [6]. Its initial development appears to be feasible within reasonable time and resource constraints. The required software will be modest, allowing one to think of software packages that are each reliable, amenable to checking, and which can meet NRC licensing criteria.

A further requirement of the model is that it be capable of execution either as one combined model at the plant level or as a number of mutually consistent submodels at the subsystem level. A convenient way of accomplishing this objective is to represent the input/output characteristics of each control volume in the frequency domain using transfer function representation. The constants for these expressions would be suitably chosen from stored tables to assure that any linearity requirements are met in the operating region. The rules for multiplying transfer functions together then allow model combinations to suit the needs for data and sensor intercomparison. This ability to decompose the model is also motivated by a desire to incorporate its various parts into the distributed microcomputers and to utilize their combined processing power for real-time estimation. This is a task that would probably exceed the capability of a single central processor, based on comparison to recent experience with a 31-equation dynamic model used in the LOFT control system [7].
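The rule that series-connected transfer functions multiply can be sketched directly on polynomial coefficients; the two first-order lags below are illustrative, not taken from the plant model:

```python
def poly_mul(a, b):
    """Multiply two polynomials given as coefficient lists
    (ascending powers of s)."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def cascade(tf1, tf2):
    """Combine two control-volume submodels in series: transfer
    functions multiply, i.e. numerators and denominators multiply."""
    (n1, d1), (n2, d2) = tf1, tf2
    return poly_mul(n1, n2), poly_mul(d1, d2)

# Two illustrative first-order lags 1/(1 + tau*s), tau = 2 s and 5 s
num, den = cascade(([1.0], [1.0, 2.0]), ([1.0], [1.0, 5.0]))
# den -> [1.0, 7.0, 10.0], i.e. 1 + 7s + 10s^2
```

Because combination is a purely mechanical coefficient operation, submodels held in different microcomputers can be merged as needed for a particular data or sensor intercomparison.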

Figure 4. Nuclear plant control volumes. (Schematic of the primary system of a generalized PWR showing the control volumes used for initial scoping: steam generator, feedwater, relief valve, primary coolant pump, and volume control tank.)

3.3 Computer Technology Needs

Successful implementation of the computer-based real-time Nuclear Plant Surveillance and Diagnostics System (NPSDS) clearly requires application of several important technologies, which derive from a proven experience base in other applications (chemical industry, experimental nuclear research, aerospace, and fossil plants). These are:

1) A highly flexible and reliable computer system architecture, including distributed state-of-the-art LSI microcomputers, and a multiplexed data acquisition network.

2) Software design and development disciplines to improve software reliability.

3) A very reliable central computer to supervise and test the system and provide data to control room displays. This computer could be configured along the lines of the CSDL fault-tolerant multiprocessor [1].

These major technology items will now be summarized, based on preliminary work to date.

3.3.1 Computer System Overview

The computer system proposed for the NPSDS is currently visualized as a simply connected array of LSI microcomputers, although other interconnect schemes are under study. The exact number of computers in the array is undetermined but 10-20 are anticipated in an initial system. They are identical, interchangeable, and all have the same interface with the interconnecting data highway. The actual devices are also unspecified but one has a good selection of commercially available processors to choose from. For example, the IEEE Survey identified about a dozen different models. Some of these, like the Honeywell TDC 2000 now widely used in the chemical industry, already come as a system with their own interconnect. For various reasons, including the eventual requirements for resolution with both polarities, the likely requirements for directly addressable distributed memory and the desire for compatibility with the S100 bus, the focus of the NPSDS will be on the 16-bit microprocessor as a standard.

In our system the microcomputers do not have any peripherals; those are all in the control room or the computer room. Our device is a stand-alone unit dedicated to one limited set of functions consisting of data acquisition, digital filtering, averaging, smoothing, and estimation of one or more state variables. Its interface to the plant is via the sensors on the input side and the data highway on the other. The LSI microcomputer is well suited to this task as it is small, can be well shielded against pernicious plant environments (EMI), and draws very small amounts of power. This last feature enables it to depend on backup battery supplies which can see it through plant electrical supply interruptions.

They are also extremely reliable, having a mean time to failure 10 to 100 times better than conventional computers, for an average life comparable to the plant life. Two-fold redundancy for any function should give acceptably good performance in a safety-related application. At a few hundred dollars each, two dozen of them in the plant means that the important cost factors are not in computational hardware, but are in software, peripherals and maintenance.

The interface to the plant is conventional for sampling process sensors at about 1 sample/sec. Analog multiplex switching (via FETs) gates the signal samples into an 8-12 bit A/D which is under processor control. The digital samples are stored in local scratchpads preparatory to local averaging, smoothing and use in estimation algorithms, after which they are returned to local storage. Each processor typically handles 25-50 signal channels in this way so that 5-10 seconds' worth of data is held at any one time in a small RAM, enough to do a good digital filtering job. The stored programs are contained in read-only memories (ROMs) and may not therefore be altered except at the plant under controlled conditions. Care must be taken to protect the contents against unintended changes in state from external influences. In addition a self-test routine is carried out under control from the central computer.
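The per-channel acquisition path just described (A/D conversion, a few seconds of samples in a small RAM, local averaging) might look like the following sketch; the 12-bit converter, 8-second buffer depth, full-scale value and raw counts are illustrative assumptions:

```python
from collections import deque

class ChannelBuffer:
    """Per-channel scratchpad sketch: hold the most recent few seconds
    of 1 Hz samples in a small ring buffer, enough for local averaging
    and smoothing before results go back to local storage."""
    def __init__(self, seconds=8):
        self.samples = deque(maxlen=seconds)   # oldest samples drop off

    def acquire(self, raw_count, full_scale=10.0, bits=12):
        # convert an A/D count (0 .. 2**bits - 1) to engineering units
        value = raw_count * full_scale / ((1 << bits) - 1)
        self.samples.append(value)

    def average(self):
        return sum(self.samples) / len(self.samples)

ch = ChannelBuffer(seconds=8)
for count in [2048, 2050, 2047, 2052, 2049]:   # mid-scale 12-bit counts
    ch.acquire(count)
# average() -> roughly half of full scale (about 5.0)
```

A bounded ring buffer matches the fixed small RAM of the microcomputer: memory use is constant regardless of how long the channel has been sampled.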

The next significant component of our system is the interconnect technology which links processors to the central computer. Again a number of options exist. For this purpose we visualize the interconnection as a self-clocking asynchronous serial time-division multiplexed bit stream controlled by a polling protocol supervised by the central unit. This bit stream can be sent over long lengths via a number of independent paths at rates in excess of 1 Mbps. These links can be either electrical or optical. Typical formats for baseband electrical data transmission use a Manchester biphase coded waveform with an invalid header for synchronization and redundant bits to detect transmission errors. Systems with this scheme have been demonstrated in a number of aerospace applications. A class 1E qualified system now available commercially handles bit rates of 0.5 Mbps and uses carrier modulation to achieve electrical noise immunity.
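Manchester biphase coding guarantees a level transition in the middle of every bit cell, which is what makes the stream self-clocking and lets a missing transition be rejected as an error. A sketch under one common polarity convention (the intentionally invalid header pattern used for synchronization in such systems is omitted here):

```python
def manchester_encode(bits):
    """Manchester (biphase) encoding sketch: each bit becomes a pair of
    half-cells with a guaranteed mid-bit transition, so the receiver can
    recover the clock from the data stream itself.
    Convention assumed here: 1 -> (1, 0), 0 -> (0, 1)."""
    out = []
    for b in bits:
        out += [1, 0] if b else [0, 1]
    return out

def manchester_decode(halves):
    """Inverse operation; a pair with no mid-bit transition is not a
    valid data symbol and is rejected."""
    bits = []
    for hi, lo in zip(halves[::2], halves[1::2]):
        if hi == lo:
            raise ValueError("missing mid-bit transition: invalid symbol")
        bits.append(1 if (hi, lo) == (1, 0) else 0)
    return bits

frame = [1, 0, 1, 1, 0]
assert manchester_decode(manchester_encode(frame)) == frame
```

The cost of the guaranteed transition is that the line rate is twice the data rate, which is why the cited links run well above the data throughput they carry.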

For various reasons, including bandwidth, electrical isolation, and superior EMI protection, the authors look favorably on optical multiplexed systems, provided the cost of terminals can be brought down. We particularly approve of this scheme


since it allows information transfer from protection systems to control room consoles via the microcomputer, without violating any NRC/IEEE separation standard.

3.3.2 Software

The bottom line of the design is the software package; its complexity, development cost, integrity and perhaps its licensability. How good can a stored program be made to be, knowing that, prior to verification and validation, error rates are on the order of 1-2 errors per hundred lines of code?

Several features of NPSDS contribute toward reliable software. First, the total program is separated functionally and physically into modules that are dedicated to their particular task. Computations take place in parallel and chances for interprogram interaction are minimized.

Second, the independent program modules lend themselves to on-line testing. One imagines closed loop tests, injection of pilot tones, built-in tests. In addition it is contemplated that each microprocessor module be dual redundant and that both can operate in parallel and check each other's results. This doesn't eliminate software coding errors, which are assumed to be common mode, but it can take care of another type of nuisance: pickup of erroneous bits through power supply droop, EMI effects, etc.

To eliminate errors occurring from program changes, the program is stored in read-only memory that can be altered only under controlled conditions away from the plant.

In addition to these advantages which are inherent in the NPSDS design, it is assumed that the precepts for reliable software development will be followed. These include formal procedures for specification, development and testing, with the emphasis on detection of errors prior to operational use. To achieve this result a fairly substantial test cycle is required, including perhaps two independent software packages, and separate teams to code them. These precautions should catch ordinary coding mistakes but will not correct errors or deficiencies in the initial specification of the software package. One is then left with the option of dual specification and coding efforts, and the test program would play these against each other. Some automation of this process is needed, employing techniques like the CSDL DART process and other tools such as RXVP and SADAT.

After all of the above is done, what is expected in the way of residual errors? The indication from the NCSS experience was that about 85% of the 2.5% initial error rate could have been found. If this figure is representative, then a residual error rate on the order of 0.4% is possible, with a further possibility that this could be reduced to 0.1%. For the on-line NPSDS program containing an estimated 1-2000 lines of instruction, there are then 1-4 errors remaining. This sounds excessive and it is hard to imagine that one cannot do better with the advantages just mentioned. The software verification issue then is somewhat unresolved in that the methods that appear to be at hand to deal with it may not be entirely satisfactory. It should be tried to show what really can be done.
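The residual-error figures quoted above can be checked with the underlying arithmetic; taking roughly a thousand lines as representative of the program size mentioned is our assumption for the sketch:

```python
# Arithmetic behind the residual-error estimate quoted in the text
# (rates are the figures quoted from the NCSS experience).
initial_rate = 0.025          # 2.5 errors per 100 lines, pre-V&V
fraction_found = 0.85         # share of errors the test cycle catches
residual_rate = initial_rate * (1.0 - fraction_found)  # 0.00375 ~ 0.4%

best_case_rate = 0.001        # the further reduction to 0.1%
lines = 1000                  # assumed representative program size
span = (best_case_rate * lines, residual_rate * lines)
# span -> roughly 1 to 4 residual errors, matching the text's estimate
```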

4. CONCLUSIONS

4.1 Overall Concept

A concept has been described which appears to satisfy an operator need for better and more concise information about plant status and safety. While much work remains to be done to reduce this approach to practice, the main elements of the approach rest on well-founded principles and experience. The next major step therefore involves synthesis of these suggested methods and technologies in a system which can be evaluated at the laboratory level and tested in a live plant environment.

While some technical issues, such as software qualification, are to be settled, the main uncertainty to be resolved concerns the man/machine interface. This design must include not only human factors considerations but, most importantly, the informational input from machine to man. Resolution of these issues requires a realistic simulation and plant environment. Reduction of the concept to an operational configuration must also draw upon the experience previously gained by those who have used this advanced technology.

5. REFERENCES

1. P.J. Nicholson, J.B. DeWolf, et al., Conceptual Design Studies of Control and Instrumentation Systems for Ignition Experiments, Charles Stark Draper Laboratory Report R-1139, March 1978.

2. U.S. Nuclear Regulatory Commission, TMI Lessons Learned, Final Report, NUREG-0585.

3. E. Zabroski, "TMI Lessons Learned, The Nuclear Industry Perspective", IEEE/NRC Working Conference on Advanced Electrotechnology Applications to Nuclear Power Plants, Shoreham Americana Hotel, Washington, D.C., January 1980.

4. Lester Fink, "Evolution of a Successful Modeling Program", Proc. of the Seminar on Boiler Modeling, MITRE Corporation, November 6-7, 1974 (D. Berkowitz, ed.).

5. K.V. Moore, et al., RETRAN: A Program for One-Dimensional Transient Thermal-Hydraulic Analysis of Complex Fluid Flow Systems, EPRI CCM-5, Volumes 1-4, December 1978.

6. T.W. Kerlin, "Dynamic Testing in Nuclear Reactors for Model Verification", Proc. of the Seminar on Boiler Modeling, MITRE Corporation, November 6-7, 1974 (D. Berkowitz, ed.).

7. J. Louis Tyhee, "Low Order Model of the Loss-of-Fluid Test (LOFT) Reactor Plant for Use in Kalman Filter-Based Optimal Estimators", Fourth Power Plant Dynamics, Control and Testing Symposium, Gatlinburg, Tennessee, March 1980.

8. IEEE Selected Reprints, Microprocessors and Minicomputers, 2nd Edition, IEEE Catalog No. THP662-0.

9. E.F. Miller, "Survey of Software Verification and Validation Technology", IEEE/NRC Working Conference on Advanced Electrotechnology Applications to Nuclear Power Plants, Shoreham Americana Hotel, Washington, D.C., January 1980.

The Authors: Paul J. Nicholson is President of Nicholson Nuclear Science and Engineering (NNSE), a consulting firm that serves the nuclear industry. Address: 255 North Rd. (77), Chelmsford, Mass. 01824.

David D. Lanning is Professor of Nuclear Engineering at MIT, Cambridge, Mass.


A DISTRIBUTED ARCHITECTURE IN THE CONTROL OF THE

PWR 1300 MW NUCLEAR PLANTS OF ELECTRICITE DE FRANCE

G. GUESNIER, EDF

P. PEINTURIER, CGEE Alsthom

G. VARALDI, EDF

Presented by P. ANGLARET

CGEE ALSTHOM


PREAMBLE

Since 1974, EDF has developed the control and instrumentation technology in its nuclear power plants. Technological improvements in microelectronics led to the development by CGEE ALSTHOM of automation equipment, so-called CONTROBLOC, meeting the following objectives:

. introduction of automation at a high security and availability level,

. progressive implementation in design offices and on sites by operators not specialized in electronics or data processing,

. great flexibility permitting the configuration of various systems,

. survivability to first failure,

. capability of self diagnosis.

Characterized by a modular, programmed and multiplexed structure with distributed software, the CONTROBLOC equipment is under commissioning in the first 1300 MW nuclear plant.

The following sections give an introduction to the main characteristics of the equipment peculiar to 1300 MW power plants (this layout includes approximately 100 cabinets distributed over 7 rooms; these cabinets exchange data through multiplexed links), descriptions of working methods adopted by the design offices, problems met during development, and operating conditions in the first months.

I - MAIN CHARACTERISTICS OF CONTROBLOC EQUIPMENT (See Ref. I)

The prime objective among those selected by EDF and CGEE ALSTHOM during CONTROBLOC development was the production of equipment capable of supplying safe and highly reliable automation; this equipment being operative 24 hours out of 24.

The equipment was also required to have a modular structure permitting progressive implementation on site and decentralization of the equipment as well as a reduced failure capacity.


Finally, the equipment had to adapt easily to the functional developments of controlled processes and to be implemented by operators not specialized in electronics or data processing.

The basic structure of the CONTROBLOC is a cabinet permitting:

- Through up to 256 input/output commonalized modules, acquiring data concerning the position of reversing switches or controlling (relay) or indicating (lamp) units whose consumption is less than 3 W under 24 V d.c.

- Solving logic equations of user programmes stored in a maximum of 16 REPROM memories of 32 Koctets total capacity, in approximately 50 ms.

- Formulating 384 and 127 different internal variables and time delays, with a time delay range from 1/10th of a second to 42 minutes.

This basic structure can be completed by modules allowing:

- Transmission of a maximum of 500 data to a centralized computer with time recording upon data status switching.

- Collection or distribution of a maximum of 1000 data with other cabinets through 11 multiplexed links.

For safety and availability purposes, each CONTROBLOC cabinet can be equipped with a dual structure; the interface block including the 256 input/output modules is unique. In this case, the electronic block is fitted with two redundant structures having access to the interface block. An order shall be transmitted only if requested by the 2 redundant structures (operation in 2/2 mode). Should a failure occur in one of the 2 redundant structures, this structure shall automatically be switched off while the CONTROBLOC operation continues in 1/2 mode with the remaining structure.
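The 2/2 and 1/2 output modes described above can be sketched as a small voting function; treating the both-structures-failed case as fail-safe (no order issued) is our assumption, not stated in the text:

```python
def controbloc_output(order_a, order_b, healthy_a, healthy_b):
    """Sketch of the dual-structure output rule: with both redundant
    structures healthy, an order is issued only if both request it
    (2/2 mode); after one structure is switched off on a detected
    failure, operation continues on the survivor (1/2 mode)."""
    if healthy_a and healthy_b:
        return order_a and order_b          # 2/2 vote
    if healthy_a:
        return order_a                      # 1/2 mode on structure A
    if healthy_b:
        return order_b                      # 1/2 mode on structure B
    return False                            # assumed fail-safe: no order

assert controbloc_output(True, True, True, True) is True
assert controbloc_output(True, False, True, True) is False   # 2/2 blocks
assert controbloc_output(True, False, True, False) is True   # 1/2 mode
```

The 2/2 requirement protects against a spurious order from a single faulty structure, while the fallback to 1/2 mode preserves availability after that structure is switched off.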

Each redundant structure of the electronic block is equipped with a set of processors actuating the following functional units (See Fig. 1):

- Interface control unit (UC) managing the interface modules.


- Processing unit consisting of:

. Processing logic unit (ULT) implementing programmes stored in REPROM memories of the programme unit (UP).

. Management logic unit (ULG) controlling REPROM memory configuration, undertaking detection and identification of defects and implementing starting programmes of a cabinet.

- Internal functions control unit (UI) managing and implementing internal variables and time delays.

- Inter-cabinet exchange control unit (UE) undertaking uni- or bi-directional multiplexed exchanges with other cabinets or with demultiplexing equipment through the connection unit (UB). Each inter-cabinet exchange control unit can transmit or receive a maximum of 1000 data.

- Unit controlling exchanges with the centralized plant computer (TCI); this unit transmits status switches time-recorded within 50 ms to the centralized plant computer.

A supervising unit, common to the two structures, localizes defects and manages the various modes of operation in connection with the management logic units.

A CONTROBLOC cabinet is IE qualified.

II - CONTROBLOC LAYOUT IN 1300 MW NUCLEAR POWER PLANTS

A 1300 MW PWR nuclear plant is equipped with approximately 1000 remote-controlled actuators, 2000 logic sensors and position reports, 600 control devices, 3000 alarms, 600 signals and has to deal with almost 6000 data to be transmitted to the plant computer (TCI) (See Ref. 2).

To implement alarms automation and processing functions, it has been decided to retain the complete redundant structure of a CONTROBLOC cabinet and the layout presented in figure 2. This layout can be broken down into 3 assemblies:

a) Automation cabinets

- These cabinets receive logic data from the installation or the control room; they prepare orders for actuators, alarms data and plant computer.


Data and functionally dependent control devices are grouped in the same cabinet to reduce wire-by-wire or multiplexed connections between cabinets.

There are 101 cabinets, 72 of which are located in the electrical building of the nuclear plant ; 51 in the channel A electrical room process the automations concerning train A safety systems as well as systems installed in the turbine-generator building and in the reactor building ; 21 in the train B electrical room process the automations concerning train B safety systems.

13 cabinets are located in the nuclear auxiliaries building and process the automations of the nuclear waste systems.

3 cabinets are located in the pumping station and process the automations of the non-safety auxiliary systems set up in this station.

2 cabinets are located in each diesel building and process the local automations of the diesels. Finally, 7 cabinets are installed in the demineralized water production station on site and 2 cabinets are in the auxiliary boilers building on site.

All systems managed by these CONTROBLOC cabinets are controlled and supervised from a main centralized control room, except those located in the nuclear auxiliaries building and in the demineralized water production station ; the latter are controlled locally and supervised from the main control room.

The general rule retained for all automation cabinets is that every control, order or data item from the plant is sent to these cabinets through wire by wire connections. Multiplexed links between cabinets are used to convey data acquired or prepared in a particular cabinet and used in one or several other cabinets. As far as alarms and signals are concerned, alarms requiring immediate action from the operator are indicated by lights connected wire by wire to the cabinet preparing these alarms ; the remaining alarms are transmitted by multiplexed links to the alarm centralizing cabinet (see Chap. II.C) and presented on a display CRT. Indicating messages are transmitted through multiplexed links to demultiplexers installed in the main control room and monitoring LEDs.

The safety functions of the Reactor Protection System are not realised by CONTROBLOC but by a specific electronic system, the SPIN (Digital Integrated Protection System) [Ref. III].


b) Common data cabinet

- A number of data concerning either status conditions of the nuclear plant or controls (shedding, acknowledgment of alarm horn or alarm lights) shall be supplied to a large number of automation cabinets. These data are acquired or prepared by 3 cabinets designated common data cabinets and transmitted through multiplexed links to all automation cabinets, each of which receives the same data 3 times. The cabinets using the data process them in 2/3 mode. Triple transmission of common data has been retained so that the loss of a cabinet or of a multiplexed link can be tolerated.
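The 2/3 processing described above amounts to a majority vote over the three received copies of each logic datum. The following is an illustrative sketch of such a vote, not the actual CONTROBLOC logic:

```python
def vote_2oo3(a: bool, b: bool, c: bool) -> bool:
    """Return the majority value of three redundant copies.

    Any single wrong or missing copy (e.g. one lost multiplexed
    link or common data cabinet) is outvoted by the two others.
    """
    return (a and b) or (a and c) or (b and c)

# Copy 'c' corrupted by a failed link: the majority still wins.
assert vote_2oo3(True, True, False) is True
assert vote_2oo3(False, False, True) is False
```

A cabinet evaluating its three copies this way keeps working through the loss of any one of its three data sources, which is the stated purpose of the triple transmission.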

c) Alarm management units

- As previously mentioned (see Chapter II.a), the alarms requesting immediate operator action are indicated by lights connected wire by wire to the automation cabinets, while the others are displayed on CRT displays.

- 7 polychrome CRTs have been installed in the main control room ; 6 display all the alarms of the plant. An alarm is displayed on the CRT located in the control board area where the controls of the system to which this alarm relates are installed. The 7th CRT, used only as an alternative, displays the alarms of the train B safety systems.

- To switch an alarm transmitted by a given automation cabinet to a given CRT, a set of two CONTROBLOC cabinets is available : one train A cabinet including 2 electronic blocks and one train B cabinet with 1 electronic block, with a multiplexed link handling capacity double that of the automation cabinets (2000 data per electronic block instead of 1000).

- Each electronic block gets information over 7 links from the approximately 35 cabinets to which it is linked ; the block forwards the status switches it receives either to one of the CRTs with which it is linked or to the electronic block managing the CRT on which the status switches shall be displayed.

Each CRT has a capacity of 500 alarms ; 23 warnings can be displayed simultaneously, and 24 more can be stored and displayed at the operator's request. (Whenever more than 47 alarms are raised simultaneously, the first alarms are lost.)
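The display and overflow behaviour just described can be pictured as a bounded buffer of pending alarms, the first 23 shown on the screen and the next 24 held for recall, with the oldest entries discarded past 47. This sketch is a reconstruction for illustration; the buffer names and drop policy details are assumptions:

```python
from collections import deque

DISPLAY_SLOTS = 23   # warnings shown simultaneously on the CRT
STORE_SLOTS = 24     # warnings stored for display at the operator's request
CAPACITY = DISPLAY_SLOTS + STORE_SLOTS   # 47 pending alarms in all

pending = deque(maxlen=CAPACITY)   # a full deque drops its oldest entry

def raise_alarm(alarm):
    # Beyond 47 pending alarms the deque silently discards the oldest,
    # mirroring "the first alarms are lost".
    pending.append(alarm)

def displayed():
    return list(pending)[:DISPLAY_SLOTS]

def stored():
    return list(pending)[DISPLAY_SLOTS:]

for i in range(50):
    raise_alarm(f"alarm-{i}")
# alarm-0 .. alarm-2 have been lost; alarm-3 is the oldest still shown
```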


III - ORGANIZATION OF DESIGN OFFICES

A nuclear power plant is split into functional assemblies of various importance, designated elementary systems, to which are associated functionally linked actuators and, therefore, a number of control devices and sensors. Operating studies are undertaken elementary system by elementary system ; the operation of these systems is described in logic diagrams. The selection of the CONTROBLOC cabinets ensuring the logic automation of such an elementary system is made while taking a number of parameters into account :

- Volume of the elementary system in wire by wire inputs/outputs on the CONTROBLOC cabinet (the number of wire by wire inputs/outputs is decisive considering the large logic processing capacities of the REPROM memories and multiplexed links).

- Functional relations between elementary systems (the intent is to group, in the same cabinet, all systems with a high functional link to reduce inter-cabinet connections).

- Implementation period on site (to avoid delivering a large number of CONTROBLOC cabinets at once, and to achieve early system implementation, the intent is to use a group of sub-systems, each consisting of a limited number of cabinets, altogether forming a master system). More than 2 years elapse between the implementation of the first and last systems.

- Distribution of designs between several design offices. (The EDF PWR plants are studied by several EDF design offices, each assigned a part of the nuclear plant ; the nuclear boiler manufacturer shall also design its own control-instrumentation system.) The distribution of elementary systems in cabinets is such that most systems designed by a particular office are processed in CONTROBLOC cabinets distinct from those studied in other offices.

From the logic diagrams functionally describing the automation of an elementary system, the design office sets up a logic operation drawing showing in detail the automation to be implemented in the CONTROBLOC cabinet, identifying inputs and outputs, logic statuses, internal variables and time delays, and describing logic sequences.


This logic drawing is translated into logic equations and, using these equations, a programming console enables the REPROM memories installed in the cabinets to be loaded. Data processing equipment presently being developed should soon permit determining the REPROM memory contents directly from the logic drawings.
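The translation step above can be pictured as turning each output of the logic drawing into a boolean equation over inputs, internal variables and time-delay states, evaluated on every processing cycle. The following sketch uses invented example equations, not ones taken from the paper:

```python
# Hypothetical logic equations for one elementary system, each written
# as a function of the currently acquired status vector.
EQUATIONS = {
    # start-pump order: demand present, no fault, level switch high
    "order_pump_start": lambda s: s["demand"] and not s["fault"] and s["level_high"],
    # low-level alarm: level switch low while the pump reports running
    "alarm_low_level": lambda s: s["level_low"] and s["pump_running"],
}

def scan(status: dict) -> dict:
    """One processing cycle: evaluate every loaded equation against
    the acquired statuses, as the logic unit does each cycle from
    its stored programmes."""
    return {name: eq(status) for name, eq in EQUATIONS.items()}

outputs = scan({"demand": True, "fault": False, "level_high": True,
                "level_low": False, "pump_running": False})
```

Keeping one elementary system's equations together, as the paper says is done per REPROM memory, means a whole system can be revalidated or replaced as a unit.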

The design office provides two types of management

- Management of the input/output data of each CONTROBLOC cabinet : wire by wire inputs/outputs to actuators or alarm lights, data to the centralizing computer, alarm indications to CRTs, multiplexed links between cabinets :

. For the time being, this management is partly provided by data processing equipment and, in large part, manually.

- Management of REPROM memories loaded with logic equations :

. A CONTROBLOC cabinet contains several REPROM memories ; to facilitate management, and considering memory capacities, only the logic equations calculated in one elementary system shall be loaded in a given REPROM memory.

The design office thus manages the REPROM memories of a cabinet with respect to the degree of loading of this cabinet, the degree of validity of the programmes loaded in the REPROM memory (programmes validated or not by tests, REPROM memory forwarded to site, operational elementary system ...) and with respect to the various modifications likely to be introduced at the different stages of implementation.

This management is ensured with data processing equipment.

To limit the number of modifications on site, the design offices have adopted simulation equipment consisting of CONTROBLOC cabinets and permitting step by step checking of the logic equations loaded in the REPROM memories.

IV - FIRST EVALUATION OF DEVELOPMENT

The CONTROBLOC technical prescriptions were defined in 1975. A first model proving the suitability of the logical operator was manufactured in 1976, and in 1978 a prototype cabinet was installed in a thermal power plant. 1978 was mainly dedicated to solving industrialization problems (suppliers, size of electronic cards, structure of the interface block and of the electronic cards, connections) and the prototype was largely reworked.


A second prototype was installed in the thermal power plant at the beginning of 1979 and has since operated satisfactorily. The final industrial equipment was drawn up in mid-1979 and the first cabinets were delivered on the nuclear site in September 1979.

Software studies applied to the starting, time delay and internal variables software, to the multiplexed links and defect localization software, and to the programming consoles were undertaken in parallel with the hardware studies.

Development would have been accelerated if EDF had established more definite technical prescriptions and if the manufacturer had had more sophisticated tools to aid the design.

V - EXPERIENCE ACCUMULATED IN THE FIRST MONTHS OF OPERATION

Approximately ten elementary systems are in operation on the site today ; 20 cabinets are operational (including the common data cabinets and an alarm management unit).

Systematic tests having been performed by the manufacturer, the possibility of experimenting with the equipment for more than 1 year in the design office and the early delivery of several cabinets on the site enabled a number of problems to be identified in various fields (operation within limit conditions, software deficiencies, mechanical behaviour in an industrial environment) ; these were solved progressively. The delivery and commissioning of an increasing number of cabinets on site, when necessary adaptations of the software remain and minor modifications of the hardware cannot be excluded, are extremely imposing factors.

This situation enables the equipment to be implemented in effective operating conditions, with every restraint inherent to installation on site (connections, tests, supplies ...), and points out the problems it is preferable to solve before the plant equipment is too far advanced.

VI - CONCLUSIONS

In the control structure described before, the automation and data processing equipment have been decentralized and located closer to the equipment to be controlled. Due to their programmed structure, the use of CONTROBLOC cabinets enables the volume of the links between the control room and the automation cabinets, or between the automation cabinets themselves, to be reduced through a large application of multiplexed links. In the retained structure, however, multiplexed links were not used to send data from sensors to automation cabinets or to transmit orders from the control room, but mainly to transmit alarms and indicating messages.

Studies have been undertaken in which a larger scope is given to multiplexed links, particularly in control room to automation cabinet links. It will be necessary to wait for the implementation of the first 1300 MW plants to further these studies.



REFERENCES

I - IAEA International Symposium on Nuclear Power Plant Control and Instrumentation, Cannes (France), 24-28 April 1978.

"Electronic system of power station control with modules and distributed software based on CONTROBLOC elements". P. PEINTURIER, D.A. MAYRARGUE, G. GUESNIER, G. VARALDI.

II - IAEA Working Group on Nuclear Power Plant Control and Instrumentation. Specialists' meeting on procedures and systems for assisting an operator during normal situation, Munich, 5-7 December 1979.

"Data processing and data display in Electricité de France PWR 1300 MW nuclear power plant". G. GUESNIER, C. HERMANT.

III - IAEA Working Group on Nuclear Power Plant Control and Instrumentation. Specialists' meeting on distributed systems for nuclear power plants, Chalk River, 14-16 May 1980. "On distributed architecture for protection systems". P. JOVER.

[Figure 1: CONTROBLOC cabinet structure. The diagram shows four process interfaces of 64 inputs/outputs each (8 main boards, each carrying 8 elementary input or output modules) ; the coupling units (UC) and exchange control units (UR) towards the plant computer ; the internal function control unit (UI : 384 internal data, 127 programmed time delays) ; the programme unit (UP : 32 Kbyte REPROM) ; the logic operator unit (ULT) ; the configuration survey unit (ULG) ; the information exchange units (UE) serving 11 multiplexed links (1 of 256 data output, 3 of 96 data input, 7 of 48 data output/input) ; and the supervising unit (US) with fault display, fault localization and a maintenance panel common to many cabinets.]

[Figure 2: CONTROBLOC cabinet arrangement in a PWR 1300 MW plant. The diagram shows the main control room with its 7 alarm CRTs ; for each safety train, the common data cabinets (3), the alarm cabinet and the automation cabinets with their multiplexed links towards the CRTs, the plant computer and the reactor protection system ; and the cabinets installed in the diesel buildings, the pumping station, the auxiliary boilers building, the demineralized water building and the nuclear auxiliary building.]


THE DESIGN OF A REAL-TIME SOFTWARE SYSTEM FOR

THE DISTRIBUTED CONTROL OF POWER STATION PLANT

- by -

G.C. Maples

Summary

Computers are being increasingly used for the control of generating plant, and the advent of low cost mini/micro computers having an installed price comparable with conventional equipment has resulted in a significant trend towards distributed computing. As this application of computers widens, the problems of resourcing several individual projects over their life cycle can become formidable.

In particular, the provision of reliable and effective software is a crucial task, since it has a considerable impact on resourcing due to the high costs associated with its design and development and, equally important, its subsequent amendment and support. This paper indicates the factors which are considered relevant to containing the resource requirements associated with software, and outlines the benefits of adopting a standard machine-independent software system which enables engineers rather than computer specialists to develop programs for specific projects.

The design objectives which have led to the current development within the C.E.G.B. of CUTLASS (Computer Users Technical Languages and Applications Software System) are then considered. CUTLASS is intended to be a standard software system applicable to the majority of future on-line computing projects in the area of generation, and is appropriate to stand-alone schemes or distributed schemes having a host/target configuration. The CUTLASS system software provides the necessary environment in which to develop, test, and run the applications software, the latter being created by the user by means of a set of engineer-orientated languages.

The paper describes the various facilities within CUTLASS, i.e. those considered essential to meet the requirements of future process control applications. In particular, the paper concentrates on the system software relating to the executive functions, and the organisation of global data and communications within distributed systems. The salient features of the engineer-orientated language sets are also discussed.


1. INTRODUCTION

The complexity of modern generating plant, both conventional and nuclear, together with the increasingly severe operational requirements relating to safety and efficiency, necessitates continuing improvements in the control and instrumentation (C & I) systems. To this end the C.E.G.B., in common with other utilities, is making an ever greater use of digital computers for C & I applications.

Further, the trends in computer technology offer the possibility of utilizing computers in a more widespread and effective manner. In particular, the advent of low cost micro/mini computers having an installed price comparable with conventional equipment enables the hierarchic structure associated with many C & I schemes to be implemented as a distributed network of computer-based control centres, with each centre carrying out a specific task within the overall scheme. This approach enables the scheme to be partitioned such that each control centre has a high degree of autonomy, which implies that the performance of one centre has only a limited effect on the remainder. It follows that:-

(i) the commissioning and/or modification of a centre can be carried out independently of the rest of the system;

(ii) the scheme is readily extended by incorporating additional centres;

(iii) the need for standby facilities is often eliminated since in many cases failure of an individual centre, and the subsequent requirement for manual intervention, does not impose an intolerable burden on the operator;

(iv) where availability is of paramount importance the redundancy of the network can be utilized in that the duty of a failed centre can be taken over by a second centre;

(v) the scheme can be segregated into easily understood sub-systems.

However, past experience has shown that the potential benefits of such computer-based systems are not always realized and that the resources required to support major projects over their life cycle are greater than anticipated. This has been due in part to a lack of discipline in the specification and design of systems, aggravated by the ambiguities which inevitably arise between engineers and specialist computer staff. Also, past installations have used a wide variety of software languages and equipment types, and this represents a poor utilization of scarce specialist computing and system design expertise since effort expended on one project is not in general transferable to another.

In the above context the method adopted for providing adequate software for future projects is seen as a key factor in ensuring that the potential advantages of computer-based schemes are fully realized and that the associated resource requirements are contained. This is particularly important in the case of microprocessor-based systems where the manufacturer provides only


limited software support. This paper describes the approach being adopted by the C.E.G.B. to help mitigate the problems of software provision for future C & I schemes associated with new plant and, equally significant, the retrospective equipping of existing plant.

2. DESIGN CRITERIA

Clearly the method adopted for the provision of software has a significant impact on the resourcing of on-line computing projects. This is due to the high cost, particularly in terms of scarce expertise, of specifying, developing and engineering software, as well as the equally important influence of software design on the effort required for subsequent maintenance and amendment. There is thus an incentive to adopt standard software suitable for a range of applications, since development effort can be utilized across a number of projects and the long-term support provided on a common basis.

To this end the C.E.G.B. is currently developing CUTLASS (Computer Users Technical Languages and Applications Software System), which is intended to provide a standard software environment and user-orientated languages for process control applications, initially on generating plant, and with possible application to transmission plant in the longer term. CUTLASS is appropriate to both stand-alone and distributed computer systems, the latter being assumed to have a host/target configuration.

The common design approach throughout the CUTLASS development may be summarized as:-

(i) to achieve a shift of resources for project applications from the scarce computer specialists to available engineering staff. It is intended that computing staff will be responsible for the design and support of the CUTLASS system but that engineering staff will utilize the user-orientated language sets in CUTLASS to create the applications software for individual projects;

(ii) to achieve as great a degree of machine independence as practicable by the adoption and support of high level languages over a selected range of machine architectures; specifically a single dialect of CORAL 66 for the system software and the CUTLASS user-orientated languages for applications software;

(iii) to optimize the effectiveness of the available computing specialists. Together with high-level languages, the important distinction between 'host' (development) and 'target' (final implementation) hardware configurations has evolved, with emphasis being placed on maximizing the development done in the efficient environment of the 'host' machine;

(iv) to achieve sufficient flexibility to permit easy modification or extension of systems so as to meet changing operational requirements or technological advances;


(v) to ensure that the effect of software failure is minimized not only in respect of plant safety and availability, but also in terms of the maintenance burden imposed on operating staff.

The above criteria are not always considered in sufficient detail at the outset of software design even though they have a significant effect on life-cycle costs. Further, the potential benefits of adopting a standard software system such as CUTLASS will largely be lost if there is no agreed definition of the user requirements for major areas of application. Failure to agree such definitions leads to ambiguities of interpretation which can result in costly software modifications. A process of consultation is therefore being carried out within the C.E.G.B. to establish the user requirements in respect of the control of generating plant, and to coordinate these where subjective differences arise.

3. CUTLASS DESIGN CONCEPTS

CUTLASS consists of a framework of system software together with a number of user-orientated language sets. The former provides the necessary environment in which to develop, test and run the applications software created by the user by means of the language sets. The system software includes translator/editor facilities, an executive for allocating resources to tasks within a machine, diagnostic programs, and software for organizing database and communications functions in both stand-alone and distributed systems. The translator/editor will generally be run on a large host machine during development, although target applications systems can have a host machine as part of their configuration.

In developing CUTLASS the following concepts have been adopted so as to meet the design criteria outlined above:-

3.1 Ease of Use

As well as a general purpose user-orientated language having, for example, arithmetic and boolean statements, it is intended to provide language sets appropriate to differing areas of control, such as D.D.C., sequence control, display etc.

Each language statement relates to a sub-routine and its associated parameters, each sub-routine being designed to carry out a specific function commonly encountered in a particular area of control as defined by the user requirements. Thus the sub-routine functions are synonymous with familiar engineering concepts.

The applications software for implementing a given task can thus be readily specified by the user as a set of language statements which effectively define an assembly of sub-routines, each having a known function, and their associated interconnections. At the same time every effort has been made to provide a range of interactive diagnostic and commissioning aids to facilitate the testing and development of the applications software.


3.2 Machine Independence

To help achieve machine independence the CUTLASS system software and applications sub-routines are written in CORAL 66. However, this does not completely solve the problems of portability. Although a substantially standard syntax is available, the implementation of that syntax on a specific machine architecture or by a particular compiler introduces a number of machine-dependent features. To help reduce these problems a specific dialect of CORAL 66 has been adopted and an effort made to separate and minimize the machine-dependent aspects of the CUTLASS system.

Although CORAL 66 is less efficient in the use of both memory and processor power than assembler-based systems, the relevance of this is rapidly diminishing due to decreasing memory costs and the trend towards distributed computing, which renders the loss of efficiency less significant.

3.3 Flexibility

The requirement for flexibility has been met by adopting a structured approach to the software design. Each software function (e.g. communications, data handling, etc.) is designed on a modular basis with well defined interfaces. These modules can then be configured within the standard software framework to provide the relevant facilities for any given C & I scheme.

This modular approach provides for a natural segregation of the software into functional blocks related to the different activities within a typical on-line computer scheme. Further, this approach has enabled the CUTLASS system to be developed as a collaborative project within the C.E.G.B., individual modules being produced by experts in different departments.

3.4 Reliability and Security

The attainment of reliability in software is far more dependent on rigorous specification and good structural design than it is on the elimination of simple errors in the program code. For example, a modular structure assists in containing and diagnosing errors, while user-orientated languages help ensure that the applications software more accurately reflects the user requirements.

However, in addition to the above factors, specific facilities have been incorporated in CUTLASS in the interests of reliability and security, namely:-

(i) an ordered method whereby data can be accessed by the various schemes in a given computer, or over the network as a whole. The data handling is largely independent of the hardware or network configuration and provides protection against unauthorized access;

(ii) a concept of privileged and non-privileged users, defined in terms of a security code, whose ability to alter software, particularly system software, is restricted on the basis of their security rating;

(iii) formal procedures for diagnosing and containing errors in both the executive and the run-time software associated with the applications sub-routines;


(iv) application sub-routines designed to take account of plant safety in that, for example, an output drive routine would have inbuilt rate protection. Where appropriate, the sub-routines are also designed to simplify the control system design effort by incorporating the necessary features to handle the complexities of initialization, bumpless auto/manual transfer etc.

While it is recognized that there is no way of preventing the user from making errors in control strategy, the facilities outlined above should significantly reduce the consequent effect on the plant.

4. CUTLASS APPLICATIONS PROGRAMS

4.1 General Concepts

The user-orientated languages consist of a general purpose language sub-set and special purpose language sub-sets, it being intended that the latter should be provided for:-

regulating control (DDC)

sequence control

data logging and display

alarm analysis

condition monitoring

performance calculations

CUTLASS utilizes a translator/editor to convert the user-orientated language statements into object code which consists essentially of addresses indicating entry points of the relevant applications sub-routines and their associated data. Run time execution begins by entering the first sub-routine specified. Having accessed the appropriate data and carried out its particular function, the sub-routine causes an entry to be made to the next sub-routine, and so on. In this way object code execution is carried out efficiently and without the need for a separate run time interpreter.

This method was chosen rather than a true compiler because it enables a number of desirable features to be achieved more easily. For example, the object code is machine-independent, it can be directly edited or translated back into its original source form, and it permits testing and monitoring procedures to be easily incorporated. Although a scheme generated by a compiler is generally faster in execution, the speed penalty involved is not very significant since the linking overhead is normally small compared with the execution times of the applications sub-routines.
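
The threaded execution scheme described above can be sketched as follows. This is only an illustration: the routine names and the data layout are invented here, and the real CUTLASS sub-routine set and object code format are not given in the paper.

```python
# Sketch of address-threaded object code: each entry points at an
# applications sub-routine and its data block; execution threads
# from one entry to the next with no run-time interpreter between.
# All names are illustrative, not the real CUTLASS routines.

def filter_input(data):
    data["filtered"] = 0.9 * data["filtered"] + 0.1 * data["raw"]

def pid_step(data):
    data["error"] = data["setpoint"] - data["filtered"]
    data["output"] = data["gain"] * data["error"]

def drive_output(data):
    print(f"drive output to {data['output']:.2f}")

# "Object code": entry points plus associated data, as would be
# produced by the translator/editor.
data = {"raw": 5.0, "filtered": 0.0, "setpoint": 4.0, "gain": 2.0,
        "error": 0.0, "output": 0.0}
object_code = [filter_input, pid_step, drive_output]

for subroutine in object_code:   # run-time execution threads the entries
    subroutine(data)
```

Because the "object code" is just a list of entry references and data, it can be edited directly or mapped back to its source form, as the text notes.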

The translator/editor combines the functions of source editing, syntactic and semantic checking, and the generation of object code.

The program source is stored as a text file on backing store; when entered, the translator/editor reads the source and creates a temporary copy of the source file. Editing proceeds by merging edit commands from the user with the temporary source file to create an alternative copy of the temporary file.

This serialized editing technique allows the program to be fully checked as the lines are read in, appropriate messages being provided to indicate various syntactic and semantic errors. When editing is complete the user can:-

(i) generate a new copy of the source file from the latest temporary file;

(ii) output a listing of the object code;

(iii) load the object code over the communications network into a target machine.

The structured nature of CUTLASS requires that the user partitions the applications software into schemes and tasks. A scheme, which may consist of one or more tasks, forms a single indivisible unit for the purpose of translating and editing. Tasks within a scheme can communicate via common data declared at the head of the scheme. This data is inaccessible from any other scheme.

A task is the smallest program element, written as a set of language instructions, which can be run by the executive in its own right. Tasks can contain their own local data, declared at the head of the task and inaccessible to any other task. The scheme and task boundaries are assigned by the user, and will normally relate to some natural segregation of the required control strategy.

Tasks can communicate with tasks in other schemes via global data which is created by a separate utility. There is no restriction on whether the communicating schemes are in the same or different computers. CUTLASS users are required to log in to the system in order to use certain utilities, in particular the translator/editor and the utility for creating and deleting global data. The user name is checked against a list of known users held in the target machine and is converted to a user number which is stored with any scheme the user creates.

Selected users can be given a privileged classification which allows them to use various software options needing extra care, but does not allow them to break ownership rules. One user can be nominated as the 'system manager' and as such has the ability to break ownership rules, such as altering the owner number attached to schemes, files, etc., and to change the allocation of privilege to particular users.

4.2 Error Handling

CUTLASS includes error handling arrangements designed to contain various hardware and software failures. For example, error tracking is provided whereby recognizable errors, such as invalid input data or a communications failure in a distributed system, cause the associated numeric or logical data to be flagged 'bad'. Any subsequent CUTLASS instruction which utilizes this 'bad' data will in turn flag its output 'bad', so that 'bad' data propagates in an ordered manner across task and scheme boundaries.

The user can test the state of data items at any stage and hence invoke corrective action. Alternatively, if no corrective action is taken, CUTLASS output instructions receiving 'bad' data automatically respond in a safe way. Thus a valve drive instruction in the DDC language sub-set will freeze the valve in its current position and apply any necessary inhibits to the control algorithms.
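
The ordered propagation of 'bad' data, and the safe response of an output instruction, can be sketched as follows. This is a minimal illustration of the mechanism, not the CUTLASS types or instruction set.

```python
# Sketch of ordered 'bad' data propagation: any operation on a
# 'bad' operand flags its result 'bad', and an output instruction
# receiving 'bad' data responds safely by freezing its drive.

class Val:
    def __init__(self, value, bad=False):
        self.value, self.bad = value, bad

    def __add__(self, other):
        # the 'bad' flag propagates through every operation
        return Val(self.value + other.value, self.bad or other.bad)

def drive_valve(position, demand):
    # safe response: freeze the valve at its current position on 'bad'
    return position if demand.bad else demand.value

good = Val(10.0) + Val(2.0)
bad = Val(10.0) + Val(0.0, bad=True)   # e.g. a failed plant input

current = 37.0                          # current valve position
print(drive_valve(current, good))       # valve moves to the demand
print(drive_valve(current, bad))        # valve frozen at 37.0
```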

In cases where a task is affected more severely the user has the option to specify an error trap label to be entered for particular types of error. These include array bound errors, LOOP count 'bad', IF condition 'bad', etc. The action taken by CUTLASS on detection of these differing types of task error is determined by internal flag words set up when the system is generated. The options are:-

(i) errors trapped by the user as described above, the task being aborted if no trap entry is specified;

(ii) as (i), but the task continues if no trap entry is specified;

(iii) errors simply set a flag in an associated task status word which can subsequently be tested by the user.

5. TARGET MACHINE CONFIGURATION

Typically a target machine will contain the object code and applications sub-routines appropriate to its particular control functions, together with a set of system software. The latter will now be described in greater detail.

5.1 Executive

The executive (TOPSY-2) is essentially a modular kernel of procedures to create, manipulate and delete a set of resources in the computer. These resources are 'named', and can be programs, devices, semaphores or memory partitions. Whenever a resource is created a 'control block' is allocated from a pool of available memory, the control block space being returned to the pool when the resource is deleted.

TOPSY-2 is capable of running a number of independent programs. To do this, TOPSY-2 maintains information describing the current status of each of the programs in the machine. When appropriate, a scheduler algorithm selects a program to run and transfers control to that program. Various causes (e.g. a natural dropout or suspension of the program) subsequently relinquish control to TOPSY-2 in such a way that the program will only be resumed as the result of selected stimuli.

Peripheral devices need to be shared between contending programs and this is organized by the executive. Programs requiring to perform I/O transfers must do so through requests to TOPSY-2, which will forward the request to the appropriate device driver. TOPSY-2 also provides for a program to temporarily acquire the exclusive use of a device for an uninterrupted series of transfers, such as the printing of a list of data that is not interspersed with random messages from other programs.

Where two or more programs require access to certain shared resources, but where concurrent access could lead to destructive interference (e.g. shared sub-routines), TOPSY-2 provides 'semaphore' facilities to ensure that only one such program at a time can access the shared resources. Other contending programs are forced to wait until the current program has finished using the resource.
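
The semaphore behaviour described here is the classic mutual-exclusion pattern and can be sketched as follows, with Python's threading primitives standing in for the TOPSY-2 facilities, which are not specified in detail in the paper.

```python
# Sketch of semaphore-based mutual exclusion: only one "program" at a
# time may hold the shared resource; contenders wait their turn.
import threading

resource = threading.Semaphore(1)   # one program at a time
log = []

def program(name):
    with resource:                  # wait until the resource is free
        log.append(f"{name} start")
        log.append(f"{name} end")   # uninterrupted use of the resource

threads = [threading.Thread(target=program, args=(n,)) for n in ("A", "B")]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(log)   # each program's start/end pair is never interleaved
```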

To facilitate memory management TOPSY-2 has the capability of maintaining a record of used and unused memory space within the computer, and of allocating/de-allocating memory partitions as required. TOPSY-2 also has the ability to maintain a hardware driven clock facility, and to perform scheduling actions on the basis of elapsed time intervals. This is important in a real-time environment where programs/devices are to be run at regular intervals or pause for specified periods.

The programs which share the computer cannot be assumed to work perfectly, so TOPSY-2 contains provision to handle errors which may be detected at run-time. Error 'containment' prevents a program from running wild after a situation occurs which is inconsistent with correct program behaviour, and tidies up any resources used by the program. Error 'reporting' is a mechanism that allows the error to be notified to a supervisory program or a human operator. TOPSY-2 provides facilities whereby either TOPSY-2 or a program may initiate the containment and/or reporting process when an error is detected.

Since the executive is modular it can be configured down to a set of procedures appropriate to a given application. It is also being extended to accommodate machines having memory-mapped architectures.

5.2 Communications Software

The communications software organizes the message transfer between computers in a distributed network. It assumes the use of standard asynchronous serial link hardware and will accommodate a variety of store and forward network topologies (e.g. radial, ring, or fully connected). While this type of network imposes additional transmission delays, since intermediate nodes are involved, it can be made tolerant of failure since messages can be re-routed along alternative links, and the practicable data rates are suitable for many control applications.

The software to implement this type of network is structured into three levels, these being:-

(i) The link level. This level is concerned with the exchange of 'link frames' (encoded forms of the actual message) via the serial interface hardware. This level is implemented as one or more drivers attached to the TOPSY-2 executive and utilizes GENNET, a C.E.G.B. software based protocol. GENNET is designed to provide adequate error detection and correction without incurring excessive software overheads, and to allow for full duplex transmission and data transparency. Messages are encoded as a series of 8 bit bytes, the message frame consisting of two header bytes and four tail bytes. The latter include two check bytes; the first represents a horizontal parity check on the message data while the second gives the length of the message data. Each successive message is transmitted with an alternating toggle (contained in the second header byte) so that if the same header is received more than once subsequent messages are accepted but the data discarded. This allows for loss or failure of an acknowledgement which would otherwise cause messages to be duplicated. To minimize the delay in responding to a message during full duplex transmission the acknowledge sequence can be 'stuffed' into a reverse message.
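
The framing just described can be sketched as follows. Only the byte counts and the roles of the two check bytes come from the text; the field order, the start-of-header value and the contents of the other two tail bytes are assumptions made for illustration, not the real GENNET layout.

```python
# Illustrative framing in the spirit of GENNET: two header bytes (the
# second carrying the alternating toggle), the message data, and a
# four-byte tail that includes a horizontal (longitudinal) parity byte
# and a length byte. Field positions are assumed, not documented.

SOH = 0x01   # assumed start-of-header value

def encode(data, toggle):
    parity = 0
    for b in data:
        parity ^= b                      # horizontal parity over the data
    header = [SOH, toggle & 1]
    tail = [parity, len(data) & 0xFF, 0x00, 0x00]   # two tail bytes left spare
    return bytes(header + list(data) + tail)

def check(frame):
    data = frame[2:-4]
    parity, length = frame[-4], frame[-3]
    p = 0
    for b in data:
        p ^= b
    return length == (len(data) & 0xFF) and p == parity

frame = encode(b"\x10\x20\x33", toggle=1)
print(check(frame))                       # an undamaged frame passes
damaged = frame[:3] + bytes([frame[3] ^ 0xFF]) + frame[4:]
print(check(damaged))                     # the parity check catches the error
```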

(ii) The network level. This level is responsible for the store and forward operations on messages received from the links and is implemented as a task under TOPSY-2. To achieve this each computer contains a routing table and every message sent through the network has a destination number. When a message is received the routing table is consulted and the best link selected for forwarding the message. The network level software in any computer sets up its routing table dynamically by exchanging special messages with its nearest neighbours. This allows the communications to adapt automatically to link failures or changes in network topology and enables computers to be removed or added while the system is on-line.
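
A minimal sketch of this store and forward routing follows. The table-building rule (fewest hops wins) and all names are illustrative assumptions; the actual neighbour message exchange is not specified in the paper.

```python
# Sketch of network-level routing: each node builds a routing table
# (destination -> outgoing link) from reachability information
# advertised by its nearest neighbours, then forwards by table lookup.

# hop counts advertised over each neighbour link: {link: {destination: hops}}
advertised = {
    "link_to_B": {"B": 0, "C": 1, "D": 2},
    "link_to_C": {"C": 0, "B": 1, "D": 1},
}

def build_routing_table(advertised):
    best = {}
    for link, reach in advertised.items():
        for dest, hops in reach.items():
            # keep the link offering the fewest hops to each destination
            if dest not in best or hops + 1 < best[dest][1]:
                best[dest] = (link, hops + 1)
    return {dest: link for dest, (link, _) in best.items()}

def forward(message, table):
    # every message carries a destination number; consult the table
    return table[message["destination"]]

table = build_routing_table(advertised)
print(forward({"destination": "D", "body": b"data"}, table))
```

If a link fails, re-running the table build without that link's advertisement re-routes traffic along the remaining links, which is the adaptive behaviour the text describes.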

(iii) The applications interface level. All messages exchanged between the applications tasks pass through a standard applications interface implemented as a task-to-task driver under TOPSY-2. The interface provides for message transfer between tasks in the same computer or between tasks and the network level. It carries out validity checks on application task requests and separates incoming messages into channels. There is an error channel for mis-routed messages.

Since the communications software is modular the various levels could be replaced to meet changing requirements. For example, the link level could be replaced by a hardware implemented protocol such as HDLC.

5.3 Global Data Handling

The data manager provides a method for exchanging global data between schemes in the same or different computers and is essentially distributed in nature. Where a number of schemes in different computers require access to the same data item, there is a 'master' copy of the item in one computer, and 'slave' copies in each of the other computers involved. The 'slave' copies are updated automatically either at specified intervals or when the master copy changes. These methods are seen as preferable to an 'on demand' approach when dealing with real time situations.

This method of global data organization relies on the definition of machine-independent messages to interrogate and update the data. Each computer contains a directory of the names and types of all global data that it holds, which enables the necessary data connections for the remote update messages between computers to be established automatically. Once established, the update messages use direct addressing to reduce overheads. It follows that schemes can be designed as separate modules without explicit reference to the distributed nature of the system.

Provision has been made to minimize the effects of accidental changes, and global data items are therefore created by a carefully controlled process distinct from scheme creation. A scheme will not translate successfully if it refers to a global item which does not exist in the computer for which it is being translated. In general only one scheme may write to a given data item, effectively becoming its 'owner', though any number of schemes may read the item. This check is applied at scheme installation time, and this simplifies the maintenance of system integrity, in particular when inserting or removing schemes. A record is kept of the number of schemes accessing an item of data so that the item (and hence the scheme updating it) may only be deleted when no scheme remains that uses the data. In the event of loss of communications between computers, or failure of a scheme owning 'master' data, all the relevant 'slave' items in the other computers are automatically flagged 'bad'.
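
The master/slave arrangement, update-on-change behaviour and automatic 'bad' flagging can be sketched as follows. The class structure and names are assumptions for illustration; the real message formats are machine-independent and are not reproduced in the paper.

```python
# Sketch of master/slave global data: one computer holds the 'master'
# copy, slave copies elsewhere are refreshed when the master changes,
# and loss of the master flags every slave 'bad'.

class GlobalItem:
    def __init__(self, name):
        self.name, self.value, self.bad = name, None, True

class Master:
    def __init__(self, name):
        self.item = GlobalItem(name)
        self.slaves = []               # slave copies in other computers

    def write(self, value):
        # only the owning scheme writes; slaves update on change
        self.item.value, self.item.bad = value, False
        for slave in self.slaves:
            slave.value, slave.bad = value, False

    def fail(self):
        # loss of communications or of the owning scheme:
        # all relevant slave items are automatically flagged 'bad'
        for slave in self.slaves:
            slave.bad = True

master = Master("drum_level")
slave = GlobalItem("drum_level")       # copy held in another computer
master.slaves.append(slave)

master.write(4.2)
print(slave.value, slave.bad)          # slave tracks the master
master.fail()
print(slave.bad)                       # slave now flagged 'bad'
```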

6. CONCLUSIONS

The rapidly increasing use of distributed computer systems for on-line control projects emphasizes the need to contain the resource requirements associated with the development and long-term support of such projects. The adoption of standard portable software appropriate to a range of applications, and designed for use by engineers rather than computer specialists, is considered a significant factor in reducing these resource requirements and has led to the design of the CUTLASS software system described in this paper.

7. ACKNOWLEDGEMENTS

The author wishes to acknowledge the contribution of the members of the Central Electricity Generating Board concerned with the development of CUTLASS and the formulation of the software policies outlined in this paper. The paper is published by permission of the Central Electricity Generating Board.

DISTRIBUTED CONTROL AND DATA PROCESSING SYSTEM WITH

A CENTRALIZED DATABASE FOR A BWR POWER PLANT

K. Fujii *

T. Neda *

A. Kawamura **

K. Monta ***

K. Satoh ***

* Atomic Power Generation Control Systems Designing Section, Fuchu Works, Toshiba Corp.

** Control and Electrical Engineering Section, Nuclear Energy Group, Toshiba Corp.

*** Nuclear Engineering Dept., NAIG Nuclear Research Labo., NAIG Co., Ltd.

ABSTRACT

Recent digital techniques, based on remarkable advances in electronics and computer technologies, have made possible a very wide range of computer applications in BWR power plant control and instrumentation. Multifarious computers, from micro to mega, have been introduced separately, and to obtain better control and instrumentation system performance a hierarchical computer complex system architecture has been developed.

This paper addresses the hierarchical computer complex system architecture, which enables a more efficient introduction of computer systems into a nuclear power plant. The distributed control and processing systems which are the components of the hierarchical computer complex are described in some detail, and the database for the hierarchical computer complex is also discussed.

The hierarchical computer complex system has been developed and is now at the detailed design stage for actual power plant application.

INTRODUCTION

The number of nuclear power generating units and the capacity of generating units are steadily increasing; in consequence, the efficiency of the control, operation and safety of plants urgently needs to be improved. Under such circumstances, owing to the recent progress of process computer hardware and software techniques, the process computer systems introduced into power plants are becoming increasingly multifarious.

From the point of view of elevating the level and efficiency of plant operation, the tasks to be accomplished in plants by computers have been readjusted and systematized.

HIERARCHICAL COMPUTER COMPLEX (HCC) CONFIGURATION

The HCC is an overall computerized control and instrumentation concept based on a systematic analysis of the information concerning BWR power station operation and management. Information and its processing system have been classified into two categories from a functional viewpoint, and into three levels depending on scope. The total system has been designed on the basis that control and processing should be distributed, in order to minimize the chance of total system failure, while information should be centralized to allow manual intervention.

Category 1 is supervisory and control of the plant operation.

Category 2 is data processing and management.

The three levels of the HCC are:

(1) Level 1, for power station overall information management involving plural generating units. This level is called the site database.

(2) Level 2, for information processing, supervisory and control of the generating unit. This level is called the unit level.

(3) Level 3, for local control and data processing based on distributed system techniques. This level is called the local level.

The relationship between the above categories and levels is shown in Figure 1. Computer systems in a BWR based on the HCC concept are shown in Figure 2.

The major components of the HCC are database computers, data communication networks and specific computers at each level.

            Category 1                     Category 2

Level 1     Site information management
Level 2     Unit operation supervisory     Unit data processing and
            and control                    management
Level 3     Local control                  Local data processing

Figure 1  Relationship between categories and levels

[Figure 2: Computer System in BWR — block diagram of the HCC showing the utility center and site database computers (Level 1); the unit database, PODIA, plant diagnosis, plant data treatment, rad-waste, radioactive management and personnel dosimetry management computer systems (Level 2); and unit local control computers for PLR control, reactor pressure control, rad-waste control, CRD exchange automation, refuelling platform automation, ISI automation and shipping automation, together with process, area and dust radiation monitors, TLD and whole body counter (Level 3).]

Two major data highway loops are provided in the HCC, viz. the site loop and the unit loop. The unit data highway loop interconnects Level-2 computers and transmits data from the operating plant to the unit database computer. The site data highway loop interconnects unit database computers and transmits data from the unit database computers to the site database computer. The configuration of the HCC is shown in Figure 3.

[Figure 3: Configuration of Hierarchical Computer Complex — the site data highway loop links the site database computer (and the utility center computer) with other site database computers; the unit data highway loop links the unit database computer with other unit database computers and with the unit level computers (e.g. the unit process computer); local controllers and local processors attach below the unit level.]

The Level-1 Function

The Level-1 computer system is interconnected with each unit database computer in Level 2 through a communication line. This computer system accumulates the unit data sent from the Level-2 computer systems, so that the station database (site database) is formed.

The major items of the site database are as follows:

(1) Operating data commonly used by each unit.
(2) Plant operating historical records over a long period.
(3) Personnel dosimetry monitoring data.
(4) Radiation monitoring data.
(5) Fuel isotopic data.
(6) Maintenance management data.
(7) Site environmental data.
(8) Spare parts stock data.
(9) Nuclear material library.

The Level-1 computer system performs the following functions:

(1) Plant operation management.
(2) Security management.
(3) Maintenance management.
(4) Communication with the head office.

Plant operation management

The control rod patterns are usually changed every few months to attain the desired fuel exposure distribution. A reactor simulation program predicts the long-term fuel exposure and the reactivity change rate, which provides the plant manager with the capability of planning the changes in the control rod pattern. The Level-1 computer system can offer past, present and future operating data in order to maintain optimum operations.

Security management

In order to reduce the personnel dose, it is necessary to monitor personnel dosimetry throughout the working hours and work areas. Personnel dosimetry monitoring is a slow, laborious, routine task; the computer assists in this routine task by acquiring several types of personnel dose data via the Level-2 computer system and storing them in the site database. The dose data can then be printed out and/or displayed for each person individually, on request.

In addition, the Level-1 computer system records and monitors area radiation, process radiation, process discharges and environmental conditions at many locations, both inside and outside the BWR power station. This radiation monitoring data is verified to maintain the environmental conditions within their correct range.

Maintenance management

Routine maintenance is conducted in accordance with the specified procedures, to reduce the possibility of error or disturbance to the components. It is essential that the component and system maintenance records are properly maintained, to facilitate the scheduling and completion of all the necessary maintenance. The computer performs the schedule reporting of the necessary maintenance and testing, and spare parts stock management, in support of the routine maintenance.

Nonroutine maintenance is scheduled as necessary during periodic shutdowns of the unit. Some examples are refuelling (exchanging fuel assemblies), control rod replacement, and control rod drive mechanism removal and replacement. The computer memorizes the component histories and lists all necessary replacements of the components.

Communication with the Head Office

The Level-1 computer system can communicate with the head office through a communication line such as a telephone line. Economic load dispatching can be achieved by receiving the load schedule from the head office at the station via this line. The Level-1 computer system can then perform load calculations and dispatch the unit schedules.

The Level-2 Functions

The Level-2 system (computer complex) consists of the following:

(1) Unit database computer system.
(2) PODIA (Plant Operation by Displayed Information and Automation) system.
(3) R/W (Radioactive Waste Treatment) supervisory computer system.
(4) RPDM (Radiation and Personnel Dosimetry Monitoring) system.

Unit database computer system

The unit database computer system gathers the unit operation real-time data from the other Level-2 computer systems, then handles it and transmits it to the Level-1 computer system. The unit database is the part of the site database which is especially concerned with the respective unit's operation data.

This system controls the communications between the Level-2 systems when they access the unit database.

PODIA system

The main objectives of the PODIA system are to provide an effective aid for the monitoring, control and operation of the unit, and the PODIA system performs an important role in unit operations. The main interface between the operator and the plant is a computerized and miniaturized PODIA console (Advanced Operator Plant Interface), located in the central control room. The unit operating data, including status, values and alarms, is displayed on color CRTs in graphic display formats. In the PODIA system, the hardware display devices (i.e. meters, indicators, recorders, etc.) on a conventional operating benchboard are replaced by color CRT graphic displays.

Switches and control devices are fully miniaturized, and their number is reduced by computerized automation. The PODIA console is therefore much more compact (about one third the size) in comparison with the conventional operating benchboard.

R/W supervisory system

The objectives of this system are to monitor the liquid and solid waste treatment system components. In this system, hardware mimic display panels are replaced by full color CRT graphic displays, so that the system console is more compact than a conventional R/W operating board. The computer system conducts detailed historical data logging and liquid waste mass balance calculations.

RPDM system

The major functions of the RPDM system are the monitoring and recording of process radiation, area radiation, and liquid and gaseous radiation. Another function of the RPDM system is personnel dosimetry monitoring, which relieves the staff in charge of radiation protection of the burden of laborious and time-consuming routine work.

Level-3 Function

Local control and data processing computers based on distributed system techniques have been widely introduced into BWR power plants. Examples of Level-3 systems are:

(1) Digital controllers for power control.
(2) Programmable logic sequence controllers for rad-waste control.
(3) Refueling platform automation.
(4) Control rod drive mechanism exchange automation.
(5) Dosimeter reader controller.
(6) Pre-processing of signals for the plant diagnosis system.
(7) Reactor power program controller.

Digital Controllers for BWR Power Plant Control

Digital controllers, a typical application of a distributed control system, utilizing 16-bit microprocessors, have been developed to control the major BWR control systems, such as the primary recirculation control system, the reactor pressure control system and the feedwater control system.

In general, digital controllers built from microprocessors have numerous advantages in comparison with analog controllers. These are:

a. Separation of function and hardware.
b. Ease of standardizing and modularizing hardware.
c. Flexibility to change, add or delete functions.
d. Capability of complex arithmetic and logical functions.
e. High reliability and maintainability.

TOSHIBA has carried out a microprocessor application study and applied the results to the primary recirculation control system, the reactor pressure control system and the feedwater control system. Figure 4 shows the system configuration of the digital controllers for the power control system.

The digital controller consists primarily of a CPU (central processor unit), memory, and a process I/O interface.

Hardware specification

CPU                    Micro program control
  Execution time         2.1 μs
  Instruction word       16 bits/32 bits
  I/O transfer rate      38 Kbytes/sec

Memory                 Magnetic core, 16 bits + 1 parity
  Cycle time             800 nsec
  Capacity               32 Kbytes

Process I/O Interface
  High speed analog input    Sign + 11 bits
    Normal sample mode       20 ~ 25 μs
  High speed analog output   Sign + 11 bits
    Speed                    10 μs
  Digital input/output

The system performance was demonstrated using a hybrid computer (simulation test), as follows:

a. Basic control performance.

b. Demonstration of the capability to improve and evolve, for example:

   - reactor water level setdown function on turbine trip or reactor scram;

   - recirculation pump runback on a feedwater pump trip.

[Figure 4: Digital Controller for Power Control System — digital controllers with turbine inlet pressure, reactor water level and generator power inputs, connected to the Level-2 computer.]

It was confirmed through the simulation test that the control performance of the conventional analog controller can be realized by the digital controller, and that the digital controller has the capability to add, improve and evolve plant control functions satisfactorily.
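
The core of such a replacement is the discretization of the analog control law so that it can be executed at a fixed sample interval, which can be sketched as follows. The gains, sample time and the first-order plant model are illustrative assumptions, not the actual BWR settings.

```python
# Sketch of a digital controller reproducing an analog PI control law:
# the continuous law is discretized and stepped at a fixed interval.

def make_pi_controller(kp, ki, dt):
    state = {"integral": 0.0}
    def step(setpoint, measurement):
        error = setpoint - measurement
        state["integral"] += error * dt        # rectangular integration
        return kp * error + ki * state["integral"]
    return step

controller = make_pi_controller(kp=2.0, ki=0.5, dt=0.1)

# simple first-order plant run in closed loop (illustrative model)
y = 0.0
for _ in range(200):
    u = controller(setpoint=1.0, measurement=y)
    y += 0.1 * (u - y)                         # plant time step

print(round(y, 3))   # the output settles near the setpoint of 1.0
```

Because the control law is data rather than wiring, functions such as level setdown on a trip can be added by changing the program, which is the flexibility claimed above.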

Sequence Controller

Process control functions in a nuclear power plant are becoming more and more complicated in order to attain safe and reliable system conditions. These complicated control functions cause an increase in the number of components, such as relays, contactors and wiring; the availability and reliability of these control circuits therefore decreases as the number of components increases.

TOSHIBA has developed a compact, highly reliable sequence controller for application in a nuclear power plant. The sequence controller logic is stored in memory and executed by the control program. The control processor is a 16- or 12-bit LSI.

An actual application in a nuclear power plant is in the radioactive waste system. The radioactive waste system was selected because its control functions are among the most complicated and the system scale is very wide.

By applying the sequence controllers to a radioactive waste system, the total control panel length is reduced to almost one third of that of the conventional control panel.

The sequence controller also has a data processing function in order to transmit process data to an upper-level supervisory process computer.
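
The replacement of relays and contactors by logic stored in memory can be sketched as follows. The rung representation, signal names and scan loop are invented for illustration; the actual controller's instruction format is not given in the paper.

```python
# Sketch of a stored-program sequence controller: the relay logic is
# held in memory as data (rungs of ANDed input names, '/' negated)
# and evaluated by a fixed control program on each scan cycle,
# instead of being wired in relays and contactors.

logic_memory = {
    "waste_pump": ["tank_level_high", "/pump_tripped"],
    "drain_valve": ["tank_level_high", "pump_running"],
}

def scan(inputs):
    """One scan cycle: evaluate every rung against the current inputs."""
    outputs = {}
    for output, rung in logic_memory.items():
        result = True
        for term in rung:
            if term.startswith("/"):
                result = result and not inputs[term[1:]]
            else:
                result = result and inputs[term]
        outputs[output] = result
    return outputs

inputs = {"tank_level_high": True, "pump_tripped": False, "pump_running": False}
outputs = scan(inputs)
print(outputs)   # waste_pump energized, drain_valve not
```

Changing the control function is then an edit to `logic_memory` rather than rewiring, and the same scan data is available for transmission to the supervisory computer.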

COMMUNICATION NETWORK AND DATABASE

To achieve optimum system performance, the merits and demerits of distributed and/or centralized systems have been studied. The major components of the distributed system, viz. the database, the communication method and the processing modules, hold the key to system performance; they therefore have to be selected for optimum performance. The computer systems at each level described above are examples of processing modules.

A communication network is essential for the distributed system to attain a centralized supervisory information system. To reduce the plant operator's burden, plant operation information has to be gathered and prepared ready for use. The performance of the communication network is shown in Table 1 as an example.

A nuclear power plant involves a very large-scale and complicated process; the database of plant operation and management is therefore utilized not only for

No. of stations:            64
Communication speed:        10 Mbps
Connection:                 duplicated loops
Distance between stations:  4 km
Maximum loop length:        100 km

Table 1: Performance of the Communication Line


daily plant operation, but also for security control and plant maintenance.

A nuclear power plant is so complicated, and its data so interrelated, that a centralized database system has been adopted rather than a distributed database. The capacity of the database is more than 300 megabytes.


CONCLUSION

Computer-based, totally integrated digital control and monitoring systems have been developed, and distributed systems have been introduced at the local level. It is expected that this trend towards the digitalization of control and instrumentation will become more common.

The problem of determining a reasonable division between hardwired and software-based systems will then need to be studied. Microprocessors have made the distributed system practical, but the emergence of high-performance minicomputers, the so-called super-minis, has provided an opportunity to introduce a site-specific information management computer at reasonable cost. Hence it is necessary to study a total system involving both distributed and centralized systems.

Further study will be made to include a remote multiplexing system, to be classified below level 3 (say at level 4), to improve the reliability, availability and maintainability of a nuclear power plant.


REFERENCES

(1) M. Raneda et al., "The New Integrated Hierarchical Computer Complex in a BWR Plant", IFAC Symposium on Computer Applications in Large Scale Power Systems, New Delhi, Aug. 1979.

(2) H. Kawahara et al., "On-line Process Computer Application for Operator's Aid in TOSHIBA BWR", IAEA Specialists' Meeting on Procedures and Systems for Assisting an Operator During Normal and Anomalous Nuclear Power Plant Operation Situations, Munich, Dec. 1979.

(3) R. Aoki et al., "BWR Centralized Monitor and Control Systems, PODIA(TM) Development", Toshiba Review Int'l Ed. No. 107, Jan.-Feb. 1977.

(4) K. Makino et al., "Reactor Management System", Enlarged Halden Programme Group Meeting, Loen, June 1978.


'PROWAY' - A STANDARD FOR

DISTRIBUTED CONTROL SYSTEMS

R.W. Gellie

Control Systems & Human Engineering Laboratory

National Research Council of Canada

ABSTRACT

The availability of cheap and powerful microcomputer and data communications equipment has led to a major revision of instrumentation and control systems. Intelligent devices can now be used and distributed about the control system in a systematic and economic manner. These sub-units are linked by a communications system to provide a total system capable of meeting the required plant objectives. PROWAY, an international standard process data highway for interconnecting processing units in distributed industrial process control systems, is currently being developed. This paper describes the salient features and current status of the PROWAY effort.

INTRODUCTION

Industry is facing shifts in the economic climate, with increased emphasis in such areas as energy management, pollution control, and conservation of natural resources. Future industrial process control systems must reflect these changing demands by offering better control, reduced production costs, and more complete and accurate operating data upon which to base management decisions.


Improved performance and economies are already being

realized through exploitation of the developments in semi-conductor

technology. Control system components and subsystems are being

redesigned to take advantage of microelectronics. The new generation of local instrumentation and control functions, containing embedded intelligence through microprocessor-based computers, not only offers improved performance but has enabled a much wider spectrum of potential applications.

Other developments, especially in electronic data processing (EDP) and telecommunications, are also helping to shape future process control systems. The availability of inexpensive and powerful microcomputers together with data communications equipment has made local and extended computer networks a viable and attractive approach in the data processing environment.

Under the influence of these developments the traditional

approach to process control system design is being revised.

Intelligent devices and subsystems can now be used and distri-

buted within a complete control system in a systematic and

economic manner to achieve improved system performance.

DISTRIBUTED CONTROL

This distributed architecture returns the control functions

to the remote points where they can be most conveniently per-

formed and managed. Such process control systems are reminiscent

of the original manually operated plants in which instruments,

transducers and controllers were mounted directly on production

units distributed throughout the plant. At the same time


distributed digital control incorporates many of the hardware

and software concepts of the classical centralized computer

control systems while avoiding the risks of depending on a

single computer.

In essence the new distributed approach embodies the best

features of distributed analog control and centralized digital

control to provide improvements in system performance and cost.

Distributed control offers many advantages to the system

designer. Hardware and software complexity are reduced by

partitioning whereby the system is subdivided into relatively

small subunits which are capable of concurrent processing.

Interference between subunits is minimized because they are

functionally independent. Maintenance and fault diagnosis of

software and hardware are greatly simplified. The inherent modu-

larity of hardware and software permits economies of scale in

instrumentation, control and communications.

The key to the successful exploitation of distributed process

control is the rationalization of these concepts into an inte-

grated approach to system design.

Traditionally, specialist suppliers have been identified

with transducers, actuators and instruments. It is equally

true with computers, microprocessors, and communications equip-

ment that no one vendor may have the expertise or products to

meet the requirements in all fields. Similarly software

functions such as operating systems, real-time languages, data-

base management and networks are regarded as specialist activities.

It therefore requires the detailed identification and specification of software and hardware modules and their interfaces to permit coordination and integration of the components into a viable system.

However, it is not sufficient merely to ensure compatibility of system components. The system must be capable of meeting performance objectives and must be optimized for the application and the operating environment.

PROCESS PLANT

In a typical industrial plant, processing units are grouped into local areas where a specific part of the process

is performed. In each such area materials and energy are

converted into intermediate or final product by local unit

operations such as heating, cooling, mixing or chemical

reactions. Each of these local process units may require

monitoring or corrective control.

Throughout the plant there may be many local areas or

control rooms each physically located near the processing

units. The activities in each of these local areas must

be coordinated and kept working to achieve overall plant

production objectives. The product from one area may be the

raw feed stock for another area. Several local areas may

share common services such as steam, water, etc. Coordination

of the plant requires that there be good access to process

status information from these areas.

The plant may be operating according to corporate

production schedules. It is necessary that management in-

formation be made available if timely and economic operation

of the plants is to be achieved.


The typical industrial plant can be divided into a

hierarchy as shown in Figure 1. At each level there is a flow

of status information from lower levels and a flow of control

commands to lower levels. Superimposed on this control

hierarchy is a topology determined by the physical location and

constructional layout of the plant. The topology determines

the transmission distance between levels. The nature of the

functions performed at each level determines the response time,

integrity and security requirements of the control and information

signals.

Local Area (Direct Control)

Control of the local process plant is via communication

with sensors and actuators over short distances. Real-time

process response requirements are determined by the plant

dynamics and can be fast. This is the most time-critical level; any failure here requires immediate action, typically

within minutes or even seconds. Duplicated functions and standby

units are frequently used in sensitive areas. Dedicated

control units or units capable of autonomous operation are an

obvious choice to ensure that the real-time response require-

ments can be met. With the advent of cheap microprocessor

computing power the operational area can now be under the

control of several dedicated control computers. These

intelligent units not only execute the detailed control tasks

but also perform data collection and reduction in support of

the information requirements of the higher levels of control.


Plant Level (Supervision)

This level is concerned with the interactions between

operating areas. The control function here is one of coordination and supervision of the various operating areas to achieve an integrated production facility. The process response

requirements are still real-time and time critical but, since

the inter-area control is supervisory rather than direct, the

requirements are less demanding than at the operating level.

In the traditional centralized computer control system

a single large computer was located at this level. This

computer connected directly to actuators and sensors through

individual dedicated lines radiating from it. Because this

computer performed the local area control functions it was

often hard pressed to satisfy multiple interleaved real-time

demands from the various local plant areas.

When the intelligent units in the local areas perform

the detailed control tasks, the computational load and

communication requirements at the plant level are greatly reduced.

Corporate Level (Management)

Management decisions require a total picture of the

multiplant operation but this plant data is not required in

real-time. Production targets may for example be set on a

daily basis. In this case it would be sufficient that the

data base accessed by management be refreshed within this

period. Similarly, a failure of the communication systems for less than this period would not interfere with the updating of plant production targets.


Communication Requirements

The communication and response time requirements vary

considerably between the levels of the control hierarchy. In

a centralized computer system all communication is handled

by this single processor which must satisfy the time-critical

needs of all the process units as well as transfer information

to the management level over considerable distances. All

communication is between levels.

In a distributed processing system intelligence is

added at the lower levels where response times must be fast,

data rates high and communications availability and integrity

are most important. Not only does this unburden the higher

levels but it localizes the time-critical communication so

that the transmission distances can be kept short. Information

transfers can now occur between units at the same level so

further reducing the data flow between levels.

Communication within a control room or a local area of a fast plant may involve information rates of 100,000 baud transmitted over 200 metres, and plant dynamics may require response times measured in seconds. On the other hand, management-level communication is not time-critical and might

be typified by an information transmission rate of 20 kbaud

over a distance of several kilometres.
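A back-of-envelope check of these figures (our assumption: a 100-bit frame, the length used later in the PROWAY error-rate requirement) shows the per-frame timescales at the two levels:

```python
def frame_time_ms(frame_bits, rate_bps):
    """Time to serialize one frame onto the line, in milliseconds."""
    return 1000.0 * frame_bits / rate_bps

local_ms = frame_time_ms(100, 100_000)  # local area, 100 kbaud
mgmt_ms = frame_time_ms(100, 20_000)    # management level, 20 kbaud
# local_ms -> 1.0, mgmt_ms -> 5.0
```

A single short frame occupies the local highway for about a millisecond, comfortably inside response times measured in seconds; the management link is five times slower per frame but, being non-time-critical, can tolerate it.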

A hierarchy requires a structured communication system, with each level of communication optimized for the distances, data rates and traffic patterns demanded by the application environment.


STANDARDIZATION IN COMMUNICATIONS

It has been widely recognized that a standard communications system capable of supporting the requirements of distributed process control, as discussed above, would enable the system designer to construct truly integrated systems.

In April 1975 at a meeting of Subcommittee SC65A (System

Considerations) of Technical Committee TC65 (Industrial Process

Measurement and Control) of the International Electrotechnical

Commission (IEC), working group WG6 was established and charged

with developing "PROWAY: a Process Data Highway".

The communication system proposed by IEC SC65A is an asynchronous, half-duplex, message-mode serial line. The functional requirements of such a system have been defined and are incorporated in a draft proposal for the PROWAY system, which provides a specification for functional and operational requirements.

The functional requirements document has been published and

is freely available. The following requirements are intended

to convey some "feel" for the PROWAY proposal:

(i) PROWAY is not intended to provide either an optimized interface for high-speed standard computer peripherals, or efficient sharing of mass storage or peripherals between processors.

(ii) PROWAY is to be capable of supporting centralized intelligence, distributed intelligence, and hierarchical intelligence.

(iii) PROWAY is to be optimized for bit-serial data over a single shared communication link; a complete system may contain several independent links forming a network.


(iv) PROWAY is to be capable of supporting transmission of

event-oriented data in real time. Also it is to be

capable of supporting direct data interchange between

any two stations without involving store and forward

at a third station.

(v) PROWAY allows one originator to deliver frames con-

currently to several destinations. Each destination

is able to independently acknowledge that frame,

(vi) PROWAY is to be capable of maintaining correct frame

sequencing and integrity of transmitted data while

operating in an electrically noisy environment,

(vii) The rate of undetected frame errors should be less

than one error per 1,000 years of operation, provided

that the data circuit bit error rate is less than

10 and the frame length is less than 100 bits,

(viii) No single failure of any part of any device should

cause failure of the entire process control system, or

of any functions except those in which the failed

device is directly involved,

(ix) The process control system should be capable of

tolerating changes of configurations without loss of

communication function, failure of any one transmission

line, or failure of any one station.

(x) PROWAY should respond to unsolicited requests within a

time period appropriate to the application (typically

2 ms.).
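A crude model indicates how demanding requirement (vii) is. Assume (all assumptions ours) continuous traffic of 100-bit frames at 100 kbit/s, independent bit errors at an assumed bit error rate of 1e-4, and a 16-bit frame check sequence that lets through roughly 2**-16 of corrupted frames:

```python
BER = 1e-4           # assumed data-circuit bit error rate
FRAME_BITS = 100
RATE_BPS = 100_000
ESCAPE = 2.0 ** -16  # rough fraction of corrupted frames a 16-bit FCS misses

frames_per_s = RATE_BPS / FRAME_BITS
p_corrupt = 1.0 - (1.0 - BER) ** FRAME_BITS  # roughly 1% of frames are hit
undetected_per_year = frames_per_s * p_corrupt * ESCAPE * 3600 * 24 * 365
# thousands of undetected frames per year under this model
```

Under these assumptions the target of one undetected error per 1,000 years is missed by many orders of magnitude, which is consistent with the undetected-error-rate concern raised against HDLC later in the paper.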


The PROWAY specification is not intended to restrict the technology of construction or the architecture used in a particular application. It could, for example, encompass a twisted-pair wire telecommunications link at one extreme or an optical fibre communications system at the other. It does, however, provide guidelines on safety, integrity and system availability, which would apply equally to each type of system.

A PROWAY station is seen as having five logical levels of

protocol as shown in Figure 2. Each protocol layer is an indepen-

dent and logically complete specification. The final PROWAY

proposal will include detailed specification of each of the

protocols and interfaces (the network protocol is not included

within the scope of the PROWAY standard). The Highway Interface

and Path Interface are logical entities only, but the Coupler

Interface and Line Interface will be described fully with electrical

and mechanical specifications.

Current activity within the PROWAY effort is focussed on two basic aspects of the communication system: the path or link protocol, and the mechanisms for link control and link access.

Path Protocol

The function of the path protocol is to perform the conversion from an error-prone serial physical circuit to a relatively error-free logical link. The Path Unit serializes and deserializes frames, generates and checks error detection codes, ensures data transparency, and performs sequence control.
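The error-detection duty of the Path Unit can be illustrated with a cyclic redundancy check over the CCITT generator polynomial x^16 + x^12 + x^5 + 1 (0x1021), the polynomial used by HDLC-class protocols. This is a plain MSB-first CRC sketch, not the complete HDLC FCS procedure, which additionally transmits the ones-complement of the register.

```python
def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
    """Bitwise CRC-16 with generator 0x1021, register preset to all ones."""
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            # Shift left; on carry-out, fold in the generator polynomial.
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

frame = b"PROWAY"
fcs = crc16_ccitt(frame)
# The receiver recomputes over the received bits and compares:
assert crc16_ccitt(frame) == fcs
# Any single corrupted bit changes the check value and is detected:
corrupted = bytes([frame[0] ^ 0x01]) + frame[1:]
assert crc16_ccitt(corrupted) != fcs
```

Sequence control and data transparency (flagging and bit stuffing) are separate Path Unit functions layered on top of this check.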

A number of path protocols are in common usage for intercomputer communication. The most widely used are BISYNC, DDCMP, SDLC, ADCCP and HDLC. HDLC is an international standard protocol


defined by the International Standards Organization (ISO).

It is of particular importance because it is specified within

the International Telegraph and Telephone Consultative Committee (CCITT) recommendation X25, which is a powerful network access protocol designed for long-distance packet-switching networks (such as DATAPAC in Canada). X25 will

almost certainly be implemented by common carriers world-wide

to provide an international communications network.

HDLC contains a frame-level link protocol which has

already been implemented in LSI hardware by many semi-conductor

manufacturers. One VLSI chip (Western Digital WD2501/WD2511), which also includes many of the communications control functions, has been announced with sample quantities available in early 1980.

IEC SC65A WG6 has already passed a resolution to implement

as many features of HDLC as possible in the PROWAY system. This

will not only give economies in production, but will also

allow compatibility between the process control networks and

the EDP system within a company. It will also allow the

process control community to use or adapt the peripherals, mass

storage devices and test gear developed for the HDLC-orientated

EDP market.

However, HDLC was developed for single-master long-distance communication, and there are several aspects of the protocol which do not meet the PROWAY functional requirements. The specific areas of concern are:

(i) Asynchronous worst-case response time to demanders.

(ii) Direct, peer-to-peer data exchange.

(iii) Undetected error rate.


(iv) Broadcast messages with separate acknowledgements.

Cooperation with ISO is currently being solicited to

determine the minimum enhancements of the present HDLC

standards that are needed to meet the PROWAY functional requirements. It is important that these enhancements be agreed upon and accepted as part of the ISO specifications as soon as possible, so as to ensure that the next generation of VLSI chips does not include or omit features which would result in their being incompatible with PROWAY.

Figure 3 shows a typical implementation of a PROWAY

station using microprocessors and "HDLC" chips, etc.

Link Control and Link Access

The PROWAY proposal describes a process control system including up to 100 stations (intelligent units), of which at least two could be "manager units" responsible for data highway management; at least another two could act as "supervisor units" responsible for supervision of highway operation; and at least a further eight could be "demander units" capable of making unsolicited requests for permission to use the data highway. A typical installation might consist of 31 stations, of which four are likely to be demander units.

PROWAY defines a multi-master system; at any given time control of the link will be localized, but the location of this control will vary with time. Mastership, or the right to allocate link access, can be transferred in many different ways. The PROWAY Working Group is currently considering and evaluating alternative transfer mechanisms.
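One candidate transfer mechanism, sketched here purely for illustration since the working group had not settled on a scheme, is to circulate mastership as a token around the station addresses in a logical ring:

```python
def token_rotation(stations, start, steps):
    """Yield the station holding link mastership on each of `steps` turns."""
    order = sorted(stations)          # logical ring in address order
    i = order.index(start)
    for _ in range(steps):
        yield order[i]
        i = (i + 1) % len(order)      # pass the token to the next address

holders = list(token_rotation([3, 1, 7, 5], start=5, steps=6))
# mastership wraps around the ring: [5, 7, 1, 3, 5, 7]
```

A demander wanting the link waits at most one token cycle, which bounds the worst-case response time; this is the kind of property the functional requirements demand of whatever mechanism is finally chosen.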


The requirement that there can be direct information exchange between any two stations, without necessarily involving store and forward in a third station, and that the communication system must be able to handle unsolicited demands, implies that at any given time several stations may be candidates for link access. The method proposed for allocating link access and the mechanism selected to transfer mastership are clearly interdependent.

The path protocol and mastership/link access mechanism

are seen as key areas of the communication system. Once these aspects are resolved, it should not take long to develop the complete PROWAY specification. A draft proposal of the complete PROWAY specification should be available in late 1981.

CONCLUSIONS

Distribution of the control functions among intelligent devices located close to the process units permits an architecture more compatible with plant topology. A communications system which supports this distributed processing can be designed to match the control structure of the plant. Distributed process control systems would seem to be a more natural approach than the traditional centralized computer systems. The PROWAY standard should enable the design of integrated distributed process control systems to be optimized for the application environment and capable of meeting production and cost objectives.

[Figure: operational hierarchy of corporate, plant and process-unit levels; transmission distance increases towards the corporate level, while integrity, availability and response-time demands increase towards the process units]

FIGURE 1: OPERATIONAL LEVELS AND COMMUNICATION CHARACTERISTICS

[Figure: the five logical levels of a PROWAY station (Application Unit with application protocols, Network Unit with network services and management, Highway Unit with error recovery and line access roles such as manager, supervisor, demander, initiator, responder and listener, Path Unit with address recognition, error coding and serializing, and Line Coupler with signal conditioning and synchronization), joined by the Highway, Path, Coupler and Line Interfaces down to the transmission line]

FIGURE 2: SIMPLIFIED PROWAY STRUCTURE

[Figure: example PROWAY station built from two single-board microcomputers sharing common RAM, one holding the application program in PROM and the other the network, highway and path protocols in PROM, connected through the coupler interface, line coupler and line driver to the transmission line]

FIGURE 3: EXAMPLE OF AN IMPLEMENTATION OF A PROWAY STATION


Not for reproduction    65A(Secretariat)18    Original: English    March 1979

INTERNATIONAL ELECTROTECHNICAL COMMISSION

TECHNICAL COMMITTEE No. 65: INDUSTRIAL-PROCESS MEASUREMENT AND CONTROL

SUB-COMMITTEE 65A: SYSTEM CONSIDERATIONS

Draft - Process data highway (PROWAY) for distributed process control systems
Part 2: Functional requirements*

This document has been prepared by WG 6 of SC 65A in accordance with the decision taken during the meeting of SC 65A held in Moscow in April 1975.

It is submitted to National Committees for comments. This document may also be of interest to the experts of other technical committees, especially TC 45, SC 47, TC 57 and TC 66.

* Part 1: General and Part 3: Glossary will be circulated shortly. Other Parts in this series are in preparation.


Introduction

1. Scope

2. Application environment
   2.1 Optimum characteristics
   2.2 Economic versus technical factors

3. Device types
   3.1 Communications between devices
   3.2 Interface limitations

4. System structure
   4.1 Control system structure
   4.2 Data highway structure
   4.3 Compatibility with other systems

5. Maintenance and service features
   5.1 Testing and fault diagnosis
   5.2 State transitions

6. Protocols
   6.1 General description of protocols
   6.2 The role of each protocol
   6.3 Protocol functions
   6.4 Line coupler and protocol
   6.5 Path unit and protocol
   6.6 Highway unit and protocol
   6.7 Network unit and protocol
   6.8 Application protocols

7. Safety
   7.1 Fault potential
   7.2 Optional versions

8. Data transfer integrity
   8.1 Use in electrically noisy environments
   8.2 Error detection algorithms
   8.3 Residual error mode
   8.4 Undetected frame errors
   8.5 Low bit error rate

9. System availability
   9.1 Restrictions on failures
   9.2 Tolerance to changes of configuration
   9.3 Redundancy
   9.4 Internal status and error reporting
   9.5 Automatic recovery
   9.6 Control of stations

10. Performance criteria
    10.1 Communication efficiency
    10.2 Response time

FIGURES

1. Simplified PROWAY structure and terms
2. Allocation of communication functions within a PROWAY station
3a. Relationships between PROWAY stations
3b. Composition of data circuits, data paths and data highways
4. Example of an implementation of a PROWAY station

INTRODUCTION

The characteristic which differentiates process control systems from other on-line real-time computer networks is that the control system output causes material or energy to move.

Devices comprising a distributed process control system should be able to communicate unambiguously over a shared process data highway (PROWAY). The data highway should be suitable for serial transmission over a single, shared electrical transmission line, but the use of alternative transmission media and modes (such as fibre optics and parallel transmission respectively) may be included.

1. SCOPE

This document describes the functional requirements of a system for communication between devices that comprise a distributed process control system. PROWAY communication protocols and interfaces complying with this standard will enable devices manufactured by different organizations to be used in the same control system.


2. APPLICATION ENVIRONMENT

2.1 Optimum characteristics

The characteristics of the data highway should be such that they provide optimum conditions for use in control systems used in the processing industries, and shall be applicable to both continuous and discrete processes. A process data highway is characterized by the following:

a) Event-driven communication which allows real-time response to events.

b) Very high availability.

c) Very high data integrity.

d) Proper operation in the presence of electromagnetic interference and differences in earth potential.

e) Dedicated intra-plant transmission lines.

2.2 Economic versus technical factors

To achieve broad applicability it is essential that process data highways should be economical to use in control systems under the following conditions:

a) With low or high information transfer rate requirements.

b) Within a control room and/or while exposed to the plant environment.

c) In geographically small or large process plants.

2.2.1 The economic and technical factors may need to be reconciled in order to achieve a balance between transmission line length and data signalling rate.

2.2.1.1 A data highway contained entirely within a local area (such as a control room) generally requires:

a) A transmission line length of 200 m and

b) An information transfer rate of 100,000 bits/sec.

2.2.1.2 A data highway which traverses a large plant generally requires:

a) A transmission line length of 2000 m and

b) An information transfer rate of 30,000 bits/sec.

NOTE - While the requirements given in 2.2.1.1 and 2.2.1.2 are typical, there are classes of control systems with both substantially higher and substantially lower requirements.


3. DEVICE TYPES

3.1 Communications between devices

Communications shall be provided between devices of the following types:

a) Input, output and control devices. Examples of these are as follows:

1) Analog inputs - with and without special signal conditioning.

2) Digital inputs - with and without change-of-state notification.

3) Analog outputs.

4) Digital outputs.

5) Scanning and analytical instrument interfaces.

6) Combinations of the above.

7) Combinations of the above including process monitoring, alarm and/or control algorithms.

b) Man/machine interface and product identification devices. Examples of these are as follows:

1) Process operator consoles.

2) Alarm displays and annunciators.

3) Video terminals.

4) Teleprinters.

5) Keyboards and keypads.

6) Character, line and special displays.

7) Label readers.

8) Single card readers - mark sense and punched.

9) Badge and magnetic stripe readers.

10) Dedicated card punches.

11) Label, ticket and special forms printers.

12) Combinations of the above.

13) Combinations of the above including process monitoring, alarm and/or control algorithms.


c) Service, suuport and maintenance

d) Superi^ory coivputera a) b) esid c) which control and interrogate thedevicee described i;i 3.1. Theoa computers nay be required to exchange largeblocks of dzt?. ciid/ci" prosrair^fs \r±th Che other devices on an infrequentbasis... .

e) Combinations of any or all of the above.

A process data highway is not intended to provide an optimized interface for high-speed computer peripherals, such as mass memories, line printers, unit record equipment or graphics terminals. It is also not intended for efficient sharing of mass storage or peripherals between processors.

No devices or types of devices are specifically excluded from exchanging data over a process data highway, provided they conform to the requirements of this standard. However, the system shall be capable of providing optimum performance when the devices listed in 3.1 are used.

4.1 Control system structure

A process data highway shall be capable of supporting control systems with centralized intelligence, distributed intelligence, hierarchical intelligence and combinations thereof.

4.2 Data highway structure

The data highway shall be capable of supporting transmission of event oriented data in real time. It shall also be capable of supporting direct data interchange between any two stations without involving store and forward at a third station.

4.2.1

Other versions of the data highway may be required which allow reconfiguration of the control system while the process is operating. Such changes may be allowed to disturb the exchange of frames if limited to a transient effect, provided that the data highway is able to detect such disturbances and that it can recover full operation within a time appropriate to the application.

Examples of configuration changes which may be needed are as follows:

a) Extending, shortening or rerouting transmission lines and

b) Connecting or disconnecting stations from the transmission line.


4.2.2 Station capability of the data highway. The data highway shall allow one originator to deliver frames concurrently to several destinations. Each destination should be able to independently acknowledge that frame.

A data highway shall be capable of supporting:

a) up to 100 stations;

b) at least 8 demander stations;

c) at least 2 supervisor stations;

d) at least 2 manager stations.

NOTE - A typical data highway may well support a total of 31 stations, four of which are likely to be demander stations.

4.3 Compatibility with other systems

The data highway should be designed to facilitate implementation of certain data circuits using common carrier facilities.

5. MAINTENANCE AND SERVICE FEATURES

5.1 Testing and fault diagnosis

While on-line testing and fault diagnosis need not be integral functions of the data highway, the data highway shall nevertheless be capable of supporting such features, which include:

a) traffic monitoring and

b) loop back tests.

5.2 State transitions

Any station shall be able to perform transitions from one state to another without generating bit errors between other stations. Examples of such transitions are as follows:

a) On-line/off-line.

b) Power-on/power-off.

c) Ready/not ready.

d) Busy/not busy.

e) Local/remote.


6. PROTOCOLS

6.1 General description of protocols

A distributed process control system consists of a number of stations that contain application units. The stations communicate over a process data network (or highway).

The logical structure of the data network is defined by several levels (or layers) of communication protocols. Each protocol co-operates with its level of equal rank in other stations supported by the data highway. Each protocol is interfaced with the protocols above and below it. These relationships are shown in figures 1 and 3.

Each protocol is logically complete and is independent of the protocols below it. This implies that new transmission technologies can be implemented by replacing only the lower level protocols.

Each protocol defines the logical location of the functions required to implement it. The protocols do not necessarily imply physical locations within a station. (See figure 4.)

6.2 The role of each protocol

The structure of the communication functions within a process control system is shown in figures 1 and 2, and the role of each protocol is as follows:

a) The line protocol is implemented by the line couplers. They convert between line and coupler frames which pass across the line and coupler interfaces, respectively.

b) The path protocol is implemented by the path units. They convert between coupler and path frames which pass across the coupler and path interfaces, respectively.

c) The highway protocol is implemented by the highway units. They convert between path and highway frames which pass across the path and highway interfaces, respectively.

d) The network protocol is implemented by the network units. They convert between highway and network frames which pass across the highway and network interfaces, respectively.

e) The application protocols are implemented by the application units and couplers. These protocols provide rules for interpreting information in the network (or highway) frames.

f) The application units perform process control or other user defined functions.

NOTE - While this standard relates principally to one data highway including its transmission line(s), line couplers, path units and highway units, the network units, application couplers, and application units are described in 7.2 only to clarify their influence on the data highway.

65A(Secretariat) 18


6.3 Protocol functions

Each protocol shall be capable of carrying out the following functions:

a) Transfer data with high integrity by checking for errors and using appropriate recovery procedures.

b) Notify the next higher unit of errors which it cannot correct.

c) Support communication of arbitrary data within the information field of the frame passed to it from the next higher protocol level.

d) Be logically complete: every possible transaction sequence must be predictable in its outcome and must exit to an acceptable state. Logical completeness may be demonstrated by a transition state analysis.

e) Be transparent to transmission line lengths and data signalling rates. This requirement does not apply to the line protocol or line couplers.

f) Support changes to the number of stations connected to the data highway.

g) Support changes to the status and mode of stations connected to the data highway.

h) Support monitoring and recording of communication performance.
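The logical-completeness requirement above lends itself to mechanical demonstration: enumerate every (state, event) pair of a protocol's state machine and confirm each has a defined successor inside the state set. The sketch below uses a hypothetical three-state stop-and-wait protocol; the states, events and transitions are illustrative assumptions, not taken from this standard.

```python
# Exhaustive transition-state check for a toy protocol state machine.
# The states, events and transition table are hypothetical examples.

STATES = {"idle", "sending", "waiting_ack"}
EVENTS = {"request", "frame_sent", "ack", "timeout"}

# Transition table: (state, event) -> next state.
TRANSITIONS = {
    ("idle", "request"): "sending",
    ("idle", "frame_sent"): "idle",          # spurious event, ignored
    ("idle", "ack"): "idle",                 # stale ack, ignored
    ("idle", "timeout"): "idle",
    ("sending", "request"): "sending",       # request queued, stay sending
    ("sending", "frame_sent"): "waiting_ack",
    ("sending", "ack"): "sending",
    ("sending", "timeout"): "idle",          # recovery: abort to idle
    ("waiting_ack", "request"): "waiting_ack",
    ("waiting_ack", "frame_sent"): "waiting_ack",
    ("waiting_ack", "ack"): "idle",
    ("waiting_ack", "timeout"): "sending",   # retransmit
}

def is_logically_complete(states, events, transitions):
    """Every (state, event) pair must have a defined, in-set successor."""
    for s in states:
        for e in events:
            nxt = transitions.get((s, e))
            if nxt is None or nxt not in states:
                return False
    return True
```

Dropping any entry from the table makes the check fail, which is exactly the kind of undefined outcome the requirement forbids.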

6.4 Line coupler and protocol

The purpose of the line coupler is to convert frames from their internal representation within a station to signals compatible with the transmission line, according to the line protocol. In order to perform this function the line coupler should:

a) Convert signal levels and/or formats.

b) Provide galvanic isolation between the station and the transmission line.

c) Monitor signal quality.

d) Generate and detect frame synchronization signals.

e) Add and remove frame delimiters.

f) Detect transmission line busy, idle, and incomplete states.

g) Synchronize initiator, responder and listener(s).


6.5 Path unit and protocol

The purpose of the path unit is to convert frames from their parallel to their serial representations and implement an error code according to the path protocol. In order that the path unit performs these functions it shall be capable of:

a) Serializing and deserializing frames.

b) Adding and removing frame synchronization patterns.

c) Detecting frame synchronization errors.

d) Detecting frame size errors.

e) Recognizing frames addressed to a designated station.

f) Preventing the station from transmitting without pause for an excessive time.

g) Generating and monitoring an error detecting or error correcting code that guarantees high data integrity. The error code shall encompass the complete path frame, possibly excluding synchronization patterns.

h) Handling highway frames of widely different lengths efficiently.

NOTE - Information fields in highway frames typically range from 2 to 1024 bytes of 8 bits each.

i) Performing switchover to a redundant transmission line when appropriate.
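The error-code requirement in g) is commonly met with a cyclic redundancy check spanning the path frame. The sketch below uses the CRC-16/CCITT polynomial (0x1021, initial value 0xFFFF) purely as an illustrative choice; this standard does not prescribe a particular code, and the frame contents are hypothetical.

```python
def crc16_ccitt(data: bytes) -> int:
    """Bitwise CRC-16/CCITT-FALSE: poly 0x1021, init 0xFFFF, no reflection."""
    crc = 0xFFFF
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def frame_with_check(payload: bytes) -> bytes:
    """Append the 16-bit frame check sequence to a path frame payload."""
    crc = crc16_ccitt(payload)
    return payload + bytes([crc >> 8, crc & 0xFF])

def check_frame(frame: bytes) -> bool:
    """Verify a received frame by recomputing the CRC over its payload."""
    payload, rx = frame[:-2], frame[-2:]
    crc = crc16_ccitt(payload)
    return rx == bytes([crc >> 8, crc & 0xFF])
```

Because the check sequence is generated and monitored at the path level, corruption anywhere in the frame (addresses included) is detected and can be reported upward per 6.3 b).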

6.6 Highway unit and protocol

The purpose of the highway units is to supervise and manage operation of a data highway, including error recovery and control of line access, according to the highway protocol. The highway functions are organized into ranks which are given below:

Manager (highest rank)
Supervisor
Demander
Initiator
Responder
Listener (lowest rank)

A station shall contain the function of at least one of the above ranks. A station is identified by the highest rank function that it contains. A station normally contains all lower rank functions, but that is not a necessary requirement.
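The rank ordering and the identify-by-highest-rank rule can be expressed directly; a small Python sketch (the enumeration values are illustrative, only their ordering comes from the standard):

```python
from enum import IntEnum

class Rank(IntEnum):
    # Ordering follows 6.6: listener is the lowest rank, manager the highest.
    LISTENER = 1
    RESPONDER = 2
    INITIATOR = 3
    DEMANDER = 4
    SUPERVISOR = 5
    MANAGER = 6

def station_identity(functions: set) -> Rank:
    """A station is identified by the highest rank function it contains."""
    return max(functions)
```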


A station which is currently performing the functions of a rank should be described as active. For example, a station which performs the initiator functions should be known as the active initiator. There can be one or more active listeners or demanders on a data highway, but only one station may be actively performing any other function.

A station which desires to be activated at a rank should be described as a candidate. For example, a station which is prepared to transmit a frame and has not yet been granted active initiator status should be known as a candidate initiator.

6.6.1 Functions common to all ranks

The functions that all ranks shall perform are as follows:

a) Maintain the same time sequence of frames at the destination as at the originator.

b) Support extendable addressing and control structures.

c) Notify the next higher rank of errors which it cannot correct.

6.6.2 Listener functions

The listener functions shall accept correct frames of interest to the designated station.

6.6.3 Responder functions

The responder functions shall accept correct frames containing the designated station's address(es), respond as appropriate, and inform the initiator of accepted frames immediately.

6.6.4 Initiator functions

The initiator functions shall carry out each of the following tasks:

a) Respond to the active supervisor's poll with a request for access to the data highway.

b) Transmit frames to listeners.

c) Select the transaction responder by transmitting frames containing the responder's address.

d) Detect absence of frame acceptance by the responder and initiate recovery procedures which, where feasible, shall be automatic and should not delay other transactions unduly.


6.6.5 Demander functions

The demander functions shall transfer an unsolicited request through the data highway. This request is primarily used by a candidate initiator. The request may be transmitted over dedicated lines or by generating special states on the data circuit. If the demand function is used routinely, it shall not corrupt frames on the data highway. A process control system may exist without demanders if its real time response requirements are satisfied otherwise.

6.6.6 Supervisor functions

The supervisor functions shall perform the following tasks:

a) Control line access by establishing the active initiator for each transaction.

b) Arbitrate contention among active demanders and/or candidate initiators within a time period appropriate to the application.

NOTE - In a typical application, the access time of an active demander normally should be less than 2 ms, so long as the frame transmission time does not exceed 1.5 ms, and the access time of a candidate initiator should normally be less than 20 ms, so long as the frame transmission time does not exceed 5 ms.

c) Monitor initiator activity to detect and deal with errors.

d) Limit transaction times and numbers as required to keep any initiator from overloading the data highway.

e) Ensure continuity of data highway operation when the active initiator fails.

f) Monitor performance of the data path in use.

g) Activate an alternate data path where available and appropriate, for example when unrecoverable errors occur on the data path in use or when the transmission line in use is severed or disturbed.
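One way to satisfy the arbitration requirements in a) and b) is a rotating poll, which bounds any candidate's access time by one poll cycle. The policy below is an illustrative assumption, not one prescribed by the standard, which leaves the arbitration method to the implementation.

```python
class Supervisor:
    """Grants line access one transaction at a time.

    The policy here is a rotating (round-robin) poll over the station
    addresses on the highway; it bounds the access time of any candidate
    initiator by the length of one poll cycle.  This is only one possible
    arbitration scheme.
    """

    def __init__(self, stations):
        self.stations = list(stations)   # station addresses on the highway
        self._next = 0                   # index of next station to poll

    def grant_next(self, candidates):
        """Establish the next active initiator among current candidates."""
        for _ in range(len(self.stations)):
            station = self.stations[self._next]
            self._next = (self._next + 1) % len(self.stations)
            if station in candidates:
                return station
        return None  # no candidate is requesting access
```

With four stations and candidates {2, 4}, successive grants alternate 2, 4, 2, ..., so neither candidate can starve the other.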


6.6.7 Additional supervisor functions

The supervisor functions given in 6.6.6 may be supplemented by the following:

a) Transmitting global frames which are addressed to all stations.

b) Activating and deactivating functions of lower rank within other stations.

c) Initializing and setting modes in other stations.

d) Determining the status and mode of other stations.

6.6.8 Manager functions

A station with a manager function should monitor performance of the data highway (possibly including all stations) and record this information. The manager functions shall perform the following tasks:

a) Assign control of the data highway by establishing the active supervisor.

b) Arbitrate contention among candidate supervisors within a time appropriate to the application.

NOTE - In a typical application the access time of a candidate supervisor should be less than 1 s.

c) Ensure continuity of data highway operation when the active supervisor fails.

6.7 Network unit and protocol

The network unit may provide data highway interface services to the application units in any station. The purpose of the network units of stations that contain director functions is to direct the operation of a data network consisting of multiple data highways. Data highway interface services provided by network units may be responsible for the following:

a) Converting originator and destination logical identifications into station addresses.

b) Arbitrating between the data highway's maximum allowed frame size and the message size required by the application units. Examples are rejecting or buffering messages which exceed the maximum frame size.

c) Maintaining the same sequence of messages at the originator and destination.

d) Performing source to sink error checks on messages which the user declares are critical, for example a check-before-operate mechanism for control actions, and a verification that specified actions have taken place.
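The check-before-operate idea in d) can be sketched as an echo comparison: the destination echoes the critical command back over the highway, and the source releases the action only when the echo matches. The class, message formats and callbacks below are all hypothetical illustrations, not part of the standard.

```python
class CheckBeforeOperate:
    """Source-to-sink confirmation for critical control messages.

    transmit      -- callable that sends a frame toward the sink
    receive_echo  -- callable that reads back the sink's echo of the command
    """

    def __init__(self, transmit, receive_echo):
        self.transmit = transmit
        self.receive_echo = receive_echo

    def operate(self, command: bytes) -> bool:
        self.transmit(command)               # stage the command at the sink
        if self.receive_echo() != command:
            return False                     # echo mismatch: refuse to operate
        self.transmit(b"EXECUTE" + command)  # release the staged action
        return True
```

A loopback sink accepts the command; a sink returning a corrupted echo never receives the execute message, which is the verification behaviour d) asks for.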


6.7.2 Director functions

A station that contains a director function will monitor performance of, and maintain statistics on, the entire process data network. The director functions shall perform the following tasks:

a) Assign management of the data highway by establishing its active manager.

b) Arbitrate contention between candidate managers of a data highway within a time appropriate to the application.

c) Ensure continuity of data highway operation when the active manager fails.

d) Select alternate data highways between devices as appropriate.

e) Perform route selection and store and forward transmission between devices as appropriate.

6.8 Application protocols

The application protocols define the rules which ensure consistent interpretation of the application dependent information contained in network (or highway) frames. Examples of application dependent information are as follows:

Subaddresses within the device.
Device control and status signals.
Data formats.
Character codes.

7. SAFETY

7.1 Fault potential

All devices used in the data highway shall be capable of withstanding the application of an allowable fault potential appropriate to the application. Application of this potential to the device's connection to the transmission line shall not damage the device or cause it to become hazardous to personnel or other devices.

NOTE - This fault potential is typically the power mains voltage in the area traversed by the transmission line(s).

When the allowable fault potential is that generated by a lightning strike to an arbitrary point near the transmission line(s), all devices used in the data highway shall be capable of withstanding the application of such a fault potential.

NOTE - For example: a pulse of 2.5 kV peak with a 1 µs rise time and 50 µs decay time. Test procedures are defined in IEC Publication 255-4.


7.2 Optional versions

When intrinsic safety certification is demanded, the design of the data highway shall be capable of modification to meet the requirements.

8. DATA TRANSFER INTEGRITY

8.1 Use in electrically noisy environments

The data highway shall be capable of maintaining correct frame sequencing and integrity of transmitted data while it is operating in an electrically noisy environment.

8.2 Error detection algorithms

The data highway shall include error detection algorithms that check the complete coupler frame.

8.3 Residual error rate

The data highway shall achieve a residual error rate and information transfer rate appropriate to the application when exposed to a typical industrial environment. The relationship between residual error rate, information transfer rate and data circuit signal-to-noise ratio may be expressed by graphs or tables.

8.4 Undetected frame errors

The rate of undetected frame errors should be less than 1 error per 1000 years of data highway operation, provided that the data circuit bit error rate is less than 10^-6 and the frame length is less than 100 bits. This corresponds to a residual error rate of 3 × 10^-15, assuming 100 % utilization of a data signalling rate of 1,000,000 bits per second.
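The quoted residual error rate follows arithmetically from the stated assumptions, as the reciprocal of the number of 100-bit frames transferred in 1000 years at full utilization of 1,000,000 bits per second. A sketch of the arithmetic (the seconds-per-year figure is an assumption):

```python
# Reproduce the residual-error-rate arithmetic of 8.4 from its assumptions:
# 100-bit frames, 1 000 000 bit/s at 100 % utilization, and at most one
# undetected frame error per 1000 years of operation.

BITS_PER_SECOND = 1_000_000
FRAME_BITS = 100
SECONDS_PER_YEAR = 365.25 * 24 * 3600      # ~3.156e7 s (assumed year length)

frames_per_second = BITS_PER_SECOND / FRAME_BITS            # 10 000 frames/s
frames_in_1000_years = frames_per_second * SECONDS_PER_YEAR * 1000

# One undetected error over that many frames:
residual_error_rate = 1 / frames_in_1000_years              # ~3e-15 per frame
```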

8.5 Low bit error rate

The data circuit shall achieve an acceptably low bit error rate in the presence of a common mode potential appropriate to the application.

NOTE 1 - When the data circuit's transmission line is entirely contained in a protected area, this common mode potential is typically less than 10 V peak-to-peak at frequencies less than 400 Hz.

NOTE 2 - When the transmission line is exposed to the plant environment, this common mode potential is typically less than 50 V peak-to-peak at frequencies less than 400 Hz.


9. SYSTEM AVAILABILITY

9.1 Restrictions on failures

No single failure of any part of any device used within or connected to the data highway shall cause failure of the entire process control system, or of any function except those in which the failed device is directly involved.

9.2 Tolerance to changes of configuration

The process control system shall be capable of tolerating changes of configuration, failure of any one transmission line, or failure of any one station without loss of communication function.

9.3 Redundancy

The data highway shall provide sufficient redundancy in a process control system with centralized, distributed and/or hierarchical intelligence to achieve high control system availability.

9.4 Internal status and error reporting

The data highway shall have an internal status and error reporting capability.

9.5 Automatic recovery

The data highway shall be capable of automatic recovery from common failures.

9.6 Control of stations

The data highway shall ensure that loading, starting, stopping, reloading, and resetting any station is properly carried out.

10. PERFORMANCE CRITERIA

10.1 Communication efficiency

The highway and path protocols shall make efficient use of the data signalling rate and of the transmission line bandwidth.

10.2 Response time

The data highway should be capable of enabling any station to obtain or provide data within time limits imposed by the type of message.


FIGURE 1 - Simplified protocol structure (line, path, highway, network and application protocols against the scope of related ISO, CCITT and IEC TC 66 standards). [Figure body not recoverable from source.]


FIGURE 2 - Allocation of communication functions within a station (application, network, highway, path and line levels, with the functions of each rank). [Figure body not recoverable from source.]


FIGURE 3 - Stations 1, 2 ... N (application unit, application coupler, network unit, highway unit, line coupler) interconnected by the transmission line of a data highway. [Figure body not recoverable from source.]


FIGURE 4 - Example of an implementation of the network, highway and path units. [Figure body not recoverable from source.]

Central Office of the IEC, rue de Varembé


A WESTINGHOUSE DESIGNED DISTRIBUTED MICROPROCESSOR

BASED PROTECTION AND CONTROL SYSTEM

J. BRUNO J. B. REID

Westinghouse Electric Corporation
Nuclear Energy Systems

P.O. Box 355
Pittsburgh, Pennsylvania 15230


INTRODUCTION

For approximately the last five years, Westinghouse has been involved in the design and licensing of a distributed microprocessor based system for the protection and control of a pressurized water reactor nuclear steam supply system. A "top-down" design methodology was used, in which the system global performance objectives were first specified, followed by increasingly more detailed design specifications which ultimately decomposed the system into its basic hardware and software elements.

The design process and design decisions were influenced by the recognition that the final product would have to be verified to ensure its capability to perform the safety-related functions of a Class 1E protection system.

Figure 1 Plant Integrated Control Center (field sensors, integrated protection and logic cabinets on the safety-related side, integrated control and control logic cabinets on the non-safety-related side, plant computer system, main control board, displays, control switches and actuated devices; some sensor signals are shared between control and protection). [Figure body not recoverable from source.]


The verification process mirrored the design process except that it was "bottom-up" and thus started with the basic elements and worked upwards through the system in increasingly complex blocks.

It is our intention in this paper to concentrate on a number of areas which are of interest in a distributed system. Some are unique to distributed systems, while others are considerations in any control and protection system.

Two systems will be discussed. The first, the Integrated Protection System (IPS), is primarily responsible for processing signals from field mounted sensors to provide for reactor trips (scrams) and the initiation of the Engineered Safety Features (ESF). The Integrated Control System (ICS), which is organized in a parallel manner, processes other sensor signals and generates the necessary analog and on-off signals to maintain the plant parameters within specified limits.

Since the IPS in general represents a more limiting case, the majority of the points discussed will make reference to the IPS; however, in most cases the same factors are a consideration for the ICS.

Figure 2 Integrated Protection System Basic Architecture (four channel sets of integrated protection cabinets, each with sensor inputs, A-to-D conversion, calculations and voting logic, feeding the integrated logic cabinets and system interposing logic to the actuations, reactor trip and post accident monitoring). [Figure body not recoverable from source.]


SYSTEM STRUCTURE

Figure 1 shows the overall structure of the IPS and ICS in relation to each other and to the control room. This collection of equipment is called the Plant Integrated Control Center (PICC). The IPS is composed of two major sub-blocks, the Integrated Protection Cabinets (IPC) and the Integrated Logic Cabinets (ILC). Similarly the ICS is composed of the Integrated Control Cabinets (ICC) and the Integrated Control Logic Cabinets (ICLC).

Finer details of the IPS are shown in Figure 2, in which the four way separation among the four IPC's and the two way separation between the two sets of ILC's is evident. Each IPC processes one of four redundant process signals to generate a channel trip. The channel trip in each IPC is combined with channel trip statuses from similar channels in the other three IPC's in a two-out-of-four (2/4) voting matrix within each IPC. The result of this vote provides outputs to the reactor trip breakers or to the system level ESF actuation circuits in the ILC's.
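Functionally, the two-out-of-four coincidence described above reduces to a counting vote over the four channel trip statuses. The real voting matrix is implemented within each IPC, so the following is only an illustrative functional model:

```python
def vote_2_of_4(channel_trips):
    """Two-out-of-four coincidence logic: demand actuation when at least
    two of the four redundant channel trip statuses are set.

    channel_trips -- sequence of four booleans, one per channel set.
    """
    assert len(channel_trips) == 4, "2/4 logic takes exactly four channels"
    return sum(bool(t) for t in channel_trips) >= 2
```

A single spuriously tripped channel (1/4) does not actuate, and a single failed channel still leaves the remaining three able to form the required coincidence, which is the availability argument behind the 2/4 arrangement.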

Figure 3 Functional Groupings Within the Integrated Protection Cabinet (signal conditioning and A/D conversion, analog processing and comparators, digital processing and comparators, trip logic module, ESF logic, communications module, data links with isolation, automatic tester and power supplies). [Figure body not recoverable from source.]


Channel trip information is communicated among the IPC's and ILC's by serial data links running at 19.2 kilobaud. Where isolation is required to avoid interaction between redundant circuits, the transmission medium is a fiber optic cable; otherwise, a twisted shielded copper pair is used.

The internal details of an IPC are identified in Figure 3, which shows the major functional blocks. Note, in particular, that both analog and digital signal processing is used. A resident tester is provided which, automatically (on command), checks the system from the input A/D converters to the outputs of the IPC.

Figure 4 Integrated Protection Cabinet Block Diagram (legend: SM - shared memory; I - isolated output). [Figure body not recoverable from source.]


In Figure 4, the organization of the IPS is shown in a way that more clearly displays its distributed nature. The signal processing blocks are shown feeding either into the trip logic computer or into the ILC via data links from the ESF module. The communications module collects information from the other modules through shared memories and formats it for transmission to the control system, computer system and control board. Each module, except the analog module, is organized around a microcomputer with appropriate I/O. Each microcomputer runs asynchronously with respect to the others.

The ILC, as shown in Figure 5, consists of three cabinets per train, which include microcomputer based system level logic and multiplexing systems, and hard-wired interposing logic for individual actuated devices. Critical portions are made internally redundant to mitigate the effects of an internal failure on plant availability. Power interface devices, typically solid state switches, convert low level signals to signals capable of operating contactors and switchgear.

Figure 5 Functional Groupings Within the Integrated Logic Cabinet (data link receivers and transmitter, redundant system level logic, interposing logic, automatic tester, and power interfaces to switchgear, motor control centers and air operated valves). [Figure body not recoverable from source.]


SYSTEMS PARTITIONING STRATEGIES

Initial design constraints established that the IPS design would contain four redundant sets of microprocessor based equipment for the IPC and two redundant sets for the ILC. Immediately, however, the second design question was raised: how to handle the various functions required in the IPC. A hardware designer strives to make each printed circuit card in the system as efficient as possible. The software designer strives to reduce the number of processors by operating each one near its limit. The thrust of this effort was to configure a series system. All incoming signals would be digitized and processed by one microprocessor, which would provide all appropriate signal scaling, range checking, limit setting, and so forth. The results from this operation were then passed on to a second microprocessor which would perform all arithmetic operations, and so on. In such a system, a failure of any microprocessor caused the entire channel set to fail.

Upon reassessment, a design with greater fault tolerance was adopted. The approach partitioned the total system into three functional groupings as shown in Figure 6. Each of these functional groups was supported by a sensor and signal conditioning subsystem. This subsystem provided individual A/D converters for each incoming analog signal and processes each incoming contact closure in a parallel, hardwired configuration. This minimized the possibility that a single failure in this subsystem would cause the entire channel set to lose its incoming signals.

In some cases, as in the trip logic module, multiple microprocessors are used to further reduce the reliance of large portions of the system on one processor. This distributed system was designed to function in an asynchronous mode and, furthermore, is designed to avoid the use of interrupts. Since each microprocessor operates asynchronously and a single failure of one processor must not affect any other, the processors must have access to the same information yet be functionally isolated. This was achieved by using two-ported memory arrays, which allow two different processors to reach into the memory location to read or write, but prevent one processor from reaching the other through the memory location. This isolation capability of the two-ported memory allows one processor to continue operating while an adjacent one is inoperable.
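
The isolation property of the two-ported memory can be sketched as follows. The class and method names are illustrative inventions, not the actual hardware interface:

```python
# Sketch of a two-ported memory array: two processors share the cells,
# but neither has any path to the other except through reads and
# writes, so a halted processor cannot disturb its neighbour.

class TwoPortedMemory:
    def __init__(self, size: int) -> None:
        self._cells = [0] * size

    def write(self, addr: int, value: int) -> None:
        self._cells[addr] = value

    def read(self, addr: int) -> int:
        return self._cells[addr]

shared = TwoPortedMemory(16)

# Processor A publishes a value through its port...
shared.write(0, 1)

# ...and processor B retrieves it through the other port. If A now
# halts, B simply continues to read the last value written; the
# failure does not propagate through the memory.
print(shared.read(0))
```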

Common Mode Failures (CMF) cannot be dealt with by redundancy alone, since all like elements are presumed to fail. Instead, they can be addressed by a combination of intensive testing of the smallest system building blocks, to minimize the effects of a CMF, and the provision of functional diversity. The system can be thought of as providing three echelons, or levels, of defense, as shown in Figure 6. The first echelon is the Control subsystem, which keeps the plant operating within its normal limits. This is backed up by the Reactor Trip subsystem, which shuts down the reactor if plant operation goes outside acceptable limits, with the Engineered Safety Features (ESF) providing the ultimate backup.

The partitioning of the IPS supports these three echelons of defense. The Communications module (Figure 4) plus the ICS represent the Control Subsystem echelon. The remainder of the IPC, with the exception of the ESF module, represents the Reactor Trip Subsystem echelon. The ESF module plus the ILC's provide the ESF subsystem echelon.


(Figure: block diagram showing interconnecting paths (1) and (2) among the Control, Reactor Trip, and Engineered Safety Features echelons.)

Figure 6 Interconnecting Paths Among the Three Echelons of Defense

While parts of all three echelons are located in the IPC, they are physically self contained, with the functions being performed by separate hardware for each echelon. Interfaces between the echelons are designed to minimize the likelihood of failures propagating between the various echelons. This form of partitioning is supportive of the requirements of NUREG-0493 (Reference 12).

COMMUNICATIONS TECHNIQUES

The design of the IPS and the ICS had one severe design constraint imposed on it. Regardless of the nature of the new system and the communications medium or method used between its various parts, it must also communicate with existing plant instrumentation and actuating devices. Thus, a large number of parallel wired inputs and outputs were to be handled by this new system.


The initial evaluation indicated that providing enough space in the IPC's and ILC to interface with these plant components would double or triple the size of the enclosures required for the microprocessor hardware itself. Initially, multiplexing was evaluated to replace the large quantity of parallel inputs. However, two major concerns became apparent.

Multiplexing only part of the plant signals did not appear economically or technically justifiable.

Most actuating devices are supplied by the customer/AE, making coordination of the design effort very difficult.

Several key factors evolved from the evaluation which were factored into the design.

Congestion in the cable spreading rooms could be reduced substantially by collecting the sensor and actuator signals together and routing multiconductor cables into the IPC's and ILC's.

The large quantity of signals routed between the various protection system cabinets, and between these cabinets and non-protection equipment, could be multiplexed.

Fiber optics was a viable communications medium.

Once the evaluation was complete, certain design decisions were made. These decisions resulted in a number of different external communications methods being used for the data links shown in Figure 1.

Hardwired inputs have been collected into field termination cabinets located throughout the plant and cabled into the appropriate cabinet using multiconductor cables. Even if the quantity of field termination cabinets varies, the impact on the cable spreading area, near the control room complex, is minimized. In fact, it will be possible to make some design changes without affecting the plant wiring.

External multiplexing has been employed extensively in the IPS design. Multiplexing is employed in the following areas.

between each IPC and the other three redundant IPC's

between each IPC and each ILC

between each IPC and the plant computer/display system

between each ILC and the main control board

These areas were chosen because they could further reduce cable spreading area volume, simplify separation requirements, and reduce the quantity of isolators required.

The data links identified in the first three categories above require isolation. Fiber optic cables became a logical choice for this application because of their inherent immunity to electrical interference and their ability to prevent the propagation of an electrical fault back into the protection system. Physical isolation was achieved by using the fiber optic cable as the interconnecting medium between cabinets. This placed the data link transmitter and receiver in different, physically separate cabinets.


Multiplexing in a nuclear power plant protection system posed some new concerns which were addressed in the design. Loss of a single data link was considered a credible failure, but this could not be allowed to make the system inoperative. Redundant data links are employed to guard against this failure. Furthermore, the protocol for these data links includes error detection codes which allow the receiver to determine that it is receiving good data. These communication techniques have provided a large reduction in the cabling congestion in and around the cable spreading area of the control room complex.
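
A hedged sketch of such a receiver follows. The CRC-32 framing and the function names are illustrative assumptions, not the actual IPS link protocol; the point is only that a coded frame lets the receiver reject bad data and fall back to a redundant link.

```python
# Sketch of a data link receiver with error detection and a redundant
# backup link, so that loss or corruption of one link does not make
# the system inoperative.

import zlib

def encode_frame(payload: bytes) -> bytes:
    # Append a 32-bit CRC so the receiver can check the data.
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def decode_frame(frame: bytes):
    payload, crc = frame[:-4], frame[-4:]
    if zlib.crc32(payload).to_bytes(4, "big") != crc:
        return None          # receiver knows the data is bad
    return payload

def receive(primary: bytes, backup: bytes) -> bytes:
    # Try the redundant links in turn; a single corrupted link
    # must not disable the function.
    for frame in (primary, backup):
        payload = decode_frame(frame)
        if payload is not None:
            return payload
    raise RuntimeError("both redundant links failed")

good = encode_frame(b"partial trip: channel 2")
corrupted = bytes([good[0] ^ 0xFF]) + good[1:]   # noise hit on link 1
print(receive(corrupted, good))                  # -> b'partial trip: channel 2'
```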

In addition to the external data links, several memory buses are employed for internal communications between the distributed microprocessors.

SOFTWARE DESIGN CONCEPTS

It was recognized at the outset that the design of a distributed processing protection system would pose special problems with respect to the need to design and verify the software. A study was commissioned, with personnel from the Westinghouse Research Division (Ref. 1) playing a key role, to identify and specify concepts and procedures to ensure that the software for the IPS would be designed and documented in a manner which would lend itself to the level of verification appropriate to a safety related system.

The design rules developed were aimed at three areas of concern: 1) a proper documentation and review process, 2) development of code that is relatively easy to verify, and 3) constraints on the interconnections among the microcomputers in the system to minimize and simplify interaction.

Documentation was addressed from two aspects. Specific rules were identified regarding the scope and contents of each document. This ensured that the steps in the design process were capable of being reviewed and followed by independent observers. Procedures were then implemented which identified specific points in the process where independent reviews of the documentation would take place.

A key document in the process is called the Software Performance Specification (SPS). The SPS is written by the programmer prior to coding and serves to document his understanding of the system requirements to be implemented by the code. The SPS is reviewed and approved by the group which originated the system requirements, thus providing a check that the requirements have been correctly interpreted by the programmer.

The software rules are those associated with the concept of structured programming. Their intent is to improve the readability and reliability of the program, and they are typified by "avoidance of GO TO statements" and "single entry/single exit for subroutines". In addition to these so-called style guidelines, a high level programming language (WEMAP) was developed for microcomputer applications (Ref. 2). This language supports structured programming and to a large extent enforces the structured programming rules; for example, no GO TO statement is provided.

Interconnections among microprocessors in a distributed system can have a major impact on the overall quality of the system performance. This is especially evident where the interconnections must be interfaced through software. The architecture


of the IPS was designed with this consideration in mind. As a result, efforts were made to minimize the number of interfaces between microcomputers and to keep the remaining ones as simple as possible. As can be seen from Figure 4, the most important interfaces, those between each functional microcomputer and the Trip Logic Computer (TLC), occur through parallel contact closures. While this approach is not very elegant, it produces an interface which is simple, easily controlled and tested, and which provides a high degree of fault isolation. None of the software executes on an interrupt basis. Each microcomputer is operated in a continuously looping program whose execution rate is compatible with the system time response. Inputs and outputs are sampled once each cycle. This simplifies the code by avoiding the necessity of providing interrupt support software, and it avoids a whole class of verification problems associated with determining that the program returns to the correct point after servicing an interrupt. The other interface of interest, shown in Figure 4, is the shared memory. A shared memory is a two port memory card that operates under a very rigid protocol which allows two microprocessors to access memory on the card without communicating directly with one another. The shared memory thus acts as a "software isolator" by rigidly limiting the degree of permissible interaction between the two microcomputers. As such, the influence of the failure of one microcomputer will be limited to the inability to place data into or retrieve data from the shared memory, and will not in any other way affect the operation of the other microcomputer.
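
The interrupt-free, continuously looping execution model can be sketched as below. The I/O routines, names, and cycle budget are illustrative assumptions, not the actual IPS code; the essential property is that inputs and outputs are sampled exactly once per fixed cycle and the processor never waits on events.

```python
# Sketch of a cyclic executive: one continuous loop, no interrupts,
# inputs sampled and outputs written once per cycle.

import time

CYCLE_TIME = 0.02   # assumed cycle budget compatible with system response

def read_inputs() -> dict:
    # Stand-in for sampling the parallel inputs once per cycle.
    return {"pressure_trip": False, "flux_trip": False}

def compute(inputs: dict) -> bool:
    # Fixed, bounded computation; with no interrupts there are no
    # re-entry points for verification to worry about.
    return any(inputs.values())

def write_outputs(trip: bool) -> None:
    pass  # stand-in for driving the contact closure to the TLC

def run_cycles(n: int) -> int:
    completed = 0
    for _ in range(n):
        start = time.monotonic()
        write_outputs(compute(read_inputs()))
        # Idle out the rest of the cycle instead of waiting on events.
        remaining = CYCLE_TIME - (time.monotonic() - start)
        if remaining > 0:
            time.sleep(remaining)
        completed += 1
    return completed

print(run_cycles(3), "cycles completed")
```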

RELIABILITY AND MAINTAINABILITY

The architecture of the IPS reflects concerns related to the attainment of reasonable reliability and maintenance goals. The reliability goal, expressed per demand for reactor trip, was chosen to be equal to the calculated reliability of the system it replaces. The maintainability goal was based on experience with systems of a similar size and was specified to be less than two card failures per month. These goals are, in themselves, contradictory, since the redundancy needed to meet the reliability goal works against the need to minimize the quantity of hardware, which favorably impacts maintainability.

The architecture of the IPS was evaluated during its design phases, and significant changes were made in response to unfavorable predictions. The architecture pictured in Figure 4 was originally conceived with a number of buses communicating partial trip information across the various functional modules under the control of microcomputer based data handlers. Shared memories allowed for the transfer of data while keeping the microcomputers operating independently and asynchronously. Each functional module performed its own two out of four (2/4) voting logic. The need for reliability mandated a minimum of four buses: three to distribute partial trip information from the other channel sets and one to collect information for transmission to the other three channel sets. When the design was evaluated for maintainability, it was determined that the quantity of "overhead" cards required to support the communications function was so large that it placed an unrealistic mean time between failures (MTBF) requirement on the cards needed to meet the maintainability goals.

The final design, shown in Figure 4, dealt with this problem by restructuring the blocks so that the voting logic is localized to only one module, the Trip Logic Computer (TLC). The other functional modules input to the TLC by simple contact


closures to provide one of the voted inputs. The TLC serves as the terminal of data links between one IPC and the other three. Such an arrangement significantly reduces the quantity of cards required in the system and brought the maintainability goal within the realm of possibility.

It can be seen from Figure 4, however, that solving one problem raised another. All trip signals now pass through one module, thus making its proper operation critical to the attainment of the reliability goals. The solution to this problem was to design the TLC around two microcomputers operating in a quasi-redundant manner, each checking on the proper operation of the other. One microcomputer is set up to receive the partial trip inputs from the channels in its host cabinet and combine those inputs with a one out of three vote on the equivalent channel trips from the other three cabinets to effect a reactor trip signal. The second microcomputer examines the partial trip statuses of only the three incoming channels in the other cabinets and performs a two out of three vote to generate a reactor trip signal. The combination of the logic performed by both microcomputers can be shown to be equivalent to classical 2/4 logic; however, due to the partitioning of the functions, a failure of either microcomputer will only disable part of the TLC. This approach, plus the cross checking and immediate reporting of a failed input, permitted the achievement of the reliability goal.
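
The claimed equivalence to classical 2/4 voting can be checked exhaustively over all sixteen channel states. The function names below are illustrative, not from the paper:

```python
# Sketch of the partitioned TLC logic: computer 1 trips on its host
# channel together with at least one of the other three; computer 2
# trips on two out of the three remote channels. Their OR is
# equivalent to classical two-out-of-four voting.

from itertools import product

def computer_1(local: bool, remote: list) -> bool:
    # Host cabinet channel ANDed with a 1/3 vote on the other cabinets.
    return local and sum(remote) >= 1

def computer_2(local: bool, remote: list) -> bool:
    # 2/3 vote on the other cabinets; the local channel is ignored,
    # so a failure here only disables part of the TLC.
    return sum(remote) >= 2

def tlc_trip(local: bool, remote: list) -> bool:
    return computer_1(local, remote) or computer_2(local, remote)

def classical_2_of_4(local: bool, remote: list) -> bool:
    return int(local) + sum(remote) >= 2

# Exhaustively check all 16 combinations of channel states.
for local, *remote in product([False, True], repeat=4):
    assert tlc_trip(local, remote) == classical_2_of_4(local, remote)
print("partitioned logic matches 2/4 voting")
```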

To further enhance reliability, and to minimize plant upsets due to loss of power and their effect on availability, each cabinet is designed to accept two sources of AC power which energize redundant DC power supplies. The DC power from each supply is distributed to each card through on-card auctioneering diodes. If the AC power sources are selected to be independent of each other in response to reasonable failure modes, the cabinet is ensured a continuing source of power. It should be noted that both sources of AC power must be derived ultimately from the same source of Class 1E power to preserve the required separation.
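
The auctioneering-diode arrangement amounts to a diode-OR of the two supplies, which can be sketched numerically. Diode forward drops are neglected and the voltages are assumed values for illustration:

```python
# Sketch of on-card auctioneering diodes: the card sees the higher of
# the two DC supply rails, so it stays powered while either AC source
# (and its supply) remains healthy.

def card_bus_voltage(supply_a: float, supply_b: float) -> float:
    # Diode-OR: the higher supply forward-biases its diode and feeds
    # the card; the other diode is reverse-biased and blocks.
    return max(supply_a, supply_b)

print(card_bus_voltage(24.0, 24.0))   # both supplies healthy
print(card_bus_voltage(0.0, 24.0))    # AC source A lost, card still fed
print(card_bus_voltage(0.0, 0.0))     # both lost, card de-energized
```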

COMMERCIAL COMPONENT AVAILABILITY

One of the basic ground rules during the design of the IPS and ICS was to utilize commercially available components. It was further determined that, wherever possible, the standard utility/process line of circuit boards would be used. This strategy was based on two premises. First, the use of "popular" commercial components permits the selection of devices which are "second sourced", thus minimizing the risks associated with being tied to one supplier. In addition, the use of components that are widely used provides a broader reliability data base and enhances the likelihood that they will be available over a long period of time.

Second, by using printed circuit cards that are shared by other product lines, one gains the advantages of a broad production base. These include economies of scale, an increased familiarity of design, manufacturing, and test personnel with the characteristics of the equipment, and the ability to have the equipment checked out in non-nuclear applications. Only in cases where there is no fossil utility or process application have circuit cards been designed specifically for the nuclear systems. These are primarily special cards used to interface with nuclear instrumentation detectors.


The microprocessor selected is an example of the selection of popular components. After evaluating many models, the 8080A type was selected. It is second-sourced by a number of vendors. Because of its wide usage, a complete set of support chips was available. The processor and its support chips are expected to be available for many years.

Even the most widely used components can be expected to become obsolete or prohibitively expensive over the years. The use of a carded approach allows the flexibility to deal with this problem, since it permits the selective redesign of certain cards in the system in response to the unavailability of particular parts, and also permits selective localized improvements within the usual constraints of form, fit, and function.

There are inevitable tradeoffs to be made concerning the use of commercially available components and their incorporation into a carded system. Typically, vendor quality control for commercial products is less stringent than for military products. This must be countered by a strengthened program within the system manufacturer's operation for vendor qualification and incoming inspection. There are times when a circuit board of a universal type contains unused functions. These functions may contribute to the unreliability of the card yet offer nothing directly in return. The compensating considerations here are the increased likelihood of spare parts availability and the benefits of a large history of usage, which reduces the likelihood of undetected design flaws.

EMI AND RFI SUSCEPTIBILITY

In earlier designs, the major concern in this area was electromagnetic interference (EMI) from electrical wiring and equipment permanently installed in the plant. However, with the increased use of radio communications on plant sites and telemetry for offsite communications, the concern has been broadened to include Radio Frequency Interference (RFI) as well. The latter is much more difficult to design against because of the portable transmitters used. For example, the transmitters could be used:

adjacent to equipment cabinets with all doors open,

adjacent to equipment cabinets with some doors open,

near congested areas of low level signals, affecting several circuits simultaneously,

near individual transducers, both inside the plant and at remote locations on the site,

near display devices, affecting only the readouts,

near computer and data processing equipment.

It was apparent early in the design that immunity to both EMI and RFI must be an established goal. However, no good standards or criteria exist for the installation and operation of installed equipment that would assure a consistent design approach.


For example, how close to solid state electronic cabinets or their signal wiring may a 480 VAC bus operating several 600 hp motors be routed? Is it permissible for plant personnel to operate a walkie-talkie inside the electronic equipment room? The control room?

In the IPS and ICS, certain specific efforts were made to design for EMI and RFI conditions. The total quantity of wires entering the cabinets was reduced by using multiconductor cables and multiplexing where appropriate. Fiber optics were also used in selected applications, further reducing the susceptibility to these noise sources. All low level signal cabling has been shielded, and care has been exercised to assure a consistent, logical grounding approach that avoids ground loops and other common grounding problems.

The coupling of fiber optics to electronics is an area of severe sensitivity to RFI. In the initial design of the equipment for this application, it was determined that if a certain walkie-talkie were within six inches of the card edge, the data link communication was disturbed. The circuitry and packaging of the optic receiver were redesigned to eliminate the problem.

The IPS prototype will be subjected to several tests to confirm that EMI and RFI susceptibility has been reduced to an acceptably small level. However, until industry wide standards have been developed and accepted by all parties (manufacturers, utilities, licensing agencies, and constructors), the question of how small is small will remain.

LICENSING ISSUES

In the development of any new Class 1E product, licensing issues are bound to arise. This is to be expected, since any departure from the previously accepted way of accomplishing a safety related function must be closely scrutinized by the regulators to ensure that the new design meets the appropriate criteria. In the case of the IPS, many things are being done differently from previous designs and, in recognition of this, a series of information meetings was held with members of the U.S. Nuclear Regulatory Commission (NRC) to review the new areas and provide the necessary background information and design criteria. In most cases, the design differences were easily reconcilable within the framework of existing regulatory criteria. Several areas did develop, however, which strained the interpretation of the criteria or were not adequately covered by existing criteria. These latter areas required considerable mutual effort between Westinghouse and the NRC to obtain resolution. These issues were verification and validation (V&V) of software and common mode failures.

The V&V of software was of particular concern to the NRC. The use of software based protection equipment had very little precedent, and the criteria by which the adequacy of a V&V program could be judged were in a formative stage. The NRC utilized the experience of the aerospace and avionics industry to form the basis for the development of criteria for the evaluation of software V&V programs. These criteria were applied to the IPS software V&V program developed by Westinghouse (Ref. 1), and it was established that the IPS program was substantially in agreement with the criteria. Minor differences were subsequently resolved to the satisfaction of all.


Common mode failures (CMF) are particularly difficult to deal with in a methodical manner, since the identification and treatment of a particular CMF mechanism merely removes one CMF from consideration while an unknown quantity of them still remains. Fortunately, sound engineering judgement and prudent design principles go a long way towards producing a system in which the residual threat of CMF can be considered acceptably low. The difficulty of showing that this is so lies in the lack of a systematic methodology by which the design can be evaluated and conclusions drawn.

This problem was addressed during the licensing phase of the IPS in a joint NRC/Westinghouse effort. The outcome of this activity was the document NUREG-0493, in which guidelines for the systematic evaluation of a design for its susceptibility to CMF's are provided (Section 3). NUREG-0493 also states that the architecture of the IPS "is not in conflict with the guidelines" and, on that basis, a recommendation for a preliminary design approval was given. Certain additional work was specified to be done prior to the final design approval. This work, primarily analysis and testing to demonstrate that the IPS conforms to the guidelines of Section 3, is currently under way.

APPLICABILITY

The IPS, a digital microprocessor system, has inherent design features not available in today's protection systems. These additional features, such as two level 2/4 voting logic and real time calculations of DNB and kW/ft, provide a much more powerful system. The system has the capability, if installed in operating plants, to make available additional operating margin. To fully achieve this, however, means of modifying or replacing many existing plant components, such as the reactor trip switchgear, the quantity of protection system sensors, the ESF actuation devices, and most of the main control room complex, must be available. Such a total package is the plant integrated control center (PICC).

Most operating plants do not have the space for such a major redesign effort and, even if they did, could not justify the plant downtime or the cost of such a major change. The technology developed and proven for the IPS, however, does have many potential applications in both the NSSS and BOP areas. The following is a list of such applications.

Core Limit Calculation - Provide direct, continuous indication of flux distribution and core limits on a CRT display.

Multiplexing - Provide proven multiplexing hardware/software, and where required fiber optics, for plantwide communications.

Stand-Alone Systems - Provide basic building blocks for plant upgrades and additional imposed requirements (such as TMI).

The basic system, as shown in Figure 2, is a four train reactor trip system with a two train ESF actuation system. This design is expandable to a three train ESF actuation system by simply adding a third ILC, and likewise to a four train system.


The concepts and the resulting design configuration, though applied to the NSSS initially, are equally applicable to the balance of plant systems and to other non-nuclear plant applications where a high degree of reliability and availability is necessary with commercially available equipment.


REFERENCES

1. D. M. Edison, E. Sternheim, "Software Design and Verification for Nuclear Protection Systems," IAEA/NPPCI Conference on Software Reliability for Computerized Control and Safety Systems in Nuclear Power Plants, Pittsburgh, Pa., July 1977.

2. A. J. Gibbons, D. F. Furgenson, "High Level Software Generation for Reliability," ibid.

3. J. M. Gallagher, Jr., G. M. Lilly, "System Architecture for Microprocessor Based Protection System," IAEA/NPPCI Specialists' Meeting on the Use of Computers for Protection System and Automatic Control, Nuremberg, Germany, May 1976.

4. J. M. Gallagher, Jr., E. J. Madera, J. B. Reid, "Design of Internal Architecture for Westinghouse Microprocessor Based Protection System," IAEA International Symposium on Nuclear Power Plant Control and Instrumentation, Cannes, France, April 1978.

5. J. M. Gallagher, Jr., G. Lecocq, C. Plennevaux, "Microprocessor Based Integrated Protection System," International Meeting on Nuclear Power Reactor Safety, Brussels, Belgium, October 1978.

6. E. J. Madera, D. M. Rao, G. W. Remley, "A Microprocessor Based Integrated Protection System for Nuclear Power Plants," ISA Power Symposium, Atlanta, Georgia, May 1979.

7. J. Sutherland, "Distributed Control Systems: Part of the Solution or Part of the Problem?," ISA National Conference, October 1977.

8. J. A. Donnelly, B. N. Lenderking, J. A. Neuner, J. F. Sutherland, "Secure Data Transmission in Nuclear Power Plants by Serial and Optical Techniques," IAEA/NPPCI Specialists' Meeting on Design of Electronic Equipment to Achieve Electromagnetic Compatibility, Winfrith, Dorset, U.K., February 1978.

9. B. M. Cook, "Use of Fault Tree Analysis in the Design of the Westinghouse Microprocessor Based Reactor Trip Logic System," American Nuclear Society Topical Meeting, Probabilistic Analysis of Nuclear Reactor Safety, Los Angeles, California, May 1978.

10. J. Bruno, B. M. Cook, D. N. Katz, J. B. Reid, "Microprocessor in Nuclear Power Plant Protection System," IEEE PES Meeting, New York, N.Y., February 1980.

11. J. Bruno, J. B. Reid, "Fiber Optics: Use in Nuclear Power Plant Control and Protection Systems," IEEE PES Meeting, New York, N.Y., February 1980.

12. Nuclear Regulatory Commission (NRC), "A Defense-in-Depth and Diversity Assessment of the RESAR-414 Integrated Protection System," NUREG-0493, March 1979.


BIOGRAPHIES

John Bruno was born in Johnstown, PA, on August 4, 1940. He received the BS and MS degrees in electrical engineering from the University of Pittsburgh in 1967 and 1968, respectively.

In 1968 he joined Westinghouse Electric Corp., in Pittsburgh, PA, as a control system designer for nuclear power plant control and protection systems. He is currently a senior engineer responsible for various aspects of the integrated protection system design. He is a registered Professional Engineer in Pennsylvania.

J. Brian Reid was born in Hamilton, Ontario, on September 20, 1940. He received the B.A.Sc. degree in Engineering Physics from the University of Waterloo, Waterloo, Ontario, in 1965.

In 1965 he joined Westinghouse Canada and in 1968 transferred to Westinghouse Electric Corp. in Pittsburgh, Pa. He is presently the manager of a group responsible for the specification and system design aspects of nuclear power plant protection and control systems.

Mr. Reid is a member of the Instrument Society of America.


DATA MANAGEMENT PROBLEMS WITH A DISTRIBUTED COMPUTER NETWORK ON NUCLEAR POWER STATIONS

BY I. DAVIS, C.E.G.B., SOUTH WESTERN REGION, BRISTOL, ENGLAND

1. Introduction

In the South Western Region of the Central Electricity Generating Board there are 4 nuclear power stations, all providing base load generation of about 2400 MW. The latest of these to be commissioned was Hinkley Point 'B' Power Station, which has 2 AGR reactors and 2 x 660 MW generating units. Along with many other stations designed at the same time in the 1960's, the majority of plant monitoring was committed to a process control computer (GEC M2140) which also handled some of the loop controls.

It is generally accepted that sometime during the lifetime of the plant the central computing system will need to be replaced. The date for this replacement has not been determined, and reference in this paper to Hinkley Point 'B' is to provide an example of general data management problems.

The objective of the paper is to identify a mechanism whereby data can be classified for allocation to a computer within a distributed network, and to show that this is compatible with the proposed approach for replacing a centralised system with a distributed one.

It is of prime importance to ensure that the data within the computer system is assigned a level of resilience appropriate to its use. In a move to a distributed network the total resilience of the system increases, but any individual function might be less reliable. High reliability of the computer system in a centralised approach is usually obtained through a 'hot' standby computer, which provides a broad based approach to maintaining performance in the event of component failure (Fig. 1).

The 'hot' standby approach is complex and practically difficult when distributed computers are used, and indeed invalidates the prime reason for adopting a distributed approach.

2. Advantages of a Distributed Computing Approach

The CEGB has for some time now preferred a distributed computer approach to plant monitoring and control, and there are now commissioned control systems on at least 3 major generating units. The approach has yet to be fully implemented for plant monitoring. Before a distributed computer system is installed on a nuclear power plant, the CEGB will have to show that the whole system and the critical components in it are adequately resilient. In practical terms this will mean proving that any new method is as resilient and reliable as the existing centralised approach. There are 4 major reasons why the distributed computing approach is considered to be a better option than a centralised one. These are:-


2.1 Resilience

Within a distributed computer environment the required degree of resilience can be obtained by considering each function separately and tailoring the computer support to the specific need. Thus duplication or even triplication of functions is only supplied where necessary. Other, less critical areas have no immediate functional support, and the rest of the system takes account of localised failure.

2.2 Modularity

Because each element in a distributed network can be given a small, well defined task, it is strategically easy to re-generate or replace any particular function. This feature is naturally only made available by a well designed system. In the event of a need to replace a specific control loop or monitoring function, the computer can be considered as an extension of the plant I/O and replaced or modified without affecting the rest of the system. However, the system design requires an explicit data management structure to permit such modifications.

2.3 Extensions

Any monitor or control element can be attached to the system at a later date without the fear of degrading the performance of the existing functions. The new elements can be added by engineers with only a limited knowledge of the rest of the system. This is obviously very valuable when considering refurbishment, in that once the basic system has been designed and implemented, it can "grow" to eventually replace all the residual functions of an existing mechanical or computer based system.

2.4 Cost

The actual cost of a distributed computer system is probably greater than that of a centralised one when considering just the purchase price of the hardware. However, over the lifetime of the computer system, the reduced maintenance work and the lack of a specialist environment for the distributed equipment are likely to more than compensate for the difference in cost. The major advantage of the distributed approach in terms of cost is the specialist manpower saving that accrues from using a high level engineer-orientated language (CUTLASS) and allowing piecemeal development with a pre-defined data management structure.

It can be seen that the single factor that has most effect on the benefits to be gained from distributed computing is the design of the "data-base", in that it must be simple enough to be used by engineers and well structured enough to provide the degree of flexibility and redundancy required. This is particularly true in the case of nuclear power plant.


3. Data Classification

The location of data within the distributed network should be determined by identifying the most appropriate degree of resilience that the data must have. For the sake of this paper, and the identification of a pragmatic approach, resilience is redefined to be directly related to the mean time to restore a specific function. This is because modern equipment is extremely reliable and the mean time to fail can be measured in months to years. Thus a highly resilient function would be one where the mean time to recovery was very short.

For simplicity, 3 levels of resilience are considered here to be appropriate for a distributed computer network on power plant.

3.1 High Resilience:- defined as where the mean time to recovery is less than 1 hour. Here the resilience can only be achieved by automatic means, whereby either duplication of function is provided or manual switching to an alternative approach. On conventional plant this approach is typified by control systems that might freeze and then switch to manual with an alarm indication.

Particular areas in nuclear power plant which fall into this category are the BCD monitor, the shut down sequence equipment and rod control.

3.2 Medium Resilience:- defined as where the mean time to recovery is less than 1 day. In this category partial failure may limit strategic information and plant flexibility, but would not put the plant in jeopardy. It is essential to have the ability to recover from this situation by facilities on site using shift staff. Typical examples of this are vibration monitoring on the reactor or turboalternator, strategic history information and some incident recording.

3.3 Low Resilience:- defined as where the mean time to recovery is greater than 1 day. Here repair can be left to day staff or the manufacturer's maintenance teams. Plant performance is not affected by such items, although the degree of functional "back-up" might be. Items here would usually include equipment and functions that are generally more complex; for example, failures within the communications network, which are not significant because of its re-routing ability. Data items that come into this category are long term history information and performance monitoring, which are required as part of the off line data collecting facilities. Generally this would not be operational information but related to the long term plant integrity. Typical examples are the deposition rate monitor for graphite on the fuel elements, and life fraction calculations on the boiler tubes.

Having defined the 3 broad classifications, it is necessary to allocate the degree of resilience required for every data item in the particular system. This does not only include primary data measured from the plant, but also secondary derived data, which usually requires the bulk of the data storage. This process can be simplified by categorising the data into the following major classifications.


(i) Loop control and safety, requiring high resilience

(ii) Operating strategy, History and Incident records, broadly requiring medium resilience

(iii) Monitored and Statutory information, generally requiring low resilience.

These are shown in Fig 2.

The main reason for classifying the data into these categories is that high resilience is costly and complicated and therefore should only be made available when absolutely necessary. The classifications also map very conveniently onto a hierarchical network and simplify the "data-base" requirements.
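The mean-time-to-recovery thresholds defined in sections 3.1 to 3.3 can be sketched as a simple classification rule. This is an illustrative sketch only, not from the paper; the item names and recovery times below are hypothetical.

```python
# Illustrative sketch (not from the paper): assigning the three resilience
# categories of sections 3.1-3.3 from a data item's required mean time
# to recovery (MTTR).
def resilience_level(mttr_hours: float) -> str:
    """Classify a function by its required mean time to recovery."""
    if mttr_hours < 1:
        return "high"      # automatic duplication or switch to manual
    if mttr_hours < 24:
        return "medium"    # recoverable on site by shift staff
    return "low"           # repair left to day staff / manufacturer

# Hypothetical data items with required recovery times in hours
items = {
    "rod control": 0.5,
    "vibration monitoring": 8,
    "long term history": 72,
}
levels = {name: resilience_level(mttr) for name, mttr in items.items()}
assert levels == {
    "rod control": "high",
    "vibration monitoring": "medium",
    "long term history": "low",
}
```

Such a rule would be applied to every primary and secondary data item when allocating it to a processing node.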

4. Network Organisation

This paper deals with a theoretical approach to the management of data in a distributed computer network. The prime reason for this study is to determine a strategy for the computer replacement at locations such as Hinkley Point 'B' when the existing equipment becomes too restrictive or unmaintainable. This is unlikely to happen over the next 5 years, and therefore assumptions are made about the availability of hardware. In particular, it is assumed that in this timescale intelligent message handling networks will be available which automatically provide routing redundancy with multiple ports to the processing nodes. Therefore no account is taken of the resilience of the network, except that it could be repaired on the timescales defined for low resilience systems (greater than one day). Structures such as this are already available with some main frame manufacturers' terminal systems. It is only a matter of time before they are available in the process control minicomputer market.

With this assumption the normal concept of a hierarchy is not relevant, but it is retained to assist the design of the "data-base". The normal linking hierarchy is replaced by a resilience hierarchy, with plant I/O accessing the hierarchy at all levels. Fig 3 illustrates the proposed type of hierarchy, which is basically 3 dimensional in that some of the high resilience functions are provided by cross linked duplicate or triplicate processors.

Unfortunately there is one plant area that causes difficulty in this approach and has to be considered separately: the provision of on-line displays to the control room. The problem is that the display processors only hold background and format data locally and access the specifically required live data from the processing centre assigned with that data. Thus, although it is obvious that the display processors are given the category of 'high resilience' and their functions will probably be triplicated, the data storage requirements fall more into the 'medium resilience' field. For this reason the displays and associated processors are not included in the hierarchy but located in another plane, as in Fig 4.

It is worth noting that the backing store requirements are broadly compatible with the resilience levels, in that the highly resilient level generally requires no backing store (except for displays), the medium level would usually rely solely on solid state forms of backing store, whilst the low resilience level would use both solid state backing store and removable data storage such as magnetic tape and discs.

5. Replacement of a Centralised Computer

There are a number of very significant factors which cause difficulty when replacing an existing central computer system with a distributed network. These problems are anticipated at Hinkley Point 'B' and must be solved and proven at another location where the criticality of the computer performance is not as great. Within the South Western Region of the CEGB there are 2 power stations with ageing computer systems that could be used as a trial site.

These problems are itemised below:

5.1 Complete Change Over

The obvious approach to replacing the centralised computer is to remove the existing computer and replace it with a distributed network. However, this is considered too risky and violates some of the advantages of a distributed approach. The computer room at Hinkley Point has very little spare space for more computer equipment, even though the replacement system would not take as much space as the original machine. The "parasitic replacement" approach is favoured, whereby complete functions are removed from the central machine and implemented on other equipment. An example that has already occurred for other reasons is the 400 kV network alarm monitoring system. The display system is another good example, in that it would benefit from being upgraded to use modern colour graphics displays. Replacing this function on modern minicomputers would save considerable space in the computer room, which could then be used to introduce more components of the distributed network.

5.2 Shared Input

Some of the older scanners in service on station monitoring systems are difficult to couple in parallel with new scanners. For example, the traditional scanners in use at Hinkley Point Power Station test the condition of the thermocouples by injecting 50 volt spikes, which complicates the concept of shared inputs. Where absolutely necessary, this can be overcome by front end signal conditioning, but this tends to be expensive and not fully successful. The more appropriate method during the transition period is to scan the data with the new or old equipment and transfer the data between processors.

5.3 Time to Replace

The time taken to develop the original software for the Hinkley Point 'B' central computer system was about 60 man years. To replace the complete function with a distributed network could require about 20 man years of effort, not including the essential engineering works. This degree of effort would be better managed if the 'parasitic' approach to replacement was followed, whereby complete transfer of the computing function could occur over 2 to 3 years and only require a limited outage time. Using this approach the original computer system eventually does no more than route messages, in the same way as the network organiser.

6. Implementation Strategy

The strategy to adopt, therefore, is a piecemeal approach where specific functions are taken from the central computer system and implemented on the distributed network. It is suggested that the first major function to be replaced should be the display system, as this is usually the one most in need of being updated; in the particular case where space in the computer room is extremely limited, this should release enough space to start implementing the network and other parts of the system, including the upgraded displays. The next major area is the 'Category 1' instrumentation and the control systems, which can be treated in isolation and implemented with redundant processors to obtain the required level of resilience. Once this stage has been completed and the new distributed network commissioned, the rest of the conversion exercise can proceed relatively simply.

Table 1 lists the broad plant areas at Hinkley Point, together with the numbers of analogue and digital inputs on the existing central computer system and the probable numbers of minicomputers used to replace the existing functions. The total number of inputs is about 6000 and would be handled with over 50 minicomputers. It can be seen that the number of medium and low resilience computers is small compared with the high resilience ones, although the latter are larger and more complex.

The most interesting factor in this approach to the replacement of the centralised computer with a distributed network is that, with the use of a high level engineer-orientated language (CUTLASS) with built in data protection, the majority of the work can be done by engineers with a close understanding of the plant. It is only at the higher levels of the hierarchy, and in the basic design of the network and 'data-base', that the skills of computing specialists are needed to implement the applications routines. Similarly, any alterations or additions can be done by station engineers with only a limited degree of outside assistance.

7. Conclusions

By approaching the design of a distributed computer network from the requirements of the data in the system, a simple pragmatic approach can be used to determine the required system resilience. A network architecture can thus be built up which is extremely flexible and able to accommodate extensions in the future. The essential feature of this approach is that the resultant architecture is simple to understand and therefore can be used by non computer specialists who understand the exact plant characteristics. With the help of an appropriate software system (CUTLASS) these same people can implement the applications routines in a secure environment.


This approach is particularly relevant when considering the replacement of a centralised computer system with a distributed one. Once the overall strategy has been defined, the parasitic approach to piecemeal replacement of function within the old system can be managed easily and eventually leads to a fully distributed system with the desired characteristics.

The proposals in this paper are only theoretical, but the debate on how to replace the centralised computer systems with distributed networks on nuclear plant in the CEGB has started. Such an approach will not be used on nuclear plant for a few years. Prior to this, the approach will be studied theoretically and probably on conventional plant. It is hoped that these proposals will generate some discussion which can beneficially affect the approach prior to actual implementation on nuclear plant.

Acknowledgements

The author would like to thank the Station Manager and staff of Hinkley Point Power Station, in particular Mr C Green, for their assistance in the preparation of this paper.

This paper has been produced by permission of the South Western Region of the Central Electricity Generating Board.

March 1980


PLANT AREA (SYSTEM): C W Systems, Turbovisory, Alternator, Feed Heat, Boiler Feed Pumps, Boilers, Circulators, Reactor, Pressure Vessel, Decay Heat, Safety Systems, Health Physics, Gas Turbines, Supplies, 400 kV, Others; plus Level 2 (History etc.), Level 3 (Monitor etc.) and Displays.

TOTAL INPUTS: 2899 analogue, 3021 digital (5920 in all).

Table 1. Existing number of data inputs on the unitised central computer system at Hinkley Point 'B' Power Station and approximate number of minicomputers required to replace it.


FIG. 1. CENTRALISED COMPUTER SYSTEM WHERE PLANT (R) AND PERIPHERALS (D) ARE SWITCHABLE TO A REDUNDANT PROCESSOR (C)

FIG. 2. THE RELATIONSHIP BETWEEN DATA TYPES, RESILIENCE AND OTHER DATA IN A DISTRIBUTED NETWORK (recoverable labels: resilience high/medium/low; control/safety; strategic history; condition monitor data; data storage; long mean time to repair)

FIG. 3. HIERARCHICAL NETWORK OF PROCESSORS WITH DUPLICATION OF FUNCTION AT THE LOWER LEVEL (N.B. THE DATA HIGHWAYS ARE NOT HIERARCHICAL)

FIG. 4. DISPLAY FUNCTION IN THE HIERARCHICAL NETWORK


The Use of Distributed Micro Processors for Sodium Pre-Heating System

K. FUJII, Fuchu Works, Toshiba Corp.

T. SATOU, Fuchu Works, Toshiba Corp.

M. SATOU, Advanced Reactor Engineering Department, Toshiba Corp.

H. OKANO, Nuclear Engineering Laboratory, Toshiba Corp.


Abstract

This article deals with a hierarchy/distributed control system for the sodium system in a Liquid Metal Cooled Fast Breeder Reactor (LMFBR). The control system consists of mini-computers, a computerized control panel and distributed front-end control units with micro-computers. In this system, the concentration of the plant operation information and the distribution of the control function are aimed at improving the man-machine communication and increasing the system availability. The preheating control device with micro-computer, dealing with several thousand temperature control points, and the preheating system to maintain piping and components above the sodium melting temperature, have been developed and tested at actual sodium test facilities, and the result satisfies the system requirements for application to the actual prototype LMFBR.

1. Introduction

The trend toward ever larger and more sophisticated nuclear plants has accentuated the requirement for their safe and efficient operation, and this has stimulated extensive studies in the utilization of digital computers and micro computers for nuclear power plant control and monitoring. For a sodium-cooled Fast Breeder Reactor (FBR), which contains additional systems and components compared with a Light Water Reactor (LWR), the operation procedure and control system are accordingly more complicated. Considering this situation, we developed, as the first step in 1976, a centralized full automation system composed of a duplex computer for the sodium test facilities in our laboratory, which have several thousand control points.

Furthermore, as the second step, the authors have designed a more advanced and more reliable control system, drawing on the operating experience of the automation system, for application to the actual prototype FBR. A combined structure of a hierarchy computer complex and a distributed control subsystem is aimed at in our study to improve the performance, man-machine communication, reliability and construction and/or arrangement flexibility. This concept agrees with the technical trend to concentrate the plant operation information and to distribute the control function. Improvement of the microprocessor and/or micro computer, which provides high performance and high reliability at the front-end controller, can contribute to solving problems related to control that would be encountered in future FBR operations.

The preheating system using a micro computer has been developed, mainly for application to the FBR, and has been tested at the actual sodium test facilities.

2. Aspect of the prototype FBR plant control systems

The first prototype FBR in Japan will be composed of major systems such as the Reactor, Primary Cooling System, Intermediate Heat Exchanger (IHX), Secondary Cooling System, Auxiliary Cooling System (ACS), Steam Generator, Turbine-Generator, Electrical Equipment and Auxiliaries. The plant control system will provide overall control and coordination of the above systems for the whole operation period. The TMI accident typically requires of the control and instrumentation system more efficient and sure surveillance to assimilate a vast quantity of plant information. In addition, as the operation of the FBR is highly dependent on rapid and accurate monitoring of nuclear and thermal processes, the utilization of computers, which results in improved operation performance, is becoming standard. Furthermore, an advanced electronic control panel and multiplexing technology, to reduce cable quantities and to increase performance, are being considered.

The functional configuration of the computer systems for the typical prototype FBR is shown in Figure 1, and their functional description, except for the offsite computers, is briefly given in Table 1. The site computers are allocated by respective functions and each block computer is designed as a reliable duplex or multi system.

3. Automation System for Cooling System

3.1 Centralized Full Automation System

The centralized full automation system, mainly composed of a duplex system with the process computer TOSBAC-40C, was developed in 1976 for the sodium test facilities in our laboratory. Components included in these facilities are: 10 tanks, 6 electro-magnetic pumps, 13 heat exchangers, 4 cold traps, 5 plugging indicators, 120 sodium valves, 60 argon valves and 800 electrical heaters. The test facilities are divided into four loops and these can be operated independently. The automation system consists of the TOSBAC-40C duplex system, an interactive computerized operator console, an interface between the computer and plant actuators, a system interlock and a safety protection system.

The automatic operation system was developed aiming at: (1) fully automated simultaneous operation of four test loops by means of a digital computer directly; (2) a computerized operator console with improved man-machine communication for the numerous plant input/output signals; (3) ultimate safety protection measures provided by separate hardware independently of the computer system, so that the computer would be relieved of protection duties, with plant/computer interfaces designed to assure high reliability and safety. The computer system assures fully automated normal operation of the four sodium test loops, including start up, steady state run and shut down. Excluded from this automatic system are the functions of inspection and testing, as well as the maintenance operations before start up and after shut down. Particular effort was applied to automation of the start up and shut down operations, which call for the manipulation of more than 100 valves and components in the appropriate procedure, and the supervision and control of 800 temperature control points.

The basic sequences of operation are largely common to all of the four loops: (1) start up preparation, (2) argon gas exchange, (3) preheat, (4) sodium charge, (5) steady state run, (6) heat down, (7) drain, (8) plug check, (9) reset operation. Direct digital control (DDC) items for this system are as follows: (1) temperature ON/OFF control for 550 loops, (2) temperature PID control for 250 loops, (3) air flow cooling control for 20 loops, (4) sodium flow control with a multivariable control method for 13 loops, (5) operation of sodium valves for 120 points, (6) operation of argon valves for 60 points, (7) operation of dampers for 20 points, (8) other component operations for 100 points.
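The temperature ON/OFF control applied to the 550 loops above can be sketched as follows. This is an illustrative sketch, not the Toshiba implementation; the setpoint and deadband values are hypothetical, and a deadband is assumed so the heater does not chatter around the setpoint.

```python
# Hedged sketch of one scan of an ON/OFF temperature control loop,
# of the kind used for preheating. Setpoint/deadband values are
# hypothetical, not taken from the paper.
def on_off_control(temp_c: float, heater_on: bool,
                   setpoint_c: float = 200.0, deadband_c: float = 5.0) -> bool:
    """Return the new heater state for one scan of one control loop."""
    if temp_c < setpoint_c - deadband_c:
        return True            # too cold: switch heater on
    if temp_c > setpoint_c + deadband_c:
        return False           # too hot: switch heater off
    return heater_on           # inside deadband: hold current state

# One scan over a few hypothetical temperature readings
assert on_off_control(180.0, False) is True    # below band: heat
assert on_off_control(210.0, True) is False    # above band: stop
assert on_off_control(198.0, True) is True     # in band: state held
```

In the actual system such a scan would run for every one of the 550 ON/OFF loops on each control cycle, with the PID loops handled separately.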

The computer hardware system is shown in Figure 2. The sequence control program using the plant table method, like COPOS (Computerized Optimum Plant Operation System), the man-machine communication program for the interactive computerized operator console equipped with a colour cathode-ray tube (CRT) display device, like PODIA (Plant Operation by Displayed Information and Automation), the DDC programs and others have been developed for assuring automatic plant control.

Three years' operating experience of this full automation system has been fully satisfactory compared with the conventional manual control system. In particular, the control performance by DDC, one man operation, the supervisory function and man-machine communication have been greatly improved. Through these experiences, the requirements of retaining free manual operation if required, an on line maintenance function for the system and a more reliable system have been newly recognized to be necessary.

3.2 Distributed Control System

The actual cooling system in the FBR is similar to the test facilities in our laboratory with respect to basic components and operation procedure. Nevertheless it is an important but troublesome job to control the system: the operator must operate the cooling system, which is more complex and has additional components compared with the LWR, while the plant starts up or shuts down. In addition, control panel size reduction, highly reliable devices and cable quantity reduction are serious problems in accomplishing the prototype FBR in Japan.

Because of this situation, we have designed a more advanced and reliable control system than the automation system described above. Figure 3 presents the newly developed control system and Table 2 shows the functions allocated to each control unit.

The concept of the newly developed control system is aimed at acquiring the following functions: (1) man-machine communication and the surveillance function should be further improved through the computerized control panel; (2) actuators should be controlled by distributed front-end controllers separated from the centralized computer, so that the system can not only fully automate the cooling system for one man operation but can also be operated freely as in the conventional manual operation system; (3) online maintenance, partially avoiding system failure, should be improved; (4) the system diagnostic function should be further improved.

The process computer TOSBAC-7, having higher performance and higher reliability than the TOSBAC-40, has been newly developed.

Already, the programmable logic controller (PLC) TOSMAP (Toshiba Microprocessor Aided Power Controller) had been developed and applied to actual power plants. TOSDIC (Toshiba Direct Digital Controller) had also been developed and applied widely in actual industries. TOSMAP and TOSDIC can both be linked to the process computer TOSBAC-7. Therefore this control system can be composed as a hierarchy and distributed complex system, ensuring the concentration of plant information and the distribution of control functions. Though several tens of kinds of temperature control devices are readily available in industry, there is no suitable preheating control device for the FBR, which must control a very large number of temperature control points, more than several thousand. Considering this situation, a preheating system especially for the FBR has been developed.


4. Distributed Preheating System

4.1 General

Generally, the design of a distributed control system requires the investigation of system optimization with respect to concentration and distribution. If the distribution of devices proceeds to the ultimate, it will greatly increase cost, space, complexity of maintenance and communication with the computer, like that of the conventional control devices. On the other hand, a lack of distribution will cause the same problems as in the full automation system described above. Therefore, a moderate distribution level depending on the system scale will be expressed by the following equation.

f(w, n) = Σ wi ni , i = 1 ... m        (1)

    ni : parameter relating to system requirements
    wi : weight coefficient of ni
    m  : scale variable
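As an illustration, assuming equation (1) is a weighted sum of the system-requirement parameters normalised by the scale variable, a distribution level could be scored as below. All parameter names, weights and values are hypothetical; this is only a sketch of how such a trade-off function might be evaluated, not the authors' formulation.

```python
# Hedged sketch: assumes equation (1) is a weighted sum of requirement
# parameters n_i with weights w_i, normalised by the scale variable m.
# All parameter names, weights and values below are hypothetical.
def distribution_level(params: dict[str, float],
                       weights: dict[str, float], m: float) -> float:
    """f = (sum of w_i * n_i over shared parameter names) / m."""
    return sum(weights[k] * params[k] for k in params) / m

params = {"cost": 3.0, "space": 2.0, "maintenance": 4.0}
weights = {"cost": 0.5, "space": 0.2, "maintenance": 0.3}
assert abs(distribution_level(params, weights, m=1.0) - 3.1) < 1e-9
```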

4.2 System Requirements

1. With the equation described above, our centralized automation functions would be retained: improved man-machine communication with the computerized control panel, centralized data processing and monitoring, and control performance such as rate of change control depending on the operating procedure and balanced temperature distribution.

2. Direct digital control should be done with the front-end controller, to continue control and monitoring when the computer is released.

3. The front-end controller should be allocated to each suitable block to keep the system availability without a backing-up system.

4. Manual operation could be selected freely and partially if the operator requires.

5. Optimization of cost performance ratio

6. Reducing total space

7. Reducing cable quantities

8. Having system and self diagnostic function

9. Keeping a suitable information-transmission ratio between the computer and front-end controllers

10. Long life without failure or trouble

11. Maintainability

12. Modular system for expansion and modification

13. Simplified system

4.3 Function and System Allocations

Functions and their system allocations are basically defined as shown in Table 3.


4.4 Preheating Control Device

The preheating control device, as a front-end controller, has been divided into a control unit, a relay unit and a power switch unit with an electric leakage to ground detection and breaking feature (ELB). Figure 4 shows the conceptual architecture of this device.

Normally a set of three modules, comprising a control unit, a relay unit and an ELB unit, is used to control heaters, but it is possible to use only a control unit to monitor the temperature points. The prototype FBR will have about three thousand heaters and about two thousand monitoring inputs.

4.5 Architecture of Preheating Control Device

The optimum solution with respect to distribution was found by considering the investigation of equation (1) under the technical restrictions. 256 points are the maximum quantity to be processed by each control unit, and they should be divided into 16 groups x 16 points/group at maximum.
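The 16 groups x 16 points/group layout above can be expressed as a simple address decomposition. This sketch is illustrative only; the function name and point numbering are assumptions, not from the paper.

```python
# Illustrative sketch (not from the paper): decomposing a control unit's
# point number into the 16 groups x 16 points/group layout described above.
def group_and_index(point: int) -> tuple[int, int]:
    """Map a point number (0..255) to its (group, point-within-group)."""
    if not 0 <= point < 256:
        raise ValueError("a control unit handles at most 256 points")
    return divmod(point, 16)   # 16 points per group, 16 groups

assert group_and_index(0) == (0, 0)
assert group_and_index(255) == (15, 15)
assert group_and_index(37) == (2, 5)    # 37 = 2*16 + 5
```

The same fan-out appears in the power switching: an ELB unit contains 16 ELBs, each switching 16 SSRs, again giving 256 heaters per control unit.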

An ELB unit contains 16 ELBs, and each ELB turns on or off 16 SSRs; the number of SSRs corresponds to the number of heaters.

A relay unit contains 128 SSRs as a standard board size.

A control unit has a micro computer with process input and output, a control panel, a power supply and a transmission module, but the reference block terminals to compensate the thermo-couple cold point are excluded. The reference block would be installed at a respective place locally, and multi wire cable would be installed for the reduction of cable quantities and of expensive thermo-couple compensating wire.

Specification of micro computer

CPU :
Memory : RAM, ROM, N-RAM, total 8 kW

Process input
  Analogue : 256 T/C + offset point
  Digital : 16 ELB status + SSR status (option)

Process output
  Digital (TTL) : 256 points for alarm indicators
                  256 points for SSR
  Digital (power type) : 256 points for EMR (option)
  Parallel input/output : 1 set for thyristor control (option)

Interface for operator console

Transmission module : 1 set

Photo 1 shows the prototype preheating control device developed in 1978; the device was demonstrated in an actual sodium test loop at the Oarai Engineering Center of the Power Reactor and Nuclear Fuel Development Corporation (PNC) in 1979.

In the device shown in Photo 1, the transmission module is not included but a reference block is included in the control unit.

5. Conclusion

As a result of the experimental study of the possibility of efficiently operating and controlling the sodium cooling system, a hierarchy and distributed computer complex system can contribute significantly to solving the problems related to the control system that would be encountered in actual prototype FBR operation. In particular, the newly developed preheating device was demonstrated in an actual sodium test loop installed in the PNC Oarai Engineering Center, without any trouble.

In considering the future technology prospects related to the control system, the materials and devices used in the system may be changed by the rapid progress of electronics, along with their performance and reliability, and the concept and architecture of the control system discussed above would likewise be improved more and more frequently and realistically.

Acknowledgement

The authors wish to thank the members of the Oarai Engineering Center of PNC for their suggestions, opportunity and support throughout this study.


REFERENCES

(1) G. Yan, et al., "Distributed Computer Control System in Future Nuclear Plants", IAEA-SM-226/100.

(2) M. Olino, et al., "Development of Computer Control System for Sodium Loops", NUCLEX 73, Basel, Switzerland.

(3) S. Takamatsu, et al., "Computer Control System for Sodium Test Loops", Journal of Nuclear Science and Technology, Vol. 15, No. 3 (Aug. 1978).

(4) H. Otaka, et al., "BWR Centralized Monitor and Control System, PODIA Development", Toshiba Review No. 107 (1977).


[Figure-1: Total Computer System. The main computer is linked to a training simulator, a large-scale business computer, a display computer, and a management computer, and to subsystem computers for in-core diagnostics, the primary cooling system, the secondary cooling system, and fuel handling (see detail in Fig. 3).]


[Figure-3: Distributed Automation System for FBR Cooling System. The cooling system computer connects upward to the main computer and, via TOSWAY, to a computerized control panel for the cooling system, a safety system control panel, and the front-end controllers: a TOSMAP preheating controller, a TOSMAP programmable logic controller, and a TOSDIC digital process controller.]

[Figure-4: Conceptual architecture of preheating control device. A control unit (microcomputer with process I/O) performs scan, alarm, control, man-machine communication with the console panel, device diagnostics, and transmission testing. It takes inputs from temperature sensors and reference conditions; exchanges scan/alarm information, display information, and operating information with the cooling system computer, the main computer, and the computerized control panel (CRTs, annunciators, operator console switches) through a transmission module; and issues control signals to solid-state relays, thyristors, and electromagnetic relays that switch power on/off to the heaters. An ELB unit detects current leakage to ground. A console panel and alarm indicators are attached to the control unit.]
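The scan/alarm/on-off control cycle attributed to the control unit can be sketched as follows. This is an illustrative sketch only: the setpoint, dead band, alarm limit, and the function names (`read_temperature`, `set_relay`, `raise_alarm`) are assumptions, not taken from the actual device.

```python
# Illustrative sketch of a scan/alarm/on-off preheating cycle.
# All limits and channel counts are assumed, not from the real device.

SETPOINT_C = 200.0   # target preheat temperature (assumed)
HYSTERESIS_C = 5.0   # dead band to avoid relay chatter
ALARM_HIGH_C = 250.0 # over-temperature alarm limit (assumed)

def scan_and_control(read_temperature, set_relay, raise_alarm,
                     channels=range(256)):
    """One scan pass over all thermocouple channels."""
    for ch in channels:
        t = read_temperature(ch)
        if t > ALARM_HIGH_C:
            raise_alarm(ch, t)        # drive an alarm indicator output
        if t < SETPOINT_C - HYSTERESIS_C:
            set_relay(ch, True)       # energize the solid-state relay
        elif t > SETPOINT_C + HYSTERESIS_C:
            set_relay(ch, False)      # de-energize
        # inside the dead band: leave the relay in its present state
```

The dead band prevents relay chatter near the setpoint; Table-3 notes the device also supports PID control, which would replace this two-threshold logic.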


Table-1. Functions of computers

Main Computer:
1. Core performance calculation
2. BOP performance calculation
3. Operation guide
4. Scan, log & record
5. Data gathering

Display Computer:
1. Centralized display with CRT
2. Alarm analysis display

In-core Diagnostic Computer (following diagnostics):
1. Failed fuel detection (bulk & local)
2. Noise analysis
3. Fluctuation method
4. Sodium boiling detection
5. Reactivity diagnostic
6. Loose parts

Primary Cooling Sys. Computer:
1. Scan, log and monitor
2. Sodium leak monitor
3. Operation guide and supervisory of primary cooling system operation

Secondary Cooling Sys. Computer:
1. Full automation & man-machine communication (see detail in Table-2)

Fuel Handling Computer:
1. Automatic control for fuel handling machine and rotating plug
2. Prevent misoperation
3. Control & imaging of under-sodium viewer
4. Manage the fuel inspection data

... Computer:
1. Personal radioactive exposure
2. Radioactive exposure evaluation
3. Weather
4. Monitoring post
5. Radioactive waste disposal
6. In/out of the control region

Table-2. Function Allocation of Cooling System

Supervisory Computer (TOSBAC-7):
1. Start-up/shut-down automation
2. Maintain the computerized control panel
3. Data collection from front-end controllers; logging and recording
4. System monitoring
5. Information transmission between front-end controllers and main computer
6. System diagnostic

Computerized Control Panel (includes):
1. Colour CRTs with keyboard and light pen
2. Operator ... for automation
3. Annunciators
4. TOSDIC
5. Minimum control switches, recorders, indicators, controllers

Preheating Controller:
1. Preheating control
2. Automation of preheating
3. Scan, alarm
4. Information supervisory
5. Diagnostic
6. Man-machine communication for manual operation

Programmable Logic Controller (TOSMAP):
1. Sequence control
2. Automation of sequence control

Digital Process Controller (TOSDIC):
1. Process control & indication
2. Automation of process control

Hardwired Control:
1. Safety system
2. Interface relays

Others:
Subsystem automation, e.g. plugging & sodium purity system


[Table-3: Functions and system allocations of preheating system. The table allocates the following functions between the computer (+ panel) and the front-end controller: 1. Scan; 2. Alarm; 3. Control (ON/OFF or PID); 4. Supervisory control (full automation, rate-of-change control, minimization of temperature deviation); 5. Logging; 6. Protection; 7. Manual operation; 8. Diagnostics of device; 9. Test of the control loop; 10. Data transmission between the main computer and the device.]

Photo-1: Preheating Control Device (left to right: a control unit, a relay unit and an ELB unit)


DISTRIBUTED SYSTEMS DESIGN USING

SEPARABLE COMMUNICATIONS

by

A.C. Capel and G. Yan

ABSTRACT

One of the promises of distributed systems is the ability to design each process function largely independently of the others, and in many cases locate the resulting hardware in close proximity to the application. The communications architecture for such systems should be approached in the same way, using separable communications facilities to meet individual sets of requirements while at the same time reducing the interactions between functions. Where complete physical separation is not feasible and hardware resource sharing is required, the protocols should be designed emphasizing the logical separation of communication paths. This paper discusses the different types of communications for process control applications and the parameters which need to be characterized in designing separable communications for distributed systems.


DISTRIBUTED SYSTEMS DESIGN USING

SEPARABLE COMMUNICATIONS

by

A.C. Capel and G. Yan

1. DISTRIBUTED ARCHITECTURES FOR PROCESS CONTROL

To achieve an acceptable balance between cost and performance in designing distributed systems, architectures must be selected to match the specific requirements of each application. Although no practical design methodology is yet available to help designers to better understand distributed system properties and to guide them in generating more accurate specifications, a generalized design sequence was formulated to illustrate the application of established design methods in three major design areas: processing clusters, data communications, and databases [1].

In the data communications area, a wealth of information is already available [2,3]. Much of this work is focussed on the business data processing market where resource sharing and remote human interactive services [4,5] must be supported over large physical distances, using facilities based on existing plant. Other work, directed towards the sharing of in-house computing, storage and terminal equipment, has led to highly-multiplexed localized single bus networks [6,7,8]. In contrast, the process control requirements, such as those found in a nuclear power plant, generally encompass a spectrum of communication needs which will call for a range of solutions.

One of the promises of distributed systems is the ability to design each process function largely independently of others, and in many cases locate the resulting hardware in close proximity to the application. Consequently, effective communications must be provided since functions now communicate over larger physical distances and between many generically dissimilar machines. Rather than following the trend of using single highly multiplexed buses, it is proposed that separable communications would best meet the requirements of process control applications.

2. SEPARABILITY AS A DESIGN OBJECTIVE

A distributed system can be envisaged as a complex assembly of hardware/software elements overlayed by an interconnected assembly of data acquisition, processing, and control functions. Each of these functions, while performing the individual actions of, for example, pressure control, temperature control, operator input, should be designed in a manner which reduces interactions between each other due to implementation details.

To simplify the design tasks, the general approach taken is to subdivide the total information transport system into subsystems, each carefully matched to a particular set of requirements. A full understanding of the requirements is essential so that the partitioning of the total job can be carried out efficiently.

This partitioning is similar to that done for the components of the distributed system itself. Figure 1 illustrates this analogy of equating centralized processing with highly multiplexed communications, and distributed processing with separable communications. The rationale is simple. Instead of simply looking at the cost savings of shared hardware, which is the major attraction for both centralized processing and highly multiplexed communications, one should now consider the better matching of requirements with the available electrotechnologies which would lead to distributed architectures and separable communications.

While some functions with stringent requirements will use separate communications hardware, clearly the cost and mechanical constraints of today will require some sharing of common equipment. The first task is to identify user groups which have conflicting requirements or those for which the sharing of common facilities is not desirable nor economically attractive. Within each sharing user group, designs which make one user logically independent of the others should be adopted. In this manner certain performance parameters, e.g. throughput, can be guaranteed to be independent of the state of other users.


[Figure 1: Comparison of processing and communication architectures. Three arrangements are shown: central processing with internal communications, distributed processing over highly multiplexed communications, and distributed processing over separable communications.]

[Figure 2: Separable protocols. Three layered protocol stacks interface users to a communications medium: (a) physically separable, with separate user protocols and physical access per medium; (b) CCITT X.25, with a Packet Layer and a Frame Layer above physical access to a shared medium; (c) INTRAN, with Packet and Frame Layers plus a media access layer, and a split at the lower layers to an optional second physical channel for high data rate transmissions.]


2.1 Physical Separability

Physical separability pertains to the use of totally distinct hardware for different communication functions. Unique requirements call for separate subsystems, and the extent of this is limited by the cost and physical complexity of the resulting design. In REDNET [9], a unique requirement for a broadcast time base for all processing elements led to the provision of a time-of-day subsystem which uses separate hardware. On the other hand, a number of multipoint-to-multipoint subsystems each providing identical but separate inter-function communications are overly costly and complex, and the best approach may be a single multiplexed subsystem. This option was selected for terminals and some inter-process communications in REDNET [10].

Clearly a balance must be struck between the current cost savings of shared hardware, with attendant increases in interaction between functions, and the advantages of physically separate facilities.

2.2 Protocol Separability

Even when communications hardware is to be shared, the design of protocols can be carried out in a manner which emphasizes the separability of support to individual users. Since it is the protocols which permit resource sharing in the first place, it is in the design of protocols that separability must be entrenched. This aspect of protocol design is one not generally considered in the available literature.

Currently the "layered" approach to protocol design is favoured [11] and three examples are shown diagrammatically in Figure 2. Each one is designed to interface a number of users to information transport equipment. Every layer of each protocol operates as independently as possible of the other layers, while progressively insulating the user from the specific considerations of the hardware transmission media. Clearly, for shared media, separability of individual user traffic becomes progressively less obvious at the "lower" layers.
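The layering idea can be illustrated with a toy stack (not the X.25 or INTRAN formats): each layer adds and strips only its own header, so a layer can be redesigned without disturbing the ones above it. All formats here are invented for illustration.

```python
# Toy two-layer protocol stack: a Packet Layer carrying a logical
# channel number, and a Frame Layer adding a length and checksum.
# Header formats are illustrative, not those of any real protocol.

def packet_wrap(user_data: bytes, logical_channel: int) -> bytes:
    return bytes([logical_channel]) + user_data           # Packet Layer

def frame_wrap(packet: bytes) -> bytes:
    checksum = sum(packet) % 256
    return bytes([len(packet)]) + packet + bytes([checksum])  # Frame Layer

def frame_unwrap(frame: bytes) -> bytes:
    length, body, checksum = frame[0], frame[1:-1], frame[-1]
    assert len(body) == length and sum(body) % 256 == checksum
    return body

def packet_unwrap(packet: bytes) -> tuple[int, bytes]:
    return packet[0], packet[1:]

# A transmission traverses the layers in order, down then up:
frame = frame_wrap(packet_wrap(b"temperature=473K", logical_channel=7))
channel, data = packet_unwrap(frame_unwrap(frame))
```

The Frame Layer here never inspects the channel number or user data, which is exactly the insulation between layers that the text describes.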


Figure 2(a) shows how physically separate media promote the use of completely separable protocols, although care must be taken when using common processing hardware and software. Separate hardware does one no good, for example, if a poorly controlled shared buffer pool is used.

The CCITT X.25 [12] protocol of Figure 2(b) combines logical channels at the Packet Layer so that they may be subjected to common controls at the Frame Layer. For X.25, data flow and error control procedures are available at both layers, although their use at the Frame Layer, without adequate protection at the Packet Layer, will allow logical channel data flows to potentially interfere with one another.

Figure 2(c) represents the internal INTRAN structure used for the REDNET terminal support subsystem [10]. Since this subsystem uses a multi-access medium, an extra protocol layer is added to control media access. Additionally, some units have access to a second physical channel (for high data rate transmissions), and so a split in the data flow is made at the lower layers to route the data appropriately. (The second channel utilizes a different media access protocol more suited to large data transmissions.)

The INTRAN design attempts to minimize user interdependence. All flow control procedures reside at the Packet Layer, with none provided at the Frame Layer where potential common blocking might occur. Similarly, since ARQ* error control procedures are used, and since these procedures may also cause blocking, error detection only is provided at the Frame Layer, with recovery provided at the Packet Layer only.

Similar measures are taken at the Media Access Layer but in addition, this layer must take into account the operation of other units attached to the multi-access media. INTRAN uses a primary channel access technique which guarantees a minimum level of service irrespective of other loads. Additional capacity is available depending upon total load. The optional second channel uses a different access mechanism which is more efficient in channel usage but provides no access guarantees beyond strictly statistical ones.

*ARQ: Automatic repeat request procedures require retransmission of erroneously received data, as opposed to FEC (forward error control) procedures, which include error correction codes within the initial transmission.
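A minimal stop-and-wait ARQ sketch shows the division of labour the footnote describes: an error-detecting checksum on the receiving side, and retransmission by the sender until the frame arrives intact. The channel model and retry limit are illustrative assumptions.

```python
# Minimal stop-and-wait ARQ sketch. The channel is modelled as a
# callable that may corrupt bytes; the retry limit is an assumption.

def checksum(data: bytes) -> int:
    return sum(data) % 256

def send_with_arq(data: bytes, channel, max_attempts: int = 5) -> int:
    """Returns the number of attempts used; raises if the limit is hit."""
    frame = data + bytes([checksum(data)])
    for attempt in range(1, max_attempts + 1):
        received = channel(frame)                     # may corrupt bytes
        if checksum(received[:-1]) == received[-1]:
            return attempt                            # delivered intact
    raise RuntimeError("retry limit exceeded")
```

Note the blocking behaviour the text warns about: the sender must hold the frame (and wait) until an attempt succeeds, which is why INTRAN confines recovery to the Packet Layer.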


For communications systems which use store-and-forward or inter-linking intermediaries, layers of protocol will exist between users other than those shown in Figure 2. Considerations similar to those already discussed should be applied in these cases.

3. COMMUNICATION TYPES

Before a separable communications facility can be designed, it is necessary to identify the different communication types to be supported, which can be grouped into: machine-to-machine, man-to-machine, and man-to-man communications. Although important in practice, man-to-man communications will not be dealt with further.

3.1 Machine-to-Machine Communications

Five examples are analyzed and summarized in Figure 3 to delineate a set of parameters needed to characterize the communication types. For specific designs, the identification of communication types is derived directly from the requirements.

3.1.1 Inter-Process Communications (IPC)

Refers to the interchange of short, concise coordination messages between functions (or tasks). Large data transfers are specifically excluded and relatively low throughput requirements are expected. These interchanges are subject to very critical constraints in terms of the control of transmission delays, since task semaphoring schemes (using IPC) are very sensitive to execution delays. Uniform transaction arrival rates can be expected, leading to an even loading on the communications facility.
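A sketch of such a coordination message, assuming a hypothetical 16-byte fixed format addressed by function name rather than by machine (so a function may migrate between processors). The field layout and names are invented for illustration.

```python
# Hypothetical fixed-format IPC coordination message: a function name,
# an event code, and a sequence number. The 16-byte layout is assumed.

import struct

# network byte order: 8-byte name, 2-byte event, 2-byte pad, 4-byte seq
IPC_FORMAT = "!8sHHI"   # 16 bytes total

def pack_ipc(function: str, event: int, seq: int) -> bytes:
    return struct.pack(IPC_FORMAT, function.encode().ljust(8), event, 0, seq)

def unpack_ipc(msg: bytes):
    name, event, _pad, seq = struct.unpack(IPC_FORMAT, msg)
    return name.rstrip(b"\x00 ").decode(), event, seq
```

A small fixed unit of data like this keeps per-message service time short and predictable, which is what the delay-sensitive semaphoring schemes above require.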

For distributed systems where some functions may execute in more than one physical machine, addressing by function would be an asset.

3.1.2 File Transport

Refers to all interactions between processors and/or storage devices involving the movement of large amounts of data. These data can be characterized in terms of the length of time for which they remain valid. Thus, rapidly changing data must often be transported with correspondingly short delay times while slowly changing data can tolerate longer delay times.


[Figure 3: Characterization of communication types. A matrix relating the parameters physical environment, topology, addressing, error control, data flow control (buffering), load range, throughput, transmission delay, and data format/unit of data to the machine-to-machine types (inter-process communications, file transport, sensor and actuator communications) and the man-to-machine types (terminals).]


In Figure 3 a more detailed characterization has been made, with the extremes represented by "batch-like" and "real-time" descriptors. Real-time data are moved to real-time functions which rely on short transmission delays. Batch-like data can be subjected to transmission delays since the functions using these data are subjected to execution delays due to queuing and other factors.

Batch-like data will also require extra data flow control procedures since batch functions are less likely to be ready to accept the data at the instant of arrival. Loading of the communication subsystem will be more even since the higher tolerance on transmission delays will allow longer averaging periods and thus lower peak loads.

3.1.3 Sensor and Actuator Communications

Involve the transfer of information between real-world interfaces and control functions. Traditionally, these devices use special purpose interface and communication subsystems. In a distributed system, where such data may be moved to many physical locations, such communications must be considered to be an integral part of the total information system.

It is difficult to identify communications requirements for sensors and actuators without a knowledge of their functional partitioning. For example, a unidirectional broadcast-style subsystem could be provided if sensor calibration functions are partitioned away from the sensor. If sensors are capable of responding to specific commands, bi-directional communication is required.

Generally, devices to be served are geographically distributed, oftentimes have highly periodic data throughput requirements, and require predictable (and possibly short) transmission delays. These requirements can be judged in more detail based on the control algorithms in use.

In many cases the physical environments of sensors and actuators vary widely. Thus the communications subsystem must be suitable, in terms of cost, wiring technique, etc., for the range of such environments.


3.2 Man-to-Machine Communications

Communications with man encompass a wholly different range of constraints, both in the area of timing and presentation of data, and in the area of variability of input. A demand for a display, for example, may occur at any time, and the required system response speed is based on difficult-to-determine psychological factors. Man communications also preclude the use of complicated protocols to control data flow and errors.

3.2.1 Status Reporting

Status reporting is likely to be implemented as a "read-only" function. Specific reports may be elicited by operator demand, or general information may be presented by "broadcast" displays. Displays may originate from one database or may be made up from information obtained from several different sites. These sites may be direct outputs from sensors, functions, controllers, and databases. Combining data into a single display could be done dynamically according to total system state or operator enquiry, and may be a local function of an "intelligent" display unit.

3.2.2 Command Control Communications

Control operations are typically "read-write" functions accessible by selected personnel which allow them to change plant operating parameters. Status and command control functions may be combined in one device, although additional security would be required and personnel access procedures become complicated.

The command control sequence is: select the function, pass any security control information (key) to the function, and make a secure transmission of the new parameter settings. While a variety of conventional techniques can be used to secure the operator interface to the control facility, security within the communications facility is essential.

3.2.3 "Direct" Man Communications

"Direct" communications between man and the equipment pertains to functions which require minimal intelligence at the man/machine interface. Such automatic functions as door lock switch sensing and (possibly) activation, personnel presence sensors, and alarm annunciation are included. Typical data flows for such applications have high priority, are bursty, but have low average data rate requirements.

4. CHARACTERIZATION OF COMMUNICATION REQUIREMENTS

4.1 Data Format and Units of Data

Many communication parameters are defined assuming a specific data format, and are based on the minimum quantity of this data that must be transferred at any one time. Terms such as bits, bytes, words, records, blocks, and files may be used to describe the users' data format. The concept of a "unit of data" is helpful to describe that quantity of data which, when passed through the communications system, is sufficient to initiate significant processing at a receiver. A unit of data might range from a binary indicator of a contact closure state to a whole file which describes a matrix of temperature readings. One can envisage that a communications scheme for the scanning of sensors (individual small units of data) would be different than that for large file transfers.

4.2 Transmission Delay

An event or condition at one location is reported to another via a communications system which introduces a transmission delay. Delays between sensor inputs and calculated control outputs are very important parameters in control system design. Some control algorithms can be constructed to compensate for known delays, but in general one would like to have delays which are as small and as predictable as possible. Interprocess communication is very susceptible to transmission delays. A function in one place requesting a service of another is often subjected to time coordination problems. In some cases transmission delays can have an excessive impact on task semaphores.

The delay will be the sum of several factors including propagation delay, service and waiting times, and delays introduced by error control procedures. The latter two can be significant for process control applications. When sharing common hardware, access to the equipment must be made prior to transmission and this will affect the waiting time. ARQ error control procedures may require several attempts before transmission is successfully completed.
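As a back-of-envelope illustration (all numbers assumed), the factors above can be combined by approximating ARQ retries as a geometric process with independent frame error probability p, so the expected number of attempts is 1/(1-p):

```python
# Illustrative delay estimate from the factors listed in the text.
# Assumes every attempt repeats all three delay components, and that
# frame errors are independent with probability p (a simplification).

def expected_delay(propagation_s, waiting_s, service_s, frame_error_prob):
    attempts = 1.0 / (1.0 - frame_error_prob)  # geometric retries under ARQ
    return attempts * (propagation_s + waiting_s + service_s)

# e.g. 5 us propagation, 2 ms mean wait for the shared medium,
# 1 ms to clock out the frame, 1% frame error rate:
d = expected_delay(5e-6, 2e-3, 1e-3, 0.01)   # about 3 ms
```

The example shows why the text singles out waiting time and error control: here propagation is negligible, and the shared-medium wait dominates the total.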


4.3 Throughput and Load Range

The communications throughput requirements are determined by the amount and speed of the data to be transported. In a shared facility, the delay imposed on a transfer will vary due to loading, and the throughput available to each user must be specified in terms of the varying load to be expected. During certain system states, variations can be large: trip, shut-down, start-up, etc. Occasional large instantaneous (worst case) loads can also be imposed even during normal operation if periodic data transport is unsynchronized.

When discussing loading, users must estimate the loads which they expect to impose (and when) on the communications equipment. The communications designer must clearly indicate guaranteed and average performance in the light of all user loads. In many cases, guaranteed performance levels will be an important factor in determining whether sharing of the communications subsystem is feasible.
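The load-range point can be illustrated with simple arithmetic (numbers assumed): unsynchronized periodic senders present a modest average load, yet nothing prevents their periods from coinciding in one instant.

```python
# Illustrative load-range arithmetic; sender count, message size and
# period are assumed values, not from any real plant.

def average_load_bps(senders, bits_per_message, period_s):
    """Long-run average offered load."""
    return senders * bits_per_message / period_s

def worst_case_burst_bits(senders, bits_per_message):
    """All unsynchronized periodic senders happen to fire at once."""
    return senders * bits_per_message

avg = average_load_bps(50, 800, 1.0)     # 40 000 bit/s average
burst = worst_case_burst_bits(50, 800)   # 40 000 bits in one instant
```

The gap between the comfortable average and the instantaneous worst case is what a guaranteed-performance specification has to cover.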

4.4 Error Control

While every communication channel is susceptible to errors, the specification of error rates must be made in context, since unrealistic demands often lead to inefficiencies which do not meet real overall performance goals. Sensors and human operators, for example, often have a high error (failure) rate for which processing functions make due allowance. Communications between process and actuator, on the other hand, are more stringent and require considerably more attention. Various error control techniques are well known and they must be selected with a knowledge of the requirements of the data being transported.

For example, forward error correcting codes allow the receiver to correct all known errors without further interaction with the data source. Unfortunately, error correcting codes require greater data transmission overheads than simpler error detection-only codes. Automatic repeat request (ARQ) procedures, which use error detection-only codes, require that the sender maintain copies of all unacknowledged messages until they have been received correctly.
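The trade-off can be made concrete with textbook parameters, chosen for illustration only: a Hamming(7,4) forward error correcting code versus a 16-bit detection-only check with ARQ retransmission.

```python
# Rough FEC-vs-ARQ overhead comparison. The Hamming(7,4) code and the
# 16-bit check with a 1% frame error rate are illustrative assumptions.

def fec_overhead_bits(data_bits, n=7, k=4):
    """Hamming(7,4): every k data bits cost n transmitted bits."""
    return data_bits * (n - k) / k

def arq_expected_bits(data_bits, check_bits=16, frame_error_prob=0.01):
    """Detection-only code plus geometric retransmissions."""
    per_attempt = data_bits + check_bits
    return per_attempt / (1.0 - frame_error_prob)

# For a 1024-bit frame: FEC always pays 768 parity bits up front,
# while ARQ transmits about 1050 bits on average but must buffer
# unacknowledged frames and tolerate variable delay.
```

On a clean channel ARQ is far cheaper in transmitted bits; FEC buys a fixed overhead and, as noted in section 4.6, is the only option when receivers cannot transmit.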


4.5 Flow Control

Flow control procedures are used to co-ordinate the transmission and reception of data. Flow control is not always required, e.g. for periodically produced sensor scans for which the receiving process needs only the "latest" value. Other situations (e.g. terminal displays for operations staff) cannot guarantee that data source, data transmission media, and data sink can be made available at the same time. Flow controls may be required to limit loading during critical times, and may also be integrated with error control procedures.
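The two regimes can be sketched as follows; the class names are illustrative. A "latest value" slot needs no flow control because each new sensor scan simply overwrites the previous one, while a bounded queue must push back on its producer.

```python
# Two delivery regimes from the text: overwrite-in-place (no flow
# control needed) versus a bounded queue that refuses new items when
# the receiver falls behind. Class names are illustrative.

from collections import deque

class LatestValueSlot:
    """Periodic sensor scans: the reader only wants the newest value."""
    def __init__(self):
        self._value = None
    def publish(self, value):
        self._value = value          # silently replaces any unread value
    def read(self):
        return self._value

class BoundedQueue:
    """Display/terminal traffic: every message matters, so the sender
    is refused (a flow control signal) when capacity is reached."""
    def __init__(self, capacity):
        self._q = deque()
        self._capacity = capacity
    def offer(self, item) -> bool:
        if len(self._q) >= self._capacity:
            return False             # tell the sender to wait
        self._q.append(item)
        return True
    def take(self):
        return self._q.popleft()
```

The refusal returned by `offer` is the point where flow control can also throttle loading during the critical plant states mentioned in section 4.3.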

4.6 Topology

Considering the flow of data, four basic topologies can be described. A multipoint-to-multipoint configuration allows a number of data sources to communicate with data sinks, with interconnection patterns changing dynamically. A point-to-multipoint link has a single data source which broadcasts to a number of sinks. Multipoint-to-point configurations have the opposite topology, e.g. sensors scanned from a single point. The point-to-point configuration permits two devices to communicate with one another only.

It is clear that vastly different error control procedures, protocols, etc. are applied to the different topologies. A broadcast topology can be made very secure, for example, simply by ensuring that receivers cannot transmit, although FEC rather than ARQ error control would be required.

4.7 Addressing

If functions are permitted to change their physical addresses dynamically, then procedures must be provided to allow the communications system to locate them. A relocated function must tell the system its current address, and these interactions, if dynamic, can become very complex.
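Function-based addressing with relocation can be sketched as a registry that maps a function name to its current physical address; a relocated function simply re-registers. This is illustrative only: a real system must also handle stale lookups and registry consistency, which is where the complexity noted above arises.

```python
# Minimal sketch of a function-name-to-node registry; names are
# illustrative. Stale-lookup handling is deliberately omitted.

class FunctionRegistry:
    def __init__(self):
        self._where = {}
    def register(self, function: str, node: str):
        self._where[function] = node     # relocation just overwrites
    def locate(self, function: str) -> str:
        return self._where[function]

reg = FunctionRegistry()
reg.register("boiler_pressure_ctl", "node-A")
reg.register("boiler_pressure_ctl", "node-B")   # function has moved
```

Senders address "boiler_pressure_ctl" rather than a machine, so the function can migrate between processors without its correspondents being rewritten.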

4.8 Physical Environment

It is evident that the basic components of communication subsystems serving each type of user must be appropriate for the physical environment. For example, sensors and actuators must be connected to communications equipment appropriate to local conditions. Requirements placed on computer-to-computer equipment are likely to be less stringent.

5. CONCLUSIONS

A uniform approach using separability as a design objective is effective in developing communications facilities for real-time process control applications. Parameters characterizing the different types of communications have been discussed, and these can be used as criteria in selecting the communications subsystems for specific applications. The use of separable communications is consistent with the functional partitioning strategy for system architectures and is another step towards realizing the advantages of distributed systems.

REFERENCES

1. L'ARCHEVEQUE, J.V.R., YAN, G., "On the Selection of Architectures for Distributed Computer Systems in Real-time Applications", IEEE Transactions on Nuclear Science, NS-24, pg 454-459, February 1977.

2. GREEN, P.E., LUCKY, R.W., "Computer Communications", IEEE Reprint Series, 1974.

3. FALK, G., McQUILLAN, J.M., "Alternatives for Data Network Architectures", IEEE Computer, pg 22-29, November 1977.

4. GREEN, W., POOCH, V.W., "A Review of Classification Schemes for Computer Communication Networks", IEEE Computer, pg 12-21, November 1977.

5. SCHWARTZ, M., BOORSTYN, R.R., PICKHOLTZ, R.L., "Terminal-Oriented Computer-Communication Networks", Proceedings IEEE, Vol. 60, n 11, pg 1408-1423, November 1972.

6. FARBER, D.J., et al., "The Distributed Computing System", IEEE Compcon 73 Digest, pg 31-34, 1973.

7. METCALFE, R.M., BOGGS, D.R., "Ethernet: Distributed Packet Switching for Local Computer Networks", Communications ACM, Vol. 19, n 7, pg 395-404, July 1976.

8. WATSON, R.W., "Network Architecture Design for Back-end Storage Networks", IEEE Computer, pg 32-48, February 1980.

9. YAN, G., L'ARCHEVEQUE, J.V.R., WATKINS, L.M., "Distributed Computer Control Systems in Future Nuclear Power Plants", Nuclear Power Plant Control and Instrumentation Vol. II, International Atomic Energy Agency, Vienna, 1978.

10. SHAH, R.R., CAPEL, A.C., PENSOM, C.F., "Distributed Terminal Support in a Data Acquisition System for Nuclear Research Reactors", IWG/NPPCI Specialists' Meeting on Distributed Systems for Nuclear Power Plants, International Atomic Energy Agency, May 12, 1980.

11. WALDEN, D.C., "The Evolution of Host-to-Host Protocol Technology", IEEE Computer, pg 29-38, September 1979.

12. "Provisional Recommendations X.3, X.25, X.28 and X.29 on Packet-Switched Data Transmission Services", International Telegraph and Telephone Consultative Committee (CCITT), Geneva, 1978.


THE IMPACT OF DATA HIGHWAY ARCHITECTURES ON CONTROL AND INSTRUMENTATION DESIGN

T. McNeil
A. Hepburn
R. Olmstead

A.E.C.L. Engineering Company

INTRODUCTION

Digital computers have been successfully used in control functions in all CANDU stations since the first commercial demonstration plant. In each successive station design, the computer systems have been enhanced and expanded in their scope of application. For example, at Pickering, computers performed direct digital control of reactor regulation, boiler pressure control, and fuelling machine operation. Later, at Bruce, more control functions were added to the computer, such as moderator temperature and boiler level control.

The excellent performance of computers in these past designs provides a basis for expansion of the computer's responsibilities in a new design currently under study. In this new computer system, we intend to add the diverse interlock logic used by individual subsystems in the nuclear steam plant. In current designs this logic is implemented by relays; in the new design it will be implemented through the extensive use of modern distributed control techniques.

The benefits to the operating company will include more flexibility in making additions or alterations to the plant control and instrumentation during the life of the plant. More systematic fault detection, correction, and maintenance will also be possible.

However, the main reason to implement a distributed control system is the advantages it offers in the design and engineering of control/instrumentation systems.

By implementing more logic in software, we intend to improve design flexibility and, most important, to reduce the impact of late design alterations upon the overall construction schedule. The extensive use of data highways, replacing trunk cabling, should reduce the installation and commissioning effort as well.

In implementing such a system, we have to recognize it will have a large impact on the way things get done in the engineering group. Obviously, the control engineer will be implementing his logic in software rather than with relays. This gives him many more degrees of freedom, and perhaps restricts him in other ways.


He will have to become familiar with processor loading, memory size requirements, and data communications throughputs, instead of cable tray loadings, number of contacts available, and number of spare wires in a trunk.

Any system proposed should address the entire problem, and not just the remote multiplexing of sensor data. We should give special attention to the needs of the designers and project managers to ensure that the system will be an effective tool for them.

In this paper we briefly describe the engineering design process as it exists today, and identify some of the problems in this area. We will then look at a new control system architecture and some mechanized tools which become possible with this new approach to aid the design team, project management, and the field commissioning crew.

DESIGN PRACTICE

At the beginning of a project, size estimates are made of the common resources used by all control and instrumentation systems, such as service power, trunk cabling, number of relays, and number of computer inputs and outputs.

Engineering of each individual system in the plant is initiated from a process flow diagram supplied by a process design engineer. The instrumentation engineer reviews this document, and prepares a preliminary design sketch indicating types of transducers, actuators and indicators, along with preliminary operating procedures. After achieving consensus with the process design engineer, detailed design commences. This includes ordering the instruments, initiating mechanical drafting (i.e. panels, mounting brackets, etc.), and initiating electrical drafting of wiring and relay logic. Assignment of trunk wires, relay numbers, and computer inputs and outputs occurs at this phase. This electrical drafting activity is most heavily affected by our new computer architecture.

For example, one significant electrical drafting task to be replaced is the routing of connections through trunks. This is an especially onerous task. Consider the routing of wire 4118 of a ladder diagram for head tank valve control in the deuteration/dedeuteration system (Figure 1).

Wire 4118 starts at terminal 11 of relay RL35. From here, it is connected to terminal 79 in panel PL262, and from there goes by trunk cabling to the Control Distribution Frame (CDF) terminal DF53H-79. A cross connection is made to terminal DF59H-76, which in turn is connected to terminal 176 on panel 161 and hence on to terminal 8 of level switch 91#3. This is the routing of one wire in the plant.
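A routing like this is exactly the kind of record a computer-based wiring-management system holds. The sketch below is purely illustrative (the record layout and function name are our own, not part of any AECL system): the wire becomes an ordered list of terminal-to-terminal hops, from which the end devices can be recovered mechanically rather than by tracing drawings.

```python
# Illustrative sketch only: wire 4118 modelled as an ordered list of
# (device, terminal) hops, as a wiring-management database might store it.
ROUTE_4118 = [
    ("RL35", "11"),       # relay terminal
    ("PL262", "79"),      # panel terminal
    ("DF53H", "79"),      # Control Distribution Frame terminal
    ("DF59H", "76"),      # cross connection on the CDF
    ("PANEL161", "176"),  # panel terminal
    ("LS-91#3", "8"),     # level switch terminal
]

def endpoints(route):
    """Return the devices at the two ends of a routed wire."""
    return route[0][0], route[-1][0]

print(endpoints(ROUTE_4118))
```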


FIGURE 1: Typical Relay Ladder Diagram
(322-PV5 control; handswitches HS-62 and HS-83; level switches LS-91 #1 at 104 mm, #2 at 163 mm, #3 at 224 mm; relays RL-2, RL-4, RL-21, RL-35)


In total, the entire nuclear steam plant control and instrumentation wiring encompasses:

4,000 cables
200,000 terminations
10,000 cross connections

The design effort in routing these wires is in excess of 50,000 man-hours and consumes more than $200,000 of computer wiring management data processing services.

Changes to the wiring after design are inevitable due to evolving regulatory requirements, process problems encountered during commissioning, and improvements made later upon gaining experience with operating the unit. These changes are expensive and prone to error.

Aside from the difficult management problems involved in keeping track of all this wire, there are other concerns in control and instrumentation engineering. For example, the time span from the day process and instrument engineers decide how to control the system to commissioning and testing of that system can be three to four years. As a result, there is insufficient feedback in terms of the results of a given design. This is an open loop design process.

In summary, current control and instrumentation designs generate massive amounts of information, created and managed manually for the most part. The corresponding equipment and wiring represent a large fabrication and commissioning effort as well.

Therefore, our objectives in designing a new system are:

1) to reduce the impact of late control and instrumentation design changes upon overall construction schedules

2) to reduce the engineering effort required to design the control and instrumentation systems

3) to reduce the control and instrumentation installation and commissioning costs and time

4) to maintain the levels of redundancy, reliability, and availability of the existing design.


AN INTEGRATED DESIGN APPROACH

To meet these requirements, we are proposing the extensive use of computers in the design area as well as in plant control. The design computers are coupled to the plant control computers to form an "INTEGRATED SYSTEM" which provides a powerful tool for design, testing and commissioning the system. This DESIGN/CONTROL system consists of three interconnected segments (figure 2):

1. Process Control Design Centre
2. Simulator
3. Unit Control Computer System

The process control design centre maintains information pertaining to all control and instrumentation systems. It is used by engineers to record their design decisions. For example, the designer assigns computer addresses and calibration coefficients associated with each instrument. He prepares the logic programs defining the interactions between a system's inputs and outputs. This data base becomes the source of information for creating the on-line control data base.

One of the decisions instrumentation engineers make in design is the selection of instruments, indicators and actuators. Typically each engineer maintains a list of instruments identified by an internal specification or part number. At some point in time, instrument purchase requisitions must be filled out against this list and issued to the procurement section. The resources of a design computer system could be most helpful here.

Using a computer based process control design centre, instrumentation engineers may enter their requirements into a common data base which is accessed by the procurement section. Purchasing agents then add order information such as purchase order number, manufacturer, supplier, delivery date, etc. This purchasing information is available to the designers and project management through their respective terminals.

With the advent of software logic replacing relay panels, and data highways replacing trunk cabling, much of the control and instrumentation design information, by necessity, is in machine readable form. In other words, during the design phase of a project the design information being collected is assembled into a computer data base. Why not take advantage of this technology to permit project management to access this information on-line to monitor design progress? Note, this is not a radical departure from current practice, but rather a mechanization of a manual process. It is an integral part of our solution to reduce engineering effort and gain advantage in maintaining schedules.


FIGURE 2: Integrated Design/Control System
(Design Centre, Simulator, and Unit Control System interconnected)


SIMULATOR SYSTEM

As mentioned previously, one of the problems with the current engineering process is the fact that designs are seldom tested until commissioning time.

To permit testing as part of the design process, or at least before commissioning, we propose a simulation facility connected to the Design Centre. At a minimum, this simulator should provide easy-to-use static testing tools.

UNIT COMPUTER SYSTEM

In CANDU generating stations, each generating unit is monitored and controlled by its own unit control computer system. The existing unit computer systems support:

1) Alarm Annunciation
2) Data Logging
3) Turbine Run-up and Fuel Handling
4) Control of Major Reactor and Boiler Systems

Of course, our new system continues to support these functions.

We present a model system configuration (see figures 3 and 4) for purposes of highlighting design issues such as system availability, failure modes, and fault tolerance and correction.

The new system architecture is essentially based upon the existing CANDU concept of independent dual systems operating in a MASTER and HOT STANDBY configuration. The configuration has been expanded beyond the dual processors to a multi-processor configuration, however, to support the larger responsibilities given to it. Extensive use is made of modern data highway and high speed local computer communications technologies to reduce the number of interconnections required and improve system flexibility.

In present day nuclear stations, instrumentation and control functions are assigned to independent channels. These channels are geographically separated and independently powered to reduce the possibility of common mode failures.

A significant feature of this new approach is the use of dual redundant data highways for each channel.


These highways support high speed, full duplex communications, system time distribution, and potentially voice circuits for maintenance purposes.

Field controllers are located at geographically strategic locations on the control channel highways A, B, and C.

These field controllers implement handswitch and interlock logic (previously performed by relays), actuator control and confirmation of output commands, and simple PID regulating loops (previously implemented by analog controllers). They also interface the computers to sensors (RTDs, pressure transducers, etc.) and perform alarm checking.
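As an illustration of relay logic re-expressed in software, the fragment below sketches one permissive of the kind a field controller might evaluate. The function name, signal names and the exact Boolean expression are hypothetical, loosely echoing the handswitch and level-switch contacts of Figure 1, not the actual plant logic:

```python
def permit_open_pv5(handswitch_open, ls91_1_ok, ls91_2_ok, ls91_3_ok):
    """Hypothetical software interlock: the valve may open only if the
    handswitch requests it and no level switch vetoes it -- the same
    Boolean expression a relay ladder would hard-wire in contacts."""
    return handswitch_open and ls91_1_ok and ls91_2_ok and ls91_3_ok
```

Because the expression now lives in memory rather than in trunk wiring, a late design change becomes an edit and a download rather than a re-pull of cable.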

Economic constraints define a minimum acceptable size for these controllers. Hence, a controller may support many individual systems. Alternatively, elements of a given system may be widely dispersed geographically (e.g. handswitches in the control room, valves in containment), so an individual system may require multiple controllers. For these reasons the controllers must be able to communicate amongst themselves as well as with the control computers.

Alarm annunciation is also a critical function for unit operation and is supported by dual computers similar to the control functions (see figure 4). As in the current design, the operator interfaces (e.g. CRTs, keyboards, etc.) are connected to both computers.

Special attention must be paid to the maintenance of the computers and data highways, especially in light of the increased amount of hardware. Special tools to aid in fault detection and correction in both hardware and software are required. Therefore, an overall system supervisor function is supported by the maintenance/monitor computer. It scans all inputs, outputs and communications channel traffic and maintains a separate data base of this information. It is capable of generating alarms in the event of detected faults or when the processing load on a given computer, or the traffic on a given communications link, exceeds preset thresholds.
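The threshold check at the heart of such a supervisor can be sketched in a few lines (an illustrative fragment; the signal names and limit values are invented):

```python
def exceeded(samples, limits):
    """Hypothetical supervisor scan: return the names of monitored
    quantities (CPU load, link traffic, ...) above their preset limits.
    Quantities with no configured limit are never alarmed."""
    return [name for name, value in samples.items()
            if value > limits.get(name, float("inf"))]

# Example scan: control computer X is overloaded, highway A is not.
alarms = exceeded(
    {"control_x_cpu": 0.92, "highway_a_traffic": 0.40},
    {"control_x_cpu": 0.85, "highway_a_traffic": 0.75},
)
```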

The maintenance function provides software generation facilities (editor, compiler, etc). It also permits downloading of programs, data, and diagnostics from the Design Office (note the link to the Design Centre, figure 4).


FIGURE 3: Unit Control System
(Control computers X and Y on the X control bus; dual channel data highways; alarm annunciation computers on the V and W buses; monitor/maintenance and utility processors; communication link to the simulator)


FIGURE 4: Unit Control System
(Alarm/annunciation computers V and W with peripheral buses and data base; alarm CRTs and annunciation windows; colour graphic displays; fuel handling, turbine run-up, and future miscellaneous consoles; utility and monitor/maintenance computers; communication link to the Design Centre)


FUTURE WORK

In conclusion, our proposed approach for the future computer system raises a number of implementation questions and challenges.

In specifying the Design Centre, an in-depth analysis of current paperwork and design procedures is required. Coupled with this is a definition of an acceptable programming language to be used by the process control engineers.

Simulation of the unit is a large and challenging task on its own.

In firming up our unit control configuration, we need to evaluate possible failure modes and fault correction techniques. We need to quantify the capacity requirements of the field controllers and central computers.

The safety considerations associated with assigning many systems to one controller require careful attention. There is an excellent opportunity here for research into "intelligent transducers", which could reduce the minimum economic size of the individual controllers and increase system diversity.

Solid-state controllers capable of withstanding a LOCA are very attractive, since their application can reduce the cost of containment penetrations considerably.

Identification and selection of the data highway and local computer network communications technologies is key to the system. For on them depends the success achieved in constructing a flexible system capable of accommodating new technologies in computers and peripherals throughout the life of the plant.


SESSION 2 - REQUIREMENTS AND DESIGN CONSIDERATIONS - I


THE DESIGN, DEVELOPMENT AND COMMISSIONING OF TWO DISTRIBUTED COMPUTER BASED BOILER CONTROL SYSTEMS

By

D. COLLIER, L.R. JOHNSTONE, S.T. PRINGLE, R.W. WALKER

INDEX

1. INTRODUCTION

2. SYSTEM DESIGN

3. CONTROL COMPUTERS

3.1 Memory

3.2 Actuator Drives

3.2.1 Stepper Motor Drives

3.2.2 Solenoid Operated Actuators

3.3 Auto/Manual Philosophy

3.4 Watchdog

4. MANAGEMENT COMPUTER ROLE

5. COMMUNICATION

6. SOFTWARE

7. REFURBISHMENT OF COMPUTER SYSTEMS

The work reported was carried out in the N.E. Region Scientific Services Department of the CEGB and the paper is published by permission of the Director General, N.E. Region, CEGB.


1. INTRODUCTION

The CEGB N.E. Region has recently commissioned two major boiler control schemes using distributed computer control systems.

Both systems have considerable development potential to allow modifications to meet changing operational requirements. The reasons for this work are the increase in the amount of nuclear base load, the concentration of load regulation in large 500 MW units, and the obsolescence of station control equipment.

The distributed approach to control was chosen in both instances so as to achieve high control system availability and as a method of easing the commissioning programmes.

Both systems may be compared by reference to the following table:-

TABLE 1

                          SKELTON GRANGE             THORPE MARSH
                          POWER STATION              POWER STATION

Unit size                 120 MW                     550 MW
No. of control computers  2                          5
Interface equipment       GEC MEDIA                  GEC MEDIA
Computer type             LSI 11 (Kratos package)    DEC PDP11/03
Supervisory computer      PDP11/34                   PDP11/34
Control distribution      Feedwater control;         4 Mill control & master
                          steam temperature;         pressure; 5 Mill control;
                          draught plant; mill        feedwater; steam
                          controls; master           temperatures; draught
                          pressure (coal); master    plant
                          pressure (oil)
Commissioning period      Over 6 months, with        Installation & commissioning
                          unit at load               during 12 week outage &
                                                     2 weeks after outage
Other points              Display via VT30 system    Connected to large logging
                          to operator; includes      & display system (11/34)
                          fault finding programs     and also to alarm
                                                     micro (6800)

The experience gained with these two projects has reinforced the view that distributed computer systems show advantages over centralised single computers, especially if the software is designed for the distributed system.


The inherent flexibility has allowed very rapid commissioning. At Thorpe Marsh the majority of computer systems were installed, hardware tested and commissioned against simple models during an outage for routine overhaul of 12 weeks duration. Upon unit start up, computer control was immediately available, and within two weeks all loops had been optimised for normal operation at a number of generated loads.

In the first 18 months of operation at Skelton Grange one computer and 2 interface card failures have occurred, but the operators have had no problem in maintaining satisfactory manual control for the time taken to replace the parts. The failure rate is in line with our experience and calculations of the predicted behaviour of the systems.

Most of the following paper is concerned with practical points gained from our experience and not with the overall philosophy of distributed control, to which we are committed. These advantages are reported elsewhere (1,2).

2. SYSTEM DESIGN

Both systems are of the star form with the management computer at the centre and the control computers connected by serial links to the centre (see Fig. 1).

The reasons for choosing this form of system stem from the design criteria established, coupled with the communication technology commercially available at the time of equipment purchase.

2.1 Design Criteria

The basic design problems of distributed systems are to determine how to distribute the system functions among the various computers, how to determine the number of computers, and to define the intercommunication loading between computers.

The basic criteria relating to the system are:-

2.1.1 Each computer controls an area of plant such that in the event of a computer failure the operator can maintain load on manual control without penalty other than the potential risk of operator maloperation.

2.1.2 Each computer shall be capable of stand alone independent operation in the event of a communications failure or failure of the management computer.

Consideration was given to the plant control areas and decomposition of the control of these areas into the different computers. A number of potential options of numbers of computers and distribution of function were evaluated in the light of the two basic criteria and an option chosen. The distribution chosen for both systems is indicated in Table 1.

At this stage little reference was made to computer capabilities, because previous DDC experience indicated that this was not a problem: the processing power of an LSI 11 with 32K words of memory was considered to be in excess of the minimum requirement, even using the specialised high level language DDACS.

3. CONTROL COMPUTERS

The block diagram of a Skelton Grange target computer system is shown in Fig. 1. Both distributed systems have essentially the same target computer systems; there are, however, detail differences.

3.1 Memory

Both systems use MOS volatile memory, and in order to preserve programs in the event of power supply failure both systems use battery back up. The Thorpe Marsh system uses on-board battery back up and static RAM, whereas the Skelton Grange system uses a separate battery supply unit for the dynamic RAM.

Both systems operate satisfactorily, but the Skelton Grange system is more complex in that the batteries are trickle charged and there is an automatic routine test of the batteries by connection of a resistive load and monitoring of the voltage decay to determine the battery condition.

The onboard batteries are a cheaper solution. Due to the inherent high reliability of the batteries and the extremely low current drain when supporting the static CMOS memory (0.3 mA max.), a small amount of deterioration of the cells can be tolerated. Thus a policy of total cell replacement as a normal maintenance procedure after three years is sufficient to guarantee the availability of the memory at a level consistent with the required reliability of the total system.
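The routine battery test described above (switch in a resistive load, watch the voltage decay) reduces to a simple pass/fail rule. The sketch below is our own, with invented thresholds; the actual Skelton Grange acceptance criteria are not given in the paper:

```python
def battery_ok(voltage_samples, v_min, max_droop):
    """Pass the battery if, under the resistive test load, the sampled
    voltage never falls below v_min and the total decay over the test
    stays within max_droop (all numeric values hypothetical)."""
    droop = voltage_samples[0] - voltage_samples[-1]
    return min(voltage_samples) >= v_min and droop <= max_droop
```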

3.2 Actuator Drives

At Skelton Grange all actuators use stepper motors; at Thorpe Marsh a further form of actuator drive is used, the solenoid operated actuator.

3.2.1 Stepper Motor Drives

It is a CEGB standard requirement that, in the event of a failure of control equipment, all valves and dampers freeze (there are some particular exceptions to this rule). The general method adopted to achieve this for pneumatic valve and damper drives is the use of a stepper motor to act as a position reference with memory for the valve positioner.

Experience on power plant to date has shown that the stepper motor type used is very reliable, and because they are lightly loaded consistent resolution of 1 step is normal. Typically valves and dampers are arranged to have 2000 steps from end to end, thus a resolution of 0.05% is achieved.

These steppers are driven at a fixed rate of 200 steps per second, giving valve traverse times of 10 sec.
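The resolution and traverse-time figures follow directly from the step count and step rate:

```python
# Arithmetic behind the figures quoted above.
steps_end_to_end = 2000                 # steps over full valve/damper travel
step_rate = 200                         # steps per second

resolution_pct = 100.0 / steps_end_to_end        # 0.05 % of travel per step
traverse_time_s = steps_end_to_end / step_rate   # 10 s end to end
```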


3.2.2 Solenoid Operated Actuators

At Thorpe Marsh considerable thought has been given to the design of actuators for hydraulically operated dampers. Hydraulic actuators are expensive as they contain complex internal valves and feedback mechanisms. Recently we have examined the use of cheap, robust hydraulic power rams operated by solenoid valves under computer control, the feedback being taken as position from the final element, such as the damper spindle.

This technique then uses the computer to close the loop and maintain position control.
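Closing the loop in the computer amounts to a bang-bang position controller: energise the appropriate solenoid until the measured position is within a deadband of the set point. The fragment below is a hypothetical one-scan decision, not the CEGB implementation:

```python
def solenoid_command(setpoint, measured, deadband):
    """One control scan for a solenoid-driven hydraulic ram: drive the
    ram toward the set point, or hold when within the deadband."""
    error = setpoint - measured
    if error > deadband:
        return "RAISE"   # energise the raise solenoid
    if error < -deadband:
        return "LOWER"   # energise the lower solenoid
    return "HOLD"        # de-energise both; ram freezes

```

The high scan duty this kind of loop demands is precisely the computer-loading concern the paper goes on to discuss.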

This technique works very well under most circumstances, but it can cause computer loading problems due to the high duty required to service these position controllers. Also, when manual control is used the feedback mechanism does not exist, so that drift could occur, especially in asymmetrically loaded dampers with leaky solenoid valves. This problem has now been largely overcome by the inclusion of additional hydraulic components.

The obvious technique to overcome these problems is to use a separate micro computer with each actuator to maintain position control, so that manual inputs just alter the input set point and so that the control computers are less heavily loaded. This is a development at present being actively pursued.

To minimise this undesirable behaviour at Thorpe Marsh, a rigorous maintenance schedule to minimise leakage is at present carried out, along with definition of the normal operating condition as auto control; manual control is only used during abnormal conditions.

3.3 Auto/Manual Philosophy

The two schemes differ in the way the computer systems interact with the operator's desk. At Skelton Grange the operator must request auto control; if the computer system accepts this request (i.e. the plant is within control range) then the computer pulls in a relay and controls the stepper drive outputs. If the computer then assesses, whilst in AUTO, that a plant constraint has been reached, the control action is inhibited, the AUTO lamp is flashed, and a message is passed to the management computer for display.

If a detectable plant failure occurs and no alternative control action is possible, then the computer scheme trips the loop to MANUAL control.

Thus it may be stated that the MANUAL mode is the fall back condition, and control cannot proceed from MANUAL to AUTO without operator selection.
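The Skelton Grange mode logic can be summarised as a small state machine (our own hypothetical rendering of the rules just described, not CEGB code):

```python
def next_mode(mode, operator_selects_auto, plant_in_range, failure):
    """MANUAL is the fall-back state: any detectable failure trips to
    MANUAL, and AUTO is entered only on operator request with the
    plant within control range."""
    if failure:
        return "MANUAL"
    if mode == "MANUAL" and operator_selects_auto and plant_in_range:
        return "AUTO"
    return mode
```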


At Thorpe Marsh, however, the AUTO mode is the normal condition and the transition from AUTO to MANUAL requires operator selection. If the control regime reaches a plant constraint or a detectable failure occurs, then control actions are frozen and the MANUAL lamp is flashed, i.e. the computer is requesting MANUAL.

This second strategy was decided upon as confidence in the Skelton Grange system developed and the techniques for failure detection were refined. The computer system is considered as a fully integrated part of the plant.

There has also been a progression from set point inputs being via analogue potentiometers mounted on the desk at Skelton Grange, to set points in general being entered via a keyboard at Thorpe Marsh. More frequently changed set points, such as steam pressure, are still set by potentiometers mounted on the desk. Entry via a keyboard obviously saves costs on analogue inputs and outputs, and reduces the size of the desk layout.

The use of the computer to flash lamps on the desk is part of meeting the basic criteria in allowing stand alone operation. Normally the same signal is passed up to the management machine, where it is displayed on a VDU as a plain language message. On loss of communication with the management computer, the flashing lamp indicates a constraint or failure condition which the operator must interpret from back panel meters and knowledge of the plant, as the display system will no longer be functional.

3.4 Watchdog

The Watchdogs in both systems are identical and are designed to provide a fail safe response in the event of hardware and software failures.

The Watchdog is reset every 20 ms by a signal of alternating 1's and 0's on all data lines. It will only respond if the reset signal is detected within a window centred on 20 ms and 2 ms wide.
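The acceptance window can be modelled as a simple interval test (a sketch of the rule as stated, not the card's circuitry):

```python
def reset_accepted(interval_ms, nominal_ms=20.0, window_ms=2.0):
    """A reset pulse is honoured only if it arrives within a window
    2 ms wide centred on the nominal 20 ms period, i.e. between
    19 ms and 21 ms after the previous reset."""
    return abs(interval_ms - nominal_ms) <= window_ms / 2.0
```

A processor that is running too fast, too slow, or has crashed therefore fails to reset the Watchdog even if it is still toggling the data lines.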

The software provides support for the Watchdog; it is run at the fastest loop rate available, i.e. 50 Hz, and as such has high priority. Failure to reset the Watchdog causes the card to trip. The card is wired to relays which in general isolate all digital outputs under a trip condition.

The clock signals for stepper motor drives are generated on the Watchdog card, and loss of Watchdog reset also removes the clock signals and so freezes the stepper output drive.

This feature of synchronising the hardware and software to the line frequency (50 Hz) guarantees resolution to 1 step of the valve drive systems.

4. MANAGEMENT COMPUTER

The management computers have a number of roles which were defined at an early stage and are the same for each system.

These roles may be listed:-


4.1 To support all peripheral devices necessary for system operation. This helps ensure the security of the control computers by limiting access via the management computer.

4.2 To provide intercommunication between peripheral devices and plant control computers in order to allow the building, modification and tuning of control schemes.

4.3 To hold master library copies of control schemes for use in the event of failure of MOS memory supplies and for documentary record purposes.

4.4 To monitor the behaviour of control computers and control schemes. This allows access to plant limits, hardware failures, status and alarms from control schemes within the management computer.

4.5 To process and output the prime or derived signals to hard copy printers, to a semigraphics VDU for operator display, and to store data on disk for record purposes.

4.6 To provide diagnostic aid programs to assist station engineers in determining the location of faults.

4.7 To support supervisory control by sending set points or other variables to control computers.

4.8 To support the new CEGB standard realtime software system CUTLASS.

After due consideration of these requirements the management computers were specified as follows:—

DEC PDP 11/34 with 64 k words core and 64 k words MOS memory and floating point processor.

1 1.25 M byte removable cartridge disk
1 2.25 M byte fixed cartridge disk
1 Watchdog
1 Programmable real time clock
1 Clock/calendar module (battery backed)
1 Semigraphics colour display driver
1 180 cps printer
1 30 cps printer and keyboard
2 VDUs
1 Serial line to each target LSI 11

5. COMMUNICATION

In designing the communications systems there appear to be two distinct, basically unrelated criteria which help define the communications rate of links between computers.

It is important to note that the following discussion relates to an early stage in the projects, before the CUTLASS software system was available.


The original software produces alarm signals from the control computers that are two 10 bit characters; a rate of 2.4 k baud is thus equivalent to 120 alarm messages per second, i.e. approximately 8.3 ms per message.

5.1 The maximum acceptable response time from the initiation of an alarm state to operator display anywhere in the system.

Closer examination of disk access times and overheads due to the DEC RSX executive indicates that this may be 0.7 sec per alarm; thus the communication links themselves could operate at 2.4 k baud and give a response time of 1 alarm per second.

5.2 The acceptable time taken to reload programs into the plant control computers in the event of memory corruption due to power failure, which was defined at Skelton Grange as being 1 minute per computer.

So with a 28 k word memory of 16 bit words, and 10 bits per transmitted byte, 9.6 k baud will reload in approximately 56 seconds.

So this second requirement appears to be the more onerous, and so a serial line of 9.6 k baud was chosen.
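Both sizing arguments are simple arithmetic and can be checked directly (taking 28 k as 28,000 words for illustration; the paper's own rounding gives much the same answer):

```python
# Criterion 1: alarm throughput on a 2.4 k baud link.
bits_per_alarm = 2 * 10                    # two 10-bit characters per alarm
alarm_rate = 2_400 / bits_per_alarm        # 120 messages per second

# Criterion 2: program reload time on a 9.6 k baud link.
words = 28_000                             # 28 k word control program (assumed)
line_bits_per_word = 2 * 10                # 16-bit word sent as two 10-bit bytes
reload_time_s = words * line_bits_per_word / 9_600   # roughly a minute
```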

6. SOFTWARE

Over the past few years the N.E. Region of the CEGB has developed a high level, engineer oriented language DDACS MCS for real time online control.

This software system is dependent on DEC computers (it is written in PAL) and was designed to be used in small stand alone installations.

Considerable experience has been gained in its use for control of power station plant.

Its main design features are:-

a) Provision of all common modulating control functions as building blocks.

b) Provision for selectable control strategies.

c) Provision for inter control loop communication.

d) Bumpless transfers.

e) Automatic loop starting sequencing for time cascade systems.

f) Watchdog support.

g) Built in error checking and fail safe response for input/output failures.

h) Automatic initialisation of inactive blocks when caused to run on branching in the program.


However, a re-examination of the CEGB requirements for the whole range of online real time computing needs for the next fifteen years or so caused the definition of a software system CUTLASS (Computer Users Technical Language And Software System) that is machine independent, being written in CORAL 66, and which is designed at the outset to accommodate distributed and hierarchical computer systems. The last objectives are achieved by the data structure (GLOBAL, COMMON, LOCAL) and a true communication package. In other respects CUTLASS has improved versions of the DDACS facilities.

At the time of writing (February 1980) the control programs at Skelton Grange are being converted to CUTLASS for the first on site trial (3).

An illustrative example of a CUTLASS program is included in FIG.2.

7. REFURBISHMENT OF AGR COMPUTER CONTROL SYSTEMS

In the long term the existing monolithic computer systems used at Hartlepool and Heysham AGR stations will require replacement due to the obsolescence, and hence declining maintainability, of display equipment, memory and backing store components.

It is important to realise that these stations use DDC for all of the main control functions of the reactor and the boiler. Auto is the normal operational state and manual control is not available.

Because of this essential role of the computer system, refurbishment presents problems in commissioning, testing and gaining confidence in the new system. The use of a distributed system has advantages in the ability to commission the separate parts of the system independently, without the problems of program interaction.

It is likely that the basic strategy of refurbishment will be that individual functions will be implemented on separate computers running in parallel with the existing system but without control outputs connected. A test program will exercise the programs, and a period of operation making comparisons between the existing and replacement systems will follow to confirm the system operation.

The considerations relating to standby provision must be defined in detail, but general principles have been considered:

a) Increasing the number of computers within a distributed system is only valid to the point where the control function cannot be further decomposed. Thus further distribution places essential reliance upon intercommunication.

b) If, however, redundant information exists, it is appropriate to have multiple functions contributing to a global data base, such that failure of a function only degrades the number of readings of a parameter available, rather than degrading basic control performance.

Consequently, as the multiple plant inputs are routed via different analogue scanners, each one could be serviced by its own separate computer whose output is a contribution to the global data base.


c) The equivalent principle for output actions says that if more than one actuator affects a controlled plant variable, the provision of a control computer per actuator can provide resilient control.

d) The application of the previous principles reduces the requirement for standby provision to a few areas where very high integrity is required. In the cases where standby is necessary, the recommended method is to use an identical computer system with cross-linked watchdogs interrupting their outputs.

The programs run continuously, so the system is immediately available to take over control as required.

The likelihood of simultaneous software bugs occurring is low, as the data to each program will be slightly different and timing differences will occur. The use of well-tested and well-used software systems should also give confidence in the software reliability.
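The cross-linked watchdog takeover described above can be sketched as follows (a simplified illustration with invented names and an invented 3-tick timeout, not the CEGB implementation):

```python
# Simplified illustration (invented names and timeout; not the CEGB
# implementation): two identical controllers run continuously, each
# kicking its own watchdog; a cross-linked check enables the standby's
# outputs only when the duty machine's watchdog has timed out.

TIMEOUT = 3  # ticks without a kick before a watchdog trips (invented)

class Controller:
    def __init__(self, name):
        self.name = name
        self.since_kick = 0
        self.outputs_enabled = False

    def kick(self):
        self.since_kick = 0

    def tick(self):
        self.since_kick += 1

    def healthy(self):
        return self.since_kick < TIMEOUT

def arbitrate(duty, standby):
    # Cross-linked watchdogs interrupt the standby's outputs while the
    # duty machine is healthy; both programs keep running regardless.
    duty.outputs_enabled = duty.healthy()
    standby.outputs_enabled = not duty.healthy()

a, b = Controller("A"), Controller("B")
for t in range(10):
    if t < 5:
        a.kick()          # machine A fails (stops kicking) at t = 5
    b.kick()
    a.tick(); b.tick()
    arbitrate(a, b)

assert not a.outputs_enabled and b.outputs_enabled  # B has taken over
```

Because the standby program runs continuously, takeover needs no program loading or state restoration, which is the point made in the text above.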

8. CONCLUSIONS

Experience at two power stations of distributed boiler control systems confirms that such systems have advantages over centralized computer control systems.

The advantages observed so far stem from ease of commissioning, flexibility in modification of software, security and overall system reliability.

Experience has also shown that high-level software systems (such as CUTLASS) designed for use on distributed computer systems contribute significantly to these advantages.

The design of distributed systems raises a number of interesting technical points which require solutions optimised to the architecture.

There is now every confidence in the philosophy of distributed computer control for power station application, and guidelines are being developed to assist in its application at new stations and in refurbishment at conventional and nuclear plant.

9. REFERENCES

1. Johnstone, Marsland & Pringle, "A Distributed Control System of a 120MW Boiler", IEE Conf. Proc. No. 153, 1977.

2. Johnstone & Marsland, "Distributed Computer Control for Power Station Plant", Electronics & Power 24, p. 373-378.

3. Brown & Walker, "COMPASS - An Engineer Orientated Computer On-line Multipurpose Application Software System", IEE Conf. Proc. "Trends in On-line Computer Control Systems", 1979. (The name COMPASS has been changed to CUTLASS for copyright reasons.)


[Figure not reproducible: LSI-11 processors with fixed and removable cartridge discs, serial I/O units, watchdogs, VDU, Unibus/media highway, analog and digital input/output and stepping motor drives, serving sub-systems 1-5: feedwater, temperature, air, pressure (coal) and pressure (oil).]

FIG. 1  SKELTON GRANGE SYSTEM BLOCK DIAGRAM


[Program listing not reproducible: CUTLASS superheater attemperator control scheme showing GLOBAL, COMMON and LOCAL data declarations, limit constraints, and a PI controller with hand/auto output, run every 2500 msec.]

Figure 2. Part of Superheater Temperature Control Using CUTLASS DDC Language


Condensate Clean Up Control System with Distributed DDC

by K. Yoshioka,* T. Tazida,* O. Nakamura,* S. Kobayashi,**

* TOSHIBA CORPORATION, TOKYO, JAPAN

** WASEDA UNIVERSITY, TOKYO, JAPAN

ABSTRACT

In the operation of the Condensate Clean Up System in BWR plants, regeneration intervals of the demineralizers are not equal and there is no basis for determining the interval, which is usually decided by the operator's experience. Regeneration of resin is, therefore, sometimes performed too early, leaving much capacity of the resin unused.

In order to improve this operating mode, the following approach was made:

(1) to equalize the operating time differences of two sequential demineralizers (t1-t0, t2-t1, ..., in Fig. 2), which are called operating intervals in this paper.

(2) to control the initial flow of a newly connected demineralizer, as a new one has less flow resistance, resulting in a higher flow which causes further unbalance in flow between new and old demineralizers in parallel operation.

The economic and efficient operation of this system, along with the reduction of radioactive resins and a safety supervisory function, can be achieved by distributed DDC with microprocessors along the lines of the above approaches.

1. INTRODUCTION

The current Condensate Clean Up System in BWR plants consists of multiple demineralizers as shown in Fig. 1. The system does not control the flows of the demineralizers, and their operating intervals are decided by the operator's experience. Unequal flows result, as a new demineralizer has a higher flow than the old ones. If their operating intervals are very short, the operator regenerates the demineralizer by manual control in order to continue its operation, so they are regenerated with remaining capacity.

Fig. 1  Condensate Clean Up System (condensate pump, demineralizers, feed water heater, reactor)


In addition, the system is usually operated with one spare demineralizer for regeneration. To decide the termination point at which the first demineralizer should be switched to the spare for regeneration, the operator calculates the ion exchange quantity [defined as the product of the conductivity difference (μS/cm) between the inlet and outlet of the demineralizer and the flow (ton)] by himself, which is considered to be troublesome and exhausting work for the operator. The current operating condition of the demineralizers (8 normally in operation, 1 standby) is shown in Fig. 2. The operating intervals, (t1-t0, t2-t1, ..., t8-t7), are not equal.

As each interval is not controlled, the effect of increases or decreases in their flows and inlet conductivities, that is, the distribution of their ion exchange quantities, cannot be equalized by this operation mode. Therefore, except for random average disturbances, the operating intervals of the demineralizers are not equal. If the interval between two sequential demineralizers were very short, the latter demineralizer would reach the end point before the former had finished its regeneration. In such a case, the conductivity of the clean-up water returned to the reactor becomes higher, which will have an unfavourable effect on plant operation.

Fig. 2  Time vs Ion Exchange Quantity

In order to avoid such a condition, it is best that each of the intervals at the end point should be controlled to be equal.

Then, the system can be operated safely with a higher end point than in the current operation mode, by making use of the remaining capacities within the limit of the outlet conductivity.

The ion exchange quantity, M(t), is defined as

    M(t) = ∫[t0,t] ( Di(τ) - Do(τ) ) Q(τ) dτ        (1)

where

    t     = time
    t0    = time beginning to connect with the Condensate Clean Up System
    Di(τ) = inlet conductivity of the demineralizers
    Do(τ) = outlet conductivity of the demineralizers
    Q(τ)  = flow of the demineralizers


In the case that the ion density of the water at the inlet of the demineralizer is increased, due to an increase of inlet conductivity, it is difficult to decide whether its effect comes from the flow or from the conductivity. It is not easy, therefore, to prove the effect of the equalizing operation in terms of the flow (control of the flow). The concept shown in Fig. 3 has therefore been adopted instead of the operation mode shown in Fig. 2.

The lines a, b, ..., h show the integral flows of the 8 demineralizers. Here, they should normally be regenerated at the end point, Q(t) = Qm. If Di(t) is increased stepwise, the end point, Qm, which depends on the inlet conductivity, should be changed to the lines (1) and (2) as shown in Fig. 3. The gradient of the lines a, b, ..., h shows the flows of the demineralizers.

Fig. 3  Time vs Demineralizer's Flow (the end point depends on the inlet conductivity)

In this concept, we can express the two process variations, the conductivity (μS/cm) and the integral flow (ton), separately. Therefore, the following analysis can be made in a simple manner.

2. SYSTEM ANALYSIS

Even in the case that the inlet conductivity of the demineralizers is increased, their outlet conductivities are practically constant. From equation (1),

    dM/dt = ( Di - Do(M) ) Q = -Q Do(M) + F        (2)

where F = Di Q.

By numerical analysis of equation (2), M(t), the Characteristics of Time vs Ion Exchange Quantity in Fig. 4, is obtained.

We can consider that the actual characteristics are nearly equivalent to a straight line.

So, we can set the assumption as follows.

Fig. 4  Characteristics of Time vs Ion Exchange Quantity (actual characteristic compared with calculation by Eq. (2); time in hours)


THE OUTLET CONDUCTIVITY IS CONSTANT IN THE CHARACTERISTICS OF ION EXCHANGE QUANTITY

Under this assumption, as the right side of equation (2) is constant, the characteristic of ion exchange quantity with respect to time is a straight line in the domain in which regeneration is possible, except for a large-scale leakage of sea water. Therefore, Time vs Ion Exchange Quantity is equivalent to Time vs Integral Flow.

Considering the above correlation of Time vs Integral Flow, control of the flow of the demineralizers is the most appropriate for efficient operation of the system. The concept for this operation is that:

(1) the demineralizer closer to the end point of its capacity should have the maximum possible flow.

(2) the demineralizer which has much capacity remaining should have a lower initial flow to give a wider flow control range.

So we adopt the following control approaches:

i) Minimize the initial flow for a newly connected demineralizer.

ii) Maximize the flows of the other demineralizers.

This equalizes the operating intervals for all demineralizers.

General Theory

Nomenclature used in this paper is as follows:

    M: life of a demineralizer (constant)
    a: initial flow
    b: maximum flow
    T: operating interval of demineralizers
    i: operating interval number of a demineralizer
    K: number for a group of 8 demineralizers

[Figure: General Analysis Configuration, showing groups No.(K-1), No.K and No.(K+1) with operating intervals 1-8]

For the characteristics of the first demineralizer in group K, we obtain

    a T1^k + b ( T2^k + T3^k + ... + T8^k ) = M        (3)

For the characteristics of the second one,

    a T2^k + b ( T3^k + ... + T8^k + T1^(k+1) ) = M        (4)

From equations (3) and (4),

    T1^(k+1) = (a/b) T1^k + (1 - a/b) T2^k        (5)


The same procedure is applied to the other demineralizers, and seven similar equations are obtained as follows.

    T2^(k+1) = (a/b) T2^k + (1 - a/b) T3^k
    T3^(k+1) = (a/b) T3^k + (1 - a/b) T4^k
        ...
    T8^(k+1) = (a/b) T8^k + (1 - a/b) T1^(k+1)        (6)

By defining Xi^k = Ti^(k+1) - Ti^k, from equations (5) and (6),

    X1^(k+1) = (a/b) X1^k + (1 - a/b) X2^k
        ...
    X7^(k+1) = (a/b) X7^k + (1 - a/b) X8^k
    X8^(k+1) = (a/b) X8^k + (1 - a/b) X1^(k+1)

Also, by defining X^k = (X1^k, X2^k, ..., X8^k)^T and expressing a/b as g, the following time-invariant discrete equation is obtained:

    X^(k+1) = A X^k        (7)

where

        | g       1-g      0    ...    0   |
        | 0        g      1-g   ...    0   |
    A = |                 ...               |
        | 0        0       0    ...   1-g  |
        | g(1-g)  (1-g)^2  0    ...    g   |

The constant operating interval of the 8 demineralizers in the steady state is equivalent to the condition that X^k converges to zero.

If an eigenvalue is denoted by λ, the following relation may be obtained:

    A x = λ x        (8)


Therefore, the characteristic equation of A is obtained as

    (λ - g)^8 - λ (1 - g)^8 = 0        (9)

For example, from equation (9) for g = 0.5, the eigenvalues λ are

    λ = 1,  0.85 ± 0.37j,  0.45 ± 0.47j,  0.16 ± 0.26j,  0.12        (10)
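Equation (9) can be checked numerically. The sketch below (illustrative only, not from the paper) finds the eight roots with a simple Durand-Kerner iteration and confirms that λ = 1 is always a root, since (1 - g)^8 - (1 - g)^8 = 0, while all remaining eigenvalues lie inside the unit circle, which is what makes the intervals converge:

```python
# Illustrative check (not from the paper) of the characteristic
# equation (9): (lam - g)^8 - lam*(1 - g)^8 = 0, solved with a simple
# Durand-Kerner simultaneous root-finding iteration.
from functools import reduce

def char_poly(lam, g=0.5):
    return (lam - g) ** 8 - lam * (1 - g) ** 8

def durand_kerner(p, degree, iters=500):
    """All complex roots of the monic polynomial p of the given degree."""
    roots = [(0.4 + 0.9j) ** k for k in range(degree)]
    for _ in range(iters):
        roots = [
            r - p(r) / reduce(lambda acc, s: acc * (r - s),
                              (s for j, s in enumerate(roots) if j != i),
                              1.0)
            for i, r in enumerate(roots)
        ]
    return roots

eigenvalues = durand_kerner(char_poly, 8)
# lam = 1 is always a root of (9), for any g.
assert min(abs(r - 1) for r in eigenvalues) < 1e-6
# Every other eigenvalue lies strictly inside the unit circle, so the
# deviations X^k decay from group cycle to group cycle.
others = [r for r in eigenvalues if abs(r - 1) > 1e-6]
assert all(abs(r) < 1 for r in others)
```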

For simplicity, we consider the example of 3 demineralizers instead of 8 in obtaining the initial value for λ = 1 in equation (7).

               | g       1-g      0    |
    X^(k+1) =  | 0        g      1-g   |  X^k        (11)
               | g(1-g)  (1-g)^2  g    |

Here,

    a X1^0 + b ( X2^0 + X3^0 ) = 0        (12)

Their eigenvalues are

    λ = 1,  (1/2) { (3g - 1) ± (1 - g) sqrt(4g - 1) }

where g ≤ 1. Therefore, |λ| < 1 on g < 1 (apart from λ = 1 itself). Equation (11) is made diagonal by the variable transformation X = P Z, and

              | 1    0    0   |
    Z^(k+1) = | 0   λ2    0   |  Z^k        (13)
              | 0    0   λ3   |

is obtained, where P is the matrix of eigenvectors and P^(-1) is computed from its cofactors divided by det P. We transfer the initial value satisfying equation (12) to Z by

    Z^0 = P^(-1) X^0

It is clear that the initial value for the eigenvalue λ = 1 is usually zero.


Therefore, it is assured that THIS CONTROL SYSTEM IS OPERATED TO EQUALIZE THEIR OPERATING INTERVALS.

3. COMPUTER SIMULATION RESULTS

The result of the system analysis shows that the uncontrolled operating intervals are gradually averaged by controlling the demineralizers' flows. Then, in order to prove the analysis results for 8 demineralizers, the following computer simulations were tried on the above-mentioned control approach.

Simulation [1] ... Time vs Ion Exchange Quantity for random disturbance.

Input conditions:

(1) Normal inlet conductivity: 0.12 μS/cm
(2) Change value of inlet conductivity: +0.06, -0.03
(3) Change mode of inlet conductivity: random mode
(4) Outlet conductivity: 0.07 μS/cm
(5) Ratio of the control flow (g = a/b): 0.867 (a < b)

Result: We cannot distinguish in Fig. 6 whether the cause is in the flow or in the inlet conductivity.

Fig. 6 Time vs Ion Exchange Quantity for Random Disturbance

Simulation [2] ... Time vs Integral Flow for random disturbance

Input conditions: the same as Simulation [1].

Result: In Fig. 7 we can express the cause in the flow as the gradient of the characteristics, and the cause in the inlet conductivity as their end points.


Fig. 7  Time vs Integral Flow for Random Disturbance

Simulation [3] ... All equal controlled flow on each demineralizer.

Input conditions:

(1) Normal inlet conductivity: 0.12 μS/cm
(2) Change value of inlet conductivity: none
(3) Change mode of inlet conductivity: none
(4) Outlet conductivity: 0.07 μS/cm
(5) Ratio of controlled flow (g = a/b): 1 (all equal flow)

Result: The previous distribution of operating intervals is not averaged in Fig. 8.


Fig. 8 Time vs Integral Flow in case of All Equal Controlled Flow

Simulation [4]

Input conditions: the same as Simulation [3] except the ratio of control flow:

(5) Ratio of control flow (g = a/b): 0.867 (a < b)


Fig. 9  Time vs Integral Flow in the case a < b

Result: The initial distribution of operating intervals is gradually averaged by the effect of input condition (5).
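The qualitative results of Simulations [3] and [4] can be reproduced by iterating the interval recursions (5)-(6) directly. The sketch below is hedged (not the paper's simulation program; the initial intervals are invented): g = 1 leaves unequal intervals unchanged, while g = a/b = 0.867 gradually averages them out.

```python
# Hedged sketch (not the paper's simulation program): iterating the
# interval recursions (5)-(6), Ti' = g*Ti + (1-g)*T(i+1) with
# T8' = g*T8 + (1-g)*T1', for g = 1 (Simulation [3]) and g = 0.867
# (Simulation [4]). Initial intervals are invented.

def next_intervals(T, g):
    """One group cycle of the 8-demineralizer interval dynamics."""
    new = [g * T[i] + (1 - g) * T[i + 1] for i in range(7)]
    # The 8th demineralizer sees the *new* interval of the 1st (eq. (6)).
    new.append(g * T[7] + (1 - g) * new[0])
    return new

def spread(T):
    return max(T) - min(T)

T0 = [10.0, 14.0, 9.0, 12.0, 11.0, 13.0, 8.0, 15.0]  # hours, arbitrary

T_equal, T_ratio = T0[:], T0[:]
for _ in range(300):
    T_equal = next_intervals(T_equal, 1.0)    # Simulation [3]
    T_ratio = next_intervals(T_ratio, 0.867)  # Simulation [4]

assert abs(spread(T_equal) - spread(T0)) < 1e-9   # not averaged
assert spread(T_ratio) < 0.01 * spread(T0)        # intervals equalized
```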

4. CONTROL METHOD

From the system analysis and computer simulation, even if there is an increase of the inlet conductivity, e.g., leakage of sea water into the turbine's condenser and so on, the operating intervals should be equalized as much as possible.

It has been considered that two approaches will equalize the operating intervals:

(1) a predictive control approach settling the end point to equalize the intervals within the limit of the outlet conductivity of the demineralizers.

(2) ratio control of the initial flow of a demineralizer newly connected to the Condensate Clean Up System against the flows of the other demineralizers.

The former is more complex and needs more memory capacity than the latter.

The deviations of the ion exchange quantities at the end point are very small, because of the averaged intervals under control (2).

Considering the above results, it is concluded that the ratio control method is the most favourable, and these controls can be performed by the DDC Station, a unit of the Toshiba Digital Instrumentation and Control System "TOSDIC". This distributed DDC system is able not only to control the flows, but also to perform backwash and regeneration sequence control in the Process I/O Station. Both stations are connected to the digital bus via the Access Station in the TOSDIC system.

TOSDIC hardware basically consists of three sorts of stations, Loop Station [DDLS], DDC Station [DDCS] and Access Station [DDAS] with Mini-operator Console [POC], although they can be easily expanded up to a large-scale overall instrumentation and control system which may employ a CRT display unit.

The distributed DDC system will be applied to the Condensate Clean Up System in the concept shown in Fig. 10.


[Figure not fully reproducible. Fig. 10  Condensate Clean Up Control System Configuration with Distributed DDC, showing the Loop Station, DDC Station (flow control), CRT console, mini-operator console, multiplex transmission, Process I/O Station (sequence control) and process computer.]

Notes:
1) Multiplex: 200 m maximum length; provided for an overall production system.
2) Mini-operator Console: option.
3) * sign: includes Loop Station (DDLS).

Process Computer System: provides information processing and high-level functions for an entire system, complete with a CRT display intended for a one-man control system.
Access Station (DDAS): a data transmission unit for concentrated management of the TOSDIC system.
DDC Station (DDCS): with the built-in microprocessor, up to 8 loops can be controlled.
Process I/O Station (PIOS): processes non-control variables.
Loop Station (DDLS): capable of effecting the same operation and monitoring as those by a conventional analog controller.

Control levels: 1: manual operation; 2: instrumentation control panel level; 3: local console-concentrated level (medium-scale instrumentation); 4: computer hierarchy control level (overall instrumentation control system).


5. CONCLUSION

The features of the control system discussed in this paper are as follows:

1. More Economical Operation for Condensate Clean Up System

As the end point for regeneration can be extended by the control method described in this paper, average operating intervals for the demineralizers will be longer than in the current uncontrolled system.

If the maximum flow of the demineralizers were designed to handle the excess flow due to the flow decrease of a newly connected demineralizer, the number of operating demineralizers could be reduced, or the system could have more operating capacity than the current one.

2. Decrease of Radioactive Waste

The regenerative operation will be decreased for the same reason as above. The consumption of acid and alkaline for regeneration is also decreased as compared with the current operation mode. In addition, this radioactive waste is ordinarily sent to the R/W equipment, so it is easily controlled.

3. Improved Operability of the System

Operators obtain the ion exchange quantities in the current system by calculating from the flow and the conductivity difference between the inlet and outlet of the demineralizer, but these quantities will be directly indicated by the distributed microprocessors in the improved system. In addition, the operators can remotely supervise the operating conditions by multiplex transmission, and watch concentrated information combined with other technical information on a CRT display unit.

4. Safety Supervisory Function

The advantage of digital processing is fully utilized by the distributed microprocessor system "TOSDIC" for various safety control steps which formerly were not practically obtainable from analog instrumentation. The added safety management includes a functional check of the validity of sensors, upper/lower alarm limits, rate-of-change alarms and deviation alarms.
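The supervisory checks named above are straightforward in software. The following is a hedged sketch (thresholds and point names invented, not TOSDIC code):

```python
# Hedged sketch of the supervisory checks named above: sensor validity,
# upper/lower limits, rate-of-change and deviation alarms. Thresholds
# and point names are invented; this is not TOSDIC code.

def check_point(value, last_value, setpoint, limits):
    alarms = []
    lo_valid, hi_valid, lo, hi, max_rate, max_dev = limits
    if not (lo_valid <= value <= hi_valid):
        alarms.append("SENSOR INVALID")     # outside credible sensor range
    if value > hi:
        alarms.append("HIGH LIMIT")
    if value < lo:
        alarms.append("LOW LIMIT")
    if abs(value - last_value) > max_rate:
        alarms.append("RATE OF CHANGE")     # per-scan change too large
    if abs(value - setpoint) > max_dev:
        alarms.append("DEVIATION")          # too far from the setpoint
    return alarms

# Conductivity point: valid 0-10 uS/cm, alarm band 0.02-0.30 uS/cm,
# rate limit 0.05 per scan, deviation 0.10 from a 0.12 uS/cm setpoint.
limits = (0.0, 10.0, 0.02, 0.30, 0.05, 0.10)
assert check_point(0.12, 0.12, 0.12, limits) == []
assert "HIGH LIMIT" in check_point(0.40, 0.12, 0.12, limits)
assert "RATE OF CHANGE" in check_point(0.40, 0.12, 0.12, limits)
```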




THE NORTHEAST UTILITIES GENERIC PLANT COMPUTER SYSTEM

K. J. Spitzner
Northeast Utilities Service Company

Hartford, Connecticut

ABSTRACT

A variety of computer manufacturers' equipment monitors plant systems in Northeast Utilities' (NU) nuclear and fossil power plants. The hardware configuration and the application software in each of these systems are essentially one of a kind. Over the next few years these computer systems will be replaced by the NU Generic System, whose prototype is under development now for Millstone III, an 1150 MWe Pressurized Water Reactor plant being constructed in Waterford, Connecticut. This paper discusses the Millstone III computer system design, concentrating on the special problems inherent in a distributed system configuration such as this.

INTRODUCTION

The Generic Approach to plant computers evolved from a need to standardize because of economic and time constraints. NU was simultaneously faced with developing a new plant computer system for Millstone III and the replacement of aging computer systems in three nuclear and two fossil power plants. The need for a new approach resulted in the Generic System concept, which is distinguished by two characteristics:

Standardized hardware configuration. A highly modular configuration was developed, allowing the tailoring of a system to a specific plant's needs by simply removing modules from, or adding modules to, the basic design. A benefit of this approach is a reduction in maintenance expenses through equipment commonality.

Standardized scan, log and alarm software. A major portion of the application software, collectively called the scan, log and alarm software, is very similar from plant to plant. Designing this software with transportability in mind can result in significant cost and time savings.

The Millstone III computer system, the first Generic System, is presently undergoing software development at NU headquarters in Berlin, Connecticut.

SYSTEM HARDWARE

Why Distributed?

The Millstone III Plant Computer System configuration is shown in the attached figure. The distributed-hierarchical design has not changed significantly since early 1975. At that time NU had limited experience with a distributed system for Millstone II, where a front end computer to an IBM 1800 was used exclusively for digital input (DI) scanning of 2300 points in 8 milliseconds. Millstone III requires the scanning of all 3500 digital inputs in 4 milliseconds. All 1200 analog inputs (AI) will be scanned every 3 seconds, and engineering units conversion and limit checking will be performed by the front end computers. The power of the host computers is reserved for data base management, control of system resources and number crunching. Clearly, a distributed computer configuration was necessary to achieve system objectives.

Redundancy was not a requirement initially, but since most of the multiple components needed to achieve redundancy were already present, and it was thought that this capability might be required in the future, it was built in from the start. The concept is one of pairing computers for load-sharing redundancy. Over the years, a changing regulatory climate, and more recently the effects of Three Mile Island, have borne out the wisdom of this decision.

System Highlights

Some of the features of the system are:

1200 analog inputs, all scanned in 3 seconds.

3456 digital inputs, all scanned in 4 milliseconds.

Redundancy. No single failure of a computer, a communications link, a disk, a bulk core unit, or a color-graphics subsystem prevents the operator from obtaining plant data.

System data base on bulk core shared by host computers, for high speed access.

Communications capability to the off-site IBM 370/3033 at the headquarters Engineering Computer Center, and with other dedicated microprocessor and minicomputer systems in the plant.

Man-machine interface incorporating color-graphics hardware.

Magnetic tape units for the saving of all plant events for later analysis.

These features will be discussed in more detail in appropriate sections of this paper.

Sate l l i te Computers and Process Equipment

The s a t e l l i t e system consists of the followinghardware:

6 Modular Computer Systems, Inc. (Modcomp)11/26, 16-bit computers with 64K words ofcore memory and hardware floating point.

18 Modcomp 1725 Wide Range Analog Input Subsys-tems.

6 Modcomp 1199 Input/Output (I/O) Interfacesfor digi tal I/O and analog outputs.

6 5213 I/O Bus Switches.12 4701 Interval Timers.

Analog and digi ta l inputs are evenly distributedover the six sa t e l l i t e computers which allows the scan-ning software to be exactly alike in a l l s a t e l l i t e s .The remaining process I/O equipment, because of limitedquanti t ies , i s connected to specific s a t e l l i t e s asshown in the attached figure. Controllers for each ofthe six sets of process I/O hardware are. connected toI/O bus switches which can be switched manually, orunder program control, to either of two sa t e l l i t e com-puters. The s a t e l l i t e s , therefore, operate In pai rs ,normally sharing the process I/O burden. In case of acomputer fai lure, redundancy i s achieved through theremaining computer of a pai r , which i s designed to per-form double duty.


Network Communications

Communication between satellite and host computers is performed over 12 high speed serial coax links. Data transmission is synchronous at word transfer rates of up to 125 kHz. Remote fill capability allows satellite program loading to be initiated at the host computers. Satellite console initiated fill is also supported. The same type of link is also used for host to host communications.

Host Computers

The host system consists of the following hardware:

2 Modcomp Classic 7870 32-bit computers with 256K words of error correcting MOS memory, high speed 64-bit parallel floating point processor, and 3 I/O busses.

2 Modcomp Memory Plus (M+) bulk core storage units, 256K words each, dual-ported.

2 Ampex 88 M-byte disc drives.

2 GE Terminet 30 console typers.

4 4903 and 2 4905 Peripheral Controller Interfaces.

2 4906 and 4 5215 Peripheral Controller Switches.

2 4701 Interval Timers.

1 4821 Communications Link.

2 5198 IEEE-488 I/O Subsystems for system timing.

1 Chronolog Clock to establish system time.

Overall system control is handled at the host computer level, including system timing.

Man-Machine Interface

Two Aydin 5215 Display Generators driving six color Cathode Ray Tubes (CRT), four keyboards and four GE Terminet 340 line printers make up the man-machine interface. Three of the CRTs, with 25-inch screens, are mounted in the main control board of the control room. A switchable keyboard, also on the control board, can address any of these CRTs. In addition, the operator has available to him an operator's console consisting of a 19-inch color CRT, and a dedicated keyboard with trackball cursor control. The reactor engineer's and the shift supervisor's consoles are similar to the operator's console. Either display generator can be connected manually, or under program control, to either host computer. The four hard-copy devices, 300 lpm line printers, are used as log, alarm, trend and special reports printers.

Peripheral Equipment

The remaining system peripherals, connected to the hosts through peripheral switches, consist of the following:

1 Card reader, 300 cpm.

1 Data link to the headquarters Engineering Computer Center IBM 370/3033. A software package is used to emulate the functions of an IBM HASP workstation remote job entry terminal.

1 Line printer, 600 lpm.

2 WANGCO, 9 track, 45 ips, 1600 bpi magnetic tape drives used to make save-tapes of system software and for storage of plant historical data.

5 ADDS interactive CRT terminals, presently used for program development. When the plant is operational, two will be installed in the health physics and chemistry laboratories for information access to the plant computer data base.

1 Modem for remote terminal access.

SYSTEM OPERATION

The Operating System software supplied with the system by Modcomp consists of MAX III/MAXNET III for the satellite computers and MAX IV/MAXNET IV for the host computers. MAXNET is a superset of the MAX operating systems, adding network communications capability to the basic software.

Several custom software elements listed below were also included:

Special IBM HASP workstation terminal emulator.

Special communications handler to support a dial-up modem for remote terminal access.

Special I/O handler to support the Aydin color graphics system.

All application software described in subsequent sections was written by NU.

Data acquisition and a limited amount of data processing are performed at the satellite level. Periodically this data is shipped via Modcomp network software through Host B to M+, the location of the system high speed data base. Further data analysis is subsequently performed in Host A, where all tasks requiring data base access reside. Host level peripherals are normally in the AUTO mode, and by software switched to Host A. Host B, in addition to acting as the satellite data concentrator, controls and synchronizes the complete system.

Digital Input in the Satellite Computers

Every 4 milliseconds Host Computer B simultaneously interrupts all six satellites and a DI scan is initiated. A satellite collects the status information on its process I/O group, compares this with the previous scan's data, and starts building a buffer which ultimately holds change-of-state information for 250 scans covering a period of 1 second. The buffer contains the following information:

Buffer size.

Time of day in hours, minutes, seconds and milliseconds.

A header containing counts of attempts and failures in shipping this buffer to the host.

Time stamp information indicating during which of the 250 scans changes in state occurred. This is used later to speed up sorting of the data from the six satellites in the host computers.

Actual status of all inputs during the 250th scan.

Change-in-state data consisting of the scan number, the satellite number, and the number of changes in state this scan, followed by the point numbers and actual status information on these points.

Change-in-state data as above for subsequent scans.

At the end of the 1-second period, this buffer is shipped via Modcomp network software to Host Computer B, which acts as the DI data concentrator. While it is being shipped, a second buffer is starting to be filled with change-of-state information for the next second. This double buffering continues, with one buffer used for even-second scans and the other for odd-second scans.
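The per-second change-of-state collection and buffer swap described above can be sketched as follows. The class and method names are illustrative, and the buffer entries are simplified to (scan number, change list) pairs rather than the full word layout given earlier.

```python
# Sketch of the satellite DI scan cycle: detect changes of state against
# the previous 4 ms snapshot, accumulate them for 250 scans, then swap
# buffers so one second's data can ship while the next accumulates.
class DIScanBuffer:
    SCANS_PER_SECOND = 250          # one scan every 4 milliseconds

    def __init__(self, n_points):
        self.previous = [0] * n_points
        self.buffers = [[], []]     # even-second and odd-second buffers
        self.active = 0
        self.scan_no = 0

    def scan(self, status):
        """Record one snapshot; returns the filled buffer at second's end,
        otherwise None while accumulation continues."""
        changes = [(pt, val)
                   for pt, (val, old) in enumerate(zip(status, self.previous))
                   if val != old]
        if changes:
            self.buffers[self.active].append((self.scan_no, changes))
        self.previous = list(status)
        self.scan_no += 1
        if self.scan_no == self.SCANS_PER_SECOND:
            self.scan_no = 0
            full = self.buffers[self.active]
            self.active = 1 - self.active       # swap: double buffering
            self.buffers[self.active] = []
            return full                          # shipped to Host B
        return None
```

In the real system the swap is implicit in the even/odd-second partitioning; here it is made explicit by the `active` index.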

Two other DI features are a slow scan and the filtering of oscillating points. The slow scan allows the scanning of designated inputs at 100 milliseconds rather than 4 milliseconds. As many points as desired can be dynamically entered into this slow scan group.

The filtering of oscillating inputs is invoked when there are more than n successive changes in state during a one-second reporting interval, where n is a user-entered number initially set to 5. A flag is then set in the nth change-of-state entry and any additional ones are ignored for the remaining scans of that second. A new counting interval starts with the following second.
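A minimal sketch of that satellite-side filter, assuming each change is reported as a (scan number, point number) pair; the function name and representation are illustrative, not taken from the plant software.

```python
def filter_oscillations(changes, n=5):
    """Within one 1-second interval: pass the first n-1 changes of a point
    through unflagged, flag the nth, and drop the rest for that second."""
    counts, out = {}, []
    for scan_no, point in changes:
        counts[point] = counts.get(point, 0) + 1
        if counts[point] < n:
            out.append((scan_no, point, False))
        elif counts[point] == n:
            out.append((scan_no, point, True))   # flagged nth entry
        # counts[point] > n: ignored for the remaining scans of the second
    return out
```

A fresh `counts` dictionary per second models the new counting interval that starts with the following second.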

Digital Input in the Host Computers

Every second the satellites send DI data through Host B to six dedicated M+ partitions. Each partition is further divided into an area for even-scan data and an area for odd-scan data, thus providing double buffering. Host A then performs all remaining DI data processing, consisting of the saving of all changes in state at a 4-millisecond resolution, and the reporting of changes in state at a one-second resolution.

As the result of an interrupt from Host B, a task is activated in Host A to sort, in time sequence, all 4-millisecond change-of-state information out of the six partitions. Since it is historical data, possibly voluminous, to be saved for later analysis, an efficient means must be employed to get the information on to the final storage medium, which is magnetic tape. For that purpose the stored data is first accumulated in a 128-word global common table. When this table is full, it is transferred to a double-buffered M+ partition which has the capacity to hold 46 of these data tables. When this partition is full, data is in turn written to both disks, again employing double buffering. These disk areas, 21,000 sectors long, are then dumped to tape on demand or when full. Data analysis is in an off-line mode, with the operator having to enter a window consisting of date, start time and end time for which he wants the information to be listed.
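The time-sequence sort and fixed-size blocking can be sketched as a k-way merge; the entry layout and table handling are simplified assumptions, with each entry keyed on its scan number.

```python
import heapq

def merge_and_block(partitions, block_words=128):
    """Merge per-satellite change lists (each already in scan order) into
    one time-ordered stream, then pack entries into fixed-size tables akin
    to the 128-word global common table; returns complete tables plus a
    final partial one."""
    stream = heapq.merge(*partitions, key=lambda entry: entry[0])
    blocks, table = [], []
    for entry in stream:
        table.append(entry)
        if len(table) == block_words:    # table full: hand off for storage
            blocks.append(table)
            table = []
    if table:
        blocks.append(table)
    return blocks
```

Because each satellite's buffer is already in scan order, the merge is linear in the total number of entries, which matters for the "possibly voluminous" historical data stream.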

The sorting task also saves the actual status ofall inputs, as determined by the 250th scan of eachsecond, in global common. This data is then comparedto the past second's data and any changes in stateare reported to the operator. The one second resolu-tion was chosen to eliminate nuisance printouts dueto noisy inputs.

The treatment of oscillating DI points at the host level consists of keeping track of flags set by the satellites. If a point is flagged for more than three consecutive 1-second scan periods, it is automatically removed from scan and the plant operator is notified.
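Sketched under the same assumptions (names hypothetical), the host-side rule amounts to a per-point streak counter over the 1-second flag reports.

```python
def points_to_remove(flag_history, limit=3):
    """flag_history: per-second sets of flagged point numbers.  A point
    flagged for more than `limit` consecutive seconds is removed from scan
    (the real system would also notify the plant operator)."""
    removed, streak = set(), {}
    for flagged in flag_history:
        for pt in list(streak):          # a quiet second breaks the streak
            if pt not in flagged:
                streak[pt] = 0
        for pt in flagged:
            streak[pt] = streak.get(pt, 0) + 1
            if streak[pt] > limit:
                removed.add(pt)
    return removed
```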

Analog Input in the Satellite Computers

Analog input scanning, performed every three seconds, is controlled by scan tables consisting of one-word entries per input point, specifying the gain setting, automatic gain ranging or not, and the point number; the tables must be in compressed format. If a point is added to or deleted from scan, the selection word must be inserted or deleted in the appropriate spot and the scan table expanded or compressed.
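A sketch of one plausible one-word entry layout and the insert/delete operations that keep the table compressed; the bit assignments and the ordering by point number are assumptions, not taken from the paper.

```python
# Assumed one-word entry layout: bits 0-11 point number, bits 12-14 gain
# setting, bit 15 automatic gain ranging enabled.
def make_entry(point, gain, autorange):
    return (autorange << 15) | (gain << 12) | point

def add_point(table, entry):
    """Insert a selection word at the appropriate spot, expanding the
    table while keeping it compressed (no gaps)."""
    point = entry & 0xFFF
    i = 0
    while i < len(table) and (table[i] & 0xFFF) < point:
        i += 1
    table.insert(i, entry)

def delete_point(table, point):
    """Delete a point's selection word, compressing the table."""
    table[:] = [e for e in table if (e & 0xFFF) != point]
```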

After all raw analog values have been obtained, the processing begins with the conversion to voltage and engineering-units values. Alarm limit checking, based on engineering units, is also performed and alarm violations are flagged. An analog transmission table is then constructed, consisting of the voltage values, the engineering-units values, and the alarm flag table, and is shipped through Host B to M+ every three seconds.

An additional feature of AI is an Open Circuit Detection (OCD) scheme selectable for any input, but usually applied only to low source impedance inputs. Three points per satellite, one from each analog controller, can be checked during every 3-second scan. If a bad input is detected, the point is taken out of scan and a warning message is issued to the plant operator.

Analog Input in the Host Computers

A task in Host A updates the voltage and engineering-units values in the data base and, based on information in the alarm flag tables, performs the final limit check. Alarm messages are sent to the alarm CRT and the alarm printer, and output to an M+ buffer. Similar to DI, the M+ buffer is copied to disk when full, and ultimately to magnetic tape for historical storage and later analysis.

Man-Machine Interface

Plant operator/reactor engineer interface operation can be divided into three areas. First, 55 keyboard function keys are available to activate, abort, or resume application tasks. Most are dedicated to the demand execution of various performance, test, report, or display programs.

Second, an interface language was written for the manipulation and display of certain data base parameters or groups of parameters. It includes changing limits, substituting values, deleting/restoring to scan, building trend groups, or displaying graphic pictures, to name a few.

Third, a special software package was developed for the on-line building or modifying of color-graphic displays. This "picture compiler" allows the construction of a static background picture and the addition of dynamic components to this background. Frequently used dynamic symbols, like valves and pumps, for instance, can be selected from a shape library.

System Time Synchronization

The Chronolog clock supplies time-of-day information in BCD format to each host computer, read in through the host I/O subsystems. Further, the clock issues 1-millisecond interrupts to each host, which establishes the basic system timing. During normal operation Host B acts as the synchronizer and uses digital output, by means of its I/O subsystem, to control and synchronize all six satellite computers. Every 4 milliseconds the satellites are interrupted to start a DI scan, and each second, time-of-day information is broadcast to the satellites.

FAILOVER

A significant portion of the application software is necessary to detect and recover from device failures. The major components considered in the failure scheme are the host computers, disks, M+, satellite computers and communications links.

Central to host-level failover is a special partition on each M+ unit (primary and secondary), which contains counters for host-to-host failover and flags for each FORTRAN-accessible random access file. For failover purposes, each host writes its information to a dedicated sector on the partition. Data in both sectors is continually being updated.

During normal operation any task writing to the data base executes in Host A. Also, all peripherals are normally switched to the AUTO mode and software-connected to Host A. Host B acts as the satellite data concentrator and controls system time synchronization.

Host Failure

Every second, while each host updates its counters and flags in the special M+ partition, it also tries reading the other host's sector to check if it is doing the same.

Case A.

If a host is able to read the primary partition, but finds it has not been updated, it also reads the secondary partition. If this has not been updated either, the checking process is repeated twice more. Subsequent failures indicate that the other host is dead; its load will be shifted to the still-active host, and the plant operator will be notified.

If a host is not able to read the other host's primary or secondary partitions directly, it will attempt the read across the host-to-host link and through the other host. The following three variations exist in this case:

Case B.

If the other host's partition can be read over the link and if this data is being updated, then the other host must be alive, but the testing host cannot read M+. The testing host subsequently kills itself by stopping the timer controlling its task executions, thus halting any further updating of its partition. This creates a Case A situation and failover will proceed accordingly.

Case C.

If the other host's partition can be read over the link but the data is not being updated, the read is repeated twice. No action is taken as the result of further unsuccessful tests, because the other host must be hurting, but a message is output to the operator.

Case D.

If the other host's partition cannot be read over the link, the other host is assumed to be dead. The testing host must therefore keep itself alive. A message is written to the operator in this case.
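The four cases reduce to a small decision procedure. The sketch below abstracts each read path to one of three observations and elides the retry loops; the names and return strings are illustrative only.

```python
def failover_decision(primary, secondary, via_link):
    """Each argument is 'updated', 'stale' (readable but not being
    updated) or 'unreadable', describing the direct reads of the other
    host's primary/secondary sectors and the read across the link."""
    if primary == 'updated' or secondary == 'updated':
        return 'other host alive: no action'
    if primary == 'stale' and secondary == 'stale':
        return 'Case A: other host dead; shift its load here'
    # Direct reads failed altogether: try across the host-to-host link.
    if via_link == 'updated':
        return 'Case B: this host cannot read M+; stop own timer'
    if via_link == 'stale':
        return 'Case C: other host hurting; message to operator'
    return 'Case D: other host dead; keep self alive, message operator'
```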

Disk Failure

The two disks contain the same data files and load module files, but each one can only be addressed by its respective host. For a critical function, therefore, a disk failure is treated like a host failure and appropriate action is taken as explained above. For non-critical functions the user must decide how to proceed, and in most cases only a message is issued to the operator.

M+ Failure

Again, a number of different failure modes exist.They are:

1. DI raw data shipment
Host B is the recipient of the raw DI data sent by the satellites. The clocking task, also running in Host B, determines which M+ partitions can expect raw data and informs Host A accordingly. Host A does all further analysis of this data. This host, which had set the first word in each partition to a -1 after the last successful shipment, tests if this checkword has been overlaid by new data. If the read can be performed, but no new data is present, the secondary partition is similarly tried. A failure in this operation is followed by a test of Host B's status. A positive test indicates that Host B is able to take over the data base functions, and Host A subsequently stops its timer, killing itself. If Host B cannot take over, a message is written to the operator and Host A must try to keep going.

In case the read of the checkword cannot be performed from either partition, Host B's status is checked and action is taken as explained above.

2. AI data shipment
This operation is basically the same as the DI data shipment detailed above.

3. SPECREAD/SPECWRITE
SPECREAD and SPECWRITE are user routines controlling all FORTRAN access of the data files. A device online check is always performed on the addressed partition first, and if offline, the secondary partition is checked. If the secondary is offline also, the host stops its timers, killing itself, when the other host is alive, but it tries to continue operating if the other host is found to be dead.

SPECWRITE always writes to both the primary and secondary partitions. For each FORTRAN file written to, a bit is set or reset in the special M+ partition reserved for failover status information, depending on the success or failure of the operation. If neither partition can be written to, the condition is similar to the device offline condition above and action is taken accordingly.

SPECREAD initially checks if the last write operation to the partition addressed was successful, and then tries to perform the read operation. If either is negative, both operations are repeated on the secondary partition. A failure indicates again an offline-type condition and the above scenario is repeated.

4. Tasks on M+
If a task cannot be loaded from the primary M+ load module file, global assignments are changed to the partition name of the secondary M+ unit and the operation is repeated. A failure in this operation again reverts back to a condition similar to the device offline state.
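The SPECWRITE/SPECREAD semantics above can be sketched as follows; the class name, in-memory dictionaries and boolean returns are stand-ins for the real FORTRAN routines and M+ partitions.

```python
class SpecFiles:
    """Every write goes to both M+ units, recording a per-file success
    bit; reads consult the success bit first and fall back to the
    secondary unit, mirroring the failover behaviour described."""
    def __init__(self):
        self.units = [{}, {}]            # primary and secondary partitions
        self.online = [True, True]
        self.write_ok = [{}, {}]         # per-file status bits

    def specwrite(self, name, data):
        any_ok = False
        for u in (0, 1):
            ok = self.online[u]
            if ok:
                self.units[u][name] = data
            self.write_ok[u][name] = ok  # set or reset the status bit
            any_ok = any_ok or ok
        return any_ok                    # False -> offline-type condition

    def specread(self, name):
        for u in (0, 1):
            if self.write_ok[u].get(name) and name in self.units[u]:
                return self.units[u][name]
        return None                      # offline-type condition
```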


Satellite Failure

Normal frontend operation has each satellite scanning its own process equipment and all bus switches in the AUTO (programmable) mode. Each satellite of a pair is interrupted by its partner once a second, and during the resulting execution of an interrupt handler, a counter is set to 4. Tasks in each satellite then decrement this counter by one every second. If the count goes to zero, the other satellite is considered to be dead. The good satellite subsequently switches the other's process equipment to itself if possible, doubling its load. AI will continue to be scanned in 3 seconds; DI, however, will degrade to an effective 8-millisecond scan, the satellite alternating the scanning of its own process equipment with that of the bad satellite every 4 milliseconds.
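The partner-liveness mechanism is essentially a software watchdog; a minimal sketch (names illustrative):

```python
class PartnerWatchdog:
    """The partner's once-per-second interrupt reloads the counter to 4;
    a local task decrements it each second, and a zero count means the
    partner is considered dead."""
    def __init__(self):
        self.count = 4

    def partner_interrupt(self):
        self.count = 4                   # runs in the interrupt handler

    def tick(self):
        """Called once per second by a local task; True -> partner dead,
        so switch its process equipment to this satellite if possible."""
        if self.count > 0:
            self.count -= 1
        return self.count == 0
```

The reload value of 4 gives roughly three seconds of grace for jitter between the two satellites' one-second cycles before a failure is declared.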

If the bus switch of the suspect computer is in the manual mode, no action can be taken, but a message is written to the operator.

Communications Link Failure

Satellites can be booted and tasks can be loaded from either host computer across the redundant links. A link failure will therefore not prevent satellite startup.

Shipping of the DI and AI data normally occurs to Host B. If this is not successful, the data is sent across the second link to Host A.

Time Synchronization Failure

During normal operation a task in Host B controls system timing and synchronizes all satellites based on Chronolog clock information. This task also continually monitors the operation of the clock, the 4701 interval timer which acts as the back-up timer, and the host I/O subsystem. In case of a clock failure, the back-up timer is utilized. If the I/O subsystem fails, the redundant subsystem will take over, and if Host B fails, Host A will continue the system timing function.

CONCLUSION

Anticipated regulatory changes brought on by Three Mile Island will have far-reaching effects on future plant computer designs. Redundancy, greater computing power, communications capability and ease of system expansion will constitute some of the design objectives. The Generic System, developed for Millstone III and the replacement of existing plant computers, provides the foundation for meeting these future requirements.


[Figures: Millstone Point Unit 3 computer system configuration — process I/O subsystems (analog and digital inputs/outputs, pulse inputs), satellite computers with AI and DI handling capability, host computers, and a modem for remote terminal access.]


DESIGN CONCEPTS AND EXPERIENCE IN THE APPLICATION OF DISTRIBUTED COMPUTING TO THE CONTROL OF LARGE CEGB POWER PLANT

By J N Wallace, CEGB, South Western Region, Bristol, England.

1. INTRODUCTION

With the ever-increasing price of fossil fuels it became obvious during the 1970's that Pembroke Power Station (4 x 500MW oil fired) and Didcot Power Station (4 x 500MW coal fired) were going to operate flexibly, with many units two-shifting frequently. The region was also expecting to refurbish nuclear plant in the 1980's. Based on previous experience with mini-computers, the region initiated a research/development programme aimed at refitting Pembroke and Didcot using distributed computer techniques that were also broadly applicable to nuclear plant. Major schemes have now been implemented at Pembroke and Didcot for plant condition monitoring, control and display. At the time of writing all computers on two units at each station are now functional, with a third unit currently being set to work.

This paper aims to outline the generic technical aspects of these schemes, describe the implementation strategy adopted and develop some thoughts on nuclear power plant applications.

2. THE DISTRIBUTED COMPUTER CONTROL/MONITORING/DISPLAY SYSTEMS AT PEMBROKE AND DIDCOT POWER STATIONS

2.1 System Requirements

In examining the various solutions available for a new control/monitoring/display system at Pembroke and Didcot, a number of basic requirements were identified. These included:

Performance The need for a system capable of high control performance to ensure minimum plant damage and environmental problems whilst meeting the new demanding operating regimes.

Flexibility The need for a system that could cope with a variety of problems, encompass new developments as they arose and allow further system expansion in the future.

Integrity and Reliability Important requirements were that there should be no single point failures, that the system should be designed around a fail-safe philosophy and that as hardware failed the system performance should degrade gently.

Hardware A general requirement was that the hardware purchased should be commercially available. The amount of special purpose hardware was therefore to be minimised.

Ease of Commissioning The system had to be capable of being developed and commissioned on a piecemeal basis since the 500MW units were fully operational. No specific planned outage time was available since this would have considerably increased total scheme cost.

Ease of Maintenance/Training The system was required to be fully maintainable by station staff with limited training or specialist skills. Spares holding and availability were also crucial considerations.

Project Timescales/Cost Project timescale and cost are inextricably linked. A system capable of rapid installation, development, commissioning and refinement was therefore needed. Timescale was also important because of the rapidly approaching need to two-shift the units without incurring plant damage.

Parallel Experimental Development In view of the project timescales and complexity of the performance problems there was a requirement that the system should permit small-scale experimental work to be incorporated directly into the main scheme.

South Western Region viewed the above from a background of experience on power stations with stand-alone mini-computer systems. It was clear that a distributed computer based control/display system supporting a high-level, engineer-orientated software system was the only feasible solution. The system described in subsequent sections is a direct consequence of this conclusion.

2.2 System Design

The eventual design for each boiler/turbine unit at Pembroke called for five interconnected computers arranged in hierarchical fashion (Fig. 1(a)), whilst at Didcot (Fig. 1(b)) a ring structure was adopted. In the light of the system requirements identified above the following key technical decisions were made:

Hardware Modularity All computers were to be identical in form in order to minimise maintenance, training and spares problems. The function of each computer was to be determined by software plus the plant input/output connections.

Functional Distribution It was known that the fewer loops per computer the smaller the effect of failure. However, economics prevailed to the extent that each plant area was allocated one computer. In addition, at Pembroke a fourth computer was allocated the dual functions of primary control displays and unit supervisory control. Since all plant area computers displayed their data via the unit supervisor it was felt that a back-up display computer should also be incorporated. Having no control function to perform, it was thus also able to act as system host and plant condition monitor.

Page 217: DISTRIBUTED SYSTEMS FOR NUCLEAR POWER PLANTS

- 199 -

Incremental Actuator Drives An incremental system using stepper motors driving E/P or E/H converters was selected because of the natural freeze-on-fail characteristics. To avoid plant area computer processor overload, individual stepper motor drive cards were incorporated. These were 8-bit microprocessor based and provided with digital I/O ports and 2 dedicated ADC's. Individually powered from the secure 50V DC battery supply, they were designed to give moderate-performance, single-loop back-up control in the event of plant area computer failure. This ensured that two-shifting without extreme plant transients was guaranteed.

Communications In any distributed system an inter-processor communications system is a necessity. Serial data transfer at up to 9.6K baud (but in practice 4.8K baud) was selected because of its fail-safe characteristics, insensitivity to cable run lengths and capability for future expansion. Basic data communication at Pembroke was based on a point-to-point arrangement with the unit supervisor as the data centre collector/distributor. However, a serial bus linking all computers to the plant monitor host provided a back-up data transfer path for display purposes. This serial bus was also designed to carry all hosted software loading and programme amendment activities. The Didcot system was constructed on a ring basis because of the higher monitoring/display requirements.

Display and Command A major innovation in the scheme was the intention to provide operators with a much improved data display via the twin colour VDU's and a wide range of command facilities via multiplexed pushbuttons mounted in modular 72mm DIN grid plaques. These pushbuttons were designed to assert opto-isolated digital inputs and their function was to be determined by application software within each computer. As a matter of policy all desk modules were purely passive, including the computer/standby manual actuator drive plaques. Potentially high reliability and flexibility were key factors in this decision.

Software Previous experience with a variety of software systems/approaches led to the conclusion that the use of a simple, efficient, high-level, multi-task software system containing a range of 'uncrashable' general purpose statements was an optimum method of developing integrated control, monitoring and display systems. An enhanced multiprocessor version of existing software was therefore developed and used by a variety of engineers to test the equipment and subsequently set it to work.

The system design outlined above resulted in the functional hierarchy portrayed in Fig. 2(a).

2.3 Typical Plant Area Computer System

Analogue plant data is obtained through a 64-channel scanner with a 200 channel/second capability. In practice, a lower scan rate is employed, with the ADC integrating over a full mains cycle to give good supply frequency noise rejection. Solid state input selectors are used throughout the control computers whilst reeds are used for low-level signals with variable common mode signal level. 50V dc status signals are sensed and controlled by the computer using opto-isolated status inputs and reed relay isolated output cards.

Plant area computers required to provide displays have a solid state display interface and 32K (or 64K) solid state backing store for storing display generation programmes and data.

Control outputs to plant actuators go through a high-integrity output interface which is described in more detail in Section 2.4.

2.4 Typical Intelligent Actuator Drive Card

The only piece of hardware that was not commercially available was the actuator drive card shown in Fig. 3(a). Each is individually powered from the secure 50V dc supply and is designed to drive a two-phase stepper motor using series resistors and the 50V dc supply. Opto-isolation is used throughout to ensure non-interaction with other cards and signals. Two-phase stepper motors were incorporated into the E/P and E/H converters in order that a totally passive standby manual drive could be implemented. Whenever the card watchdog drops out (bringing a flashing supply to the SM plaque 'computer fail' button) or the 'computer fail' button is depressed, the changeover relay isolates the stepper motor from the card and incorporates a phasing capacitor across the stepper motor. The standby manual plaque's raise/lower buttons then merely apply AC to one stepper motor phase and the motor rotates in the appropriate direction.

The card was originally conceived as a device for minimising LSI-11 CPU utilisation when driving multiple (up to 14) stepper motors. However, merely by adding the two floating ADC's and opto-isolated digital I/O an intelligent backup controller was obtained with minimal increase in cost/complexity. The cards operate fixed programmes blasted into PROM and operate in one of two modes:

1. Normal: Data handshake with the LSI-11 defines the number of pulses requested and I/O status. Requests for too many pulses are totally ignored for security reasons. The watchdog is reset only when the LSI-11 calls the card, and hence zero-pulse requests must be made to keep the watchdog set. (Typical dropout time is 2 seconds.) Having received a pulse request the microprocessor generates the appropriate stepper waveforms at a fixed frequency of 200 steps/second. During the normal data handshake the stepper card sends analog and digital input values back to the LSI-11 where they are checked against values scanned by the LSI-11. This gives an important increase in system security by allowing early failure detection/correction rather than waiting until back-up auto is selected.

2. Back-up: If a backup programme exists then failure to communicate with the LSI-11 results in the fixed back-up control algorithm operating, using the two ADC's for measurements. This programme then has the responsibility for keeping the watchdog set. For safety, transfer back to LSI-11 control is not possible until standby manual has first been selected. In some cases two cards/actuators affect a common process variable. Intercard communications using the opto-isolated digital I/O tracts resolve this conflict.
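The normal-mode handshake rules (over-large requests ignored, any call resetting the watchdog) can be sketched as below; MAX_PULSES is an assumption of this sketch, and the rest of the card's behaviour is elided.

```python
class StepperCard:
    """Normal-mode handshake of the drive card: any call from the host
    resets the watchdog (hence even zero-pulse requests keep it set),
    while requests for too many pulses are ignored for security."""
    MAX_PULSES = 1000        # assumed security limit, not from the paper
    DROPOUT_SECONDS = 2.0    # typical watchdog dropout time

    def __init__(self):
        self.watchdog = self.DROPOUT_SECONDS
        self.pulses_queued = 0

    def handshake(self, pulses):
        self.watchdog = self.DROPOUT_SECONDS   # every call feeds watchdog
        if abs(pulses) > self.MAX_PULSES:
            return False                        # request totally ignored
        self.pulses_queued += pulses            # stepped at 200 steps/s
        return True
```

Refusing oversized requests rather than clamping them keeps a corrupted host message from commanding a large actuator movement, which matches the fail-safe philosophy stated earlier.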

Page 218: DISTRIBUTED SYSTEMS FOR NUCLEAR POWER PLANTS

- 200 -

A problem arose in creating a fail-safe drive for the shunt wound 240V dc speeder gear motor controlling governor valve lift. This could not be changed and so a special interface was developed. This uprates the normal stepper motor square wave drive and drives a 50Hz AC transformer. The resulting square wave is rectified and applied to the motor armature, giving a floating dc drive which disappears when the stepper card stops pulsing.

2.5 System Software

If engineers are to successfully apply computers to their many varied problems at a minimal cost it is essential that they should be provided with system software which enables them to write their own application software. After surveying the available software in the early 1970's it was apparent that to obtain suitable system software it would have to be produced in-house. This approach has proved very successful, with a large number of installations covering a range of real-time applications in power stations, transmission substations and the laboratory all using common (configurable) system software with application software written by engineers rather than programmers.

To achieve adequate performance and engineer acceptability the following design requirements were adopted and met:

1. Real-Time Multi-task Executive

The system had to incorporate a real-time executive to allow the overall work for each machine to be broken down into a number of discrete tasks of differing priority. The main function of the executive is to stimulate tasks when time or external events demand they should run, and to queue conflicting requests from several tasks for a single resource (e.g. the c.p.u. or a hardware device).

2. Simple High Level General Purpose Language

The system had to incorporate a high-level language to enable engineers to write programmes with minimal difficulty. High-level languages are far more acceptable to non-specialists and their great benefit is that they greatly reduce the number of coding errors and also constrain the damage which can be produced by such errors.

The language had to be simple for two reasons:

(a) to make it acceptable to people with no formal computer training.

(b) so that it could be implemented in small process control machines with only a limited programming resource.

A general purpose language was required as the range of work to be covered precluded the use of special languages with the resources available.

3. Time Efficient

Experience with manufacturers' languages (particularly BASIC) revealed that great care was required to get sufficient throughput. The language had to incorporate integer and logical variable types for efficient processing and real variables for applications demanding greater precision. The programme source is translated into a threaded Polish object code which is executed with reasonable efficiency yet does not demand a sophisticated compiler.

4. Space and Cost Efficient

The software was designed to run on the low-cost end of the PDP11 range, which meant it had to run in not more than 28K memory and could not be dependent on expensive backing store. This constraint is of course less important now than it was eight years ago when the work started.

5. Table-Driven Compiler

To allow language extensions to meet the demands of different applications and to give the necessary configurability for each application, a table-driven compiler with a free syntax was adopted.

6. Multiprocessor Support

The system had to support multiprocessor configurations as these were vital for reasons outlined elsewhere in this paper.

7. Line-by-line Translation

The language compilation is handled one line at a time as each line is entered, with only a small number of whole-programme checks at the end. This enables the machine to report syntax errors etc. back to the engineer as early as possible, which is highly desirable when inexperienced people are learning to programme the system.

8. Wide Range of Utilities

A wide range of overlayable utilities were provided which allow the user not only to perform essential functions such as programme compilation, saving and restoring programmes on backing store etc., but also give other useful facilities such as the ability to determine the status of running jobs and examine the contents of programme variables in running jobs. These facilities are heavily utilised during debugging and commissioning.


2.6 Communications

As described in Section 2.2 all data transfer between processors is achieved using serial links running at 4.8K baud. This baud rate is also selected for the link to the analogue scanner since in both cases the data transfer failure rate is much reduced (to negligible proportions). The actual amount of data transferred from point to point is not large, the following figures being typical:

. Control data 10 words every 3 seconds

. Primary display data 100 words every 2 seconds

. Fast display data 5 words every 0.5 seconds.

The use of integers, bit orientated status and alarm data, and selection of global data for display serve to minimise the serial link baud rate requirements.
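The bit-orientated packing mentioned above can be illustrated with a short sketch (Python is used here for clarity; the plant software was not written in Python). Sixteen status or alarm flags travel as a single 16-bit word on the serial link:

```python
# Sketch (not the CEGB implementation): packing 16 alarm/status flags
# into one 16-bit word, so a single serial-link transfer carries what
# would otherwise need 16 separate values.

def pack_flags(flags):
    """Pack up to 16 booleans into one 16-bit integer word."""
    word = 0
    for bit, flag in enumerate(flags):
        if flag:
            word |= 1 << bit
    return word & 0xFFFF

def unpack_flags(word, n=16):
    """Recover the individual boolean flags from a packed word."""
    return [bool(word & (1 << bit)) for bit in range(n)]

alarms = [False] * 16
alarms[0] = True   # e.g. a high-limit alarm (illustrative meaning)
alarms[5] = True   # e.g. loop tripped to computer manual
word = pack_flags(alarms)
assert unpack_flags(word)[:6] == [True, False, False, False, False, True]
```

The same idea underlies the interlock fields described later: bit manipulation keeps both memory usage and interprocessor data rates small without any compression machinery.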

Data transfer is controlled by application level software calling system software routines. The standard CEGB protocol GENNET was found to give unacceptable performance with even modest levels of electrical noise and a modified protocol was thus developed. A handshaking procedure is employed to ensure accurate data transfer with the transmitter making up to five attempts to transfer each byte before flagging a communications error. This is particularly important in the Pembroke configuration where the unit supervisor is extremely busy processing five asynchronous serial lines in addition to sophisticated control calculation and data display.

Each application job dedicated to transmitting/receiving data has additional handshaking/watchdog facilities inbuilt to ensure that data being displayed is not frozen due to loss of the data link. In addition, data for control purposes is further checked with amplitude and rate limits relevant to the particular application. High integrity is thus assured.
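The retry discipline described above (up to five attempts per byte before a communications error is flagged) can be sketched as follows. The modified CEGB protocol itself is not reproduced here; the `transmit` and `await_ack` callables are hypothetical stand-ins for the real line driver:

```python
# Illustrative sketch only - shows the retry discipline described in
# the text: up to five attempts per byte before a communications
# error is flagged to the application.

MAX_ATTEMPTS = 5

class CommsError(Exception):
    pass

def send_byte(byte, transmit, await_ack):
    """Send one byte, retrying on a failed handshake.

    `transmit` and `await_ack` are hypothetical callables standing in
    for the real line driver; await_ack() returns True on a good ack.
    """
    for attempt in range(1, MAX_ATTEMPTS + 1):
        transmit(byte)
        if await_ack():
            return attempt          # number of attempts used
    raise CommsError("no acknowledgement after %d attempts" % MAX_ATTEMPTS)
```

On a noisy line most bytes succeed within a retry or two; only a persistent failure raises the error that the watchdog and display-freeze protection then act upon.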

2.7 Applications Software

The applications software is written in a high level language by engineers and occupies some 8-12K of memory. The total computer function is subdivided into a number of small jobs each of which performs a functionally distinct task. Each job consists of a series of simple (crash proof) statements operating on local and global variables, the latter being heavily used for inter-job communications. Each job is run on a user-defined priority basis at precise clocked intervals or less precise elapsed intervals.

Typically each plant area computer contains 10-15 control loops implemented using some 25-30 separate jobs. Integer variables are employed in most cases since this gives maximum speed, minimum space utilisation and data transfer overheads, and allows bit as well as word manipulation. Particularly in the alarm and interlock field bit manipulation has proved highly space and time efficient and significantly reduces interprocessor data rates without resort to compression techniques. Real variables are employed in specialised applications such as square roots, Kalman filter covariance matrices etc.

Each engineer tailors his application software to the particular application needs. However, typically any complete system would contain the jobs illustrated in Table 1. Experience has shown that subdivision of jobs by function rather than 'loop' is more efficient, safer, easier to comprehend, commission and manage, and makes best use of the job priority structure.

Within the system there are naturally a number of cascaded loops and, at the higher levels of optimisation and unit supervisory control, closely coupled multivariable control algorithms. However, as a matter of policy each process variable or control input has a single loop identity and all loops conform to the same status format. This format consists of eight separate conditions:

CA - Computer auto: Plant area or unit supervisor computer evaluates the control action required. Loop set point derived either from another loop or by computer manual raise/lower commands from the operator.

CM - Computer manual, auto not available: same as CM but software interlocks prevent selection of CA.

TR - Tracking: Loop output is adjusted to track a variable's measured value when that variable is not under computer auto control and bumpless transfer is required.

SA - Standby auto: An individual actuator drive card regulates a single actuator using two plant measurements. Modest performance is provided on critical control loops when the plant area computer has failed or communications have been lost. Transfer back to normal computer manual/auto is achieved by prior selection of standby manual to ensure that the operator is aware of the status change.

DM - Desk manual: When a loop is in standby auto its set point can be adjusted by simultaneous operation of the normal desk 'loop' pushbutton and a raise/lower pushbutton. This deliberately contrasts with the normal sequential operation required under computer manual operation.

SM - Standby manual: Direct raise/lower drive of an actuator using the standby manual plaque.

SM - Standby manual with computer not available: Same as normal standby manual but software interlocks prevent selection of computer drive of the actuator.


2.8 Digital Control Techniques

A wide variety of digital control techniques are employed in the system. For fast sub loops digital controllers based on analogue 3 term controllers are typically employed. However, full use is made of the general purpose language to optimise each of these controllers for the particular application in question. For example, different kinds of rate and amplitude limits are employed and digital compensators (for time delay etc.) are also required in certain conditions. Specific techniques are also employed to cope with steady operation close to or at limits and initial transients as loops are brought into play.
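A digital 3 term controller of the kind described, with rate and amplitude limits applied to its output, might be sketched as below. This is a minimal incremental (velocity form) PID, not the station algorithm; the gains and limits shown are purely illustrative:

```python
# Minimal sketch (not the station code) of a digital 3-term (PID)
# controller in incremental form, with the rate and amplitude limits
# the text describes. All gain and limit values are illustrative.

class ThreeTermController:
    def __init__(self, kp, ki, kd, dt, out_min, out_max, rate_limit):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.out_min, self.out_max = out_min, out_max
        self.rate_limit = rate_limit        # max change per sample
        self.e1 = self.e2 = 0.0             # previous two errors
        self.u = 0.0                        # current output

    def update(self, setpoint, measurement):
        e = setpoint - measurement
        # Incremental PID: delta-u computed from the last three errors.
        du = (self.kp * (e - self.e1)
              + self.ki * self.dt * e
              + self.kd * (e - 2 * self.e1 + self.e2) / self.dt)
        # Rate limit the change, then amplitude limit the output.
        du = max(-self.rate_limit, min(self.rate_limit, du))
        self.u = max(self.out_min, min(self.out_max, self.u + du))
        self.e2, self.e1 = self.e1, e
        return self.u
```

The incremental form makes the rate limit natural to apply (it bounds delta-u directly) and simplifies bumpless transfer, which matters when loops move between the computer and standby modes listed in Section 2.7.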

Optimisation and supervisory loops employ techniques which are essentially digital in nature. Such loops aim to manipulate set points of sub loops with a view to minimising a quadratic performance index of the general form:

(a)  J_k = q_1 x_1^2(k)

(b)  J = SUM(k=1..N) [ q_1 x_1^2(k) + q_2 x_2^2(k) + q_3 x_3^2(k) + ... + q_n x_n^2(k) ]

where q_1 ... q_n are weighting factors and x_1 ... x_n are plant variables, set point errors, control inputs or rates of change of control inputs.

Type (a) performance index is employed where no mathematical model of the process to be controlled exists (e.g. combustion products). The performance index J_k is evaluated at each sampling interval and the control input updated in accordance with a simple search relationship.

Where a simple process model can be obtained (e.g. load/pressure/superheater temperature) the type (b) performance index is employed since the feedback control gains can be calculated independently from a matrix Riccati equation. Plant state estimates are obtained from a real time, parameter adaptive Kalman filter which also provides significant smoothing of noisy (pressure) measurements and a dynamic estimate of plant disturbances. Research work is continuing on the application of such techniques both at Pembroke and Didcot since initial plant trials have been encouraging.
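A type (a) optimisation step can be sketched as follows. The exact update relationship used on the plant is not reproduced above, so a simple direct-search rule is assumed here for illustration: keep stepping the control input in the same direction while J_k falls, and reverse direction when it rises:

```python
# Sketch of a type (a) optimisation step. The plant's actual update
# relationship is not reproduced in the text; a simple direct-search
# rule is ASSUMED here for illustration.

def perf_index(q1, x1):
    """Type (a) quadratic performance index: J_k = q1 * x1**2."""
    return q1 * x1 * x1

def update_input(u, step, j_now, j_prev):
    """Return the new control input and (possibly reversed) step."""
    if j_now > j_prev:      # last move made the index worse
        step = -step        # so reverse the search direction
    return u + step, step
```

Rules of this family need no process model at all, which is why the text reserves them for processes such as combustion products where no usable model exists.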

2.9 Plant Data Acquisition and Control Actuation

The acquisition of reliable plant data is crucially important to the performance and integrity of any computer based control system. Thus solid state (rather than dry reed relay) input selectors are utilised for all control computers. All control signals are supplied from new, dedicated transducers/current loop transmitters and each signal is scanned by only one computer. Signal sharing is accomplished by digital data transfer using the serial links in order to maximise system integrity. Since each stored signal is directly controlled in one plant area computer all other computers utilising the data for scheduling etc. are designed to fail safe in the event of (rare) data link failure. Degraded performance is accepted.

Validity checking of raw scanned data is regarded as vital. Experience has shown that real plant measurements are capable of the most obscure and unexpected faults. Thus as a minimum, rate and amplitude checks are applied. Experiments are currently being conducted using model prediction errors (Kalman filtering) with a view to further improving signal failure detection capability. The particular checks applied and action taken when data is detected as being invalid (bad) are application dependent. However, Table 3 gives an indication of the kind of checks currently involved.

Checking of control outputs to actuators is also employed (Table 3). For example, if an actuator drive card fails to respond correctly then the drive card status is indicated as failed and the loop tripped off computer. Actuator measured position is also compared with the computer's prediction based on accumulated pulses. If a discrepancy of more than 5-8% arises the loop is again tripped off automatically. Since the measured position almost invariably comes from a potentiometer, checking of response to individual pulse increments has proved fruitless.
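The amplitude, rate and actuator-position checks just described can be sketched as follows (illustrative only; the limits are example values, and the 6% tolerance is simply a point inside the 5-8% band quoted above):

```python
# Hedged sketch of the validity checks described in the text:
# amplitude and rate limits on scanned inputs, and an actuator
# position check against a prediction accumulated from drive pulses.
# All thresholds are example values, not plant settings.

def signal_valid(value, prev_value, lo, hi, max_rate, dt):
    """Amplitude and rate-of-change checks on a scanned signal."""
    if not (lo <= value <= hi):
        return False                       # amplitude check failed
    if abs(value - prev_value) / dt > max_rate:
        return False                       # rate check failed
    return True

def actuator_ok(measured_pct, predicted_pct, tolerance_pct=6.0):
    """Compare measured actuator position with the pulse-count
    prediction; a discrepancy beyond tolerance trips the loop.

    The text quotes a 5-8% discrepancy band; 6% is an example value.
    """
    return abs(measured_pct - predicted_pct) <= tolerance_pct
```

A signal failing either check is labelled 'bad', which feeds directly into the interlock policy of Section 2.10.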


2.10 Alarms and Interlocks

A wide variety of alarms and software interlocks operate within each computer. These are application specific but typically include:

. Calculation of both advisory and mandatory operating limits.

. Audible warning to operators of serious alarm conditions such as a loop trip to computer manual.

. Control of loop status in the face of logical constraints, such as bad input data, actuator failure or unavailability of other loops, actuators etc.

. Control of loop set points, limits and actuator positions under specific plant operating conditions such as trips and start ups.

As a matter of policy all loops are tripped to computer manual when the appropriate prime control variable measurement or actuator is labelled as having failed (bad). However, with auxiliary data such as that used for control loop parameter scheduling, the interlock system often sets a safe default value in place of the appropriate frozen measurement, thus avoiding a loop trip but incurring degraded control performance.

Also as a matter of policy, auto cannot be re-engaged until the 'bad' status of prime control measurements or actuators has been specifically reset by the operator from the desk. In some cases the measurement must lie within a specified range before the 'reset' instruction is accepted but in others this is neither appropriate nor advisable. For safety reasons no loop ever self selects auto, even when 'bad data' flags are cleared. Similarly, in some cases the selection of auto on the outer loop of a cascaded pair automatically selects auto on the inner loop whilst interlocks prevent this happening in other cases.

Great care is taken with the alarm and interlock system in order to maximise both system integrity and 'auto' loop status availability.
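The trip and default policy described above might be sketched as follows (an illustrative reading of the policy, not the plant code): bad prime data trips the loop to computer manual, bad auxiliary data substitutes a safe default at degraded performance, and a loop never self-selects auto:

```python
# Sketch of the trip/default policy of Section 2.10 (illustrative
# reading only, not the plant code). Bad prime data or actuator
# failure trips the loop to computer manual (CM); bad auxiliary data
# is replaced by a safe default and the loop continues degraded.
# Note the function never promotes a loop back to auto - per the
# stated policy, only the operator may do that after a 'bad' reset.

def evaluate_loop(status, prime_ok, act_ok, aux_value, aux_ok,
                  aux_default):
    """Return (new_status, aux_value_to_use) for one loop scan."""
    if not (prime_ok and act_ok):
        return "CM", aux_default           # trip to computer manual
    if not aux_ok:
        return status, aux_default         # degraded but still running
    return status, aux_value
```

Keeping the policy in one small, auditable function mirrors the paper's emphasis on interlock logic that maximises both integrity and 'auto' availability.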

2.11 Display and Command System

The unit operator is provided with a modular desk to enable him to closely monitor and supervise plant behaviour in a fashion not previously attainable. At Didcot the desk modifications were incorporated directly into the normal desk whilst at Pembroke a compact extension has initially been added. Fig.3b illustrates this desk extension (based on a 72mm DIN grid) which in a length of only 4 ft encompasses five major areas of control (combustion, temperature, feed, burners and unit supervision/load) plus display selection for both VDU's.

All operator communication with the computer is digital, each pushbutton asserting digital status inputs in the appropriate plant area computer whilst acceptance and status is indicated to the operator by digital reed relays lighting pushbutton lamps. The pushbuttons are laid out in plant areas radially from the central supervisory control and display facilities. The individual actuator standby manual plaques are arranged at the back since they are intended for use only in the event of computer failure.

All normal operations require a sequence of push buttons (typically two or three) to be pressed to select and implement a function. This multiplexing of buttons ensures a high standard of safety and yet makes for a very compact, easy to use system. For example, there is normally only one raise/lower plaque, each additional loop requiring the addition of only one extra pushbutton. Additional non-standard or unexpected functions can readily be incorporated at any stage of the development since the function is determined solely by applications software and the label attached to the pushbutton.

The central supervisory area provides additional operator facilities. For example, a combined raise/lower number pad is provided so that data can be entered numerically, or using inching or continuous raise/lower techniques. Important system parameters can be changed from the desk but only on insertion of a key into the desk mounted key lock. This key is held by the shift charge engineer rather than the unit operator. Coloured plaque backgrounds are used to demonstrate display selection, number pad and normal control functions. Mimic diagrams are also engraved on plaques within each control area, but these have been less successful than the plaque colouring.

The main display VDU is divided into three sections:

. Top section - Permanently selected alarm summary.

. Centre section - Primary data and alarm formats.

. Bottom section - High speed interactive display for individual loops.

The interactive area updates every 0.5 seconds and self updates whenever an operator selects a loop anywhere in the system. This provides fine raise/lower resolution and rapid interrogation capability independently of the main display format. Primary display formats update typically every 2.0 seconds and are selected by choosing sequentially plant area, type (data, alarm) and number. However, again for rapid interrogation during transients/incidents, key control formats are available by single pushbutton request. The alarm summary classifies all alarm conditions into one of four categories and displays these as in Table 1.

Extensive use is made of simple, clearly understood displays using double size characters, colour coding, reverse video and flashing effects. A common standard has been established and adopted for all display formats within the system. The main features of this are summarised in Table 2.

The second plant monitor VDU provides a comprehensive set of trends, histograms etc. Again, careful choice of colour and display pattern readily communicates information to operators without recourse to detailed format analysis. At Didcot, back up control displays are available on the second VDU using the primary display formats. It is planned that this feature will also be implemented at Pembroke.


3. IMPLEMENTATION EXPERIENCE

3.1 Project Strategy

Previous experience had shown that in dealing with complex problems of the kind presented by Pembroke and Didcot, CEGB engineers were best qualified to write the applications software. A previous scheme had therefore been attempted on this basis with the commercial manufacturer supplying fully tested hardware and system software. However, both proved to have technical problems and much manpower was wasted in persuading the manufacturer to correct faults. Severe problems were encountered with the commercial confidentiality aspects of the system software. With this background, South West Region therefore adopted the following project strategy:

1. Control system strategy design and development - CECB

2. System software development - CEGB

3. Application software - CEGB

4. Engineering details - CEGB

5. Hardware commissioning - CEGB

6. Hardware design and supply - CONTRACTOR

7. Demonstrations of hardware performance - CONTRACTOR

This has proved to be a successful method of implementing a major refurbishment requiring significant technical developments and innovation for its success. The interface with the contractor is well defined, the most difficult area being diagnosis of hardware/system software faults. There is scope for the contractor blaming hardware faults on the system software which is strictly not his responsibility. However, in practice, given ready access to the system software, the problems encountered appear to be fewer than those occurring under turnkey arrangements. Furthermore, this approach allows CEGB staff to accrue detailed knowledge of hardware characteristics that would otherwise be difficult to obtain. This is extremely useful in assessing equipment capability, integrity and longer term maintenance/training requirements.

The systems for Pembroke and Didcot were therefore ordered in the second half of 1977 on an equipment only basis. Separate contracts were awarded for the computer equipment and the unit desk modifications in order to make best use of specific contractors' expertise. Most of the hardware was still at the design/prototype stage and almost no relevant application software existed. However, in parallel with the main contracts, an experimental system employing prototype equipment was installed on one unit. The inherent flexibility of a computer based system running a high level language allowed rapid development and proving of applications software and provided useful site based experience with key pieces of commercial hardware.

3.2. System Testing and Installation

Before commissioning of the equipment commenced both computer hardware and system software were subjected to rigorous testing procedures. These consisted of:

Factory

Module development/testing using the contractor's own diagnostic software.

System integration testing using the contractor's own diagnostic software.

Module testing using CEGB system software and special purpose application software.

System integration testing using CEGB system software and special purpose application software.

Partial system testing using CEGB software under harsh environmental conditions.

Site

System integration testing using the contractor's own diagnostic software.

Partial system testing (of experimental equipment) using CEGB system software and real application software.

System integration testing using CEGB system software and special purpose application software.


The most important points to note are firstly that much time was saved by factory testing prior to shipment to site and secondly that complete hardware system testing with the final software system revealed a host of hardware flaws and module interactions previously undetected by the contractor's own diagnostics. Indeed it was some six months after the contractor first declared the system ready for shipment that it passed the system tests written by CEGB staff. The whole testing procedure was repeated at site where further problems were detected and corrected.

The actual installation of the computer and desk equipment was simplified by the use of flying leads throughout. Station staff were then able to install cabling, racks, marshalling cubicles, auxiliary power supplies, instrumentation and modified actuator heads whilst awaiting delivery of desk modules and computer equipment. Final installation was then merely a case of terminating flying leads in accordance with a predefined schedule. Some detailed problems arose but generally the approach was successful and can be recommended.

3.3 Commissioning Procedures

Commissioning of the new system represented a major task since:

1. It had to be carried out with the minimum interference to high merit flexible plant.

2. Transferring an actuator to computer control irrevocably put the existing controls out of service.

3. Testing had to cover both equipment and computer software.

Extensive documentation was produced to cover both individual hardware items and complete system functional checks. The latter covered all alarms and interlock conditions as well as modulating control performance. Acceptance of each individual test carried out was dependent not only on correct behaviour of the appropriate plant area computer but also the associated communications and display system software in the display computers.

In order to avoid prejudicing plant output or safety, loops were commissioned in a planned sequence with two or three week gaps allowed for evaluation purposes. Applications software was generally tested using plant simulation but critical loop software was also verified on the experimental system (which could be switched out of service if required) prior to final commissioning.

Development of the system continues after initial commissioning in order to incorporate operational experience and optimise performance. Where minor changes occur, these are directly incorporated and documented to the appropriate standard. However, where significant changes occur, the complete set of system functional checks is repeated. Only in this way can system integrity be maintained. The procedure is tedious but experience has shown it to be worthwhile.

3.4 System Maintenance and Management

The development of new software and control strategies is the responsibility of specialist regional staff. However, at all times, station staff are responsible for hardware maintenance and maintenance/management of commissioned software.

Hardware maintenance is covered by shift staff up to a specific point and thereafter it is classified as an area call for more skilled day staff. An interesting result of the comprehensive I/O data checking, alarms and interlocks incorporated into the application software is that the system provides a high degree of self diagnostic fault finding. Interrogation of the appropriate plant area alarm formats usually leads maintenance staff to the cause of the problem.

Cards within the plant area computer are tested using special purpose diagnostic software whilst the intelligent actuator drive cards are tested using a special test box. This allows rapid testing of every facility on the card and it is a concept that could usefully be extended. A selection of spare cards is maintained running in a hot spares rack. This is mobile and can be located next to a plant area computer to assist fault finding.

Software maintenance procedures are an important part of system maintenance. Maintenance is required to incorporate changes brought about by operational experience, the development of a new control strategy, requests for additional desk or display facilities or changes in system software. The software system is deliberately designed to allow such changes to be readily incorporated. However, unless the associated documentation is submitted to the Station staff responsible for software management/maintenance the changes are not formally recorded or accepted. Only by rigorous adherence to this rule can software integrity be maintained.

3.5 Training

Whilst Regional staff had accumulated several years' experience with small real time computer systems, the majority of Station staff involved in the Pembroke and Didcot schemes had no previous direct experience of this kind of work. In addition, unit operators were faced with a new unit desk, new layout and operational procedures. This posed a major challenge to those regional/national staff involved in the schemes' inception to demonstrate that non specialist staff could be successfully trained to operate and maintain the new systems.



The strategy adopted can be summarised as follows:

1. The installation of an experimental prototype system embracing all key features of the system. This familiarised maintenance and operations staff with the new concepts but allowed them to instantly re-select the original control system if problems arose.

2. The Station maintenance staff were involved in equipment testing at the contractor's factory and were totally responsible for testing and maintenance of equipment as it arrived on site. Naturally, expert guidance and advice was given thus providing 'on the job' training.

3. Key operations staff were involved in system development, testing and commissioning. They were then responsible for training specific shift operator foremen who in turn trained all other unit operators. Simulation techniques were employed to assist with the training which was almost exclusively carried out on the installed equipment.

The strategy has been successful and Station staff are now fully competent in the maintenance and operation of the computer based systems.

3.6 Financial

The cost of the schemes at Pembroke and Didcot can be assessed on the basis of expenditure on equipment and contractors' labour, together with CEGB manpower effort. The following is an approximate breakdown of the expenditure and manpower consumed.

Cost (%)

. Actuator modifications                        5.8
. New transmitters/instrumentation              6.6
. Installation                                 13.2
. Desk                                          9.1
. Computer equipment                           65.3
  Total                                       100.0

Manpower (%)

. Commissioning                                 8.0
. Installation                                 13.3
. System software (shared between projects)    10.6
. Control system development                   30.8
. Engineering and design                       37.3
  Total                                       100.0

It is often difficult to assign financial benefits to improved control/display systems. However indications are that costs will be recovered in three to five years due to:

. Reduced plant damage

. Improved efficiency

However, there are many other less tangible but important benefits accruing from the approach outlined in thispaper including:

. Improved environmental performance

. Improved plant flexibility

. Improved equipment reliability and easier maintenance.

. Adaptability to future unforeseen events

. Improved system integrity

. Increased operator awareness of plant conditions, and control system status/behaviour.

4. DISTRIBUTED COMPUTER CONTROL/DISPLAY SYSTEMS FOR NUCLEAR PLANT

4.1 Nuclear Plant Characteristics and Requirements

Many of the system requirements, design concepts and digital techniques described in this paper are relevant in the context of Nuclear Plant Control. However, there are substantial differences which must be considered in the application of distributed computer control to the latter. These include:

. Increased numbers of plant input signals.

. Increased numbers of important control loops/actuators.

. Increased requirement for high reliability systems with predictable, fail safe characteristics.


. Increased requirement for failure diagnostic capability.

. Increased requirement for plant condition monitoring, intelligent data analysis and effective communication with operations staff.

. Increased need for methods of system testing and evaluation.

. Increased need for digital control techniques capable of rapidly adapting to unexpected situations and optimising plant performance in response to particular sets of operational constraints.

Whilst the basic approach outlined in this paper is believed to be correct, the author believes that further developments are needed to cope with the additional requirements and constraints outlined above. Principally, these involve:

1. Higher levels of functional distribution

2. Development of distributed hardware structures

3. Consolidation of the integrated control, display and command system concepts.

4. Development of improved digital techniques for failure detection, control and system testing.

5. Software.

4.2 Future Developments

Based on the experience of the Pembroke and Didcot projects, consideration in South West Region is currently being given to the future application of distributed computing concepts to Nuclear plant. The following appear to be important areas of development:

Functional Distribution

In the recent past, the plant area computer concept was the only economic solution available. However, on nuclear plant, the loss of 10-20 reactor control loops due to computer failure is not likely to be acceptable. One can, of course, introduce redundancy but this inevitably introduces complexity. The alternative is to increase the level of functional distribution to the point where loss of one or more computers does not represent a safety hazard or an operational constraint. Despite its many specific limitations, the intelligent actuator drive card developed for Pembroke and Didcot offers simplicity, minimal loop interaction under failure conditions and easy maintenance/test procedures. It is thus felt that for nuclear plant the present plant area computer with its multiplexed scanner and actuator drive system should be replaced by a series of smaller systems containing simplified non-multiplexed I/O. Thus a single system containing perhaps 30 cards would be replaced by ten systems of 3 cards or even 30 single cards. Each of these small systems would still be capable of running a high level language such as the CEGB standard CUTLASS in order to achieve the high performance standards required. With steadily reducing semi-conductor memory costs such a high level of distribution is likely to be economic, particularly if the distributed hardware structure described below is also adopted.

Distributed Hardware Structure

In implementing a highly distributed computer system two critical hardware areas are plant input/output interfacing and inter-processor communications. The author believes that multiplexed analogue scanners for plant data input could be replaced by digital instrumentation and digital transmitters. These could be interfaced to the computer via standard opto isolated digital input/output ports which are readily available on almost every manufacturer's system. Figure 4(a) illustrates schematically such a digital transmitter suitable for direct connection to thermocouple signals. The data is supplied serially in response to computer requests thus simplifying hardware design and making maximum use of the computer. Since many such modules can be driven in parallel the baud rate need be very low (e.g. 300 baud) thus eliminating the need for complicated line driving. At such low data rates the c.p.u. overhead in the computer is not likely to be a problem. High common mode and noise rejection capability is inherent in the design. Since there are many thermocouple signals on a nuclear station the transmitter module could be produced in volume to a defined specification. In such a case, cost per channel is believed to be comparable with multiplexed scanners but with the significant advantage that there is minimal interaction between one channel and another.
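The request/response polling of such a digital transmitter might look like the sketch below. The one-byte address and two-byte reply frame are invented for illustration, and `line` stands in for whatever serial port object the host computer provides:

```python
# Sketch of the polled digital-transmitter idea: the computer requests
# each reading in turn over a slow (e.g. 300 baud) serial line, so no
# multiplexed analogue scanner is needed. The frame format (one-byte
# address request, two-byte big-endian reply) is invented here.

def poll_transmitter(address, line):
    """Request one reading from the transmitter at `address`.

    `line` is a hypothetical serial-port object with write(bytes)
    and read(n) methods standing in for the real hardware driver.
    """
    line.write(bytes([address]))           # computer-initiated request
    hi, lo = line.read(2)                  # transmitter replies serially
    return (hi << 8) | lo                  # e.g. thermocouple counts
```

Because each transmitter answers only when addressed, many modules can share simple opto isolated ports, and a fault in one channel cannot disturb the others.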

The use of low speed serial digital transmission using opto isolators could be extended to intelligent actuator drive systems. Such devices are under development elsewhere in the CEGB with the result that a distributed hardware structure with an all digital computer/plant interface could thus be proposed. The system would thus take the general form suggested in Fig.4b. Plant interfacing (to a defined standard) is almost exclusively achieved through standard isolated digital ports, the only exceptions being the serial ports for hosting and interprocessor communications.

For interprocessor communications, it is suggested that a high speed (e.g. 1.25M baud) serial bus based on fibre optic technology could provide a highly reliable, secure solution. Cheap LED based transmitter/receiver pairs are now available as are multiple port star couplers. Thus serial bus configurations based on both star and ring structures are now possible. Which is most appropriate is a matter for development.

Besides offering technical advantages, it is believed that such a standard interface could be helpful in construction ... specified at the usual early stage but ... the non-trivial problem of equipment ...

Display and Command Systems

The provision of radically improved display and command facilities on the Pembroke and Didcot unit desks has been beneficial. The unit operators have a much improved appreciation of plant conditions and the consequences of their actions. Such factors are important in the context of nuclear plant operation and it is expected that further development work will continue. New technologies such as touch sensitive VDU screens will undoubtedly suggest alternative strategies for unit desk layout and interaction with the computer system. Experimental work both on simulated and real power plant will probably be necessary to evaluate the various ergonomic and safety factors involved.


Digital Techniques

There is considerable potential for exploiting the powerful facilities now offered by low cost computers and high level, real time software systems. Evidence to date indicates that the application of modern estimation and control theory using small scale plant models can make a significant contribution to:

. Methods of measurement iind actuator failure detection.

. Oato .smoothing and nnalysis.

. Reduction in actuator wear and maintenance requirements.

. Robust, adaptive control algorithms requiring little or no specification or tuning.

. Supervisory techniques for real time optimisation of total plant performance.

The ubove, in conjunction with the capability to rapidly devnlop nppJicntion r.oftv.nre tnUort-1 to IXJJVL- •:?••<:±fie

problems,should Dignificantly improve control system integrity ond parfornnnco on nude or pJont.
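As a rough illustration of the kind of low-cost computation involved, the sketch below applies exponential smoothing to a noisy measurement and flags samples whose rate of change exceeds a limit, one of the simplest forms of the measurement validation listed above. Python is used purely for illustration (the plant computers of the period used other languages), and the filter constant and rate limit are illustrative assumptions, not plant settings.

```python
def smooth_and_check(samples, alpha=0.2, rate_limit=5.0):
    """Exponentially smooth a measurement and flag rate-of-change violations.

    alpha and rate_limit are illustrative values, not plant settings.
    Returns (smoothed_values, suspect_flags).
    """
    smoothed, suspect = [], []
    prev_raw = prev_smooth = samples[0]
    for x in samples:
        # Flag the sample if it moved faster than the allowed rate per scan.
        suspect.append(abs(x - prev_raw) > rate_limit)
        # First-order (exponential) smoothing filter.
        prev_smooth = prev_smooth + alpha * (x - prev_smooth)
        smoothed.append(prev_smooth)
        prev_raw = x
    return smoothed, suspect

values, flags = smooth_and_check([10.0, 10.4, 10.2, 22.0, 10.3])
# The 10.2 -> 22.0 jump exceeds the 5.0/scan limit, so flags[3] is True.
```

A flagged sample would typically be ignored in favour of a model estimate, in the spirit of the failure criteria tabulated later in the paper.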

Software

Software performance and reliability will always be a limiting factor on any computer based control or display system. In a nuclear environment, demonstration and testing of software reliability remains an important and active area of research. Thus procedures for software modification need careful attention. Experience has shown that interactive, high level software provides the engineer with a rapid method of modifying application software. Indeed, minor changes can often beneficially be made on-line when operating on conventional plant.

In a nuclear environment it is difficult to see how such a method of working could be acceptable. There will thus be a need for methods of approved automatic verification of software changes. Some safety features can, of course, be built into the system software, but many checks are highly application specific. The verification procedures will need to be reasonably time efficient if some of the benefits of engineers using high level software systems are not to be lost.

CONCLUSIONS

This paper has described the design concepts behind the Pembroke and Didcot distributed computer control and display systems. Implementation experience has also been reported, as have some proposals on further developments for nuclear plant application. Judged by the initial requirements described in Section 2.1, the projects are a success. High standards of integrity and performance have been achieved on conventional plant. It is therefore expected that the distributed computer approach will be increasingly applied to nuclear plant.

The design and implementation of the systems described in this paper has been the result of joint station, regional and national staff co-operation, in which the author has been privileged to take part.

This paper has been produced by permission of the Director General of the South West Region of the Central Electricity Generating Board.



TABLE 1  TYPICAL CONTROL JOB STRUCTURE

Analogue scan/validity check (one per scan rate)
Digital scan
Analogue data scaling
Analogue data smoothing
Control action calculation (1 per loop typically)
Control loop parameter adaption and scheduling
Actuator drive
Desk command handling and lamp driving
Interprocessor communications for control
Interprocessor communications for general display
Interprocessor communications for high speed display
Software watchdog
Hardware watchdog
Alarm and limit calculations
Software interlocks
System error logging/printing
General interactive programmes for debugging

TABLE 2  VDU COLOUR CODING

[Table columns: TEXT / FOREGROUND COLOUR / BACKGROUND COLOUR / BLINK / MEANING; the individual colour assignments are not fully recoverable from this copy. The recoverable entries are:]

Alarm summary items: plant limit, measurement, actuator, computer. A new alarm whose format has not been selected is shown blinking in red.
Portrait background and foreground data; other colours are used for mimics.
Foreground data states: normal valid data; plant limit alarm; urgent plant limit alarm (blinking); 'bad' data; data not being updated.
Loop status codes (CA, CM, CM, TR, SA, DM, SM, SM): computer auto; computer manual; computer manual, auto unavailable; tracking; standby auto; desk manual; standby manual; standby manual, computer not available.

TABLE 3 - MEASUREMENT AND ACTUATOR FAILURE CRITERIA

Measurement test results and actions:

Rate of change exceeded → 'Bad' measurement
Rate of change exceeded → Ignore - use model estimate
Rate of change exceeded 'N' times in 'J' samples → 'Bad' measurement
Above max. operating limit → 'Bad' measurement
Below min. operating limit → 'Bad' measurement
Model prediction error variance above max. limit → Alarm condition
Model prediction error variance below min. limit → Alarm condition (signal frozen?)
Single scanner error → Ignore - use model estimate or last scan
Consequent scanner errors → 'Bad' measurement
Scanner input overload → Freeze at last valid measurement or signal

Actuator test results and actions:

Actuator drive card responds incorrectly → 'Bad' actuator
Actuator drive card responds incorrectly → Ignore
Actuator drive card responds incorrectly 'N' times in 'J' calls → 'Bad' actuator
Actuator 'ADC' measurements discrepancy → Alarm condition
Measured and computed actuator position discrepancy → 'Bad' actuator
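The "'N' times in 'J' samples" and "'N' times in 'J' calls" criteria in Table 3 amount to a sliding-window vote over recent test results. A minimal sketch of that logic follows; the class name and thresholds are illustrative assumptions, not taken from the paper.

```python
from collections import deque

class WindowedFailureCheck:
    """Declare 'bad' when a test has failed at least n times in the last j samples."""
    def __init__(self, n, j):
        self.n = n
        self.window = deque(maxlen=j)   # holds the most recent j pass/fail results

    def update(self, failed):
        self.window.append(bool(failed))
        # True means the measurement (or actuator) should be treated as 'bad'.
        return sum(self.window) >= self.n

check = WindowedFailureCheck(n=3, j=5)
results = [check.update(f) for f in [True, False, True, True, False]]
# The third failure arrives within the 5-sample window, so results[3] is True.
```

A single transient exceedance is thus ignored (or replaced by a model estimate, per Table 3), while persistent exceedance marks the signal 'bad'.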

[Figure 1a: Pembroke Distributed System. A plant monitor & system host computer (with floppy disc and backing store) is linked by communication links to the unit supervisor & load control computer, temperature control computer, combustion control and optimisation computer, burner management computer and feed control computer; displays are provided on the unit desk, and a back-up communication link is included.]

[Figure 1b: Didcot Distributed System. A plant monitor & system host computer and a unit supervisor & load control computer (each with backing store) are linked with the feed control, combustion control and steam temperature control computers over communication links; a terminal, a VDU monitor and displays on the unit desk are attached.]

[Figure 2b: Typical Plant Area. Auxiliary data, supervisory control data and primary control data pass from the plant monitor and unit supervisor to a plant area computer (plant condition monitoring and data analysis/records), with supervisory auto/manual and computer auto/manual selection, and single loop plant data to an intelligent actuator drive card with standby auto driving a stepper motor. The plant area computer comprises an LSI-11 microprocessor, 28k x 16 bit memory with battery back-up, 32k x 16 bit backing memory, a watchdog to detect computer failure, a solid state colour display interface to the colour monitor on the desk, data interchange modules (DIMs) including a 6-channel analogue DIM for plant signals, an opto-isolated status input card (32 bits) for 50V DC status signals from the desk, a relay isolated status output module (32 bits) driving 50V DC desk lamps, a serial port, and a parallel data line to the stepper motor control output interface.]

[Figure 3b: Pembroke Integrated Control/Display. Internals of a data interchange module: multiple DC/DC convertor fed from a 30V DC battery, card address decode on the DIM data bus, opto isolated I/O, floating ADCs reading actuator position and process variable, an opto-isolated stepper drive to the actuator (50V DC battery), opto isolated 8 bit DIGIN and DIGOUT ports, a changeover relay to the standby desk plaque, and inter-card communications.]

[Figure 4b: Highly Distributed System Modules. A remote digital transmitter (buffer amplifier, 12 bit ADC, shift register and logic, with DC/DC supplies) sends analogue plant data over a digital current loop to standard opto-isolated digital input ports. Standard opto-isolated digital output ports, opto DIGIN/DIGOUT pairs for desk push-buttons and LED lamps, and an all digital interface for alarms and interlocks surround a microprocessor (CPU/RAM/ROM/watchdog) with a fibre optic serial host link; high speed serial buses carry control data, display data and plant status information to actuators with digital interfaces.]

Leak Detection System with Distributed

Microprocessor in the Primary Containment Vessel

By K. Inohara, K. Yoshioka, T. Tomizawa

TOSHIBA CORPORATION, TOKYO, JAPAN

ABSTRACT

Responding to the demand for improved safety monitoring, lower public radiation exposure and increased plant availability, measuring and control systems in nuclear power plants have undergone many improvements. Leak detection systems are likewise required to give earlier warning, additional accuracy, and a continuous monitoring function.

This paper describes a drywell sump leakage detection system utilizing a distributed microprocessor, a successful application owing to its versatile functions and ease of installation. The microprocessor performs functions such as rate of level change computation, conversion to leakage flow rate, initiation of alarms, and sump pump control.

This system has already been applied to three operating BWR plants, which demonstrate its effectiveness.


1. INTRODUCTION

In response to public interest in safety and increasing plant availability in nuclear power stations, leak detection systems are required to have a more rapid monitoring function plus additional sensitivity and accuracy. These requirements have hastened the improvements described here.

The drywell floor drain sump, which collects unidentified leakage flow from the reactor coolant pressure boundary, is one of the important leak detection points. In the conventional detection method, two mechanical level switches were used, set at the Hi and Lo limit points respectively. When a primary coolant leakage occurs, the floor drain fills. The rising level of coolant in the sump operates the low level switch, which starts a preset timer. If leakage into the sump increases by more than one gpm, the timer will fail to run out before the coolant level reaches the high level switch, and an excessive leakage alarm is given.

The disadvantages of this method are the detection time delay due to the switching interval and the poor reliability of mechanically operated switches. When reviewing these problems, particularly at the operating plants, the need for extra capabilities is apparent with regard to the leakage monitoring function and reliability. Furthermore, an important factor which applies to the operating plants is the need to select easy-to-replace hardware.


In order to meet the above-mentioned requirements, the course taken has been to combine a continuous, non-contacting level detector with a compact data processor which offers flexibility and high accuracy, and is similar to standard analog instruments in both appearance and handling.

At first, replacement of the old design began by using a part of a plant process computer, but this was not necessarily a good approach for the operating plants, in view of the need for spare memory space and stand-alone characteristics.

Consequently, in the new system, an ultrasonic level meter and a distributed microprocessor system have been used. The ultrasonic level monitor detects the sump level without contact and transmits a continuous signal to the microprocessor system, where the leakage flow calculation from level change is performed.

Features of the new system are as follows.

• EXPANDED DETECTION FUNCTION — The rate of sump level change may be monitored corresponding to the amount of leakage.

• IMPROVED PERFORMANCE — The distributed microprocessor system "TOSDIC" (Toshiba Digital Instrument & Control Systems) has the advantages of both man-machine communication interfaces similar to an analog controller, and enhanced accuracy as a digital machine.


• SIMPLIFIED MAINTENANCE WORK — The ultrasonic level monitor contributes to a reduction of maintenance man-power in the drywell area due to its self-calibration facility. With on-line self-diagnosis by TOSDIC, safety monitoring functions have been improved considerably.

2. DETECTION METHOD

The purpose of this system is to monitor unidentified reactor coolant leakage by detecting the level change of the floor drain sump in which liquid collects. The leak sources responsible for increasing the sump level are considered to be composed of the following three factors (see Fig. 1):

1) Liquid phase coolant leakage (= x)

2) Vapor phase coolant leakage (= y), which condenses into water in the air coolers.

3) Air cooler background condensate under normal conditions (= z)

[Fig. 1: Schematic Diagram of Detection Method. Unidentified reactor coolant leakage sources (liquid phase x, vapor phase y) and normal atmospheric moisture pass via the drywell air coolers; the condensate and liquid leakage collect in the floor drain sump, whose rate of level change is computed and the background condensate subtracted to give the leakage output.]

Although the background contributions vary with time, it is necessary to separate the basic leakage from the air coolers from unidentified leakages, in order to provide more prompt and quantitative information to the operators. To this end, the microprocessor is designed to subtract the air cooler condensate flow from the total leakage flow.
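In outline, the computation described above reduces to converting the sump level rise rate into a volumetric flow and subtracting the measured air cooler condensate flow. The sketch below illustrates this; the sump cross-section area, units and all numerical values are illustrative assumptions, not plant data.

```python
def unidentified_leakage(level_start_mm, level_end_mm, minutes,
                         sump_area_m2, cooler_condensate_gpm):
    """Estimate unidentified leakage (x + y) by subtracting the background z.

    Converts a sump level rise over 'minutes' into gallons per minute and
    removes the air cooler condensate contribution. Figures are illustrative.
    """
    LITRES_PER_GALLON = 3.785
    rise_m = (level_end_mm - level_start_mm) / 1000.0
    litres = rise_m * sump_area_m2 * 1000.0      # 1 m^3 = 1000 litres
    gpm_total = litres / LITRES_PER_GALLON / minutes
    return gpm_total - cooler_condensate_gpm

# A 30 mm rise over 30 min in a 2 m^2 sump, with 0.2 gpm background condensate:
leak = unidentified_leakage(100.0, 130.0, 30.0, 2.0, 0.2)
```

The result is the quantity compared against the unidentified-leakage alarm limit discussed in Section 5.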

3. SYSTEM CONFIGURATION

This system is composed of an ultrasonic level monitor and a distributed microprocessor system (see Fig. 2).

[Fig. 2: Schematic Diagram of the System. The ultrasonic level detector above the floor drain sump and the air cooler condensate flowmeter feed the DDLS; the DDLS and DDCS exchange data over the TOSDIC buses; outputs drive a level recorder, the annunciator (level Hi-Hi, large leakage, abnormalities), and the sump pump discharging to the radwaste system; a battery unit, a signal register unit, a buzzer unit and a reset are provided.]

The ultrasonic level sensor is installed above the floor drain sump, not in contact with the drain water. Its output signal is converted to a 4-20 mA DC current and transmitted to the analog input of the DDLS (Direct Digital Loop Station). The DDLS serves as the interface of the microprocessor system "TOSDIC". The output of the air cooler condensate water flowmeter and contacts indicating the sump pump operating state are also connected to the DDLS. These process data are passed from the DDLS through the analog bus to the DDCS (Direct Digital Control Station).

The A/D converted analog data are processed to calculate the leakage flow. The results are returned to the DDLS through the digital bus. The digital outputs of the DDLS initiate the abnormal leakage alarm to the annunciator and control the sump pump action. The following two units are auxiliary to the microprocessor system: one is the power failure protection unit for the DDCS's battery back-up, and the other is the buzzer unit for warning of abnormalities of the DDLS, DDCS and sensors.

4. HARDWARE COMPONENTS

1) Ultrasonic level monitor

The Toshiba ultrasonic level meter has been specially developed to meet the requirements of nuclear power stations. This sensor provides the following benefits, as illustrated in Fig. 3:

(1) Easy-to-remove sensor. The ultrasonic transducer may easily be removed from the sensor assembly with a one-handed operation. This feature contributes to reduced radiation exposure during maintenance work in the drywell.

[Fig. 3: View of Ultrasonic Level Sensor]

(2) Radiation and corrosion resistant sensor.

(3) Dry calibration facility with test pulse. By means of this option, the operator is able to check the instrument on-line, and calibrate it, remote from the drywell.

2) The distributed microprocessor system "TOSDIC"

This system consists of a loop station (DDLS) and a control station (DDCS) as its basic configuration. The DDLS is designed to operate independently for an individual loop, with man-machine functions identical to those of conventional analog controllers, as shown in Fig. 4.


[Fig. 4: View of DDLS. Fig. 5: View of DDCS. Fig. 6: Block Diagram of DDLS and DDCS.]

The DDCS is designed to receive analog signals transmitted from the DDLS and to perform A/D conversion. The converted signals are processed in accordance with a pre-programmed control algorithm and the results are returned to the DDLS. Since the signal is used for control, a send-back collation method (in addition to a parity check for each transmission) is employed to ensure highly reliable data transmission. The front panel layout of the DDCS is shown in Fig. 5 and its block diagram in Fig. 6; the latter consists of a microprocessor, the TLCS-12A, combined with individual elements on a bus structure.


5. FUNCTION

Fig. 7 shows a function block diagram of the system, which provides two calculation procedures for detection, corresponding to the amount of leakage flow.

[Fig. 7: Function Diagram of the Microprocessor System. DDLS inputs (floor drain sump level PV, cooler condensate water flow AUX-1, sump pump state DI-1) pass through sensor checks and smoothing in the DDCS; the rate of level change is calculated every one minute and every several tens of minutes, with cooler drain water compensation and conversion to flow rate; the outputs drive the leak alarm, the Hi/Lo alarm and the sump pump ON/OFF control (DO-0, DO-1, DO-2, MV), with digital alarm set points; a signal also goes to the recorder.]

The floor drain sump level signal is scanned at a sampling interval of a few hundred milliseconds and checked to reject abnormal signals. Next, the rate of level change is checked every minute in order to detect a sudden coolant leakage. Under steady conditions, the level change is calculated over an interval of a few tens of minutes for the detection of slow leakages. The timing chart is shown in Fig. 8.

[Fig. 8: Timing Chart of Calculation. A 300 ms sampling counter drives a one-minute counter (one-minute calculation period, reset when normal) and a 30-60 minute counter (30-60 minute calculation period, renewed every 2 minutes); the leakage flow output is held until the next calculation, and an alarm output is initiated if abnormal.]
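The two calculation periods of Fig. 8 can be sketched as two counters driven from the same sampling tick: a fast one-minute check for sudden leaks and a slow 30-60 minute calculation for slow leaks. The following is a schematic of the scheduling only; the tick values are illustrative, not the actual firmware.

```python
def schedule_checks(total_ticks, ticks_per_min=200, slow_period_min=30):
    """Return which calculation fires on each sampling tick.

    A 300 ms sample clock gives about 200 ticks per minute; every minute the
    fast rate-of-change check runs, and every slow_period_min minutes the
    slow (steady-state) leak calculation runs. Figures are illustrative.
    """
    events = []
    for tick in range(1, total_ticks + 1):
        fast = (tick % ticks_per_min == 0)
        slow = (tick % (ticks_per_min * slow_period_min) == 0)
        if fast or slow:
            # The slow calculation supersedes the fast check on its minute.
            events.append((tick, 'slow' if slow else 'fast'))
    return events

events = schedule_checks(total_ticks=200 * 60)   # one hour of 300 ms ticks
fast_count = sum(1 for _, kind in events if kind == 'fast')
slow_count = sum(1 for _, kind in events if kind == 'slow')
```

Over one simulated hour this yields 58 fast checks and 2 slow calculations, matching the minute-by-minute and 30-minute periods of the timing chart.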

The result of the calculation is converted into a leakage flow rate and, after subtracting the cooler condensate, this flow rate is compared with the alarm set value for unidentified leakage flow (a one gallon per minute leakage increase within one hour, as recommended by N.R.C. Regulatory Guide 1.45). When the leakage exceeds the alarm limit, an annunciator is energised through the digital output of the DDLS.

The relationship between the computed leakage output and the sump level is schematically illustrated in Fig. 9. A further function provides two-position ON/OFF control of the sump pump and Hi-Lo limit alarm outputs.


[Fig. 9: Schematic of Recording Chart. (A) Change of sump level, checked every minute for abrupt level change, with pump-down spans between the Hi and Lo limits. (B) Rate of level change (leakage flow), calculated every two minutes, with the alarm set point and the background leakage (air cooler condensate water) indicated.]

6. CONCLUSION

By use of the distributed microprocessor system, leak detection at the drywell floor drain sump has been greatly improved, providing the following advantages.

Earlier warning: The detection response time is shorter than with conventional units.

Improved monitoring functions: The continuous leakage monitoring function and the two-way detection method, corresponding to the amount of leakage, improve the monitoring function.

Ease of maintenance: Man-power savings in maintenance work and reductions in radiation exposure have been achieved through the calibration-free sensor and the self-diagnosis function of the microprocessor.

Suitable for existing installations: The system is so compactly designed that it is more appropriate for operating plants than a large scale process computer.

This system has been successfully employed at the Tepco Fukushima nuclear power plants for about two years. Fig. 10 shows a view of the microprocessor-aided leak detection system in operation.

[Fig. 10: View of DDLS and DDCS in operation]

[Reference]

K. Inohara et al., "Improvement of leak detection system in a drywell area with microprocessor", B50, 1979 Fall Meeting of the Atomic Energy Society of Japan.


DISTRIBUTED TERMINAL SUPPORT IN A DATA ACQUISITION SYSTEM
FOR NUCLEAR RESEARCH REACTORS

by

R.R. Shah, A.C. Capel and C.F. Pensom

ABSTRACT

A comprehensive and flexible terminal support facility is being designed to provide the necessary interactive man-machine interface for REDNET, a distributed data acquisition system for nuclear research reactors. Host processors and a large number of terminals are linked via three physically independent but interconnected terminal support subsystems, which use in-house developed equipment based on cable TV technology. The CCITT X.25 protocol is supported, and virtual circuits are used for communications between terminals and software functions in host processors. This paper presents the requirements and conceptual design of the major terminal support components.


DISTRIBUTED TERMINAL SUPPORT IN A DATA ACQUISITION SYSTEM
FOR NUCLEAR RESEARCH REACTORS

R.R. Shah, A.C. Capel and C.F. Pensom

1. INTRODUCTION

The REDNET distributed multi-processor system is being developed at Chalk River Nuclear Laboratories (CRNL) to replace obsolescent data acquisition equipment in use at the NRX and NRU research reactors [1]. The REDNET system incorporates advanced system concepts to provide enhanced capabilities for operators, experimenters and system personnel. These new technologies are being investigated with a view to their potential application in the CANDU power reactor program.

A system requirement is the availability of effective interactive man-machine interfaces to provide facilities for:

- direct interaction with experimenters,
- continuous monitoring of experiments,
- on-line data manipulations,
- examination of historical data,
- program development and debugging, and
- jargon-free communication.

Terminals are distributed around the site at locations best suited to serve the user community. Moreover, changes are expected in the number of terminals and their locations to reflect the evolving experimental and operational needs, and these changes are to be accomplished with a minimum disturbance to the system operation.

2. TERMINAL SUPPORT DESIGN CRITERIA

The REDNET user community consists of two groups: system personnel, and experimenters and operators.

System personnel, who are expected to be fully conversant with the overall system operation, are to access each REDNET function at a terminal by specifying the name of the desired function, without having to identify the host processor. Since the function may be available in a number of REDNET host processors, the terminal support facility is to allocate the host that is currently ready to provide the function.

A system user should also be able to access a function in a specific host processor, and to establish a communications link with another terminal by giving the remote terminal's identity.


Experimenters and operators, on the other hand, need access to only a limited number of well-defined system functions, and they must be able to call up these functions with simple procedures based on menu techniques and function keys.

Furthermore, a user in this group should, with a simple action, be able to force any terminal to revert from its current state to a standard "home" state, where the terminal displays the basic user's menu and awaits requests via the terminal's function keys.

In addition to the user considerations, the terminal support facility should also permit software functions in host processors to establish communications links and carry out data exchange with remote terminals and functions.

3. TERMINAL SUPPORT IMPLEMENTATION

Host processors and terminals will be linked together via three physically independent but logically interconnected Terminal Support Subsystems to provide separable communications [2], as shown in Figure 1.

[FIGURE 1: REDNET Terminal Support Facility. The Global Subsystem links the CDC 6600/CYBER 173 computer complex in the Computer Center, the REDNET system management cluster processor in Building 200, and other buildings; Inter-Channel Communicators couple it to the NRX Subsystem, which serves the REDNET real-time cluster processors and the remote process I/O stations NRXA and NRXB in the NRX building, and to the NRU Subsystem in the NRU building (NRU phase still to be defined).]


In the current phase of the REDNET project, only the NRX and Global Subsystems are being implemented. The NRU Subsystem will be installed later.

The NRX Subsystem links together two MODCOMP IV/35 host processors and about ten terminal devices, and is located entirely within the NRX research reactor building.

The Global Subsystem extends around the site to link together host processors (including the CDC computer complex) and terminal devices in other buildings. The two subsystems operate independently, but terminals and processors on one are able to communicate with those on the other via the Inter-Channel Communicator.

The key feature of the REDNET terminal support facility is the use of virtual circuits for communications (see Figure 2). Once a virtual circuit has been established between, for example, a terminal and a software function, data exchange can be carried out between the two as if there were a physical link between them.

[FIGURE 2: Communications in REDNET Terminal Support Facility. Virtual circuits through the Terminal Support Subsystems connect terminals to software functions in the REDNET host processors, via each host's Inter-Function Communicator and Menu Support Package.]


The monitoring and control of both the virtual circuits and the data flow in them are provided by the Terminal Support Subsystems, Inter-Function Communicators and Menu Support Packages.

4. TERMINAL SUPPORT SUBSYSTEMS

Each Terminal Support Subsystem (TSS) provides dynamically assignable interconnections for all communications between terminals and host processor functions in the REDNET system.

4.1 Physical Description

Each subsystem uses a CATV-type coaxial cable to distribute data, with terminals and host processors accessing the subsystem facilities via modems. Cables may be extended when necessary to cover other areas, and modems may be connected or disconnected at any time, even during system operation.

The computer modems use the CCITT X.25 protocol [3] for communications with the REDNET host processors, while each terminal modem supports a local protocol matched to the attached terminal type.

Each local protocol is translated into an internal TSS protocol used for transactions between modems. This internal protocol supports virtual circuits dynamically established between terminals and/or X.25 logical channels.

Modems have access to a common RF channel which consists of a transmit carrier frequency in the 5 to 115 MHz band operated as an "up-link", and a receive carrier frequency in the 160 to 260 MHz band operated as a "down-link". The translation from "up-link" to "down-link" carrier frequency is carried out by a remodulator at one end of the cable run.

The modem RF components support a 50 kBaud data rate with a bandwidth (including guard bands) of 200 kHz. Based on previous experience with similar RF components [4], error rates over the cable of less than one in 10^ bits are expected.

To avoid common-mode failures, dual redundant remodulators, line amplifiers and other active components are used.

The communications channel is shared by the modems using a CRNL-developed time division multiplex method. Each modem has access to a guaranteed minimum data transport capacity. A higher capacity is also available to the modem on a "burst" basis, and is shared by all modems on a subsystem.

For modems that have much higher data transport requirements (e.g. modems for processors and line printers), a second RF channel is available. Instantaneous access to this channel is not guaranteed, but the channel utilization is more efficient.

In addition to the modems, each subsystem includes the following hardware:

i. a Diagnostic/Test Monitor for performance monitoring and fault diagnosis,

ii. a pair of Arbiters for maintaining a dual redundant functional-to-physical address list necessary for the TSS addressing feature, arbitrating the "burst" requests and performing some subsystem housekeeping functions, and

iii. an Inter-Channel Communicator for controlling the coupling of virtual circuits between subsystems.

4.2 Operating Features

4.2.1 Virtual Circuits

All transactions in the subsystems take place over virtual circuits (VCs) (see Figure 2). Each end of a VC is "attached" to a terminal or an X.25 logical channel. A VC is dynamically established from a terminal by entering a connect command:

#CN name, data

where # is a prefix to indicate to the terminal modem that the following text string is to be interpreted as a command;

CN is the connect command mnemonic;

name is the desired function or terminal name, or a concatenation of host processor and function names if the function on a specific host is required; and

data are optional data for the function.

Alternatively, a VC is established from a processor with an X.25 "call request" packet.
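A terminal modem's handling of the connect command above can be sketched as a small parser. This is a hypothetical helper, not the CRNL implementation; only the "#CN name,data" and "#RL" syntax is taken from the text, and the treatment of whitespace and the comma separator is an assumption.

```python
def parse_command(text):
    """Parse a '#CN name,data' style TSS command line.

    Returns (mnemonic, name, data); name may be 'host.function' when a
    specific host is requested. Returns None for plain (non-command) text.
    """
    if not text.startswith('#'):
        return None                       # plain text, not a command
    body = text[1:]
    mnemonic, _, rest = body.partition(' ')
    name, _, data = rest.partition(',')
    return mnemonic, name.strip(), data.strip()

assert parse_command('#CN TRENDS,chan=7') == ('CN', 'TRENDS', 'chan=7')
assert parse_command('#RL') == ('RL', '', '')
assert parse_command('hello') is None
```

The release command "#RL" parses the same way, simply carrying no name or data fields.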

After a VC has been established, the terminal or software function at each end of the VC is supplied by the TSS with type codes to identify the remote terminal type or X.25 logical channel. The type codes permit software functions to make use of any special terminal features (e.g. graphics and color) that may be available.

The VC is deallocated either from the terminal by the release command:

#RL

or from the processor with an X.25 "clear request" packet.

Data transfers over established VCs take place in full duplex with a choice of transmission modes in each direction. Modes supported are: character by character, line by line and block by block.

Permanent VCs, established at system generation time, are available for the TSS and the host processors to monitor the performance of each other.

4.2.2 Addressing

The TSS addressing capability enables a function to be accessed either in any suitable host processor (as chosen by the TSS), or in a particular processor (as directed in the "connect" command). In the first case, only the function name needs to be supplied when establishing a VC, while in the latter case, a concatenation of the processor and function names is needed.

Within the TSS, addressing is a distributed function resident partially in each modem and partially within the Arbiters. Each modem has a short list of commonly used functions or terminals with their physical addresses, and has access to a larger master list in each Arbiter. In the unlikely event that both Arbiters fail, the modems can continue to provide the addressing service using the local lists.

If a function is supported in several host processors, the function will have more than one physical address in the address lists. When establishing a VC, if the host identity is not supplied, the TSS will automatically request a connection to the function in different hosts in sequence until successful.
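The sequential fallback described here can be sketched as below. The address-list layout and the `try_call` primitive (standing in for an X.25 "call request" that returns true on "call connected") are assumptions for illustration.

```python
def resolve(function_name, address_list):
    """Return all physical addresses registered for a function name.

    A function supported in several hosts has several entries, e.g.
    {'LOGGER': ['HOSTA/21', 'HOSTB/07']}.
    """
    return address_list.get(function_name, [])

def connect(function_name, address_list, try_call):
    """Try the function's addresses in sequence until a call succeeds.

    try_call(addr) models sending a "call request" to one host and
    returns True if a "call connected" comes back.
    """
    for addr in resolve(function_name, address_list):
        if try_call(addr):
            return addr
    return None

table = {"LOGGER": ["HOSTA/21", "HOSTB/07"]}
# Simulate the first host being down: only HOSTB accepts the call.
chosen = connect("LOGGER", table, lambda a: a.startswith("HOSTB"))
# chosen == "HOSTB/07"
```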

The Arbiters maintain up-to-date master address lists by interacting at regular intervals with the host processors on each subsystem over permanent virtual circuits.


4.2.3 Terminal Modem Features

Since terminals with different capabilities are used in the REDNET system, the terminal modems facilitate keyboard entry by providing the following minimum local features where necessary:

i. echo of characters as they are typed in, and

ii. text editing functions such as character delete, line delete and horizontal tabulation.

The terminal modems also provide a macro-like facility whereby a single character input from the terminal can be expanded into a command string which is then processed normally by the TSS. In the REDNET system, this feature will be used to force a terminal into the "home" state (see Section 2.1).
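In essence the macro facility is a one-character lookup table in the modem. The table contents and the control character chosen below are assumptions, not taken from the paper.

```python
# Hypothetical macro table: a single input character expands into a full
# command string, which the TSS then processes normally. The assumed
# control character stands in for the function key that forces the
# terminal back to its "home" state.
MACROS = {"\x01": "#CN MENU"}

def expand(ch):
    """Return the macro expansion for ch, or ch itself if none is defined."""
    return MACROS.get(ch, ch)
```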

5. INTER-FUNCTION COMMUNICATOR

The Inter-Function Communicator (IFC) is a group of programs in each REDNET host processor that enables several software functions within the host to communicate with the Terminal Support Subsystem. It employs the X.25 protocol for all interactions with the subsystem, and, in effect, extends the X.25 logical channels from the TSS virtual circuits to the software functions in the host (see Figure 2).

5.1 IFC Services

A number of IFC services are available to both the software functions within the processor and the TSS. Software functions invoke the services by submitting i/o requests to the IFC through the executive services of the operating system, while the transactions with the TSS are carried out by exchanging X.25 packets containing directives or data.

5.1.1 Connect Service

A connect service request from a software function is used for establishing a communications link with a named terminal device or another function in a host processor. The IFC reserves a logical channel, and a "call request" packet is forwarded on that channel to the TSS for final resolution. The logical channel is actually allocated for transactions only after the TSS returns a "call connected" packet.

An "incoming call" packet from the TSS represents an attempt to establish a communications link with a software function, and specifies the logical channel to be used.


If the indicated function is available in the host processor, it is activated and the logical channel is allocated for transactions with the function.

The function in either case is given the logical channel "allegiance", which means that only this function can receive data on that channel.

5.1.2 Disconnect Service

A disconnect service request on a logical channel from a software function forces the IFC to deallocate the channel and forward a "clear request" packet to the TSS. The latter then returns a "clear confirmation" packet to the IFC when the logical channel has been freed and the remote terminal or function has been notified.

If the "clear indication" packet is received from the TSS, the IFC deallocates the specified logical channel and aborts the activation of the software function associated with that channel.

5.1.3 Transfer/Accept Allegiance

Transfer allegiance is a service provided by the IFC to enable a function that currently has a logical channel allegiance (and hence the right to receive data on that logical channel) to transfer it to another function in the processor. This special service is available only for software functions and has been provided to satisfy requirements for the experimenter and operator level terminal support in the REDNET system (see Section 6).

Actual transfer of allegiance only takes place after the recipient function has submitted an accept allegiance request to the IFC.
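The two-step handshake (a transfer request from the current holder, then an accept request from the recipient) can be sketched as a small state holder. The class and function names are illustrative only.

```python
class Channel:
    """Logical channel allegiance with a two-step transfer handshake.

    Only the function holding allegiance may receive data; a transfer
    takes effect only after the recipient submits an accept request.
    """
    def __init__(self, owner):
        self.owner = owner      # function currently holding allegiance
        self.pending = None     # recipient named in a pending transfer

    def transfer(self, current, recipient):
        if current != self.owner:
            raise PermissionError("only the allegiance holder may transfer")
        self.pending = recipient

    def accept(self, recipient):
        if recipient == self.pending:
            self.owner, self.pending = recipient, None

ch = Channel("menu_package")
ch.transfer("menu_package", "user_function")
assert ch.owner == "menu_package"   # unchanged until the recipient accepts
ch.accept("user_function")
assert ch.owner == "user_function"
```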

5.1.4 Send/Receive Data

A send data service request from a software function causes the IFC to assemble the supplied data into X.25 data packets, and pass them to the TSS on the specified logical channel for transmission to the remote function or terminal.

The receive data service request from a software function causes the IFC to monitor the specified logical channel for data packets arriving from the TSS, provided that the function has the logical channel allegiance. When data are received on that channel they are transferred to the function.


If data packets are received by the IFC on a logical channel currently in use prior to a receive data service request, the data packets are saved until the correct function makes a request to receive the data.
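The buffering behaviour described above amounts to a per-channel queue that is drained by the next receive request; this is a minimal sketch, with names assumed.

```python
from collections import deque

class ChannelBuffer:
    """Hold incoming data packets until the owning function asks for them."""
    def __init__(self):
        self.queue = deque()

    def on_packet(self, packet):
        """Save a packet that arrived before any receive request."""
        self.queue.append(packet)

    def receive(self):
        """Deliver all saved packets to the requesting function."""
        packets = list(self.queue)
        self.queue.clear()
        return packets

buf = ChannelBuffer()
buf.on_packet(b"first")
buf.on_packet(b"second")
# A later receive request drains everything saved so far, in arrival order.
```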

5.2 Software Structure

The IFC consists of three modules: Doorman, Modem Receiver Task and Modem Transmitter Symbiont (see Figure 3).

FIGURE 3: Structure of Inter-Function Communicator

5.2.1 Doorman

The Doorman, the main control and monitoring module of the IFC, is a symbiont* that interfaces the software functions in the processor to the TSS. Its activities are:

i. monitoring and control of logical channels and their utilization;

ii. receipt and processing of service requests from software functions for establishing communications links and for data exchange with remote terminals and functions via the TSS;

*A symbiont is a special privileged task that processes i/o requests directed to it by the executive services of the operating system, and operates as an intelligent device handler.


iii. acceptance and processing of X.25 packets on each logical channel from the TSS (via the Modem Receiver Task), and transmission of data to the appropriate functions;

iv. assembly of data from software functions into X.25 packets, and transmission of the packets to the TSS (via the Modem Transmitter Symbiont);

v. performance of software function activations and deactivations as necessary; and

vi. maintenance of address lists in Arbiters on request from the TSS.

Most of the X.25 packet level (Level 3) services are performed by the Doorman. The packet REJECT service, which is designated as optional in the X.25 recommendation, is not implemented since it involves the overhead and added complexity of maintaining a queue of unacknowledged packets for each operating logical channel.

5.2.2 Modem Receiver Task

The Modem Receiver Task (MRT) constantly monitors the host processor's X.25 link level (Level 2) interface with the TSS, and awaits frames from the TSS.

In addition to the link level supervisory and control services, the MRT ensures that information frames are received in correct sequence, and passes to the Doorman the X.25 packets contained in the information field of the valid frames. If errors are detected in the frame sequence, retransmission of frames is requested from the TSS.

The MRT also monitors the acknowledgement of the frames received by the TSS. If it detects that the TSS is not receiving frames in correct sequence, or if a retransmit request is received from the TSS, the Modem Transmitter Symbiont is asked to retransmit the necessary frames.
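The receive-side sequence check can be sketched with modulo-8 frame numbers, as used at the X.25 link level; the callback names and the single-number retransmit request are assumptions of this sketch, not the actual MRT interface.

```python
class FrameReceiver:
    """Check modulo-8 frame sequence numbers at the link level.

    In-sequence frames are passed upward (to the Doorman, in IFC terms);
    an out-of-sequence frame triggers a retransmission request for the
    frame that was expected.
    """
    def __init__(self, deliver, request_retransmit):
        self.expected = 0
        self.deliver = deliver
        self.request_retransmit = request_retransmit

    def on_frame(self, seq, payload):
        if seq == self.expected:
            self.expected = (self.expected + 1) % 8
            self.deliver(payload)
        else:
            self.request_retransmit(self.expected)

delivered, retransmits = [], []
rx = FrameReceiver(delivered.append, retransmits.append)
rx.on_frame(0, "pkt-a")
rx.on_frame(2, "pkt-c")   # frame 1 was lost: ask the sender for it again
# delivered == ["pkt-a"], retransmits == [1]
```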

5.2.3 Modem Transmitter Symbiont

The Modem Transmitter Symbiont (MTS) operates on requests from the Doorman and MRT modules to transmit frames to the TSS. It assumes that the frames are received in the X.25 link level format. The MTS supplies the frame sequence numbers and maintains a queue of unacknowledged frames which are retransmitted if and when indicated by the MRT.


6. MENU SUPPORT PACKAGE

The Menu Support Package in the host processor is designed to provide the experimenters and operators with a man-machine interface enabling simplified access to system functions. This package is activated when a user forces a terminal to revert to the standard "home" state by pressing the designated function key (see Section 2.1). Upon activation, the basic menu is displayed on the terminal screen, following which the package awaits further input from the terminal keyboard.

If a user enters a function request by keying in parameters and pressing a function key, the text string is transmitted by the TSS to the Menu Support Package, which decodes it and activates the necessary software function. The allegiance of the logical channel on which the transactions are occurring is transferred to the function so that the latter can carry out interactive dialog with the user, if necessary.

After the function has completed direct interaction with the user, the logical channel allegiance is transferred back to the Menu Support Package, which then resumes monitoring terminal input for further function requests.

7. CURRENT STATUS

Prototypes of the TSS modems have been built and are undergoing extensive testing. The terminal modem software and the TSS internal protocol are operating, and further refinements in addressing and failure recovery functions are being implemented. The X.25 software for the computer modem is being designed.

The preliminary design of the IFC and the Menu Support Package is complete, and detailed design of these modules is in progress.


8. REFERENCES

1. YAN, G., L'ARCHEVEQUE, J.V.R. and WATKINS, L.M., "Distributed Computer Control Systems in Future Nuclear Power Plants", Nuclear Power Plant Control and Instrumentation, Vol. II, International Atomic Energy Agency, Vienna, 1978.

2. CAPEL, A.C. and YAN, G., "Distributed Systems Design Using Separable Communications", IWG/NPPCI Specialists' Meeting on Distributed Systems for Nuclear Power Plants, International Atomic Energy Agency, May 14-16, 1980.

3. "Provisional Recommendations X.3, X.25, X.28 and X.29 on Packet-Switched Data Transmission Services", International Telegraph and Telephone Consultative Committee (CCITT), Geneva, 1978.

4. CAPEL, A.C. and YAN, G., "An Experimental Distributed System Development Facility", IEEE Trans. on Nuclear Science NS-24, No. 1, pp. 395-400, February 1977.


SESSION 4 - REQUIREMENTS AND DESIGN CONSIDERATIONS - II


DISTRIBUTED SYSTEMS IN THE HEAVY WATER PLANT ENVIRONMENT

by

O.V. Galpin, G.E. Kean, and J.C. McCardle

ABSTRACT

This paper describes the data acquisition system on the Atomic Energy of Canada Limited Glace Bay heavy water plant. After four years of successful operation, and frequent changes during this period in response to changing requirements, the availability of the system has stabilized at 99.5%. The plant support group carries out all hardware and software maintenance. Based on the experience with this system, consideration is being given to a computer control system for the plant. A distributed control system structure appears to be well suited to the heavy water plant requirements.


1.0 INTRODUCTION

The data acquisition system installed on the Glace Bay Heavy Water Plant (GBHWP) has been in operation now for four years. This paper gives a brief description of the system and some details on its performance over the past two years. A more detailed description of the system is given in Reference 1.

Both the Glace Bay and Port Hawkesbury heavy water plants are considering implementing some form of computer control. The requirements of the control system are outlined. A distributed computer system architecture appears to provide a good match for the control and reliability requirements of a heavy water plant.

2.0 DESCRIPTION OF THE GLACE BAY HEAVY WATER PLANT

The process used by the Glace Bay Plant is the hydrogen sulphide-water dual temperature exchange or, as it is more commonly known, the Girdler-Sulfide (GS) process.

In principle, the process is fairly simple. As shown in Figure 1, water is fed to the top of a column that contains a large number of trays and has a cold zone at about 30°C and a hot zone at about 130°C. Hydrogen sulphide gas (H2S) is circulated countercurrently to the water flow. The trays serve to facilitate good gas-liquid contact. In the cold zone deuterium tends to concentrate in the water and in the hot zone the deuterium tends to concentrate in the gas. As a result, deuterium atoms migrate from gas to liquid in the cold zone and from liquid to gas in the hot zone. Since the gas and liquid flows are countercurrent, the net transport of deuterium is to the center region of the column. In this region the deuterium concentration in the water is typically 4 or 5 times greater than in the inlet water. In the effluent water the deuterium concentration is about 80% of the concentration in the inlet water.

This concentration process is repeated by taking a side stream of enriched liquid or gas from the region of maximum enrichment and feeding it to another, similar column or second stage of enrichment where the concentrating mechanism is repeated. After three stages of enrichment, the heavy water concentration is 10-20% (Fig. 2). This water is then processed in a vacuum distillation finishing unit to give reactor grade heavy water which is typically 99.75% deuterium.

The tower systems or enriching units are connected by a liquid feed and return flow. Each tower system is designed so that it can be operated while isolated from the other units. In this mode deuterium concentrations build up to a saturation level in the first stages after about 8 hours of operation. This design feature allows the plant to achieve a better availability by minimizing the restart time after a failure in a connected unit.

The remainder of this paper describes the computer data acquisition system, called HEWAC, installed at the Glace Bay plant. The following topics will be covered in order: the system description, its role in overall process control of the plant, a description of the computer to computer communications, and the system performance and staffing to date. Finally the basic requirements of a heavy water plant control system are described.

3.0 SYSTEM DESCRIPTION

3.1 Hardware

The hardware layout of the HEWAC system is shown in Figure 3. The system hardware is comprised of the following:

a) Master computer - Varian 620/L-100 with 32K core memory, analog and digital I/O circuitry, 1M word fixed head disc memory, and 2 nine-track magnetic tape drives. This equipment is located in the computer room in the control building.

b) Remote Computers - Varian 620/L-100 with 4K core memory, and analog and digital input circuitry. This equipment is located in a small computer cubicle, at the 17 meter level on the pipe rack, in both North & South plants.

c) Operator's Station - includes a push button function panel, an operator's terminal (keyboard and CRT display), 2 graphical display monitors, an alarm display, and a line printer. This equipment is located in the main control room.

d) System Maintenance Station - includes a terminal (keyboard and CRT display), a paper tape reader and punch, a graphical display monitor, a plotter, and a line printer. This equipment is located in the computer room with the Master computer.

e) Engineering Station - consists of a terminal (keyboard and CRT display) with a built-in thermal printer for hard-copy capability, and a graphical display monitor. This equipment is located in the Technical Department offices.


f) Lab Station - consists of an ASR-33 Teletype located in the Chemistry Lab and is used to run the Mass Spectrometer Analysis programs. These programs will be described later.

g) L/G Recorders - The computer calculates the liquid to gas flow ratios (L/G) every 5 seconds and outputs the values via the Analog Output System to the L/G Recorders mounted on the main control panels. The production of each tower system is very sensitive to changes in the L/G ratio.
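The recorder output is a simple ratio of the two measured flows; a minimal sketch follows, in which the guard against a zero gas-flow reading is a defensive assumption not described in the paper.

```python
def lg_ratio(liquid_flow, gas_flow):
    """Liquid-to-gas flow ratio, recomputed every 5 s and written to the
    panel recorders via the analog output system.

    Returns None on a zero gas-flow reading rather than dividing by
    zero (an assumption of this sketch).
    """
    if gas_flow == 0:
        return None
    return liquid_flow / gas_flow

# e.g. 360 units of liquid flow against 300 of gas flow:
# lg_ratio(360.0, 300.0) -> 1.2
```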

3.2 System Functions

The three primary users of the system are the control room operators, process engineers, and plant management. Although originally intended to supplement the information available to the operators from a standard analog control panel, the computer is now the primary tool for monitoring the plant process conditions. There are no direct control functions in the computer so that a failure of the computer does not cause a shutdown of the plant.

The process engineers access the on-line data base through a terminal in the Technical Department office to optimize the plant control, analyze upsets, and study long term trends. Management information is supplied through a daily production and inventory report.

A brief description of the most important system functions follows:

a) Input Scanning - There are approximately 600 analog and 600 contact digital inputs to the system. Master computer analog inputs are scanned at a 5 second interval and all remote analog inputs at a 10 second interval. Approximately 400 of the digital inputs are scanned every 10 milliseconds, the remaining 200 are scanned every 1 second.

b) Alarm Reporting - Alarms are generated whenever a digital input changes state, or whenever an analog input or calculated variable exceeds its specified limits (i.e. high or low instrument limit, high or low process limit, or rate of change limit). The alarm message contains the time, alarm state, point I.D. and description. It provides a chronological list of events which is very valuable for analyzing plant upsets.
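The three limit checks named above can be sketched as one test per scan; the limit names come from the text, while the data structure and return convention are assumptions.

```python
def check_limits(value, prev_value, limits):
    """Return the alarm states raised by one analog reading.

    limits holds: instrument low/high, process low/high, and a maximum
    rate of change per scan, as listed in the text.
    """
    alarms = []
    if value < limits["instrument_low"] or value > limits["instrument_high"]:
        alarms.append("instrument limit")
    if value < limits["process_low"] or value > limits["process_high"]:
        alarms.append("process limit")
    if abs(value - prev_value) > limits["rate_of_change"]:
        alarms.append("rate of change limit")
    return alarms

LIMITS = {"instrument_low": 0, "instrument_high": 100,
          "process_low": 10, "process_high": 90, "rate_of_change": 5}
# A jump from 60 to 95 trips the process-high and rate-of-change checks:
# check_limits(95, 60, LIMITS) -> ["process limit", "rate of change limit"]
```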

c) Real Time Data Presentation - This function provides the operator with access to current values of analog inputs, calculated variables and manually entered deuterium concentrations. The operator may select to have the information printed or displayed in alphanumeric tabular form, superimposed and updated every 5 seconds on a process schematic, or plotted graphically.

d) Historical Data Presentation - Data are accumulated and averaged for several different time periods and saved in historical files (e.g. ten-minute averages, sixty-minute averages). This data can be displayed on a numerical printout or on a graphical plot.

e) Calculations - Analog inputs and manually entered deuterium concentrations are used to calculate several different parameters and production figures. These calculations are of great interest to operations personnel since they provide an indication of the current state of the plant.

f) Report Generation - There are 20 reports printed daily including the production and inventory report and more detailed reports summarizing the performance of each tower system.

g) Deuterium Analysis Entry - One of the more interesting recent developments on HEWAC concerns the entry of laboratory analysis of deuterium samples. The performance of each tower system is monitored by taking process samples from the mid-point and bottom of each hot and cold tower in the process. The deuterium concentration of each sample is determined by analysis on a mass spectrometer. A complete set of approximately 35 samples is taken twice daily. Based on the sample analysis, adjustments are made to the liquid and gas flow rates of the extraction units to achieve maximum production.

The normal procedure at the plant had been to process a complete set of samples in the lab and then enter the results into the computer system from a terminal. There is a considerable amount of calculation to be done while processing the samples. Each sample is bracketed by standards and the results of each sample must be corrected for drift in the mass spectrometer. This procedure led to frequent error either in processing the samples or in entering the results.

To overcome the problem the mass spectrometers have been connected to the computer system. An analog output signal from each of the two mass spectrometers is connected to the analog input system of the master computer and a hard copy terminal has been located near the mass spectrometers. When the laboratory technician begins the sample analysis an interactive program is initiated on the terminal. The technician enters the sample identification for each sample and when the computer detects that the mass spectrometer output has stabilized the calibration calculations are performed and the results output on the terminal for verification by the technician before being stored in the on-line data base. This development has speeded up the processing of the samples and greatly reduced the frequency of errors in entering deuterium concentrations.
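The paper does not say how the computer decides that the mass spectrometer output "has stabilized"; one plausible sketch is to wait until the most recent readings agree within a tolerance. The window size and tolerance below are assumptions.

```python
def stabilized(readings, window=4, tolerance=0.5):
    """Return True once the last `window` readings span at most `tolerance`.

    Sketch of a stabilization test applied to the mass spectrometer's
    analog output before the calibration calculations are run; both
    parameters are illustrative, not taken from the paper.
    """
    if len(readings) < window:
        return False
    recent = readings[-window:]
    return max(recent) - min(recent) <= tolerance

trace = [10.0, 14.0, 15.8, 15.9, 16.0, 16.1]
# The signal is still climbing after four samples, but the last four
# readings agree to within 0.3, so the full trace counts as stable.
```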

4.0 OVERVIEW OF PLANT PROCESS CONTROL

The role of HEWAC in the overall control of the Glace Bay plant is illustrated in Fig. 4. The plant Technical Department is responsible for specifying the process conditions in the plant. A terminal in the Technical Department provides access to the current state of the plant and a 24 hour history from the on-line data base. The plant-HEWAC-Technical Department loop provides the information for the daily control of the plant.

HEWAC records the value of all analog inputs (600) on magnetic tape every minute and summary information such as alarm messages and lab data every hour. The tapes are then sent to the Chalk River Nuclear Laboratories (CRNL) Computing Centre where they are stored. Both the Technical Department at Glace Bay and the Chemical Engineering Branch at CRNL use this data archive to a) study long term trends, b) analyze problems and incidents, and c) verify computer plant simulation programs. This work by both groups provides for the long term optimization and understanding of the plant operation.

5.0 MASTER/REMOTE COMMUNICATIONS

A large number of the input sensors are located on or close to the main exchange towers. The average distance of these sensors from the computer room is about 300 meters. There is, therefore, a considerable saving in cabling if these inputs are multiplexed close to the sensors. It was for this reason the two remote multiplexers were installed, one in the North Plant and one in the South Plant, both located 17 meters above ground level.

The remote multiplexers are located in environmentally controlled cubicles. These cubicles are equipped with heaters and air conditioners and are maintained at slightly positive pressure with plant instrument air supply purge in order to prevent the ingress of the corrosive and toxic hydrogen sulfide gas.

Data are transmitted between the remote computers and the Master computer over serial data links at 4800 bits/sec (Fig. 5). There is also a data link between the two remote computers so that if the direct link between a remote and the Master fails, data may be transmitted from that remote to the Master via the other remote computer.

The Master computer initiates all communications on the data links. It monitors the operation of the data links and issues alarm messages if an abnormal situation exists. If the direct link to a remote is not operating, an indirect request for the data will be initiated via the other remote. The data links are serviced on a character by character basis initiated by an interrupt.


The remote computers decode the data request and respond by transmitting the data to the computer which made the request. All data link communications are accompanied by a checksum word. This word is compared to a checksum generated by the receiving computer to detect any errors in transmission.
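The checksum comparison can be sketched as below. The paper does not specify the algorithm used on the links; the 16-bit additive sum here is an assumption chosen to match the machine's word size.

```python
def checksum(words):
    """16-bit additive checksum over a list of data words.

    The receiving computer recomputes this and compares it with the
    transmitted checksum word to detect transmission errors. The exact
    algorithm (sum mod 2**16) is an assumption of this sketch.
    """
    return sum(words) & 0xFFFF

def verify(words, received_checksum):
    """True if the locally generated checksum matches the received one."""
    return checksum(words) == received_checksum

msg = [0x0102, 0x0304, 0xFFFF]
ok = verify(msg, checksum(msg))                         # undamaged message
bad = verify([0x0102, 0x0305, 0xFFFF], checksum(msg))   # one-bit error caught
```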

The Master computer updates all analog data values every 10 seconds and all digital data every second. In addition to scanning inputs and responding to requests to transmit data to the Master computer, the remote computers do additional processing of the analog input signals. Each analog input value sent to the Master computer is an average of 16 readings of the input signal taken by the remote computer and corrected for offset.
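The per-point preprocessing in the remotes can be sketched as follows; the subtractive form of the offset correction is an assumption, since the paper only says the value is "corrected for offset".

```python
def conditioned_value(readings, offset):
    """Average 16 raw readings of one analog input and remove a measured
    offset, as done in the remote computers before the value is sent to
    the Master.
    """
    assert len(readings) == 16, "remotes average exactly 16 readings"
    return sum(readings) / 16 - offset

# 16 samples averaging 100.0, with a 0.25 offset measured on the channel:
# conditioned_value([100.5] * 8 + [99.5] * 8, 0.25) -> 99.75
```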

6.0 SYSTEM PERFORMANCE AND STAFFING

The specification for the HEWAC system was released for tender in December 1972 and the contract awarded in March 1973. All of the software including the basic executive, device drivers, and data base was custom designed for the system and programmed in assembler language. Figure 6 shows the total person-years of effort by the supplier, our consultant, and AECL required to bring the system to an operational state by the end of 1975.

The system support staff of 3 people has been constant for the past three years. About one half a person-year from this group is required to support other computer equipment at the plant.

About 50% of the current support effort on HEWAC goes into new developments and enhancements of the system, the remainder being required to maintain the system.


The availability of HEWAC for the past 27 months is shown in Figure 7. The average availability for the period shown was 99.5%. Fig. 8 gives the breakdown of the 0.5% unavailability over this period. Planned down time accounts for 25% of the unavailability. Most of the planned down time is for installing and testing new software. The remaining 75% unplanned down time is made up of 40% caused by software, 20% hardware, and 15% unidentified.

The plant support group is responsible for all hardware and software maintenance. A complete set of spare parts including a spare CPU, disk, and printed circuit boards, is maintained on site. Most of the integrated circuit chips (IC's) in the system are also stocked. Faulty circuit boards are repaired on site.

There have been no major hardware reliability problems. The fixed head disk unit has operated trouble free for six years. Failures of the air conditioners in the remote cubicles have resulted in several circuit board failures on the remote computers due to overheating.

As mentioned above the operating system was custom designed for this application. Although this may seem to be an inefficient approach, so far it has worked out well for this system. It has forced the support group to know the operating system as well as they know the applications programs. There is no hesitation in modifying the operating system to meet new requirements and there is no dependence on outside support. To the best of our knowledge, there were no operating systems available in 1973 that could equal the performance achieved on HEWAC. The extra cost, if any, of this flexibility and independence is a slightly larger software support group.


There are two important contributing factors to our satisfaction with the custom designed software. The first is the early and extensive involvement of the eventual support group in the design and development of the system as indicated in Fig. 6. The second factor has been a relatively stable support group.

7.0 CONTROL SYSTEM REQUIREMENTS

Although the basic GS process is the same, the Glace Bay plant uses a flow sheet that is quite different from other Canadian heavy water plants. Process control at Glace Bay is considerably more difficult because of the larger number of tower systems and more importantly a greater sensitivity to process conditions (Reference 2). Since start up in 1976 the plant operation has matured and the technical staff has developed the experience and confidence to the point where computer control is now being investigated.

At the Port Hawkesbury heavy water plant the technical staff is also investigating some type of computer control but for a different reason. At Port Hawkesbury part of their data acquisition system and the analog panel instrumentation are both obsolete. Thus there is an opportunity to replace both with a new computer control system.

The basic requirements of the control system at both plants are the same. The requirements, although needing considerably more detailed definition, are briefly outlined below:

Reliability

A control system failure which would result in all systems shutting down would mean 2-3 days of lost production with a value of the order of $0.5M. This would rule out any type of single computer system.

Retrofit Capability

Again because of the value of lost production to the operating plants, the new control system must be implemented with the minimum downtime and disruption to the plant.

Control Valves

Each tower system has 11 or more control valves. The control system must perform normal process control functions, including a feed forward loop, to maintain the specified setpoints.
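A minimal sketch of a control computation with a feed-forward term, of the kind each valve loop would run per scan, is shown below. The proportional-plus-feed-forward form, the gains and the signal names are all illustrative assumptions; the paper does not specify the control law.

```python
def control_output(setpoint, measurement, disturbance,
                   kp=0.8, kf=0.5, bias=50.0):
    """Feedback plus feed-forward valve command (illustrative gains).

    The feedback term acts on the setpoint error; the feed-forward term
    anticipates a measured disturbance (e.g. a feed-flow change) before
    it shows up in the controlled variable.
    """
    error = setpoint - measurement
    return bias + kp * error + kf * disturbance

# At setpoint with no disturbance the valve stays at its bias position:
# control_output(30.0, 30.0, 0.0) -> 50.0
```

A real loop would normally add integral action to remove steady-state offset; the sketch keeps only the terms needed to show how the feed-forward path enters the output.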

Heat Recovery Loops

An important feature of the GS process is the large amount of energy transfer required to economically maintain the 100°C temperature differential between the cold and hot tower sections. Energy is transferred between recirculating liquid flows from the hot and cold sections in large heat exchanger banks. Excess heat is removed from the cold tower by cooling water and heat is added to the hot tower with steam.

There is considerable scope for using sophisticated heat optimization routines to conserve energy. At Glace Bay, besides the steam and cooling water flows, there are four independent recirculating flows which must be controlled.


Steam Allocation

At full production steam is a limiting resource. When tower systems are out of service the available steam must be allocated in the most efficient manner. Conversely, if there is a reduction in the steam supply, cutbacks must be taken in the most efficient manner.

Minimize Upsets

The control system should respond to process upsets or equipment failures so as to minimize the effect on connected systems.

L/G Ratio

The production of each enriching tower is very sensitive to the liquid to gas (L/G) flow ratio in each tower. The L/G ratios are adjusted based on deuterium sample analysis in each tower. There is a large degree of interaction between the towers in response to changes in L/G ratios. The interaction, sensitivity, and requirement for lab analysis make L/G control the most difficult control function in a heavy water plant.
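The coupling can be pictured with a small gain matrix: a change in one tower's L/G ratio shifts the deuterium concentration in its neighbours as well. The sketch below inverts an assumed 2x2 interaction matrix (the numbers are invented, not plant data) to find the pair of ratio adjustments that would correct both towers' measured deviations at once:

```python
# Decoupling sketch for two interacting towers: solve G * dL = dev for
# the L/G adjustments dL, where G is an assumed interaction matrix.
# All numerical values are illustrative only.
def solve_2x2(g, dev):
    """Solve a 2x2 linear system by Cramer's rule."""
    (a, b), (c, d) = g
    det = a * d - b * c
    return [(d * dev[0] - b * dev[1]) / det,
            (-c * dev[0] + a * dev[1]) / det]

gain = [[1.0, 0.3],   # tower 1 responds to its own and tower 2's ratio
        [0.4, 1.0]]   # tower 2 likewise
deviation = [0.5, -0.2]          # measured deuterium deviations
adjust = solve_2x2(gain, deviation)
```

Because the corrections depend on lab sample analysis, a real scheme would apply such adjustments slowly, between analyses, rather than continuously.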

A distributed control system as illustrated in Fig. 9 is being considered for Glace Bay. Each enriching unit would have its own processor with memory, process I/O, and operator station. All of the processors and HEWAC would be connected on a communications channel.

The individual processors would have the inputs, outputs, and programs required to operate the 11 control valves to maintain the specified setpoints, independent of the other processors in the system. Overall plant control functions such as heat recovery loop optimization, steam allocation, and L/G ratio control would be done by HEWAC, since it already contains all process variables and lab data.

Changes to the process conditions would be accomplished by

HEWAC updating setpoints in the individual processors.

The traffic on the communications channel would include

status information for each processor, setpoint

communication, and plant status information so that the

control system can respond to minimize the effect of major

process upsets and failures of plant equipment.
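A minimal sketch of that traffic, with message types and field names invented for illustration, treats the channel as a stream of typed records that each unit processor filters for its own address:

```python
# Sketch of channel traffic between a supervisory computer and unit
# processors. Message kinds and fields are illustrative assumptions,
# not the HEWAC design.
from collections import namedtuple

Message = namedtuple("Message", "kind unit payload")

def deliver(channel, unit_id):
    """Return the records a given unit processor should act on:
    its own setpoint updates plus broadcast plant-status records."""
    return [m for m in channel
            if (m.kind == "setpoint" and m.unit == unit_id)
            or m.kind == "plant_status"]

channel = [Message("setpoint", "2100", {"valve_3": 42.0}),
           Message("status", "2200", {"healthy": True}),
           Message("plant_status", None, {"steam_supply": "reduced"})]
inbox = deliver(channel, "2100")  # setpoint for 2100 plus the broadcast
```

Status records addressed to other units pass by untouched, while broadcast plant-status records reach every processor so each can react locally to a major upset.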

One of the attractive features of the distributed system to the operating heavy water plant is the ability to develop and test control strategies on a single system without a major cost or commitment. In summary, the distributed architecture satisfies the requirements outlined above very well.

8.0 CONCLUSION

The HEWAC data acquisition system at the Glace Bay plant has played a key role in providing the information required by the process control engineers to bring the plant to mature production. It has also been a valuable tool for the plant operators in monitoring the process and in identifying the causes of plant trips and upsets.

A support group of three people is responsible for all software and hardware maintenance. Familiarity with the custom designed software has allowed the support group to respond to changing demands on the system. Despite frequent changes to the software, the system availability has averaged 99.5%. The remote multiplexors were a relatively novel feature when they were installed.


Although located in a very hostile environment, the remote multiplexors have performed very well.

Sufficient understanding of, and experience with, the control characteristics of the plant has been developed to consider defining a computer control strategy. A computer control system appears to be justified on the basis of better process control and optimized energy consumption. A distributed system architecture best meets the heavy water plant requirements.

9.0 REFERENCES

Y.H. Bajwa, J.C. McCardle, "Application of a Computerized Data Acquisition and Presentation System to a Heavy Water Plant", presented at the Canadian Electrical Association Spring Meeting, Toronto, Ontario, March 22-24, 1976.

K.J. Bradley, C.J. Lettington, "Development of a Control Strategy for the Glace Bay Heavy Water Plant", presented at the Canadian Conference on Automatic Control, University of New Brunswick, September 24-25, 1973.

ACKNOWLEDGMENTS

The authors would like to express their thanks to Mr. D.A. Odedra and Dr. D. Bhattacharyya of the Glace Bay Heavy Water Plant, and to Dr. D.E. Minns and Mr. L.J. McCormick, for their contributions to this paper.


Figure 1: Production of heavy water (D2O) at the Glace Bay plant. Deuterium exchange occurs as hydrogen sulphide gas bubbles up through water flowing across perforated trays in the exchange towers. In the cold section (87°F/30°C) deuterium migrates from the gas to enrich the feed water; in the hot section (279°F/137°C) deuterium migrates from the water to enrich the gas. Enriched water passes to the next tower, where the process is repeated, and depleted water returns from the next tower.


Figure 2: Deuterium production at GBHWP. Feed water at 400 ppm D2O passes through first-stage and booster towers to the second stage (about 5000 ppm D2O) and the third stage (about 15 percent D2O); a distillation unit produces finished product at 99.75 percent D2O, about 1280 kg/day.

Figure 3: Equipment layout.


Figure 4: Plant process control. The Glace Bay Heavy Water Plant is supported by simulation programs run at the CRNL Computing Centre by the Chemical Engineering Branch, Chalk River Nuclear Laboratories.


Figure 5: Master-remote communications. The north and south computers and the remote multiplexers are connected to the master computer by data links; all data links run at 4800 baud.


Figure 6: HEWAC development effort in person-years, 1972-1979: system supplier, 12 person-years; consultant, 6 person-years; AECL, 7 person-years to the end of 1975; system operation began in 1975.


Figure 7: HEWAC availability, 1978-1980, averaging 99.5%.


Figure 8: HEWAC unavailability, January 1978 to March 1980: unplanned outages 75%, planned outages 25%; software accounted for 40%.


Figure 9: Distributed control for the Glace Bay plant. Tower systems 2100, 2200, 2300, 1100, 1200, 1300, 1400 and 3600 each have a processor with memory and process I/O and an operator station; process signals enter each local processor, and all processors are connected with HEWAC by a communications link.


THE HIERARCHY OF THE ESSENTIAL CONTROL FUNCTIONS OF THE CANDU REACTOR IN A DISTRIBUTED SYSTEM

par: Pierre Mercier

Hydro-Québec

ABSTRACT

Control functions in CANDU nuclear generating stations are programmed within two centralized and redundant minicomputers, while safety (protection) functions are assigned to conventional analog systems. This set-up is the product of standards and of economic and technical considerations which are now being modified by the maturing of microprocessors, the progress in digital communications, and the development of mathematical process models.

Starting from the control and safety systems installed at Gentilly-2, this paper analyses the various trends that will affect the implementation of essential control functions within a distributed system. In particular, it emphasizes the characteristics that future software systems must build in to comply with the particularly important operational requirements of nuclear generating stations.

Paper presented at the IWG/NPPCI Specialists' Meeting on Distributed Systems for Nuclear Power Plants, held at Chalk River, Ontario, 14-16 May 1980.


1.0 INTRODUCTION

Canada is one of the pioneers in the computer control of nuclear power plants. The Gentilly-2 system (ref. 4), a CANDU-600 MWe, is one of the most recent and is based on two redundant minicomputers. This type of system has proved itself at Gentilly-1, Pickering and Bruce, and will probably be installed in the plants of the next generation.

However, progress in computer technology, together with certain growth problems associated with the present centralized system, has prompted a review of the situation in Canada (ref. 10), and the laboratories are already working on the development of an experimental distributed system (ref. 3). This effort is not only Canadian; in several countries (refs. 1 and 11), systems based on several computers of different types are being studied, proposed and installed.

Taking the Gentilly-2 systems as a starting point, this paper deals with the adaptability of the essential control functions (regulation and safety) to a distributed system. The emphasis is placed on the software rather than on the hardware.

2.0 DESCRIPTION OF THE ESSENTIAL FUNCTIONS

The essential control functions of the CANDU are found mainly in:

a) the regulating system
b) the safety (protection) systems

For each, a brief description and an enumeration of the design principles will orient the reader.

2.1 The regulating system

It is now well known that the regulation of the main CANDU processes is entrusted to a centralized system of two redundant computers (refs. 2 and 4). Figure 1 presents its main functions and the links between them; Table 1 lists some characteristics of the main programs in which these functions are found (refs. 5, 6, 7, 8).

Associated with, and inseparable from, these regulating functions, the G-2 computers also contain all the man-machine interface programs, the executive, and the programs used for the command and surveillance of the fuelling machines. Together these programs form a voluminous set of about 800,000 16-bit words, distributed as indicated in Figure 2.


2.2 The protection systems

At Gentilly-2 the safety functions proper are performed by:

1) the two reactor emergency shutdown systems (SAU #1 and SAU #2)
2) the emergency core cooling system (ECC)
3) the containment
4) the safety support systems: emergency power supply (EPS) and emergency water supply (EWS)

These systems must be independent, reliable, frequently tested, and as simple as possible. In addition, they reflect the Canadian safety principles (ref. 12), which require in particular that regulation and protection be conceptually, functionally and geographically separate, and that there be two completely independent reactor emergency shutdown systems.

In what follows we are interested in the shutdown systems, which is why their main characteristics are given in Table 2.

3.0 FUTURE TRENDS

The constraints that the designer of the control functions must take into account are becoming more difficult and are often contradictory. On one hand, plant equipment is being optimized to reduce costs; on the other, increasingly demanding safety conditions are being specified.

Fortunately, the means at our disposal are evolving and make it possible to build equipment that performs better and is safer. These means are microprocessors and the increasingly accurate mathematical models of the physical phenomena of the processes. The following paragraphs explain how these means can be fully exploited in a distributed system of essential control functions.

3.1 The future of regulation

3.1.1 The current centralized system

The current system is much appreciated by the operator, and its performance at Pickering and Bruce has demonstrated its very high reliability. However, it is experiencing growth problems characteristic of centralized systems.

As far as the essential functions are concerned, two problems must be mentioned. First, the algorithms are becoming more and more complex, because the designer wants his program to cope with the largest possible number of situations and wants to integrate information from the mathematical models to optimize operation (i.e. the FLU and CCA programs). Second, the essential functions are far removed from those who look after the equipment: the specialized personnel required to program and manage the complex set of computer programs constitute an additional intermediary between the equipment and the control algorithm.

A distributed system could improve this situation if its design respects certain characteristics, which we now examine.

3.1.2 Characteristics of an ideal distributed system

1) Define the essential functions (EF), insisting on functional independence and simplification. This kind of work was carried out for the CANDU 600 MWe (ref. 9). It shows that, although currently centralized geographically, the main regulating functions of G-2 are functionally isolated. Moreover, the number and size of the EFs are small if care is taken to define them according to criteria related to the nature of the process being regulated and to the consequences of a function's failure on plant operation. Table 3 summarizes ref. 9. Thus, in a distributed system, the rudimentary EFs would be assigned to very reliable equipment, much smaller than that currently employed. For the other functions, the characteristics of the equipment would depend on the needs.

2) Increased use of optimization functions (OF) based on the mathematical models of the processes. These models are gaining in precision, and it is increasingly profitable to use them in regulation to improve performance. A distributed system should facilitate the use of OFs, since it would be possible to choose appropriate computers different from those of the EFs. This separation between the OFs and the EFs also implies computer-to-computer communications. These links must not, however, make the operation of the EF "too dependent" on the OF. At all times, the EF must be able to isolate itself from the OF and keep the plant running in a state that does not require the OF (i.e. reduced power).
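The requirement that the EF be able to isolate itself from the OF can be sketched as a guard around the supervisory setpoint: updates from the optimization computer are accepted only while the link is fresh, and a safe local value is held otherwise. The names, timeout and setpoint values below are illustrative assumptions, not the Gentilly-2 design.

```python
# Sketch: an essential function (EF) that takes setpoints from an
# optimization function (OF) only while the link is alive, and falls
# back to a safe reduced-power setpoint otherwise. Illustrative only.
class EssentialFunction:
    def __init__(self, safe_setpoint, timeout):
        self.safe_setpoint = safe_setpoint
        self.timeout = timeout
        self.remote = None
        self.last_update = None

    def receive(self, setpoint, now):
        """Record a setpoint update arriving from the OF computer."""
        self.remote, self.last_update = setpoint, now

    def active_setpoint(self, now):
        if self.last_update is not None and now - self.last_update < self.timeout:
            return self.remote
        # OF silent or link dead: isolate and run at the safe value.
        return self.safe_setpoint

ef = EssentialFunction(safe_setpoint=60.0, timeout=5.0)
ef.receive(98.5, now=0.0)
ef.active_setpoint(now=1.0)    # 98.5: link is fresh
ef.active_setpoint(now=10.0)   # 60.0: fallback after the timeout
```

The key property is that the EF never blocks on the OF: it always has a usable setpoint of its own, so the plant stays running (at reduced power) if the link is lost.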

3) A distributed man-machine interface (MMI). The Gentilly-2 MMI is already of an imposing size (see Figure 2), and it is becoming important to set priorities in what is presented to the operator. This will require the definition of a reduced (but sufficient) interface associated with the essential functions, complemented by a more sophisticated system that is useful but not essential. The reduced interface will thus be independent of the sophisticated interface.

4) Increased use of test programs. The need to verify the performance of the G-2 regulating algorithms justified the development of off-line static and dynamic verification programs. These programs have proved useful, and it is desirable that all the software benefit from them. They are voluminous (3 to 5 times the size of the program being verified), and it is unrealistic to run them on-line in a centralized system. In a distributed system, a powerful computer would carry out all the detailed tests of all the programs, while each micro would run only a few tests on its own programs.

Computer regulation, so useful to the CANDU, will take on a new life if the essential functions are isolated and assigned to small, very reliable computers. Basic regulation will be much simpler, and advantage can then be taken of more complex algorithms that are not essential to operation.

3.2 The future of the safety systems

The safety functions are essential. Until quite recently they were extremely simple, consisting of passive systems or a few rudimentary trip loops.

Those days are over, however, and Gentilly-2 is already affected by a rapid increase in the number of systems, a growing number of loops, and more complex logic caused by the conditioning of parameters. As a result, the introduction of micros into the protection chains of the two emergency shutdown systems is being seriously considered.


3.2.1 Trends in the safety functions

Certain trends that will favour the use of computers in safety can now be identified:

1) The safety systems will become more complex. I mentioned earlier the conflict the designer must resolve between a higher-performance plant and demanding safety margins. The solution comes in part from a better mathematical knowledge of the physical phenomena (models) and of the cause-and-effect relations among the various parameters. Since the list of incidents and accidents against which protection is required keeps growing, the result will be systems with many measurements, some of which will serve to condition others.

2) Automated tests. To ensure the reliability of the safety systems, the operator must carry out periodic verifications of the equipment. With the growing number of tests, it becomes necessary to automate them to free the operator. Here again, digital technology offers enormous possibilities.

3) The man-machine interface of the safety systems urgently needs the computer. This need is obvious when the panel of the SRR is compared with that of SAU #1 (see the photographs in Figures 3 and 4). With the computer and the cathode-ray screen, much more voluminous information can be presented in a far better way.

4) Regulation-safety communication. In Canada, any link between regulation and safety is banned. This separation begins at the sensors and continues through to the actuators (see Figure 5). This situation could change to advantage thanks to optical coupling.

At Gentilly-2, the regulating system is much better equipped than the shutdown systems to detect instrumentation failures and to warn the operator. This is due not only to the use of computers but also to the fact that regulation has far more sensors (5000 vs 100) and knows the state of the plant well. How, then, can safety benefit from these advantages without running the risk of coupling, and without enlarging (complicating) the safety logic to analyze the instrumentation?


It should be noted here that other countries are moving towards complex safety systems linked to regulation (refs. 1 and 11).

One thing is certain: a minimum of risk and many advantages would result from the acceptance of a uni-directional link such as the one presented in Figure 6. The safety systems would simply transmit all their signals to regulation over an optical link. In such an arrangement, regulation cannot affect safety, and safety benefits from a sensor surveillance system as powerful as desired.

CONCLUSION

The system of the future takes shape when the various points discussed above are brought together. This system is sketched in Figure 7, and its characteristics concerning the essential safety and regulating functions are:

1) a decentralized system comprising several computers linked by a communication channel;

2) a uni-directional optical link between regulation and safety;

3) the essential functions are programmed in small, very reliable and autonomous computers;

4) the other functions then benefit from equipment better suited to their task and can be programmed in a high-level language.

Such a system is not yet ready. It nevertheless constitutes a worthwhile objective for researchers and for the designers of nuclear power plants. Its drawbacks, like its advantages, can be discussed from various points of view, the most important of which centres on reliability.

This paper has been concerned mainly with the essential control functions. From that point of view, a distributed system prevails over the centralized system because, above all, it makes it possible to simplify and isolate the essential functions as much as possible, while permitting the use of optimization functions that are as complex as desired but not essential. This characteristic of simplifying and isolating the essentials of control takes on its full value in the operation of nuclear power plants.

ACKNOWLEDGMENT

The author thanks Messrs. R. Tawa, M. Doyon, J.P. Dietrich and D. Rancourt for their advice in the preparation of this text.


Figure 1: Main regulating programs of Gentilly-2. The diagram shows the pressurizer, the alternator, the turbine (run-up, power), and the reactor-leading and turbine-leading setpoint modes.


Figure 2: Relative amount of memory occupied by the computer programs at Gentilly-2 (executive, process regulation, workspace, fuelling-machine command and surveillance, and surveillance and man-machine interface); total 780,000 16-bit words.


Figure 3: Panel of reactor emergency shutdown system no. 1 (SAU #1) at Gentilly-2.


Figure 4: Panel of the Gentilly-2 reactor regulating system.


Figure 5: Current systems. Plant sensors (100 analog inputs each) feed hard-wired logic for SAU #1 (shutdown rods) and SAU #2 (injection), entirely separate from the regulation/surveillance computer (3000 digital and 2000 analog inputs) and its actuators.

Figure 6: Envisaged systems. Sensors feed micros in each shutdown chain (rods, injection) and micros plus minis for regulation and its actuators, with optical coupling between safety and regulation. Legend: IHM = man-machine interface.


Figure 7: Ideal control system. Emergency shutdown systems SAU 1 and SAU 2 (with alarms), the safety systems (ECC, EPS, EWS, containment, with display and maintenance interfaces), and the essential regulating functions (reactor power, heat-transport pressure, steam generator pressure and level, turbine frequency, fuelling machines, services), each with a local man-machine interface, are linked through data, optimization and transfer interfaces.


TABLE 1: PROGRAM INPUTS AND OUTPUTS

Program (symbol)                                  Size   Inputs (2)          Outputs           Messages
                                                         EA    EN   Soft(1)  SA   SN   Soft
Reactor power regulation (SRR)                    12 K   140   88    27      15   65   21      7 (3)
Fast power setback (RRP)                           1 K    50   14     0       0   10    1      (4)
Flux mapping (FLU)                                32 K   102   28  1500       3   10   15    110
Moderator temperature regulation (RTM)             3 K    17   14     4      11   17    0     51
Coolant pressure and inventory regulation (CCA)   10 K    95   11    11       8   18    0    216
Steam generator pressure regulation (RPG)          7 K    61   33     4      12   20    2    136
Steam generator level regulation (RNG)             7 K    88   61     4       0   20    0    263
Unit power regulation (RPC)                        6 K    25   11     4       0    0    1    120
Turbo-generator monitoring (STA)                   6 K    55   47     4       0   18    1    112
Turbine run-up (MVT)                              10 K    22   91             0         1    192

NOTES: (1) There are more software-type inputs than outputs, since operator actions are included.
(2) Several analog inputs are repeated from one program to another; thus the 28 flux detectors are read by SRR, RRP, CCA, RPG and RNG.
(3) SRR passes 16 "software" variables to FLU, and FLU returns 16 to it.
(4) RRP's messages are handled by SRR.

EA: analog input; EN: digital input; SA: analog output; SN: digital output.


TABLE 2

CHARACTERISTICS OF THE GENTILLY-2 EMERGENCY SHUTDOWN SYSTEMS

Parameter                                SAU #1 sensors   Cond.   SAU #2 sensors   Cond.
1  High local flux                       2 x 34 phi       *       2 x 23 phi       *
2  Log rate (flux)                       3 CI             *       3 CI             *
3  High coolant pressure                 6 P              *       6 P              *
4  Low coolant pressure                  6 P              *       6 P              -
5  Low coolant flow: flowmeter           6 F              -       -                *
                     delta-P             -                        6 dP
6  Low pressurizer level                 3 L              *       3 L              *
7  Low steam generator level             6 L              *       6 L              *
8  Low feedwater pressure                6 P              *       6 P              *
9  High containment pressure             3 P                      3 P
10 Manual                                1

NOTES: phi: in-core flux detectors; CI: out-of-core ionization chambers; *: conditioned trip (power level, number of pumps, variable setpoint).


TABLE 3

CLASSIFICATION OF THE MAIN GENTILLY-2 REGULATING FUNCTIONS (1)

ESSENTIAL FUNCTIONS (2)
1) reactor power
2) primary heat-transport pressure
3) steam pressure at the steam generators
4) steam generator level
5) steam rejection to the condenser
6) turbo-generator frequency
7) fast setback (STEPBACK)
Criteria: fast process; on failure, shutdown; together, a set sufficient to hold the plant at constant power.

OTHER FUNCTIONS
8) reactivity reserve
9) steam rejection to atmosphere
10) primary inventory
11) moderator temperature
12) shutdown rod withdrawal
13) calibration of the power measurements
14) flux mapping
15) plant operating mode
16) turbine startup
17) plant surveillance
Criteria: slow process, for which manual adjustment is possible on failure; or very slow process and non-periodic function, with power reduction on failure.

NOTES: 1) Only the functions programmed in the Gentilly-2 computer are classified.
2) Each essential function has a minimum man-machine interface.


REFERENCES

1. Proceedings of the IAEA/NPPCI Specialists' Meeting on Use of Computers for Protection Systems and Automatic Control, München, 11-13 May 1976.

2. A. Pearson, "Digital Computer Control in Canadian Nuclear Power Plants: Experience to Date and Future Outlook", AECL-5916, 1977.

3. G. Yan, J. L'Archevêque, L.M. Watkins, "Distributed Computer Control Systems in Future Nuclear Power Plants", IAEA-SM-226/100, April 1978.

4. R.E. Askwell, R.A. Olmstead, N. Yanofsky, "600 MWe Generating Station Digital Control Computer System: Design Description", XX-66400-1, rev. 0, June 1977.

5. P. Mercier, "Aperçus des fondements et règles du contrôle de la puissance du réacteur CANDU 600 MWe", ETN-RI-78-01, janvier 1979.

6. P. Mercier, "Contrôle des principaux procédés du caloporteur primaire et du générateur de vapeur pour la centrale CANDU-PHW-600 MWe", ETN-RI-78-03, avril 1978.

7. M. Doyon, "Contrôle global de la puissance de la centrale Gentilly-2", Description Générale 66550-1, février 1979.

8. M. Doyon, "Contrôle par ordinateur des procédés de la centrale Gentilly-2", Description Générale 66550-2, février 1979.

9. P. Mercier, P. Richer, "Les fonctions essentielles de régulation d'une centrale nucléaire CANDU-PHW-600 MWe", paper presented at the Canadian Nuclear Association conference, June 1978.

10. Record of Station Control Computer Seminar, Ontario Hydro Design and Development Division, April 17-18, 1978, Orangeville, Ontario.

11. J. Bruno, B.M. Cook, D.N. Katz, J.B. Reid, "Microprocessors in Nuclear Plant Protection System", IEEE, A 80 102-4, 1980.

12. Gentilly-2 600 MWe Nuclear Power Station Safety Report, September 1979.


DISTRIBUTED SYSTEMS FOR THE PROTECTION OF NUCLEAR POWER PLANTS

P. JOVER

Centre d'Etudes Nucléaires de Saclay, 91190 Gif-sur-Yvette (FRANCE)

1. - INTRODUCTION

The advantages of distributed control systems are generally presented as follows:

- improved plant operation,
- reduced costs,
- adaptation to changes in technology.

These advantages are obviously very attractive for the control systems of nuclear power plants. A few years ago, EPRI (Electric Power Research Institute) showed that signal multiplexing is technically feasible, that it can satisfy the availability specifications, and that it can reduce costs (1). Since then, many distributed control systems have been proposed by manufacturers.

This note offers some comments on the application of the distribution concept to protection systems (what should be distributed?) and ends with a brief description of a microprocessor-based protection system for the pressurized water plants now under construction in France.

2. - DISTRIBUTED SYSTEMS

A distributed control system comprises a number of stations containing application units; these stations communicate with one another over a transmission network (Fig. 1). The distribution concept mainly implies (a) the introduction of stations close to the sensors and actuators in the process area (geographical distribution), and (b) the determination of the functions to be assigned to these stations (functional distribution). Since a transmission error in the network can have serious consequences, stringent specifications are needed to obtain very high integrity of the transmitted data and very high availability of the network. It should be noted in this regard that a working group of the International Electrotechnical Commission (IEC/TC 65) has been set up to define a standard for high-level communications between stations in distributed control systems (PROWAY: a process data highway). The group's first draft deals with the functional specifications (system structure, protocols, data integrity, availability, etc.). Since this work is intended to become a standard, designers will later have to take it into account.

For application to nuclear power plants, other aspects must be considered:

- control and instrumentation equipment is divided into two groups: equipment important to safety, for which there are special specifications, and the other equipment;

- much of the control and instrumentation equipment and many components are located inside the containment and may be subjected to accident environmental conditions (temperature, humidity, pressure and radiation).

These two points are developed in the following chapter.

3. - PROTECTION SYSTEM

3.1. - Particular specifications

It will be recalled that the purpose of the protection system is to initiate automatic actions in the safety actuation system. Once an action has been initiated, the intended sequence must run until the safety task is accomplished.

The functional requirements of protection systems depend on the type of nuclear reactor. These requirements concern the parameters important to safety: the logic diagrams or algorithms for the protection functions, the number and location of the sensors and actuators, and the response-time and accuracy targets.

Moreover, the protection system must comply with the safety regulations, with regard to the single failure criterion (a single failure must not prevent a protective action) and with regard to the design bases: independence and diversity, redundancy, operator intervention in the protective action, periodic testing, and equipment qualification. Finally, the new concept of defence in depth may lead to subdividing the protection system into independent subsystems.


3.2. - Distributed protection system

In order to examine what should be distributed, some points concerning the possibility of geographical distribution must first be considered:

- introduction of stations in the process area

Installing intelligent stations inside the containment is attractive as a way of saving cables and penetrations. But the grouping of the sensors and actuators must be considered. In the PWR application, for example, a large proportion of the protection system sensors is located inside the containment, while a large proportion of the safety actuators is located outside it; for many protection tasks, the actuators located outside the containment are driven by parameters measured with sensors located inside it. Reliable links between the inside and the outside must therefore be provided through the stations, and the best way to make these links is not obvious.

As regards the equipment inside the containment, qualification for accident environmental conditions (class 1E) and maintenance are additional problems.

- communications between redundant stations

In order to satisfy the single failure criterion during normal operation and during tests, three or four independent stations must be installed to initiate a protective action. The filtering effect against spurious trips is generally obtained by processing the logic signals before the decision; this calls for fast and reliable data transfers between redundant stations, and leads to pairwise communications to avoid "bottlenecks" in the transmission line.

- operator intervention in the protective action

In addition to the automatic initiation of the protective action, manual backup controls must be provided for rapid reactor shutdown and for the initiation of the other safety actions. The regulations specify that manual control capabilities must be provided in the control room to command the actuators as directly as possible; this leads to installing hard-wired links between the control room and the actuator devices.

In conclusion, one will note the difficulty of finding a good location for the stations. The optimal location is perhaps not inside the containment but outside it, the links between sensors and stations being made as serial links, and the links between stations and actuator control cells being hard-wired (Fig. 2).


Another aspect of the distributed architecture concerns functional distribution: this is the case for the systems described in the next chapter.

4. - MICROPROCESSOR-BASED PROTECTION SYSTEMS

A step towards systems with a distributed architecture has been taken in FRANCE and in other countries, with the microprocessor-based protection systems under development for PWR applications (2) (3). These systems make extensive use of the possibilities offered by microprocessors and multiplexed transmissions, but the electronic equipment is mounted in cabinets located outside the containment: no cables or penetrations are therefore saved in connecting the sensors to the microprocessors.

The SPIN (Système de Protection Intégré Numérique, an integrated digital protection system) is a protection system based on the use of microprocessors; it is being developed by FRAMATOME and MERLIN-GERIN in association with the C.E.A. for the 1300 MWe, 4-loop PWR nuclear power plants.

The main characteristics of this system are:

- d is t r ibut ion des fonctions de protection dans des unités fonction-nelles,

- liaisons f i l -à-f i l entre capteurs et unités fonctionnelles,- cornu nications entre unités redondantes au moyen de liaisons série,- communications entre chaque unité redondante et le système de

traitement d'informations centralisé au moyen de liaisons série,- liaisons f i l -à- f i l entre unités fonctionnelles et système d'action-

neurs de sûreté.

The SPIN is a quadruply redundant system (Fig. 3). It comprises:

- four redundant processing units, connected to the four redundant groups of sensors,

- two redundant logic units, connected to the two groups of safeguard actuators.

These two redundant logic units are driven by the logic signals generated in the four processing units.

There are two levels of redundancy:

- redundancy at the partial-trip level (2-out-of-4 logic with an inhibition capability),

- redundancy at the actuator-command level (2-out-of-2 logic for each group of safeguard actuators).
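These two voting levels can be illustrated with a short sketch. The function and variable names below are invented for illustration, and the logic is a simplification, not the actual SPIN implementation:

```python
def vote_2oo4(partial_trips, inhibited=()):
    """2-out-of-4 partial-trip vote: trip if at least two non-inhibited
    channels demand it. A channel under test can be inhibited, in which
    case the vote degrades to 2-out-of-3 on the remaining channels."""
    active = [t for i, t in enumerate(partial_trips) if i not in inhibited]
    return sum(active) >= 2

def train_command(unit_a_trip, unit_b_trip):
    """2-out-of-2 logic at the actuator-command level: both redundant
    logic units of a safeguard train must agree before the actuators
    are driven."""
    return unit_a_trip and unit_b_trip

# A single spurious partial trip is filtered out by the 2/4 vote.
assert vote_2oo4([True, False, False, False]) is False
# Two genuine partial trips demand the protective action.
assert vote_2oo4([True, True, False, False]) is True
```

The inhibition capability models a channel taken out of service for periodic testing without violating the single failure criterion, which is the role the text assigns to the "2/4 with inhibition" logic.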


Seven functional units in each redundant processing unit handle the acquisition of the measurements from the sensors, the processing, and the 2-out-of-4 vote with inhibition. A functional unit communicates with its three counterpart functional units in the three other redundant processing units through buffer memories and serial transmissions. Good physical and electrical separation is obtained by the use of optical fibres. There are four units dedicated to serial transmission (one for sending and three for receiving) in each redundant unit. Two additional serial transmission units are provided for the links with the centralized control room.

The complete system uses fifty-two microprocessors (MOTOROLA 6800).

5. - CONCLUSION

The distribution concept for protection systems is not yet widely developed, because of the difficulties arising from the particular specifications of these systems. However, the use of new technologies, such as microprocessors and components for serial transmission, leads to new architectures close to those of distributed systems.

REFERENCES

(1) A.B. LONG, Assessment of new instrumentation and control technologies: remote multiplexing, a case study, IAEA-SM-226/76, CANNES (France), 24-28 April 1978.

(2) J.M. GALLAGHER, et al., Design of internal architecture for Westinghouse microprocessor based integrated protection system, IAEA-SM-226/112, CANNES (France), 24-28 April 1978.

(3) J.L. SAVORNIN, et al., Système de Protection Intégré Numérique, IAEA-SM-226/93, CANNES (France), 24-28 April 1978.


[Figure 1 diagram: stations (K), (K+1), ..., (N) in the process area and control-room area, each with a line coupler and application couplers serving sensors and actuators, or a screen and keyboard, all connected by the transmission line.]

Fig. 1: Distributed control system


[Figure 2 diagram: transmission line linking multiplexers and sensors in the Reactor Building and the Auxiliary Building.]

Fig. 2: Distributed protection system


[Figure 3 diagram: sensors 1 to 4 feeding redundant processing stations 1 to 4; logic stations A and B driving the "Train A" and "Train B" actuators; a central station linked to the control room.]

Fig. 3: SPIN


THE OPERATOR'S ROLE AND SAFETY FUNCTIONS

by

W.R. Corcoran, D.J. Finnicum, F.R. Hubbard, III, C.R. Musick and P.F. Walzer

ABSTRACT

A nuclear power plant can be thought of as a single system with two major subsystems: equipment and people. Both play important roles in nuclear safety. Whereas, in the past, the role of equipment had been emphasized in nuclear safety, the accident at Three Mile Island (TMI) and its subsequent investigations point out the vital role of the operator. This paper outlines the operator's roles in nuclear safety and suggests how the concept of safety functions can be used to reduce economic losses and increase safety margins.


INTRODUCTION

TMI demonstrated that nuclear plants are conservatively designed and operated, such that there is only a small probability of an event seriously affecting public health and safety. The TMI plant was pushed beyond its design basis, yet very little radiation was released to the general public. However, TMI also emphasized that the emotional strain and financial losses of the people living in the area around the plant, the electrical power consumers, and the utility owners and managers can be severe. Much investigation and reflection has been done since TMI to improve equipment and operator performance, thus making such events even less likely in the future.

During the course of the Three Mile Island (TMI) event and subsequent investigations, frequent reference was made to the operator's "mindset" during the accident. The inference was that the operator's training and experience had not prepared him to fully recognize the situation that was unfolding in front of him. His "mindset" caused him to ignore or reject certain information that was essential for him to analyze the situation properly and take timely correct action.

The designer's "mindset" of the operator's role in plant safety was also reviewed: designers make assumptions about the operator's role, both during normal plant operation and during plant accidents. This information must be conveyed to the operator in a practical form. The method used to accomplish this has been through the plant operating and emergency procedures guidelines and the safety analysis report, including the technical specifications. However, over the years, the number and size of these procedures have become very large and difficult for an operator to use. Moreover, the safety analysis report was never primarily intended for that purpose, and the technical specifications are seldom considered to be an operational aid.

The intent of this paper is to outline the operator's role in nuclear safety and to introduce the concept of "safety functions." Safety functions are a group of actions that prevent core melt or minimize radiation releases to the general public. They can be used to provide a hierarchy of practical plant protection that an operator should use. This paper focuses on the pressurized water reactor as provided by Combustion Engineering, but is applicable, with formal changes, to other designs and types.

[Figure 1 diagram: the event initiator, the plant design, the initial plant conditions and setup, and the operator action feed the plant safety evaluation, which yields the predicted acceptable results.]

"An accident identical to that at Three Mile Island is not going to happen again," said the Rogovin investigators. The next serious threat to safety will be different from the TMI sequence. To concentrate design, management and operations improvements on the specific sequence at TMI is therefore unwise. The concepts put forward in this paper are intended to help the operator avoid serious consequences from the next unexpected threat.

THE OPERATOR'S ROLE IN NUCLEAR SAFETY

The plant safety evaluation uses four inputs in predicting the results of an event (Figure 1): the event initiator, the plant design, the initial plant conditions and set up, and the operator actions. If any of these inputs are not as assumed in the evaluation, confidence that the consequences will be as predicted is reduced.

Based on the safety evaluation, the operator* has three roles in assuring that the consequences of an event will be no worse than the predicted acceptable results. These three operator roles (Figure 2) are to:

1. keep the plant set up so that it will respond properly to disturbances,

2. operate the plant so as to minimize the likelihood and severity of event initiators and disturbances, and

3. assist in accomplishing safety functions during the event.

Proper execution of these three roles will keep the consequences of design basis events within bounds and will tend to reduce the consequences of events that have not been evaluated.

* In this context, the term operator means primarily the personnel in direct command of the plant, but also includes all members of the utility team that contribute to effective operation.

Fig. 1: Operator's role during an event as assumed in the safety evaluation process
Fig. 2: The operator's roles in nuclear safety


Plant Set Up

In keeping the plant set up to respond properly to adverse events, the operator must treat the plant as a complete system consisting of equipment and people. Therefore, proper plant set up involves both equipment functionability and personnel readiness.

In keeping the plant equipment set up, the operator is guided by the equipment functionability considered in the "Limiting Conditions for Operation" and "Surveillance Requirements" in the technical specifications, as shown in Table I. ANS Standard 58.4 provides guidance for determining the minimum characteristics of a proper plant set up. Although the use of the existing technical specifications has not led to any adverse consequences, there is room for a great deal of improvement. Implementation of ANS Standard 58.4 in developing technical specifications would increase the level of assurance that the plant equipment is set up properly.

Personnel readiness likewise has four considerations, the first of which is the number of people available. If there are not enough operators on hand, those that are

TABLE I

EQUIPMENT FUNCTIONABILITY CONSIDERATIONS

1. Status of equipment: applies to equipment and component operability.
   Examples: A. Number of ECCS pumps operable
             B. Ability of rods to insert into the core within a specified time
             C. Number of diesel generators operable

2. Operating state of equipment: applies to system or component actions, or describes the position or running condition of equipment.
   Examples: A. Valve position
             B. Control rod position
             C. Setpoints of the reactor protection and engineered safety features actuation systems

3. Values of process parameters: applies to flows, temperatures, pressures, etc.
   Examples: A. Reactor coolant specific activity
             B. ECCS tank contents
             C. Thermal and hydraulic condition of the reactor coolant or primary coolant

4. Condition of equipment and structures: applies to the preservation of quality.
   Examples: A. Integrity of fission product barriers
             B. Existence/growth of flaws in components
             C. Monitoring of radiation damage

(Derived from Table 1 of ANS Standard 58.4, Reference 5)
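A check of equipment status against limiting conditions of the kind itemized in Table I can be sketched as follows. The equipment categories and minimum counts are invented for illustration and are not taken from any actual technical specification:

```python
# Hypothetical limiting conditions for operation: each entry states a
# minimum number of operable units for an equipment category.
REQUIRED_OPERABLE = {"ECCS pumps": 2, "diesel generators": 2}

def setup_violations(operable_counts):
    """Return the equipment categories whose operable count is below the
    limiting condition, i.e. where the plant set up is degraded."""
    return [name for name, minimum in REQUIRED_OPERABLE.items()
            if operable_counts.get(name, 0) < minimum]

# With one diesel generator out of service, the set up is flagged.
assert setup_violations({"ECCS pumps": 3, "diesel generators": 1}) == ["diesel generators"]
```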

present will have difficulty in operating and maintaining the plant and in responding to adverse events. A second concern is the locations of the operators and their activities. If the people assigned to the station are not properly positioned, or are involved in activities that distract them from plant operation, they are not ready. The state of mind and body of each operator is the third concern, as his ability to respond to adverse events will be reduced if he is, for example, tired, sick, on medication, distraught or has used alcohol. This should be specifically addressed in shift turnover procedures. The final consideration is his degree of training, specifically, the adequacy of his initial training and the frequency of his retraining. The Institute for Nuclear Power Operations (INPO) is expected to establish programs dealing with these issues.

Fewer, Milder Events

The operator's second role is to minimize the frequency and severity of adverse events.* To fulfill this role, the operator must have a good understanding of the plant and its capabilities, know the operating state of the plant, know the planned changes in the operating state, be aware of plant activities that may affect the operating state of the plant, and always be prepared for an unplanned event.

Take, for example, maintenance, a plant activity that could affect the plant operating state. The operator should ensure that redundant equipment is operating, that backup systems and equipment are ready and properly lined up, and that the plant operating state is consistent with the equipment available to mitigate possible disturbances. In particular, if maintenance were planned on the feedwater system, it may be possible to schedule this maintenance to correspond with other activities that require operation at a lower power where one pump is adequate. That way the system could tolerate the loss of one pump. The objective is to maintain plant safety margins by avoiding unnecessary challenges to plant protection systems. Also, the operator should be prepared for the possible loss of the remaining pump by verifying the operability of the auxiliary feedwater system and being ready to take the appropriate corrective actions if required.

Appendix 1 proposes four categories of practical guidelines that an operator could use in incident prevention. However, such suggestions could be counterproductive if translated into hard and fast rules or applied without thinking. The operator must always evaluate the situation at hand when applying any rules. An example of this is the instruction to plant operators not to let the reactor coolant system "go solid." The purpose of this instruction is to prevent possible overstress of the reactor coolant system piping. At TMI, the operators followed this instruction by turning off the safety injection pumps based on the high level indication in the pressurizer. This was done without considering the reactor coolant system temperature and pressure indications, or asking how the system could be solid when the pressure was low enough for the safety injection system to be actuated. The operators apparently followed a rule without evaluating the situation at hand, and they got into trouble.

* By emphasizing the operator's role in preventing initiating events, we do not minimize the role of hardware. Reference 7 is a recent study highlighting potential advances in this area.

Accomplishment of Safety Functions

Assisting installed equipment in the accomplishment of safety functions is the operator's third role. He needs to monitor the plant to verify that the safety functions are accomplished. In addition, he has to actuate those systems that are not fully automated and intervene where the automatically actuated systems are not operating as intended. There are three prerequisites to the fulfillment of this role:

1) information that identifies the plant state,

2) procedures that cover the situations encountered during events, and

3) comprehensive training to use the information and procedures to best advantage in responding to events.

The concept of safety functions can be used to make significant improvements in each of these items.

SAFETY FUNCTIONS

The operator needs a systematic approach to mitigating the consequences of an event. The concept of the "safety function" introduces that systematic approach and presents a hierarchy of protection. If the operator can quickly identify the initiating event from the symptoms and correct the problem from his event procedures, carrying out the safety functions is implicit. If the operator has difficulty for any reason, the systematic safety function approach allows accomplishing the overall task of mitigating consequences. A safety function is defined as a group of actions that prevent core melt or minimize radiation releases to the general public. Actions may result from automatic or manual actuation of a system (the reactor protection system generates a trip, the operator aligns the shutdown cooling system), from passive system performance (safety injection tanks feed water to the reactor coolant system), or from natural feedback inherent in the plant design (control of reactivity by voiding in the reactor).

There are ten safety functions needed to mitigate events and contain stored radioactivity (Table II). These safety functions may be divided into four classes:

1) Anti-core melt safety functions

2) Containment integrity safety functions

3) Indirect radioactive release safety functions

4) Maintenance of the vital auxiliaries needed to support the other safety functions

The relationship between these classes is shown in Figure 3. The arrows signify that the vital auxiliaries are necessary to support the other safety functions. The plus signs indicate that it is necessary to prevent core melt, maintain containment integrity and control indirect radioactive releases in order to limit the release of radioactivity to the general public. In all safety functions, the word control means accomplishment of the safety function such that core melt is prevented or radioactive releases are kept within acceptable limits. Control involves manual or automatic actuation of equipment, or the natural passive capabilities built into the plant. For example, the preferred method of reactivity control is insertion of the control rods, followed by boration if the shutdown is necessary. However, in the hypothetical case where the control rods do not insert, reactivity control can still be achieved through negative moderator reactivity feedback (natural feedback) followed by boron injection (personnel and equipment action). The systems listed in the vital auxiliaries and indirect radioactive release classes are not exhaustive, and are provided only as an indication of what systems should be considered.
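The grouping of the ten safety functions into these four classes can be restated as a small data structure; this is only a paraphrase of Figure 3 and Table II, not material from the original design documents:

```python
# The four safety function classes of Figure 3, with the ten safety
# functions of Table II grouped under them.
SAFETY_FUNCTION_CLASSES = {
    "anti-core melt": [
        "reactivity control",
        "RCS inventory control",
        "RCS pressure control",
        "core heat removal",
        "RCS heat removal",
    ],
    "containment integrity": [
        "containment isolation",
        "containment temperature and pressure control",
        "combustible gas control",
    ],
    "indirect radioactive release": ["indirect radioactivity release control"],
    "vital auxiliaries": ["maintenance of vital auxiliaries"],
}

# The vital auxiliaries support every other class (the arrows in Figure 3).
SUPPORTED_BY_VITAL_AUXILIARIES = [
    c for c in SAFETY_FUNCTION_CLASSES if c != "vital auxiliaries"
]

# Ten safety functions in all, as stated in the text.
assert sum(len(v) for v in SAFETY_FUNCTION_CLASSES.values()) == 10
```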

The anti-core melt class contains five safety functions: reactivity control, reactor coolant system (RCS) inventory control, RCS pressure control, core heat removal, and RCS heat removal. The purpose of the first anti-core melt safety function, reactivity control, is to shut the reactor down and keep it shut down, thereby reducing the amount of heat generated in the core. Reactivity is controlled in the short term by insertion of the control rods and/or through the natural feedback mechanism of voiding in the reactor coolant. In the long term, reactivity is controlled by the addition of borated water to the reactor coolant system. Borated water can be added to the reactor coolant system using the charging and boric acid addition portions of the chemical and volume control system, the high and low pressure safety injection systems and/or the safety injection tanks. Figure 4 shows some of the systems that can be used to accomplish this function.

TABLE II

SAFETY FUNCTIONS

Reactivity Control: shut the reactor down to reduce heat production

Reactor Coolant System Inventory Control: maintain a coolant medium around the core

Reactor Coolant System Pressure Control: maintain the coolant in the proper state

Core Heat Removal: transfer heat from the core to a coolant

Reactor Coolant System Heat Removal: transfer heat from the core coolant

Containment Isolation: close openings in containment to prevent radiation releases

Containment Temperature and Pressure Control: keep from damaging containment and equipment

Combustible Gas Control: remove and redistribute hydrogen to prevent explosion inside containment

Maintenance of Vital Auxiliaries: maintain operability of systems needed to support safety systems

Indirect Radioactivity Release Control: contain miscellaneous stored radioactivity to protect the public and avoid distracting operators from protection of larger sources

The purpose of the second and third anti-core melt safety functions, reactor coolant system (RCS) pressure and inventory control, is to keep the core covered with an effective coolant medium. RCS pressure control can involve either pressure maintenance or pressure limitation. Likewise, RCS inventory control can involve either inventory maintenance or inventory limitation. Under normal circumstances, RCS pressure and inventory control is maintained automatically by the pressurizer pressure and level control systems in conjunction with the reactor coolant system pressure boundary. These systems use the pressurizer spray valves and the letdown system to control pressure and inventory respectively, and they use the pressurizer heaters and charging system to maintain pressure and inventory respectively. If the pressure and level control systems are unable to limit RCS pressure and inventory, the pressure and inventory can be kept within bounds by action of the primary safety valves. In the event that RCS inventory and/or pressure becomes inappropriately low, due to an opening in the reactor coolant pressure boundary or excessive cooling of the reactor coolant system from excess steam flow, RCS inventory is maintained by injection of borated water by the safety injection system or the safety injection tanks. Figures 5 and 6 are schematics of the PWR showing the execution of these two safety functions.
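In control terms, the normal behaviour described above amounts to a simple banded scheme; the sketch below is a deliberate simplification with invented band parameters, not the actual pressurizer control algorithm:

```python
def pressure_control_action(pressure, low_limit, high_limit):
    """Very simplified pressurizer pressure control: heaters restore a
    low pressure, spray limits a high one, and the primary safety valves
    (not modelled here) bound the pressure if spray cannot."""
    if pressure < low_limit:
        return "energize pressurizer heaters"
    if pressure > high_limit:
        return "open pressurizer spray valves"
    return "hold"

# Illustrative band of 12.0 to 16.0 in arbitrary pressure units.
assert pressure_control_action(10.0, 12.0, 16.0) == "energize pressurizer heaters"
assert pressure_control_action(17.0, 12.0, 16.0) == "open pressurizer spray valves"
```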

The purpose of the fourth anti-core melt safety function, core heat removal, is to remove the heat generated in the core by radioactive decay and transfer it to a point where it can be removed from the RCS, to prevent the fuel from melting. This is accomplished by passing a coolant medium through the core to a heat removal point. Normally, the reactor coolant pumps are used to provide forced reactor coolant flow through the reactor core to the steam generators. In the absence of forced reactor coolant flow, the core can still be cooled by natural circulation induced by a temperature differential from the steam generators to the core. (This implies that the steam generators must be available to act as a heat sink.) If natural circulation cannot be established, heat can be removed from the core by boiling and movement of the steam to a point such that it can be discharged through a break in the reactor coolant system piping. (See Figure 7)

The final anti-core melt safety function is RCS heat removal. The purpose of this safety function is to transfer heat from the core coolant to another heat sink. If this is not done, core heat removal will not be possible. RCS heat removal is normally accomplished by transferring heat from the reactor coolant to the secondary system in the steam generator. The secondary system water is supplied by the main feedwater system or the auxiliary feedwater system. Reactor coolant heat can be transferred to the component cooling water via the shutdown cooling heat exchanger, provided that the reactor coolant system pressure is less than the shutdown cooling system pressure interlock setpoint. If no other heat sink is available, reactor coolant system heat removal can also be accomplished by discharging the hot reactor coolant directly into the containment through a pressure boundary opening of a primary relief valve. (See Figure 8)

[Figure 3 diagram. Anti-core melt: reactivity control, RCS inventory control, RCS pressure control, core heat removal, RCS heat removal. Anti-radioactivity release: containment integrity (isolation, pressure/temperature control, combustible gas control) and indirect radioactive release control (fuel pool cooling, waste processing, spray chemical addition). Maintenance of vital auxiliaries: ultimate heat sink, electric power, component cooling water, instrument air, habitability.]

Fig. 3: Classes of safety functions

Systems to accomplish anti-core melt safety functions:
Fig. 4: Reactivity Control
Fig. 5: RCS Pressure Control
Fig. 6: RCS Inventory Control
Fig. 8: RCS Heat Removal

The foregoing discussion of the five anti-core melt safety functions illustrates that each safety function can be accomplished by a multiplicity of systems and, in addition, many of the systems support more than one safety function. Under some circumstances, the execution of one safety function causes another safety function to be accomplished. Particular methods of accomplishing one safety function sometimes facilitate and sometimes prevent a particular method of accomplishing another safety function. This interaction, or synergy, among the safety functions is an important feature of this concept.

Systems to accomplish containment integrity safety functions:
Fig. 9: Isolation
Fig. 10: Pressure/Temperature Control

The containment integrity safety function class contains three safety functions: containment isolation, containment pressure and temperature control, and combustible gas control. The primary objective of these safety functions is to prevent major radioactive release by maintaining the integrity of the containment structure. Accomplishing the first safety function, containment isolation, assists in maintaining containment integrity by ensuring that all normal containment penetrations are closed off. Containment isolation is accomplished by sensors for measuring containment pressure, electronic equipment to generate and transmit an isolation signal when the containment pressure exceeds a setpoint, and a set of valves for isolating each containment penetration. (These valves are generally part of other systems also.) Each containment penetration is provided with two isolation valves, one inside containment and one outside containment. (See Figure 9)
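The isolation logic just described can be sketched as follows; the setpoint value and helper names are illustrative, not plant data:

```python
PRESSURE_SETPOINT = 5.0  # illustrative value, arbitrary units

def isolation_signal(containment_pressure):
    """Generate the isolation signal when containment pressure exceeds
    the setpoint."""
    return containment_pressure > PRESSURE_SETPOINT

def penetration_isolated(inner_valve_closed, outer_valve_closed):
    """Each penetration has two isolation valves in series, one inside
    and one outside containment; closing either valve isolates the
    path, so a single stuck valve does not defeat the function."""
    return inner_valve_closed or outer_valve_closed

assert isolation_signal(6.2) is True
# Inner valve stuck open, outer valve closed: still isolated.
assert penetration_isolated(False, True) is True
```

The two-valve arrangement is the series-redundancy counterpart of the single failure criterion discussed earlier: one valve failure in a penetration leaves the isolation function intact.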

The purpose of the second containment integrity safety function, containment temperature and pressure control, is to prevent overstress of the containment structure and damage to other equipment from a hostile environment by keeping containment pressure and temperatures within prescribed limits. Containment pressure and temperature are controlled using the containment spray system and the containment cooling system. (See Figure 10)

Fig. 11: Combustible Gas Control

Likewise, combustible gas control, the third contain-ment integrity safety function, is needed to preventcontainment overstress caused by explosion of hydrogengas inside containment. The hydrogen would evolvefrom metal-water reaction in the event of failure of oneor more of the anti-core melt safety functions. Hydrogengas is removed from the containment atmosphere by


the hydrogen recombiners. The containment spray system and the fan coolers can also help in combustible gas control by redistributing the hydrogen gas throughout containment, thus preventing the formation of flammable pockets of hydrogen gas. (See Figure 11)

The third safety function class, indirect radioactive release control, contains only one safety function: control of indirect radioactivity releases. The purpose of this safety function is to prevent radioactive releases from sources outside containment. These sources include the spent fuel pool and the radioactive waste storage facilities (gaseous, solid and liquid, including radioactive coolant). The systems used to control releases from these sources include the radiation monitoring system, the spent fuel pool cooling system and the waste management and processing systems.

The fourth safety function class, maintenance of vital auxiliaries, likewise includes only one safety function: maintenance of the vital auxiliaries. The systems used to accomplish the nine safety functions discussed above are all supported by various auxiliary systems. These auxiliary systems provide such services as instrument air needed for opening and closing valves, electric power for running pump motors and operating instruments, and an ultimate heat sink to which RCS and core heat can be transferred. Vital auxiliaries must be maintained in order to successfully accomplish the other safety functions.

Each anti-core melt safety function has priority relative to the others, as shown in Figure 12. In general, reactivity control is the foremost function because the amount of heat that must be removed from the core is determined by how well this function is accomplished. Next in precedence are those functions for appropriately maintaining a core cooling medium. To achieve this, actions must be accomplished to maintain an adequate reactor coolant system inventory and an appropriate reactor coolant system pressure. Finally, if core heat removal is not carried out, then reactor coolant system heat removal is irrelevant. Not only should the operator keep this hierarchy in mind, but he should also recognize the need for the vital auxiliaries to carry out these safety functions.

[Figure: hierarchy linking reactivity control, RCS pressure control, core heat removal and RCS heat removal, all supported by maintenance of vital auxiliaries.]

Fig. 12: Hierarchy of anti-core melt safety functions
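The precedence ordering described above lends itself to a simple mechanical statement. The following is an illustrative sketch, not from the paper: the list encodes the hierarchy of Figure 12 as described in the text, and the function picks out the highest-precedence safety function currently in jeopardy.

```python
# Anti-core melt safety functions, highest precedence first, as
# described in the text (reactivity control foremost, then the
# core-cooling-medium functions, then the heat removal functions).
PRECEDENCE = [
    "reactivity control",
    "RCS inventory control",
    "RCS pressure control",
    "core heat removal",
    "RCS heat removal",
]

def next_function_to_address(in_jeopardy):
    """Return the highest-precedence safety function in jeopardy."""
    for fn in PRECEDENCE:
        if fn in in_jeopardy:
            return fn
    return None  # no safety function is currently threatened
```

This does not imply that lower-precedence functions are unimportant; it only orders the operator's attention, as the paper emphasizes.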

Multiple Success Paths

Nuclear power plants are designed so that there are two or more ways that can potentially be used to accomplish safety functions. That is, for each safety function there are several possible success paths. Table 3 lists those systems and subsystems in the plant which are typical of those in success paths which accomplish the anti-core melt safety functions. The success paths shown here are some of the possible success paths which apply to various events. In general, the effectiveness of a particular success path for accomplishing a safety function depends upon what systems are operable in the plant and on whether or not the process variables are within the design range of the particular system or subsystem that will be used. In other words, the method of accomplishing a safety function depends on the plant state at the time the function is to be executed. This is the state that exists at the time of an event, as affected by the event and by operator and system actions.

To accomplish the safety functions, the operator does not need to know what event has occurred. He does, however, need to know what safety functions must be accomplished, what success paths are possible and the conditions of the plant. This information defines the state of the process variables and the state of the plant equipment. The plant state can be correlated to the appropriateness and availability of the various success paths for a given safety function. Table 4 describes the state for three possible core heat removal success paths, and Table 5 identifies some of the systems needed to accomplish the anti-core melt safety functions for three events: turbine trip, small reactor coolant system pipe break and large reactor coolant system pipe break. Both tables demonstrate the dependence of the method of accomplishing a safety function upon the conditions which exist at the time the function is to be executed. To reword, the means of mitigating an event depend on the plant state produced by the event.
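The plant-state dependence of success path availability can be illustrated with a small sketch (ours, with invented condition flags) patterned on the core heat removal entries of Table 4: a path is offered only when every one of its plant-state conditions holds.

```python
# Conditions under which each core heat removal success path applies,
# paraphrasing Table 4.  Flag names are invented for illustration.
PATH_CONDITIONS = {
    1: {"rcs_intact", "rcp_power", "rcps_intact", "subcooled", "rcp_aux_ok"},
    2: {"rcs_intact", "subcooled", "sg_intact"},
    3: {"rcs_intact", "saturated", "sg_intact"},
}

def applicable_paths(plant_state):
    """plant_state: the set of condition flags that currently hold.
    A path applies only if all of its conditions are satisfied."""
    return [path for path, conditions in PATH_CONDITIONS.items()
            if conditions <= plant_state]
```

For example, a plant with an intact RCS, subcooled coolant and an intact steam generator, but without reactor coolant pump power, would be offered path 2 (natural circulation) rather than path 1 (forced circulation).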

Combustion Engineering currently uses Sequence of Events Diagrams [9] to represent the success paths available to mitigate an event. Sequence of Events Diagrams are intended to illustrate the possible ways to accomplish each safety function challenged during a particular event. Figure 13 shows a portion of a typical Sequence of Events Diagram. These diagrams present the success paths with the appropriate system actions. Both automatic and manually actuated system actions are shown. Sequence


TABLE III

TYPICAL SUCCESS PATHS FOR THE ANTI-CORE MELT SAFETY FUNCTIONS

Reactivity Control:
1) Control Element Drive Mechanism Control System, Control Element Assemblies, Motor Generator Sets, Chemical and Volume Control System (charging and letdown), Refueling Water Tank
2) Reactor Protection System, Reactor Trip Switchgear, Control Element Assemblies, Chemical and Volume Control System, Boric Acid Makeup Tank
3) Reactor Protection System, Reactor Trip Switchgear, Control Element Assemblies, Engineered Safety Features Actuation System, Safety Injection System, Refueling Water Tank
4) Voiding, Engineered Safety Features Actuation System, Safety Injection Systems, Refueling Water Tank

Reactor Coolant System Pressure Control:
1) Pressurizer Pressure Control System, Pressurizer Spray Valves, Pressurizer Heaters, Reactor Coolant Pumps
2) Primary Safety Valves, Auxiliary Spray Valves, Chemical and Volume Control System, Refueling Water Tank

Reactor Coolant System Inventory Control:
1) Pressurizer Level Control System, Chemical and Volume Control System (letdown and charging)
2) Engineered Safety Features Actuation System, Safety Injection System, Refueling Water Tank

Core Heat Removal:
1) Reactor Coolant Pumps, Steam Generators, Shutdown Cooling System
2) Steam Generators (and RCS heat removal) [subcooled natural circulation], Shutdown Cooling System
3) Steam Generators (and RCS heat removal) [reflux heat transfer natural circulation], Shutdown Cooling System

Reactor Coolant System Heat Removal:
1) Main Feedwater System, Turbine Generator Control System, Condenser, Steam Bypass Control System, Turbine Bypass Valves, Shutdown Cooling Heat Exchanger (component cooling water side)
2) Main Feedwater System (runback), Turbine Generator Control System, Auxiliary Feedwater System, Steam Bypass Control System, Turbine Bypass Valves, Condenser, Shutdown Cooling Heat Exchanger (component cooling water side)
3) Main Feedwater System (runback), Turbine Generator Control System, Auxiliary Feedwater System, Atmospheric Steam Dump Valves, Shutdown Cooling Heat Exchanger (component cooling water side)

*Vital auxiliaries are omitted from the list.

TABLE IV

CORE HEAT REMOVAL SUCCESS PATHS FOR VARIOUS PLANT STATES

Each entry gives examples of the initial plant state for which the core heat removal success path applies immediately following event initiation.

Success Path 1: Reactor coolant system intact; power available to reactor coolant pumps; reactor coolant pumps intact; reactor coolant system subcooled; reactor coolant pump auxiliary systems functional.

Success Path 2: Reactor coolant system intact; reactor coolant subcooled; steam generator intact.

Success Path 3: Reactor coolant system intact; reactor coolant saturated; steam generator intact.


TABLE V

TYPICAL SYSTEMS USED TO ACCOMPLISH ANTI-CORE MELT SAFETY FUNCTIONS FOR THREE EVENTS

Event: Turbine trip
- Reactivity Control: Control Element Assemblies, Chemical and Volume Control System, Boric Acid Makeup Tank
- RCS Pressure Control: Pressurizer heaters, pressurizer sprays
- RCS Inventory Control: Chemical and Volume Control System (charging, letdown)
- Core Heat Removal: Reactor Coolant Pumps, Steam Generators, Shutdown Cooling System
- RCS Heat Removal: Main Feedwater System, Turbine Bypass System, Condenser, Turbine Generator Control System (trip)
- Vital Auxiliaries: Non-Emergency AC Power, Instrument Air, Component Cooling Water, Non-Emergency DC Power

Event: Small RCS pipe break with loss of off-site power as a result of turbine trip
- Reactivity Control: Control Element Assemblies, Safety Injection System, Refueling Water Tank
- RCS Inventory Control: Safety Injection System, Refueling Water Tank, Containment Sump
- Core Heat Removal: Natural circulation, Steam Generators, Shutdown Cooling System
- RCS Heat Removal: Auxiliary Feedwater System, Atmospheric Steam Dump System, Shutdown Cooling System, Turbine Generator Control System (trip)
- Vital Auxiliaries: Diesel Generators, Component Cooling Water, Nuclear Service Water, Emergency DC Power

Event: Large RCS pipe break with loss of off-site power as a result of turbine trip
- Reactivity Control: (Voids), Safety Injection System, Refueling Water Tank
- RCS Inventory Control: Safety Injection System, Refueling Water Tank, Safety Injection Tanks, Containment Sump
- Core Heat Removal: (Boiling)
- RCS Heat Removal: (Break)
- Vital Auxiliaries: Diesel Generators, Emergency DC Power

[Figure: portion of a sequence of events diagram for the RCS pressure control safety function. On high pressurizer pressure, the pressurizer spray valves (trains A and B) and auxiliary spray valves (trains A and B) spray to control the pressure increase, and the power operated relief valves (trains A and B) open to limit the pressure increase. On low pressurizer pressure, the pressurizer heaters (proportional and backup) heat to control the pressure decrease, and the charging pumps of the Chemical and Volume Control System (pumps A, B and C) increase pressurizer level to increase pressure. Links lead to the other safety functions.]

Fig. 13: Sample section of sequence of events diagram


of Events Analysis is currently required by the NRC requirements for Safety Analysis Reports [10], and Combustion Engineering has used Sequence of Events Analyses as a design review tool for San Onofre Units 2 & 3, Forked River, St. Lucie Unit 2 and for our System 80 Standard Safety Analysis Report [11, 12, 13]. In performing this work, it has been found that this type of technique is essential for the understanding of how the operator and plant systems work together to mitigate the consequences of events.

Use of the Safety Function Concept to Assist the Operator

The safety function concept, which incorporates the principles of safety function hierarchy and multiple success paths dependent on the plant state, can help the operator fulfill his role of assisting the plant systems to mitigate the consequences of an event. In order to assist in accomplishing the safety functions he needs the following:

1) Sufficient and intelligible information about the plant state

2) Comprehensive procedures prescribing preferred and alternate success paths for each safety function

3) Adequate training in the concept and execution of safety functions

The rectangles in Figure 14 define the judgments needed to be made during an event. The information needed includes the plant state produced by the event, which leads to identification of the safety functions in jeopardy and of the systems available to accomplish the safety functions. It is necessary to consider these information needs in the design of the control room displays [14, 15]. (Note: The operator does not need to know the initiating event as long as he can determine the plant state and therefore determine the safety functions in jeopardy.)

[Figure: operator judgments during an event. From the plant process conditions and the initiating event, the operator identifies the safety functions in jeopardy and the threatened success paths. If the principal success path is not being executed, he determines how to restore it; otherwise he assesses the availability of alternate success paths and how to select one.]

Fig. 14: Operator information needs during crisis

Currently, an operator responds to an event by following one of the event-specific emergency procedures, of which there are N. If none of the event-specific procedures apply, he resorts to an unwritten (N+1)st (pronounced "N plus first") procedure. Simply stated, he does what he thinks he should to mitigate the event. The safety function concept, together with making the procedures plant-state dependent, can be used to improve the N current emergency procedures, as suggested in Reference 16. In addition, these approaches can be used to develop a documented version of the (N+1)st procedure.

Assuming the use of the safety function concept, Figure 15 summarizes operator actions during an event. Using the safety function concept, the individual procedures can be standardized. A typical procedure would identify, for a given set of plant symptoms, what safety functions must be accomplished, what automatic systems are available to accomplish them, which backup systems must be actuated if the automatic systems fail, and what the expected plant response is. Likewise, the safety function concept can be used to handle, in a single procedure, events which produce similar plant states.

The operator's actions during an event depend on the safety functions which need to be accomplished and the success paths which can be used. The operator determines this by the symptoms, not by knowing what specific event is taking place. For the situation where either none of the previously developed procedures apply or the plant did not respond as expected, the (N+1)st procedure would provide the operator with both a set of guidelines to identify the safety functions in jeopardy and the success paths available, and a checklist for assuring that all safety functions are accomplished. All N+1 procedures should reflect the safety function concept. The main benefit of this approach is that guidance will be provided for all eventualities.
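The checklist role of the (N+1)st procedure can be sketched as follows. This is our formulation, not the paper's; the data structures are invented for illustration. The operator works from symptoms to jeopardized safety functions and flags any function for which no success path is currently available.

```python
def nplus1_checklist(functions_in_jeopardy, available_paths):
    """Return the safety functions that have no usable success path.

    functions_in_jeopardy: safety functions identified from symptoms.
    available_paths: dict mapping each safety function to the list of
    success paths currently available for it.
    An empty result means every jeopardized function is covered.
    """
    return [fn for fn in functions_in_jeopardy
            if not available_paths.get(fn)]
```

Any function returned by the checklist demands immediate operator attention, regardless of which initiating event produced the plant state.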

In addition to the safety function concept, the emergency procedures should incorporate some common sense priorities for the operator. Establishing operator priorities, such as those in Table 6, suggested by Dr. E. L. Zebroski of NSAC, helps the operator avoid overlooking important tasks because of his attention to less important ones.

In structuring operator training, the safety function concept is meaningful because it contributes to a more comprehensive awareness of how the plant functions as a unit and how the various systems work together

TABLE VI

OPERATOR PREVENTION PRIORITIES

PREVENT MAJOR RADIATION RELEASE/CORE MELT
PREVENT MAJOR EQUIPMENT DAMAGE
PREVENT MINOR EQUIPMENT DAMAGE
PREVENT ADMINISTRATIVE VIOLATIONS


Fig. 15: Operator action during an event

to accomplish each safety function. Not only will this awareness help the operator mitigate the consequences of an event, but it will help him set up and operate the plant in such a manner that the frequency and severity of the initiating events will be reduced.

Other Safety Function Uses

In addition to the described applications, the safety function concept can be productively applied elsewhere. In particular, this concept is useful in the design of nuclear plant systems and sub-systems, for providing meaningful and complete information in the event of an emergency, and for the evaluation of past experience.

In addition to the sequence of events analysis, fault tree analysis is an analytic technique based on the safety function concept used at Combustion Engineering. The top element of the fault tree is a threat to a specific safety function. Each minimum cut set of the fault tree defines the component and subsystem failures, as well as personnel errors, which must occur to threaten the safety function.
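The minimal cut set idea can be stated compactly. The sketch below is an illustration under assumed failure names, not equipment from any actual fault tree: a safety function is threatened exactly when every failure in at least one minimal cut set has occurred.

```python
# Hypothetical minimal cut sets for one safety function: the function
# is threatened if both pumps fail, or if DC power alone is lost.
CUT_SETS = [
    {"pump_A_fails", "pump_B_fails"},
    {"dc_power_lost"},
]

def function_threatened(observed_failures):
    """True if all members of some minimal cut set have occurred."""
    return any(cut_set <= observed_failures for cut_set in CUT_SETS)
```

A single pump failure leaves the function intact here, which is exactly the loss-of-redundancy condition that the experience evaluation hierarchy described below still flags for attention.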

If an emergency should occur, the safety function concept can be used for describing the status of the plant throughout the event. This is an appropriate format for the plant operators, shift supervisor, technical advisors, emergency response personnel, and the general public. In fact, effective media notifications could be based on a description of the initiating event and the condition of each of the safety functions. This approach is credible because it is complete and understandable.

The safety function concept is beneficial in the review of past experience. At Combustion Engineering, the Availability Data Program is the activity which collects experience data from operating plants. The importance of occurrences is evaluated with respect to their effects on safety function maintenance. A hierarchy of importance is assigned to effects such as:

1) loss of a safety function, whether or not that safety function was needed to mitigate an event,

2) unavailability of either a preferred or alternate success path,

3) actual challenge to an alternate success path,

4) multiple failures within safety function success paths,

5) loss of redundancy or reduced redundancy within a success path.

The impacts on safety functions are then used to determine the appropriate corrective actions. Combustion Engineering is currently working with NSAC in their Significant Event Evaluation Program. The above methodology is being applied to the generic evaluation of Licensee Event Reports.

SUMMARY

The operator's three roles in nuclear plant safety are to:

1) Set up the plant to respond properly to adverse events.

2) Operate the plant so as to minimize the frequency and severity of adverse events.

3) Assist in mitigating the consequences of adverse events.

The primary considerations for plant set up are equipment functionability and personnel readiness. The primary considerations for incident prevention are equipment, actions, attitudes, and management. The primary considerations for event mitigation are information, procedures, and training. The operator, with his current level of training, can fulfill his first two roles by following the technical specifications, the operating procedures, and a set of common sense guidelines. In mitigating an event, the operator can use the concept of safety functions to be better able to fulfill his third role.

A safety function is defined as something that must be accomplished to prevent core melt or to minimize radiation releases. Safety functions can be divided into four major classes:

1) Anti-core melt safety functions

2) Anti-radioactivity release safety functions

3) Indirect radioactive release control safety functions

4) Maintenance of vital auxiliaries

There are ten safety functions. Safety functions should be viewed as having a certain priority, i.e., some safety functions should have precedence over others in the operator's mind. This does not mean that any safety functions are unimportant. All safety functions are important. Nor does it mean that maintenance of one safety function will inherently carry out an arbitrary other safety function. All safety functions must be carried out. There are several possible success paths for accomplishing each safety function. The availability or appropriateness of a given success path for mitigating an event depends on the existing plant conditions.

[Figure: safety functions shown at the center, linked to regulatory/public safety, utility management, the technical support center, experience evaluation, procedures, operator roles, and control room information.]

Fig. 16: Central role of safety functions

Figure 16 shows the central position of the safety function concept in many areas. This concept is already being applied in several of these areas. For example, the safety function concept is being used in operator training, to structure emergency procedures, to design nuclear plant systems, and to evaluate past operating experiences. It can be used to develop a procedure for unforeseen situations, to design better control room displays, and to format media information about an event.

ACKNOWLEDGEMENT

The general content of this paper has been presented orally to a wide cross section of the nuclear industry over the past six months. Specifically, presentations have been made to operators of several operating nuclear power plants; the Committee on Power Design, Construction and Operation and the Post-TMI Policy Committee of the Atomic Industrial Forum, and the latter's Subcommittee on Safety Criteria; Nuclear Safety Analysis Center personnel; the Smart Instrumentation Panel Working Group of the January 15-17, 1980 NRC/IEEE Working Conference on Advanced Electrotechnology Applications to Nuclear Power Plants; and several groups of Combustion Engineering personnel. Many constructive comments have been received from a wide variety of sources throughout this process. The authors acknowledge and appreciate this input.

In addition, the authors would like to acknowledge W. E. Abbott, F. Bevilacqua, J. C. Braun, C. Ferguson, W. J. Gill, J. Gorski, R. M. Hartranft, J. J. Herbst, A. C. Klein, S. M. Metzler, J. P. Pasquenza, M. F. Reisinger, F. J. Safryn, C. F. Sears, E. I. Trapp, and R. S. Turk for their suggestions during the review of this paper. We would also like to thank Mr. S. E. Morrey for his work in preparing the graphics for this paper. Dr. C. L. Kling was particularly helpful in the early formulation of these concepts. The support and encouragement of Mr. Ruble A. Thomas of Southern Company Services and Drs. Robert J. Breen and Edward L. Zebroski of NSAC were important to this work.


REFERENCES

1. "Report of the President's Commission on the Accident at Three Mile Island," John G. Kemeny, Chairman, Washington, D.C., October 1979.

2. "Three Mile Island: A Report to the Commissioners and to the Public," Nuclear Regulatory Commission Special Inquiry Group, Mitchell Rogovin, Director, January 1980.

3. U.S. Nuclear Regulatory Commission, "TMI-2 Lessons Learned Task Force Status Report and Short-Term Recommendations," USNRC Report NUREG-0578, July 1979.

4. U.S. Nuclear Regulatory Commission, "TMI-2 Lessons Learned Task Force Final Report," USNRC Report NUREG-0585, October 1979.

5. American Nuclear Society, "Criteria for Technical Specifications for Nuclear Power Plants," ANSI/ANS-58.4-1979, January 1979.

6. J. MAFFRE, "INPO Management Team Set, Work Starts on Initial Goals," Nuclear Industry, 27 (1): 12, January 1980.

7. "Component Failures at Pressurized Water Reactors, Final Report," Combustion Engineering, Inc., REISINGER, M. F., Program Manager, Sandia Contract No. 13-6442, to be published.

8. Combustion Engineering, Inc., "ATWS Early Verification, Response to NRC Letter of February 15, 1979, for Combustion Engineering NSSS's," CENPD-263, November 1979, Section 2.2.

9. FORTNEY, R. A.; SNEDEKER, J. T.; HOWARD, J. E.; LARSON, W. W.; "Safety Function and Protection Sequence Analysis," presented at the American Nuclear Society Winter Meeting, November 1973.

10. U.S. Nuclear Regulatory Commission, "Standard Format and Content of Safety Analysis Reports for Nuclear Power Plants," Regulatory Guide 1.70, Rev. 02.

11. "Analysis for San Onofre Nuclear Generating Stations 2 & 3," S-SEA-05GA, November 1977, Combustion Engineering, Inc.

12. St. Lucie Unit 2 Final Safety Analysis Report, Chapter 15, to be published.

13. CESSAR-F, Combustion Engineering System 80 Standard PWR Nuclear Steam Supply System Safety Analysis Report, Chapter 15, STN50-470F, December 1979.

14. American Nuclear Society, "Functional Requirements for Accident Monitoring in a Nuclear Power Generating Station," ANS-4.5, Draft 4, November 1979.

15. U.S. Nuclear Regulatory Commission, "Instrumentation for Light-Water-Cooled Nuclear Power Plants to Assess Plant and Environs Conditions During and Following an Accident," Draft Regulatory Guide 1.97, Rev. 02, December 1979.

16. HUBBARD III, F. R.; JAQUITH, R. E.; O'NEILL, R. P.; "Preparation of Comprehensive Emergency Procedure Guidelines," to be presented at the 1980 ANS Topical Meeting on Thermal Reactor Safety, April 1980.


APPENDIX 1

OPERATOR GUIDELINES FOR INCIDENT PREVENTION

GUIDELINES FOR THE FOUR INDIVIDUAL PREVENTION CATEGORIES

CATEGORY GUIDELINES

Equipment

1. Know It
   Operate within your established guidelines
   Be aware of all plant limitations
   Be aware of changes in background noise

2. Take Care of It
   Minimize cycling
   Correct all potential safety and fire hazards

3. Check It
   Check motors and controllers for excessive heat during tours
   Be aware of all maintenance in progress and anticipate what could go wrong
   Be aware of safety system availability

4. Believe It
   Believe your indications
   Don't override interlocks
   Don't turn off actuated safety systems without confirming they are not needed

Actions

1. Procedures
   Follow the procedures
   Don't take short cuts
   Use only authorized procedures
   Observe safety precautions
   Follow emergency procedures
   Don't actuate interlocks deliberately

2. Operating Bands
   Monitor all indicators frequently
   Investigate all out-of-specification conditions
   Make frequent tours of accessible spaces
   Review plant chemistry to determine trends

3. Changes and Adjustments
   Anticipate and react to the effects of transients on plant chemistry
   Anticipate effects of all adjustments; follow up

Attitudes

1. Be Alert
   Be physically ready
   Be aware of the board
   Avoid distractions that interfere with the operation of the plant
   Use hearing, seeing, and feeling; have "the picture"

2. Be Suspicious
   Expect things to go wrong
   Consider the consequences before taking any actions that could affect plant operations
   If the expected doesn't occur, be suspicious, stop, and investigate
   When in doubt, ask questions
   Use common sense; procedures may not always fit the given situation
   Read records back to your last shift on duty
   Evaluate readings by comparing primary and backup indications
   Confirm indications by inference from diverse indications
   Don't count on luck
   Back up your buddy; check on him
   If you don't know what action to take, hands off, step back, and reevaluate the situation
   Don't buy a pig in a poke; know what you are taking over

4. Have Foresight
   Know what is going to happen on your shift
   Prepare for the shift plans: people/hardware/procedures/adverse weather
   Carefully plan abnormal or infrequent evolutions
   Know your immediate actions for fault situations

5. Be Formal
   Use concise communications at all times
   Give orders clearly and unambiguously
   Use standard phraseology
   Insist on feedback, both on the order and on its execution
   Formally change the shift

6. Encourage Safe Practices
   Acknowledge information reported
   Thank people for asking questions


Management

1. Report All Out-of-Spec Conditions and Reasons, if Known
   Never accept alarms or equipment problems as normal events
   Look for trends when reviewing logs

2. Keep Your Supervisors/Managers Informed

3. Record All Changes to Plant/Equipment Operating Conditions

4. Maintain Proper Plant Chemistry

5. Before Assuming the Shift, Ensure You Know the Status of Your Equipment Fully

6. Don't Let the Abnormal Become the Normal

7. Keep Junk Out of the Control Room (Animal, Vegetable, or Mineral)


Fail-Safe Design Criteria for Computer-Based Reactor Protection Systems

by

A B KEATS - UKAEA, Winfrith

SUMMARY

The increasing size and complexity of nuclear power plants is accompanied by an increase in the quantity and complexity of the instrumentation required for their control and protection. This trend provides a strong incentive for using on-line computers rather than individual dedicated instruments as the basis of the control and protection systems. In many industrial control and instrumentation applications, on-line computers using multiplexed sampled data are already well established, but their application to nuclear reactor protection systems requires special measures to satisfy the very high reliability which is demanded in the interests of safety and availability. Some existing codes of practice, relating to segregation of replicated subsystems, continue to be applicable and will exert a strong influence upon the way in which the computer system is configured. Their application leads to division of the computer functions into two distinct parts. The first function is the implementation of trip algorithms, ie the equivalent of the function of the 'trip units' in a conventional instrumentation scheme. The first computer is therefore referred to as the Trip Algorithm Computer (TAC), which may, incidentally, also control the multiplexer. The second function is voting, on each group of inputs, of the status (healthy or tripped) yielded by the trip algorithm computers. This function, equivalent to the protection system logic, is performed by the Vote Algorithm Computer (VAC). Whilst the configuration and partitioning of the computer-based protection system tend to be dictated by existing codes of practice, the conceptual disparities between traditional hardwired reactor-protection systems and those employing computers give rise to a need for some new criteria. An important objective of these criteria is to eliminate, or at least to minimise, the need for a failure-mode-and-effect analysis (FMEA) of the computer software. This demands some well-defined but simple constraints upon the way in which data are stored in the computers, but the objective is achieved almost entirely by "hardware" properties of the system. The first of these is the systematic use of hardwired test inputs which cause excursions of the trip algorithms into the tripped state in a uniquely ordered but easily recognisable sequence. The second is the use of hardwired "pattern recognition logic" which generates a dynamic "healthy" stimulus for the shutdown actuators only in response to the unique sequence generated by the hardwired input signal pattern. It therefore detects abnormal states of any of the system inputs, as well as software errors, wiring errors and hardware failures. This hardwired logic is conceptually simple, is fail-safe, and is amenable to simple FMEA. The adoption of the proposed design criteria ensures not only failure-to-safety in the hardware but also the elimination, or at least minimisation, of the dependence on the correct functioning of the computer software for the safety of the system.


Fail-Safe Design Criteria for Computer-Based Reactor Protection Systems

by

A B KEATS

1. Introduction

The increasing size and complexity of nuclear power plants is accompanied by an increase in the quantity and complexity of the instrumentation required for their control and protection. This trend provides a strong incentive for using on-line computers rather than individual dedicated instruments as the basis of the control and protection systems. The potential advantages are four-fold. Firstly, the computer is capable of implementing more complex signal-processing algorithms. Secondly, because the time-shared computer hardware replaces many dedicated signal-processing instruments, less equipment will be used and therefore a lower failure rate can be expected. Thirdly, because less equipment will be required, the cost will be lower. Fourthly, a cost and reliability advantage may also be expected from the use of remote data-sampling and multiplexing, which reduces the number of wires and connectors required between the plant measurement transducers and the centralised computer hardware. In many industrial control and instrumentation applications, on-line computers using multiplexed sampled data are already well established, but their application to nuclear reactor protection systems requires special measures to satisfy the very high reliability which is demanded in the interests of safety and availability.

In conventional protection systems employing analogue trip units and hardwired logic, design criteria for satisfying performance requirements (speed, accuracy, spurious trip rate etc) are well established and the failure modes are well understood. The availability (spurious trip rate) and safety requirements (fractional dead-time) are satisfied by the use of redundancy; common-mode failures are avoided by segregation and diversity. Codes of practice already exist in this field (References 1, 2 and 3) and techniques for ensuring that practically all failures result in safe action (failure-to-safety) are well proven and are applied on a routine basis. Protection systems employing digital computers can also be designed to satisfy given performance requirements, but their failure modes are far more complex than in hardwired analogue and logic systems and, as demonstrated below, potentially unsafe conditions can occur unless adequate precautions are taken. Some of the requirements can be met by the above codes of practice, but some new techniques are required because of the conceptual disparities between hardwired and computer-based systems. The design criteria proposed in this paper are an extrapolation of the fail-safe mode of operation used in the UK in hardwired reactor-protection systems (References 4 and 5). This is achieved by making the "operational" condition of the reactor dependent upon an "energetic" state of the protection system components. In the shutdown state, the system components relax to a less-energetic state. As most component failures cause relaxation to a less-energetic state, the more probable (preferred) mode of failure is to the shutdown (safe) state. This objective can be achieved in a computer-based system by exploiting the inherently dynamic (ie, energetic) property of the data-sampling and multiplexing processes.
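The energetic, fail-safe principle can be caricatured in a few lines. This is an illustrative sketch, not the paper's implementation: the shutdown actuator is held in the operational state only by a continuously refreshed stimulus, so loss of the stimulus, the most probable failure mode, relaxes the system to the safe (shutdown) state.

```python
class ShutdownActuator:
    """De-energize-to-trip model: silence from the protection system
    is indistinguishable from a demand for shutdown.  The timeout of
    3 cycles is an invented illustration parameter."""

    def __init__(self, timeout_cycles=3):
        self.timeout = timeout_cycles
        self.cycles_since_pulse = 0

    def pulse(self):
        """Fresh 'energetic' stimulus from a healthy protection system."""
        self.cycles_since_pulse = 0

    def tick(self):
        """One time step elapses, with or without a stimulus."""
        self.cycles_since_pulse += 1

    @property
    def reactor_operational(self):
        # The operational state must be actively maintained; any fault
        # that stops the pulses relaxes the actuator to shutdown.
        return self.cycles_since_pulse < self.timeout
```

A static "healthy" level would not have this property: a stuck output would hold the reactor operational indefinitely, which is exactly the failure mode the dynamic stimulus is designed to exclude.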


An important objective of the proposed design criteria is to eliminate, or at least to minimise, the need for a failure-mode-and-effect-analysis (FMEA) of the computer software. This demands some well-defined but simple constraints upon the way in which data are stored in the computers, but the objective is achieved almost entirely by "hardware" properties of the system. The first of these is the systematic use of hardwired test inputs which cause transient excursions into the tripped state in a uniquely ordered but easily recognisable sequence. The second is the use of hardwired "pattern recognition logic" which generates a dynamic "healthy" stimulus for the shutdown actuators only in response to the unique sequence formed by the hardwired input signal pattern. It therefore detects abnormal states of any of the system inputs, or software errors, wiring errors and hardware failures. This hardwired logic is conceptually simple, is fail-safe, and is amenable to simple FMEA.

Whilst these techniques eliminate the safety-related connotations of software errors by ensuring that such errors lead to a safe state, the overall system availability can nevertheless be enhanced by some of the well-disciplined methods of software production which have been evolved in wider fields of application of fault-tolerant computing (Reference 6).

2. The Computer-Based Protection System Configuration

Replication (or redundancy), coupled with majority voting, is necessary in computer-based protection systems for the same reasons as it is necessary in conventional hardwired protection systems, ie to achieve a specified overall system availability and fractional deadtime. The degree of redundancy will depend upon the expected system failure rates and repair times. Some existing codes of practice, relating to segregation of replicated subsystems, continue to be applicable and will exert a strong influence upon the way in which the computer system is configured. Their application leads to division of the computer functions into two distinct parts shown in Figure 1. The first function is the implementation of trip algorithms, ie the equivalent of the function of the 'trip units' in a conventional instrumentation scheme. The first computer is therefore referred to as the Trip Algorithm Computer (TAC), which may, incidentally, also control the multiplexer. The second function is voting, on each group of inputs, on the status (healthy or tripped) yielded by the trip algorithm computers. This function, equivalent to the protection system logic, is performed by the Vote Algorithm Computer (VAC). This configuration and the modus operandi which follow are relevant to any type of digital computer. However, in most protection system applications, the TACs and VACs will be microprocessors dedicated to those specific tasks with their programs securely stored in read-only-memories (ROMs) and with strict control maintained over access to them.

To maintain segregation of the replicated safety channels up to the stage where they are combined by majority voting, each channel of a group monitoring any one reactor parameter, eg neutron flux, must be sampled by a separate multiplexer and processed by a separate trip algorithm computer. It follows from this that the degree of replication provided for the multiplexers must be at least the same as that provided for the sensors. Furthermore there must be at least one TAC to process the output of each multiplexer. A minimum redundant configuration of multiplexers and TACs is shown in Figure 1 for triple redundancy of the measurement sensors. This ensures that a failure of a single multiplexer or TAC affects only one measurement of any one parameter and does not constitute a common-mode failure.


Transmission of the status (healthy or tripped) outputs of the TACs to the Vote Algorithm Computers (VACs) must be via unidirectional, electrically isolated paths in order to prevent the possibility of a failure within a VAC being propagated to all the TACs and becoming a common-mode failure. This electrical isolation can be achieved by using optical coupler units or, preferably, fibre optic cables.

As pointed out above, redundancy of hardware is normally essential to satisfy the reliability requirements. The degree of redundancy of the VACs is not necessarily the same as that provided for the earlier parts of the system. Figure 1 shows triple redundancy of the VACs by way of example, although clearly higher degrees of redundancy may be required to satisfy given reliability criteria. The form of the final voting logic (guard line voting) will in turn be influenced by the number and arrangement of the shutdown actuators. The final voting may, for example, be implemented on contactors which precede the shutdown mechanisms or on multiple inputs to the shutdown actuators themselves.
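The voting rule applied to each group of TAC statuses can be made concrete with a minimal sketch. The 2-out-of-3 arrangement below is an assumed example for the triple-redundant configuration of Figure 1; the paper itself leaves the degree of redundancy open, and the function name is invented for illustration:

```python
def vote_2oo3(statuses):
    """Majority-vote three trip statuses (True = healthy, False = tripped).

    The reactor is permitted to run only if at least two of the three
    redundant channels report healthy, so a single failed channel causes
    neither a spurious trip nor a loss of protection.
    """
    if len(statuses) != 3:
        raise ValueError("expected exactly three channel statuses")
    return sum(statuses) >= 2

# One tripped channel out of three: overall status remains healthy.
assert vote_2oo3([True, True, False])
# Two tripped channels: the vote demands shutdown.
assert not vote_2oo3([True, False, False])
```

This is why a single multiplexer or TAC failure, confined to one channel by segregation, does not by itself disturb reactor operation.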

3. Potential Failure Modes Peculiar to Multiplexed Computer-Based Systems

The fundamental difference between a traditional reactor protection system, comprising individual dedicated trip instruments combined by hardwired logic, and one using computers, is that the latter operates on sampled data using a common central processing unit (the CPU) in a time-sharing mode. This means that instead of each of the measurement transducers being continuously connected to a dedicated signal processing instrument, they are connected sequentially to the common processor by a multiplexer which samples each of them in turn. The time-division-multiplexed sampled-data is normally converted to digital form by an analogue-to-digital converter (ADC) before passing into the common processor. The common processor normally contains a store in which the current value of each input is memorised during the time interval between consecutive sampling instants. Multiplexed sampled-data systems of this type are widely used in process plant data acquisition and control systems. There are however certain potential modes of failure of the data acquisition hardware which, in the context of reactor protection systems, must be rendered "safe" and self-revealing. They are:-

(1) Failure of one or more of the multiplexer address bits to change state (ie stuck-at-1 or stuck-at-0 faults), which causes the multiplexer to repeatedly sample a limited subset of the full input address range.

(2) Complete stoppage of the multiplexer, which causes the memory to retain the last set of values stored prior to the fault.

(3) Limited or complete failure of any part of the common, time-shared signal path (including the ADC) between the multiplexer and the processor, to accurately convey the sampled data (eg one or more data bits out of the ADC "stuck-at-1" or "stuck-at-0").
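The first of these failure modes can be pictured with a toy simulation. The sketch below assumes a 128-input multiplexer with 7 address bits (consistent with the examples later in the paper); the function and parameter names are invented for illustration:

```python
def scan_addresses(n_inputs, stuck_at_zero_bit=None):
    """Generate the sequence of input addresses sampled in one scan.

    If an address bit is stuck at 0, every generated address has that
    bit forced low, so only a subset of the inputs is ever sampled.
    """
    addresses = []
    for addr in range(n_inputs):
        if stuck_at_zero_bit is not None:
            addr &= ~(1 << stuck_at_zero_bit)  # force the faulty bit low
        addresses.append(addr)
    return addresses

healthy = set(scan_addresses(128))
faulty = set(scan_addresses(128, stuck_at_zero_bit=6))
print(len(healthy))  # 128 -- all inputs sampled
print(len(faulty))   # 64  -- half the inputs are silently never read
```

Without some additional self-revealing mechanism, the processor would continue to receive plausible-looking data for the subset and nothing would flag the missing inputs; the test-input scheme of Section 4.1 closes exactly this gap.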

4. Methods of Protection Against Failures

4.1 The Use of Test Inputs to the Multiplexer

The first of the failure modes referred to in 3 above is made self-revealing by the introduction of "test inputs" to the multiplexer. The signals applied to the test inputs are chosen to be readily distinguishable from the signals originating from plant measurement transducers. The order in which the test inputs are interleaved between the plant signals is chosen so that a unique but easily recognisable pattern is generated (Figure 3) only when the multiplexer scans its full address range. The pattern cannot then be reproduced by repeated scanning


of a subset of the full address range. A preferred arrangement of the order of the test inputs is shown in Figure 2 for a 128-input multiplexer. This particular arrangement is chosen so that it is subsequently recognisable by simple logical shifting and comparison functions (see Paragraph 4.4 below). The properties of the test inputs (magnitude, rate of change, power spectral density, etc) are chosen to cause excursions of the trip algorithms into the tripped state. Differing properties may be ascribed to any or all of the test inputs so that all trip algorithms are exercised on every scan of the multiplexer inputs.

The response of the trip algorithms to the test and transducer signal inputs yields a sequential pattern of "status" bits in which a 1 represents the healthy status (not tripped) and a 0 represents the tripped status. Under normal conditions, the trip algorithms will yield a 1 status from transducer signal inputs and a 0 status from the test inputs (Figure 3). The unique status bit pattern thus generated is dictated by the order in which the test inputs are physically wired into the multiplexer.

The excursions caused by the test inputs also provide a continuous dynamic check of the common signal path from the multiplexer through the ADC and into the processor, including the telemetry link where provided. The test input signal may be sufficiently accurately defined to provide a continuous calibration check of the ADC and any common amplification provided.
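A minimal sketch of the status-bit generation: it assumes a 16-input multiplexer with arbitrarily chosen test-input addresses and a toy threshold trip algorithm (the real arrangement is the 128-input wiring pattern of Figure 2, and real trip algorithms are far richer):

```python
# Hypothetical addresses of the hardwired test inputs, interleaved
# among the plant signals (chosen here arbitrarily for illustration).
TEST_INPUT_ADDRESSES = {2, 5, 11, 14}

def trip_algorithm(value):
    """Toy trip algorithm: healthy (1) while the signal is in range."""
    return 1 if 0.0 <= value <= 10.0 else 0

def scan_status_bits(read_input, n_inputs=16):
    """One full multiplexer scan; returns the sequence of status bits."""
    return [trip_algorithm(read_input(addr)) for addr in range(n_inputs)]

# Plant inputs sit mid-range; test inputs are deliberately out of range,
# forcing a 0 (tripped) excursion at their hardwired positions.
def read_input(addr):
    return 99.0 if addr in TEST_INPUT_ADDRESSES else 5.0

pattern = scan_status_bits(read_input)
assert [i for i, b in enumerate(pattern) if b == 0] == sorted(TEST_INPUT_ADDRESSES)
```

Only a scan of the full address range reproduces the 0s at exactly these positions, which is what makes the stuck-address-bit fault of Section 3 self-revealing.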

4.2 Polarity Reversal on Alternate Multiplexer Scans

In addition to ensuring that the input multiplexer is sampling all of the inputs, it is also necessary to check that the input data are being refreshed on each cycle of the multiplexer. It would otherwise be possible for the multiplexer to stop, leaving the last set of input data retained in the memory and the processor repeatedly reusing this obsolete data. This mode of failure may be prevented in one of two ways. The preferred solution is to force some property, such as polarity, of the input data to change on consecutive cycles of the multiplexer. A polarity reversing switch, following the multiplexer, which changes state on completion of every cycle of the multiplexer, causes the polarity of the input data stored in the processor's memory to reverse each time it is refreshed. The programs which process these data must be written to expect this regular polarity reversal and, if it fails to occur due to a failure of the multiplexer to refresh the memory, an incorrect status bit pattern will be generated and recognised. A further advantage of this technique is that it augments the continuous dynamic checking of the common data path which is provided by the test inputs. (A simpler but less comprehensive implementation of this principle is to reverse the polarity of the test inputs only on each multiplexer cycle.)

The use of polarity reversal to protect against the repeated re-use of obsolete data may also be applicable at later stages in the system.
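The staleness check can be sketched as follows, under the simplifying assumption that all stored signals are positive so that a simple sign test stands in for the full status-pattern check described above (the class and method names are invented for illustration):

```python
class PolarityCheckedStore:
    """Input store whose refresh must alternate polarity each scan.

    A stalled multiplexer leaves stale data whose polarity no longer
    matches the sign expected for the current cycle; the processing
    program treats the resulting mismatch as a trip condition.
    """
    def __init__(self):
        self.values = None
        self.expected_sign = +1

    def refresh(self, raw_values, reversing_switch_sign):
        # The polarity reversing switch follows the multiplexer and
        # flips sign on completion of every scan.
        self.values = [reversing_switch_sign * v for v in raw_values]

    def check_and_read(self):
        ok = self.values is not None and all(
            v * self.expected_sign > 0 for v in self.values)
        self.expected_sign = -self.expected_sign  # next cycle expects reversal
        return ok

store = PolarityCheckedStore()
store.refresh([5.0, 7.0], +1)
assert store.check_and_read()      # cycle 1: positive data expected
store.refresh([5.0, 7.0], -1)
assert store.check_and_read()      # cycle 2: reversed data expected
# Multiplexer stalls: the memory still holds the cycle-2 (negative) data.
assert not store.check_and_read()  # stale polarity -> trip
```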

4.3 Restriction of the Computer's Memory Capacity

An alternative to polarity reversal on alternate multiplexer scans, to ensure that the input data are being continuously refreshed, is to limit the capacity of the memory area available for storage of the input data to less than that required to store a complete set of values of all the inputs. This would ensure that on each complete scan of inputs, data acquired early in the scan were overwritten by those acquired later. Therefore, if the multiplexer stopped, it would be impossible to reproduce the complete sequence of data, with the test signals in their correct order, simply by repeated use of the stored subset of input data.
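A toy illustration of the restricted-memory principle, with assumed sizes (16 inputs, 12 memory slots; a real system would use different numbers):

```python
MEMORY_SLOTS = 12   # deliberately smaller than one full scan
N_INPUTS = 16

memory = [None] * MEMORY_SLOTS
for addr in range(N_INPUTS):            # one complete multiplexer scan
    memory[addr % MEMORY_SLOTS] = addr  # early samples 0..3 get overwritten

print(memory)  # holds inputs 12..15 followed by 4..11 -- never all 16
assert 0 not in memory and 15 in memory
```

Because no instant ever holds a complete, correctly ordered set of inputs, a stalled multiplexer cannot leave behind data from which the full test-signal sequence could be replayed.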


4.4 Hardwired Pattern Recognition Logic

The dynamically generated status word sequence is used throughout the subsequent functions of the TACs and VACs to represent the "healthy" state. It is ultimately used at the outputs of the VACs to generate a dynamic operational stimulus for the plant shutdown actuators. Recognition of the correct status pattern at the computer output is implemented in hardwired logic so that the overall self-monitoring and fail-safe properties are not dependent upon correct operation of computer software. The pattern recognition logic will remove the operational stimulus from the plant actuator if it fails to recognise the correct pattern due to (a) deviation of any one of the system inputs beyond the prescribed limits, or (b) a hardware fault, or (c) a software error, or (d) a wiring error.

Pattern recognition logic suitable for the particular sequence of status words described above comprises a shift register and a comparator, as shown in Figure 4. To initialise the logic elements, the first word, formed from the first 8 inputs to the multiplexer, is loaded into the shift register at the same time as the corresponding status word is generated by the computer. Thereafter, the "reference pattern" held in the shift register is shifted by one place each time a new status word is generated by the computer. The reference and output patterns should, therefore, shift in synchronism. To maintain fully dynamic operation and continuous monitoring of the comparator itself, the pattern match is tested before and after shifting the reference pattern, ie twice for each new status word generated by the computer. The output of the comparator should, therefore, be 0 before shifting (indicating a mismatch) and 1 after shifting (indicating a correct match). The alternating 1 and 0 outputs of the comparator provide the dynamic stimulus, after amplification, for the plant shutdown actuators. The shifting of the reference pattern is made conditional upon recognition of the correct match. The logic, therefore, becomes 'latched' if a mismatch is detected, until manually reset.
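The shift-register-and-comparator behaviour can be sketched in software, although in the actual design it is deliberately hardwired. The rotating 8-bit word below follows the example pattern shown in Figure 4; the class and method names are invented, and the sketch assumes each healthy status word is the previous word shifted by one place:

```python
def rotate_left(bits):
    return bits[1:] + bits[:1]

class PatternRecognitionLogic:
    """Sketch of the shift-register-plus-comparator scheme.

    The reference pattern is loaded from the first status word, then
    shifted one place per new word. A correct system yields a mismatch
    before the shift and a match after it; anything else latches the
    logic and removes the dynamic stimulus to the shutdown actuators.
    """
    def __init__(self, first_word):
        self.reference = list(first_word)
        self.latched = False
        self.stimulus = []           # alternating 0/1 drive to actuators

    def new_status_word(self, word):
        if self.latched:
            return
        pre_match = (self.reference == list(word))
        self.reference = rotate_left(self.reference)
        post_match = (self.reference == list(word))
        if pre_match or not post_match:
            self.latched = True      # mismatch: trip, until manual reset
        else:
            self.stimulus += [0, 1]  # dynamic (alternating) healthy output

word = [1, 1, 0, 1, 1, 0, 1, 1]
prl = PatternRecognitionLogic(word)
for _ in range(3):
    word = rotate_left(word)         # healthy, continuously shifting output
    prl.new_status_word(word)
assert not prl.latched and prl.stimulus[-2:] == [0, 1]

prl.new_status_word(word)            # stale, repeated word: pre-shift match
assert prl.latched                   # stimulus removed -> reactor shutdown
```

Note that a frozen (static) output fails the pre-shift mismatch test, so the "healthy" stimulus can only be produced by a genuinely dynamic sequence, which is the essence of the fail-safe argument.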

A slight variation of this simple logical process is required to completely match the pattern generated by the arrangement of test inputs shown in Figure 2 for a 128-way multiplexer. It requires reversal of the direction of shifting the reference pattern to correspond with the reversal of the order of the test inputs over the two halves of the multiplexer.

5. Conclusions

5.1 The conceptual disparities between a traditional reactor protection system based on conventional instrumentation combined with hardwired logic and one based on the use of computers give rise to some new design criteria and special requirements for their modus operandi. Some existing codes of practice for reactor protection systems are shown to be applicable to computer-based systems and strongly influence the computer configuration. However, additional failure modes, peculiar to computer-based systems, have been identified and techniques to overcome them are put forward as design criteria. These should ensure not only failure-to-safety in the hardware, but the elimination, or at least minimisation, of the dependence upon the correct functioning of the computer software for the safety of the system.

5.2 The proposed modus operandi have evolved from the well-established practice, in reactor protection systems, of requiring the system components to maintain an energetic (or stimulated) state to enable them to sustain reactor operational conditions. The shutdown or tripped condition is achieved by relaxation to a less energetic (or de-energised) state. This results in a preferred mode of failure to the safe (ie less energetic) state. An example of this practice is the use of relays which are energised


to maintain operational conditions and de-energised to initiate a reactor trip or shutdown. In some later UK reactor protection systems, relays have been replaced by solid state logic devices such as Laddic or semiconductor logic elements. In these devices operational conditions are maintained by a dynamic or alternating state (eg of magnetic flux) and the shutdown state by a static condition; the dynamic condition being the more "energetic". Again, most failures result in relaxation to the static or less energetic condition, giving a preferred mode of failure to the safe state. This fail-safe design criterion can be met in a computer-based reactor protection system by utilising the inherently dynamic property of the data sampling and multiplexing processes. The property is exploited by interleaving test inputs between the plant-sensor signals, which continuously exercise the common multiplexed data channel. The parameters of the test input signals are chosen to cause excursions of the trip algorithms into the tripped state. The unique order in which these excursions occur is detected dynamically by hardwired pattern-recognition-logic at the computer output. The reactor operational condition is maintained by continuous recognition of the dynamic status pattern. The pattern recognition logic will remove the operational stimulus if it fails to recognise the unique pattern due to (a) a deviation of any one of the plant-sensor inputs beyond the prescribed limits, or (b) a hardware fault, or (c) a software error, or (d) a wiring error. The computer itself has no knowledge of the unique operational status pattern; it can be generated only in response to correct operation of the input multiplexer and correct implementation of the trip algorithm. This results in a highly fail-safe diagnostic computer-based reactor-protection system.

REFERENCES

1. CEGB Specification US 76/10, "Instruments and Control Equipment General Technical Requirements".

2. CEGB Specification for Reactor Safety Systems, AGR Design Safety Requirements, Annex VII.

3. IEEE Standard 279-1971.

4. A.B. KEATS, "Safety Circuits Based on Contemporary LSI Techniques", IAEA/NPPCI Specialists' Meeting, Cologne, 15-16 October 1973.

5. A.B. KEATS, "A Fail-Safe Computer-Based Reactor Protection System", IAEA/NPPCI Specialists' Meeting, Munich, 11-13 May 1976.

6. Proceedings of the 9th Annual International Symposium on Fault-Tolerant Computing (FTCS-9), Madison, USA, 20-22 June 1979.


[Figure 1, reproduced only in outline: trip parameter sensors feed redundant multiplexers (MUX A, B, C), each followed by an analogue-to-digital converter (ADC) and a Trip Algorithm Computer (TAC); the TAC outputs are combined by the Vote Algorithm Computers (VAC), whose status words pass through the Pattern Recognition Logic (PRL) to the shutdown actuators.]

FIG. 1. COMPUTER-BASED REACTOR-PROTECTION SYSTEM


[Figure 2, reproduced only in outline: the octal addresses of the hardwired test inputs (000, 002, 005, ... 107, 110, 117, ... 177) interleaved among the 128 multiplexer inputs, with the multiplexed output to the ADC and the address lines from the addressing logic.]

FIG. 2. MULTIPLEXER INPUT WIRING PATTERN


[Figure 3, reproduced only in outline: the test inputs interleaved among the plant-signal inputs over one multiplexer scan, and the resulting sequence of trip-algorithm status bits.]


[Figure 4, reproduced only in outline: each new status byte from the VAC is compared with the reference pattern (eg 1 1 0 1 1 0 1 1) held in the shift register; delay monostables derive the shift pulse, and the comparator output provides the dynamic stimulus to the shutdown actuators. Dynamic output p.r.f. = (number of inputs) x (sampling frequency).]

FIG. 4. PATTERN RECOGNITION LOGIC


[Figure 5, reproduced only in outline: the status bits are packed into bytes, the status of the first group of 8 inputs forming the first byte, the second group the second byte, and so on; a 16-byte block represents the status of the complete set of 128 inputs, after which the next block begins.]

FIG. 5. FORMAT OF STATUS DATA AFTER IMPLEMENTATION OF TRIP ALGORITHMS


AN INTELLIGENT SAFETY SYSTEM CONCEPT FOR FUTURE CANDU REACTORS

H.W. Hinds*

ABSTRACT

A review of the current Regional Overpower Trip (ROPT) system employed on the Bruce NGS-A reactors confirmed the belief that future reactors should have an improved ROPT system. We are developing such an "intelligent" safety system. It uses more of the available information on reactor status and employs modern computer technology. Fast, triplicated safety computers compute maps of fuel channel power, based on readings from prompt-responding flux detectors. The coefficients for this calculation are downloaded periodically from a fourth supervisor computer. These coefficients are based on a detailed 3-D flux shape derived from physics data and other plant information. A demonstration of one of three safety channels of such a system is planned.

NOMENCLATURE

Symbol    Description                                    Units

a         coefficients                                   dim'less
C, C'     flux mapping matrices
D         diffusion coefficient                          m
E, E'     channel power mapping matrices
f         "deflection", modifying function               dim'less
F         "force"                                        dim'less
H         flux-to-power conversion factor                W·n⁻¹·m²·s
P         channel power                                  W
Q         dynamic error margin function                  dim'less
r         distance (normalized)                          dim'less
x, y, z   spatial co-ordinates (normalized)              dim'less
Σ         macroscopic cross section                      m⁻¹
          (subscript a = absorption)
ν         number of neutrons per fission                 dim'less
φ         flux                                           n·m⁻²·s⁻¹
ψ         flux distribution from diffusion code          n·m⁻²·s⁻¹

Superscripts

~         measured
^         mapped, estimated using the mapping scheme
¯         average

Subscripts

i         force (or detector) index
j         channel index
k         bundle index
M         using mapping (vanadium) detectors
max       maximum
S         using safety (platinum) detectors

*Atomic Energy of Canada Limited
Research Company
Chalk River Nuclear Laboratories
Chalk River, Ontario
K0J 1J0

1. INTRODUCTION

1.1 Historical Background

Due to economic incentives, the design fuel ratings of CANDU* reactors have increased over the years. The current limit is based on the rating at which centre-line fuel melting occurs; the consequences of this event are somewhat speculative and considered undesirable. The regional overpower trip (ROPT) systems in Bruce NGS-A and subsequent reactors are designed to prevent fuel melting.

Under CANDU operating conditions, melting can occur only after a breakdown in heat transfer to the two-phase coolant, i.e. after dryout. To compute the thermal power required to melt fuel in a given channel, the following thermohydraulic parameters and correlations are required:

- axial heat flux profile
- dryout correlation (point values)
- coolant flow, inlet temperature, pressure
- post-dryout heat transfer.

Using nominal conditions for the above, the designers of Bruce NGS-A computed the channel power at which centre-line melting occurs and obtained a channel power limit, after applying suitable error margins.

The "measured" maximum channel power in a Bruce reactor is inferred from a set of "readings" from platinum self-powered flux detectors. The relationships between maximum channel powers and detector readings were established during the design studies. The designers chose a large set of flux shapes by considering various combinations of reactivity control device positions; some shapes also included dynamic xenon effects. For each shape, they calculated the detector readings and maximum channel power. The designers then found a set of trip settings which ensured that, for every shape considered, the reactor would trip before the maximum channel power exceeded the pre-determined limit. They performed all the above calculations using an equilibrium-fuelled reactor.

*CANada Deuterium Uranium


For actual reactors, having fuels of varying burnups, the operators add a channel power peaking factor (CPPF) to the detector calibrations to account for the channel-to-channel ripple.

This design process yielded a system that was relatively simple to implement: in operation, each detector output is compared to its trip setting* to decide whether to trip or not. Conventional analog/relay hardware is used to execute the trip logic.

There are two ROPT systems at Bruce NGS-A, corresponding to the two shutdown systems, SDS1 and SDS2. Each system is triplicated in the conventional manner to ensure reliability, and each safety channel contains approximately 13 detectors for SDS1 and 6 detectors for SDS2. Future reactors, e.g. Bruce NGS-B, will have slightly more detectors per safety channel.

Detector calibration is the main weakness of the Bruce ROPT concept. When the reactor is in the nominal** condition, the detectors should read bulk thermal power times the CPPF. Thus each detector reading, which is actually representative of the flux over a short length, is also equivalent to the power in the potentially hottest channel in the reactor. As the relationship between flux at one point and channel power somewhere else is continually changing due to burnup and refuelling, the calibration of detectors varies continuously. Poor or out-of-date calibration could result in potentially unsafe operation and/or unnecessary reactor trips, and is also a nuisance for the operating staff as recalibration must be carried out frequently and manually.

Analog/relay hardware is used in the Bruce NGS-A safety system. Since high-quality relays are becoming increasingly difficult to obtain, they should be phased out of future safety system designs. Meanwhile, computers are becoming more reliable as well as less expensive; they also offer a very high degree of flexibility. This increased flexibility is very important if the algorithms used to decide whether to trip become more complex, i.e. if the safety system is more "intelligent".

One of the operator's main worries is a small margin-to-trip. A more intelligent safety system could alleviate this problem by obtaining a more accurate "measurement" of the maximum channel power and hence allowing a reduction in the error margin required.

1.2 The Intelligent Safety System Project

The basic objective of this project is to develop an Intelligent Safety System for future CANDU reactors (beyond those currently committed) that uses the best information available on the status of the reactor and decides, taking all this information into account, whether to trip. In other words, the trip will be a computed parameter and not simply a one-for-one comparison of readings against fixed trip settings. We consider mainly the problem of fuel melting by overpower. Thus trips based exclusively on process quantities (e.g. boiler level) are assumed to be retained as before, without any change in their algorithms.

* In Bruce NGS-A, some trip settings are adjusted automatically as functions of booster operation.

**Nominal refers to the normal steady-state operating condition with zone levels near 40% full and boosters and control absorbers out.

The information that is available consists of:

- outputs from self-powered safety detectors of the platinum or Inconel type
- outputs from self-powered mapping detectors of the vanadium type
- outputs from self-powered control detectors of the platinum or Inconel type
- positions (levels) of zone control elements
- positions of adjuster rods
- positions of mechanical control absorbers
- fuel burnup
- bulk thermal power
- instrumented fuel channel powers (flow, temperature, quality)
- inlet header temperatures
- outlet header pressures, and
- pressure drops from inlet to outlet headers.

This paper outlines the mathematical algorithms for processing the information in a logical manner, and a distributed computer architecture applicable to power plants and capable of implementing the algorithms with sufficient speed. For the future, it is our intention to assemble a single channel of such a computer system as a realistic demonstration, and demonstrate and evaluate its effectiveness under many simulated reactor conditions. This demonstration will provide a testing ground for both the algorithms and the reliability of new hardware components.

2. ALGORITHMS

2.1 Flux Mapping

The procedure of flux mapping can be stated generally as follows. A mathematical form for the flux is assumed having M free parameters, and the flux is measured at N locations. If N = M, then the equations can be solved exactly; if N > M, they can be solved to minimize the errors between the mapped and measured fluxes in a least-squares sense; if N < M, they can be solved to minimize the deviations of the free parameters in a least-squares sense. Examples of these three cases are: the bent-plate scheme [1], modal scheme [2], and finite-difference scheme [3], respectively. Philosophically, there are implications of which information is being believed, as shown in Table 1. It is our contention that the measurement must be believed in preference to the mathematical form.
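The N > M case can be made concrete with a toy least-squares fit: a flux form with M = 2 free parameters, phi(x) = c0 + c1*x, fitted to N = 3 measured fluxes. The values are illustrative only, but the sketch shows why the detector readings are "not believed" in that scheme:

```python
# Three measured fluxes at three detector positions (invented numbers).
xs = [0.0, 1.0, 2.0]
measured = [1.0, 2.2, 2.9]

# Closed-form least-squares solution for the straight-line fit.
n = len(xs)
sx, sy = sum(xs), sum(measured)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, measured))
c1 = (n * sxy - sx * sy) / (n * sxx - sx * sx)
c0 = (sy - c1 * sx) / n

mapped = [c0 + c1 * x for x in xs]
residuals = [m - y for m, y in zip(mapped, measured)]
# Nonzero residuals: the mathematical form is believed, not the data.
assert any(abs(r) > 1e-9 for r in residuals)
```

In the N = M (bent-plate) case chosen by the authors, the number of free parameters equals the number of detectors, so the residuals vanish and the mapped flux passes exactly through the measured points.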


TABLE 1

PHILOSOPHY OF MAPPING SCHEMES

N = number of detectors
M = number of free parameters

Condition   Example                    Detector Readings   Mathematical Form
N = M       Bent-Plate Scheme          believed            believed
N > M       Modal Scheme               not believed        believed
N < M       Finite-Difference Scheme   believed            not believed (excessive parameters included)

The above three schemes are "form-deterministic"; they depend solely on the mathematical forms chosen. There is a fourth method which is dependent on an a priori knowledge of a set of answers. In this scheme, a relationship between the inputs and the results is assumed, with a number of free parameters; typically one might choose a matrix (linear) relationship. The free parameters that give the "best" answers are then found; of course, the number of known answers must equal or exceed the number of free parameters. "Best" will typically mean with least-squares error; or in the case of a safety system, it may have a uni-polar (conservative) application. The ROPT scheme currently employed may be put in this latter category.

We have chosen a mapping scheme of the first type: it is form-deterministic, N = M, and the mapped flux will pass through the measured points. An initial estimate of the flux shape is obtained using a 3-D diffusion equation, for example,

    ∇·D∇ψ − Σ_a·ψ + νΣ_f·ψ = 0                               (1)

where the reactor physics parameters D, Σ_a and νΣ_f are obtained using the knowledge of burnup and the device positions. This calculation is not perfect, and the actual flux is given by the calculated shape times a modifying function

    φ = f·ψ                                                   (2)

A form is now assumed for this function

    f = Σ_i F_i·r_i²·ln r_i + a_0 + a_x·x + a_y·y + a_z·z     (3)

    r_i² = (x−x_i)² + (y−y_i)² + (z−z_i)²                     (4)

and the mapped flux is given by f·ψ.

These equations were originally developed as a 2-D interpolant [1]; we have made them 3-dimensional. There is no physical justification for the form chosen* except that it provides a smooth 3-D interpolant with continuous low-order derivatives.

Knowing the fluxes φ̃ at the detector locations, we can solve equation (2) for the deflections f̃ at the detectors and then equations (3) and (5) for the F values. Substituting back, we can find the fluxes everywhere. In matrix notation, this can be shown to yield

    φ̂ = C·f̃                                                  (7)

The thermal power of a given fuel channel is the weighted sum of the fluxes along that channel,

    P̂_j = n·Σ_k H_jk·φ̂_jk                                    (8)

where n is initially assumed to be unity and the flux-to-power conversion factor H is burnup dependent. Combining equations (7) and (8) gives, in matrix notation,

    P̂ = E·f̃                                                  (9)

Thus equation (7) can be used to find the flux at any desired location in the core, while equation (9) maps the measured deflections into the channel powers. Equations (7) and (9) could have been written in terms of measured fluxes instead of measured deflections, i.e. φ̂ = C'·φ̃ and P̂ = E'·φ̃, and the choice is really dependent on programming convenience.

These equations are general and occur irrespective of the actual mapping scheme used; the mapped fluxes and channel powers are linear combinations of the measured fluxes.
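Because the channel powers are linear combinations of the measured deflections, the on-line work per scan reduces to a single matrix-vector product once the mapping matrix has been computed off-line. The sketch below uses an invented 2x3 matrix and invented deflection values, purely to make the shape of the computation concrete:

```python
# Illustrative mapping matrix (2 channels x 3 detectors) and measured
# detector deflections -- NOT real reactor data.
E = [[1.2, 0.4, 0.1],
     [0.3, 0.9, 0.6]]
deflections = [1.0, 1.1, 0.9]

# Estimated power of each fuel channel: one dot product per channel.
powers = [sum(e * f for e, f in zip(row, deflections)) for row in E]
print(powers)
```

This simplicity is what lets fast, triplicated safety computers evaluate the channel-power map within the response time a trip decision requires, while the expensive diffusion calculation stays in the supervisor computer.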

2.2 Overall Scheme

The above outlines the basic mapping scheme proposed. However, in our case, some additional features are incorporated. The total scheme is shown schematically in Figure 1. With the reactor at significant

*In 2-D, this form is the equation for thedeflection of a plate subjected to transverseforces.


[Figure 1, reproduced only in outline: mapping detectors, device positions, refuelling information and process information feed the supervisory calculation, which uses the 3-D diffusion solution to produce the reference flux shape and estimated channel powers; the safety detectors feed one of three safety channels (high speed required), which generates the ROPT and process trips, voted together with the two other safety channels.]

FIGURE 1. PROPOSED ALGORITHM FOR THE INTELLIGENT SAFETY SYSTEM

power, the calculation is begun by accessingthe sampled reactivity device positions andobtaining any new refuelling information.From this information, plus the stored burnupdata, physics parameters are computed and atheoretical flux distribution, ifi, is obtained,via a diffusion code such as CHEBY [4]. Thesampled outputs of the mapping detectors* areaccessed and compared to this flux distri-bution, a rationality check is performed, andany significant deviations are resolved.
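A minimal sketch of such a rationality check, assuming a simple relative-deviation criterion (the 15% tolerance and the function name are invented for illustration; the paper does not specify the test used):

```python
# Illustrative rationality check: each sampled mapping-detector reading is
# compared with the theoretical flux at its location, and readings that
# deviate by more than a chosen relative tolerance are flagged for
# resolution.  The tolerance value is an assumption, not from the paper.

def rationality_check(measured, theoretical, tol=0.15):
    """Return indices of detectors whose reading deviates from the
    theoretical flux by more than the relative tolerance tol."""
    flagged = []
    for i, (m, t) in enumerate(zip(measured, theoretical)):
        if abs(m - t) > tol * abs(t):
            flagged.append(i)
    return flagged

# Detector 1 reads half the predicted flux and is flagged.
bad = rationality_check([1.02, 0.50, 0.98], [1.00, 1.00, 1.00])
```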

The mapping scheme outlined above is then used to produce the mapped flux distribution φ_M, using the mapping detector fluxes and equation (7). A mapped power distribution, P_M, is then obtained using equation (8).

Redundant power information is also available from the instrumented fuel channels and the bulk thermal power measurement. A rationality check is performed and irrational measurements can be dealt with manually. This information is used to obtain a better value for the constant n, which was previously assumed to be unity; a weighted least-squares approach is assumed. Filters to match dynamic responses and a rationality check on the value of n are also required. This normalization constant may be thought of as a correction for systematic errors in the absolute detector calibration, the value of H, and/or the ratio of flux in the moderator to that in the fuel.

*Platinum or Inconel detectors, although relatively prompt responding, are not considered to be as accurate at steady state as vanadium detectors. Thus only the accurate vanadium detectors are included here.
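The weighted least-squares fit for n has a standard closed form; a hedged sketch follows, with the weights and the two-channel example data invented for illustration (the paper does not give the weighting actually used):

```python
# Hedged sketch of the weighted least-squares normalization: given mapped
# channel powers and redundant measured powers (instrumented channels,
# bulk thermal power), choose n to minimize sum(w_i*(meas_i - n*map_i)^2).
# Setting the derivative to zero gives the closed form below.

def fit_n(mapped, measured, weights):
    """Weighted least-squares estimate of the normalization constant n."""
    num = sum(w * m * p for w, m, p in zip(weights, measured, mapped))
    den = sum(w * p * p for w, p in zip(weights, mapped))
    return num / den

# Two redundant channels, equal weights: n comes out slightly above unity.
n = fit_n([5.0, 6.0], [5.2, 6.1], [1.0, 1.0])
```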

As the mapping detectors are more accurate than the safety detectors, the sensitivities of the safety detectors are adjusted to force agreement between the safety-detector readings and the mapped flux. [The defining equation, (10), is not legible in this copy.]

Again, filtering and a rationality check are required. In simple terms, we are calibrating the platinum safety detectors against the more accurate vanadium mapping detectors.

The flux shape φ_M is then used as the reference flux (replacing ψ) in a second application of the mapping scheme using the safety detectors. With this reference shape and the safety detector sensitivities, the detector outputs of each safety channel can be converted to deflections. Equation (9) can then be applied to give a power map P_S based on the safety detector outputs.

An additional feature of our scheme is the dynamic error margin. After a series of studies, we found that the error in the maximum channel power can be correlated to a measurable parameter h,

    (P̂_max - P_max)/P_max ≥ Q(h)   (11)

where P̂_max is the maximum channel power computed from the safety detectors and P_max is the true maximum channel power [equations (12) and (13), which define these quantities and the parameter h, are not legible in this copy], and Q is a negative, simple, piecewise-linear function. From this relationship, we obtain

    P_max ≤ P̂_max/(1 + Q)   (14)

Thus the right hand side of equation (14) provides a conservative estimate of the maximum channel power. Alternatively, the factor (1+Q) can be applied to the power limit as shown in Figure 1.

It should be noted that if the reference flux shape is the same as that measured using the safety detectors, then the measured deflections equal their reference values. As Q(0) ≈ 0, there is little or no penalty associated with this procedure. However, if the measured and reference flux shapes differ significantly, penalties against "measured" maximum channel powers of up to 10-15% may be required.
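A hedged sketch of how such a margin might be applied: Q is modelled here as a negative, continuous piecewise-linear function of a shape-discrepancy parameter h, with Q(0) = 0 and a cap near the 10-15% penalty mentioned above. The breakpoints, slopes, and the exact functional form are invented for illustration; the paper's actual curve was obtained from simulation studies.

```python
# Hypothetical dynamic-error-margin sketch.  q_margin is an invented
# negative, continuous, piecewise-linear penalty curve: zero when the
# measured and reference flux shapes agree (h = 0), growing with the
# discrepancy h, and capped at -0.15.

def q_margin(h):
    if h <= 0.0:
        return 0.0
    if h <= 0.1:
        return -h                      # first linear segment (slope -1)
    if h <= 0.2:
        return -0.1 - 0.5 * (h - 0.1)  # second, shallower segment
    return -0.15                       # cap at a 15% penalty

def conservative_max_power(p_max_computed, h):
    """Inflate the computed maximum channel power by 1/(1 + Q) so the
    result conservatively bounds the true maximum channel power."""
    return p_max_computed / (1.0 + q_margin(h))

p = conservative_max_power(100.0, 0.05)   # 5% shape discrepancy
```

Equivalently, as the text notes, the factor (1+Q) can instead be applied to the channel power limit.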

The channel power limit is computed as a function of process variables, a tilt parameter, and the dynamic error margin. Comparison with the mapped channel powers yields a local ROPT trip signal. Local process trips are obtained from suitable algorithms. By intercomparing the local trip signals from the three safety computers (see the next section), a two-out-of-three functional coincidence can be determined, and a safety channel trip initiated. A final two-out-of-three ex-computer vote is required to activate the shutdown system.
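The two-out-of-three coincidence itself is a simple majority function; a minimal sketch, with the function name assumed for illustration:

```python
# Two-out-of-three coincidence voting: each safety computer contributes a
# boolean trip vote, and the coincidence is satisfied when at least two
# of the three votes agree on a trip.

def two_out_of_three(votes):
    """votes: iterable of three booleans (one per safety computer)."""
    return sum(bool(v) for v in votes) >= 2

trip = two_out_of_three([True, False, True])
```

The same function serves for both the functional coincidence across safety computers and the final ex-computer vote; in the real system the latter is conventional relay logic rather than software.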

3. HARDWARE

A long series of calculations was described in the previous section. To produce results rapidly, it is necessary to partition this series into at least two tasks: a fast task that produces a yes/no trip vote and a slow task that produces the parameters for the fast task. The fast task is indicated below the dotted line in Figure 1; the upper portion is the slow task.

The hardware is similarly partitioned, as shown in Figure 2. To obtain the reliability required for reactor safety, triplicated circuits are usually employed. The same philosophy is applied here; the safety computer, with its sensors, etc., is triplicated. The safety channel trip decisions of these computers are dealt with via conventional two-out-of-three ex-computer voting logic.

FIGURE 2: SCHEMATIC OF THE INTELLIGENT SAFETY SYSTEM
[Block diagram; legible labels include: process information and safety computers.]

The fast task must perform a complete flux mapping calculation, which is equivalent to a matrix-vector multiply. This matrix typically consists of 480x20 elements. The reaction time of the safety system to the worst accident must be less than 100 ms, and thus we are aiming for a fast task with a cycle time of 50 ms. To achieve such a speed, an array processor is required as part of each safety computer.

In contrast, we do not recommend that the slow task be triplicated, as the sensors, computers, etc., become too expensive. The alternative is to manually verify that the slow task is functioning correctly before permitting the data transfer to the fast task, and to manually check for correct data transfer.

The slow or supervisor computer will also perform a number of monitoring functions not shown in Figure 1. For example, it will periodically (say every 5 minutes) monitor the outputs from the safety computers and compare them to similar values calculated from mapping detectors and instrumented fuel channels. This intercomparison of redundant data will lead to rapid diagnosis of failed instruments and/or will indicate that the reference shape is becoming out-of-date.

The interconnections from the supervisor computer to the safety computers and among the safety computers must be fail-safe. Suitable design techniques will ensure that this criterion is met. Watchdog timers (not shown) are also required to ensure that the safety computers are actually operating.
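The fail-safe principle behind a watchdog is that the absence of activity is treated as a failure. A software sketch of that idea, assuming a thread-based model (the 1980 system would of course use a hardware timer; the class and names are invented):

```python
# Illustrative software watchdog: the supervised task must "kick" the
# watchdog within each timeout period; if no kick arrives, the watchdog
# declares the task failed.  Fail-safe: silence is treated as failure.

import threading

class Watchdog:
    def __init__(self, timeout_s):
        self.timeout_s = timeout_s
        self._event = threading.Event()
        self.failed = False

    def kick(self):
        """Called periodically by the supervised safety task."""
        self._event.set()

    def check(self):
        """Wait one period; flag failure if no kick arrived in time."""
        if not self._event.wait(self.timeout_s):
            self.failed = True          # no activity -> treat as failed
        self._event.clear()
        return not self.failed

wd = Watchdog(timeout_s=0.05)
wd.kick()
ok = wd.check()    # a kick arrived within the period
bad = wd.check()   # no kick this period, so the watchdog trips
```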

We plan to assemble a demonstration of such a system, consisting of three interconnected


computers:

- a supervisor computer,
- a single safety computer with its array processor, analog inputs, and watchdog timer, and
- a computer to simulate the reactor.

The first two will be assembled and programmed in as realistic a manner as possible, so that they could be used directly in a power station. The computer for the reactor simulation will consist of the Hybrid Computer System (Digital Equipment Corp. PDP-11/55 and two Applied Dynamics AD/5 analog computers) presently operational in the Dynamic Analysis Laboratory of the Reactor Control Branch.

This demonstration will provide a testing ground for the proposed algorithms as well as the hardware. The use of computers in safety systems is a new concept for CANDU reactors and their application to this role must be demonstrated. Array processors are relatively new devices with which we are not yet familiar. This demonstration will provide valuable experience with their use, capabilities and reliability. The interconnection of computers is becoming widespread, and new concepts in data transmission, e.g. INTRAN [5], will be examined by means of this demonstration.

4. PROGRESS TO DATE

The mapping scheme outlined above has been examined, and accuracies of 3.5% rms are achievable in computer studies with ~20 detectors. A large number of flux shapes has been examined, and a suitable dynamic margin curve has been obtained.

The scheme calls for the solution of a static diffusion code to provide the reference shape for the flux mapping procedure. The code CHEBY [4], which solves the diffusion equation in 2 energy groups, has been converted to run on a PDP-11/55 computer. This exercise shows that it is feasible to run a large diffusion code on a mini-computer. Although running times are considerably slower than on a CDC CYBER 170/6600 system, convergence is achievable in 1-2 hours. Most of this time is spent in transferring data and overlays between the computer memory and disk. We believe that the present generation of mini-computers with large virtual address capability or memory management could produce results in a much shorter time.

5. CONCLUSIONS

An outline has been presented for the design of an Intelligent Safety System for the regional overpower protection of a reactor core. This system breaks with tradition in that it uses a computed value as a trip parameter. The computation is relatively complex as it implicitly contains a full 3-D neutron diffusion calculation blended with a flux mapping procedure. Another new concept is the use not only of computers but also of array processors as essential major elements of the safety system. We have maintained the traditional two-out-of-three arrangement of hardware redundancy.

An attempt has been made to deal in a better way than in the past with redundant information. The guiding principle is that if all the information agrees, then reactor power is allowed to approach the safety limit fairly closely. However, as disagreement increases, a penalty (the dynamic error margin) is applied which lowers the maximum permissible fuel channel power. In this way, either better calibration of instruments or a more accurate and up-to-date calculation will lead to a system having a larger margin-to-trip.

Also, the system is designed to approach an optimum in any steady-state situation. In other words, the dynamic error margin will be a minimum and the margin-to-trip a maximum immediately after downloading of a new matrix from the supervisor computer to the safety computers. If the reactor is in steady state, this condition will persist. Subsequent manoeuvres, however, will cause some departure of the reactor flux shape from the shape stored in the safety computers. This will increase the uncertainty in the computed fuel channel powers and result in a reduction of the permissible power.

6. REFERENCES

[1] W.E. Gabler and H.D. Fulcher, "Babcock & Wilcox On-Line Computer Advancements in Calculating Uninstrumented Assembly Powers", presented at the ANS-CNA Joint Meeting, Toronto, Ontario, 1976 June 13-18, pp. 4-7. Also B&W report TP-649.

[2] A.M. Lopez, J.R. Enselmoz and G. Kugler, "Early Operating Experience with the Bruce NGS-A Flux Mapping System", paper in unpublished Atomic Energy of Canada report WNRE-308, 1978 April.

[3] F.N. McDonnell, private communication.

[4] M.H.M. Roshd, "The Physics of CANDU Reactor Design", presented at the ANS Conference, Toronto, Ontario, 1976 June 14-18, p. 5. Also available as AECL-5803.

[5] A. Capel and G. Yan, "Distributed Systems Design Using Separable Communications", paper to be presented at the IAEA Specialists' Meeting on Distributed Systems for Nuclear Power Plants, Chalk River, Ontario, 1980 May 14-16.


ISSN 0067-0367

To identify individual documents in the series we have assigned an AECL- number to each. Please refer to the AECL- number when requesting additional copies of this document from:

Scientific Document Distribution Office
Atomic Energy of Canada Limited
Chalk River, Ontario, Canada
K0J 1J0


Price: $15.00 per copy

1941-80