Ethernet in substation automation


In the energy distribution world, a substation is an installation where the energy is combined, split, or transformed. A substation automation (SA) system is dedicated to the monitoring and protection of the primary equipment of such a substation and its associated feeders. In addition, the SA system has administrative duties such as configuration, communication management, and software management.

In this article, we investigate whether Ethernet has sufficient performance characteristics to meet the real-time demands of substation automation. More precisely, the evaluation is carried out with respect to switched fast Ethernet and UDP/IP as the time-critical protocol. In short:

• The problem: Substation automation is a very demanding and expensive application.

• The opportunity: Ethernet, which has steadily become faster and more efficient while remaining low in cost.

• The challenge: Making substation automation run on top of Ethernet using a standard networking protocol.

Traditional Hierarchical Systems and the Vision

Traditionally, the functionality of SA systems has been logically allocated on three distinct levels: station, bay, and process levels (a classical hierarchical system architecture).

1) The process-level functionality is more or less an interface to the primary equipment. Typical functions specified at this level are data acquisition (sampling) and issuing of I/O commands.

June 2002 IEEE Control Systems Magazine 43    0272-1708/02/$17.00 © 2002 IEEE

Skeie ([email protected]) is with ABB Corporate Research, Norway, and with Simula Research Laboratory, Norway. Johannessen is with ABB Corporate Research, Norway. Brunner is with ABB High Voltage Technologies Ltd., CH-4050 Zürich, Switzerland.



2) The bay-level functionality is concerned with coordinated measurement and control relating to a well-defined subpart of a substation (usually denoted as a bay).

3) At the topmost station level we find the functions that protect and control the entire substation or larger parts of it. As part of the station-level functions, we often also find human-machine interface (HMI) functions as well as linkages to remote control centers.

When substation automation systems were first introduced several years ago, the basic architecture was as shown in Fig. 1. The connections between station- and bay-level devices were replaced with a communication bus, usually implementing a vendor-specific communication protocol. The connection to the process-level devices was implemented with parallel wiring using standardized interfaces.

Due to progress in technology, new electronic current and voltage sensors became available that might provide significant cost reduction. This technology, however, required a different kind of interface to the rest of the system. The fact that these new sensors incorporated a microprocessor provided the opportunity to replace the analog interface with a digital communication link. In addition, the introduction of microprocessor-based electronics in the switching devices made it possible to replace the wiring between the bay and process levels with a communication bus. Such a solution meant a significant reduction in engineering cost and labor and is illustrated in Fig. 2.

Some of the first installations using this new technology were based on vendor-specific communication solutions with the goal of gaining experience with the new technology. However, since the connection between protection equipment at the bay level and sensors at the process level is typically a multivendor connection (a typical substation will have two redundant protections from different vendors), there has been a strong customer drive for standardized solutions.

Enter the Ethernet

In the mid-1990s, standardization activities were started both in the United States and in Europe. While the U.S. activities (UCA 2.0, the Utility Communication Architecture) primarily focused on standardization between the station and bay levels, the European approach (driven by IEC TC57, WG 10, 11, and 12) included the communication down to the time-critical process level from the beginning. In 1998, the two activities were merged to define one worldwide applicable standard: IEC 61850 [1]. Instead of debating between several competing fieldbuses, an agreement was reached to use Ethernet as a communication base for the station bus.

This agreement was based on the fact that Ethernet technology has evolved significantly. Starting out as a network solution for office and business applications, Ethernet today is applied more and more as a solution for high-speed communication backbone applications between PCs and industrial networks [2]. The high-speed properties of current Ethernet technology, together with its dominant position in the local area network (LAN) field, make Ethernet an interesting communication technology for substation automation usage.

While classical 10-Mbit/s Ethernet used to have some competition (most notably IEEE 802.5 token ring), fast (100 Mbit/s and up) switched Ethernet currently has none. Other communication standards like ATM [3] and fiber distributed


Figure 1. Classical substation automation system, in which the process-level devices are hardwired directly to the control and protection units.


data interface (FDDI) [4] have had little or no success in the LAN field. An emerging communication technology, the InfiniBand Architecture, recently specified by a consortium of the world's seven largest computer companies, might present gigabit Ethernet with some competition in the future. InfiniBand is a high-performance interconnect technology (where link speeds span from 2.5 to 30 Gbit/s) for connecting CPUs to I/O devices (in that respect, it will replace today's nonscalable PCI bus), as well as for networking of computers where switches and routers are key building components [5].

For various economic and standardization reasons, the industry would like to eliminate the various (and usually incompatible) communication networks at the traditional SA levels, migrating to a single, all-encompassing network concept. Therefore, it was natural to consider the use of Ethernet technology for the communication between bay and process levels as well. However, those levels pose two very demanding data transfer challenges:

• The transmission of sampled values from the sensors to the protection devices. Here we have a large amount of data (usually a sampling rate of 1,440 Hz, with current and voltage information from three phases). These data should be transmitted with a maximum delay of about 4 ms. Any loss of data should be detected.

• The transmission of trip signals. This is a short but mission-critical data packet. The maximum admissible transmission delay is in the same range as that for the sampled values.

In this article, we shall examine the potential of state-of-the-art switched Ethernet in such a common network concept holding multiple coexisting traffic types. To our knowledge, very little work has been done to examine the performance of switched Ethernet for substation automation. Tengdin et al. [6] studied LAN congestion scenarios in Ethernet-based substations. The research presented here, however, differs from [6] in several key aspects:

• The substation configurations do not contain any (shared) process bus. Moreover, the current and voltage transformers (process data sources) are hardwired directly to the protective relays.

• No protocols specified by the Internet Engineering Task Force (IETF) are deployed above the Ethernet (MAC: medium access control) layer (examples of such protocols are TCP/IP and UDP/IP).

• There is no background traffic load. The system is based on the assumption that a background traffic of five times normal SCADA traffic plus two large file transfers will have little impact on message delivery times. Our results indicate that such is not the case.

In this article, we also discuss the latest achievement of Ethernet technology, traffic class prioritization. This technology makes it possible to give mission-critical data preferential treatment over noncritical data, an important milestone on the road toward deterministic Ethernet [7].

Just How Demanding Can that Application Be?

As mentioned earlier, the critical path in a substation automation system is the information flow from sensor to protection relay to circuit breaker. The overall communication delay must remain in the low milliseconds and may not be influ-



Figure 2. Modern substation automation system realizing a process bus for real-time traffic.


enced by other data services such as file transfers to upload monitoring information stored in a device or to download new parameters. In summary, we have the following tough set of requirements:

• A data sampling rate of 1,440 Hz (IEC TC57 WG 12, protection and control class 4, 60-Hz system).

• Since everything is three-phase, we need three sets of measurements for each measurement point.

• A typical setup has eight to 12 measurement points.
• The measurement data must be sent to multiple (two to four) destinations.
• In addition to the measurement data, we must be able to handle administrative data, trip data, and file transfers.

A gross estimate of the amount of data traffic in the substation can be made from the information given above, together with an estimate of the administrative overhead. The final result is that we will have about 140,000 packets per second on the substation network. If a standard payload for the measurement data is 32 bytes with a total protocol overhead of 60 bytes, a standard packet will be 736 bits. Multiplying this number by the estimated number of packets per second gives an estimated data volume of about 103 Mbit/s, slightly more than the gross capacity of fast Ethernet.
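As a sanity check, the arithmetic behind this estimate can be reproduced in a few lines. The sampling rate, packet sizes, and the 140,000 packets/s total are taken from the text; treating every packet as a separate unicast copy (here: 12 measurement points, two destinations) is an assumption chosen to match the stated worst case.

```python
# Back-of-the-envelope reproduction of the traffic estimate in the text.
SAMPLE_RATE_HZ = 1_440     # protection and control class 4, 60-Hz system
PHASES = 3                 # three-phase measurements
MEAS_POINTS = 12           # upper end of "eight to 12"
DESTINATIONS = 2           # lower end of "two to four", sent as unicast copies

PAYLOAD_BYTES = 32
OVERHEAD_BYTES = 60
PACKET_BITS = (PAYLOAD_BYTES + OVERHEAD_BYTES) * 8   # 92 bytes -> 736 bits

measurement_pps = SAMPLE_RATE_HZ * PHASES * MEAS_POINTS * DESTINATIONS
total_pps = 140_000        # the article's figure once administrative traffic is added

print(f"measurement packets/s: {measurement_pps:,}")                      # 103,680
print(f"bits per packet      : {PACKET_BITS}")                            # 736
print(f"estimated volume     : {total_pps * PACKET_BITS / 1e6:.1f} Mbit/s")  # 103.0
```

The product lands just above the 100-Mbit/s gross capacity of fast Ethernet, which is exactly what motivates the options discussed next.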

How to Squeeze a Large Amount of Data Through Ethernet

Since the worst-case data volume is greater than fast Ethernet can handle, it may seem that there is no point to further investigation. However, Ethernet offers several possibilities that have not even been looked at. Interesting avenues open to exploration include:

• Transmitting measurement data to multiple destinations at the same time (multicasting). This approach will reduce the data volume to about 30 Mbit/s.

• Permitting one measurement node to transmit measurement data for three phases instead of one. This approach will reduce the data volume to about 50 Mbit/s.

• Using switched fast Ethernet. This approach will not reduce the data volume, but it will increase the available data transfer bandwidth [1], [7]-[9].

• Using gigabit Ethernet. This is the most expensive method, but it has sufficient bandwidth to handle the data volume easily.

Having several solutions available to us, the next step is to find out whether any of them will work in practice and then select the "best" (least expensive, fastest, most reliable) of the workable solutions.

How to Test Network Performance Without Installing Anything

Traditionally, testing these network alternatives would involve buying or leasing network components and using them to test out the communication in a test setup. In our case such a test is impractical for various reasons:

• A high-voltage substation is such an expensive piece of equipment that even the capital interest involved in dedicating it to run network tests for a few months might be prohibitive.

• An important parameter is the time from when an abnormal network condition is discovered until the circuit breaker trips. Such a condition is not easy to provoke, and thus usually only a few measurements will be made.

The alternative is to simulate the whole network. Using a competent simulator package, it is possible to model the data nodes and the network traffic for both normal and abnormal traffic situations. It is even possible to simulate abnormal conditions several times per second! Predicting network performance accurately by queuing theory (analytically) is not feasible for switched networks because it is difficult to model all the interactions between connected network elements at a detailed level (in the literature, the analytical approach has been shown to be practical for predicting performance of single-switch networks only). In that respect, the only promising approach is simulation [10].

The key word in the paragraph above is competent. Not many network simulators exist, and their capabilities differ significantly. After some research, we decided on OPNET from MIL3 [11]. This simulator, originally developed at the Massachusetts Institute of Technology, has many advantages for our purposes. One advantage is that the simulator is object oriented; the user can create new objects from existing general-purpose objects at will.

Setting the Scene

Before we start simulating (or measuring) performance, we need to address some important issues. These are:

• Protocol: A de facto protocol standard for network communication is the IETF protocol suite, usually called TCP/IP. This suite contains two transport protocols: TCP and UDP. The main differences are that TCP is slow, reliable, and connection oriented, whereas UDP is fast, unreliable, and connectionless. In the case of high-speed measurement data, TCP is more or less useless due to the protocol overhead involved. Therefore, we chose UDP as the measurement




data real-time protocol. This turns out to be more than satisfactory, since we sample the measurement values at a very high rate. Therefore, if a data set is lost, another set will be coming along shortly anyhow (to achieve reliability with a connectionless protocol like UDP, the IEC 61850 standardization committee is discussing a repeating transmission scheme for mission-critical messages).

• Disturbances: If measurement data were the only traffic on the network, speed and response would be easy to calculate. At times, however, a node will be upgraded or reconfigured, something that implies the transfer of long, structured data. We must incorporate such a scenario in our simulation, and we chose to use the file transfer protocol (FTP) for such a transfer.

• Performance requirements: The normal response requirement for substation automation is 4 ms (event, protection calculation, action). The extra-high-performance requirement is 1 ms.

• Substation topology and message rate: For the experiments performed in this study, these are dictated by actual substation topologies (configurations) and message rates.

• Switched Ethernet characteristics: Switched Ethernet is in some respects fundamentally different from classic Ethernet. The most important differences are:
  • No collisions: Switched Ethernet can still lose packets, but the collision mechanism is not used.
  • Full duplex: A switched Ethernet connection can transmit and receive different packets simultaneously.
  • Store and forward: The time a packet takes to travel through a switched Ethernet fabric is very difficult to predict, as the packet may be forced to wait in a buffer inside the switch.

Simulating a Switch-Based “Flat” Network

Although we could have chosen one tentative solution after another and run a test case for each, for the purpose of this article, we will concentrate on a somewhat mixed solution. Since the main difference (from a network point of view) between a multicast solution and a standard solution is reduced network traffic, we will ignore multicast for now. What we will simulate is a medium number of producer nodes (16), called PISAs (process interface for sensors and actuators) in the SA context, that are transmitting medium-size packets (60 bytes payload) to two different receive nodes. Recall that the sampling rate is 1,440 Hz. In addition, one PISA node will be the subject of FTP upload and download from a dedicated server, and the file size is 101 kbytes (50% send and 50% receive). If this succeeds, we can avoid the expense of gigabit Ethernet or the protocol hassle of multicasting.
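Before simulating, it is worth checking the offered load this scenario implies. The node counts, rates, and payload size are from the text; the per-packet wire overhead figure is an assumption of this sketch.

```python
# Rough offered-load check for the flat-network scenario.
PISAS = 16
RATE_HZ = 1_440
RECEIVERS = 2          # each sample unicast to two receive nodes
PAYLOAD = 60
OVERHEAD = 58          # assumed: UDP(8) + IP(20) + Ethernet framing (~30)

pps_per_pisa = RATE_HZ * RECEIVERS            # packets/s leaving each PISA
total_pps = pps_per_pisa * PISAS              # packets/s entering the switch
mbps_per_controller = PISAS * RATE_HZ * (PAYLOAD + OVERHEAD) * 8 / 1e6

print(f"packets/s per PISA   : {pps_per_pisa:,}")          # 2,880
print(f"total packets/s      : {total_pps:,}")             # 46,080
print(f"Mbit/s per controller: {mbps_per_controller:.1f}") # ~21.7
```

Roughly 22 Mbit/s into each controller is well within a 100-Mbit/s link, and each PISA's 2,880 packets/s sits below the 5,000 packets/s default IP processing rate used in the simulations, so the setup is plausible on paper before any simulation runs.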

Fig. 3 shows the resulting OPNET simulator setup. Both the PISAs and the controllers are simulated using the predefined “Ethernet Advanced Workstation” object (this object is closest to what we are trying to achieve). The important configuration parameter turns out to be the IP processing rate, which is set to a default 5,000 packets/s for the PISAs and a hefty 20,000 packets/s for the controllers.

Most of the simulation application layers tend to emphasize the client-server or the request-response paradigms. Since we are focusing on data acquisition traffic (which is more one-way in nature), we settled for a modified videoconference application. This videoconference can be configured for different traffic loads in different directions and runs on top of UDP, making it an excellent simulation vehicle for our purposes. In short, this application layer allows us to specify the amount of UDP traffic to be generated, the destination(s) of the packets, and so on.

One caveat: In a videoconference session, just as in SA applications, a packet going in one direction is not a result of a packet going in the other direction. Thus, a round-trip delay must be estimated as the sum of the delay in one direction, the delay in the other direction, plus an estimated reaction time in the controller.


Figure 3. The measurement setup in the OPNET simulator.

Figure 4. The end-to-end delay for a packet at the application level.


Discussing the Simulated Performance

Fig. 4 shows two important components of the system reaction time:

1) The time it takes a measurement packet to travel from the measurement software in the PISA to the application layer in the controller;

2) The corresponding time for a control packet to travel from the controller back to the PISA.

Under normal circumstances, the delay from the PISA to the controller is just below 0.3 ms and the delay going the other way is just above 0.3 ms, resulting in a total round-trip delay of about 0.6 ms.

Under abnormal circumstances (heavy FTP traffic), the delay from the controller to the PISA increases to about 0.85 ms, adding up to a total round-trip delay of less than 1.2 ms. This increase stems mostly from additional protocol stack software involvement, since the end-to-end delay at the Ethernet level, otherwise constant at 19 µs, increases by only about 20 µs.

Simulation Results for a Switch-Based Multilevel Network

In real life, the measurement points in a high-voltage substation may be far apart, and it is thus an advantage to have local “data concentrators” to simplify cabling. An example of such a data concentrator is a local Ethernet switch; thus, we need to investigate the effect of multiple switches in the data path. The simulation setup for such a multilevel switch configuration is shown in Fig. 5.

In this scenario, the Ethernet end-to-end delay for incoming UDP traffic to a PISA under normal load circumstances is about 29 µs. Thus, we observe an increase in the latency of 10 µs due to the introduction of one more switch in the data path between sinks and some of the PISAs. During heavy FTP load, we also notice that the Ethernet delay increases more than that of the single-switch network. The reason for this is that hooking up six PISAs by the drop link between switch 1 and switch 4 causes some buffering delay (head-of-line blocking) related to this link. At the same time, the UDP delay from the sinks to the PISAs connected to switch 4 increases during the FTP session (see Fig. 6).

Theme and Variations

Looking closely at the results for the previous tests, we see that the “IP delay” (the time a packet spends inside a node traversing the protocol stack) is a major part of the end-to-end delay time. To show the effect of this delay, we reran the multilevel tests with a high-performance PISA test node having an IP service rate of 20,000 packets/s. Fig. 7 shows the results for the UDP end-to-end delay. If we compare it with Fig. 6, we see a dramatic difference; the worst-case delay has been reduced from about 1.3 ms to about 370 µs. Such a high-performance PISA would traditionally mean a powerful CPU and a large amount of memory, but recently several companies have succeeded in incorporating large parts of the IP stack in hardware, dramatically lowering the software processing overhead.
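The dominance of the IP delay is easy to see if the stack is modeled, very roughly, as a single server whose per-packet service time is the reciprocal of the IP service rate. This single-server model is an assumption of the sketch, not a claim from the article:

```python
def service_time_us(ip_rate_pps):
    """Per-packet IP stack service time, in microseconds."""
    return 1e6 / ip_rate_pps

slow = service_time_us(5_000)    # default PISA in the simulations
fast = service_time_us(20_000)   # high-performance PISA

print(f"5,000 pkt/s  -> {slow:.0f} us per packet")   # 200 us
print(f"20,000 pkt/s -> {fast:.0f} us per packet")   # 50 us
```

A burst of only a few packets queued at 200 µs each already dwarfs the ~29 µs Ethernet-level delay, which is consistent with the reported improvement from about 1.3 ms to about 370 µs when the service rate is quadrupled.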

Fig. 7 also shows the Ethernet delay for the “sink to source” traffic. Observe that the increase in UDP “sink to source” delay closely tracks the increased Ethernet delay.

We have already determined that using a hub instead of a switch in the setup illustrated in Fig. 3 will not work; a short simulation shows more than 60,000 collisions every second and a saturated network. Introducing hubs instead of the lower switches (switch 4 and switch 5) in the setup il-


Figure 5. A multilevel switch test setup.

Figure 6. End-to-end delay at the application level for a multilevel switch setup.

Figure 7. Data transfer delay with a high-performance PISA.


lustrated in Fig. 5, and keeping the high-performance PISAs discussed above (a low-performance PISA was not even able to keep up with the traffic caused by collisions and subsequent retries), we reran the simulation. The results indicated not only that it was possible to use such a mixed network configuration but also that it is much more sensitive to abnormal traffic circumstances than the layered switch solution. The collision count leveled out at about 2,800 collisions per second, and two packets were lost at the Ethernet level (a packet is lost after 16 retransmissions). Packet loss causes retransmission at the TCP level (invoked by FTP) and has an impact on the UDP delay as well. Thus, even if it is possible to use hubs in such a setup, it is not recommended.

Simulating a Real Configuration

After the “practice” simulations on the previous examples, we decided to tackle a realistic setup for substation automation. Fig. 8 shows the setup we decided on, consisting of eight feeder bays and two transformer bays. Each bay is an almost perfect subnetwork; most of the traffic stays within the bay. Unlike conventional multilevel architectures, as discussed in the introduction, this configuration will apply Ethernet as a single medium for both process and station/interbay communication.

A feeder bay consists of:
• Three current transformer (CT)/voltage transformer (VT) PISAs;
• Four distance earth PISAs;
• One fast earth PISA (same communication specification as the distance earth PISAs);
• One circuit breaker PISA;
• A bay controller, a protection unit, and a differential protection unit.

The traffic pattern of such a feeder bay is fairly complex,

but the important data streams follow the pattern mentioned earlier:

1) A high-speed stream of CT/VT data from the CT/VT PISAs to the local bay controller, protection unit, differential protection unit, and the global busbar. In this context the data-sampling rate was specified as 1,000 Hz instead of 1,440 Hz.

2) A medium-speed stream of controller data to the circuit breaker PISA from the same nodes, of frequency 250 Hz and packet size 16 bytes.

3) A low-speed data exchange between all PISAs in a bay and the local bay controller, protection unit, and differential protection unit. The frequency of these data streams is 10 Hz and the packet size is 32 bytes.

4) Each PISA in some of the bays does a file transfer, usually a download, but one PISA also does an upload (25% of the total FTP transfer). The file transfers take place at different times and the file size is 1 Mbyte. In addition to this pattern, the controller nodes request file download.

A transformer bay may have a different purpose, but the communication requirements are very similar. Thus, for simulation purposes, the transformer bay has the same traffic pattern as a feeder bay.
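Under the assumption that every stream is sent as separate unicast copies (and that the low-speed exchange is one packet each way), the steady-state per-bay packet rate implied by streams 1)-3) can be tallied as follows. All counts are read from the lists above; the unicast and symmetric-exchange assumptions are this sketch's, not the article's:

```python
CTVT_PISAS = 3
BAY_UNITS = 3                    # bay controller, protection, differential protection
CTVT_RATE_HZ = 1_000             # this configuration uses 1,000 Hz, not 1,440 Hz
CTVT_DESTS = BAY_UNITS + 1       # the three local units plus the global busbar
PISAS_PER_BAY = 3 + 4 + 1 + 1    # CT/VT + distance earth + fast earth + CB

ctvt_pps = CTVT_PISAS * CTVT_RATE_HZ * CTVT_DESTS   # high-speed stream
cb_pps = BAY_UNITS * 250                            # medium-speed stream to the CB PISA
low_speed_pps = PISAS_PER_BAY * BAY_UNITS * 10 * 2  # 10-Hz exchange, both directions

bay_total = ctvt_pps + cb_pps + low_speed_pps
print(f"per-bay steady-state packets/s: {bay_total:,}")   # 13,290
print(f"all ten bays                  : {10 * bay_total:,}")
```

The high-speed CT/VT stream dominates by an order of magnitude, which is why the busbar node, receiving a share of it from every bay, needs a much higher IP service rate than the PISAs.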

In this setup we specified PISAs with IP service rates of 10,000 packets/s. The busbar node must handle a lot of packets, and thus a service rate of 50,000 packets/s was specified for that node.

The simulation results fall naturally into two classes: intrabay and interbay traffic. The important intrabay delay (delay inside a bay) is the delay from when a measurement is finished (CT/VT PISA) to when a trip command arrives at the circuit breaker (CB PISA). The simulations indicate a maximum delay from a CT/VT PISA to the local protection unit of 160 µs and a maximum delay from the local protection unit to the circuit breaker of about the same length (see Fig. 9). Heavy FTP traffic increases the delays by less than 300 µs.

The important interbay delay is the sum of the delay from a CT/VT PISA inside one of the bays to the busbar and from the busbar to the CB PISA inside one of the bays. In our case, the simulations indicate a maximum delay from a CT/VT PISA to the busbar (taken over all CT/VT PISAs) of about 180 µs



Figure 8. The simulation setup for a normal feeder substation.


and the maximum delay from the busbar back to the CB PISAs of about 170 µs. Again, heavy FTP traffic increases both delays by less than 300 µs. Fig. 10 shows both delays; the small peaks are due to FTP download and the large peak is due to FTP upload.
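Putting these figures together gives a rough worst-case network budget for the interbay trip path. Note that the busbar's own protection-calculation time is not included, since only communication delays are reported here, so the headroom shown is network-only:

```python
# Worst-case interbay trip-path budget, assembled from the reported maxima.
CTVT_TO_BUSBAR_US = 180
BUSBAR_TO_CB_US = 170
FTP_PENALTY_US = 300   # "less than 300 us" per leg under heavy FTP, taken as the bound

worst_case_us = CTVT_TO_BUSBAR_US + BUSBAR_TO_CB_US + 2 * FTP_PENALTY_US
print(f"network worst case : {worst_case_us} us")          # 950 us
print(f"headroom under 4 ms: {4_000 - worst_case_us} us")  # 3,050 us
```

Even with the FTP penalty applied to both legs, the communication budget stays under 1 ms, leaving most of the 4-ms response requirement for the protection calculation itself.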

Performance Impact

The excellent results above led us to suspect that the high-performance nodes might have a large impact on the results. We therefore reran the simulations twice with reduced performance specifications. In the first simulation, the PISA performance was reduced to 5,000 packets/s and the busbar node performance was reduced to 40,000 packets/s. The second simulation maintained the node performance but reduced the network speed inside the bays to 10 Mbit/s.

Fig. 11 shows the resulting delay between a U/I PISA and the busbar node. A comparison with Fig. 10 shows that while the “steady-state” delay is of the same order of magnitude, the transient behavior is very bad. The peaks are due to FTP transmission from a PISA and show up in the same way on the local PISA-to-bay delays. These results emphasize that for 100-Mbit/s switched Ethernet, the actual critical part of substation networks is the performance of the end nodes.

Fig. 12 shows what happens when the network speed inside a bay is reduced. The low-speed link is fully able to keep up with the “steady-state” traffic, but the FTP transmission from the U/I PISA introduces a long delay due to transmission queue blocking. In summary, the FTP upload becomes very critical for lower performance end nodes and slower communication links. Clearly, some queue priority mechanism must be introduced at the MAC layer if the added delay is to be kept at a reasonable level.

We have seen that processing of the UDP/TCP/IP layers places great demands on the CPU and, in that respect, could raise objections to this protocol stack. We hasten to add, however, that these demands may now be significantly reduced or even eliminated. The iReady company has recently announced a hardware implementation of TCP/IP [12], while another company, NetSilicon, has an IP implementation that is five times faster than other relevant IP implementations [13].

Introducing Traffic Class Expediting

The IEEE 802.1 Interworking Task Group recently ratified the LAN enhancement standard IEEE 802.1p, Traffic Class Expediting and Dynamic Multicast Filtering [14]. This standard provides expedited traffic (packet prioritization) capabilities. Such capabilities may help support the transmission of time-critical data in a LAN, as well as defining layer 2 protocols that support efficient multicasting in a switched or bridged LAN environment (the latter feature will not be discussed further here). Standard Ethernet offers no encapsulation of quality of service (QoS) information in its packet format; however, this weakness was remedied by


Figure 9. Intrabay end-to-end delays.

Figure 10. Global end-to-end delays.

Figure 11. Global end-to-end delays with slower PISAs.

Figure 12. Global end-to-end delays with slower network.


the IEEE 802.1Q standard, which defines an extended Ethernet packet format holding three priority bits as part of a dedicated tag control information field [15]. In that respect, IEEE 802.1Q complements IEEE 802.1p but otherwise discusses the operation and administration of virtual LAN (VLAN) topologies in a switch-based LAN environment.
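For concreteness, the tag that carries those three priority bits can be constructed as follows. The field layout (a 16-bit TPID of 0x8100 followed by a 16-bit tag control information word: 3-bit priority, 1-bit CFI, 12-bit VLAN ID) is per IEEE 802.1Q; the priority and VLAN values shown are illustrative:

```python
import struct

def make_8021q_tag(priority, vlan_id, cfi=0):
    """Build the 4-byte 802.1Q tag inserted into the Ethernet header."""
    assert 0 <= priority < 8 and 0 <= vlan_id < 4096
    tci = (priority << 13) | (cfi << 12) | vlan_id  # tag control information word
    return struct.pack("!HH", 0x8100, tci)          # TPID, then TCI, network order

# A mission-critical trip packet might carry the highest priority
# on an illustrative VLAN 10:
print(make_8021q_tag(priority=7, vlan_id=10).hex())   # 8100e00a
```

With eight priority levels available, a switch can forward tagged trip and sampled-value packets ahead of queued file-transfer traffic, which is exactly the expediting behavior discussed here.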

The main driving force for these new standards has been the multimedia market. This market area, dominated by applications such as voice over IP (VoIP), video on demand, and videoconferencing, is expanding rapidly [7]. These applications may be characterized as isochronous traffic, often with multiple recipients. In that respect, the multimedia market has driven the need for LANs to be able to deliver various types of time-critical and non-time-critical data. For this reason, the automation and control industries have started evaluating these new technologies for possible benefits. For a more comprehensive introduction to the prioritization mechanisms of switched Ethernet, refer to [7].

For the purpose of substation automation, our simulations show no immediate need for traffic expediting, since we observe little jitter at the Ethernet level. This is because the Ethernet switches are fairly lightly loaded, causing little internal buffering (head-of-line blocking). We will, however, recommend that IEEE 802.1p-compliant components be chosen for the next-generation network concept. This will ensure that despite any future system configuration migration, hard real-time requirements can still be met.

Conclusion

In this article, we have elaborated on Ethernet's usability as a communication technology for substation automation. Through extensive simulations, we have studied whether:

1) Switched Ethernet has sufficient performance characteristics to meet the real-time demands of substation automation.

2) UDP/IP on top of Ethernet may be used as the real-timeprotocol.

Moreover, we examined Ethernet's potential for use as a common network handling multiple coexisting traffic types.

The main conclusions from the simulations are:

• A switch-based fast Ethernet network handles various SA configurations with ease under all tested load conditions.

• Connecting fast data sources through hubs instead of switches is not recommended.

• The application end-to-end latency mainly stems from traversing the protocol stacks.

• The protocol stack handling performance of the nodes has a dominating influence on the UDP end-to-end latency.

• UDP/IP as a real-time protocol is able to meet the time requirements, but the end nodes must be fairly high-performance machines. This problem can be reduced or eliminated in the future by recently launched hardware and trimmed Internet stack implementations.
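The point that stack traversal dominates end-to-end latency is easy to observe directly: over the loopback interface there is no physical network at all, so the measured per-datagram time is essentially the cost of crossing the UDP/IP stack on send and receive. A rough measurement sketch (the datagram size and iteration count are arbitrary choices, and the numbers vary widely with hardware and OS):

```python
import socket
import time

def udp_loopback_rtt(n: int = 1000) -> float:
    """Return the mean time per send/receive pair over loopback.
    With no wire delay, this approximates the protocol-stack cost."""
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.bind(("127.0.0.1", 0))        # let the OS pick a free port
    rx.settimeout(1.0)               # avoid hanging on a lost datagram
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    addr = rx.getsockname()
    payload = b"\x00" * 64           # small, sample-value-sized datagram
    t0 = time.perf_counter()
    for _ in range(n):
        tx.sendto(payload, addr)
        rx.recv(128)
    dt = (time.perf_counter() - t0) / n
    tx.close()
    rx.close()
    return dt

print(f"mean stack traversal time per datagram: {udp_loopback_rtt() * 1e6:.1f} us")
```

On a typical machine the result is in the microsecond range, which is consistent with the observation that node performance, not switching, bounds the achievable UDP end-to-end latency.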

References
[1] "IEC 61850 Communication Networks and Systems in Substations, Part 5: Communication Requirements for Functions and Device Models; Part 7-2: Basic Communication Structure for Substations and Feeder Equipment," 1999.
[2] C. LeBlanc, "The future of industrial networking and connectivity," Dedicated Syst. Mag., pp. 9-11, Mar. 2000.
[3] Asynchronous transfer mode [Online]. Available: http://www.atmforum.com
[4] Fiber distributed data interface [Online]. Available: http://www.cisco.com/univercd/cc/td/doc/cisintwk/ito_doc/fddi.htm
[5] "InfiniBand Architecture Specification 1.0," Oct. 2000 [Online]. Available: http://www.infinibandta.com
[6] J. Tengdin, M.S. Simon, and C.R. Sufana, "LAN congestion scenario and performance evaluation," in Proc. IEEE Power Engineering Society Winter Meeting 1999, pp. 919-924.
[7] Ø. Holmeide and T. Skeie, "VoIP drives realtime Ethernet," in Industrial Ethernet Book, vol. 5, Mar. 2001, pp. 26-29.
[8] IBM, "Migration to switched Ethernet LANs," IBM, Tech. Rep., 1998.
[9] "Real time services (QoS) in Ethernet based industrial automation networks," Hirschmann Rheinmetall Elektronik, White Paper, 1999.
[10] J. Duato, S. Yalamanchilli, and L. Ni, Interconnection Networks: An Engineering Approach. Los Alamitos, CA: IEEE Computer Society, 1997.
[11] OPNET Modeler [Online]. Available: http://www.mil3.com/opnet_home.html
[12] iReady, TCP/IP implementation in hardware [Online]. Available: http://www.iready.com/technology
[13] NetSilicon, NET+Fast IP [Online]. Available: http://www.netsilicon.com
[14] "IEEE 802.1D, Information Technology—Telecommunications and Information Exchange Between Systems—Local and Metropolitan Area Networks—Common Specification—Part 3: Media Access Control Bridges," 1998 (includes IEEE 802.1p).
[15] "IEEE 802.1Q, Standard for Local and Metropolitan Area Networks: Virtual Bridged Local Area Networks," 1998.
[16] Cisco Systems, Internetworking Technology Overview [Online]. Available: http://www.lsiinc.com/univercd/cc/td/doc/cisintwk/ito_doc/index.htm

Tor Skeie is a Senior Scientist at ABB Corporate Research and is also affiliated with Simula Research Laboratory in Norway. He received a master's degree in computer science from the University of Oslo in 1993 and became a Dr. Scient. in 1998, also at the University of Oslo. At ABB Corporate Research he has conducted several international projects studying the applicability of Ethernet with regard to industrial use.

Svein Johannessen is an independent consultant with ABB Corporate Research. He received a master's degree in mathematics from the University of Oslo in 1970. Before consulting, he was a research scientist at the Central Institute for Industrial Research in Norway, specializing in automation, microprocessor technology, and robotics. In 1985 he joined Tandberg Data Display, where he later became Director of R&D. He has been an associate lecturer at the University of Oslo, Department of Physics, since 1982.

Christoph Brunner graduated as an electrical engineer at the Swiss Federal Institute of Technology in 1983. He started his career as a hardware development engineer. Later he was a project manager and development manager for telecontrol systems and RTUs used for utility automation. He now works as a project manager at ABB High Voltage Technologies Ltd. in Zurich, Switzerland, where he is responsible for the communication architecture of the substation automation system. He is convenor of WG 12 of IEC TC57. He is a member of IEEE-PES and IEEE-SA.
