Nuclear Instruments and Methods in Physics Research B40/41 (1989) 978-980, North-Holland, Amsterdam

A HIGH-PERFORMANCE NETWORK FOR A DISTRIBUTED-CONTROL SYSTEM

G. CUTTONE 1), F. AGHION and D. GIOVE 2)

1) Istituto Nazionale di Fisica Nucleare, Laboratorio Nazionale del Sud, V.le A. Doria ang. V. S. Sofia, 95123 Catania, Italy
2) Istituto Nazionale di Fisica Nucleare and University of Milan, Via F.lli Cervi 201, 20090 Segrate (MI), Italy

Local area networks play a central role in modern distributed-control systems for accelerators. For a superconducting cyclotron under construction at the University of Milan, an optical Ethernet network has been implemented for the interconnection of multicomputer-based stations. Controller boards with VLSI protocol chips have been used. The higher levels of the ISO OSI model have been implemented to suit real-time control requirements. The experimental setup for measuring the data throughput between stations is described. The dependence of memory-to-memory data throughput on packet size has been studied for packets ranging from 200 bytes to 10 Kbytes. Results, showing the data throughput to range from 0.2 to 1.1 Mbit/s, are discussed.

1. Introduction

Local area networks (LANs) are one of the most significant areas in computing systems today. During the last decade the importance of standards for the design and implementation of networks has proven vital. Nowadays the architecture of networks is based on the well-known seven-layer open systems interconnection (OSI) reference model proposed by the ISO. The establishment of standards has encouraged the development of VLSI chips which implement large parts of the protocols and procedures in silicon.

2. Application of LANs in control systems

The availability of low-cost, powerful microprocessors to be embedded inside the equipment to be controlled, and the development of interconnection strategies between systems based on these components, make new control-system designs possible. Centralized-control architectures and single-master protocols have been superseded by multicomputer systems in the past few years.

In a distributed environment the network becomes a critical element; the choice of topology, protocol, data rate and transmission media can affect the overall performance of the control system. The following features are to be considered:
- reliability: hardware components must be insensitive to inductive and cross-talk interference;
- broadcasting: the possibility of sending data or timing information simultaneously to every connected node;
- access: every unit can communicate directly with any other unit;


- standard protocol: it allows one to benefit from integrated components available on the market, and to interface equipment from different manufacturers easily;
- speed: every unit can access data and send commands on the network in real-time mode;
- fault identification and isolation: as critical interlocks and severe real-time applications are under the control of single units, independent operation must be provided in case of failures on the network.

3. The superconducting cyclotron control system

The control system designed for the superconducting cyclotron under construction at the University of Milan consists of a three-level distributed architecture with two levels of networking [1]. An Ethernet IEEE 802.3 network connects the multiprocessor stations (dedicated to process control) and the local and central consoles. Accelerator equipment and subsystems are connected to these multiprocessor assemblies by a multidrop bus conforming to the INTEL proposal for the interconnection of eight-bit microcontrollers (Bitbus). In addition, a bridge is available to a DECnet (on Ethernet) laboratory network for off-line data analysis using graphic workstations (VAXstation 2000).
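As an illustration only, this hierarchy can be summarized as a simple data structure; the station names and counts below are invented for the sketch and do not reflect the actual installation:

    # Schematic of the three-level architecture, for illustration only;
    # station names and counts are invented.
    topology = {
        "level 1": ["central console", "local console"],    # operator level
        "level 2": ["multiprocessor station A",             # process control,
                    "multiprocessor station B"],            #   on IEEE 802.3
        "level 3": ["Bitbus equipment nodes"],              # field level
    }
    # IEEE 802.3 Ethernet links levels 1 and 2; a Bitbus multidrop segment
    # links each level-2 station to its level-3 equipment controllers.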

4. The interprocessor network

In choosing the interprocessor network we weighed the pros and cons of the different solutions available on the market. A good compromise for all the requirements discussed so far has been found in the IEEE 802.3 standard communication protocol (Ethernet).


Ethernet provides a bit rate of 10 Mbit/s and uses a carrier-sense multiple-access with collision detection (CSMA/CD) technique. A couple of drawbacks in the Ethernet standard seem undesirable for control systems [2]:
- The CSMA/CD technique does not guarantee access to the network in a predictable time, owing to the collision resolution method; this is significant in the case of a very large number of nodes (> 100) or when severe synchronizations are to be performed through the network.
- Throughput performance of the Ethernet protocol, which is excellent when large amounts of data are transmitted, decreases for short messages, owing to a fixed overhead in both the software and the hardware controllers.
These limitations have been considered not significant for the Milan control system as, at full operation, the number of nodes should be lower than 20; moreover, the time-critical controls are performed by the peripheral control stations.
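The unpredictability comes from the truncated binary exponential backoff used to resolve collisions. A minimal sketch, assuming the standard 10 Mbit/s parameters (51.2 us slot time, exponent truncated at 10, 16 attempts), illustrates how the worst-case access delay grows with repeated collisions:

    import random

    SLOT_TIME_US = 51.2      # Ethernet slot time at 10 Mbit/s
    MAX_BACKOFF_EXP = 10     # backoff exponent is truncated at 10
    MAX_ATTEMPTS = 16        # the frame is discarded after 16 collisions

    def backoff_delay_us(collision_count):
        """Delay after the n-th collision: a random number of slot
        times drawn uniformly from [0, 2^min(n,10) - 1]."""
        k = min(collision_count, MAX_BACKOFF_EXP)
        return random.randint(0, 2**k - 1) * SLOT_TIME_US

    # One sampled cumulative wait for a frame that collides repeatedly;
    # the bound grows exponentially, so the access time is probabilistic.
    total = sum(backoff_delay_us(n) for n in range(1, MAX_ATTEMPTS))
    print(f"sampled cumulative backoff: {total:.1f} us")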

The standard interface between the control stations and the Ethernet network is an Intel board (iSBC 186/51). It is based on three VLSI chips: a Manchester encoder/decoder (82501), a LAN controller (82586) and an 80186 microprocessor. The 80186 provides additional local intelligence and completely controls the 82586, which acts as a specialized coprocessor. Network services are managed on the controller board by application programs (stored in EPROM) conforming to the ISO OSI protocol up to the transport layer. Frame buffering has been added on the controller itself, instead of using the host station's system memory, to avoid critical constraints on bus access latency and throughput. The application level has been developed by us according to the particular requirements of the control system. Communication with the host CPU board is performed by means of a home-made protocol developed for message exchange on Multibus I.
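The in-house Multibus I message protocol is not documented here, so the following sketch only illustrates the kind of framing such an exchange involves; the header fields and their layout are assumptions, not the actual format:

    import struct

    # Hypothetical header for host <-> controller message exchange;
    # the real in-house protocol may differ in every field.
    HEADER = struct.Struct("<BBHH")   # dest socket, src socket, length, seq

    def build_message(dest, src, seq, payload):
        """Frame a payload with a minimal header before handing it to
        the controller board's on-board frame buffer."""
        return HEADER.pack(dest, src, len(payload), seq) + payload

    msg = build_message(dest=0x01, src=0x10, seq=7, payload=b"SET MAGNET 12.5")
    print(msg.hex())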

An optical transmission medium has been preferred to the standard coaxial cable for its EMI insensitivity and for the complete isolation it provides among the nodes. The interface between the optical bus and the Ethernet electronic components is performed by an optical transceiver (Codenoll Technology Corp.). The main element of the network is a completely passive optical star coupler capable of connecting up to sixteen nodes with a flux budget of 27 dB. The following flux budget calculation guarantees reliable operation, taking into account temperature variations and component aging:
- flux budget >= SL + 2·A_f·L_max + M + 4·A_c;
- star coupler loss (SL) = 13 dB (typical);
- flux budget margin (M) = 3 dB;
- fiber loss attenuation (A_f) = 6 dB/km;
- connector loss (A_c) = 1.5 dB;
- maximum length between the coupler and a transceiver (L_max) = 0.3 km.
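Plugging in the quoted values, the losses sum to 13 + 2(6)(0.3) + 3 + 4(1.5) = 13 + 3.6 + 3 + 6 = 25.6 dB, comfortably within the 27 dB flux budget of the star coupler.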

Exhaustive tests have demonstrated good reliability of the hardware.

5. System performance

There are many examples in the literature describing performance measurements on Ethernet, but the majority of them offer only partial guidance to a control-system designer [3,4]. In fact, most are based on analytical or simulation models dealing with network utilization and data transfer rates for the exchange of large files between different hosts. In a control-system LAN a critical component of the traffic consists of the short packets used for sending commands, and data packets seldom exceed a few Kbytes.

We made measurements with four connected stations to assess the performance of our network. All the measurements are averages over 1000 memory-to-memory transfers of data packets with sizes ranging from 100 bytes up to 10 Kbytes. Fig. 1 shows the performance obtained with packets of more than 1500 bytes for the unloaded situation and using virtual circuit services. A throughput of more than 1 Mbit/s has been measured. The cost in protocol overhead for sending a short message that fits in one physical packet (at most 1500 bytes) has been evaluated to be essentially independent of the message length. For these messages it is more significant to quote a messages/time figure than a Kbit/s one. In our system we have measured a performance of 50 messages/s. No significant differences were found using datagram services (which do not guarantee safe delivery). On the other hand, careful use of the capabilities of the board may lead to valuable results. Measurements were taken for short messages after having increased the number of internal buffers by a factor of 3; an increase of 50% in throughput was experienced.

Fig. 1. Ethernet performance: throughput vs. data length (Kbytes).


Moreover, running several virtual circuits in parallel, a figure of 100 messages/s has been measured.
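The original measurements were taken with ISO transport services on the iSBC 186/51 boards; as a present-day analogue only, the following sketch shows the shape of such a memory-to-memory throughput test over a stream (virtual-circuit-like) connection, assuming a peer that acknowledges each packet with a single byte:

    import socket
    import time

    def measure_throughput(host, port, packet_size, n_transfers=1000):
        """Send n_transfers packets of packet_size bytes and return the
        average throughput in Mbit/s, waiting for a 1-byte acknowledge
        after each packet (the assumed peer behaviour)."""
        payload = b"\x00" * packet_size
        with socket.create_connection((host, port)) as s:
            start = time.perf_counter()
            for _ in range(n_transfers):
                s.sendall(payload)
                s.recv(1)                      # per-packet acknowledge
            elapsed = time.perf_counter() - start
        return packet_size * 8 * n_transfers / elapsed / 1e6

    # e.g. measure_throughput("station2", 5000, 10 * 1024)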

6. The equipment network

Accelerator equipment can be connected to the control stations directly or by means of a multidrop serial bus (Bitbus). This standard has been developed by INTEL and, since chips have become available, it has been adopted by major firms in industrial control systems (Westinghouse, Siemens, Digital, etc.). The protocol is fully implemented on the 8044 microcontroller chips: an 8051 eight-bit microcontroller and a serial interface unit for network management have been integrated in the same component. The availability of such a device makes it possible to implement a high-performance field bus and to distribute computing power as close as possible to the equipment.

The Bitbus protocol has a hierarchical structure with a master and a number of slaves and allows command/response access to each device. A single command may be followed by 1 to 50 bytes of data in read- or write-mode operation. Longer messages are constructed from series of data packets. At the physical level Bitbus is based on the RS 485 specification and the data bus uses twisted pairs of wires. The connection of up to 250 nodes is made by means of passive drops. The distance covered by the bus depends on the transmission speed and on the quality of the cable. At the nominal frequency of 2.4 Mbit/s a maximum distance of 30 m may be reached; at 375 Kbit/s the extension ranges up to 900 m; and at 62.5 Kbit/s a maximum distance of 13 200 m can be reached. To overcome distance limitations, fiber-optic interface modules have been developed.
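As a sketch only of the packetization just described, the helper below splits a long message into the 50-byte data packets a single Bitbus command may carry; the byte layout is invented for illustration and is not the actual Bitbus frame format:

    MAX_DATA = 50   # a single command carries 1 to 50 bytes of data

    def make_command(slave_addr, command, data=b""):
        """Build one command packet (illustrative layout, not the real
        Bitbus frame: address, command code, data length, data)."""
        if len(data) > MAX_DATA:
            raise ValueError("split longer messages into packets")
        return bytes([slave_addr, command, len(data)]) + data

    def split_message(slave_addr, command, data):
        """Longer messages are built as a series of <=50-byte packets."""
        return [make_command(slave_addr, command, data[i:i + MAX_DATA])
                for i in range(0, len(data), MAX_DATA)]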

The software which implements message-passing is simple and very flexible. The same mechanism is used for the exchange of data between remote tasks and for tasks running on the same node; from the software point of view this allows one to think of Bitbus as a distributed operating system (a minimal sketch of this mechanism follows the measurement results below). Performance evaluations of the Bitbus network have been carried out. Experimental results are shown in fig. 2. Three different configurations have been tested:

Fig. 2. Bitbus performance: throughput vs. message length (bytes).

- messages exchanged within the same node;
- messages exchanged between a master (iSBX 344) and a slave (iRCB 44/10);
- messages exchanged between a host Multibus I CPU and the Bitbus master.
All the measurements refer to a nominal frequency of 375 Kbit/s. Throughput seems satisfactory for the requirements of a field bus. A bottleneck has been verified on the interface to the host CPU, but this is due to the poor nature of the FIFO structure on the master board. Tests on an improved version of the iSBX 344 have shown a 70% increase in performance.
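The location-transparent message passing mentioned above can be sketched as follows; the send interface and the (node, task) addressing are assumptions chosen for the illustration, not the actual Bitbus software interface:

    import queue

    LOCAL_NODE = 0
    local_tasks = {7: queue.Queue()}    # task id -> its queue on this node

    def send(node, task, message):
        """One call serves both same-node and remote delivery."""
        if node == LOCAL_NODE:
            local_tasks[task].put(message)             # same-node exchange
        else:
            transmit_over_bitbus(node, task, message)  # goes out on the bus

    def transmit_over_bitbus(node, task, message):
        ...  # hand the packet to the Bitbus master, addressed to (node, task)

    send(LOCAL_NODE, 7, b"READ TEMP")   # local delivery uses the same API
    print(local_tasks[7].get())         # b'READ TEMP'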

We wish to thank Mr. F. Zibra for his work during Bitbus performance measurements.

References

[1] F. Aghion et al., Proc. 11th Int. Conf. on Cyclotrons and their Applications, Tokyo (1986), Ionics (1987) p. 428.
[2] R. Rausch, Proc. Europhysics Conf. on Control Systems for Experimental Physics (1987), CERN Report, to be published.
[3] E.P. Elkins, Nucl. Instr. and Meth. 247 (1986) 197.
[4] F. Rake et al., Interfaces in Computing 2 (1984) 221.