MDSPLUS data acquisition in RFX and its integration in legacy systems
A. Luchetta, G. Manduchi *, C. Taliercio
Consorzio RFX, Associazione EURATOM-ENEA per le Ricerche sulla Fusione, Corso Stati Uniti 4, 35127 Padua, Italy
Abstract
The current reconstruction of the RFX power supplies requires a re-engineering of the data acquisition system. The
use of MDSPLUS has been retained, although with a different architectural organisation. This has been possible because
MDSPLUS is available under many software platforms, including WINDOWS and several flavours of UNIX. The RFX data
acquisition architecture now defines a set of compactPCI (cPCI) crates, each hosting a CPU running embedded LINUX.
Each CPU supervises local data acquisition, possibly performing data pre-processing to reduce the amount of data to
be stored. Communication among distributed components is achieved using mdsip, the TCP/IP-based data
communication layer of MDSPLUS. This distributed approach also allows the integration of legacy CAMAC-based
data acquisition systems and new diagnostic systems using WINDOWS PCs for data acquisition. The same approach has
been taken for integrating MDSPLUS subsystems in other experiments, such as spectroscopic and Thomson Scattering
diagnostics currently used at FTU and TCV, respectively.
© 2003 Elsevier Science B.V. All rights reserved.
Keywords: Data acquisition; Embedded system; MDSPLUS
1. Introduction
The MDSPLUS data acquisition system has been used in
RFX since its early operation in 1991 [1]. Despite
the many changes in computer technology which
occurred in this last decade, the basic architecture
of MDSPLUS is still valid, and this is one of the
reasons for the increased use of MDSPLUS for data
acquisition and data access in fusion devices. The
system, originally developed for the openVMS
architecture, has now been ported to other wide-
spread operating platforms such as WINDOWS and
many flavours of UNIX [2].
It may seem surprising that the experiments
using MDSPLUS since its early development (RFX,
CMOD and TCV), and whose teams contributed
to the development of the system and to its
migration to UNIX and WINDOWS, are still mostly
using openVMS as operating platform. This can
be explained by the fact that the effort required to
develop and configure data acquisition for a large
system, such as a fusion device, discourages radical
changes in the underlying architecture, unless there
are compelling reasons to do so. This has been
the case of RFX where, due to the reconstruction
of the power supplies, the data acquisition system
* Corresponding author. Tel.: +39-049-829-5039; fax: +39-049-870-0718.
E-mail address: [email protected] (G. Manduchi).
Fusion Engineering and Design 66–68 (2003) 959–963
www.elsevier.com/locate/fusengdes
0920-3796/03/$ - see front matter © 2003 Elsevier Science B.V. All rights reserved.
doi:10.1016/S0920-3796(03)00384-3
for a large part of the device had to be developed from scratch, using new hardware for data acquisition and processing. RFX is, therefore, the first
large experiment making extensive use of MDSPLUS
on UNIX and WINDOWS systems.
The integration of the systems using these new
platforms does not represent, however, the only
major architectural change in the RFX data
acquisition system. The current trend towards smaller, but still powerful, systems led us to
develop a more distributed architecture employing
a set of CPUs, each supervising a set of data
acquisition devices.
This modular approach also allows easy
integration of new components that can be devel-
oped and tested separately. The same approach
can also be used for the development of components for use in other fusion devices, as has been
done in RFX for the development of the Thomson
Scattering diagnostic, now used at TCV, and of a
spectroscopic diagnostic currently in use at FTU.
2. Data acquisition architecture for RFX power
supplies
The original configuration for data acquisition
in RFX was a centralised one, where data acquisi-
tion, processing and storage were achieved in an
openVMS cluster consisting of two VAX (later
alpha) computers sharing the same disks. The
CAMAC front end allowed data to be recorded
from many crates, but its organisation was centralised: the CAMAC data acquisition for an entire loop was seen by the system as a single device.
This configuration caused several bottlenecks,
e.g. due to the necessary serialisation of the
CAMAC data acquisition. Several strategies have
been adopted to reduce the bottleneck effects, such
as increasing the number of CAMAC highways to parallelise CAMAC readout, and adding disk
caches to reduce disk contention. All these solu-
tions proved expensive, and often new actions
were required to achieve an appropriate resource
balancing when integrating new system compo-
nents.
In the new subsystems, compactPCI (cPCI)
crates are used instead of CAMAC crates (Fig.
1). There has recently been growing interest in
cPCI data acquisition in the fusion community
because it allows in-crate processing. Each cPCI
crate is in fact supervised by a CPU mounted on
the same rack. When a local disk is available, it is
possible to achieve full parallelism in data acquisition by letting each CPU supervise local, crate-specific
devices. Moreover, data pre-processing can be
achieved locally, with possible reduction of the
amount of data to be transferred over the network
to the central pulse database.
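The pre-processing algorithms themselves are not described here; as a hypothetical illustration of the kind of data reduction a crate CPU might perform before transfer, consider simple block averaging (the function name and the sample data are invented):

```python
def block_average(samples, factor):
    # Reduce raw samples by averaging consecutive blocks of `factor`
    # values; trailing samples that do not fill a block are dropped.
    n = len(samples) // factor
    return [sum(samples[i * factor:(i + 1) * factor]) / factor
            for i in range(n)]

# Eight raw samples become two values before network transfer.
reduced = block_average([1, 1, 3, 3, 10, 10, 30, 30], 4)
```

A factor-of-four reduction like this cuts the traffic towards the central pulse database by the same factor, trading stored bandwidth for network load.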
The disadvantage of cPCI crates is currently the limited number of available
front-end devices, although companies that used
to produce CAMAC devices are now entering the
field.
Pentium CPUs have been selected for the super-
vision of cPCI crates, mainly because of the availability of widely used, well-supported,
stable LINUX distributions [3] for this computer
platform. In our case, we have decided to adopt
the same hardware architecture for both desktop
development computers (Pentium based) and
cPCI-embedded CPUs.
Using a distributed architecture requires the
ability to manage network communication in
data acquisition. In other words, it is necessary to
achieve proper communication among tasks in
order to provide a co-ordinated execution of the
initialisation and data acquisition procedures on
each cPCI crate of the system. Proper co-ordina-
tion during the pulse sequence is achieved in
MDSPLUS by defining a set of data types specifying
the actions to be performed and their temporal
dependencies on other actions. A dedicated
tool known as Dispatcher [4] reads the current
configuration at the beginning of the sequence and
then supervises and co-ordinates the execution of
the various tasks defined in the configuration. The
Dispatcher does not execute tasks directly, but
delegates their execution to a set of servers.
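The Dispatcher's internals are not given here; the sketch below only illustrates the general idea of ordering actions by their temporal dependencies and assigning each to a server. The action names, server names, and dictionary-based data model are invented; in MDSPLUS the same information lives in tree nodes, not in-memory tables.

```python
from collections import defaultdict, deque

def dispatch_order(actions):
    # `actions` maps an action name to (server, [dependencies]).
    # Returns (action, server) pairs in an order where every action
    # follows all the actions it depends on (a topological sort).
    indeg = {a: len(deps) for a, (_, deps) in actions.items()}
    dependents = defaultdict(list)
    for a, (_, deps) in actions.items():
        for d in deps:
            dependents[d].append(a)
    ready = deque(a for a, n in indeg.items() if n == 0)
    order = []
    while ready:
        a = ready.popleft()
        order.append((a, actions[a][0]))
        for b in dependents[a]:
            indeg[b] -= 1
            if indeg[b] == 0:
                ready.append(b)
    return order

# Initialisation on each crate server can proceed in parallel; the
# store action waits until both initialisations are complete.
schedule = dispatch_order({
    "INIT_A": ("cpci1", []),
    "INIT_B": ("cpci2", []),
    "STORE": ("cpci1", ["INIT_A", "INIT_B"]),
})
```

In the real system the configuration is read from the pulse tree at the beginning of the sequence, and independent tasks execute concurrently on their servers rather than in a single ordered list.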
Distribution in data acquisition was achieved in
the original system by providing ad-hoc commu-
nication between the Dispatcher and the servers,
still retaining centralisation in data management
(achieved using shared disks in an openVMS
Cluster configuration). In the new version of the Dispatcher, commu-
nication between it and the servers is achieved
using standard TCP/IP communication, thus al-
lowing the co-ordinated execution of tasks in a
distributed environment, defining a server task for
every cPCI CPU [5].
Two different configurations have been evalu-
ated in the new system. In the first one, the whole
pulse database is maintained at a single site. A
server task for every cPCI CPU supervises local
(crate-specific) data acquisition. The task receives
notification of the work to be performed from the
central Dispatcher, and uses network-based re-
mote data access for retrieving set-up information
and storing acquired (and possibly pre-processed)
data.
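The mdsip protocol itself (message headers, typed data) is not reproduced here; the toy client/server below only illustrates the request/reply pattern a crate CPU would use to retrieve set-up information from the central node. The parameter name, value, and wire format are all invented for the example.

```python
import socket
import threading

def serve_once(srv, setup):
    # Central node side: answer a single set-up request by name.
    conn, _ = srv.accept()
    with conn:
        key = conn.recv(1024).decode()
        conn.sendall(str(setup.get(key, "")).encode())

setup_db = {"TRIG_TIME": 0.05}   # stand-in for the central pulse database
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
threading.Thread(target=serve_once, args=(srv, setup_db)).start()

# Crate CPU side: fetch a set-up parameter before data acquisition.
cli = socket.create_connection(srv.getsockname())
cli.sendall(b"TRIG_TIME")
trig_time = float(cli.recv(1024).decode())
cli.close()
srv.close()
```

Storing acquired data follows the same pattern in the opposite direction, with the crate CPU sending results that the central node writes into the pulse database.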
In the second configuration, the pulse database
is split among the nodes of the system. In this
configuration each cPCI CPU uses a local disk for
configuration set-up and storage of acquired data,
still getting messages from the central Dispatcher.
In this case a second task is active in each CPU for
exporting local data, possibly required by other
system components. Such a configuration is pos-
sible because MDSPLUS allows distributed database
components to be handled as if they were local,
transparently managing the required network
communication. Users are unaware of the actual
location of data, as the logical model offered by
the lower data access layer of MDSPLUS is a local
database organised as a tree, possibly composed of
separate components, presented as subtrees.
The advantage of the first (centralised data)
configuration is simplicity in data management.
Since data are stored at a single site, pulse databases are easy to maintain. The second (distributed
configuration) allows a better workload distribu-
tion. In particular, network traffic during data
acquisition is reduced since most of the data
transfer is achieved locally. Data access bottlenecks are also avoided by handling data access for display and on-line computation on different computers.
Fig. 1. Architecture of RFX data acquisition.
It is still not clear if the advantages offered by
the fully distributed configuration can justify the
more complex data organisation; the optimal
choice may depend on the system and network
configuration as well as on the actual data
organisation. The answer will be provided by a more extensive use of the system. It is, however,
worth noting that it is possible to switch from one
configuration to the other simply by changing a set
of environment variables.
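MDSPLUS locates the components of a pulse tree through per-tree path variables, with a `host::directory` syntax for remote components, so the switch can be sketched as follows. The variable values, host names, and subtree name below are invented for illustration.

```python
import os

# Hypothetical layouts: in the centralised case everything lives on the
# central server; in the distributed case a subtree path points at a
# crate CPU, which exports its local data over the network.
CENTRALISED = {"rfx_path": "central.rfx.local::/trees/rfx"}
DISTRIBUTED = {
    "rfx_path": "central.rfx.local::/trees/rfx",
    "edam_path": "cpci1.rfx.local::/trees/edam",  # subtree on a crate CPU
}

def select_layout(layout):
    # Apply one layout by exporting its tree-path variables.
    os.environ.update(layout)

select_layout(DISTRIBUTED)
```

Since the path variables are resolved when a tree is opened, no application code needs to change when moving between the two layouts.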
3. Integration of legacy systems
The system organisation described above refers to the new system components, in which data
acquisition is carried out by cPCI devices. The rest
of the system, however, still uses legacy CAMAC
data acquisition. In particular, diagnostic systems
developed prior to the current shutdown of RFX
make exclusive use of CAMAC for data acquisi-
tion. It has been possible to reuse the CAMAC
Serial Highways as they were connected to the alpha CPUs by means of standard SCSI ports. In
the new configuration, CAMAC Serial Highways
are connected to Pentium PCs running REDHAT
LINUX, and each CAMAC subsystem can be
integrated in the whole system in exactly the
same way as cPCI components.
This modular approach has the advantage that
subsystems can be developed and tested separately, using a local pulse database in the stand-
alone system. Once the subsystem has been
commissioned, its integration in the central data
acquisition system is straightforward. The only
action required is to redirect data access to the
central pulse database, when the centralised data
organisation is chosen, or to integrate the local
pulse database as a subtree of the main pulse database in the distributed configuration.
Not only can components be developed sepa-
rately before their integration in the central
system, but single components can be later inte-
grated in other experiments, even those not using
MDSPLUS. In RFX such an approach has been
chosen for two components, currently used in other experiments. The first component is the
data acquisition system for a part of the Thomson
Scattering diagnostic, currently in use at TCV in
Lausanne. In this case, the data acquisition system
is implemented in a WINDOWS PC and supervises
two cPCI racks providing data acquisition for 56
high speed channels and about 100 slow channels.
It defines its own pulse database, used during development and testing. Since the central data acquisition of TCV is based on MDSPLUS, the system has
then been integrated by writing the results of the
on-line data processing in the central pulse data-
base (TCV adopts a centralised data organisation),
retaining the local pulse database for storing raw
data.
The system developed for the FTU device also uses a WINDOWS PC. It supervises a spectrographic
diagnostic, connected to the PC via GPIB. Here, too, a local pulse database is defined for stand-
alone operation. The use of the local pulse
database is in this case retained even after the system has been integrated in the central data
acquisition of FTU.
4. Conclusions
The development from scratch of the data
acquisition system of the RFX power supplies
has given us the opportunity to redesign the
system architecture, still retaining the use of the
MDSPLUS software. We have moved from a
centralised organisation using two or three CPUs sharing data storage for pulse databases,
to a fully distributed architecture where several
components carry out data acquisition in cPCI
crates, possibly storing data locally. Despite the
possible distribution in data organisation, the
user’s view of a centralised database is not
changed.
The main advantage of this configuration is its scalability, especially when a distributed data
organisation is defined. In this case the subsystems
work mostly independently, with the interaction
between subsystems limited to the central action
supervision and to the need for network data
transfer when data stored on a subsystem has to
be accessed by a remote client for data processing
or display.
Though the full distribution in data organisa-
tion seems the most promising configuration,
further experience in running the system is re-
quired in order to validate this choice. We expect that new diagnostics will be developed
following the same approach used for the devel-
opment of the Thomson Scattering and the
Spectrographic systems, i.e. by developing compo-
nents as stand-alone systems, which are then easily
integrated in the central system.
References
[1] G. Flor, G. Manduchi, T.W. Fredian, J.A. Stillerman, K.A. Klare, MDSPLUS: a software for fast control and data acquisition in fusion experiment, in: Proceedings of the Seventh IEEE NPSS Real Time Conference, Julich, Germany, 1991, pp. 109–116.
[2] J. Stillerman, T.W. Fredian, The MDSPLUS data acquisition system, current status and future directions, Fusion Engineering and Design 43 (3-4) (1999) 301–308.
[3] Red Hat web-site: http://www.redhat.com.
[4] G. Flor, G. Manduchi, V. Schmidt, T.W. Fredian, J.A. Stillerman, K.A. Klare, P.L. Klinger, R.W. Wilkins, Model driven data acquisition system, in: Proceedings of SOFE, Knoxville, TN, USA, 1989, pp. 171–174.
[5] O. Barana, A. Luchetta, G. Manduchi, C. Taliercio, Java development in mdsplus, Fusion Engineering and Design 60 (2002) 311–317.