Project Documentation Document SPEC-0021
Revision G
TCS Software Design Description
Alan Greer, Chris Mayer, Dave Terrett & Pat Wallace
Observatory Sciences Ltd. & Rutherford Appleton Laboratory
Date 11th September 2015
TCS Software Design Description
SPEC-0021 Rev G Page i
Revision Summary:
1. Date: 16th December 2004  Revision: A  Changes: Original version
2. Date: 9th May 2006  Revision: B  Changes: Updated to reflect the Panguitch release
3. Date: 11th September 2006  Revision: C  Changes: Updated for the TCS CDR
4. Date: 28th September 2010  Revision: D  Changes: Major update for the TCS PDR to incorporate software infrastructure design changes and new requirements over the last 4 years
5. Date: 28th January 2011  Revision: E  Changes: Updated for the TCS FDR to incorporate comments from the PDR
6. Date: 11th December 2012  Revision: E1  Changes: Updates for acceptance. Rewrite of the section on interlocks. Added sections on non-solar targets, planets and orbital elements. Updated diagrams and scanning descriptions.
7. Date: 28th February 2013  Revision: F  Changes: Updates following the FAT review. Added a table of health events and alarms.
8. Date: 11th September 2015  Revision: G  Changes: Updated for the Canary 9 release.
Table Of Contents
1. Introduction ..... 1
1.1 Purpose ..... 1
1.2 Scope ..... 1
1.3 Definitions, Acronyms and Abbreviations ..... 1
1.3.1 Definitions ..... 1
2. System Overview ..... 2
2.1 Introduction ..... 2
2.2 TCS Deployment Diagram ..... 5
2.3 Implementation Language ..... 7
2.4 Source Code ..... 8
2.5 Overall Use of DKIST Common Services Framework ..... 8
3. System Context ..... 8
3.1 Context Diagram ..... 8
3.2 Principal Systems ..... 10
3.2.1 The Observatory Control System ..... 10
3.2.2 Instrument Control System ..... 10
3.2.3 The Data Handling System ..... 10
3.3 TCS Subsystems ..... 10
3.3.1 Mount Control System ..... 10
3.3.2 Enclosure Control System ..... 11
3.3.3 M1 Control System ..... 12
3.3.4 Top End Optical Assembly Control System ..... 12
3.3.5 Feed Optics Control System ..... 13
3.3.6 The Wavefront Correction Control System ..... 14
3.3.7 The Acquisition Control System ..... 14
3.3.8 The Polarimetry Analysis & Calibration System ..... 14
3.3.9 Control of occulter ..... 16
3.4 Other systems ..... 16
3.4.1 Time System ..... 16
3.4.2 Stellar Wavefront Sensor ..... 17
3.4.3 Weather Station ..... 17
3.4.4 Hand-box ..... 17
4. System Design ..... 17
4.1 Configurations ..... 18
4.2 Events ..... 21
4.3 The Controller Model ..... 22
4.4 Controller Commands ..... 23
4.4.1 Load ..... 24
4.4.2 Init ..... 24
4.4.3 Startup ..... 24
4.4.4 Shutdown ..... 25
4.4.5 Uninit ..... 25
4.4.6 Remove ..... 25
4.4.7 Submit ..... 25
4.4.8 Pause ..... 26
4.4.9 Resume ..... 26
4.4.10 Cancel ..... 26
4.4.11 Get ..... 27
4.4.12 Set ..... 27
4.5 Engineering Screens ..... 27
4.6 TCS Controllers ..... 31
4.7 Base Controllers ..... 33
4.7.1 BaseController ..... 33
4.7.2 ManagementController ..... 33
4.7.3 Timer Controller ..... 35
4.8 Base Technical Architecture Blocks ..... 36
4.8.1 PropertyTAB ..... 36
4.8.2 PostingTAB ..... 38
4.8.3 TCS Custom TABS ..... 40
4.9 Health ..... 40
4.10 Alarms ..... 40
4.11 Logging ..... 40
4.11.1 Status messages ..... 41
4.11.2 Debug messages ..... 41
4.12 Properties ..... 41
4.13 Scripting ..... 41
4.14 Pointing and Tracking ..... 42
4.14.1 The Interface to the Telescope Control System ..... 42
4.14.2 Supported Solar Coordinate Systems ..... 49
4.14.3 Solar Ephemerides ..... 51
4.14.4 Thermal Corrections ..... 52
4.14.5 Quasi static alignment ..... 53
4.14.6 Time to Limits ..... 53
4.15 The sollib library ..... 53
4.16 Scanning ..... 54
4.16.1 Random ..... 55
4.16.2 Grid ..... 56
4.16.3 Raster ..... 56
4.16.4 Spiral ..... 59
4.17 Controlling Devices That Track ..... 59
4.17.1 Change of mode to tracking ..... 60
4.17.2 Change of track identifier ..... 61
4.18 Position feedback ..... 61
4.19 Telescope and Shutter Alignment ..... 62
4.19.1 Enclosure Aperture Plate ..... 62
4.20 Handling The Zenith Blind Spot ..... 63
4.20.1 Zenith blind spot use case ..... 63
4.21 World Coordinate System Information ..... 64
4.22 Failure Modes ..... 65
4.22.1 Responding to invalid data ..... 65
4.22.2 Posting invalid data ..... 65
4.22.3 Not posting data ..... 66
4.22.4 Internal Failures ..... 66
4.22.5 Subsystem Failures ..... 66
5. Detailed Design ..... 66
5.1 Pointing Kernel Package (tpk) ..... 66
5.1.1 Controller "atst.tcs.tpk" ..... 68
5.1.2 Controller "atst.tcs.tpk.pk" ..... 70
5.1.3 Controller "atst.tcs.tpk.sollib" ..... 77
5.1.4 Package Lifecycle ..... 79
5.1.5 Controller "atst.tcs.tpk.tpoint" ..... 83
5.2 Guide Package (guide) ..... 83
5.2.1 Controller "atst.tcs.guide" ..... 83
5.2.2 Package Lifecycle ..... 84
5.3 Time Package (time) ..... 84
5.3.1 Controller "atst.tcs.time" ..... 85
5.3.2 Controller "atst.tcs.time.iers" ..... 85
5.3.3 Package Lifecycle ..... 87
5.4 Ephemeris Package (ephem) ..... 89
5.4.1 Controller "atst.tcs.ephem" ..... 89
5.4.2 Package Lifecycle ..... 90
5.5 Environment Package (environment) ..... 92
5.5.1 Controller "atst.tcs.environment" ..... 92
5.5.2 Package Lifecycle ..... 93
5.6 Thermal Package (thermal) ..... 94
5.6.1 Controller "atst.tcs.thermal" ..... 94
5.6.2 Package Lifecycle ..... 94
5.7 System Package (sys) ..... 95
5.7.1 Controller "atst.tcs" ..... 95
5.7.2 Controller "atst.tcs.seq" ..... 98
5.7.3 Package Lifecycle ..... 98
5.8 System Loading and Initialization ..... 99
5.9 System Startup ..... 100
5.10 Operational Startup ..... 100
5.10.1 Tracking but closed up ..... 101
5.10.2 Opening ..... 104
5.10.3 Acquisition and calibration ..... 105
5.10.4 Calibration and Observing ..... 106
5.11 Operational Shutdown ..... 106
5.11.1 Stowing the telescope ..... 106
5.11.2 Powering down ..... 106
5.12 TCS Level Configurations ..... 107
5.13 Health and Alarms ..... 107
5.14 Interlocks ..... 108
5.15 Wavefront correction Modes ..... 108
5.16 Night time Pointing ..... 109
5.16.1 Pointing Update Tool ..... 109
5.16.2 Night time pointing ..... 110
5.17 Hand Box Interface ..... 111
5.17.1 Movement of telescope and enclosure axes ..... 112
5.17.2 Selection of coordinate system and tracking mode ..... 112
5.17.3 Movement of tip-tilt motion for M2 and Feed Optics mirrors ..... 112
5.17.4 Open and close M1 mirror cover and enclosure cover ..... 113
5.17.5 Movement of each PA&C calibration optic ..... 113
5.17.6 Enabling & disabling mechanisms ..... 114
6. Testing ..... 114
7. Compliance Matrix ..... 115
8. References ..... 118
1. INTRODUCTION
1.1 PURPOSE

This document describes the structure of the software that constitutes the DKIST
Telescope Control System, how it interfaces to the remainder of the DKIST control system and how the
requirements expressed in the DKIST TCS Specification [1] are met.
The intended audiences of this document are:
- The reviewers of the TCS Acceptance Release
- The developers of the TCS work package
- The developers of the TCS sub-system work packages
- The developers of the OCS and instrument systems
The layout of this document is as follows:
Section 2 provides a brief overview of the whole TCS system. It describes what the TCS is and how it is
structured and deployed. Section 3 sets the TCS within the context of the remainder of the DKIST control
system and in particular specifies which other systems the TCS must interface with. The detail of what
passes over each of these interfaces is described in separate Interface Control Documents (ICDs). Section
4 is a more detailed view of the infrastructure that the TCS will be built with, known as the DKIST
Common Services. Finally, Section 5 moves on to the detailed design of the TCS. It describes the
packages that make up the TCS together with specifics of the TCS components and classes.
A compliance matrix can be found in Section 7, cross-referencing how the design described in this
document meets the TCS design requirements [1]. The master compliance matrix can be found in [30].
This design is an update of that presented at the original TCS CDR in October 2006 to incorporate the
changes to both the Common Services Framework (CSF) and the TCS subsystems that have occurred
over the previous six years.
1.2 SCOPE

The software described by this design is the DKIST Telescope Control System. It is referred to
throughout this and other DKIST documentation as the Telescope Control System or more usually by its
acronym the TCS.
The purpose of the TCS is to provide a high-quality, stable image of a specified point on the solar disk or
corona to instruments at the Gregorian or Coudé focal planes. The TCS achieves this by coordinating and
controlling the activities of its subsystems under instruction from the Observatory Control System (OCS).
Note that as defined here the DKIST Telescope Control System does not include direct control of any
DKIST hardware. That job is the responsibility of the TCS subsystems which are described in separate
design documents.
1.3 DEFINITIONS, ACRONYMS AND ABBREVIATIONS

Specific definitions, acronyms and abbreviations used in this document are described below. For a more
general glossary and acronym list as used in DKIST see [4].
1.3.1 Definitions
Telescope subsystems – the subsystems of the TCS are the Telescope Mount Assembly (TMA), the M1
Control System (M1CS), the Top End Optical Assembly (TEOACS) which consists of the M2 and heat
stop assemblies, the Feed Optics Control System (FOCS), the Acquisition Control System (ACS), the
Wavefront Correction Control System (WCCS), the Enclosure Control System (ECS) and the Polarimetry
Analysis and Calibration System (PA&C) containing the Gregorian Optical Station (GOS).
Virtual telescope – the astrometric building block used to construct the pointing kernel. The virtual
telescope is an ideal telescope that responds instantaneously to new demands. The demands can be in the
form of a changed sky target or image position. The virtual telescope can also predict target or image
coordinates given the mount encoder readings.
Slew – a discontinuous change in position or velocity demand from the TCS.
Track – a continuously changing position or velocity demand from the TCS.
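The distinction between a slew and a track can be illustrated with a short sketch. The class below is purely illustrative and is not part of the TCS design; the class name, threshold and method names are invented for this example. It flags a new position demand as a slew when it is discontinuous with the previous demand, and as a track otherwise.

```java
// Illustrative sketch only: classifies successive position demands as a
// slew (discontinuous change) or a track (continuous change). The class
// name, threshold and method names are invented and do not appear in the
// TCS design.
class DemandClassifier {
    private final double maxTrackStep; // largest step (deg) still treated as tracking
    private Double lastDemand = null;  // previous demand; null until the first call

    DemandClassifier(double maxTrackStep) {
        this.maxTrackStep = maxTrackStep;
    }

    /** Returns "SLEW" for a discontinuous demand, "TRACK" otherwise. */
    String classify(double demand) {
        boolean discontinuous =
                lastDemand == null || Math.abs(demand - lastDemand) > maxTrackStep;
        lastDemand = demand;
        return discontinuous ? "SLEW" : "TRACK";
    }
}
```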
2. SYSTEM OVERVIEW
2.1 INTRODUCTION

The DKIST control system consists of four principal systems: the telescope control system (TCS), the
observatory control system (OCS), the data handling system (DHS) and the instrument control system
(ICS). The OCS is responsible for high level observatory operations like scheduling, allocating resources
and running “experiments”. Experiments consist of a series of observations with a particular
instrumentation setup. The data from these experiments is stored and displayed by the DHS.
The role of the TCS in this system is to:

- point and track the telescope in a range of coordinate systems
- perform scans and offsets coordinated with other observatory activities
- monitor and set the modes of the active and adaptive optics systems
- provide interactive control for the observatory operators
Before looking in detail at how the TCS specific software performs these functions, an overview of the
structure of the TCS software in relation to the infrastructure it must be built with is shown in the package
diagram below. For a full description of the Common Services Framework (CSF) see [2].
The CSF package at the center of the diagram contains the classes that the TCS is built on. This includes
the CSF classes and the DKIST application base classes. The DKIST application base is a suite of
support software that extends the DKIST Common Services beyond the technical architecture. The
application base includes controllers that provide generally useful functional behavior that is not specific
to a particular DKIST principal system. All TCS controller classes are sub-classed from one of these
base classes. The CSF package also contains the Container Manager (CM) classes that will be
responsible for the deployment and lifecycle of the TCS. The GUI and TCS packages depend on the CSF
package. There is also a dependency of the CSF package on the DB package. The databases maintained
inside the DB package provide all permanent storage (including logs) required by the TCS. There are
clearly defined interfaces into the DB package provided by the CSF package and the TCS utilizes these
fully through the use of CSF classes.
At the same level as the CSF package is the GUI package. The GUI package contains the Java
Engineering Screens (JES) package. The JES is an extension of the Component (CSF) class that can be
used to graphically lay out an engineering user interface using the Java Swing widget set. Once designed,
the screens can be activated such that they connect to other components as well as register for events.
This allows configurations to be submitted as well as status to be displayed. Full details of the JES and
how it can be used can be found in [24].
Figure 1 TCS overview package diagram. Packages provided by the CSF are shown in blue.
The Container Manager is used to load, initialize and start up the various components of the TCS.
Depending on how it is configured, this can be done automatically or step by step. The manager is
flexible enough that it can start up any container of the overall DKIST control system and then load and
initialize any components into these containers. The selection of containers and components is done
graphically. Together, the JES and CM provide the interactive control of the TCS for the observatory
operations. The CM also provides a “service” that allows automated booting of containers upon a
hardware boot. This ensures that the TCS will be operational and ready to receive input within 5 minutes
of a cold, power-off start of its hardware as required in [1].
The package to the bottom right of the figure constitutes the TCS itself. The diagram below shows a
more detailed view of this package.
Figure 2 TCS package diagram.
This diagram shows the individual TCS packages and the main classes contained within each package.
Each class present in this diagram is a CSF controller; they are all derived from base controller classes.
This diagram can therefore be thought of as a controller view of the TCS.
At the heart of the TCS lies the pointing kernel package called tpk. This software system produces a
stream of demands to the tracking mechanisms of the DKIST in order to accurately follow the current
science target or guide object. Its main role therefore is to meet the requirements of the first and second
bullet points above. The exact value of the demands produced will depend on how the TCS is configured.
Three of the four controllers present in the tpk package are completely generic and could be configured to
control a range of different telescopes. In order to keep telescope specific code out of the kernel the
fourth controller (of class “atst.tcs.tpk.AtstPk”) translates the outputs of the kernel into the signals
required by the DKIST. It also manages any commands destined for the pointing kernel and applies any
changes required before submitting telescope generic configurations to the pointing kernel controllers. A
more detailed description of this package can be found in Section 5.1.
Configuration is handled by the system package. This package verifies and manipulates the
configurations received from the OCS. Configurations consist of a set of attribute value pairs that the
TCS will match by sending configurations to its subsystems and to its own internal components. A
configuration might consist of a heliocentric coordinate pair plus an observing mode for example. In
response to such a configuration the TCS would adjust its internal state such that suitable streams of
position demands were generated for the mount, enclosure and coudé rotator. The practical result of this
would be that the telescope would slew to the target such that it would appear stationary at the specified
point in the coudé rotator frame.
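As a minimal sketch of the attribute-value model described above, the class below collects named attributes into a single configuration. This is an illustration of the concept only, not the CSF Configuration class, and the attribute names in the usage example are invented rather than taken from the TCS attribute dictionary.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal sketch of a configuration as a set of attribute-value pairs.
// Illustrative only: not the CSF Configuration class, and the attribute
// names used with it below are invented.
class Configuration {
    private final Map<String, String> attributes = new LinkedHashMap<>();

    /** Adds or replaces an attribute; returns this so calls can be chained. */
    Configuration set(String name, String value) {
        attributes.put(name, value);
        return this;
    }

    /** Returns the attribute value, or null if the attribute is not present. */
    String get(String name) {
        return attributes.get(name);
    }

    boolean has(String name) {
        return attributes.containsKey(name);
    }
}
```

A heliocentric target plus observing mode would then be expressed as something like `new Configuration().set("coordSys", "heliocentric").set("mode", "observe")`.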
Originally the TCS had an important role to play in managing and monitoring the thermal environment of
the telescope. That responsibility has been taken over by the Facility Management System (FMS) and its
associated Facility Control System (FCS). The TCS has retained a thermal controller for historical reasons
but currently it is just a stub.
The environment package handles the reading and averaging of the environmental parameters needed by
the TCS such as temperature, humidity and pressure.
The guide package can handle incoming data from a range of guiders. Its main role will be to take slowly
varying offloads from the WCCS and apply them to its pointing model. This will result in slow
corrections to the pointing of the mount.
The time package handles the reading of the International Earth Rotation Service (IERS) parameters and
the subsequent submission of those parameters to the pointing kernel. It also communicates with the time
base controller (available from the CSF) to query status information, including (but not limited to) the
readout mode, interrupt generation and time card status.
Finally the ephem package provides a standalone ephemeris service. It consists of three controllers that
maintain their own pointing kernel data context. The ephemeris service can be queried to provide
information about target positions for future reference. It has no interaction with any of the other TCS
packages (other than the basic lifecycle commands not shown in the diagram). It is described in Section
5.4.
2.2 TCS DEPLOYMENT DIAGRAM
Another view of the TCS can be found by looking at the TCS deployment diagram. This is shown in the
figure below.
Figure 3 TCS deployment diagram.
The deployment diagram shows where the main components of the TCS will run. As can be seen, the
deployment diagram is quite straightforward. The TCS will run inside three containers deployed onto one
dedicated Linux workstation. One of the containers is for the Java controller classes and is named
“TCS1”, the next is for the C++ controller classes and is named “TCS2” and the last is for the TPOINT
controller and is named "TCS3". Standard hardware for running DKIST Linux applications has not been
specified. In the case of the TCS where there are no hardware interfaces almost any general purpose
Linux box will do. With regard to the OS, it is constrained to be compatible with the CSF. The intention
is that the TCS development machine will become the TCS summit workstation once development is
complete and the TCS becomes operational.
This workstation will have at least two separate network ports. One will be configured to communicate on
the OCS network and the other will communicate with the TCS subsystem network. The amount and type
of data that will be passed to the TMA will not require it to have its own separate control LAN (see REQ
4.4-0600).
Only the TCS containers and associated JES screens are supplied as part of the TCS system. All other
systems/services are utilized by the TCS but are provided as part of the DKIST CSF. The CSF server
runs on a separate dedicated Linux machine. The permanent data storage (PostgreSQL databases) is also
maintained on this machine. The Environment component is also shown as running on a separate
machine which is expected to host the DKIST weather server. It could however also run on the TCS
machine as it no longer has to read information directly from the weather station but just registers for the
appropriate events. The OCS has also been included to show how it communicates with the TCS through
the CSF server.
The TCS engineering interface will be written in Java and communicate with the TCS using the DKIST
common services framework. Multiple engineering interfaces can be run concurrently (if required) and
the diagram shows two such instances running on two separate Linux machines. The engineering
interface running on a separate machine is not a requirement and it could just as easily run on the same
machine as the TCS; in fact the interface can be run on any machine supporting the Java DKIST CSF.
The current expectation however, is that the TCS GUI will execute on a separate machine from the TCS
itself to avoid any potential problems with the graphical display starving the TCS of resources. For
further details refer to Section 4.5.
The device at the bottom of the diagram is the hand-box. This is a portable hand-held device for
controlling engineering operations. It is not part of the TCS but the TCS must provide an interface to
support it. This interface is the same as the OCS/TCS public interface. A description of the hand-box
functionality can be found in Section 5.17.
The TCS is expected to be robust and run continuously. Its normal operational state will be “running” i.e.
it will have executed its init and startup functionality. It will only be shut down for engineering purposes
and not for example at the end of each observing day. If the TCS is shut down it will return all system
resources to the operating system (i.e. there will be no memory leaks, unreturned buffers, blocked TCP
sockets etc.). Shutting down the TCS will not automatically shut down its subsystems nor will it try to
place them in a default state before it is shut down.
The TCS will assume that all communications within this environment are secure; it will not implement
any firewall, encryption or password access. All security considerations will be dealt with by the DKIST
CSF and the observatory wide computer systems.
2.3 IMPLEMENTATION LANGUAGE
Earlier versions of the TCS design envisioned the TCS being written entirely in C++. The main
reasons for this were that the pointing kernel of the TCS, although requiring modification for the DKIST,
was written as a combination of C/C++, and that the DKIST Common Services supported C++ containers
and components/controllers.
However previous prototyping was carried out using Java containers, components and controllers and this
has demonstrated that there is no compelling reason to write the remainder of the TCS in C++ just
because the pointing kernel was in C. In fact some of the base classes (including specialized management
controllers) are only going to be supported in Java and it is therefore desirable to have some of the TCS
controllers written in Java to utilize this already present functionality.
Another option looked at was writing as much of the TCS as possible in Java and simply wrapping the C
code using JNI (and this was used for some of the prototyping). However, given the current state of the
CSF implementation in both Java and C++ it has been decided that the TCS can in fact be written in a
combination of both Java and C++ controllers, fully utilizing the fact that communication between CSF
controllers is language independent.
The design presented here assumes that the controllers interfacing to the pointing kernel, sollib (the solar
ephemeris library), TPOINT and the ephemeris service (a copy of the pointing kernel controllers) shall be
written in C++ and all other controllers shall be written in Java. This allows construction of the TCS with
minimal (or no) wrapping code required between languages.
2.4 SOURCE CODE
All source code written and developed as part of the TCS contract will be provided. The code will
initially be developed and held in a local CVS repository to which the DKIST project will have access at
any time. Following the first delivery the code base will be switched to the DKIST repository in Tucson.
Proprietary source code owned by TPOINT software used to build the pointing kernel is not freely
distributable but will be provided to the project on CD in a sealed envelope.
All source code shall be documented in a manner consistent with good software practices. All source
files shall contain a header containing the version number, revisions, author(s), and functional description.
All functions within a source file shall have a description of the interface and operation of the function,
and all source shall be clearly commented.
2.5 OVERALL USE OF DKIST COMMON SERVICES FRAMEWORK
The TCS is one of the major systems of the DKIST. As such, it must work seamlessly with the other
components that make up the overall DKIST control system. In particular it must accept and act on
configurations sent by the OCS and in turn must send configurations to its subsystems. The TCS is
therefore built using the DKIST Common Services Framework which in turn constrains its design.
At the highest level the TCS will consist of three containers as described above. One container will be
used for the Java controllers and the other two for the C++ controllers. The TCS containers will then hold
a number of controllers. Each of these controllers will be initialized and then started by the container
manager via the init and startup methods. During this phase the TCS components will attempt
to make connections to the other DKIST components with which they need to communicate and will
retrieve their initial state from the runtime database through the use of the property service.
The various pieces of the system will be implemented as DKIST controllers (sub-classing from base
controllers) providing the command/action/response behavior needed to handle configurations. Details of
the controller model in general and the particular controllers and components within the TCS can be
found in [23] and section 4.3. The base classes that will be used are described in section 4.7.
3. SYSTEM CONTEXT
This section describes the TCS in relation to the other components of the overall DKIST control system. It
concentrates in particular on the functioning of the TCS subsystems and what the TCS must do to support
those subsystems. These sections describe what the TCS has to do. The details of how the TCS
implements the requirements are described in Sections 4 and 5.
3.1 CONTEXT DIAGRAM
The TCS context diagram is shown in Figure 4 below. The nodes at the top of the diagram are the
Observatory Control System, Data Handling System and the Instrument Control System (ICS). The ICS
consists of a number of individual instrument controllers. These, together with the Telescope Control
system are the principal systems of the DKIST control hierarchy.
The eight nodes towards the bottom of the diagram are the TCS subsystems. The TCS will monitor and
control these subsystems such that the other principal systems do not have to worry about the details of
how they are structured or controlled. The flows labeled “Configuration” and “Event” between the TCS
and the Subsystems box are duplicated for each TCS subsystem i.e. the TCS maintains a separate
connection to each of its subsystems and posts and receives separate event streams from each of them.
The remaining nodes are the Handbox, Weather Station and Stellar Guider. These are stand-alone systems
that the TCS does not control but which can use the TCS’s public interface to send configurations or from
which the TCS can accept data.
Figure 4 TCS Context Diagram
An important point to note about this diagram is that all configurations flow from the OCS through the
TCS and then on to the TCS subsystems. The DKIST control system is hierarchical in this respect. Events
can flow freely in either direction between any components within the DKIST software system whereas
configurations can only flow from a master system to its subsystems. A TCS subsystem will never send a
configuration to the TCS, only responses to configurations that the TCS has sent to it.
Neither the ICS nor the DHS directly configures the TCS other than by mediation through the OCS. The
TCS does not generate DHS data directly. Any header data required by the DHS from the TCS is
accessed through the DKIST database.
3.2 PRINCIPAL SYSTEMS
3.2.1 The Observatory Control System
Within the DKIST hierarchy some principal systems are more equal than others. The OCS in particular is
the main coordinator of the observatory and the master of the TCS. As can be seen from the context
diagram the OCS sends configurations to the TCS and listens to events generated by it. The details of
these configurations and events can be found in [15]. It should also be noted that there are no event flows
from the OCS that the TCS listens to, and neither does the TCS send configurations to the OCS.
3.2.2 Instrument Control System
Each instrument on the DKIST will be controlled by its own dedicated instrument controller that will
control its internal components and coordinate data collection. Each instrument controller is itself under
the control of the Instrument Control System. Access to the TCS by the Instrument Control System will
be mediated by the OCS to avoid access conflicts by multiple instrument controllers.
There is no special interface that the TCS provides specifically for instruments and no restrictions on the
commands that can be sent via the OCS. An instrument is free to monitor any events published by the
TCS or even the TCS subsystem events if it wishes. The TCS however does not monitor any events
generated by an instrument nor does it send any instrument a configuration to execute.
3.2.3 The Data Handling System
The role of the DHS is to manage archiving and retrieving of DKIST instrument data as well as providing
quick look and some data reduction capabilities. The DHS will gather all data from the TCS via the OCS
and it is not expected to ever send the TCS a configuration.
3.3 TCS SUBSYSTEMS
3.3.1 Mount Control System
The main role of the TCS in relation to the mount control system is to supply a continuous stream of
position demands to the azimuth, altitude and coudé rotator axes in order to track a target and stabilize the
rotation of the image. Overall its functions are:
Initialize all servo electronics, sensing hardware, and other electronics;
Shut down all of the above equipment;
Control the configuration and motion of the three rotational axes;
Move to specified demand positions;
Follow a stream of position demands;
Open and close the mirror cover;
Enable the laser signals to the enclosure aperture plate;
Record the metrology from the TMA sensors;
Prevent the TMA from performing invalid or incorrect configurations; and
Provide up-to-date status information on all TMA equipment.
The position demands to the mount will consist of the coefficients of an interpolating polynomial that has
been fitted to the position demands of an axis at times ti:

    p_n(t) = Σ_{i=0..n} a_i · Π_{j=0..i−1} (t − t_j)
The position demands will have passed through the full pointing chain to convert the user’s target
position in whatever input frame was specified to the mount frame. With these coefficients the mount can
evaluate the demand at any time t by the following algorithm
Set b_n = a_n.
For i = n−1, …, 0: set b_i = a_i + (t − t_i)·b_{i+1}.
Then p(t) = b_0.
The TCS will provide a 2nd order polynomial, so n will be equal to 2, and will evaluate the position
demands at times t0, t0+0.5 and t0+1.0 s. This will also allow the mount to compute velocity and
acceleration demands if need be.
The advantages of the TCS providing the polynomial coefficients in a single event, rather than say a single
position demand, are:

- The mount does not have to wait for several events to arrive before it can compute velocity and
  acceleration demands.
- The 2nd order fit means that any temporary interruptions to the demand stream from the TCS have no
  impact on the mount tracking performance. If need be the mount can use the same set of coefficients
  for several seconds without loss of tracking accuracy, although in practice they will be refreshed every
  50 ms.
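The fit-and-evaluate scheme above can be sketched in Java. This is an illustrative stand-alone example, not TCS code: the class and method names are invented, and it assumes the three sample times t0, t0+0.5 and t0+1.0 s described above. The TCS side fits the Newton-form coefficients by divided differences; the mount side evaluates them by nested multiplication.

```java
// Hypothetical sketch of the demand polynomial described above.
public class NewtonDemand {

    /** Fit Newton coefficients a0..a2 through (t[i], p[i]), i = 0..2. */
    public static double[] fit(double[] t, double[] p) {
        double a0 = p[0];
        double d01 = (p[1] - p[0]) / (t[1] - t[0]);   // first divided difference
        double d12 = (p[2] - p[1]) / (t[2] - t[1]);
        double a2 = (d12 - d01) / (t[2] - t[0]);      // second divided difference
        return new double[] { a0, d01, a2 };
    }

    /** Evaluate p(t) = a0 + a1(t-t0) + a2(t-t0)(t-t1) by nested multiplication. */
    public static double evaluate(double[] a, double[] t, double time) {
        double b = a[2];                              // b_n = a_n
        for (int i = 1; i >= 0; i--) {
            b = a[i] + (time - t[i]) * b;             // b_i = a_i + (t - t_i)·b_{i+1}
        }
        return b;
    }

    public static void main(String[] args) {
        double[] t = { 0.0, 0.5, 1.0 };
        // Samples of the exact quadratic 2 + 3t + 4t^2, so reconstruction is exact.
        double[] p = { 2.0, 4.5, 9.0 };
        double[] a = fit(t, p);
        System.out.println(evaluate(a, t, 0.25));     // prints 3.0
    }
}
```

Because the fit is exact for a quadratic, the mount can reuse one coefficient set for several evaluation times, which is the robustness property the bullet list above claims.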
3.3.2 Enclosure Control System
The main mechanisms that the TCS must control via the ECS are the carousel, shutter and entrance
aperture cover. The entrance aperture cover is a straightforward open/close device but the others require
a stream of position demands when tracking, just as do the mount azimuth, altitude and rotator axes.
Internally the ECS may have an entrance aperture that can be used to fine-tune the position of the
enclosure, but this aperture will not be directly controlled by the TCS.
In order to meet the requirements on the TCS for control of these mechanisms, the TCS will employ a
separate pointing flow for the enclosure that can have a different target from that of the mount. The
current specification of the enclosure is that the aperture plate can be controlled independently from the
shutter and tracks guide signals from a laser alignment system that locks the aperture plate to the
movements of the mount. Initial studies of the enclosure however indicate alignment of the entrance
aperture can be maintained without the need for closed-loop control via this laser system. The assumption
in this document is that guide signals from the laser alignment system will not be needed during
observing but that the alignment system will be used to calibrate the pointing model of the carousel and
shutter.
During solar observing the starting point for the enclosure pointing model will by default be the center of
the solar disk although it will be possible to override this by providing a helioprojective x, y. The pointing
model will take account of any offset of the center of rotation of the enclosure from that of the mount.
The output of this pointing model will thus be a stream of position demands in azimuth and altitude
encoded as a set of polynomial coefficients as described above for the mount.
Using the center of the solar disk as the target for the enclosure will ensure that the carousel and shutter
always receive a smooth position demand stream no matter what offsets the mount may be performing.
However, an option will be provided to override the enclosure target with the mount base target should it
be required that the enclosure follows the mount. At this stage we further assume that it will be up to the
ECS to decide how to meet the altitude demands by moving the shutter continuously or by a combination
of shutter and aperture plate movements.
3.3.3 M1 Control System
The TCS’s role in the control of M1 lies mainly in the setting of modes for primary mirror actuators and
the thermal control.
The mode of the primary mirror actuators can be off, passive or active. In the off mode the actuators are
unpowered and stationary. As part of the start of day preparations prior to sun rise the actuators will be
instructed to move to the passive mode. In this mode a force map will be applied via the actuators
dependent on the mirror orientation and perhaps temperature. Once observing and with the WCCS system
generating active optics corrections the actuator mode will be moved to active. The force map applied
will now be the sum of the passive pre-calibrated force map plus the integrated corrections from the
WCCS.
In the TCS requirements document passive mode is referred to as “Open loop” and active mode
corresponds to “closed loop active optics” and “closed loop adaptive optics” (REQ 4.4-0200). The M1CS
will not know whether it is in closed loop “active” or “adaptive” optics mode.
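The actuator mode logic above reduces to a simple rule: no force in off mode, the pre-calibrated force map in passive mode, and the force map plus the integrated WCCS corrections in active mode. The sketch below illustrates that rule only; the class, method names and the stand-in force map are assumptions, not the real M1CS interface.

```java
// Illustrative sketch of the M1 actuator force modes described above.
public class M1ActuatorForces {

    public enum Mode { OFF, PASSIVE, ACTIVE }

    /** Stand-in for the pre-calibrated, orientation-dependent force map (N).
     *  A real map would be calibrated per actuator; this placeholder just
     *  scales with the gravity component at the given elevation. */
    static double passiveForce(int actuator, double elevationDeg) {
        return 100.0 * Math.sin(Math.toRadians(elevationDeg));
    }

    /** Force demand for one actuator given the mode and the integrated WCCS correction. */
    public static double forceDemand(Mode mode, int actuator, double elevationDeg,
                                     double integratedWccsCorrection) {
        switch (mode) {
            case OFF:     return 0.0;   // actuators unpowered and stationary
            case PASSIVE: return passiveForce(actuator, elevationDeg);
            case ACTIVE:  return passiveForce(actuator, elevationDeg) + integratedWccsCorrection;
            default:      throw new IllegalStateException("unknown mode");
        }
    }

    public static void main(String[] args) {
        // Active mode: passive map value plus the WCCS correction (about 51.5 N here).
        System.out.println(forceDemand(Mode.ACTIVE, 0, 30.0, 1.5));
    }
}
```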
The purpose of the M1CS thermal control system is to keep the primary mirror at or below ambient
temperature even with a full heat load to minimize the effects of mirror seeing. This is achieved by
cooling the primary from the rear. There are four thermal control modes in the M1CS: off, on,
precondition and active.
If the mode is set to off then no mirror cooling is applied. The on state corresponds to a fixed amount of
cooling which can correspond to between 0 and 100% of the maximum cooling available. The remaining
two modes are the ones that will be used most widely. In the pre-condition state the M1CS will attempt to
match a specified set point temperature at sun rise using historical information on the typical rates of
external temperature change. The set point time will be calculated by the TCS to be the time at which the
sun will rise above the minimum elevation limit of the telescope. The default set point temperature will be
the expected ambient temperature at that time based on historical data less the offset from ambient that
will be used when the thermal mode transitions to active. The TCS can send a start time attribute to the
M1CS for the next day at any time after sun rise for the current day.
3.3.4 Top End Optical Assembly Control System
The top end optical assembly consists of two independent systems, the heat stop assembly and the M2
assembly. The M2 assembly in turn consists of the M2 positioner for all 6 degrees of freedom and a fast
M2 tip-tilt system that provides rigid body motion of M2 at up to 80 Hz with amplitudes up to 26 arcsec.
The TCS is required to perform the following operations:
Select the wave front correction mode for the M2 positioner and tip-tilt systems
Monitor the status and health
Select the thermal cooling mode
Select the Lyot stop position
The heat stop assembly plays a critical role in the DKIST in that it acts as a field stop to transmit a 5
arcmin FOV and drastically reduce the heat load on the subsequent optics. During coronal observations
the Lyot stop and (optional) occulter can be deployed.
The occulter is used to block the light from the solar disk when observing points in the corona. The TCS
will generate a trajectory stream for the occulter consisting of radius r and an angle Θ where
Θ = Θp + Θx,y + C

where Θp is the parallactic angle of the point being observed, Θx,y is the angle of the tangent to the solar
limb at the point where the line from the solar center to the helioprojective x,y coordinates of the target
cuts the limb and C is a constant that will need to be calibrated. The parallactic angle will change
continuously as the target is tracked.
The radial coordinate of the occulter in units of the solar radius will be
r = √(x² + y²)

where x and y are the helioprojective coordinates of the target. Conversion to linear displacement will
use the current solar radius (in arcsecs) and an image scale of 23.93 arcsec/mm [27]. Due to the poor
image quality at prime focus it will be necessary to add an offset to this radial position rather than try and
place the occulter exactly on the limb.
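The occulter trajectory computation above can be sketched as follows. The parallactic angle and the limb-tangent angle are taken as inputs here, since computing them from the target coordinates is the pointing kernel's job; the class and method names, and the choice of units, are illustrative assumptions.

```java
// Sketch of the prime-focus occulter demand computation described above.
public class OcculterDemand {

    /** Demand angle: theta = theta_p + theta_xy + C (all in degrees here).
     *  C is the calibration constant mentioned in the text. */
    public static double demandAngle(double parallacticDeg, double limbTangentDeg, double cDeg) {
        return parallacticDeg + limbTangentDeg + cDeg;
    }

    /** Radial coordinate r = sqrt(x^2 + y^2), with x, y the helioprojective
     *  target coordinates expressed in units of the solar radius. */
    public static double radius(double x, double y) {
        return Math.hypot(x, y);
    }

    /** Convert a radius in solar radii to millimetres at prime focus using the
     *  current solar radius in arcsec and the 23.93 arcsec/mm image scale. */
    public static double radiusMm(double radiusSolarRadii, double solarRadiusArcsec) {
        return radiusSolarRadii * solarRadiusArcsec / 23.93;
    }

    public static void main(String[] args) {
        double r = radius(0.6, 0.8);              // 1.0: a point exactly on the limb
        System.out.println(radiusMm(r, 960.0));   // roughly 40 mm for a 960 arcsec radius
    }
}
```

The same computation applies to the lower GOS occulter described later, with the 3.94 arcsec/mm image scale substituted, and with the calibrated offset from the limb added on top of r.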
The M2 assembly is used to position the secondary mirror, control the fast tip/tilt system and manage the
thermal control. An open loop model will be used to maintain the position of M2 supplemented by
adjustments from the WCCS when available. Note that the TCS has no control of the fast tip/tilt system
as all interactions between it and the WCCS take place outside the bandwidth of the TCS.
3.3.5 Feed Optics Control System
The Feed Optics Control System controls and monitors the thermal state of M3, M4 and M6 as well as
maintaining the positions of the moveable mirrors M3 and M6. These two mirrors are positioned using an
open loop lookup table supplemented by quasi-static alignment from the WCCS. The TCS will maintain
the FOCS position control in one of three modes:
Off – in this mode the mirror drives are disabled and the mirrors are stationary
Passive – The drives are energized and the mirrors are servoed to the positions predicted by the internal lookup tables
Active – in this mode the mirrors will be positioned using the lookup tables but will add on any quasi-static adjustments generated by the WCCS.
The thermal modes provided by the FOCS are:
Off – the thermal control system is off
On – the thermal control system is running at a fixed percentage of the maximum cooling rate where that percentage is controlled by the user
Precondition – the mirrors are being sub-cooled in preparation for the start of observing
Active – the thermal control system is running and maintaining the mirror temperatures at a set point based upon expected full disk solar illumination.
As for M1, the TCS must send a command to the FOCS to sub-cool the mirrors in preparation for the
start of day observing. Although the time at which preconditioning will end will be the same as for M1
i.e. the predicted time of sun rise above the lower elevation limit, it may be that the times and frequency
that the command is sent will be different due to the different thermal capacities of the mirrors themselves
and the cooling available.
3.3.6 The Wavefront Correction Control System
The TCS is required to perform the following operations on the WCCS:
Select the wave front correction mode for the AO/aO systems
Monitor the health and status
Select the output storage type for the context viewer
Select the calibration mode for control of the PA&C.
The WCCS itself operates in the following main modes [10]:
“off” Controllers are powered up but devices are powered down and in an undefined state.
“aoCalibrate” The WCCS coordinates the calibration of the various WFC systems.
“idle” The devices are powered up and held in their center or flat position.
“open” The devices are driven by lookup tables and the aO system runs.
“closed” The aO and HOAO systems will run in closed loop
“limbTracking” The GOS limb tracker will run driving M2 Tip/Tilt.
The WCCS operates somewhat differently compared to the other TCS subsystems in that it maintains a
direct connection to the PA&C so that it can calibrate its various WFC systems. When direct control is
passed to the WCCS via the aoCalibrate mode, the TCS will reject any configurations it receives that
would affect the PA&C.
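The rejection rule above amounts to a small guard in the TCS configuration path: while the WCCS holds direct control of the PA&C, any incoming configuration touching the PA&C is refused. The sketch below is a hedged illustration only; the attribute-prefix convention used to recognize PA&C-bound attributes is an assumption, not the documented interface.

```java
import java.util.Set;

// Minimal sketch of the aoCalibrate guard described above.
public class PacGuard {

    /** True if the configuration should be rejected while the WCCS is calibrating. */
    public static boolean reject(boolean wccsInAoCalibrate, Set<String> configAttributes) {
        if (!wccsInAoCalibrate) {
            return false;  // normal operation: the PA&C is reachable via the TCS
        }
        // Hypothetical convention: PA&C-bound attributes share a "pac." prefix.
        return configAttributes.stream().anyMatch(a -> a.startsWith("pac."));
    }

    public static void main(String[] args) {
        System.out.println(reject(true, Set.of("pac.upperGos.slide1", "mount.az"))); // true
        System.out.println(reject(false, Set.of("pac.upperGos.slide1")));            // false
    }
}
```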
3.3.7 The Acquisition Control System
The TCS interface to this system has been simplified since its guiding function was removed. The TCS’s
main task is now simply to turn the camera on at the start of the day and off at the end. A camera can be
in one of three modes:
Off – the acquisition camera will be powered off
Passive – the camera will be powered on and producing images using the current default parameters. These images will not however be passed to the DHS.
Active – the camera is powered on and producing images that are sent to the DHS. By default these images are not tagged to be saved.
It is TBD whether the TCS will set the camera to passive at the end of the day or whether it will set it to
off.
3.3.8 The Polarimetry Analysis & Calibration System
The PA&C is a complex system in that it consists of a large number of polarizers, retarders and
modulators. It is divided into two main components known as the upper and lower Gregorian Optical
station (GOS). The upper GOS has three linear slides each containing four rotating optical mounts and
two stationary mounts. Those in the third slide are modulators that can rotate continuously. All the
elements of the upper GOS are placed in the beam before the Gregorian focus. The lower GOS is located
at the Gregorian focus and consists of a rotating carousel that can place different elements into the beam.
Available elements consist of apertures, calibration targets and an occulter that will supplement the
occulter at prime focus. In general the TCS will control all these elements by specifying that the PA&C
go to a particular mode. The modes available are:
3.3.8.1 ACQUIRE
This mode is used when pointing to a solar target. If the configuration is valid the PA&C will
automatically position all its sub-systems to allow the beam path to pass through. The upper GOS will
move all three slides to the clear position, the lower GOS target carousel will select the clear position, all
lower GOS artificial light sources will be deactivated, the telescope calibrator will be stowed, and the
pupil mask will be stowed. No sub-system attributes are specified with this mode.
3.3.8.2 SETUP
This mode is used in advance of experiments to assist in location and marking of solar targets as well as
instrument setup, test, and alignment. If the configuration is valid the polarizer, retarder, modulator,
target carousel, artificial light sources, telescope calibrator, and pupil mask shall begin motion / activation
to a demanded position. The demanded position is provided in the same configuration and may be any or
all of the position attributes for the sub-systems.
3.3.8.3 POLCAL
This mode is used during collection of polarimetry calibration data for instruments. It provides the ability
to change the configuration of a subset of the PA&C components through the use of named
configurations. These named configurations will be maintained in the Property database and provide
settings for the upper GOS polarizer stage and rotation angle, upper GOS retarder stage and rotation
angle, and lower GOS selection of aperture, occulter, or dark shutter.
3.3.8.4 TELCAL
This mode is used during collection of polarimetry calibration data for the telescope. It provides the
ability to change the configuration of a subset of PA&C components through the use of named
configurations. These named configurations will be maintained in the Property database and provide
settings for the upper GOS polarizer stage and rotation angle, upper GOS retarder stage and rotation
angle, lower GOS selection of aperture or dark shutter, telescope calibrator deployment and
polarizer/retarder orientations, and pupil mask.
3.3.8.5 WAVECAL
This mode is used to allow instruments to do wavelength calibrations. It only provides the ability to
switch one or more of the lower GOS calibration light sources flags to the active or inactive state.
3.3.8.6 GAIN
This mode is used to flat field the cameras. It provides the ability to activate one or more lower GOS
artificial light sources or use the sun itself. In the case where one or more artificial light sources are
activated the lower GOS target carousel will also automatically select the dark shutter position to block
out sun light. In the case where no artificial light sources are activated the lower GOS target carousel will
automatically select the clear position to allow sun light through.
3.3.8.7 DARK
This mode is used to take dark frames with the cameras. If the configuration is valid this mode will cause the lower
GOS target carousel to automatically select the dark shutter position.
3.3.8.8 FOCUS
This mode is used to support focus activities for the instruments. Different instruments will require
certain lower GOS targets for their focusing routines. If the configuration is valid the lower GOS target
carousel will be moved to the demand position.
3.3.8.9 ALIGN
This mode is used to support alignment activities for instruments. Different instruments will require
certain targets for their alignment routines. If the configuration is valid the lower GOS target
will be moved to the demand position.
3.3.8.10 TARGET
This mode serves as a catch-all for any other calibration or alignment needs of instruments. The Focus
and Align modes are abstracted versions of the Target mode that limit which components of the PA&C
can be configured. However in the Target mode configuration may also be specified for the upper GOS
polarizer stage and orientation angle, upper GOS retarder stage and orientation angle, lower GOS target
carousel position, and lower GOS light source selection.
3.3.8.11 OBSERVE
This mode is used once the instruments start collecting observation data. In the Observe mode the PA&C
will simply maintain the current configuration of its subsystems. No sub-system attributes are specified
with this mode.
3.3.8.12 AOCALIBRATE
This mode is used to indicate if the PA&C is currently receiving configurations from the Wavefront
Correction Control System (WCCS). If the configuration is valid the PA&C will set an .aoCalibrateFlag
to the demanded value. Note that this flag is for informational purposes only and does not drive any
functionality in the PA&C.
Once the TCS has placed the PA&C in this mode it will reject any configurations sent to it that try to
affect the PA&C. The WCCS will have a direct connection to the PA&C and will be in full control.
3.3.9 Control of occulter
The control of the PA&C occulter in the lower GOS will be similar to that of the prime focus occulter. The demand angle will be the sum of the parallactic angle of the target, the angle of the tangent to the solar limb in helioprojective coordinates and a constant. The constant will need to be calibrated. The radial coordinate will also be calculated in a similar fashion, except that the image scale used will be 3.94 arcsec/mm [27].
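The demand calculation just described can be sketched as follows. This is an illustrative example only; the class and method names are hypothetical, not the real PA&C interface, and the calibration constant would come from the calibration procedure mentioned above.

```java
/** Illustrative sketch of the lower-GOS occulter demand calculation;
 *  names are hypothetical, not the actual TCS/PA&C API. */
public class OcculterDemand {
    /** Image scale at the lower GOS quoted in the text: 3.94 arcsec/mm. */
    static final double ARCSEC_PER_MM = 3.94;

    /** Demand rotation angle (deg): parallactic angle + solar-limb
     *  tangent angle + a calibrated constant, normalised to [0, 360). */
    static double demandAngleDeg(double parallacticDeg,
                                 double limbTangentDeg,
                                 double calibConstDeg) {
        double a = (parallacticDeg + limbTangentDeg + calibConstDeg) % 360.0;
        return a < 0 ? a + 360.0 : a;
    }

    /** Radial stage demand (mm) for a target radius given in arcsec. */
    static double radialDemandMm(double radiusArcsec) {
        return radiusArcsec / ARCSEC_PER_MM;
    }

    public static void main(String[] args) {
        System.out.println(demandAngleDeg(350.0, 15.0, 2.5)); // wraps past 360
        System.out.println(radialDemandMm(394.0));            // 100 mm
    }
}
```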
3.4 OTHER SYSTEMS
This section describes the remaining nodes in the context diagram. The nodes described here are neither principal systems nor TCS subsystems but do interact with the TCS, either by sending configurations or by providing data through events.
3.4.1 Time System
The time system is not shown as a separate node in the context diagram as it is contained within the TCS. It is intended that the TCS will use the standard time interface hardware recommended by the project, i.e. a Spectracom T-Sync PCI Express card. An important consideration for the TCS is that it uses TAI as its time standard and not UTC, which is discontinuous at the time of a leap second. The TCS will therefore be
locked to GPS time, which is offset from TAI by a constant 19 s. The host computer clock will also be synchronized to this same time. This is important because many components within the TCS will need to access accurate time from Java, and we want to avoid always doing this via JNI, as the Spectracom drivers are written in C.
The TCS will use UTC for all status information.
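The fixed GPS-to-TAI relationship above can be sketched directly in Java, avoiding JNI for routine time arithmetic. This is a sketch only: real code would read the GPS-locked host clock (or the Spectracom card) rather than parse a literal, and the class name is illustrative.

```java
import java.time.Instant;

/** Illustrative sketch of the GPS->TAI conversion noted in the text:
 *  TAI = GPS + 19 s, a constant offset with no leap-second steps. */
public class TaiClock {
    static final long GPS_TAI_OFFSET_SECONDS = 19;

    /** Convert a GPS-locked timestamp to TAI by adding the fixed offset. */
    static Instant gpsToTai(Instant gpsTime) {
        return gpsTime.plusSeconds(GPS_TAI_OFFSET_SECONDS);
    }

    public static void main(String[] args) {
        Instant gps = Instant.parse("2015-09-11T00:00:00Z");
        // TAI is always exactly 19 s ahead of the GPS reading.
        System.out.println(gpsToTai(gps));
    }
}
```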
3.4.2 Stellar Wavefront Sensor
The TCS is required to be able to construct pointing maps using night-time point sources (REQ 4.4-0315) and to perform closed loop guiding during construction and integration (see Section 2.1.1 of [1]).
Currently REQ 4.4-0150 specifies that the format of the guide signals is as defined in the interface between the Acquisition Control System and the TCS. However, the acquisition system has no guiding capability and so no guide signal format is specified.
In view of the lack of detailed requirements and specification for this stellar wavefront sensor, it is proposed that the sensor produce an event atst.wccs.tipTiltOffload with the attributes id, dx, dy, type and frame. The identifier will be a unique id that the WFS will change each time it measures a new guide signal. The dx and dy attributes will be the offsets from the reference position measured by the sensor in arcsecs. The type will be one of “relative” or “absolute”, in terms of applying the correction as a change to the pointing model or a change to the target position. Finally, the frame attribute can be “azel”, “coude”, “m6”, “gos” or “prime”. This attribute is used by the guiding controller to determine which de-rotation, and possibly reflection, to apply to the incoming signals.
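The proposed event payload could be packed as a set of string-valued attributes, following the CSF convention that attribute values are string-encoded. The helper below is purely illustrative; it is not the CSF event API, and the fully qualified attribute names are an assumption.

```java
import java.util.LinkedHashMap;
import java.util.Map;

/** Sketch of packing the proposed atst.wccs.tipTiltOffload event
 *  attributes (id, dx, dy, type, frame) as string-valued entries.
 *  Hypothetical helper, not the real CSF event service. */
public class TipTiltOffloadEvent {
    static Map<String, String> build(long id, double dxArcsec,
                                     double dyArcsec, String type,
                                     String frame) {
        // Valid values from the text.
        if (!type.equals("relative") && !type.equals("absolute"))
            throw new IllegalArgumentException("bad type: " + type);
        Map<String, String> attrs = new LinkedHashMap<>();
        attrs.put("atst.wccs.tipTiltOffload.id", Long.toString(id));
        attrs.put("atst.wccs.tipTiltOffload.dx", Double.toString(dxArcsec));
        attrs.put("atst.wccs.tipTiltOffload.dy", Double.toString(dyArcsec));
        attrs.put("atst.wccs.tipTiltOffload.type", type);
        attrs.put("atst.wccs.tipTiltOffload.frame", frame);
        return attrs;
    }

    public static void main(String[] args) {
        System.out.println(build(42, 0.12, -0.05, "relative", "coude"));
    }
}
```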
3.4.3 Weather Station
The type of weather station that will be deployed at the DKIST is TBD. The working assumption at this time is that the TCS will interface to an event stream atst.ws.environment that will at a minimum contain the attributes temperature, pressure, humidity and corresponding validity flags. The TCS will use these events to predict and periodically adjust the refraction corrections that it is using in its pointing model.
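To illustrate how such environment data might feed a refraction correction, the sketch below uses a common textbook approximation (58.3″ tan z − 0.067″ tan³ z at standard conditions, scaled by pressure and temperature). It is emphatically not the TCS pointing-model algorithm, which would be far more rigorous; the formula and limit of applicability (moderate zenith distances) are standard results, not taken from this document.

```java
/** Textbook-approximation sketch of weather-dependent refraction;
 *  illustrative only, not the TCS pointing-model code. Valid only at
 *  moderate zenith distances. */
public class RefractionSketch {
    /** Approximate refraction (arcsec) at zenith distance z (deg),
     *  scaled for ambient pressure (hPa) and temperature (deg C). */
    static double refractionArcsec(double zenithDeg,
                                   double pressureHPa,
                                   double temperatureC) {
        double t = Math.tan(Math.toRadians(zenithDeg));
        double stp = 58.3 * t - 0.067 * t * t * t;  // standard conditions
        // Scale for non-standard pressure and temperature.
        return stp * (pressureHPa / 1010.0) * (283.0 / (273.0 + temperatureC));
    }

    public static void main(String[] args) {
        // Refraction at 45 degrees zenith distance, near-standard weather.
        System.out.println(refractionArcsec(45.0, 1010.0, 10.0));
    }
}
```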
3.4.4 Hand-box
The hand-box is a portable device that will be used by the operator, engineer or observer to control the following aspects of the TCS (see REQ 4.4-0665):
- movement of each telescope and enclosure axis at slow and fast fixed rates
- selection of coordinate system and tracking mode
- movement of tip-tilt motion for M2 and feed optics mirrors
- opening and closing of the M1 mirror cover and enclosure cover
- movement of each PA&C calibration optic
- enabling and disabling mechanisms
The hand-box will be connected by cable to the hand-box PC, which will interpret the switch settings etc. of the physical hand-box and construct configurations that will be sent to the TCS. This hand-box interface is described in more detail in Section 5.17.
4. SYSTEM DESIGN
This section describes the system design of the TCS and in particular its use of the DKIST common
services. The TCS design is constrained by the need to interact seamlessly with the rest of the DKIST
control system and to use the DKIST CSF in its operation [23]. The first part of this section therefore
recaps and summarizes the DKIST design with regard to the use of configurations and in particular the
controller model. Section 4.3 then describes the controller model as applied to the TCS.
4.1 CONFIGURATIONS
Configurations lie at the heart of the DKIST CSF. A configuration consists of a list of attributes and values that the system to which the configuration is sent must match. This matching is handled by a DKIST controller, which is described later. The internal configuration of a controller can be set up by issuing a series of set commands (section 4.4.12) or, more usually, by sending a complete configuration with a submit command (section 4.4.6).
The class diagram for a configuration is shown below. The diagram shows that a configuration is a
specialized form of an “AttributeTable” class that also contains a unique configuration ID and header tag.
An “AttributeTable” consists of a collection of “Attribute” objects each of which has a name and value.
Internally an “Attribute” value is stored as a string representation of the real value and so multiple
attributes of different types can be stored together in the same “AttributeTable”.
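The string-backed storage described above can be sketched minimally as follows. The real CSF AttributeTable is much richer; this hypothetical class only shows how attributes of mixed types coexist as string representations with typed accessors.

```java
import java.util.HashMap;
import java.util.Map;

/** Minimal sketch of string-backed attribute storage; illustrative,
 *  not the CSF AttributeTable class. */
public class AttributeTableSketch {
    private final Map<String, String> values = new HashMap<>();

    /** Every value, whatever its type, is stored as a string. */
    void set(String name, Object value) {
        values.put(name, String.valueOf(value));
    }

    String getString(String name)   { return values.get(name); }
    double getDouble(String name)   { return Double.parseDouble(values.get(name)); }
    boolean getBoolean(String name) { return Boolean.parseBoolean(values.get(name)); }

    public static void main(String[] args) {
        AttributeTableSketch t = new AttributeTableSketch();
        t.set("atst.tcs.elevation", 45.5);        // a double...
        t.set("atst.tcs.horizonChecking", true);  // ...and a boolean coexist
        System.out.println(t.getDouble("atst.tcs.elevation"));
    }
}
```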
Configurations go through a set of states during their lifecycle as illustrated below. These states are set
by the controller that receives the configuration depending on which command the controller receives, or
if the controller completes its action (either successfully or unsuccessfully). The commands are submit,
cancel, pause, resume and abort. The completion possibilities are Done, Cancelled or Aborted.
A configuration is submitted to a controller (using the submit() method). The controller first performs a quick check of the attributes in the configuration to make sure that those attributes are valid. The controller then passes the attributes to the doSubmit() method to make sure that the combination of attributes makes a valid configuration. If the attributes do make up a valid configuration, the configuration is scheduled and the submit method returns OK. If the configuration is not valid then the submit method returns with a problem code. When the action's scheduled execution time is reached (the default is immediate execution), the doCanRun() method is called, providing a hook for a final check that the configuration can be executed at that moment. If the doCanRun() method returns true then the doSerialAction() method is called, followed by the starting of an action thread. The action thread calls the doAction() method, which does the 'work'. Eventually the method and action complete. Three events are usually generated for an action: one when the action is scheduled, another when the action is started, and a final one when the action completes.
Configurations can also be paused, resumed, and cancelled. This will lead to additional events being
generated.
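The lifecycle just described can be sketched as a small state machine. The state and command names below are illustrative, chosen from the commands and completions listed in the text; they are not taken verbatim from the CSF.

```java
/** Sketch of the configuration lifecycle: commands and completions
 *  move a configuration between states. Illustrative names only. */
public class ConfigLifecycle {
    enum State { SCHEDULED, RUNNING, PAUSED, DONE, CANCELLED, ABORTED }

    static State apply(State s, String command) {
        // cancel/abort are only meaningful while the configuration is live.
        boolean live = s == State.SCHEDULED || s == State.RUNNING || s == State.PAUSED;
        switch (command) {
            case "start":  return s == State.SCHEDULED ? State.RUNNING : bad(s, command);
            case "pause":  return s == State.RUNNING   ? State.PAUSED  : bad(s, command);
            case "resume": return s == State.PAUSED    ? State.RUNNING : bad(s, command);
            case "done":   return s == State.RUNNING   ? State.DONE    : bad(s, command);
            case "cancel": return live ? State.CANCELLED : bad(s, command);
            case "abort":  return live ? State.ABORTED   : bad(s, command);
            default:       return bad(s, command);
        }
    }

    private static State bad(State s, String command) {
        throw new IllegalStateException(command + " not valid in state " + s);
    }

    public static void main(String[] args) {
        State s = State.SCHEDULED;
        s = apply(s, "start");
        s = apply(s, "done");
        System.out.println(s);
    }
}
```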
Figure 5 Configuration class diagram.
Figure 6 Configuration states.
Another way in which the configuration can end is via a cancel command. Earlier designs of the DKIST
common services envisioned an attribute that would control how severe the cancel is. This is no longer
the case. The TCS will take the following actions on receipt of a cancel command.
1. If the configuration is queued then it will be removed from the queue and an abort event sent for it. This behavior is provided by the CSF.
2. If the configuration is running then the action thread propagates the cancel command to all the controllers involved in the configuration and then aborts and destroys the configuration. The propagation of cancel commands is handled automatically if using the Base “ManagementController”. Action threads must check for the abort status at regular intervals and then enforce the abort manually.
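The requirement that action threads poll for the abort status and enforce it themselves can be sketched as follows. The class and field names are illustrative of the pattern, not the actual CSF controller classes.

```java
import java.util.concurrent.atomic.AtomicBoolean;

/** Sketch of an action thread that polls an abort flag at regular
 *  intervals and enforces the abort manually. Illustrative only. */
public class ActionLoopSketch {
    final AtomicBoolean aborted = new AtomicBoolean(false);

    /** Returns "done" if all steps complete, "aborted" if a cancel
     *  propagated before the work finished. */
    String doAction(int steps) {
        for (int i = 0; i < steps; i++) {
            if (aborted.get()) {
                return "aborted";  // enforce the abort manually
            }
            // ... one increment of the real work would go here ...
        }
        return "done";
    }

    public static void main(String[] args) {
        ActionLoopSketch a = new ActionLoopSketch();
        System.out.println(a.doAction(10));
        a.aborted.set(true);   // simulate a propagated cancel
        System.out.println(a.doAction(10));
    }
}
```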
The existence of queued configurations has some consequences for the TCS. In particular configurations
that are valid now may not be valid later or vice versa. The obvious configurations where this is a
problem are configurations that contain target parameters. The target (typically a point on the Sun) may
be below the horizon when the configuration is submitted or it may have set by the time the configuration
is de-queued. The TCS handles this by use of the starttime attribute and a TCS specific
“horizonChecking” attribute. When a configuration is received that contains target attributes then the
TCS will first look for the starttime attribute. If starttime is present then the TCS will use that time to
validate the configuration otherwise it will use the current time. If the result of this check is a demand
elevation that is below the horizon then whether the configuration is rejected or not will depend on
whether horizonChecking is on or off. With horizonChecking off, the configuration will be accepted but
the demands to the mount and enclosure shutters will be clamped at the lower elevation limit. The
telescope will thus track along the lower elevation limit until the target rises.
With regard to the starttime attribute, the TCS is not expecting to use this in any configurations that it
generates for its subsystems. The only controller that will make use of starttime will be the top level TCS
head controller if it receives a starttime from the OCS. If a queue-able configuration is received then this
will be sent for immediate execution by the subsystems once it is de-queued by the TCS.
Note that the above does not prevent the OCS from submitting queue-able configurations to TCS
subsystems provided they do not also include the “atst.tcs.starttime” attribute as well as the subsystem
specific starttime attribute in the same configuration. All configurations submitted to the TCS will be
accepted or rejected within 0.1 seconds, regardless of their starttime and final destination.
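The starttime/horizonChecking validation described above can be sketched as follows. Here the demand elevation is supplied directly and the lower elevation limit is an assumed value; the real TCS derives the elevation from the target ephemeris at the chosen validation time.

```java
/** Sketch of the horizonChecking behavior: reject a below-horizon
 *  target when checking is on, otherwise accept and clamp the demand
 *  at the lower elevation limit. The limit value is assumed. */
public class HorizonCheckSketch {
    static final double LOWER_ELEVATION_LIMIT_DEG = 3.0;  // assumed value

    /** Returns the demand elevation, clamped at the lower limit, or
     *  throws if horizon checking is on and the target is below it. */
    static double validate(double demandElevationDeg, boolean horizonChecking) {
        if (demandElevationDeg >= LOWER_ELEVATION_LIMIT_DEG) {
            return demandElevationDeg;
        }
        if (horizonChecking) {
            throw new IllegalArgumentException("target below horizon limit");
        }
        // horizonChecking off: accept, but clamp the demand; the
        // telescope tracks along the lower limit until the target rises.
        return LOWER_ELEVATION_LIMIT_DEG;
    }

    public static void main(String[] args) {
        System.out.println(validate(45.0, true));   // accepted unchanged
        System.out.println(validate(-2.0, false));  // clamped at the limit
    }
}
```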
4.2 EVENTS
As well as providing configurations that can be sent from one component or controller to another, the
CSF also provides events that can be used to signal changes of state or status. Although both
configurations and events can be used to pass data between components, they do so by very different
mechanisms.
Configurations require that there is a connection maintained between the two communicating
components. Events on the other hand use a “publish-subscribe” mechanism. The source of the data
publishes it and then any number of clients can subscribe. The publisher does not care how many or who
the subscribers are and there is no permanent connection between the two. If a publisher is not present
then a subscriber receives no events until the publisher is started.
In the CSF the data sent with events is encoded as an attribute table. Arbitrary amounts of data can therefore be sent with an event; all a subscriber needs to know is the name of the event so that it can register to receive it. Events are used internally by the CSF to signal the matching of configurations and will be used extensively by the TCS to report status and also to send trajectory streams to its subsystems.
There is currently one issue with the event system that will affect the final design of the TCS: what to do about non-periodic or low-frequency events. Suppose, for example, the status of the M1 mirror cover was signaled by an event each time it was opened or closed. Suppose also that after it has been opened the TCS is shut down and then restarted. Currently in this situation the TCS would not discover the state of the M1 cover until it was closed again.
There are a number of ways to deal with this:
- insist that all events are periodic at some “reasonable” rate, e.g. 1 Hz
- modify the event system such that when a system subscribes to an event it is immediately sent the last value published
- insist that every non-periodic, or periodic but low-frequency, event has a corresponding attribute that can be accessed via a “get”
Each method has advantages and disadvantages. The advantage of periodic events is that you are guaranteed to receive an event within a fixed time; the disadvantage is that artificial delays will be introduced that will reduce efficiency. The ideal system is probably the second: you receive an event as soon as it is published, with no inserted delay, but you also receive a cached value when you first register, so you always have an up-to-date value. The only disadvantage of this scheme is that the current event system does not support it, so the current designs call for a “cStatus” event, published at 1 Hz, that contains all the status information needed by the TCS.
The final option is probably the least appealing, as it requires all clients to make all their published data "gettable" as well as requiring any subscriber to always do a get when first registering.
4.3 THE CONTROLLER MODEL
A DKIST Controller implements what is called the command/action/response model. In this model
“commands” are separate from the “actions” that they trigger. In this way many commands may be sent
to a controller resulting in many simultaneous actions and in particular a controller is not blocked whilst
waiting for a previous command/action to complete. On receipt of a command, the controller will send an
immediate response to the sender saying whether the attributes sent with the command are acceptable or
not. It will then queue the command for either immediate or later action. Once queued, the controller is
ready to accept another command. The actions started by commands are handled by separate threads
under the control of an action manager. Actions can complete as either “done” or “aborted”. Normal completion for an action is “done”; if an error occurred then it is “aborted”. This response is advertised by the posting of an action status event, which is automatically monitored by the CSF; controllers can also attach a callback to perform processing on receipt of the event. If no further processing is required then the callback can be null. The status event’s name is prefixed with the name of the controller posting the event, as is the name of the status attribute contained in that event’s value. The event name is made up of the prefix and the text “status”. For example, if the controller “atst.tcs” posts an action status event, the name of that event is “atst.tcs.status”, and that matches the name of the action status attribute within the event’s value.
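The naming rule above is simple enough to state in one line of code. The helper below is illustrative, not a CSF method.

```java
/** Sketch of the status-event naming rule: both the event name and
 *  the status attribute inside it are the controller name plus
 *  ".status". Illustrative helper only. */
public class StatusEventName {
    static String statusEventName(String controllerName) {
        return controllerName + ".status";
    }

    public static void main(String[] args) {
        // The controller "atst.tcs" posts the event "atst.tcs.status".
        System.out.println(statusEventName("atst.tcs"));
    }
}
```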
Controllers accept configurations as the parameters of their “submit” commands. It is important for the
TCS and its subsystems to distinguish between the completion of an action and the state of an underlying
piece of hardware. A DKIST configuration is in the running state until it is done or aborted. This may or
may not coincide with an underlying piece of hardware being physically stationary. For example, if a filter wheel is moved from one position to another then, when the hardware reaches the new filter demand, a signal must be sent to the action thread to tell the configuration it is done. In this case the hardware device stopping coincides with the configuration being done. In the case of slewing the mount to a new astronomical target, however, the action is done when the mount, coudé and enclosure first match the position and velocity tolerances. The mechanisms will continue to track, and it would be incorrect to consider the configuration as still running because of this. Another way of looking at this is to consider that the configuration is done when the mount is stationary in the frame in which the demand coordinates are specified.
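The slew-completion rule just described can be sketched as a tolerance check. The tolerance values below are assumed for illustration; the document does not specify them.

```java
/** Sketch of the slew "done" rule: the action completes once position
 *  and velocity errors are within tolerance, even though the
 *  mechanisms keep tracking. Tolerance values are assumed. */
public class SlewDoneCheck {
    static final double POS_TOL_ARCSEC = 0.5;          // assumed
    static final double VEL_TOL_ARCSEC_PER_S = 0.1;    // assumed

    /** True when an axis is "stationary in the demand frame". */
    static boolean inPosition(double posErrArcsec, double velErrArcsecPerS) {
        return Math.abs(posErrArcsec) <= POS_TOL_ARCSEC
            && Math.abs(velErrArcsecPerS) <= VEL_TOL_ARCSEC_PER_S;
    }

    /** The slew is done when mount, coude and enclosure all match. */
    static boolean slewDone(boolean mount, boolean coude, boolean enclosure) {
        return mount && coude && enclosure;
    }

    public static void main(String[] args) {
        boolean mountOk = inPosition(0.2, 0.05);
        System.out.println(slewDone(mountOk, true, true));
    }
}
```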
The class diagram of the DKIST Controller is shown in the figure below.
Figure 7 Controller class diagram.
4.4 CONTROLLER COMMANDS
The life cycle of a controller is shown below.
Figure 8 Controller lifecycle.
The following sections discuss the commands that trigger movements between these states.
4.4.1 Load
This is not a command to the controller itself but the action of a container or container manager that loads the controller from disk, creating the controller by invoking that controller's default constructor. At this time none of the CSF services are available to the controller. For this reason a controller should do very little at load time, and constructors should be kept to an absolute minimum; the ideal constructor is empty. Once loaded, the controller will be connected to the CSF and other controllers will be able to perform low-level functions on it through those services, such as setting and getting attribute information. A controller should execute no functional behavior nor allocate any resources at this level. If it is in control of hardware it must certainly not attempt to move it.
4.4.2 Init
Initialization is the process of preparing a controller for operation, but stops short of starting that
operation. Typical steps taken during initialization include:
1. Metadata about the controller's attributes is loaded.
2. Any special memory needs (fixed buffers, memory maps, etc.) are satisfied.
The first of these is performed automatically by the CSF. The latter is an action based upon the functional
behavior required of the controller and so must be done by the controller developer.
The initialized state is the state that many subsystems will be brought to at night when they are not being used operationally. It is essentially a standby mode where the system is activated but not fully operational. For example, the M1 control system's thermal control would still be running but the mirror itself would not be actively or passively controlled.
A controller will always reach the initialized state as a result of an “init” command. This is guaranteed by
the common services. Should some error occur during the initialization actions the controller should set
its health to bad or ill and raise one or more alarms depending on the severity of the failure.
4.4.3 Startup
Following an “init”, the next stage will be to receive a “startup”. This command takes the controller into
the running state which is the state for operational use. Only in the running state is the controller able to
accept functional directives, checking configurations are valid and