
1

LTC Timing AB/CO/HT

Central timing hardware layout Telegrams and events Postmortem, XPOC LHC Central Timing API Fill the LHC Use Case

Julian Lewis

2

LTC Timing AB/CO/HT

Central timing hardware layout Telegrams and events Postmortem, XPOC LHC Central Timing API Fill the LHC Use Case

3

Diagram: CCC timing racks 3, 4 and 5 (CBCM A, SYNC, CBCM B). Main MTG A/B and LHC MTG A/B run in a master/slave configuration, fed by an HP GPS receiver and antenna (1PPS, 10 MHz) through the Main and LHC switches, with a reflective-memory hub linking the DSC communications and DSC synchronization and 96 external hardware conditions coming from the patch panel. Each generator crate carries Sync, 1PPS and 40 MHz signals and drives the PSB, LEI, CPS, ADE and SPS GMT cables and the LHC GMT, each with 16 external event inputs, plus the LHC beam energy/intensity and SPS intensity inputs and a timing-event GMT to the CCC workstation/server.

UTC Time and GPS

Diagram: a Symmetricom XLI GPS receiver delivers one pulse per second and a phase-locked 10 MHz. A PLL derives the basic period (1200/900/600 ms), an advanced (100 us) 1PPS, a synchronized 1 kHz slow timing clock and a phase-locked 40 MHz event-encoding clock. A synchronization module in each timing generator crate feeds the MTT (Multitask Timing Generator), which combines UTC time (NTP or GPS), event tables and external events onto the RS485 timing cable. CERN UTC time is set once on startup and on leap seconds. CTR timing receivers recover PPS, 10 MHz, 1 kHz and 40 MHz, apply delays in 25 ns steps and give the control system CERN UTC time. A Symmetricom CS4000 portable atomic clock is available as a reference.

5

Multi-tasking module

Two types of tasks: system tasks for the millisecond clock, external events, telegrams, etc., and event-table tasks.

Controlled through a FESA API. The FESA API allows:

Load/Unload an event table

Run a table N times

Synchronize it with an event

Run a table forever

Stop a table

Abort a table
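A minimal sketch of the table-task lifecycle these operations imply. The class and method names below are invented for illustration and do not reproduce the real FESA design.

// Hypothetical model of one event-table task and its control operations.
#include <cstdint>
#include <string>
#include <vector>

struct TimingEvent { uint32_t code; };          // event that can start a table
struct EventTable  { std::string name; std::vector<TimingEvent> entries; };

class MttTaskControl {                          // one instance per event-table task
public:
    void load(const EventTable& t)      { table_ = t; loaded_ = true; }
    void unload()                       { loaded_ = false; }
    void run(uint32_t times)            { runCount_ = times; running_ = loaded_; }
    void runForever()                   { run(0); }             // 0 means no limit
    void synchronizeWith(TimingEvent e) { startEvent_ = e; }    // start on this event
    void stop()                         { running_ = false; }   // stop at end of table
    void abort()                        { running_ = false; }   // stop immediately
private:
    EventTable  table_;
    TimingEvent startEvent_{};
    uint32_t    runCount_ = 0;
    bool        loaded_ = false, running_ = false;
};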

Diagram: the MT-CTG drives the LHC GMT. Sixteen virtual CPUs produce the event output stream. The host front-end system accesses the MTT registers across the VME bus (beam energy), and the VME P2 connector permits hardware triggering of tasks such as post-mortem event sending.

Safe Machine Parameters Distribution

Diagram: the Safe Machine Parameters Controller for the LHC reads the beam intensities I_beam1 & 2 from BCT "A" and BCT "B" at 1 kHz (24-bit), reads the Energy A / Energy B status from the BEMs, and takes its thresholds from LSA through the Management of Critical Settings (CTRV). The LHC Timing Generator broadcasts the SMP at 10 Hz (16-bit; flags, energy and intensity) over the timing network as events, UTC and telegrams (including SMP) to CTRV/CTRx receivers in the BLM, BIS, kicker and experiment (EXP) systems. The Beam Permit Flags BPF1/2 also appear as TTL hardware outputs through a line driver if the cable length exceeds 5 m.

Current hardware status

The SMP “mark-1” will be installed by week 15 (90% confidence).

There are as yet no Beam-Permit Flags wired to the timing, so we cannot test PM/XPOC.

Auto re-enable of the postmortem is awaiting installation.

Transmission delay calibration will start next week.

8

LTC Timing AB/CO/HT

Central timing hardware layout Telegrams and events Postmortem, XPOC LHC Central Timing API Fill the LHC Use Case

What is distributed on the LHC timing cable

The LHC telegram. Its main function is to continuously retransmit (shadow) information that has already been transmitted by events. It is sent out each second, on the second.

LHC machine events. An event is sent punctually when something happens that affects the machine state. Some are asynchronous and come from external processes, e.g. post-mortem or energy, while others are produced from timing tables corresponding to running machine processes. Some are sent directly, such as dump and commit-transaction.

The UTC time of day. Resolution is 25 ns, jitter is less than 1 ns peak to peak, and wander is estimated to be around 10 ns.
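As a rough illustration of the three kinds of information on the cable, the sketch below models them as data structures. The layout and names are invented; only the 25 ns resolution and the once-per-second telegram come from the text above.

// Illustrative sketch (not the real GMT frame format) of what is distributed.
#include <cstdint>
#include <map>
#include <string>

struct UtcStamp {                 // UTC time of day at 25 ns resolution
    uint32_t seconds;             // UTC seconds
    uint32_t ticks25ns;           // 0 .. 39,999,999 within the second
};

struct MachineEvent {             // sent punctually when something happens
    uint16_t code;                // e.g. dump, post-mortem, commit-transaction
    UtcStamp when;
};

struct Telegram {                 // shadows event information, sent once per second
    UtcStamp secondBoundary;      // sent "on the second"
    std::map<std::string, int32_t> groups;   // e.g. DEST, energy, intensity
};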

Some web addresses

http://ab-dep-co-ht.web.cern.ch/ab-dep-co-ht/timing/Seq/tgm.htm
This link shows the current telegram configuration. It also has information about the CTR hardware and other useful material.

http://ab-dep-co-ht.web.cern.ch/ab-dep-co-ht/timing/Seq/mtgConfig.htm
This link shows all defined timing events for the timing cables, and other useful material.

14

LTC Timing AB/CO/HT

Central timing hardware layout Telegrams and events Postmortem, XPOC LHC Central Timing API Fill the LHC Use Case

Postmortem Event generation

Two Beam-Permit-Flags, one per LHC ring, arrive at the LHC central timing inputs from the Beam Interlock System.

Beam-Dump events may be sent from the LHC central timing to the control system to dump the beam in one ring or the other.

The specification requires only one PM event for both rings. In some LHC machine modes, such as “Inject & Dump”, sending the PM events will be inhibited. However, the beam-dumped events always go out.

When both rings are dumped, the postmortem event is sent twice within 1 ms.

Postmortem Event suppression

Two counters are used in the CTR, one per Beam-Permit-Flag (BPF). Each counter's clock input is connected to one of the BPF flags. The "Disable Post-Mortem Ring 1" event disables the counter connected to BPF-1, and the "Enable Post-Mortem 1" event enables it. When the counter is disabled and the BPF goes down, nothing happens; when it is enabled, the counter produces an output that triggers the PM event. The PM event will therefore be sent twice if both counters are enabled and both rings are dumped.
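A minimal model of this gating, with invented names; it only mirrors the enable/disable behaviour described above.

// One counter per Beam-Permit-Flag; a falling BPF only produces a PM trigger
// when its counter is enabled, so both rings dumping yields two PM events.
#include <iostream>

struct PmCounter {
    bool enabled = false;                    // set by the Enable/Disable PM events
    // Returns true when a BPF falling edge should trigger the PM event.
    bool onBpfDown() const { return enabled; }
};

int main() {
    PmCounter ring1, ring2;
    ring1.enabled = true;                    // "Enable Post-Mortem 1"
    ring2.enabled = false;                   // "Disable Post-Mortem Ring 2"

    int pmEventsSent = 0;
    if (ring1.onBpfDown()) ++pmEventsSent;   // BPF-1 goes down: PM sent
    if (ring2.onBpfDown()) ++pmEventsSent;   // BPF-2 goes down: suppressed
    std::cout << "PM events sent: " << pmEventsSent << "\n";   // prints 1
}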

Diagram: BPF1 and BPF2 arrive at a CTR whose two counters (delay = 1) are clocked by the flags; an enabled counter's output triggers the CTG-MTT over VME/P2 to send the PM event on the LHC GMT. The Disable-1/Enable-1 events, driven by the PM-suppress table that the LSEQ loads around Warn-Inject, gate the counters.

Postmortem Event auto Re-Enable

Diagram: the same CTR/CTG-MTT chain, with the counters set to delay = 2. The Disable event from the PM-suppress table (around Warn-Inject) also clocks a counter whose delayed output makes the MTT send the Re-Enable event on the LHC GMT.

The Disable-PM event also triggers a counter in a CTR. In this case, 2 ms later an output pulse triggers the MTT to send the PM-enable event, so the next BPF transition will again trigger the PM event to be sent.
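A sketch of that timed re-enable. The 2 ms figure is taken from the slide; everything else (names, the use of a thread to stand in for the CTR counter) is purely illustrative.

// The Disable-PM event also starts a delayed action that re-enables PM 2 ms later.
#include <atomic>
#include <chrono>
#include <iostream>
#include <thread>

std::atomic<bool> pmEnabled{true};

void onDisablePmEvent() {
    pmEnabled = false;                                   // PM suppressed now
    std::thread([] {                                     // stands in for the CTR counter
        std::this_thread::sleep_for(std::chrono::milliseconds(2));
        pmEnabled = true;                                // MTT sends the PM-enable event
    }).detach();
}

int main() {
    onDisablePmEvent();
    std::this_thread::sleep_for(std::chrono::milliseconds(5));
    std::cout << (pmEnabled ? "re-enabled\n" : "still disabled\n");  // re-enabled
}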

18

LTC Timing AB/CO/HT

Central timing hardware layout Telegrams and events Postmortem, XPOC LHC Central Timing API Fill the LHC Use Case

19

Diagram: the LHC MTG and the CMW server on the timing gateway share 64 Mb reflective memories over a 2.2 Gbit/s optical link. Inputs to the generation are the clocks (40.00 MHz GPS clock, 1PPS (1 Hz) clock, basic period clock), the event tables, the safe parameters (energy per ring, intensity per ring, BIS beam permit flags) and the external events. The LSA high-level sequencer and LSA core drive the FESA LHC API on the gateway in a slave/master arrangement, and the MTG produces the LHC GMT.

LHC Central Timing API

LSA and FESA. The FESA API is implemented on the LHC timing gateway and accesses the timing generators across reflective memory. It implements the operations below (a sketch of this call surface follows the list):

Load or Unload an event table

Get the list of running tables

Set an event table's run count and synchronization event

Stop or Abort an event table

Set telegram parameters

Send an event

Read the status of tasks and of the MTT module

21

LHC Beam Request

BTNI: next injection beam type. Obviously the next injected beam type is determined by the settings in the injector chain and by nothing else. The LSEQ may request a certain type of beam to be injected, but if the requested value does not correspond to the actual beam type being provided by the injector chain, then the request cannot be fulfilled and no injection can take place. This value is thus inherited from the injector chain.

BKNI: next injection RF bucket. There are 35640 RF buckets around the LHC ring. It is essential that this parameter is established before RF re-synchronization starts between the CPS and the SPS RF systems, namely 450 ms before CPS extraction towards the SPS.

RNGI: next injection ring. This parameter determines the value of the SPS beam destination in the DEST group of the telegram. Various ways to do this are possible; it is an OP decision.

BCNT: number of CPS batches.
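A small illustrative structure tying these four parameters together. The field names follow the slide's mnemonics; the struct itself and the range checks are assumptions that only restate the figures given above.

// Hypothetical representation of an LHC beam request.
#include <cstdint>
#include <stdexcept>

struct LhcBeamRequest {
    uint32_t btni;   // next injection beam type (in practice inherited from the injectors)
    uint32_t bkni;   // next injection RF bucket, 1 .. 35640
    uint32_t rngi;   // next injection ring, 1 or 2 (drives the SPS DEST telegram group)
    uint32_t bcnt;   // number of CPS batches, 1 .. 4
};

inline void validate(const LhcBeamRequest& r) {
    if (r.bkni < 1 || r.bkni > 35640) throw std::invalid_argument("bucket out of range");
    if (r.rngi < 1 || r.rngi > 2)     throw std::invalid_argument("ring must be 1 or 2");
    if (r.bcnt < 1 || r.bcnt > 4)     throw std::invalid_argument("1..4 CPS batches");
}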

22

LTC Timing AB/CO/HT

Central timing hardware layout Telegrams and events Postmortem, XPOC LHC Central Timing API Fill the LHC Use Case

23

LHC – LIC Signal Exchange

Diagram: the LHC central timing generation (gateway FESA API, LSA Timing Service, CTR, master ON/OFF) and the CBCM controlling the injector chain (CBCM sequence manager, LIC sequence, LIC timing) are connected by a reflective-memory link. The exchanged signals include the LHC fill requests (bucket, ring, batches), the SPS destination request (R1, R2), the CPS batch request (1, 2, 3, 4), the TI8/TI2 dump requests, the LHC user request, the LHC timing inhibits, requests and interlocks, the SIS TI8/TI2 and SPS dump inhibits, the forewarnings HIX.FW1K and SEX.FW1K, the normal/spare state, the "LSA changes allowed" and LSA-master conditions, and the telegram bit SPS.COMLN.LSA_ALW.

25

CBCM Sequence Manager

Diagram: beam destinations through the injector chain (Linac, PSB, LEIR, CPS, SPS): the D3 dump, the SPS dump, the TI8 and TI2 dumps, TI8, TI2, CNGS, and the TCLP tail clipper. The annotations give the conditions attached to the destinations: default when there is no LSEQ request and LSEQ is master; default when there is no TI8/2 dump and LSEQ is not master; only possible when the SIS dump status is “IN” and the dump is requested; only possible when LSEQ is master and there are no TI8/2 SIS inhibits; default when no batches were requested and LSEQ is master.

SPS Cycle for the LHC

Diagram: PSB cycles 1-4 fill CPS batches 1-4, which are injected onto the SPS injection plateau of the SPS cycle for the LHC. The LSA beam request (RF bucket, ring, CPS batches) selects the LHC beam, and an extraction forewarning precedes each extraction towards the LHC injection plateau. The LHC timing is only coupled to the injector chain by the extraction forewarning and the start-ramp event.

28

Diagram (as before): PSB cycles 1-4 fill CPS batches 1-4 on the SPS injection plateau of the SPS cycle for the LHC.

The LHC beam: operators mark the SPS cycle as “TOLHC”. An inheritance mechanism propagates “TOLHC” to all cycles in the beam. LSEQ control affects the way “TOLHC” beams are played.

The SPS telegram contains a new “DYNAMIC” destination calculated on the fly: TI8 / TI2 / TI8_DMP / TI2_DMP / SPS_DMP / CNGS / FT.

The CBCM evaluates LHC beam requests 1.2 seconds before the first PSB cycle in the CBCM time domain. The CBCM time domain is 2.4 seconds ahead of the accelerator-complex time domain, so the request must be made 3.6 seconds ahead.

Any bad condition will provoke the spare response as usual.
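The 3.6 s figure is just the sum of the two leads named above; a trivial worked sketch, with variable names invented:

// Worked arithmetic: evaluation lead in the CBCM domain plus the CBCM domain's
// advance over the accelerator complex gives the required request lead.
#include <iostream>

int main() {
    const double cbcmEvaluationLead  = 1.2;   // s before the first PSB cycle (CBCM domain)
    const double cbcmDomainAdvance   = 2.4;   // s the CBCM domain runs ahead of the machines
    const double requiredRequestLead = cbcmEvaluationLead + cbcmDomainAdvance;
    std::cout << "Request must precede the first PSB cycle by "
              << requiredRequestLead << " s\n";   // 3.6 s
}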


29

Killing a “TOLHC” batch

The Linac tail-clipper timing cuts 99% of the beam at the Linac.

The PSB plays the cycle with no or very little beam.

The CPS destination is forced from the SPS to D3.

The SPS cycle continues as usual, but no CPS beam is injected; the destination is the internal dump.

The SPS injection timing for the suppressed batch fires anyway.


30

Basic behavior TOLHC

1. Any abnormal interlock drives the beam into spare. This may result in the SPS going into economy mode.

N.B. TI8 dump or TI2 dump are only possible when the dump status (from SIS) indicates that the dumps are in place.

• TI8 or TI2 destinations are only possible when LSEQ is master, when there is a valid LSEQ beam request, and when there are no TI8/2 SIS inhibits.

• When LSEQ is not the master, the default destination is the SPS dump. A TI8 or TI2 dump can be requested. The number of CPS batches delivered is controllable.

• When LSEQ is master and there is no request, the beam is killed, but the magnetic cycle takes place.

• Mastership can only be changed in the absence of the LHC user request.

A decision sketch of these rules follows.
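This is a hypothetical encoding of the rules above as a single decision function; it does not reproduce the real CBCM implementation, only the conditions named on the slide.

// Destination decision for a "TOLHC" beam, following the slide's rules.
#include <string>

struct Conditions {
    bool lseqIsMaster;
    bool validLseqRequest;      // a valid LSEQ beam request is present
    bool ti82SisInhibit;        // any TI8/TI2 SIS inhibit
    bool dumpInPlace;           // SIS dump status "IN"
    bool ti82DumpRequested;     // operator TI8/TI2 dump request
    bool abnormalInterlock;
};

std::string nextDestination(const Conditions& c) {
    if (c.abnormalInterlock)  return "SPARE";             // beam driven into spare
    if (c.lseqIsMaster) {
        if (c.validLseqRequest && !c.ti82SisInhibit)
            return "TI8_or_TI2";                          // beam towards the LHC
        return "KILLED";                                  // no request: magnetic cycle only
    }
    // LSEQ not master: default is the SPS dump; TI8/TI2 dump only if in place and requested.
    if (c.ti82DumpRequested && c.dumpInPlace) return "TI8_or_TI2_DUMP";
    return "SPS_DUMP";
}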

31

Nominal fill Use case 1: Prepare injector chain

The LHC operator asks the SPS operator to prepare to fill the LHC.

The SPS operator removes the SPS LHC cycle request, if we don't want to deliver the beam to the SPS dump straight away.
– LSEQ mastership cannot be changed while the LHC beam is playing!

The SPS operator loads/runs the LHC fill sequence.

The BCD starts up with LHC beams in spare, and the SPS may be in economy mode.

32

Nominal fill Use case 2: Beam to TI8/TI2 dumps

The SPS operator wants to send the beam to a TI8/TI2 dump. LSEQ is not the master.

OP sets the dump targets to move into place and waits (minutes).

OP selects the TI8/TI2 dump request external conditions on the central timing.

OP sets the LHC user request on.

OP sets the CPS batch count to N.

N CPS batches are now sent to a TI8/TI2 dump.

33

Nominal fill Use case 3: Beam to SPS dump

OP wants to send the beam to the SPS dump. LSEQ is not the master.

The TI8/TI2 dump requests must be removed.

The LHC user request must be present.

N CPS batches are now delivered to the SPS dump.

34

Nominal fill Use case 4: LSEQ takes mastership

LSEQ now wants to become master.
– The LHC user request must be removed.
– The current SPS super-cycle will finish and then the LHC beam is spared (economy mode).

The SPS telegram bit SPS.COMLN.LSEQ_ALW gets set by the CBCM.

LSEQ calls the API to request mastership. If SPS.COMLN.LSEQ_ALW isn't set, an error is returned.

LSEQ becomes master, and the LHC user request is turned back on by SPS operations.

All beams are played in normal, but TCLP and D3 ensure there is no beam injected into the SPS. The SPS destination is the SPS dump, and there is no extraction timing.
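A sketch of that handshake under the stated preconditions. The function and structure names are invented; SPS.COMLN.LSEQ_ALW is the telegram bit named on the slide.

// Mastership request as in use case 4: fails unless the user request is gone
// and the CBCM has set the LSEQ_ALW telegram bit.
#include <stdexcept>

struct TelegramView {
    bool lseqAllowed = false;      // SPS.COMLN.LSEQ_ALW, set by the CBCM
    bool lhcUserRequest = false;   // must be removed before asking for mastership
};

bool requestMastership(const TelegramView& t) {
    if (t.lhcUserRequest)
        throw std::runtime_error("remove the LHC user request first");
    if (!t.lseqAllowed)
        throw std::runtime_error("SPS.COMLN.LSEQ_ALW not set: CBCM not ready");
    return true;                   // LSEQ becomes master
}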

35

Nominal fill Use case 5: LSEQ sends beam to the LHC

LSEQ wants to send beam to the LHC. LSEQ must be the master, there must be no TI8/2 SIS inhibits, and the LHC user request must be asserted.

LSEQ makes a request for 1/2/3/4 batches to ring 1/2, bucket N.

On the next SPS super-cycle the request is executed, then cleared.
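Putting the preconditions and the request together, a purely illustrative end-to-end sketch of this use case; all names are assumptions, and the checks simply restate the conditions listed above.

// Use case 5 sketch: verify the preconditions, then hand over the fill request.
#include <cstdint>
#include <stdexcept>

struct FillPreconditions {
    bool lseqIsMaster;
    bool ti82SisInhibit;
    bool lhcUserRequest;
};

struct FillRequest {
    uint32_t ring;      // 1 or 2
    uint32_t bucket;    // 1 .. 35640
    uint32_t batches;   // 1 .. 4 CPS batches
};

void sendBeamToLhc(const FillPreconditions& p, const FillRequest& r) {
    if (!p.lseqIsMaster)   throw std::runtime_error("LSEQ must be master");
    if (p.ti82SisInhibit)  throw std::runtime_error("TI8/2 SIS inhibit present");
    if (!p.lhcUserRequest) throw std::runtime_error("LHC user request not asserted");
    // The request is executed on the next SPS super-cycle, then cleared.
    (void)r;
}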

36

LIC – LHC filling

It would be a good idea to test the fill use case in a dry run once all cabling and hardware installation has been completed:

LSEQ takes mastership, makes a beam request, and exercises PM enable/disable with BPF transitions.