


  • Slide 1
  • Niko Neufeld CERN PH/LBC
  • Slide 2
  • System overview diagram: detector front-end electronics in UX85B connected by 8800 Versatile Links to event-builder PCs with PCIe40 boards (software LLT) on the Point 8 surface; event-builder network (6 x 100 Gbit/s); subfarm switches feeding an event-filter farm of up to 4000 servers; online storage; TFC distributing the clock & fast commands and receiving the throttle from the PCIe40; ECS alongside
  • Slide 3
  • Full event building of every bunch crossing (40 MHz): no bottleneck for event building anywhere in the system (see the back-of-envelope estimate below)
  • DAQ, ECS and TFC rely on the same universal hardware module, the PCIe40
  • New DAQ with very challenging I/O in the event builders and the network
  • Cost-effectiveness dictates a very compact system, concentrated in a new data centre and connected via long-distance links to the detector front-ends
  • ECS and TFC carried over smoothly from the current system (modulo the significant changes in the TFC due to the trigger-less nature of the read-out)
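To see why this is demanding, here is a back-of-envelope estimate of the aggregate event-building bandwidth; a minimal sketch assuming an average event size of order 100 kB (the event size is an assumption, not given on the slide):

```python
# Back-of-envelope aggregate bandwidth for full 40 MHz event building.
bunch_crossing_rate_hz = 40e6    # every bunch crossing is read out (trigger-less)
event_size_bytes = 100e3         # ASSUMED average event size of ~100 kB

aggregate_bit_per_s = bunch_crossing_rate_hz * event_size_bytes * 8
print(f"Aggregate event-building bandwidth: {aggregate_bit_per_s / 1e12:.0f} Tbit/s")
# ~32 Tbit/s, which is why the event-builder network must be bottleneck-free.
```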
  • Slide 4
  • Most compact system achieved by locating all Online components in a single place
  • Power, space and cooling constraints allow such an arrangement only on the surface: a containerized data centre
  • Versatile Links connecting the detector to the readout boards need to cover 300 m
  • Slide 5
  • CPPM, Bologna, CERN (supporting)
  • Slide 6
  • PCIe40: the universal hardware module, a PCIe Gen3 x16 card with up to 48 optical transceivers (bandwidth check below)
  • TELL40: a PCIe40 with the DAQ firmware, with up to 48 GBT receivers
  • SOL40: a PCIe40 with the ECS/TFC firmware (replacing SPECS and various TFC modules), with 48 GBT transceivers
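As a sanity check of the PCIe40 I/O balance, the sketch below compares the aggregate optical input with the PCIe Gen3 x16 host bandwidth, using standard GBT and PCIe figures (4.8 Gbit/s line rate, 3.2 Gbit/s standard-mode payload, 128b/130b encoding) that are not stated on the slide:

```python
# Compare the aggregate optical input of a PCIe40 with its PCIe Gen3 x16 output.
n_links = 48
gbt_line_rate_gbps = 4.8          # GBT line rate, including protocol overhead
gbt_payload_gbps = 3.2            # usable payload in standard GBT mode

pcie_x16_gbps = 16 * 8.0 * (128 / 130)  # 16 lanes, 8 GT/s, 128b/130b encoding

print(f"{n_links} links, raw:     {n_links * gbt_line_rate_gbps:.0f} Gbit/s")
print(f"{n_links} links, payload: {n_links * gbt_payload_gbps:.0f} Gbit/s")
print(f"PCIe Gen3 x16:           {pcie_x16_gbps:.0f} Gbit/s")
# The optical input can exceed the PCIe output, which illustrates the
# "very challenging I/O" noted on slide 3.
```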
  • Slide 7
  • Slides by J.-P. Cachemiche
  • Slide 8
  • Slide 9
  • Slide 10
  • Slide by F. Pisani
  • Slide 11
  • Slide 12
  • INFN and University of Bologna, CNAF, CERN
  • Slide 13
  • Performance tests carried out at CNAF on a test bed similar to the CERN one
  • Getting the best performance required some tuning: bind processes according to the NUMA topology and switch off power-saving modes (see the sketch after this list)
  • Very close to saturation: 52.5 Gbit/s! (A. Falabella et al.)
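A minimal sketch of the NUMA-pinning part of this tuning, assuming a hypothetical topology with cores 0-7 on socket 0 (on the real test bed the core list would come from numactl or hwloc):

```python
import os

# Pin this process to the cores of one NUMA node so that the benchmark and
# its network buffers stay on the same socket (core IDs are illustrative).
NUMA_NODE0_CORES = set(range(0, 8))    # ASSUMED: cores 0-7 on socket 0

os.sched_setaffinity(0, NUMA_NODE0_CORES)            # 0 = current process
print("Running on cores:", sorted(os.sched_getaffinity(0)))
# Power-saving modes were switched off separately, e.g. by selecting the
# 'performance' cpufreq governor on every core.
```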
  • Slide 14
  • Extensive tests need to be done on a bigger cluster: we aim at the new CINECA Galileo Tier-1 cluster
  • Makes it possible to test at a scale similar to the upgraded LHCb DAQ network
  • The cluster has been in production since the last week of January 2015; first tests in a few weeks, managed by the CNAF team
  • Cluster specifications (interconnect data rate estimated below):
    Model: IBM NeXtScale cluster
    Nodes: 516
    Processor: 2 x 8-core Intel Haswell, 2.40 GHz, per node
    RAM: 128 GB/node (8 GB/core)
    Network: InfiniBand with 4x QDR switches
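For context on the interconnect row, the usable data rate of an InfiniBand 4x QDR link follows from standard InfiniBand figures (10 Gbit/s signalling per lane, 8b/10b encoding), not from the slide:

```python
# Usable data rate of one InfiniBand 4x QDR link.
lanes = 4
signalling_gbps_per_lane = 10.0   # QDR signalling rate per lane
encoding_efficiency = 8 / 10      # 8b/10b encoding used up to QDR

data_rate_gbps = lanes * signalling_gbps_per_lane * encoding_efficiency
print(f"4x QDR usable data rate: {data_rate_gbps:.0f} Gbit/s")  # 32 Gbit/s
```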
  • Slide 15
  • CERN (thanks to the technical coordination team for their help!)
  • Slide 16
  • Test in real conditions: first in loop-back on the AMC40, soon with a front-end prototype
  • Watch out for bit errors / verify the optical margin
  • Use a MiniDAQ setup
  • Slide 17
  • Long-distance cabling between the PCIe40 boards and the underground patch-panel rack, in two configurations (slide by L. Roy):
  • Config 1: PCIe40, short MPO patch cord (2-5 m), MPO-MPO adapter, long-distance cable (300 m), MPO-to-12x LC or SC breakout cassette
  • Config 2 (load sharing): PCIe40, short fan-out (2-5 m), MPO-MPO adapter, long-distance cable (300 m)
  • Slide 18
  • Slide 19
  • Slide 20
  • No errors (12 links for 4 weeks) over 700 m; more links and different tests to follow (BER limit estimated below)
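An error-free run of that length translates into an upper limit on the bit-error rate; a sketch assuming a 4.8 Gbit/s line rate per link (the slide gives only the number of links and the duration):

```python
import math

# 95% CL upper limit on the BER after observing zero errors:
# BER < -ln(0.05) / N  (about 3/N) for N transmitted bits.
n_links = 12
line_rate_bps = 4.8e9             # ASSUMED GBT/Versatile Link line rate
duration_s = 4 * 7 * 24 * 3600    # 4 weeks

n_bits = n_links * line_rate_bps * duration_s
ber_limit = -math.log(0.05) / n_bits
print(f"Bits transferred: {n_bits:.2e}")   # ~1.4e17 bits
print(f"BER < {ber_limit:.1e} at 95% CL")  # ~2e-17
```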
  • Slide 21
  • Annecy, CPPM, CERN + sub-detector experts
  • Slide 22
  • TFC & ECS firmware (SOL40) DAQ firmware (for MiniDAQ and PCIe40) Prototype ECS for MiniDAQ Global firmware framework