
  • High Speed WAN Data Transfers

    Goals:
    - Move bulk data traffic from USLHCNet servers at CERN to a storage server at Caltech (Pasadena)
    - A proof of concept: designing a 100G data cache server in front of large Tier1 and Tier2 centers
    - Achieve the maximum network-to-disk ratio
    - Reduce CPU utilization and application-level network-to-disk latency
    - An easy setup for others to use as a design guide

    Azher Mughal
    July 17, 2014

    1


  • Logical Layout from CERN (USLHCNet) to Caltech (Pasadena)

    [Diagram: two client servers, Sandy1 and Sandy3, each send 40Gbps from CERN across Internet2 AL2S (via the CERN / Amsterdam path) to the storage server at Caltech; maximum traffic allowed is 80Gbps]

    2


  • Physical Layout from CERN (USLHCNet) to Caltech (Pasadena)

    [Diagram: client servers Sandy1 and Sandy3 at CERN (40Gbps each) connect through an MLXe-8, across SURFnet and Internet2 / MANLAN (MLXe-16 switches), then via a T1600 and CENIC to an MLXe-16 and the storage server at Caltech; RTT 165ms]

    3


  • Server and OS Specifications

    Storage Server:
    - RHEL 7.0, Mellanox OFED (2.2-1.0.1)
    - SuperMicro (X9DRX+-F)
    - Dual Intel E5-2690-v2 (Ivy Bridge)
    - 64GB DDR3 RAM
    - Intel NVMe SSD drives (Gen3 PCIe)
    - Two Mellanox 40GE VPI NICs

    Client Systems:
    - SL 6.5, Mellanox OFED
    - SuperMicro (X9DR3-F)
    - Dual Intel E5-2670 (Sandy Bridge)
    - 64GB DDR3 RAM
    - One Mellanox 40GE VPI NIC

    4


  • Data Transfer Tools: Which to Choose?

    FastDataTransfer (http://fdt.cern.ch)
    - Operates using TCP
    - Breaks source files into parallel data streams
    - Written in Java

    RDMA FTP (http://ftp100.cewit.stonybrook.edu/rftp/)
    - Operates using RDMA or TCP
    - RDMA mode: breaks source files into parallel data streams
    - TCP mode: single stream for each source file
    - Written in C

    5
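    The "parallel data streams" idea that both tools rely on can be sketched as follows. This is an illustrative assumption, not FDT's or RFTP's actual implementation; `parallel_read` and `read_chunk` are hypothetical helpers that split one source file into byte ranges read concurrently.

    ```python
    import concurrent.futures
    import os

    def read_chunk(path, offset, size):
        # Each stream opens its own handle and reads only its byte range.
        with open(path, "rb") as f:
            f.seek(offset)
            return f.read(size)

    def parallel_read(path, streams=4):
        total = os.path.getsize(path)
        chunk = -(-total // streams)  # ceiling division: bytes per stream
        with concurrent.futures.ThreadPoolExecutor(max_workers=streams) as ex:
            parts = ex.map(lambda i: read_chunk(path, i * chunk, chunk),
                           range(streams))
        return b"".join(parts)
    ```

    In a real transfer each chunk would be sent over its own TCP connection (or RDMA channel) rather than joined in memory; the split-by-offset scheme is the part the slide's bullets describe.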

  • Data Transfer Using RFTP Software

    RFTP software in TCP mode requires multiple source files to be transferred in parallel.

    Test Configuration (Server):
    - 4 RFTP daemons listening on unique TCP ports
    - Each RFTP daemon handles 2 SSD drive mount points (8 system mount points in total)

    Test Configuration (Clients):
    - /dev/zero is used for reading (50 aliases created in a common directory)
    - Each physical server starts 4 RFTP clients
    - Two client RFTP processes connect to one RFTPD daemon at the destination
    - Total of 8 RFTP clients across the two client servers

    6
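    The client-side read sources described above take only a few lines to set up; the directory path and `fileN` names here are assumptions for illustration, not the paths used in the tests.

    ```python
    import os

    # Create 50 aliases of /dev/zero in a common directory, as in the
    # client test configuration (src_dir and file names are illustrative).
    src_dir = "/tmp/zero-src"
    os.makedirs(src_dir, exist_ok=True)
    for i in range(1, 51):
        alias = os.path.join(src_dir, f"file{i}")
        if os.path.lexists(alias):
            os.remove(alias)
        os.symlink("/dev/zero", alias)
    ```

    Reading from /dev/zero removes the source disks from the equation, so the measurement isolates the network and the destination's write path.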

  • Data Writing over the SSD Drives (Destination Server at Caltech)

    [Graph: disk writes sustained at 8.2GB/sec]

    7
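    As a sanity check, the observed disk-write rate can be converted to an equivalent line rate (assuming decimal gigabytes, 1 GB = 8 Gb):

    ```python
    # Convert the observed disk-write rate to an equivalent network line rate.
    disk_write_gbs = 8.2                 # GB/s across the SSD mount points
    line_rate_gbps = disk_write_gbs * 8  # 65.6 Gbps
    print(f"disks absorb about {line_rate_gbps:.1f} Gbps of network input")
    ```

    At 8.2GB/s the disks absorb roughly 65.6Gbps, the same order as the inbound WAN traffic, which suggests the write path rather than the network was near its limit.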

  • Inbound Data Traffic from the WAN (CERN)

    [Graph: ~70Gbps aggregate inbound traffic, split across the first and second 40GE NICs]

    8

  • Internet2 AL2S Layer 2 Traffic Statistics

    - A traffic surge of 97.03 Gbps was observed during these transfers
    - This is a possible limiting factor on overall traffic at the receiving server
    - Microbursts are often not reported by monitoring clients

    9
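    The last point can be shown with a toy example; the per-second sample values are invented for illustration, only the 97.03 Gbps peak comes from the measurement. A monitor that averages over its polling interval under-reports short bursts:

    ```python
    # Illustrative 1-second throughput samples containing one microburst.
    samples_gbps = [70.0, 70.0, 97.03, 70.0, 70.0, 70.0]

    peak = max(samples_gbps)                          # what the link actually saw
    reported = sum(samples_gbps) / len(samples_gbps)  # what an averaging monitor shows

    print(f"peak {peak:.2f} Gbps, reported {reported:.2f} Gbps")
    ```

    The averaged figure stays well below the burst, so a surge large enough to overrun the receiving server can pass unnoticed in interval-based graphs.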
