Pacific Research Platform and CENIC – Pacific Wave
Advanced Networking Initiatives
PRESENTERS: Sana Bellamine, Network Engineer, CENIC
John Hess, Network Engineer, CENIC
TABLE OF CONTENTS
• Infrastructure
• Ongoing initiatives in support of research applications and projects
• Data-driven, compute-intensive use-cases and experiments
• Special Projects
INFRASTRUCTURE
CENIC: CalREN DC, CalREN HPR, Pacific Wave
Pacific Research Platform
CENIC manages three backbones: CalREN DC (commodity), CalREN HPR, and Pacific Wave.
[Diagram: the three backbones. The Pacific Wave international exchange (R&E) runs over Internet2's optical system and over some partner/international links; CalREN HPR (R&E) and CalREN DC (R&E and commodity Internet) run over CENIC's managed optical system, with 100GE interconnects at Sunnyvale (SNVL) and Los Angeles (LA).]
CalREN DC backbone: 3x100GE Core
[Map: CalREN DC backbone. Core nodes SNVL, LA, EMVL, SACR, and RIVE are interconnected with 3x100GE links; additional sites (SLO, SOL, SDG, SFO, PDC, CSAC, Tustin, FERG, BAKE, FRES, ECEN, CORN, Colusa) attach at 100GE, 40GE, 2x10GE, or 10GE.]
The CalREN HPR backbone: 1x100GE Core
The Pacific Wave backbone
[Map: Pacific Wave backbone spanning Tokyo, Seattle, the Bay Area, Los Angeles, El Paso, Houston, Dallas, Tulsa, Kansas City, Chicago, Albuquerque, and Denver, with 10GE to 2x100GE links provided over Internet2 (including AL2S), TransPAC, Transtelco, and LEARN.]
Note: The Pacific Wave Backbone does not use CENIC’s optical backbone
Atlantic Pacific Research and Education Exchange (AP-REX) Collaboration
Internet2 and Pacific Wave have recently undertaken an initiative to more closely align the activities of the R&E Exchanges on the East and West Coasts (MANLAN, WIX, and Pacific Wave):
• Creating a seamless end-to-end experience for participants in the Exchanges
– Close collaboration among the exchanges and emerging R&E networking trends, such as the PRP, NRP, and GRP.
– Improving end-to-end performance for the research community.
– Enhancing access to both community-hosted and cloud-provided compute and storage resources that are integral to the success of research.
• 100 Gbps express route from Los Angeles to McLean and New York, with backups on two alternate paths.
• Automation of circuit provisioning
– Enabling operations staff within both organizations to dynamically configure paths across the entire Exchange footprint.
– Reducing email interaction with participants by moving to automated provisioning.
• Development of a shared website or portal to provide Exchange information
– Connection costs, network policies, and service offerings.
– Contact information for technical support and administrative questions.
• Providing consistency in performance measurement and reporting
– SNMP traffic measurement, weather maps, link utilization.
– Integrated perfSONAR mesh with test points across the exchanges connected at both 10 Gbps and 100 Gbps.
– A MaDDash dashboard with information on throughput and other metrics.
• Seeking additional opportunities for automation and simplification of service provisioning and service offerings.
ONGOING INITIATIVES IN SUPPORT OF RESEARCH APPLICATIONS AND PROJECTS
Ongoing network scaling initiatives in support of data intensive projects
- Scaling of the CalREN DC and CalREN HPR backbones:
  - Prepare the core segments of the layer-3 backbone for 400GE+.
  - Typically, the optical infrastructure is upgraded first (and takes the longest).
  - Accelerated deployment of Flex Spectrum ROADMs and CDC-capable add/drop infrastructure, prioritizing core paths of the network.
  - Deploying high-density 100GE core L2/L3 devices that will support 400GE termination (with minor hardware upgrades).
- Pacific Wave:
  - Replacement of the Brocade MLXes with MX10Ks (starting Q1 2020).
  - In select cases, provision 100GE paths for experimental traffic (e.g., for testing dynamic VLAN provisioning).
  - Extending Pacific Wave's international reach via AP-REX.
- Goal is for the network infrastructure to be the "fabric" for science:
  - Readiness to deliver 400GE+ for data-intensive scientific projects:
    - 11/2019: ability to provision 400G+ superchannels over the Southern route. The San Diego Supercomputer Center is on this path.
    - By mid-2020, we will be able to provision 400G+ between nodes on the Southern path and nodes on the Coastal path.
2015 Vision: The Pacific Research Platform will Connect Science DMZs, Creating a Regional End-to-End Science-Driven Community Cyberinfrastructure
NSF CC*DNI Grant, $6M, 10/2015-10/2020; Year 5 now underway
PI: Larry Smarr, UC San Diego Calit2
Co-PIs:
• Camille Crittenden, UC Berkeley CITRIS
• Tom DeFanti, UC San Diego Calit2/QI
• Philip Papadopoulos, UCI
• Frank Wuerthwein, UCSD Physics and SDSC
Letters of Commitment from:
• 50 Researchers from 15 Campuses
• 32 IT/Network Organization Leaders
Source: John Hess, CENIC
Optical Infrastructure used by the DC and HPR backbones: Continue to Expand 400G capabilities
Scaling the DC Backbone: Starting from the CORE
[Map: CalREN DC backbone core (SNVL, LA, EMVL, SACR, RIVE) and attached sites. By mid-2020: the optical path can support 400G superchannels, handoffs toward L2/L3 remain 100GE, and 400G validation in production was completed 11/2019. A further phase is marked for end of 2021.]
Pacific Wave Backbone
[Map: Pacific Wave backbone (Tokyo, Seattle, Bay Area, Los Angeles, El Paso, Houston, Dallas, Tulsa, Kansas City, Chicago, Albuquerque, Denver) with 10GE to 2x100GE links over Internet2, TransPAC, Transtelco, and LEARN.]
100G dedicated to experimental dynamic provisioning
Pacific Wave infrastructure-attached resources
100G-connected DTNs:
● Dual-socket – (2) E5-2667v4 8-core @ 3.2GHz w/ 256GB DRAM
● 6.4TB raw on NVMe – (2) Kingston (Liqid) 3.2TB NVMe PCIe 3.0 x8
● 80TB raw on SAS3 – (8) 10TB HGST off an LSI 9300-8i HBA
● Mellanox ConnectX-5
100G-connected pS nodes – single-socket E5-1630v4 4-cores @ 3.7GHz w/64GB DRAM
10G-connected pS nodes
10G-connected ‘dynamic’ pS nodes
10G-connected x86 Hypervisor VM servers
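Throughput between such DTNs is typically verified end to end; below is a minimal sketch, assuming iperf3 is installed on both ends and using a hypothetical receiver host name, for driving a multi-stream test from Python:

```python
import json
import subprocess

# Hypothetical receiver DTN; on the far end run: iperf3 -s
RECEIVER = "dtn-receiver.example.net"

# Multiple parallel TCP streams are usually needed to approach high line rates.
result = subprocess.run(
    ["iperf3", "-c", RECEIVER, "-P", "8", "-t", "30", "--json"],
    capture_output=True, text=True, check=True,
)

report = json.loads(result.stdout)
gbps = report["end"]["sum_received"]["bits_per_second"] / 1e9
print(f"Achieved {gbps:.1f} Gbit/s over 8 parallel streams")
```

In practice such tests are run on a schedule (e.g., via the perfSONAR mesh mentioned earlier) rather than ad hoc.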
Pacific Wave K8s
• K8s-managed cluster, HA control plane of 10G 'dynamic' pS nodes (see the node-inventory sketch after this list)
• Cilium for the CNI: VXLAN tunnels, moving to direct routing
• CRI-O for the container runtime, exploring Singularity-CRI on at least one of the worker nodes
• EdgeFS for persistent storage (in progress): NVMe for caching and metadata, with data across SAS3 spindles
• Interdomain cluster federation with Admiralty.io: in testing with the Starlight K8s cluster, planning toward federating with PRP Nautilus and other cluster domains
• Future work:
– Dual-stack IPv4/v6
– Consider deploying GPU / TPU / FPGA
– Collaborating with SENSE toward containerizing DTN-RM
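As an illustration of inspecting such a cluster, here is a minimal sketch using the official Kubernetes Python client, assuming a kubeconfig with access to the cluster; the GPU resource name shown is the conventional NVIDIA device-plugin name, included here as an assumption:

```python
from kubernetes import client, config

# Assumes ~/.kube/config points at the Pacific Wave / Nautilus-style cluster.
config.load_kube_config()
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    name = node.metadata.name
    capacity = node.status.capacity or {}
    cpus = capacity.get("cpu", "?")
    # "nvidia.com/gpu" is the standard device-plugin resource name for NVIDIA GPUs.
    gpus = capacity.get("nvidia.com/gpu", "0")
    print(f"{name}: {cpus} CPUs, {gpus} GPUs")
```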
DATA-DRIVEN, COMPUTE-INTENSIVE USE-CASES AND EXPERIMENTS
More Scientists are using GPUs: Jules Jaffe has Over 1 Billion Images So Far!
Requires Machine Learning for Automated Image Analysis and Classification
Phytoplankton: Diatoms
Zooplankton: Copepods
Zooplankton: Larvaceans
Source: Jules Jaffe, SIO
”We are using the FIONAs for image processing... this includes doing Particle Tracking Velocimetry that is very computationally intense.”-Jules Jaffe
Open Science Grid on PRP – start with two of the smaller big science projects
Source: Igor Sfiligoi, UCSD
IceCube Neutrino Observatory
• IceCube Neutrino Observatory has been using 120 Nautilus GPUs since March 8
• This would cost $2,880/day in a commercial cloud (at $1/hr) or ~$20,000/week
• An 8-GPU FIONA8 for Nautilus costs $20,000 to buy
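A quick back-of-the-envelope check of the figures on this slide (only the numbers stated above are used):

```python
gpus = 120            # Nautilus GPUs used by IceCube
cloud_rate = 1.0      # assumed commercial-cloud price, $/GPU-hour

per_day = gpus * cloud_rate * 24
per_week = per_day * 7
print(f"Cloud cost: ${per_day:,.0f}/day, ~${per_week:,.0f}/week")
# -> Cloud cost: $2,880/day, ~$20,160/week
# For comparison, an 8-GPU FIONA8 costs about $20,000 to purchase outright.
```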
By Amble - Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=8773726
GPU Simulations are Needed to Improve the Ice Model => Results in Significant Improvement in Pointing Resolution for Multi-Messenger Astrophysics
Source: Tom DeFanti, UCSD
PRP’s Kubernetes Nautilus GPU Usage For CHASE-CI Machine Learning and Application Projects
Grafana shows 332 32-bit GPUs on Nautilus
osggpus is IceCube, using 163 GPUs in this snapshot. Most of the other projects are machine learning.
See https://grafana.nautilus.optiputer.net/d/KMsJWWPiz/cluster-usage?orgId=1
Source: Tom DeFanti, UCSD
SDSC and IceCube Center Conduct GPU Cloudburst Experiment
• Source: https://opensciencegrid.org/news/2019/11/22/gpu-cloudburst.html
• Single HTCondor pool of 51K GPUs from AWS, Azure, and Google Cloud, spanning 28 cloud regions across 3 continents
• Aggregate peak of about 380 PFLOP32s – 95% of the processing power of Summit (based at Oak Ridge National Laboratory); a rough per-GPU check follows below
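A rough consistency check of the peak figure, using only the numbers on this slide:

```python
gpus = 51_000          # GPUs in the HTCondor pool
peak_pflops32 = 380    # aggregate FP32 peak, in PFLOPS

tflops_per_gpu = peak_pflops32 * 1_000 / gpus
print(f"Average ~{tflops_per_gpu:.1f} TFLOP32s per GPU")
# -> roughly 7.5 TFLOP32s per GPU, plausible for the mix of cloud GPU types used.
```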
2017-2019: HPWREN: 15 Years of NSF-Funded Real-Time Network Cameras and Meteorological Sensors on Top of San Diego Mountains for Environmental Observations Source: Hans Werner Braun, HPWREN PI
Pan-Cetin Forest Fire Detection Algorithm on Holy Fire (2018)
Chris Paolini, SDSU https://youtu.be/FzALm-hXZZ0
Once a Wildfire is Spotted, PRP Brings High-Resolution Weather Data to Fire Modeling Workflows in WIFIRE
[Workflow diagram: real-time meteorological sensors, weather forecasts, and landscape data feed the WIFIRE Firemap workflow over PRP to produce a fire perimeter.]
Source: Ilkay Altintas, SDSC
PRP Engineers Designed and Built Several Generations Of DTNs called Flash I/O Network Appliances (FIONAs)
UCSD-Designed FIONAs Addressed the Disk-to-Disk Data Transfer Problem at Near Full Speed on Best-Effort 10G, 40G and 100G Networks
FIONAs — 10/40G, $8,000
FIONAs designed by Phil Papadopoulos, John Graham, Joe Keefe, and Tom DeFanti
FIONette — 1G, $250; used for training 50 engineers in 2018-2019
John Graham & Dima Mishin at SC'17 with 100G FIONA8
Two FIONA DTNs at UC Santa Cruz: 40G & 100G
PRP/CHASE-CI Created FIONA8s By Adding 8 GPUs to Support PRP Big Data Applications Needing Machine Learning For Science
FIONA8 2U/4U Rack-Mounted Cluster Nodes Running Kubernetes in Science DMZs
Eight Nvidia GTX 1080 Ti or RTX 2080 Ti GPUs, ~$19,000-$29,000
44 CPU cores, 256GB RAM, 1TB NVMe SSDs; up to 8TB NVMe, 16TB SSD, 1TB RAM
Dual 10G ports
Design: John Graham, Calit2
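To give a sense of how a FIONA8 is consumed under Kubernetes, here is a minimal sketch of a pod that requests GPUs via the Python client. The image, pod name, and namespace are placeholders, and "nvidia.com/gpu" is the standard NVIDIA device-plugin resource name; none of this is the PRP's actual job configuration.

```python
from kubernetes import client, config

config.load_kube_config()

# Hypothetical ML job asking the scheduler for 8 GPUs on a single FIONA8.
pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="ml-train-example"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="train",
                image="tensorflow/tensorflow:latest-gpu",  # placeholder image
                command=["python", "train.py"],            # placeholder entrypoint
                resources=client.V1ResourceRequirements(
                    limits={"nvidia.com/gpu": "8"},
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```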
ElastiFlow: See Inter-cluster Campus-Level Traffic Flow Grouped by AS
Source: Tom DeFanti, UCSD
PRP's Nautilus Hypercluster, Connected by PRP's Use of the CENIC 100G Network
14-Campus Nautilus Cluster: 3,300 CPU cores, 122 hosts, ~4 PB storage, >350 GPUs, >30M core-hrs/day
[Map: per-campus FIONA and storage deployments connected at 10G-100G, including Caltech (100G NVMe 6.4TB), UCSF (40G 192TB), Calit2/UCI (40G 160TB HPWREN, 40G 160TB, 4 FIONA8s*), UCSD (2x40G 160TB HPWREN, 35 FIONA2s, 12 FIONA8s), SDSC @ UCSD (100G Epyc NVMe, 100G Gold NVMe, 8 + 5 FIONA8s), UCR (40G 160TB), USC, UCLA, Stanford U, UCSB, UCSC, UCM (40G 160TB), SDSU (FPGAs + 2PB BeeGFS), CSUSB (10G 3TB, a Minority Serving Institution), and NPS (100G 48TB), with additional FIONA8s, FIONA4s, HPWREN storage, and PRP disk nodes distributed among the CHASE-CI sites. * = July RT.]
Sources: John Graham and Tom DeFanti, UCSD
United States PRP/TNRP Nautilus Hypercluster Also Connects 3 More Regionals and 3 Internet2 Sites
[Map: additional nodes at U Hawaii (40G 3TB, CENIC/PW link), NCAR-WY (40G 160TB), UWashington (40G 192TB), UIC (40G FIONA), StarLight (40G 3TB), and 100G FIONAs at Internet2 Chicago, Kansas City (plus a 10G FIONA1), and New York City.]
Source: Tom DeFanti, UCSD
TNRP = PRP (CENIC, PNWGP, FRGP, HI, and MREN) + OSG + ESnet + Quilt + NRP Pilot (I2, KINBER, Learn, GPN, NYSERnet) + MCNC + NM Tribal + …
[Map legend: Original PRP sites, NRP Pilot sites, CENIC/PW links, and I2-CENIC links.]
Towards The NRP (TNRP): 3-Year $2.5M NSF Grant, OAC-1826967
Source: Tom DeFanti, UCSD
Third NRP Workshop, September 24-25, Minneapolis
Nautilus Has International Nodes Too: The Global Research Platform is Emerging
[Map: PRP's current international partners in Guam, Australia, Korea, Singapore, and the Netherlands, including UvA (10G 35TB, 40G FIONA6), KISTI (40G 28TB), U of Guam (10G, coming), and U of Queensland (100G 35TB).]
Transoceanic nodes show distance is not the barrier to above-5 Gb/s disk-to-disk performance.
GRP Workshop 9/17-18 at UCSD
Source: Tom DeFanti, UCSD
PRP Tech: In Progress and Coming Soon
• Support users with IoT/Robotics/Augmented Reality needs
– Nvidia Jetson Xaviers and Nanos
• Also FPGA data-center-capable boards (Xilinx U200s)
– Compute: application acceleration (e.g., TensorFlow)
– Climate/weather segmentation
– Inferencing
– Satellite imagery orthorectification (align with wildfire maps)
– 100G SDX P4 build-out (SDSU, USC, NU, FIU, UCSD, Caltech)
• And Tensor Cores and TPUs
– Nvidia 2080 Ti cards: 544 Tensor Cores each, 4,352 per FIONA8
– Our Nautilus users can access Google Cloud TPUs
– Google Edge TPU Coral development boards and USB-C Edge TPU accelerator/co-processor
Source: Tom DeFanti, UCSD
Some PRP/TNRP Future Goals
• Harvesting Application Usage Patterns on PRP/OSG
• Increasing Diversity in our PRP "Garden of Architectures"
• Adopting more "Mindfully Parallel" Scientific Computing
• Enabling Archival Services for PRP Datasets
• Making These Datasets Discoverable
• Federating Commercial & NSF Clouds
• Continuing Migration Toward IPv6 while Maintaining IPv4
• Adopting NextGen Software-Defined Networking/Storage Tools
• Making Security More Robust
– Using PRP as a National-Scale “Honeypot” to Generate Data for ML Analysis of Malicious Network Attacks
Source: Tom DeFanti, UCSD
Conclusions: PRP/TNRP Monitoring/Measuring/Usage: Expanding in Manageable Ways is the Experiment and the Challenge
• Great Networking with 10-100Gbps Science DMZ Performance is a Necessary but not Sufficient Condition to Serve Data-Driven Researchers
• They need Science DMZs & DTNs with Lots of Low-Cost Storage, Encryption, Large-RAM CPUs, GPUs, TPUs, FPGAs, and High-Availability Computing
• Measuring and Monitoring at all Levels is Key to Better Usage and Security
• Compatibility with Google, Microsoft, and Amazon Clouds, and NSF/DOE Supercomputers Helps Ensure Scalability and Survival Post-PRP
• Convergence with Open Science Grid and Slate is upon us!
Source: Tom DeFanti, UCSD
PRP/TNRP/CHASE-CI Support and Community:
• US National Science Foundation (NSF) awards to UCSD, NU, and SDSC: CNS-1456638, CNS-1730158, ACI-1540112, ACI-1541349, & OAC-1826967; OAC-1450871 (NU) and OAC-1659169 (SDSU)
• UC Office of the President, Calit2, and Calit2's UCSD Qualcomm Institute
• San Diego Supercomputer Center and UCSD's Research IT and Instructional IT
• Partner Campuses: UCB, UCSC, UCI, UCR, UCLA, USC, UCD, UCSB, SDSU, Caltech, NU, UWash, UChicago, UIC, UHM, CSUSB, HPWREN, UMo, MSU, NYU, UNeb, UNC, UIUC, UTA/Texas Advanced Computing Center, FIU, KISTI, UVA, AIST
• CENIC, Pacific Wave/PNWGP, StarLight/MREN, The Quilt, KINBER, Great Plains Network, NYSERNet, LEARN, Open Science Grid
• Internet2, DOE ESnet, NCAR/UCAR and Wyoming Supercomputing Center
And Developing: Indiana University's EPOC
Source: Tom DeFanti, UCSD
SPECIAL PROJECTS
CENIC DDoS Mitigation Pilot
Virtual Customer Equipment (VCE)
MANRS RPKI Regional Pilot
Pacific Wave IRNC-related activities
CENIC's DDoS Mitigation Pilot for K12HSN
• One-year pilot for K12HSN: March 2018 - March 2019
– 26 COEs participated in the pilot
• Base Requirements:
– DDoS detection via NetFlow
– DDoS mitigation: on-demand, triggered by CENIC via BGP (see the sketch after this list)
– Reporting capabilities via portal; multi-tenant preferred
– Scalability to support all K12HSN sites
– Cost
• Per K12HSN, the pilot was considered successful. Many lessons learned.
• CENIC is actively working with Internet2 on joining Internet2's DDoS Mitigation solution.
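The on-demand trigger amounts to announcing the victim prefix toward the scrubbing path. Below is a minimal sketch of what such a trigger could look like, assuming an ExaBGP-style control process; the prefix, next-hop, and community values are placeholders, not CENIC's actual configuration.

```python
import sys

# Placeholder values for illustration only.
VICTIM_PREFIX = "198.51.100.0/24"   # downstream prefix under attack
SCRUB_NEXT_HOP = "192.0.2.1"        # next-hop on the VLAN toward the scrubbing center
COMMUNITY = "65000:666"             # example community used to mark/steer the route

def announce_to_scrubbing():
    # ExaBGP reads announce/withdraw commands from this process's stdout.
    print(f"announce route {VICTIM_PREFIX} next-hop {SCRUB_NEXT_HOP} community [{COMMUNITY}]")
    sys.stdout.flush()

def withdraw_after_attack():
    print(f"withdraw route {VICTIM_PREFIX} next-hop {SCRUB_NEXT_HOP}")
    sys.stdout.flush()

if __name__ == "__main__":
    announce_to_scrubbing()
```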
Cloud-based DDoS detection/mitigation: setup
[Diagram: a K12HSN site connects through the CENIC backbone to AS3356 (Level3 ISP), non-Level3 peers, and AS202 (the scrubbing backbone) toward the Internet; red arrows show unscrubbed traffic and green arrows show scrubbed traffic.]
• Los Angeles 100GE port: a VLAN for the ISP path and a VLAN for the DDoS path, rate-limited to 5Gbps (primary).
• Sunnyvale 100GE port: a VLAN for the ISP path and a VLAN for the DDoS path, rate-limited to 5Gbps (backup).
MANRS RPKI regional pilot
• Pilot will focus on facilitating MANRS adoption and validation of routing information by implementing Resource Public Key Infrastructure (RPKI) on a regional scale among CENIC and Pacific Wave research universities.
• The pilot is a collaborative effort involving contributors from CENIC, NSRC, ESnet, ARIN, as well as from the CENIC research university community.
• Initial focus on participation from California-based research institutions. We will later seek participation from Pacific Wave collaborators outside of California, including the University of Washington and its regional network, the PNWGP (Pacific NorthWest GigaPop).
• Progress updates will be part of a presentation at the 2020 CENIC Annual Conference
Routing Security and Trust Models
• Mutually Agreed Norms for Routing Security (MANRS) https://www.manrs.org
• A global initiative, supported by the Internet Society, toward reducing the most common threats to the routing ecosystem
• MANRS actions for Network Operators / Internet Service Providers (ISPs)
– Filtering -- prevent propagation of incorrect routing information
– IP source validation -- prevent traffic with spoofed source IP addresses
– Coordination -- facilitate global operational communication and coordination between network operators
– Global validation -- facilitate validation of routing information on a global scale
• Resource Public Key Infrastructure (RPKI)
– Certificates verify that a resource has been assigned to a specific entity
– Route Origin Authorization (ROA) - a cryptographically signed record that associates a BGP route announcement with the correct originating AS number (a lookup sketch follows below)
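To make the ROA concept concrete, here is a minimal sketch that checks an announcement against published ROAs using RIPEstat's public RPKI-validation data call; the response field names are best-effort assumptions about that API, and the prefix and ASN are purely illustrative.

```python
import requests

# Illustrative announcement: does AS65530 validly originate 203.0.113.0/24?
PREFIX = "203.0.113.0/24"
ORIGIN_ASN = "AS65530"

# RIPEstat data API; the exact layout of the response is an assumption here.
url = "https://stat.ripe.net/data/rpki-validation/data.json"
resp = requests.get(url, params={"resource": ORIGIN_ASN, "prefix": PREFIX}, timeout=10)
resp.raise_for_status()

data = resp.json().get("data", {})
print("RPKI validation status:", data.get("status", "unknown"))
print("Matching/covering ROAs:", data.get("validating_roas", []))
```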
MANRS/RPKI @ CENIC
- Filtering: measures that prevent the propagation of invalid routing information:
  - Filter route announcements received from downstream BGP customers.
  - Filter route announcements advertised out to BGP peers.
  - Reach out to our downstreams about restricting the source IP addresses leaving their border routers.
- Anti-spoofing: prevent traffic from spoofed IP addresses:
  - Implementing strict mode is a challenge in our environment: many asymmetric traffic vectors.
  - Loose mode does not provide anti-spoofing protection.
  - CENIC's phased approach: access-list based, ongoing (a filter-generation sketch follows below):
    - Phase 1: "Log"
      - Use the BGP filters applied towards our downstream customers to come up with an initial target anti-spoofing access-list. Add bogon IP space to the initial list.
      - Apply the initial target access-list in a passive mode: do not block any traffic; log unexpected source IP addresses.
      - Reach out to downstream sites about unexpected logged source IP addresses to determine if they are legitimate traffic (to account for sites that have IP space from another service provider).
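Below is a minimal sketch of how such an initial, log-mode filter could be derived from the downstream BGP prefix filters plus bogon space. The prefix data and the vendor-neutral pseudo-ACL syntax are illustrative placeholders, not CENIC's actual filters or router configuration.

```python
import ipaddress

# Illustrative downstream prefixes, as would be taken from existing BGP prefix filters.
DOWNSTREAM_PREFIXES = ["198.51.100.0/24", "203.0.113.0/24"]

# A few well-known bogon ranges; a production list would be much longer.
BOGONS = ["10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16", "127.0.0.0/8"]

def build_log_mode_acl(name="EDGE-ANTISPOOF-LOG"):
    """Emit a generic, vendor-neutral ACL: permit expected sources, log the rest."""
    lines = [f"access-list {name}"]
    for net in map(ipaddress.ip_network, BOGONS):
        lines.append(f"  deny ip {net} any log")     # bogon source space is never expected
    for net in map(ipaddress.ip_network, DOWNSTREAM_PREFIXES):
        lines.append(f"  permit ip {net} any")       # expected downstream source prefixes
    lines.append("  permit ip any any log")          # Phase 1: pass but log everything else
    return "\n".join(lines)

print(build_log_mode_acl())
```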
MANRS/RPKI @ CENIC
- Anti-spoofing / CENIC's approach, continued:
  - Phase 2: "Drop spoofed packets"
    - Our plan is to collaborate with our members on the transition from passive mode (log mode) to active mode (drop invalid source IP addresses).
    - Use the CENIC TAC meetings as a forum to communicate efforts and progress to our members.
    - In support of future automation efforts related to anti-spoofing filters, have special interface tags for edge (customer-facing) interfaces.
- Coordination: facilitate global operational communication and coordination between network operators:
  - Keeping CENIC's contact info up to date in the RIR, in the IRR, and in PeeringDB.
MANRS/RPKI @ CENIC
- Global validation: tools/objects that facilitate the validation of routing announcements globally:
  - Audit our prefixes to ensure each prefix that is announced has a matching route object. Ensure we are not relying on proxy-registered route objects.
  - Audit our BGP export routing policies to ensure more-specifics are not leaked (more-specifics can be leaked in support of traffic engineering).
  - Continued compliance with MANRS guidelines in cloud-based DDoS implementations.
  - Origin validation:
    - RPKI discussions during our TAC and SEC meetings.
    - RPKI workshops in collaboration with ARIN and with NSRC.
    - Prioritize ROA creation for IPv6 (non-legacy space).
    - October 2019 - March 2020
RPKI Validator instances synchronize their local ROA database with the RIRs' trust anchors.
Route Servers interact with RPKI Validators, using ROA validation status as a hook for determining BGP policy. Route Servers facilitate BGP policy for routing platforms that do not support RPKI, and provide routing telemetry and other data to the Looking Glass instances.
Looking Glass instances provide monitoring and debugging tools to network operators and participants
CENIC MANRS RPKI Workshop - Abstract
October 10-11, 2019 - CENIC offices, La Mirada
Overview
Mutually Agreed Norms for Routing Security (MANRS) is an initiative, supported by the Internet Society, focused on reducing the most common threats to the global routing ecosystem through a variety of localized implementation methods. Resource Public Key Infrastructure (RPKI) is a technology that enables network operators to verify the integrity of routing information. This workshop is part of a CENIC-initiated pilot to facilitate MANRS adoption and implement an RPKI deployment on a regional scale among CENIC and Pacific Wave research universities. The MANRS RPKI pilot is a collaborative effort involving contributors from CENIC, NSRC, ESnet, and ARIN, as well as from the research university community.
In this workshop, participants will learn about the global MANRS effort, how RPKI fits within the MANRS framework, the CENIC MANRS RPKI pilot, and the roles and services offered by ARIN to support RPKI deployments. The workshop will offer hands-on lab exercises to model routing best practices, as well as creating Route Origin Authorization objects (ROAs) within ARIN's Operational Test & Evaluation Environment (OT&E).
Instructors: Mark Kosters (MK), ARIN; Jon Worley (JW), ARIN; Philip Smith (PS), NSRC; Sana Bellamine (SB), CENIC; John Hess (JH), CENIC
CENIC MANRS RPKI Workshop - Program
Thursday, October 10
9:00 - 10:30 Workshop Welcome (JH)
● CENIC MANRS RPKI Pilot overview
● Workshop Objectives & Goals
● Slack channel
10:30 - 10:45 Break
10:45 - 12:15 Session 1: Route Filtering - IRR and RPKI (SB)
● Campus aspects - BCP38 / Ingress Filtering, uRPF
● RPKI and DDoS mitigation services
12:15 - 13:15 Lunch
13:15 - 14:45 Session 2: Introduction to RPKI (MK)
● Lab 1.1 ARIN OT&E Online interface to setup RPKI and provision ROA (JW)
14:45 - 15:00 Break
15:00 - 16:30 Session 3: Install and use RIPE NCC Validator with ARIN OT&E
● Lab 1.2 Install RIPE NCC Validator and use ARIN OT&E TAL (MK, JW)
● Lab 1.3 Validated results via RIPE NCC Validator (MK, JW)
16:30 - 17:00 Session 4: Day 1 debrief and review objectives for Day 2
17:00 Adjourn
Friday, October 11
9:00 - 10:30 Lab 2.1 MANRS elements for Network Operators (PS)
● Filtering; Anti-Spoofing; Coordination; and, Global Validation
10:30 - 10:45 Break
10:45 - 12:00 Lab 2.2 Setting up RPKI Validator with NSRC virtual environment (PS)
● Installing RPKI Validator cache on NSRC virtual environment
● Configuring routers to use validator cache
12:00 - 13:00 Lunch
13:00 - 14:30 Lab 2.3 Exploring RPKI Validator with real global data (PS)
● Exploring validator on NSRC virtual environment, using real global data
● Using public RPKI tools - e.g., bgp.he.net, SEACOM looking glass, etc.
14:30 - 15:00 Break
15:00 - 16:30 Session 3: Engineering Round-table, next steps for RPKI pilot deployment (ALL)
● Campus-specific use-cases, challenges
● Next steps in coordination toward deployment within pilot
16:30 - 17:00 Session 4: Workshop debrief and closing thoughts (ALL)
17:00 Adjourn
Pacific Wave Dynamic Circuit Services: AutoGOLE / NSI and SENSE
• NSI-orchestrated circuit services available to participants traversing each of the Seattle, Sunnyvale, Los Angeles, and Tokyo GOLEs
• In our current implementation each Pacific Wave GOLE in the pilot operates as its own NSI domain, e.g. Los Angeles as lsanca.pacificwave.net:2016, with separate OpenNSA instances for each GOLE; SENSE Network-RM (NRM) co-resident on Los Angeles and Sunnyvale instances.
• Exploring development of OpenNSA toward domain aggregator functionality
• C-plane peering (with NSI Aggregators) -- ESnet, NetherLight, StarLight, SINET
• D-plane peering -- ESnet, StarLight, SINET, JGN-X, and Caltech; Calit2 - UCSD soon
• Provisioning -- primarily RNP’s MEICAN webUI, and OpenNSA onsa CLI client https://wiki.rnp.br/display/secipo/AutoGOLE+MEICAN+Pilot
QUESTIONS?