DOCSIS 3.1 & SDN: SDN Ground Truth Experiences
David Early, Ph.D. Applied Broadband, Inc.
SDN is easy, right?
SDN and PacketCable Multimedia
(Diagram: the layered SDN architecture. The Application Layer (applications and application support) talks to the SDN Control Layer (orchestration, abstraction, and control support) over the application-control interface; the SDN Control Layer drives the Resource Layer (data transport and processing) over the resource-control interface. From PKT-SP-MM-I06-110629.)
OpenDaylight (ODL): Starting Point
OpenDaylight is a collaborative, open source project to advance Software-Defined Networking (SDN). OpenDaylight is a community-led, open, industry-supported framework, consisting of code and blueprints, for accelerating adoption, fostering new innovation, reducing risk and creating a more transparent approach to Software-Defined Networking.
ODL PacketCable Plugin
Created by CableLabs. Provides all the necessary components to support basic PCMM:
- Packetcable PCMM Provider
- Packetcable PCMM Model
- Southbound ODL plugin supporting a PCMM/COPS protocol driver
- Packetcable PCMM RESTCONF Service API
Limited to Service Class Name (SCN) based gates out of the box. Jumpstart: modify existing code rather than writing from scratch.
Ground Truth: Implementation
Adapt existing applications: telephony, congestion.
Use a common REST interface, based on YANG models: common to all applications, adaptable to unique circumstances.
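As a rough sketch of driving that REST interface from an application: the snippet below pushes an SCN-based gate through the plugin's RESTCONF service. The controller address, credentials, resource path, and payload field names are illustrative assumptions; the authoritative layout is whatever the plugin's YANG model defines.

    import requests

    ODL = "http://localhost:8181"     # assumed controller address
    AUTH = ("admin", "admin")         # assumed default ODL credentials

    # Illustrative SCN-based gate; field names follow the spirit of the
    # packetcable YANG model but are not authoritative.
    gate = {
        "gate": [{
            "gateId": "telephony-gate-1",
            "gate-spec": {"dscp-tos-overwrite": "0xa0"},
            "traffic-profile": {"service-class-name": "voice_up"},
            "classifier": {"srcIp": "10.1.1.100", "protocol": "17"},
        }]
    }

    url = (ODL + "/restconf/config/packetcable:qos/apps/app/telephony/"
           "subscribers/subscriber/10.1.1.100/gates/gate/telephony-gate-1")
    resp = requests.put(url, auth=AUTH, json=gate)
    # Note the limitation called out later: the reply is essentially just
    # 200 OK, with no explicit ACK/NACK from the CMTS.
    print(resp.status_code)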
ODL internal communications run between the datastores and the PCMM plugin; COPS-PR carries the gate messages southbound to the CMTS.
(Diagram: Telephony, Congestion, and Video applications sit above ODL; ODL speaks COPS-PR to the CMTS.)
Ground Truth
Successfully set gates in the lab: off-the-shelf open source, minimal modifications.
The premise shows promise: a modular approach to scale and flexibility.
Future: scalability, survivability, manageability.
Ground Truth: Current Limitations
Examples: not all PCMM Traffic Profiles and DOCSIS MAC-layer scheduling types are supported yet.
The ODL REST API does not return any explicit (ACK/NACK) operational information to the requesting ODL application; it returns just 200 OK.
Nothing earth-shatteringly bad, but there is work to be done.
How do you eat an elephant?
One bite at a time: an evolutionary approach.
Do one thing well, then move on. Don't boil the ocean. Only implement what is needed. Leverage other people's work. Remember scale and flexibility.
Keep it modular.
Thank you
Virtual Machine Placement Strategies for Virtual Network Functions
Adam Grochowski Juniper Networks
NFV/VNF Considered
NFV is gaining traction in the industry. OpenStack is the de facto Virtual Infrastructure Manager (VIM). NFV workloads differ from traditional cloud workloads.
It's about the network now.
Nova Workload Filters
Filter examples: CPU, RAM, storage, availability zone, IOPS.
Good candidate?
(Diagram: candidate hosts Host1 through Host4 pass through the filter stage, which removes non-viable hosts; the weight stage then ranks the survivors to pick the best candidate.)
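A minimal sketch of that filter-then-weigh flow, under the assumption of simplified interfaces (Nova's real scheduler classes are considerably richer):

    def pick_host(hosts, request, host_filters, weighers):
        """Filter out non-viable hosts, then weigh the survivors."""
        # Filter stage: a host must pass every hard constraint.
        viable = [h for h in hosts
                  if all(f.host_passes(h, request) for f in host_filters)]
        if not viable:
            raise RuntimeError("no viable host for this request")
        # Weight stage: the host with the highest combined weight wins.
        return max(viable,
                   key=lambda h: sum(w.weigh(h, request) for w in weighers))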
OpenStack doesn't care: it will plumb the network, and it will also place a VM on a non-viable host.
What About the Network?
What We Need
Needed: new network-related attributes: port bandwidth on the host and requested bandwidth on the instance.

    from nova.scheduler import filters

    class NetworkFilter(filters.BaseHostFilter):
        """Network bandwidth utilization filter."""

        def host_passes(self, host_state, filter_properties):
            """Only return hosts with sufficient available BW."""
            instance_type = filter_properties.get('instance_type')
            requested_bw = instance_type['bandwidth_mb']
            # 'free_bandwidth_mb' stands in for the proposed host-side attribute.
            return host_state.free_bandwidth_mb >= requested_bw
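Enabling such a filter is then a one-line scheduler configuration change; assuming the option name used in OpenStack releases of this era, something like the following in nova.conf:

    scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,NetworkFilter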
It's About the Network Now
New OpenStack development: the addition of network-based placement, to provide quality of service for network functions and to improve server utilization and ROI.
Thank you.
Assessing Network and Equipment Failures in the New SDN/NFV Architectures
Marlon Roa Infinera
Transport network: a change in management topology.
Legacy management: low bandwidth; relaxed latency needs, if any; connectivity protection is an objective; failures are not customer affecting; HA at the NMS/OSS level.
SDN/NFV management: high bandwidth and QoS methods; data and control traffic; service/data process latency is critical; forces clusters closer to the nodes; connectivity protection is a requirement; HA at the former NMS level and near key nodes.
(Diagram: the legacy NMS HA cluster alongside SDN controllers and NFV MANO instances in the new management topology.)
"Service continuity is not only a customer expectation, but often a regulatory requirement, as telecommunication networks are considered to be part of critical national infrastructure, and respective legal obligations for service assurance/business continuity are in place" (NFV ETSI [2] specification).
Effects: extended downtime or execution errors.
SDN Controller HA Clusters / NFV MANO HA Clusters
Sample functions:
- SDN: route, re-route, cross-connect, etc.
- NFV: functions that are part of the data path
- Deliver carrier-grade performance
SDN Controller / NFV MANO Hardware
Failure prevention:
- SDN/NFV: hardware protection at certain nodes
- OpenFlow 1.2 defines slave/master/equal controller roles, but no standard protection scheme
- Reboot/reset recovery complexity
- Hardware choices: processor, memory, I/O; operating temperature; software compatibility
SDN Controller/Orchestrator and NFV MANO connectivity
Sample functions: heartbeat monitoring (a sketch follows below); synchronized backups; SDN/NFV coexistence (ETSI/ONF); controller/orchestrator as in the NMS/OSS model; SDN/NFV HA clusters closer to the nodes.
Failure prevention:
- HA design: clusters, HA monitors, load balancers, low latency
- Connectivity redundancy between host servers; a new sub-network
- Security of the link (e.g., IPsec) due to remote controllers
- Link media upgrades to relieve congestion
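As a rough illustration of the heartbeat monitoring mentioned above, a minimal sketch; the addresses, port, and intervals are placeholders, and production HA stacks do far more (quorum, fencing, state sync):

    import socket
    import time

    PEER = ("10.0.0.2", 9999)   # placeholder address of the standby node
    INTERVAL = 1.0              # seconds between heartbeats
    TIMEOUT = 3.0               # silence threshold before declaring failure

    def send_heartbeats():
        """Active node: emit a periodic heartbeat datagram."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        while True:
            sock.sendto(b"HB", PEER)
            time.sleep(INTERVAL)

    def watch_heartbeats(port=9999):
        """Standby node: initiate failover if heartbeats stop arriving."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("", port))
        sock.settimeout(TIMEOUT)
        while True:
            try:
                sock.recvfrom(16)
            except socket.timeout:
                print("peer silent beyond timeout; initiating failover")
                break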
SDN/NFV Controller to Node connectivity
Sample functions: legacy configuration and PM data; SDN service-affecting decisions; execution and OAM of VNFs; security for the new customer traffic path.
Failure prevention:
- Redundant, symmetric, low-latency paths (NFV)
- Increased management link requirements: QoS (control, customer), latency, speed
- Security of the link (e.g., IPsec)
- Function download and execution validation
SDN/NFV deployment and the cost increase
New deployment challenges for the control layer:
- New qualification testing: multi-vendor, non-standard hardware and software
- New redundant control-layer hardware/software at the nodes
- New carrier-grade connectivity within nodes and among control clusters
Deployment cost is non-trivial today. Tomorrow:
- Industry certification of hardware/software bundles (e.g., MEF with ETH)
- As SDN and NFV mature, reduced cost and complexity
- Normalized redundancy of connections to the nodes
- Best practices for controller resiliency should become clear
Have you estimated the OPEX and CAPEX of an actual SDN and NFV deployment?
THANK YOU
Marlon Roa Technical Solutions Director [email protected]
DOCSIS 3.1 Overdrive: Dynamic Optimization Using a Programmable Physical Layer
Jason Schnitzer Applied Broadband, Inc.
Shannon & DOCSIS
(Diagram: CMTS to CM over the HFC plant.)
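The Shannon reference here is the standard capacity bound (a textbook result, not taken from the slides): C = B log2(1 + S/N). At high SNR, each additional dB of SNR on the HFC plant is worth roughly 0.33 bit/s/Hz, which is precisely the headroom that per-subcarrier bit loading in DOCSIS 3.1 OFDM profiles is designed to harvest.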
DOCSIS 3.1 OFDM Profile Optimization
Profile Management Application (PMA)
SOURCE: CableLabs, VNE-TR-SDN-ARCH-V01-150623
D3.1 Programmable Optimization System
Separation of data and control planes:
- Data forwarding via DOCSIS resources
- Control plane hosts the application
Abstraction of network complexity:
- CMTS Vendor Abstraction Layer (CVAL)
- Visibility instrumentation normalization
- Common control interface
Programmatic interaction:
- Common data model (YANG defined)
- RESTCONF API
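As a sketch of that programmatic interaction, a PMA-style client might pull per-CM downstream RxMER over the RESTCONF API. The endpoint, YANG path, and credentials below are illustrative assumptions, not a published CCAP API:

    import requests

    CCAP = "https://ccap.example.net"   # hypothetical CCAP/CVAL endpoint

    resp = requests.get(
        CCAP + "/restconf/data/docsis-pnm:ds-rxmer",   # illustrative YANG path
        params={"cm-mac": "00:11:22:33:44:55"},        # hypothetical CM selector
        auth=("pma", "secret"),
        timeout=10,
    )
    rxmer_db = resp.json()   # e.g., per-subcarrier RxMER values in dB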
Optimization Information Model: A Big Data Problem
Key inputs: DsRxMER, FEC codewords, RxPower.
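Those measurements feed profile selection. A toy bit-loading pass over per-subcarrier RxMER might look like the following; the MER thresholds are illustrative placeholders, not the spec's recommended values:

    # Map each subcarrier's RxMER (dB) to the highest QAM order it can carry.
    MER_THRESHOLDS = [      # (minimum RxMER in dB, modulation order)
        (41.0, 4096),
        (37.0, 2048),
        (34.0, 1024),
        (31.0, 512),
        (27.0, 256),
        (24.0, 128),
        (21.0, 64),
    ]

    def bit_load(rxmer_db_per_subcarrier):
        """Build a per-subcarrier modulation profile from RxMER data."""
        profile = []
        for mer in rxmer_db_per_subcarrier:
            order = next((qam for thresh, qam in MER_THRESHOLDS if mer >= thresh), 16)
            profile.append(order)   # fall back to 16-QAM on poor subcarriers
        return profile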
Current Art
Capabilities:
- Measurement and control plane
- Complete data model collection in a production network
- Profile control proven in the lab
- Basic optimization logic using simple network policies
Limitations:
- Small number of simple profiles
- Proprietary control interfaces (CLI)
- Limited data model (SNMP MIBs)
Future Work
Protocols:
- Define standard profile control interfaces on the D3.1 CCAP
- API for OPT-REQ/OPT-RSP
- Formalize the PMA architecture
- OpenDaylight SDN integration
Analysis & control:
- Historical data analysis
- Standard CMTS APIs
- Profile optimization and control in a production D3.1 network
Thank you
Evaluation of virtualizing DOCSIS MAC by software in data center
Li Zhang, Xin Yang, Lifan Xu, Huawei Technologies Co., Ltd.
Virtualized DOCSIS MAC: introduction
The DOCSIS MAC consists of a data plane, a control plane, and a management plane.
In CCAP implementations, the control and management planes are implemented entirely in software, while the data plane is almost all hardware (such as FPGAs) to guarantee high throughput and low latency.
In order to evaluate DOCSIS MAC virtualization, we break