Flotilla - Orchestration framework to deploy Containerized Network Functions and IoT applications using OpenStack
Sumanth M. Sathyanarayana
T-Labs Silicon Valley Innovation Center
Network Service Delivery in Enterprise Branches (State of the Art)

[Diagram: Tenant branches connect over the public Internet through branch gateways, with IPSec tunnels linking them to a public/private/hybrid cloud and the corporate IT infrastructure. Each branch gateway hosts a stack of dedicated appliances: FW/DPI, NAT, URL filter, IDS, IPS.]
Pain Points & Challenges in Traditional Customer Premise Equipment
• Complex workload:
– CPEs are specialized hardware boxes.
– Complexity in updating these boxes, leading to slower service delivery.
– Multiple enterprise branches need to be connected.
• Service provisioning issues:
– VPN setup and service provisioning takes weeks.
– Manual configuration and customization requires writing complex scripts.
– Pre-provisioning the network with more than adequate capacity.
• High CapEx and OpEx

Source: Colt Technology Services
FLOTILLA: Containerized Network Functions delivered to CPEs
• Elastic network service solution that is seamlessly and securely operated inside containers.

[Diagram: Branch gateways connect over the public Internet via IPSec tunnels to a public/private/hybrid cloud and the corporate IT infrastructure; the network functions (FW/DPI, NAT, URL filter, IDS, IPS) now run as containers.]

Containerized Customer Premise Equipment
• Eliminate proprietary hardware for each service and host a set of containerized network services (FW, IPS, NAT, QoS) on an x86 platform.

Cloud-based Network Service Orchestration
• Set up secure tunnels (VPN) between the cloud gateway and the different branches.
• Self-service network function provisioning and management on demand.
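The self-service provisioning step can be sketched as a small request validator; the payload fields and allowed function names below are illustrative assumptions, not Flotilla's actual API:

```python
# Toy sketch of a self-service provisioning request; field names and the set
# of allowed network functions are illustrative, not the real Flotilla API.

ALLOWED_FUNCTIONS = {"fw", "dpi", "nat", "url-filter", "ids", "ips", "qos"}

def validate_request(req: dict) -> list:
    """Return a list of validation errors; an empty list means the request is acceptable."""
    errors = []
    if not req.get("branch_id"):
        errors.append("branch_id is required")
    for nf in req.get("network_functions", []):
        if nf not in ALLOWED_FUNCTIONS:
            errors.append(f"unknown network function: {nf}")
    if not req.get("vpn", {}).get("cloud_gateway"):
        errors.append("vpn.cloud_gateway is required")
    return errors

request = {
    "branch_id": "branch-berlin-01",
    "network_functions": ["fw", "nat", "ids"],
    "vpn": {"cloud_gateway": "gw.cloud.example.com", "type": "ipsec"},
}
print(validate_request(request))  # -> []
```

A real orchestrator would, on a valid request, go on to set up the VPN tunnel and schedule the requested containers at the branch.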
Why Containers?
• Lightweight footprint: very small images with API-based control to automate the management of services.
• Resource overhead: lower use of system resources (CPU, memory, etc.) by eliminating hypervisor and guest-OS overhead.
• Deployment time: rapidly deploy applications with minimal run-time requirements.
• Updates: depending on requirements, updates, failure handling, and application scaling can all be achieved by scaling containers up/down.
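The scale-up/scale-down point can be illustrated with a toy replica calculator in the spirit of Kubernetes' horizontal autoscaling formula, ceil(current x utilization / target); the thresholds and bounds here are illustrative assumptions:

```python
# Minimal sketch of replica-scaling logic for a containerized network function.
# Mirrors the proportional formula used by horizontal autoscalers; the target
# utilization and replica bounds are illustrative, not production values.
import math

def desired_replicas(current: int, cpu_utilization: float,
                     target: float = 0.5, min_r: int = 1, max_r: int = 10) -> int:
    """Keep average CPU near the target utilization, clamped to [min_r, max_r]."""
    if current == 0:
        return min_r
    desired = math.ceil(current * cpu_utilization / target)
    return max(min_r, min(max_r, desired))

print(desired_replicas(2, 0.75))  # overloaded -> 3
print(desired_replicas(4, 0.1))   # idle -> 1
```

Because containers start in seconds rather than minutes, acting on this decision is cheap, which is what makes such reactive scaling practical.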
[Diagram: VM stack (host OS, hypervisor such as KVM, guest OS, libraries, VNF) versus container stack (host OS, container engine, containers and pods bundling application + libraries as CNFs), both built on kernel functions and modules: namespaces, cgroups, capabilities, chroot, SELinux.]
Benefits with a Container Management Solution

• Performance: Since containers share the host OS and only require resource allocation for the respective network function, they usually have better run-time performance compared to VMs.
• Scaling: As the network functions run directly on the host OS, eliminating the provisioning and processing delay of spinning up guest OSes, new containers can be spun up instantly for scaling up/out.
• Elasticity & agility: VMs depend on the guest OS, hypervisor type, accelerator used, and host OS, which increases the time to port a network function running on them. The absence of these dependencies makes containers elastic and faster to port.
• Resilience: Container management solutions such as Kubernetes provide self-healing features such as auto-placement, restart, and replacement of failed containers by continuously monitoring the health of individual containers or container groups.
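The self-healing behaviour can be sketched as a toy reconciliation step that restores failed containers to the desired running state; the names and states are illustrative, not a real Kubernetes controller:

```python
# Toy reconciler: anything not 'running' is restarted, and the actions taken
# are reported. Illustrative only; real systems do this via control loops
# that continuously compare desired state to observed state.

def reconcile(containers: dict) -> dict:
    """Restart any container not in the 'running' state; return actions taken."""
    actions = {}
    for name, state in containers.items():
        if state != "running":
            actions[name] = "restarted"
            containers[name] = "running"
    return actions

fleet = {"fw-0": "running", "nat-0": "crashed", "ids-0": "running"}
print(reconcile(fleet))  # {'nat-0': 'restarted'}
print(fleet)             # all containers running again
```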
Challenges with a Container Management Solution

• Security: Lower isolation between applications; managing constraints and authentication for rightful personnel.
• Deployment at the optimum location: Challenges of on-demand scaling while deploying at an optimized location.
• Resource management: Allocation of appropriate resources to meet the performance targets of network services, which in the service-provider context is usually five nines.
https://tools.ietf.org/html/draft-natarajan-nfvrg-containers-for-nfv-00
An Analysis of Container-based Platforms for NFV
Flotilla Architecture

[Diagram: Cloud side: a cloud API (Horizon, Murano, CLI) on top of a service abstraction layer with an API handler, backed by Nova, Neutron, and Glance, plus a branch API and DB. Branch side: a Flotilla agent with cloud API, frontend, abstraction layer, branch API, and DB, driving the container management solution. Cloud and branch communicate over TLS; container images are pulled from Docker Hub.]
Flotilla Features
Flotilla provides the following three features:
1. Network service orchestration from the cloud:
– Self-provisioning and management of network functions, connecting multiple branches to the cloud.
– Support for network function chaining.
2. Dynamic VPN tunnel deployment between the cloud and the branches.
3. Containerized network function deployment at branches:
– Flotilla deploys containerized network functions and eliminates complex hardware requirements at the branches.
– Helps reduce the cost of installing and maintaining these hardware appliances.
– Helps deliver faster updates and modifications to the services.
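Network function chaining can be sketched as ordered function composition over a packet; the firewall and NAT stand-ins below are toy policies, not real implementations:

```python
# Toy service function chain: a packet dict passes through an ordered list of
# network functions, stopping early if one of them drops it. The policies and
# addresses are illustrative assumptions.

def firewall(pkt):
    if pkt["dst_port"] in {23}:          # example policy: drop telnet
        pkt["dropped"] = True
    return pkt

def nat(pkt):
    if not pkt.get("dropped"):
        pkt["src_ip"] = "203.0.113.1"    # rewrite to the gateway's public IP
    return pkt

def apply_chain(pkt, chain):
    for nf in chain:
        pkt = nf(pkt)
        if pkt.get("dropped"):
            break
    return pkt

pkt = {"src_ip": "10.0.0.5", "dst_port": 443}
print(apply_chain(pkt, [firewall, nat]))  # {'src_ip': '203.0.113.1', 'dst_port': 443}
```

In the containerized setting, each element of the chain would be a separate container, and the orchestrator's job is to wire traffic through them in the requested order.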
Flotilla Workflow
[Diagram: Full architecture. Cloud side: cloud API (Horizon, Murano, CLI), service abstraction layer, API handler, Nova, Neutron, Glance, notifier, auth handler, branch API, DB. Branch side: Flotilla agent with cloud API, branch API, and DB, driving the container management solution and pulling images from Docker Hub. Cloud and branch communicate over TLS.]

Flotilla Workflow – Cloud Side
[Diagram: Same architecture, highlighting the cloud-side path from the cloud API through the service abstraction layer and API handler to Nova, Neutron, Glance, and the DB.]

Flotilla Workflow – VPN Setup
[Diagram: Same architecture, highlighting the dynamic VPN tunnel established over TLS between the cloud gateway and the branch, involving the notifier and auth handler.]

Flotilla Workflow – Branch Side
[Diagram: Same architecture, highlighting the branch-side path from the Flotilla agent's APIs to the container management solution, which pulls containerized network functions from Docker Hub.]
Cloud (Flotilla) Connected IoT Platform
• Internet of Things (IoT) devices have trouble interacting with each other, especially given the multitude of proprietary solutions out there.
• Data management and security add to the existing complexity of the different IoT solutions.
• UC Berkeley's Swarm Lab has come out with a secure, open, distributed data log called the Global Data Plane (GDP), which also includes a routing/communication platform alongside the log server.
• Instead of various proprietary IoT devices each maintaining their own data store in a siloed environment, GDP provides secure storage that can connect logs together with appropriate input/output permissions.
Personal Cloud for Smart Home/Enterprise Environments
• Our IoT gateway platform is based on GDP, which we have containerized.
• We use OpenStack-based Flotilla's framework on the cloud side to establish a dynamic VPN tunnel from the cloud gateway to the Swarm Box and deploy the containerized GDP IoT platform on it.
• We can then use Flotilla's dashboard to deploy other containerized apps on top of GDP running on the Swarm Box.
• The apps deployed on the Swarm Box can read/write logs either on the OpenStack-based cloud or on GDP's local distributed data store, to avoid the latency factor.
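The local-versus-cloud log choice can be sketched as a simple placement policy; the latency budget, threshold, and log names below are illustrative assumptions:

```python
# Toy placement policy: latency-sensitive apps write to the local GDP log on
# the Swarm Box, tolerant ones to the cloud log. The 50 ms threshold is an
# illustrative assumption, not a measured figure.

def choose_log(latency_budget_ms: float, local_threshold_ms: float = 50.0) -> str:
    """Pick the local Swarm Box log when the app cannot tolerate cloud round-trips."""
    if latency_budget_ms < local_threshold_ms:
        return "local-gdp-log"
    return "cloud-log"

print(choose_log(10))   # actuator control loop -> 'local-gdp-log'
print(choose_log(500))  # periodic sensor archive -> 'cloud-log'
```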
[Diagram: Sensors and actuators (WiFi, ZigBee, …) write to per-device logs (Sensor 1, Sensor 2, Actuator) on the Berkeley GDP log server running on the SwarmBox, with a cloud cache log in the Global Data Plane. Flotilla on OpenStack (cloud API, abstraction layer, branch API, DB, frontend) establishes a dynamic VPN tunnel to the Flotilla agent on the SwarmBox, where Kubernetes runs containerized apps pulled from Docker Hub; apps interact with the logs via subscription, transaction, and log read/write.]
Containerized GDP-based IoT Platform

[Diagram: Three planes on the Swarm Box: the application plane (containerized apps App1, App2), the Flotilla orchestration and management plane (Flotilla agent), and the Global Data Plane (GDP routers and logs: cloud cache, Sensor 1, Sensor 2, Actuator), fed by sensors and actuators over WiFi, ZigBee, etc.]
Roadmap
• Use Flotilla to deploy actual network functions in a trial environment of enterprise branches. Test Flotilla and GDP with different IoT applications in Smart Home and enterprise contexts.
• Add features such as remote monitoring & policing.
• Add algorithms to write logs into a local log or remote cloud servers depending on the application. Also cache content into local logs for faster retrieval by latency-sensitive applications, as in edge-computing scenarios.
• Continue research on containerization of network functions, trying unikernels to deploy network functions and finding solutions for the existing challenges.
THANK YOU!