Virtualization-based Techniques for Enabling Multi-tenant Management Tools
Chang-Hao Tsai
Kang G. Shin
University of Michigan
Yaoping Ruan
Sambit Sahu
Anees Shaikh
IBM T. J. Watson Research Center
Typical Network Monitoring Infrastructure
[Diagram: an Agent inside the monitored Network reports to a Management Workstation backed by a DB]
Monitor Multiple Customers Using Typical Infrastructure
[Diagram: customer networks A, B, and C each have their own Agent, and the service provider runs a separate Management Workstation and DB for each customer]
Multi-Tenant Network Monitoring Infrastructure
[Diagram: Agents in customer networks A, B, and C all report to a single shared Management Workstation and DB]
Issues
- Significant re-design and re-implementation required
  - New authentication, authorization, and accounting system
  - Flexible configurations (customer-specific rules and preferences)
  - Scalability
  - Problematic for legacy software products
- A network management service isn't simply convertible
  - Firewall
  - Network address contention between customers
    - Private Internet addresses (10/8, 172.16/12, 192.168/16)
    - Wide use of NAT routers
  - Some functions need L2 network access (DHCP, BOOTP, …)
Goal: Make Single-tenant Tools Multi-tenant Capable
- Approach
  - Virtualization: create a container for each single-tenant instance
  - Consolidation: share common infrastructure
- How? Demonstrate how to make a single-tenant network management system multi-tenant capable
Example Tool: OpenNMS
- Open source with commercial support: www.opennms.org / www.opennms.com
- Java application
  - Front-end: Java Servlets, JSP
  - Database: PostgreSQL
- Primary functions
  - Device discovery
  - Service and performance monitoring
  - Event management
  - Asset management
Outline
- OpenNMS architecture and service model
- Approaches to enabling multi-tenancy
  - Virtualization-based back-end consolidation
  - Database sharing
  - Front-end consolidation
- Evaluation
  - Workload profile
  - Scalability
- Conclusion
OpenNMS Architecture
[Diagram: the OpenNMS main program runs in one JVM and the OpenNMS UI runs in a Tomcat JVM; both share a PostgreSQL database (nodes, services, events, outages, notifications, SNMP configuration, …) and response-time data in RRD files, and both reach into the monitored customer network]
OpenNMS Service Model
[Diagram: the network management service provider reaches the customer network over an L2VPN; OpenNMS runs in one JVM and the UI in a Tomcat JVM, with PostgreSQL and RRD files as back-end storage]
Back-end Consolidation
- Goal: minimum changes to the original system
- Requirements
  - Resource (memory, processes) isolation
  - Independent file system
  - Virtualized network layer
- Virtualization
  - Secure and private
  - Low overhead (Xen, OpenVZ)
  - Performance isolation
Database Sharing and Front-end Consolidation
- All instances use the same schema
- Database: one database server
  - Separate database user and database name per tenant
  - Database privileges for access control (see the sketch below)
- Front-end: one Tomcat server
  - Different paths for different instances
  - HTTP/S authentication
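The deck does not show the provisioning mechanics, so the following is only a minimal sketch, assuming a PostgreSQL server on localhost and illustrative role, database, and password names (none of it is taken from OpenNMS itself), of how separate users plus ordinary SQL privileges could enforce the per-tenant access control listed above:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// Hypothetical per-tenant provisioning: one role and one database per
// tenant, with the same schema in every database. Names are illustrative;
// "tenant" is assumed to be a validated identifier (never raw user
// input, since it is spliced into DDL statements).
public class TenantProvisioner {
    public static void provision(String tenant, String password) throws Exception {
        try (Connection admin = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/postgres", "postgres", "adminpw");
             Statement st = admin.createStatement()) {
            // Tenant-specific login role.
            st.executeUpdate("CREATE ROLE " + tenant + " LOGIN PASSWORD '" + password + "'");
            // Tenant-specific database owned by that role.
            st.executeUpdate("CREATE DATABASE " + tenant + "_opennms OWNER " + tenant);
            // Lock everyone else out: only the owner may connect.
            st.executeUpdate("REVOKE CONNECT ON DATABASE " + tenant + "_opennms FROM PUBLIC");
        }
    }
}
```

Each OpenNMS instance is then configured with its own JDBC URL and credentials, so the database server, not application code, keeps tenants apart.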
Multi-Tenancy Using Virtualization
[Diagram: on one host (Xen Dom 0), each customer 1…n gets its own VM running an OpenNMS instance in its own JVM, connected to that customer's network over a VPN; a single shared Tomcat JVM serves UI 1…n, and PostgreSQL and the RRD files are likewise shared in the host]
Evaluation
- Resource profiling
  - Bottleneck identification
  - Scalability with customer network size
  - Software configuration: JVM heap size
- Multi-tenant scalability
  - Baseline
  - Xen
  - OpenVZ
Experiment Setup
[Diagram: the same virtualized architecture as above, except that each VM's VPN terminates in an emulated customer network, with an Apache server answering for the monitored addresses (192.168.8.1…8.200, 9.200)]
- PC servers: Core 2 Duo E6600, 4 GB RAM, two 7,200 rpm HDDs, GbE
Resource Profile: Memory & CPU Usage
[Plots over 70 minutes: memory usage (MB) of OpenNMS, the OpenNMS JVM heap, and PostgreSQL; CPU utilization (%) in total and per process]
- Single tenant, monitoring 200 hosts
- Memory is the bottleneck resource (a heap-sampling sketch follows)
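The deck does not say how these curves were collected; as one plausible way to reproduce the JVM-heap series, here is a small sketch using the JDK's standard java.lang.management API (the one-minute sampling interval is an assumption matching the plot's time axis):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

// Samples the running JVM's heap once a minute, roughly the series
// labeled "OpenNMS JVM heap" above. Process-level memory for OpenNMS
// and PostgreSQL would come from the OS, not from this API.
public class HeapSampler {
    public static void main(String[] args) throws InterruptedException {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        while (true) {
            MemoryUsage heap = mem.getHeapMemoryUsage();
            System.out.printf("heap used=%d MB committed=%d MB%n",
                    heap.getUsed() >> 20, heap.getCommitted() >> 20);
            Thread.sleep(60_000);
        }
    }
}
```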
Scalability: 200–1000 Hosts
[Plots over 300 minutes: heap utilization after GC (MB) and CPU utilization (%) for 200, 400, 600, 800, and 1000 monitored hosts]
- About 2 MB of additional memory per 200 monitored hosts
- Minimal incremental cost
Impact of JVM Heap Size: 64–128 MB
[Plots: GC frequency (times/min) and time spent in GC (s) versus maximum heap size; CPU utilization (%) over 300 minutes for 64, 72, 80, 96, and 128 MB heaps]
- GC frequency decreases as the heap grows (both GC metrics can be read off the JVM, as sketched below)
- Live objects take up space and increase GC workload
- OpenNMS + OpenVPN take 144 MB to run
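As an illustration rather than the authors' actual instrumentation, both GC metrics plotted here are exposed directly by the JDK's GarbageCollectorMXBean; a minimal sketch:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

// Prints cumulative GC activity for every collector in this JVM.
// Sampling this periodically and differencing consecutive readings
// yields the GC frequency (times/min) and time spent in GC shown above.
public class GcStats {
    public static void report() {
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("%s: %d collections, %d ms total%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }
    }
}
```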
Baseline: Simple Consolidation
[Plot: UI response time (s) versus number of tenants (10-14), with and without RRD]
- Baseline: a complete installation in each VM
- RRD: disk I/O intensive
- Benchmark by scripting front-end activities (see the sketch below)
  - Front-end and database accesses
  - Dynamic web page generation (average response time)
  - Service discovery and monitoring accuracy
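The benchmark script itself is not included in the deck; a minimal sketch of the idea, assuming each tenant's UI sits under its own path on the shared Tomcat (the URL below is hypothetical):

```java
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

// Fetches a tenant's UI page repeatedly and reports the mean response
// time, mimicking the "dynamic web page generation" measurement above.
public class UiBenchmark {
    public static double averageResponseMs(String tenantUrl, int runs) throws Exception {
        long totalNanos = 0;
        for (int i = 0; i < runs; i++) {
            long start = System.nanoTime();
            HttpURLConnection conn =
                    (HttpURLConnection) new URL(tenantUrl).openConnection();
            try (InputStream in = conn.getInputStream()) {
                while (in.read() != -1) { /* drain the generated page */ }
            }
            totalNanos += System.nanoTime() - start;
        }
        return totalNanos / (runs * 1e6);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(averageResponseMs("http://localhost:8080/opennms-tenant1/", 20));
    }
}
```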
Multi-tenant Scalability
[Plots: UI response time (s) versus number of tenants for OpenVZ, Xen, and Xen with errors; left panel with RRD (15-19 tenants), right panel without RRD (15-22 tenants)]
- With RRD: 60% more tenants with Xen, 90% more with OpenVZ
- Without RRD: 58% more tenants with Xen, 83% more with OpenVZ
Future Work
- Java class sharing
  - Class definitions are duplicated, but the JVMs sit in different VMs
- Coordinating JVMs
  - JVMs in guest OSes are unaware of VM sizing
  - Dynamic JVM sizing
Conclusion
- An approach to enabling multi-tenant capability
  - Virtualize the base platform
  - Share supporting services
- Increased service density: 60-90% more tenants on a single platform
Thank you for your attention.
Any questions?
{chtsai,kgshin}@eecs.umich.edu
{yaopruan,sambits,aashaikh}@us.ibm.com