
REPORT

Systems and Virtualization Management: Standards and the Cloud (A report on SVM 2011)

Mark Carlson

Received: 31 January 2012 / Accepted: 2 February 2012 / Published online: 12 February 2012

© Springer Science+Business Media, LLC 2012

1 Introduction

The 5th International Distributed Management Task Force (DMTF) Academic

Alliance Workshop on Systems and Virtualization Management: Standards and the

Cloud (SVM 2011, http://dmtf.org/svm11) was held on October 24, 2011 in Paris,

France during the 7th International Conference on Network and Services Management (CNSM). SVM 2011 was organized by DMTF and coordinated with CNSM.

The primary theme of SVM 2011 was "Systems and Virtualization Management: Standards and the Cloud". With the advent and increasing popularity of Cloud

Computing, systems management and virtualization management technology has

taken on increasing importance. The goal of SVM 2011 was to illuminate related

standards and research issues, covering areas such as the implications of standards for virtualization in Cloud Computing, advances in information models and protocols that aid in managing Clouds, new problems that arise when managing Cloud offerings and services, and the ways management itself benefits from virtualization and the Cloud. Submissions on topics related to managing Clouds, the virtualization of distributed resources and services, and work in management standardization were presented.

2 Academic Paper Sessions

The main body of SVM 2011 consisted of nine paper presentations, one keynote

speech, one invited talk, one closing panel, and one poster session.

M. Carlson (&)

Portland, OR, USA

e-mail: [email protected]


J Netw Syst Manage (2012) 20:453–461

DOI 10.1007/s10922-012-9227-3

3 Keynote by Winston Bumpus

This session discussed the current work ongoing within the DMTF to address Cloud

Computing and its underlying virtualization technologies. This included the Cloud

Management Working Group’s recently released Cloud Infrastructure Management

Interface (CIMI) specifications (dmtf.org/cloud) and the ISO/IEC-adopted cloud workload standard, the DMTF Open Virtualization Format (OVF).

This session also looked at additional cloud work underway, including the Cloud

Software License Management and Cloud Auditing Data Federation. It briefly

discussed work going on within our Alliance Partner organizations. It pointed out the gaps and challenges on the road ahead that we, industry and the academic community alike, can work on together to achieve the vision of interoperable cloud computing.

4 Presentations

Slide show presentations from our workshop are available on our Web site

(http://dmtf.org/svm11/presentation). To read the abstracts for the workshop papers,

please visit the IEEE Xplore Digital Library (http://ieeexplore.ieee.org/xpl/mostRecentIssue.jsp?punumber=6093622).

Summaries of the papers are as follows:

Fernandez, Cordero, Somavilla, Rodriguez, Corchero, Tarrafeta, and Galan [1]

"Virtualization-based testbeds are nowadays widely used for the creation of the

network environments needed to test protocols and applications. Virtualization has

highly contributed to reduce the cost of testbeds setup, either in terms of hardware

resources needed or work effort. However, the complexity of present networks

creates the need for very complex testbeds made out of tens or even hundreds of

virtual machines, interconnected according to specific topologies." [10].

The authors presented a tool for the deployment and management of virtual

network scenarios over clusters of Linux servers. This tool, named VNX (Virtual

Networks over LinuX), allows the definition of virtual network scenarios using an

XML based language in terms of the virtual machines included, their characteristics

and the topology used to interconnect them. These scenarios are processed and

automatically deployed by the tool over one or more Linux servers.

VNX allows the user to control how the virtual scenarios are distributed over the

different cluster servers, using algorithms ranging from a simple round-robin to

complex user-defined ones. In addition, restriction rules can be defined to exert fine control over the distribution, for example to force a specific virtual machine onto a specific server, or to keep two heavily loaded virtual machines from sharing the same server.
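This kind of placement logic can be sketched in a few lines. The function below is only an illustration of round-robin distribution with pinning and anti-affinity restrictions; it is not VNX's actual code, and all names are invented.

```python
# Illustrative sketch: distribute VMs over cluster servers round-robin,
# honoring restriction rules (pin a VM to a host; keep two VMs apart).
def place_vms(vms, hosts, pinned=None, anti_affinity=None):
    """Assign each VM to a host.

    pinned: dict mapping a VM to its required host (takes precedence).
    anti_affinity: set of frozenset({vm_a, vm_b}) pairs that must not share a host.
    """
    pinned = pinned or {}
    anti_affinity = anti_affinity or set()
    placement = {}
    cursor = 0
    for vm in vms:
        if vm in pinned:
            placement[vm] = pinned[vm]
            continue
        # Try hosts starting at the round-robin cursor until one has no conflict.
        for attempt in range(len(hosts)):
            host = hosts[(cursor + attempt) % len(hosts)]
            conflict = any(
                frozenset({vm, other}) in anti_affinity
                for other, h in placement.items() if h == host
            )
            if not conflict:
                placement[vm] = host
                cursor = (cursor + attempt + 1) % len(hosts)
                break
        else:
            raise RuntimeError(f"no feasible host for {vm}")
    return placement

plan = place_vms(
    vms=["r1", "r2", "web", "db"],
    hosts=["serverA", "serverB"],
    pinned={"db": "serverB"},
    anti_affinity={frozenset({"r1", "r2"})},
)
print(plan)
```

A user-defined algorithm in VNX could play the role of the loop above, choosing hosts by load or other criteria instead of a simple cursor.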

VNX is based on two previous tools, VNUML and EDIV, which have been

extensively enhanced with new functionalities such as virtual machine auto-configuration (using an OVF-like approach), support for several virtualization hypervisors through the integration of the libvirt API, and the integration of router virtualization technologies. VNX is distributed as open source software (http://www.dit.upm.es/vnx).


After the presentation, several questions were asked about the tool, mainly

related to the auto-configuration approach used and its relationship with the OVF

standard activities in DMTF.

Yan, Sung Lee, Zhao, Ma, and Mohamed [2]

The paper ‘‘Infrastructure Management of Hybrid Cloud for Enterprise Users’’

from HP Labs was presented by Shixing Yan. "Cloud Computing has become more

and more prevalent over the past few years, and we have seen the emergence of

Infrastructure-as-a-Service (IaaS) which is the most acceptable Cloud Computing

service model. However, coupled with the opportunities and benefits brought by

IaaS, the adoption of IaaS also faces management complexity in the hybrid cloud

environment which enterprise users are mostly building up." [11]. A cloud management system named Monsoon, proposed in this paper, provides enterprise users with an interface and portal to manage cloud infrastructures from multiple public and private cloud service providers.

"To meet the requirements of the enterprise users, Monsoon has key components

such as user management, access control, reporting and analytic tools, corporate

account/role management, and a policy implementation engine. The Corporate

Account module supports enterprise users’ subscription and management of multi-

level accounts in a hybrid cloud which may consist of multiple public cloud service

providers and private clouds. The Policy Implementation module in Monsoon will

allow users to define the geography-based requirements, security level, government

regulations and corporate policies and enforce these policies to all the subscriptions

and deployments of a user's cloud infrastructure." [11]. This presentation and a demo video attracted great interest from the audience. Researchers from the Deltacloud and OpenNebula projects also discussed cloud management and interoperability with the presenter during the workshop.
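The policy enforcement idea described above can be sketched as a simple filter over candidate providers. The records, field names, and threshold below are invented for illustration and are not Monsoon's actual data model or API.

```python
# Hypothetical sketch of a policy check: a deployment is only allowed on
# providers that satisfy geography and security-level policies.
def allowed_providers(providers, policy):
    """Return the providers that satisfy the geography and security policy."""
    return [
        p for p in providers
        if p["region"] in policy["allowed_regions"]
        and p["security_level"] >= policy["min_security_level"]
    ]

providers = [
    {"name": "public-cloud-1", "region": "us", "security_level": 2},
    {"name": "public-cloud-2", "region": "eu", "security_level": 3},
    {"name": "private-dc",     "region": "eu", "security_level": 5},
]
policy = {"allowed_regions": {"eu"}, "min_security_level": 3}
print([p["name"] for p in allowed_providers(providers, policy)])
```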

Senneset Haaland, Hermann, Ulrich, Lara, Rohrich, and Kebschull [3]

In their daily business, cluster administrators depend on up-to-date information

about the cluster to be able to perform their job efficiently. For smaller or very static

systems, a manually updated database might suffice, but the bigger and more

dynamic a system is, the more time will be spent by system administrators to gather

and to keep information up to date. With manual input, the human element is also a

common source of errors that can lead to inconsistencies in the database.

A system that can automatically collect and keep inventory information up to

date would free the administrator from these tedious tasks, reduce the cost of

operating the cluster and improve the consistency of the inventory information.

The Common Information Model (CIM) offers a detailed, extensible, and object-oriented model of the hardware and software of computer systems. Typical CIM server implementations aim at integration into the Web-Based Enterprise Management (WBEM) architecture, where instances are meant to be handled by providers.

The integrity mechanisms of their instance data storage are not as elaborate as in

common database systems. In particular, distributed and concurrent access without

transaction support is an issue.

This paper introduced inventory software for the automatic gathering and

persistent storage of device information in a compute cluster. The internal object

storage is realized by Object Relational Mapping of the Common Information


Model. Automated generation of code and database schema provides a flexible model and intuitive access, and supports data integrity. Two implementations have been

developed to support database access using different programming languages.
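The core idea of generating a database schema from a model class can be sketched in a few lines. The class description below is a simplified stand-in for a CIM class, not the paper's actual code or the full CIM schema.

```python
# Sketch of Object Relational Mapping driven by a class description:
# generate a table schema from the description and persist instances.
import sqlite3

# Simplified stand-in for a CIM class definition (names are illustrative).
CIM_CLASS = {
    "name": "CIM_PhysicalMemory",
    "properties": {"Tag": "TEXT", "Capacity": "INTEGER", "SerialNumber": "TEXT"},
}

def create_table(conn, cim_class):
    """Generate a table schema from the class's property list."""
    cols = ", ".join(f"{p} {t}" for p, t in cim_class["properties"].items())
    conn.execute(f"CREATE TABLE {cim_class['name']} ({cols})")

def insert_instance(conn, cim_class, instance):
    """Persist one instance, ordering values by the generated columns."""
    cols = ", ".join(cim_class["properties"])
    marks = ", ".join("?" for _ in cim_class["properties"])
    conn.execute(
        f"INSERT INTO {cim_class['name']} ({cols}) VALUES ({marks})",
        [instance[p] for p in cim_class["properties"]],
    )

conn = sqlite3.connect(":memory:")
create_table(conn, CIM_CLASS)
insert_instance(conn, CIM_CLASS,
                {"Tag": "DIMM0", "Capacity": 8589934592, "SerialNumber": "SN123"})
rows = conn.execute("SELECT Tag, Capacity FROM CIM_PhysicalMemory").fetchall()
print(rows)
```

In the paper's approach the class descriptions come from the CIM model itself, so the schema stays consistent with the model as it evolves.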

Toueir, Broisin, and Sibilla [4]

This paper proposed a mathematical extension of a CIM Metric Model, in order

to measure or to calculate automatically the mathematical metrics, without the need

to add or develop additional components in the monitoring entity.

The context of this study is reconfigurable systems, particularly SOA as an

experimental supervised environment. Therefore, we will enhance SOA by adding

some Management & QoS tasks into the Service Broker to reconfigure its functional

components (services), so-called Functional Reconfiguration.

Besides the Functional Reconfiguration (which could be based on the knowledge

built by monitoring), we try to make the monitoring itself reconfigurable, so-called

Monitoring Reconfiguration.

Our approach is based on the WBEM architecture, which gives the possibility to

realize the underlying monitoring process for Management & QoS purposes.

Basically, we adopted centralized monitoring performed by the Service Broker, mediating the Service Providers and the Service Clients.

This paper classifies metrics into two main categories: Elementary and Composite Metrics. On the one hand, Elementary Metrics are divided into two subcategories:

(1) Resource Metrics which are directly pollable from the remote agents, and (2)

Measurable Metrics which are internally calculated/measured by the Service Broker

based on particular logic. On the other hand, the Composite Metrics comprise the Mathematical Metrics, which are internally calculated by the Service Broker based on a formula synthesized from Elementary Metrics and mathematical functions.
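The taxonomy can be illustrated with a small sketch in which composite metrics are formulas over elementary ones. All metric names and values here are invented for the example.

```python
# Illustrative sketch of the metric taxonomy: elementary metrics are either
# polled from remote agents (Resource) or measured by the broker (Measurable);
# a composite Mathematical Metric is a formula over elementary metrics.
elementary = {
    "response_time_ms": 40.0,   # Measurable: computed internally by the broker
    "requests": 2000.0,         # Resource: polled from a remote agent
    "errors": 20.0,             # Resource: polled from a remote agent
}

composite = {
    # Mathematical metrics synthesized from elementary metrics and functions.
    "error_rate": lambda m: m["errors"] / m["requests"],
    "throughput_per_ms": lambda m: m["requests"] / m["response_time_ms"],
}

def evaluate(name):
    """Calculate a composite metric from the current elementary values."""
    return composite[name](elementary)

print(evaluate("error_rate"))         # 0.01
print(evaluate("throughput_per_ms"))  # 50.0
```

The point of the paper's extension is that such formulas live in the model itself, so the monitoring entity can evaluate them without extra components.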

Van Der Ham, Papagianni, Steger, Matray, Kryftis, Grosso, and Lymberopoulos [5]

This paper proposed an Information Model and its related Data Models

describing concepts of virtual resources and services within a federation of

heterogeneous virtualized infrastructures. "Our basic assumption is that semantic

and context-awareness, in the form of Semantic Web descriptions, better support

services in federated platforms. [We] build upon our experiences from the

development of two ontologies for computer networks and for network monitoring,

NDL and MOMENT, to support and guide the development of the Information

Model described in this paper." [12]. The requirements of our envisaged

Information and Data models are defined within the scope of the EC FP7 project

NOVI for federating virtualized infrastructures, using PlanetLab and FEDERICA as

two examples which may be members of a Future Internet federated environment.

However, the Information and Data Models presented in this paper are designed to

be generic so that they can be used by other infrastructures within a federation.

The paper identified the requirements of an Information Model for federating

virtualized infrastructures. We then positioned the role of the Information Model in

NOVI’s federated architecture and described a concrete use-case to highlight the

role of the envisaged Information Model, together with the various services and

functionality it needs to support. Next, we provided an overview of existing

information models which provide a starting point for the definition of the NOVI

information model. The paper concluded with an overview of the work we plan to


carry out in order to define and to implement the NOVI Information and Data

Models.

Dawoud, Takouna, and Meinel [6]

"The rapid growth of E-Business and the frequent changes in sites' contents pose the need for rapid and dynamic scaling of resources." [13]. Elasticity is one of the

distinguishing characteristics associated with the emergence of Cloud computing. It enables cloud resources to auto-scale to cope with workload demand. "However,

current implementation of the scalability in the cloud uses the Virtual Machine

(coarse-grained) as a scaling unit, which often leads to over-provisioning of

resources. Hence, we propose an Elastic VM (fine-grained) scaling architecture." [13]. It implements scalability at the VM resource level. So, instead of

scaling-out dynamically by running more VM instances, our architecture scales-up

the VM’s resources themselves (e.g., number of cores and memory size) to cope

with the workload demand. A theoretical comparison between Elastic VM and the

current multi-instance scaling architectures (e.g., Amazon EC2 and GoGrid) shows that the Elastic VM scaling architecture is better able to maintain QoS metrics.

Moreover, for a practical comparison between the current multi-instance scalability architecture in the cloud and the Elastic VM scaling architecture, we implemented local equivalents of Amazon Elastic Load Balancing and Amazon Auto Scaling. Experimental results show that the Elastic VM scaling architecture is able to reduce scaling overhead, maintain high throughput, mitigate Service Level Objective (SLO) violations, and extend simple scalability to a broader range of applications, including databases.
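The contrast between the two scaling models can be sketched as follows. The thresholds and resource steps are invented for illustration; this is not the paper's implementation.

```python
# Sketch contrasting coarse-grained scale-out (add VM instances) with
# fine-grained scale-up (grow one VM's own resources, e.g. its core count).
def scale_out(instances, utilization, threshold=0.8):
    """Multi-instance model: launch another VM when utilization is high."""
    return instances + 1 if utilization > threshold else instances

def scale_up(vm, utilization, threshold=0.8, max_cores=16):
    """Elastic VM model: add a core to the same VM when utilization is high."""
    if utilization > threshold and vm["cores"] < max_cores:
        vm = {**vm, "cores": vm["cores"] + 1}
    return vm

vm = {"cores": 2, "memory_gb": 4}
print(scale_out(instances=2, utilization=0.9))  # one more instance
print(scale_up(vm, utilization=0.9))            # same VM, one more core
```

The scale-up path avoids booting and load-balancing a whole new instance, which is the source of the reduced scaling overhead the authors report.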

Danciu, Gentschen Felde, Kasch, and Metzker [7]

Host virtualization has increased the number of management attribute classes and instances, and introduces an additional degree of heterogeneity due to different

hypervisor products coupled with multiple guest operating systems. The authors

contributed a paper on the classification of attribute matching patterns and proposed

a methodology for the systematic harmonization of management attributes,

implemented as an extension to the libvirt library. The presentation was followed

up by a fairly lively discussion: while the handling of attributes is a topic well addressed in the literature, the multitude of elements introduced by virtualization exacerbates the issue of heterogeneity once again.

Hlavacs and Treutner [8]

"The ability to live migrate virtual machines (VMs) between physical servers

without any perceivable service interruption is pivotal for building more energy

efficient Cloud Computing infrastructures in the future. Nevertheless, energy

efficiency is not worth the effort if quality metrics (e.g., QoS, QoE) are severely

decreased by, e.g., dynamic consolidation using live migration. We identify the

most significant utilization metrics to predict the service level during live migrations

for a web server scenario. We show important correlations, give reasons and draw

conclusions for systems using live migration for yielding higher energy efficiency.

We also give reasons for extending the current hypervisors’ capabilities regarding

VM utilization collection and reporting. We present the effects of live migration on

service levels for different workload scenarios. In particular, we demonstrate that

live migration should be done preventively. This anticipates disproportional high


service level degradation due to live migration. We examine the most important

utilization metrics for predicting the service level by both stepwise and exhaustive

regression." [14].

As a result, we can explain 90% of the service level variance during live migration

with a single variable, the UNIX load average, which gives information about

queueing issues within the VM. Using more variables yields 95%. As a consequence,

systems using live migration as a mechanism to realize a more energy efficient target

distribution need to consider the UNIX load average, if service levels during live

migrations are important. Gathering load information of VMs currently needs to be

done by VM introspection, as typical hypervisors do not collect and export this

information. There are related efforts by qemu-kvm and libvirt developers to pass through the VMs' memory utilization, as the amount of free memory cannot be

reliably observed directly by the hypervisor. Therefore, we recommend that such

efforts should be extended to additionally export load information. Further utilization

metrics can be collected too, but for the described scenario, the UNIX load average is

the most important one and should not be disregarded.
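As an illustration of fitting service level against a single utilization variable, a least-squares fit over fabricated data points might look like the following. The measurements and coefficients here are invented; the paper's real data differ.

```python
# Sketch of a single-variable regression of service level on the UNIX load
# average, the predictor the authors found most important.
def linear_fit(xs, ys):
    """Ordinary least squares fit y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

load_avg      = [0.5, 1.0, 2.0, 4.0]      # UNIX load average inside the VM
service_level = [0.99, 0.97, 0.93, 0.85]  # fraction of requests within SLO

a, b = linear_fit(load_avg, service_level)
print(round(a, 3), round(b, 3))  # -0.04 1.01
predicted = a * 3.0 + b          # predicted service level at load average 3.0
```

A migration controller could evaluate such a fit before migrating, and migrate preventively while the predicted service level is still acceptable.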

Kretzschmar and Golling [9]

"The erosion of trust boundaries already happening in organizations is amplified

and accelerated by Cloud computing. One of the most important security challenges

is to manage and to assure a secure Cloud usage over multi-provider Inter-Cloud

environments with dedicated communication infrastructures, security services,

processes and policies. This paper focuses on the identification of functions within

future Inter-Cloud environments that belongs to the Cloud Security Management

functional spectrum. Therefore, we describe all identified functional aspects and

necessary objects in order to define a platform independent Security Management

Spectrum for Inter-Cloud (SMSIC).

SMSIC will assist Cloud providers to analyze the necessary further development

for their security management systems in order to support future Inter-Cloud

environments characterized by use cases like Hot-Standby and migration of external

Cloud services. Examples based on the Dropbox-Amazon S3 Cloud offering were

given during presentation. In addition, the better comprehension of the security

management spectrum from a functional perspective will enable the Cloud provider

community to design more efficient portals and gateways between Inter-Cloud

providers itself respective their customer, and facilitate the adoption of these results

in scientific and standardization environments." [15].

4.1 Invited Talk by Ignacio Llorente

The OpenNebula open-source platform is the result of 6 years of research and

development in efficient and scalable management of virtual machines on large-

scale distributed infrastructures in close collaboration with an active and engaged

community and the main cloud computing players. The presentation gave a practical

overview of cloud interoperability and portability, from the perspective of an open-

source project for cloud enabling a Data Center, and the major aspects to consider in

order to achieve complete interoperability in the near future. This comprises not

only portable data formats and standard interfaces for virtual workloads or data


elements, but additionally common rules and internationally recognized standards

for security and service quality.

The presentation elaborated on the support for interoperability and portability in

OpenNebula and its rich ecosystem of third-party components implementing

standard specifications and adaptors. At the private cloud level, OpenNebula

enables adaptability, recognizing that our users have data-centers composed of

different hardware and software components for security, virtualization, storage,

and networking. Its open architecture, interfaces, and components provide the

flexibility and extensibility that many enterprise IT shops need for internal cloud

adoption. OpenNebula supports most common hypervisors, such as KVM, VMware

and Xen, and the ecosystem includes experimental adaptors for OpenVZ,

VirtualBox, XCP, and Hyper-V. At the public cloud level, OpenNebula implements the

AWS interface, and its ecosystem includes implementations of VMware vCloud,

OGF OCCI, SNIA CDMI and DMTF OVF, and adaptors for Libcloud and

Deltacloud. At the hybrid cloud level, OpenNebula supports the combination of local

private infrastructure with Amazon EC2 cloud resources, and any major cloud

provider through an experimental Deltacloud adaptor available in the ecosystem.

5 Panel Session

The panel of speakers at the end of SVM is an annual tradition. The audience asked

questions about common threads throughout the papers, which were well answered

by the panel.

As always, the room also brainstormed about new areas of research in cloud computing and suggested refinements of the research that was presented. Feedback on SVM in general was very positive, and everyone agreed that SVM should continue going forward.

6 Poster Session

The poster session was a new addition to the SVM program. Eleven posters were

accepted for presentation at SVM 2011, and nine were actually presented.

The poster session was held during the welcome reception for the International

Conference on Network and Services Management (CNSM). The posters were on

display at the entrance to the banquet hall. At least one presenter was available for

each poster to answer questions and go into detail about the information on display.

For a copy of the posters, or to read the abstracts submitted with each poster,

please visit our Web site: http://dmtf.org/svm11/poster_presentation.

7 Upcoming SVM Workshop

For the third year in a row, SVM will be co-located with the International

Conference on Network and Services Management (CNSM). The 6th International


DMTF Academic Alliance Workshop on Systems and Virtualization Management: Standards and the Cloud will be held in October 2012 in Las Vegas, Nevada, USA. Information about our upcoming workshop can be found at http://www.dmtf.org/svm12.

We expect a report on SVM 2012 will appear sometime after the event in

October.

Acknowledgments The author would like to thank all SVM 2011 organizing committee members for their dedication and continuous efforts to make this workshop a success. Thanks are also due to the

City of Paris, and the 7th International Conference on Network and Services Management (CNSM) for

their support. Our special thanks are extended to all the volunteers of the workshop.

References

1. Fernandez, D., Cordero, A., Somavilla, J., Rodriguez, J., Corchero, A., Tarrafeta, L., Galan, F.:

Distributed virtual scenarios over multi-host Linux environments. In: 2011 5th International DMTF

Academic Alliance Workshop on Systems and Virtualization Management (SVM 2011). Institute of

Electrical and Electronics Engineers, 22 Dec 2011

2. Yan, S., Sung Lee, B., Zhao, G., Ma, D., Mohamed, P.: Infrastructure management of hybrid cloud

for enterprise users. In: 2011 5th International DMTF Academic Alliance Workshop on Systems and

Virtualization Management (SVM 2011). Institute of Electrical and Electronics Engineers, 22 Dec

2011

3. Senneset Haaland, Ø., Hermann, M., Ulrich, J., Lara, C., Rohrich, D., Kebschull, U.: Realization of

inventory databases and object relational mapping for the common information model. In: 2011 5th

International DMTF Academic Alliance Workshop on Systems and Virtualization Management

(SVM 2011). Institute of Electrical and Electronics Engineers, 22 Dec 2011

4. Toueir, A., Broisin, J., Sibilla, M.: Toward configurable performance monitoring: introduction to mathematical support for metric representation and instrumentation of the CIM metric model. In: 2011 5th International DMTF Academic Alliance Workshop on Systems and Virtualization Management (SVM 2011). Institute of Electrical and Electronics Engineers, 22 Dec 2011

5. Van Der Ham, J., Papagianni, C., Steger, J., Matray, P., Kryftis, Y., Grosso, P., Lymberopoulos, L.:

Challenges of an information model for federating virtualized infrastructures. In: 2011 5th International DMTF Academic Alliance Workshop on Systems and Virtualization Management (SVM 2011). Institute of Electrical and Electronics Engineers, 22 Dec 2011

6. Dawoud, W., Takouna, I., Meinel, C.: Elastic VM for rapid and optimum virtualized resources

allocation. In: 2011 5th International DMTF Academic Alliance Workshop on Systems and Virtualization Management (SVM 2011). Institute of Electrical and Electronics Engineers, 22 Dec 2011

7. Danciu, V., Gentschen Felde, N., Kasch, M., Metzker, M.: Bottom-up harmonisation of management

attributes describing hypervisors and virtual machines. In: 2011 5th International DMTF Academic

Alliance Workshop on Systems and Virtualization Management (SVM 2011). Institute of Electrical

and Electronics Engineers, 22 Dec 2011

8. Hlavacs, H., Treutner, T.: Predicting web service levels during VM live migrations. In: 2011 5th

International DMTF Academic Alliance Workshop on Systems and Virtualization Management

(SVM 2011). Institute of Electrical and Electronics Engineers, 22 Dec 2011

9. Kretzschmar, M., Golling, M.: The security management spectrum in multi-provider inter-cloud

environments. In: 2011 5th International DMTF Academic Alliance Workshop on Systems and

Virtualization Management (SVM 2011). Institute of Electrical and Electronics Engineers, 22 Dec

2011

10. Fernandez, D., Cordero, A., Somavilla, J., Rodriguez, J., Corchero, A., Tarrafeta, L., Galan, F.:

Distributed virtual scenarios over multi-host Linux environments. In: 2011 5th International DMTF

Academic Alliance Workshop on Systems and Virtualization Management (SVM 2011). IEEE Xplore Digital Library, 22 Dec 2011

11. Yan, S., Sung Lee, B., Zhao, G., Ma, D., Mohamed, P.: Infrastructure management of hybrid cloud

for enterprise users. In: 2011 5th International DMTF Academic Alliance Workshop on Systems and

Virtualization Management (SVM 2011). IEEE Xplore Digital Library, 22 Dec 2011


12. Van Der Ham, J., Papagianni, C., Steger, J., Matray, P., Kryftis, Y., Grosso, P., Lymberopoulos, L.: Challenges of an information model for federating virtualized infrastructures. In: 2011 5th International DMTF Academic Alliance Workshop on Systems and Virtualization Management (SVM 2011). IEEE Xplore Digital Library, 22 Dec 2011

13. Dawoud, W., Takouna, I., Meinel, C.: Elastic VM for rapid and optimum virtualized resources

allocation. In: 2011 5th International DMTF Academic Alliance Workshop on Systems and Virtualization Management (SVM 2011). IEEE Xplore Digital Library, 22 Dec 2011

14. Hlavacs, H., Treutner, T.: Predicting web service levels during VM live migrations. In: 2011 5th

International DMTF Academic Alliance Workshop on Systems and Virtualization Management

(SVM 2011). IEEE Xplore Digital Library, 22 Dec 2011

15. Kretzschmar, M., Golling, M.: The security management spectrum in multi-provider inter-cloud

environments. In: 2011 5th International DMTF Academic Alliance Workshop on Systems and

Virtualization Management (SVM 2011). IEEE Xplore Digital Library, 22 Dec 2011

Author Biography

Mark Carlson, Principal Cloud Strategist at Oracle, has more than 30 years of experience with networking and storage development and more than 15 years of experience with Java technology. He has

spoken at numerous industry forums and events. He is the chair of the SNIA Cloud Storage, NDMP and

XAM SDK technical working groups, chairs the DMTF Policy working group, serves on the SNIA

Technical Council, represents Oracle on the DMTF Technical Committee, and serves as DMTF VP of

Alliances.
