
NTT SIC marketplace slide deck at Tokyo Summit


Page 1: NTT SIC marketplace slide deck at Tokyo Summit

OpenStack Summit Tokyo 2015

OpenStack New Features proposed by

1. Log Request ID mapping [Cross-Project: Nova, Cinder, Glance, Neutron and others]
2. Masakari: VMHA for OpenStack Compute
3. Unshelve performance improvement [Nova]
4. Availability Zone Support [Neutron]
5. Linuxbridge Distributed Virtual Router (DVR) [Neutron]
6. OPNFV Integration [Congress]
7. Enable New Agents [Neutron]

Page 2

Log Request ID mapping [Nova, Cinder, Glance, Neutron and others]

What it is:
It outputs its own request ID, together with the request ID received from another component in an API response, in a single log line.

Why it is important:
This function lets us track API calls between components easily, and it is crucial for automated log analysis. For example, creating a volume from an existing image (Glance) involves an API call from Cinder to Glance.

How it works:
It adds a function to the clients (python-*client) to obtain the request ID from another component's response. In the caller, it adds a function that outputs its own request ID and the request ID from the other component's response to the log in a single line.

Current status and Future plan:
The spec has been approved in the community (openstack-specs). We will implement it in each client (python-*client) first, and then implement the log outputs in each component.
Reference: https://review.openstack.org/#/c/156508

Page 3

Log Request ID mapping [Nova, Cinder, Glance, Neutron and others]

Current (stable/kilo):

cinder-volume (glanceclient) log:
2015-10-08 16:14:33.498 DEBUG glanceclient.common.http [req-7c08c16e-6e34-4480-a3b9-a14c01ab7c61 admin] curl -g -i -X HEAD -H 'Accept-Encoding: gzip, deflate' -H 'Accept: */*' -H 'User-Agent: python-glanceclient' -H 'Connection: keep-alive' -H 'X-Auth-Token: {SHA1}b2e64a18668afc935162441dd6af6a07b1f173ab' -H 'Content-Type: application/octet-stream' http://10.0.2.15:9292/v1/images/c95a9731-77c8-4da7-9139-fedd21e9756d log_curl_request /usr/local/lib/python2.7/dist-packages/glanceclient/common/http.py:123
2015-10-08 16:14:33.521 DEBUG glanceclient.common.http [req-7c08c16e-6e34-4480-a3b9-a14c01ab7c61 admin] HTTP/1.1 200 OK
content-length: 0
x-image-meta-status: active
( 10 lines omitted )
x-image-meta-property-kernel_id: 08dc38b9-7b14-4d96-a641-17faef0a7960
x-openstack-request-id: req-req-0bbacfda-ec83-4275-9b65-1e011f3a2923
( snipped… )
x-image-meta-disk_format: ami log_http_response /usr/local/lib/python2.7/dist-packages/glanceclient/common/http.py:136

glance-api log:
2015-10-08 16:14:33.502 11610 DEBUG oslo_policy.policy [req-0bbacfda-ec83-4275-9b65-1e011f3a2923 924515e485e846799215a0c9be9789cf 46e99ee00fd14957b9d75d997cbbbcd8 - - -] Reloaded policy file: /etc/glance/policy.json _load_policy_file /usr/local/lib/python2.7/dist-packages/oslo_policy/policy.py:425
( 2 lines omitted )
2015-10-08 16:14:33.520 11610 INFO eventlet.wsgi.server [req-0bbacfda-ec83-4275-9b65-1e011f3a2923 924515e485e846799215a0c9be9789cf 46e99ee00fd14957b9d75d997cbbbcd8

Our suggestion:

cinder-volume log:
2015-10-08 16:14:33.498 DEBUG cinder.volume.manager [req-7c08c16e-6e34-4480-a3b9-a14c01ab7c61 admin] image download from glance req-req-0bbacfda-ec83-4275-9b65-1e011f3a2923

The association between request IDs is output within one line. Unnecessary information (other response headers, etc.) is not output.
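The suggested one-line association could be produced with ordinary Python logging along these lines. This is a minimal sketch, not the proposed implementation; the function name and message format are made up for illustration, while `x-openstack-request-id` is the response header Glance already returns.

```python
import logging

logging.basicConfig(format="%(asctime)s %(levelname)s %(name)s %(message)s")
LOG = logging.getLogger("cinder.volume.manager")
LOG.setLevel(logging.DEBUG)

def log_cross_service_call(own_request_id, response_headers):
    """Log our own request ID and the callee's request ID in one line."""
    # The callee's request ID arrives in the 'x-openstack-request-id'
    # response header; combining both IDs in one line lets log analysis
    # tools join the two components' logs without correlating timestamps.
    callee_id = response_headers.get("x-openstack-request-id", "-")
    line = "[%s] image download from glance %s" % (own_request_id, callee_id)
    LOG.debug(line)
    return line

headers = {"x-openstack-request-id": "req-0bbacfda-ec83-4275-9b65-1e011f3a2923"}
line = log_cross_service_call("req-7c08c16e-6e34-4480-a3b9-a14c01ab7c61", headers)
```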

Page 4

Masakari: VMHA for OpenStack Compute

What it is:
It provides Virtual Machine High Availability (VMHA) for the "Pets" service model. It automatically recovers a VM instance in case of VM or hypervisor failure, to minimize downtime.

Why it is important:
Cloud-native applications handle high availability at their own layer. Sometimes, however, customers and/or applications still prefer the "Pets" service model.

How it works:
It monitors the status of VMs and KVM hosts with Pacemaker. It rescues VMs with the Nova API when an error occurs. No modification to OpenStack components is required.

Current status and Future plan:
It is published for the community under the Apache license on GitHub: https://github.com/ntt-sic/masakari. You can download the source code and try it.
Sponsor Demo: http://sched.co/4M84
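The recovery flow described above can be sketched as follows. This is an illustrative simulation only: `NovaClientStub` is a stand-in for the real Nova client, and the host/VM names are invented. In the real system, Pacemaker performs the failure detection and Nova's evacuate API performs the rebuild.

```python
# Sketch of a Masakari-style recovery flow: when the host monitor reports a
# compute host as failed, every instance on it is rebuilt on a healthy host.

class NovaClientStub:
    """Stand-in for the Nova API; tracks which host each VM runs on."""
    def __init__(self, placement):
        self.placement = placement  # vm name -> host name

    def servers_on_host(self, host):
        return [vm for vm, h in self.placement.items() if h == host]

    def evacuate(self, vm, target_host):
        # The real flow calls Nova's evacuate API to rebuild the VM elsewhere.
        self.placement[vm] = target_host

def recover_host_failure(nova, failed_host, healthy_hosts):
    """Rescue all VMs from a failed hypervisor onto healthy ones."""
    rescued = []
    for i, vm in enumerate(nova.servers_on_host(failed_host)):
        target = healthy_hosts[i % len(healthy_hosts)]  # naive round-robin
        nova.evacuate(vm, target)
        rescued.append((vm, target))
    return rescued

nova = NovaClientStub({"vm1": "host-a", "vm2": "host-a", "vm3": "host-b"})
rescued = recover_host_failure(nova, "host-a", ["host-b", "host-c"])
```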

Page 5

Masakari: VMHA for OpenStack Compute

[Architecture diagram: OpenStack API, Compute Nodes, Controller Nodes & Backend Nodes]

Page 6

Unshelve performance improvement [Nova]

What it is:
It speeds up unshelving (powering on) a VM by utilizing a VM image 'cache'.

Why it is important:
It solves the issue that unshelving (powering on) a VM takes a long time when the VM's image size is large.

How it works:
When shelving (powering off) a VM, it keeps the VM image in the instance store, controlled by configuration (the 'cache'). Assumption: compute nodes share their instance store. When unshelving (powering on) the VM, it boots the VM from the 'cache' instead of downloading the image from Glance.

Current status and Future plan:
We proposed a spec for the Mitaka release in the Nova community.
Reference: https://blueprints.launchpad.net/nova/+spec/improve-unshelve-performance
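The cache behaviour can be sketched with two dictionaries standing in for the Glance image store and the shared (NFS) instance store. All names are hypothetical, not Nova's actual internals; the point is only that a cache hit avoids the Glance download.

```python
# Illustrative sketch of cache-based unshelve, assuming compute nodes share
# one instance store (e.g. over NFS).

instance_store = {}   # shared across hypervisors; holds the 'cache'
glance_store = {}     # the image store

def shelve(vm_id, image_bytes, keep_cache=True):
    glance_store[vm_id] = image_bytes          # image-create + upload
    if keep_cache:
        instance_store[vm_id] = image_bytes    # keep the 'cache'

def unshelve(vm_id):
    downloads = 0
    if vm_id in instance_store:                # cache hit: boot directly
        image = instance_store[vm_id]
    else:                                      # current behaviour: download
        image = glance_store[vm_id]            # slow for large images
        downloads = 1
    return image, downloads

shelve("vm-1", b"big-image", keep_cache=True)
image, downloads = unshelve("vm-1")            # boots from cache
```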

Page 7

Unshelve performance improvement [Nova]

Current:
- Shelve (power off) on HV#1: (1) image-create (create metadata), (2) image upload (file upload) from the instance store to the image store.
- Unshelve (power on) on HV#2: (1) image download from the image store (* the issue), (2) VM boot, (3) image deletion.

Our suggestion:
- Shelve (power off) on HV#1: (1) image-create (create metadata), (2) image upload; the image also remains in the shared instance store.
- Unshelve (power on) on HV#2: (1) VM boot directly from the image kept in the shared instance store, (2) image deletion.

[Diagram: HV#1 and HV#2 sharing an instance store over NFS, alongside the image store]

Page 8

Availability Zone Support [Neutron]

What it is:
It enables network resources to achieve high availability across availability zones.

Why it is important:
When a failure occurs, it does not affect users' resources. It improves reliability.

How it works:
It adds an extension API and an availability-zone attribute to each resource.

Current status and Future plan:
It is under development and partially implemented (the API and DB updates are merged). Development for other use cases (Segment, Cell) still needs to be discussed.
Reference: https://blueprints.launchpad.net/neutron/+spec/add-availability-zone
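Once the extension lands, a client could request an AZ-aware router roughly as below. This is a sketch of the request body only (no live API call); `availability_zone_hints` is the attribute name proposed in the blueprint, and the zone names are invented.

```python
import json

def make_router_request(name, az_hints):
    # 'availability_zone_hints' tells Neutron which zones the resource's
    # agents should be scheduled into, e.g. for an L3-HA router pair.
    return {"router": {"name": name,
                       "availability_zone_hints": list(az_hints)}}

body = make_router_request("ha-router", ["AZ1", "AZ2"])
payload = json.dumps(body)
```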

[Diagram: Neutron's AZ 1 and AZ 2 each contain a compute node (server) and a network node; DHCP is active in both zones, while the router is active in AZ 1 and passive in AZ 2. The zones connect over the tunnel/VLAN network and the external network.]

L3-HA routers across AZs. Multiple DHCPs across AZs.

Page 9

Linuxbridge Distributed Virtual Router (DVR) [Neutron]

What it is:
It enables operators to use the DVR function with Linuxbridge.

Why it is important:
Some operators want to use Linuxbridge because it is more stable than Open vSwitch and its maintenance cost is lower.

How it works:
It achieves routing with ebtables.

Current status and Future plan:
We proposed a proof-of-concept implementation. Development for SNAT and DHCP still needs to be considered.
Reference: https://bugs.launchpad.net/neutron/+bug/1504039
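One way to read "routing with ebtables" is that frames addressed to the shared distributed-router MAC are rewritten so each compute node's local router instance handles them. The sketch below only builds such a rule as a command list and does not execute it; the MAC addresses are made up, and this is an interpretation of the PoC idea, not its actual rules.

```python
def dvr_dnat_rule(shared_router_mac, local_router_mac):
    """Build an ebtables nat-table rule that redirects frames destined to
    the shared router MAC toward the node-local router instance."""
    return ["ebtables", "-t", "nat", "-A", "PREROUTING",
            "-d", shared_router_mac,
            "-j", "dnat", "--to-destination", local_router_mac]

rule = dvr_dnat_rule("fa:16:3e:00:00:01", "fa:16:3e:aa:bb:cc")
# e.g. pass 'rule' to subprocess.run() on each compute node
```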

Page 10

Linuxbridge Distributed Virtual Router (DVR) [Neutron]

[Diagram: each compute node runs Linuxbridge with an external bridge and per-network bridges (Network A, Network B); a floating-IP router and a DVR router are attached to these bridges alongside the servers. The network node runs Linuxbridge hosting the SNAT router and the DHCP instances on the same bridges. All nodes connect to the tunnel/VLAN network and the external network.]

Achieve DVR with Linuxbridge.

Page 11

OPNFV Integration [Congress]

What it is:
It obtains a VM-to-host mapping, and notifies Ceilometer of the mapping when an error occurs.

Why it is important:
It enables operators to define different error conditions based on a company's policies.

How it works:
It detects a defined error and the VM-to-host mapping with the Nova API, and notifies Ceilometer of the mapping.

Current status and Future plan:
Allowing any datasource to push information into Congress (under discussion).
Etherpad link: https://wiki.openstack.org/wiki/Design_Summit/Mitaka/Etherpads#Congress
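The mapping-and-check step can be expressed in plain Python for illustration (Congress itself expresses policies in Datalog). The server list and the "faulty host" error condition below are invented examples of the company-defined policy the slide mentions.

```python
# Sketch: compute the VM-to-host mapping from Nova's server list and flag a
# violation when a VM sits on a host the policy marks as faulty.

servers = [                       # as returned by the Nova datasource
    {"id": "vm1", "host": "host-a"},
    {"id": "vm2", "host": "host-a"},
    {"id": "vm3", "host": "host-b"},
]
faulty_hosts = {"host-a"}         # the policy's error condition (example)

def vm_host_mapping(servers):
    return {s["id"]: s["host"] for s in servers}

def policy_violations(mapping, faulty_hosts):
    # Each violating (vm, host) pair would be notified to Ceilometer.
    return [(vm, host) for vm, host in mapping.items() if host in faulty_hosts]

violations = policy_violations(vm_host_mapping(servers), faulty_hosts)
```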

Page 12

OPNFV Integration [Congress]

[Diagram: Congress gets VM and host info from Nova (VM1–VM3 across two hosts), calculates the VM-to-host mapping and checks whether a policy violation exists, and notifies Ceilometer of the mapping when an error occurs. Another datasource can also be enabled to push data to Congress.]

Page 13

Enable New Agents [Neutron]

What it is:
It gives operators an opportunity to test and maintain a new node before putting it into service.

Why it is important:
Operators need to test a new node before users' resources are created on it.

How it works:
It adds a configuration option; when the option is set, agents on the node are not targeted for scheduling.

Current status and Future plan:
We implemented it, and it was released in Liberty.
Reference: https://blueprints.launchpad.net/neutron/+spec/enable-new-agents
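The scheduling effect of the option can be sketched as a simple filter: while a new node is in maintenance mode, its agents are excluded from scheduling, so only explicitly targeted administrator test resources land there. The data structures and field names below are illustrative, not Neutron's actual scheduler code.

```python
agents = [
    {"host": "net-node-1", "accepts_new_resources": True},
    {"host": "net-node-2", "accepts_new_resources": False},  # new node, under test
]

def schedulable_agents(agents):
    """Agents eligible to host newly created user resources."""
    return [a for a in agents if a["accepts_new_resources"]]

# Only the in-service node is a scheduling target for user resources.
targets = [a["host"] for a in schedulable_agents(agents)]
```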

Add a new node & test first:

[Diagram: an existing network node hosts the user's DHCP and router, serving a compute node's server over the tunnel/VLAN network; a new network node in maintenance mode hosts only the admin's DHCP and router. Both connect to the tunnel/VLAN network and the external network.]

When adding a new node, test the node with administrator resources before a customer's resource is deployed on it.