
(19) United States
(12) Patent Application Publication    (10) Pub. No.: US 2014/0068045 A1
TARUI et al.                           (43) Pub. Date: Mar. 6, 2014

(54) NETWORK SYSTEM AND VIRTUAL NODE MIGRATION METHOD

(71) Applicant: Hitachi, Ltd., Tokyo (JP)

(72) Inventors: Toshiaki TARUI, Sagamihara (JP); Yasusi KANADA, Tokyo (JP); Yasushi KASUGAI, Ninomiya (JP)

(73) Assignee: Hitachi, Ltd., Tokyo (JP)

(21) Appl. No.: 13/961,209

(22) Filed: Aug. 7, 2013

(30) Foreign Application Priority Data
     Aug. 29, 2012 (JP) ................ 2012-188316

Publication Classification

(51) Int. Cl.  H04L 12/24 (2006.01)
(52) U.S. Cl.  CPC H04L 41/02 (2013.01); USPC 709/223

(57) ABSTRACT

A disclosed example is a network system including physical nodes. The network system provides a virtual network system including virtual nodes allocated computer resources of the physical nodes. In a case where the network system performs migration of a first virtual node for executing service using computer resources of a first physical node to a second physical node, the network system creates the communication paths for connecting the second physical node and the neighboring physical nodes on the physical links, starts the service executed by the first virtual node using the computer resources secured by the second physical node, and switches communication paths to the created communication paths for switching the virtual links.

[Representative drawing: physical network 10 hosting virtual network 1 (slice 1) and virtual network 2 (slice 2).]

[Drawing sheets 1 to 20 (FIGS. 1 through 16B) are not reproduced in this text version; the recoverable figure captions and labels are summarized below.]

FIG. 1: network system configuration — a physical network 10 carrying virtual network 1 (slice 1) and virtual network 2 (slice 2), with virtual nodes A, B, and C.
FIG. 2: configuration of the virtual network (slice) in Embodiment 1.
FIG. 3: configuration of the physical network in Embodiment 1 — the domain management server 300 (CPU 310, primary storage device 320 holding the domain management unit 321, mapping information 322, and virtual node management information 323, secondary storage device 330, NIC 340) and physical nodes A to D with node management units and link management units, connected by physical links 400 and management links 500.
FIG. 4: example of the mapping information 322 (columns 710, 720, 730; e.g., virtual node C / physical node C / VM C).
FIG. 5: example of the virtual node management information 323 for virtual node C (attribute 810; virtual links 250-1 and 250-2 to virtual nodes A and B).
FIG. 6: configuration of a physical node (physical node C) — servers 900 (CPU 910, primary storage device 920 holding the node management unit 931 or the virtualization management unit 932, secondary storage device 940), in-node switch 1000, and GRE converter 1100 holding path configuration information 1110.
FIGS. 7A and 7B: packet formats — data packet 1200 (GRE header 1201, packet type 1202, virtual network packet 1203) and control packet 1210 (GRE header 1211, packet type 1212, control information 1213).
FIG. 8: path configuration information 1110 of physical node A (communication direction 1310, communication availability 1320).
FIGS. 9A and 9B: sequence diagrams of the migration in Embodiment 1 among physical nodes A to D and the domain management server 300 (Steps S101 to S132).
FIGS. 10A to 10C: states within the domain 15 during the migration in Embodiment 1.
FIGS. 11A and 11B: path configuration information of physical node A before and after link switching.
FIGS. 12A and 12B: connection states of communication paths in the GRE converter 1100.
FIG. 13: physical network configuration in Embodiment 2 — domains A and B (15-1, 15-2) with domain management servers A and B (300-1, 300-2).
FIGS. 14A and 14B: sequence diagrams of the migration in Embodiment 2 (Steps S101 to S132).
FIGS. 15A to 15C: states within the domains during the migration in Embodiment 2.
FIGS. 16A and 16B: sequence diagrams of the migration in Embodiment 3 (Steps S201 to S205 and S102 to S132).


NETWORK SYSTEM AND VIRTUAL NODE MIGRATION METHOD

CLAIM OF PRIORITY

0001. The present application claims priority from Japanese patent application JP 2012-188316 filed on Aug. 29, 2012, the content of which is hereby incorporated by reference into this application.

BACKGROUND

0002. This invention relates to a method for migration of a virtual node in a virtual network.

0003. In recent years, various services, such as Internet services, telephone services, mobile services, and enterprise network services, are provided via networks. To create networks for such different services and to provide the functions required for the services, virtual network technology is employed that creates a plurality of virtual networks (slices) on a physical network.

0004. In order to create a virtual network, the nodes forming the physical network to be the infrastructure are required to have a function to perform processing specific to the virtual network.

0005. Since this function is different depending on the slice, it is common to implement the function by executing a program (a program for a general-purpose server or a network processor).

0006. In the virtual network technology, the virtual network configuration is separated from the physical network configuration. Accordingly, a node (virtual node) for a virtual network can be allocated to any physical node if the computer resources (such as a CPU, a memory, and network bandwidth) and performance (such as network latency) required for the virtual node can be secured. The same applies to a link for a virtual network; a virtual link can be freely configured with physical links.

0007. A virtual node can be created with designation of a specific physical node and physical links based on a demand of the administrator of the virtual network.

0008. In the meanwhile, the virtual network technology requires that the addresses and the packet configuration in the virtual network do not affect those in the physical network.

0009. For this purpose, it is required to separate the virtual network from the physical network using a VLAN and to separate packets in the virtual network from packets in the physical network by encapsulating the packets using GRE and Mac-in-Mac.

0010. The encapsulation enables virtual network communication in a free packet format, which does not depend on the existing IP communication.

0011. To create a virtual network covering a wide area, the virtual network may have to be created from networks under different management systems. For example, a virtual network may be created from networks of different communication providers or networks in a plurality of countries.

0012. In the following description, a management unit for networks within the physical networks is referred to as a domain, and creating a virtual network ranging over a plurality of domains is referred to as federation.

0013. Federation means creating a virtual network demanded by the administrator of the virtual network and providing service under cooperation of the management servers of a plurality of domains, as in the case of a single domain.


0014. As described above, virtual nodes can be freely allocated to physical nodes; however, they may need to be reallocated for some reason. In other words, a demand for migration of a virtual node is generated.

0015. For example, in the case of increasing the amount of computer resources allocated to a virtual node, if the physical node does not have enough computer resources, the virtual node needs to be transferred to another physical node having a sufficient amount of computer resources. Besides, the destination physical node should be close to the source node in the network.

0016. The migration of a virtual node is desirably seamless in the virtual network, which means the physical node allocated the virtual node should be changed without changing the configuration of the virtual network.

0017. Furthermore, the service of the virtual network should be kept provided during the execution of migration. That is to say, migration of a node should be completed without interruption of the service when seen from the service users of the virtual network. Some techniques for live migration of a virtual machine (VM) between servers have been commercialized; however, they generate a very short interruption (about 0.5 seconds) of operation of the VM when transferring the VM. In application of such a technique to a node of a virtual network, this interruption of network communication is unacceptable. Accordingly, migration of a virtual node should be achieved without using VM live migration.

SUMMARY

0018. Pisa, P., and seven others, "OpenFlow and Xen-Based Virtual Network Migration", Wireless in Developing Countries and Networks of the Future, volume 327 of IFIP Advances in Information and Communication Technology, Springer Boston, pp. 170-181, discloses, in FIG. 3, a migration method in a virtual network configured with OpenFlow switches. To keep communication in the virtual network during the migration, the OpenFlow switches where a flow (in one direction) goes through are configured in accordance with the following three steps to perform migration:

0019. (1) Add the definition of the flow to go through a new node to the newly added node and to the node where the flow from the new node meets the existing path;

0020. (2) Change the definition of the flow into the definition of the new flow in the node where the existing path branches to the new node; and

0021. (3) Delete the definition of the flow in the old node where the flow no longer goes through.

0022. During transmission of a flow going through the OpenFlow switches, the foregoing step (2) that changes the path information enables the flow to go along a new path without interruption of transmission.

0023. However, this existing technique is based on the condition that the virtual nodes are allocated to OpenFlow switches. Accordingly, it is difficult to apply this existing technique to virtual nodes implemented by a program running on a general-purpose server or a network processor.

0024. Furthermore, in this existing technique, the OpenFlow switches are controlled by a single controller, which means this technique is based on a single-domain network. Accordingly, it cannot be applied to migration between domains.

0025. This invention has been accomplished in view of the foregoing problems. That is to say, an object of this invention is to provide a network system that, in a virtual network ranging over a plurality of domains, allows migration that changes the allocation of a virtual node quickly and without interruption of the service being executed by the virtual node.


0026. An aspect of this invention is a network system including physical nodes having computer resources. The physical nodes are connected to one another via physical links. The network system provides a virtual network system including virtual nodes allocated computer resources of the physical nodes to execute predetermined service. The network system includes: a network management unit for managing the virtual nodes; at least one node management unit for managing the physical nodes; and at least one link management unit for managing connections of the physical links connecting the physical nodes and connections of virtual links connecting the virtual nodes. The network management unit holds mapping information indicating correspondence relations between the virtual nodes and the physical nodes allocating the computer resources to the virtual nodes, and virtual node management information for managing the virtual links. The at least one link management unit holds path configuration information for managing connection states of the virtual links. In a case where the network system performs migration of a first virtual node for executing service using computer resources of a first physical node to a second physical node, the network management unit sends the second physical node an instruction to secure computer resources to be allocated to the first virtual node. The network management unit identifies neighboring physical nodes allocating computer resources to neighboring virtual nodes connected to the first virtual node via virtual links in the virtual network. The network management unit sends the at least one link management unit an instruction to create communication paths for implementing virtual links for connecting the first virtual node and the neighboring virtual nodes on physical links connecting the second physical node and the neighboring physical nodes. The at least one link management unit creates the communication paths for connecting the second physical node and the neighboring physical nodes on the physical links based on the instruction to create the communication paths. The at least one node management unit starts the service executed by the first virtual node using the computer resources secured by the second physical node. The network management unit sends the at least one link management unit an instruction to switch the virtual links. The at least one link management unit switches communication paths to the created communication paths for switching the virtual links.

0027. According to an aspect of this invention, the service of a virtual node is started in the physical node of the migration destination, and communication paths to be allocated virtual links are prepared between the physical node of the migration destination and the physical nodes that execute the service of the neighboring virtual nodes, so that migration of the virtual node to a different physical node can be performed quickly without interruption of the service being executed by the virtual node.

BRIEF DESCRIPTION OF THE DRAWINGS

0028. FIG. 1 is an explanatory diagram illustrating a configuration example of a network system in the embodiments of this invention;


0029. FIG. 2 is an explanatory diagram illustrating a configuration example of a virtual network (slice) in Embodiment 1 of this invention;
0030. FIG. 3 is an explanatory diagram illustrating a configuration example of a physical network in Embodiment 1 of this invention;
0031. FIG. 4 is an explanatory diagram illustrating an example of mapping information in Embodiment 1 of this invention;
0032. FIG. 5 is an explanatory diagram illustrating an example of virtual node management information in Embodiment 1 of this invention;
0033. FIG. 6 is an explanatory diagram illustrating a configuration example of a physical node in Embodiment 1 of this invention;
0034. FIG. 7A is an explanatory diagram illustrating an example of packet format in Embodiment 1 of this invention;
0035. FIG. 7B is an explanatory diagram illustrating another example of packet format in Embodiment 1 of this invention;
0036. FIG. 8 is an explanatory diagram illustrating an example of path configuration information in Embodiment 1 of this invention;
0037. FIG. 9A is a sequence diagram illustrating a processing flow of migration in Embodiment 1 of this invention;
0038. FIG. 9B is a sequence diagram illustrating a processing flow of migration in Embodiment 1 of this invention;
0039. FIG. 10A is an explanatory diagram illustrating a state within a domain 15 during the migration in Embodiment 1 of this invention;
0040. FIG. 10B is an explanatory diagram illustrating a state within a domain 15 during the migration in Embodiment 1 of this invention;
0041. FIG. 10C is an explanatory diagram illustrating a state within a domain 15 during the migration in Embodiment 1 of this invention;
0042. FIG. 11A is an explanatory diagram illustrating an example of path configuration information in Embodiment 1 of this invention;
0043. FIG. 11B is an explanatory diagram illustrating an example of path configuration information in Embodiment 1 of this invention;
0044. FIG. 12A is an explanatory diagram illustrating a connection state of communication paths in a GRE converter in Embodiment 1 of this invention;
0045. FIG. 12B is an explanatory diagram illustrating a connection state of communication paths in a GRE converter in Embodiment 1 of this invention;
0046. FIG. 13 is an explanatory diagram illustrating a configuration example of a physical network in Embodiment 2 of this invention;
0047. FIG. 14A is a sequence diagram illustrating a processing flow of migration in Embodiment 2 of this invention;
0048. FIG. 14B is a sequence diagram illustrating a processing flow of migration in Embodiment 2 of this invention;
0049. FIG. 15A is an explanatory diagram illustrating a state within a domain 15 during the migration in Embodiment 2 of this invention;
0050. FIG. 15B is an explanatory diagram illustrating a state within a domain 15 during the migration in Embodiment 2 of this invention;
0051. FIG. 15C is an explanatory diagram illustrating a state within a domain 15 during the migration in Embodiment 2 of this invention;


0052. FIG. 16A is a sequence diagram illustrating a processing flow of migration in Embodiment 3 of this invention; and
0053. FIG. 16B is a sequence diagram illustrating a processing flow of migration in Embodiment 3 of this invention.

DETAILED DESCRIPTION OF THE EMBODIMENTS

0054. First, a configuration example of a network system to be used as the basis of this invention is described.

0055. FIG. 1 is an explanatory diagram illustrating a configuration example of a network system in the embodiments of this invention.

0056. In this invention, a plurality of different virtual networks 20 are created on a physical network 10.

0057. The physical network 10 is composed of a plurality of physical nodes 100, which are connected via specific network lines.

0058. This invention is not limited to the type of the network; any of a WAN, a LAN, a SAN, or another network may be used. This invention is not limited to the connection means either, which may be wired or wireless.

0059. A virtual network 20 is composed of a plurality of virtual nodes 200, which are connected to one another via virtual network lines. The virtual nodes 200 execute predetermined service in the virtual network 20.

0060. A virtual node 200 is implemented using computer resources of a physical node 100. Accordingly, one physical node 100 can provide virtual nodes 200 of different virtual networks 20.

0061. The virtual networks 20 may be networks using different communication protocols.

0062. Under the above-described scheme, independent networks can be freely created on a physical network 10. Moreover, effective utilization of existing computer resources lowers the introduction cost.

0063. In this description, a virtual network is also referred to as a slice.

Embodiment 1

0064. FIG. 2 is an explanatory diagram illustrating a configuration example of a virtual network (slice) 20 in Embodiment 1 of this invention.

0065. In this embodiment, the slice 20 is composed of a virtual node A (200-1), a virtual node B (200-2), and a virtual node C (200-3). The virtual nodes A (200-1) and C (200-3) are connected via a virtual link 250-1; the virtual nodes B (200-2) and C (200-3) are connected via a virtual link 250-2.

0066. In the following explanation, the virtual node C (200-3) is assumed to be the virtual node to be the subject of migration. For simplicity of explanation, FIG. 2 shows a virtual network (slice) 20 with a simple topology; however, the processing described hereinafter can be performed in a virtual network (slice) 20 with a more complex topology.

(System Configuration)

0067. FIG. 3 is an explanatory diagram illustrating a configuration example of the physical network 10 in Embodiment 1 of this invention.

0068. Embodiment 1 is described using a physical network 10 under a single domain 15 by way of example.

0069. The domain 15 forming the physical network 10 includes a domain management server 300 and a plurality of physical nodes 100. This embodiment is based on the assumption that the slice 20 shown in FIG. 2 is provided using the physical nodes 100 in the domain 15.

0070. The domain management server 300 is a computer for managing the physical nodes 100 in the domain 15. The domain management server 300 includes a CPU 310, a primary storage device 320, a secondary storage device 330, and an NIC 340.

0071. The CPU 310 executes programs stored in the primary storage device 320. The CPU 310 executes the programs to perform the functions of the domain management server 300. The domain management server 300 may have a plurality of CPUs 310.

0072. The primary storage device 320 stores programs to be executed by the CPU 310 and information required to execute the programs. An example of the primary storage device 320 is a memory.

0073. The primary storage device 320 stores a program (not shown) for implementing a domain management unit 321. The primary storage device 320 also stores mapping information 322 and virtual node management information 323 as the information to be used by the domain management unit 321.

0074. The domain management unit 321 manages the physical nodes 100 and the virtual nodes 200. In this embodiment, migration of a virtual node 200 is executed by the domain management unit 321.

0075. The mapping information 322 is information for managing correspondence relations between the physical nodes 100 in the domain 15 and the virtual nodes 200. The details of the mapping information 322 will be described later using FIG. 4. The virtual node management information 323 is configuration information for the virtual nodes 200. The details of the virtual node management information 323 will be described later using FIG. 5.

0076. The virtual node management information 323 is held by each physical node 100; the domain management server 300 can acquire the virtual node management information 323 from each physical node 100 in the domain 15.

0077. The secondary storage device 330 stores a variety of data. Examples of the secondary storage device 330 are an HDD (Hard Disk Drive) and an SSD (Solid State Drive).

0078. The program for implementing the domain management unit 321, the mapping information 322, and the virtual node management information 323 may be held in the secondary storage device 330. In this case, the CPU 310 retrieves them from the secondary storage device 330 and loads the retrieved program and information to the primary storage device 320.

0079. The NIC 340 is an interface for connecting the domain management server 300 to other nodes via network lines. In this embodiment, the domain management server 300 is connected to the physical nodes 100 via the physical links 500-1, 500-2, 500-3, and 500-4 connected from the NIC 340. More specifically, the domain management server 300 is connected so as to be able to communicate with the node management units 190 of the physical nodes 100 via the physical links 500.

0080. The domain management server 300 may further include a management interface to connect to the node management units 190 of the physical nodes 100.

0081. A physical node 100 provides a virtual node 200 included in the slice 20 with computer resources. The physical nodes 100 are connected to one another via physical links 400. Specifically, the physical node A (100-1) and the physical node C (100-3) are connected via a physical link 400-1; the physical node C (100-3) and the physical node B (100-2) are connected via a physical link 400-2; the physical node A (100-1) and the physical node D (100-4) are connected via a physical link 400-3; and the physical node B (100-2) and the physical node D (100-4) are connected via a physical link 400-4.

0082. Each virtual node 200 is allocated to one of the physical nodes 100. In this embodiment, the virtual node A (200-1) is allocated to the physical node A (100-1); the virtual node B (200-2) is allocated to the physical node B (100-2); and the virtual node C (200-3) is allocated to the physical node C (100-3).

0083. Each physical node 100 includes a link management unit 160 and a node management unit 190. The link management unit 160 manages the physical links 400 connecting physical nodes 100 and the virtual links 250. The node management unit 190 manages the entirety of the physical node 100. The physical node 100 also includes a virtualization management unit (refer to FIG. 6) for implementing a virtual machine (VM) 110.

0084. In this embodiment, a VM 110 provides functions to implement a virtual node 200. Specifically, the VM 110 provides programmable functions for the virtual node 200. For example, the VM 110 executes a program to implement the function to convert the communication protocol.

0085. In this embodiment, the VM A (110-1) provides the functions of the virtual node A (200-1); the VM B (110-2) provides the functions of the virtual node B (200-2); and the VM C (110-3) provides the functions of the virtual node C (200-3).

0086. In this embodiment, a VM 110 provides the functions of a virtual node 200; however, this invention is not limited to this. For example, the function of the virtual node 200 may be provided using a network processor, a GPU, or an FPGA.

0087. In a physical link 400 connecting physical nodes 100 allocated virtual nodes 200, GRE tunnels 600 are created to implement a virtual link 250. This invention is not limited to this scheme implementing the virtual link 250 using the GRE tunnels 600; for example, the virtual link 250 can be implemented using Mac-in-Mac or a VLAN.

0088. Specifically, GRE tunnels 600-1 and 600-2 for providing the virtual link 250-1 are created in the physical link 400-1, and GRE tunnels 600-3 and 600-4 for providing the virtual link 250-2 are created in the physical link 400-2.

0089. One GRE tunnel 600 supports unidirectional communication. For this reason, two GRE tunnels 600 are created per virtual link in this embodiment to support bidirectional communication between virtual nodes 200.
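As a concrete illustration of this tunnel layout, the following sketch represents one virtual link as a pair of unidirectional GRE tunnels carried on one physical link. The Python types and the direction assigned to each tunnel number are assumptions made for illustration only; they are not part of the disclosed system.

```python
from dataclasses import dataclass

@dataclass
class GreTunnel:
    tunnel_id: str   # e.g. "600-1"
    src_vm: str      # VM that transmits over this tunnel
    dst_vm: str      # VM that receives over this tunnel

@dataclass
class VirtualLink:
    link_id: str          # e.g. "250-1"
    physical_link_id: str # physical link carrying the tunnels, e.g. "400-1"
    tunnels: tuple        # two unidirectional GRE tunnels, one per direction

# Virtual link 250-1 between virtual nodes A and C, carried on physical link 400-1
# by GRE tunnels 600-1 and 600-2 (which tunnel serves which direction is assumed here).
link_250_1 = VirtualLink(
    link_id="250-1",
    physical_link_id="400-1",
    tunnels=(
        GreTunnel("600-1", src_vm="VM A", dst_vm="VM C"),
        GreTunnel("600-2", src_vm="VM C", dst_vm="VM A"),
    ),
)
```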

0090. FIG. 4 is an explanatory diagram illustrating an example of the mapping information 322 in Embodiment 1 of this invention.

0091. The mapping information 322 stores information indicating correspondence relations between the virtual nodes 200 and the physical nodes 100 running the VMs 110 for providing the functions of the virtual nodes 200. Specifically, the mapping information 322 includes virtual node IDs 710, physical node IDs 720, and VM IDs 730. The mapping information 322 may include other information.

0092. A virtual node ID 710 stores an identifier to uniquely identify a virtual node 200. A physical node ID 720 stores an identifier to uniquely identify a physical node 100. A VM ID 730 stores an identifier to uniquely identify a VM 110.
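For illustration, the mapping information 322 can be thought of as a lookup table from a virtual node ID (710) to the physical node ID (720) and VM ID (730) allocated to it, as in the hedged Python sketch below. The dictionary layout is an assumption; FIG. 4 itself shows only the entry for virtual node C.

```python
# Mapping information 322 for the slice of FIG. 2:
# virtual node ID 710 -> (physical node ID 720, VM ID 730).
mapping_information = {
    "virtual node A": ("physical node A", "VM A"),
    "virtual node B": ("physical node B", "VM B"),
    "virtual node C": ("physical node C", "VM C"),
}

def locate(virtual_node_id: str):
    """Return the physical node and VM currently allocated to the given virtual node."""
    return mapping_information[virtual_node_id]

# During the migration described later, the entry for virtual node C would be
# rewritten to ("physical node D", "VM D").
```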


0093. FIG. 5 is an explanatory diagram illustrating an example of the virtual node management information 323 in Embodiment 1 of this invention.

0094. The virtual node management information 323 stores a variety of information to manage a virtual node 200 allocated to a physical node 100. In this embodiment, the virtual node management information 323 is in the XML format, and a piece of virtual node management information 323 is for a single virtual node 200. Typically, a physical node 100 therefore holds a plurality of pieces of virtual node management information 323.

0095. The virtual node management information 323 includes an attribute 810 and virtual link information 820. The virtual node management information 323 may include other information.

0096. The attribute 810 stores information indicating the attribute of the virtual node 200, for example, identification information on the programs to be executed on the virtual node 200.

0097. The virtual link information 820 stores information on the virtual links 250 connected to the virtual node 200 allocated to the physical node 100. For example, a piece of virtual link information 820 stores identification information on one of such virtual links 250 and identification information on the other virtual node 200 connected via the virtual link 250.
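The text fixes only the fields of the virtual node management information 323 (an attribute 810 and per-link virtual link information 820) and the fact that it is expressed in XML; the element and attribute names in the sketch below are hypothetical, chosen to mirror the example of FIG. 5 for virtual node C.

```python
import xml.etree.ElementTree as ET

# Hypothetical XML rendering of the virtual node management information 323
# for virtual node C: one attribute element (810) and one virtual-link element
# (820) per virtual link 250 connected to the node.
VIRTUAL_NODE_C_INFO = """
<virtual-node id="virtual node C">
  <attribute>program to be executed on the virtual node</attribute>
  <virtual-link link-id="link 250-1" peer="virtual node A"/>
  <virtual-link link-id="link 250-2" peer="virtual node B"/>
</virtual-node>
"""

root = ET.fromstring(VIRTUAL_NODE_C_INFO)
peers = [vl.get("peer") for vl in root.findall("virtual-link")]
print(peers)  # ['virtual node A', 'virtual node B']
```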

0098. The example of FIG. 5 shows the virtual node management information 323 on the virtual node C (200-3). This virtual node management information 323 includes virtual link information 820-1 and virtual link information 820-2 on the virtual link 250-1 and the virtual link 250-2, respectively, which connect to the virtual node C (200-3) allocated to the physical node C (100-3).

0099. This invention is not limited to the data format of the virtual node management information 323; the data format may be a different one, such as a table format.

0100. FIG. 6 is an explanatory diagram illustrating a configuration example of a physical node 100 in Embodiment 1 of this invention. Although FIG. 6 illustrates the physical node C (100-3) by way of example, the physical node A (100-1), the physical node B (100-2), and the physical node D (100-4) have the same configuration.

0101. The physical node C (100-3) includes a plurality of servers 900, an in-node switch 1000, and a GRE converter 1100. Inside the physical node C (100-3), a VLAN is created.

0102. Each server 900 includes a CPU 910, a primary storage device 920, an NIC 930, and a secondary storage device 940.

0103. The CPU 910 executes programs stored in the primary storage device 920. The CPU 910 executes the programs to perform the functions of the server 900. The primary storage device 920 stores programs to be executed by the CPU 910 and information required to execute the programs.

0104. The NIC 930 is an interface for connecting the physical node to other apparatuses via network lines. The secondary storage device 940 stores a variety of information.

0105. In this embodiment, a physical node 100 includes a server 900 including a node management unit 931 and a server 900 including a virtualization management unit 932. The CPU 910 executes a specific program stored in the primary storage device 920 to implement the node management unit 931 or the virtualization management unit 932.


0106. When the following description is provided by a sentence with a subject of the node management unit 931 or the virtualization management unit 932, the sentence indicates that the program for implementing the node management unit 931 or the virtualization management unit 932 is being executed by the CPU 910.

0107. The node management unit 931 is the same as the node management unit 190. The node management unit 931 holds the virtual node management information 323 to manage the virtual nodes 200 allocated to the physical node 100.

0108. The virtualization management unit 932 creates VMs 110 using computer resources and manages the created VMs 110. An example of the virtualization management unit 932 is a hypervisor. The methods of creating and managing VMs 110 are known; accordingly, detailed explanation thereof is omitted.

0109. The server 900 running the node management unit 931 is connected to the in-node switch 1000 and the GRE converter 1100 via a management network and is also connected to the domain management server 300 via the physical link 500-3. The servers 900 running the virtualization management units 932 are connected to the in-node switch 1000 via an internal data network.

0110. The in-node switch 1000 connects the servers 900 and the GRE converter 1100 in the physical node C (100-3). The in-node switch 1000 has a function for managing a VLAN and transfers packets within the VLAN. Since the configuration of the in-node switch 1000 is known, the explanation thereof is omitted; however, the in-node switch 1000 includes, for example, a switching transfer unit (not shown) and an I/O interface (not shown) having one or more ports.

0111. The GRE converter 1100 corresponds to the link management unit 160; it manages connections among physical nodes 100. The GRE converter 1100 creates GRE tunnels 600 and communicates with other physical nodes 100 via the GRE tunnels 600. The GRE converter 1100 includes computer resources such as a CPU (not shown), a memory (not shown), and a network interface.

0112. This embodiment employs the GRE converter 1100 because the virtual links 250 are provided using GRE tunnels 600; however, this invention is not limited to this. A router or an access gateway apparatus based on a protocol for implementing the virtual links 250 may be used instead.

0113. The GRE converter 1100 holds path configuration information 1110. The path configuration information 1110 is information representing connections of GRE tunnels 600 to communicate with virtual nodes 200. The GRE converter 1100 can switch connections to virtual nodes 200 using the path configuration information 1110. The details of the path configuration information 1110 will be described later with reference to FIG. 8.

0114. When sending a packet to a VM 110 running on a remote physical node 100, the GRE converter 1100 attaches a GRE header to the packet in the local physical node 100 to encapsulate it and sends the encapsulated packet. When receiving a packet from a VM 110 running on a remote physical node 100, the GRE converter 1100 removes the GRE header from the packet and converts (decapsulates) it into a Mac-in-Mac packet for the VLAN to transfer the converted packet to a VM 110 in the physical node 100.

0115. Now, the format of packets transmitted between physical nodes 100 is described.

0116. FIGS. 7A and 7B are explanatory diagrams illustrating examples of packet format in Embodiment 1 of this invention.


FIG. 7A illustrates the packet format of a data packet 1200 and FIG. 7B illustrates the packet format of a control packet 1210.

0117. A data packet 1200 consists of a GRE header 1201, a packet type 1202, and a virtual network packet 1203.

0118. The GRE header 1201 stores a GRE header. The packet type 1202 stores information indicating the type of the packet; in the case of a data packet 1200, the packet type 1202 stores "DATA". The virtual network packet 1203 stores a packet to be transmitted in the virtual network (slice) 20.

0119. A control packet 1210 consists of a GRE header 1211, a packet type 1212, and control information 1213.

0120. The GRE header 1211 and the packet type 1212 are the same as the GRE header 1201 and the packet type 1202, respectively, although the packet type 1212 stores "CONTROL". The control information 1213 stores a command and the information required for control processing.

0121. Data packets 1200 are transmitted between the VMs 110 that provide the functions of virtual nodes 200, and control packets 1210 are transmitted between the servers 900 running the node management units 931 of the physical nodes 100.
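The two layouts of FIGS. 7A and 7B can be summarized with the sketch below. The specification does not define field widths or a byte-level encoding, so the Python classes are only a structural illustration, with the "DATA" and "CONTROL" markers standing in for the packet type values named above.

```python
from dataclasses import dataclass

@dataclass
class DataPacket:                  # FIG. 7A
    gre_header: bytes              # 1201
    packet_type: str               # 1202, holds "DATA"
    virtual_network_packet: bytes  # 1203, the slice's own packet in any format

@dataclass
class ControlPacket:               # FIG. 7B
    gre_header: bytes              # 1211
    packet_type: str               # 1212, holds "CONTROL"
    control_information: bytes     # 1213, a command and its parameters

def encapsulate_data(inner_packet: bytes, gre_header: bytes) -> DataPacket:
    """Wrap a slice packet for transmission between physical nodes (cf. paragraph 0114)."""
    return DataPacket(gre_header=gre_header, packet_type="DATA",
                      virtual_network_packet=inner_packet)
```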

0122. When the GRE converter 1100 receives a packet from a VM 110 running on a remote physical node 100, it identifies the type of the received packet with reference to the packet type 1202 or 1212. If the received packet is a control packet 1210, the GRE converter 1100 performs control processing based on the information stored in the control information 1213. If the received packet is a data packet 1200, the GRE converter 1100 transfers a decapsulated packet to a specified server 900.

0123. To send a data packet 1200 to a VM 110 running on a remote physical node 100, the GRE converter 1100 sends an encapsulated packet in accordance with the path configuration information 1110. To send a control packet 1210 to the domain management server 300 or a remote physical node 100, the GRE converter 1100 sends an encapsulated packet via a GRE tunnel 600.

0124. FIG. 8 is an explanatory diagram illustrating an example of the path configuration information 1110 in Embodiment 1 of this invention. FIG. 8 explains the path configuration information 1110 included in the GRE converter 1100 in the physical node A (100-1) by way of example.

0125. The path configuration information 1110 includes communication directions 1310 and communication availabilities 1320.

0126. A communication direction 1310 stores information indicating the communication direction between VMs 110, namely, information indicating the communication direction of a GRE tunnel 600.

0127. Specifically, the communication direction 1310 stores identification information on the VM 110 of the transmission source and the VM 110 of the transmission destination. Although the example of FIG. 8 uses an arrow to represent the communication direction, this invention is not limited to this; any data format is acceptable if the VMs 110 of the transmission source and the transmission destination can be identified.

0128. A communication availability 1320 stores information indicating whether to connect the communication between the VMs 110 represented by the communication direction 1310. In this embodiment, if communication between the VMs 110 is to be connected, the communication availability 1320 stores "OK", and if communication between the VMs 110 is not to be connected, the communication availability 1320 stores "NO".
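Putting the receive-side dispatch of paragraph 0122 together with the send-side use of the path configuration information 1110 (paragraph 0123 and FIG. 8), a GRE converter could behave roughly as sketched below. All function names and the dictionary representation of the table are assumptions; the real converter is a dedicated network component, not Python code.

```python
# Path configuration information 1110: communication direction 1310
# (source VM, destination VM) -> communication availability 1320 ("OK" or "NO").
path_configuration = {
    ("VM C", "VM A"): "OK",
    ("VM A", "VM C"): "OK",
}

def on_packet_from_remote_node(packet_type: str, payload: bytes, destination_vm: str) -> None:
    # Paragraph 0122: control packets trigger control processing; data packets are
    # decapsulated and handed to the specified server/VM over the internal VLAN.
    if packet_type == "CONTROL":
        run_control_command(payload)
    elif packet_type == "DATA":
        deliver_over_vlan(payload, destination_vm)

def send_data_to_remote_vm(payload: bytes, source_vm: str, destination_vm: str) -> None:
    # Paragraph 0123: an outgoing data packet is encapsulated and sent over the GRE
    # tunnel whose direction entry is marked "OK" in the path configuration information.
    if path_configuration.get((source_vm, destination_vm)) == "OK":
        send_over_gre_tunnel(payload, (source_vm, destination_vm))

# Placeholders for the converter's lower-level operations (illustrative only).
def run_control_command(payload: bytes) -> None: ...
def deliver_over_vlan(payload: bytes, vm: str) -> None: ...
def send_over_gre_tunnel(payload: bytes, direction: tuple) -> None: ...
```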

(Migration)

0129. Hereinafter, migration of the virtual node C (200-3) from the physical node C (100-3) to the physical node D (100-4) will be described with reference to FIGS. 9A, 9B, 10A, 10B, 10C, 11A, 11B, 12A, and 12B.

0130. FIGS. 9A and 9B are sequence diagrams illustrating a processing flow of migration in Embodiment 1 of this invention. FIGS. 10A, 10B, and 10C are explanatory diagrams illustrating states in the domain 15 during the migration in Embodiment 1 of this invention. FIGS. 11A and 11B are explanatory diagrams illustrating examples of the path configuration information 1110 in Embodiment 1 of this invention. FIGS. 12A and 12B are explanatory diagrams illustrating connection states of communication paths in the GRE converter 1100 in Embodiment 1 of this invention.

0131. This embodiment is based on the assumption that the administrator who operates the domain management server 300 enters a request for start of migration together with the identifier of the virtual node C (200-3) to be the subject of migration. This invention is not limited to the time to start the migration; for example, the migration may be started when the load on a VM 110 exceeds a threshold.

0132. The domain management server 300 first secures the computer resources required for the migration and configures the information used in the migration. Specifically, Steps S101 to S106 are performed.

0133. These steps are preparation for preventing interruption of the service executed in the slice 20 and for switching VMs 110 in no time.

0134. The domain management server 300 sends an instruction for VM creation to the physical node D (100-4) (Step S101).

0135. Specifically, the domain management server 300 sends an instruction to create a VM D (110-4) to the node management unit 931 of the physical node D (100-4). The instruction for VM creation includes a set of configuration information for the VM D (110-4). The configuration information for a VM 110 includes, for example, the CPU to be allocated, the size of memory to be allocated, the path name of the OS boot image, and the names of the programs to provide the service to be executed by the virtual node C (200-3).

0136. The domain management server 300 creates the instruction for VM creation so that the VM D (110-4) will have the same capability as the VM C (110-3). Specifically, the domain management server 300 acquires the configuration information for the VM C (110-3) from the virtualization management unit 932 in the server 900 running the VM C (110-3) and creates the instruction for VM creation based on the acquired configuration information.

0137. The domain management server 300 sends instructions for virtual link creation to the physical nodes A (100-1) and D (100-4) (Steps S102 and S103). Similarly, the domain management server 300 sends instructions for virtual link creation to the physical nodes B (100-2) and D (100-4) (Steps S104 and S105). Specifically, the following processing is performed.

0138. The domain management server 300 identifies the physical node C (100-3) allocated the virtual node C (200-3) with reference to the mapping information 322.


0139. Next, the domain management server 300 identifies the virtual node A (200-1) and the virtual node B (200-2) connected via the virtual links 250-1 and 250-2 with reference to the virtual node management information 323 of the physical node C (100-3).

0140. Furthermore, the domain management server 300 identifies the physical node A (100-1) allocated the virtual node A (200-1) and the physical node B (100-2) allocated the virtual node B (200-2) with reference to the mapping information 322.

0141. Next, the domain management server 300 investigates the connections among the virtual nodes 200 to identify the neighboring virtual nodes 200 of the virtual node C (200-3). Under the connections in the slice 20 in this embodiment, the virtual nodes 200 that can be reached from the virtual node C (200-3) with one hop are defined as the neighboring virtual nodes 200. Accordingly, the virtual nodes A (200-1) and B (200-2) are the neighboring virtual nodes 200 of the virtual node C (200-3). The number of hops can be freely determined.

0142. Furthermore, the domain management server 300 identifies the physical nodes A (100-1) and B (100-2) allocated the neighboring virtual nodes 200 as the neighboring physical nodes 100.

0143. The domain management server 300 sends instructions to create a virtual link 250-1 between the physical node A (100-1) and the physical node D (100-4). The domain management server 300 further sends instructions to create a virtual link 250-2 between the physical node B (100-2) and the physical node D (100-4).

0144. The instruction for virtual link creation includes configuration information for the virtual link 250. The configuration information for the virtual link 250 includes, for example, a bandwidth, a GRE key required for connection, and IP addresses.

0145. Described above is the processing at Steps S102, S103, S104, and S105.

0146. Next, the domain management server 300 notifies the physical node C (100-3) of the requirements for VM deactivation (Step S106).

0147. The requirements for VM deactivation represent the requirements to deactivate a VM 110 running on the physical node 100 of the migration source. Upon receipt of the requirements for VM deactivation, the node management unit 931 of the physical node C (100-3) starts determining whether the requirements for VM deactivation are satisfied.

0148. This embodiment is based on the assumption that the requirements for VM deactivation are predetermined so as to deactivate the VM C (110-3) when notices of completion of virtual link switching are received from the neighboring physical nodes, namely, the physical nodes A (100-1) and B (100-2). In other words, the node management unit 931 of the physical node C (100-3) does not deactivate the VM C (110-3) until receipt of the notices of completion of virtual link switching from the physical node A (100-1) running the VM A (110-1) and the physical node B (100-2) running the VM B (110-2).

0149. When the physical node D (100-4) receives the instruction for VM creation, it creates a VM D (110-4) on a specific server 900 in accordance with the instruction for VM creation (Step S107). Specifically, the following processing is performed.


0150. The node management unit 931 determines a server 900 where to create the VM D (110-4). The node management unit 931 transfers the received instruction for VM creation to the virtualization management unit 932 running on the determined server 900.

0151. The virtualization management unit 932 creates the VM D (110-4) in accordance with the instruction for VM creation. After creating the VM D (110-4), the virtualization management unit 932 responds with the completion of the creation of the VM D (110-4). At this moment, the created VM D (110-4) is not activated.

0152. Described above is the processing at Step S107.

0153. When the physical nodes A (100-1) and D (100-4) receive the instructions for virtual link creation, they create GRE tunnels 600-5 and 600-6 (refer to FIG. 10A) to implement the virtual link 250-1 in accordance with the instructions for virtual link creation (Step S108). Specifically, the following processing is performed.

0154. The node management unit 931 of the physical node A (100-1) transfers the instruction for virtual link creation to the GRE converter 1100 upon receipt of it from the domain management server 300. Likewise, the node management unit 931 of the physical node D (100-4) transfers the instruction for virtual link creation received from the domain management server 300 to the GRE converter 1100 upon receipt of it.

0155. The GRE converters 1100 of the physical nodes A (100-1) and D (100-4) create the GRE tunnels 600-5 and 600-6. The GRE tunnels 600 can be created using a known technique; accordingly, the explanation thereof is omitted in this description.

0156. The GRE converter 1100 of the physical node A (100-1) adds entries corresponding to the GRE tunnels 600-5 and 600-6 to the path configuration information 1110 as shown in FIG. 11A.

0157. The GRE converter 1100 of the physical node A (100-1) sets "NO" to the communication availability 1320 of the entry for the GRE tunnel 600-5 and "OK" to the communication availability 1320 of the entry for the GRE tunnel 600-6 (refer to FIG. 11A).

0158. In the meanwhile, the GRE converter 1100 of the physical node D (100-4) adds entries corresponding to the GRE tunnels 600-5 and 600-6 to the path configuration information 1110 and sets "OK" to the communication availabilities 1320 of both entries.

0159. Through the above-described processing, a virtual link 250 that allows only unidirectional communication from the VM D (110-4) to the VM A (110-1) is created between the physical nodes A (100-1) and D (100-4).

0160. Described above is the processing at Step S108.

0161. Similarly, the physical nodes B (100-2) and D (100-4) create GRE tunnels 600-7 and 600-8 (refer to FIG. 10A) to implement the virtual link 250-2 in accordance with the instructions for virtual link creation upon receipt of them (Step S109).

0162. On this occasion, the GRE converter 1100 of the physical node B (100-2) sets "NO" to the communication availability 1320 of the entry for the GRE tunnel 600-7 and "OK" to the communication availability 1320 of the entry for the GRE tunnel 600-8. The GRE converter 1100 of the physical node D (100-4) sets "OK" to the communication availabilities 1320 of the entries for the GRE tunnels 600-7 and 600-8.
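For Steps S108 and S109, the resulting state of the path configuration information 1110 at physical node A can be pictured as below (the state of FIG. 11A). The tunnel-to-direction assignment of the pre-existing pair is assumed; the new pair follows paragraphs 0157 and 0159.

```python
# Path configuration information 1110 at physical node A after Step S108:
# the tunnels toward/from the migration-source VM C remain usable, the new
# tunnel pair toward/from the migration-destination VM D has been created,
# but the direction from VM A toward VM D (GRE tunnel 600-5) stays "NO" so
# that VM A's traffic keeps flowing to VM C until the links are switched.
path_configuration_node_a = {
    ("VM A", "VM C"): "OK",   # existing tunnel of virtual link 250-1 (direction assumed)
    ("VM C", "VM A"): "OK",   # existing tunnel of virtual link 250-1 (direction assumed)
    ("VM A", "VM D"): "NO",   # new GRE tunnel 600-5 (cf. FIG. 11A)
    ("VM D", "VM A"): "OK",   # new GRE tunnel 600-6
}
```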

0163. After creating the virtual links 250, the node management units 931 of the physical nodes A (100-1) and B (100-2) send the domain management server 300 notices indicating that the computer resources have been secured (Steps S110 and S111).


0164. In the meanwhile, the node management unit 931 of the physical node D (100-4) sends the domain management server 300 a notice indicating that the computer resources have been secured after creating the VM D (110-4) and the virtual links 250 (Step S112).

0165. In response, the domain management server 300 creates update information for the mapping information 322 and the virtual node management information 323 based on the notices indicating that the computer resources have been secured and stores it on a temporary basis. In this embodiment, the domain management server 300 creates the information as follows.

0166. The domain management server 300 creates update information for the mapping information 322 in which the entry corresponding to the virtual node C (200-3) includes the physical node D (100-4) in the physical node ID 720 and the VM D (110-4) in the VM ID 730. The domain management server 300 also creates virtual node management information 323 on the physical node D (100-4). The domain management server 300 may acquire the virtual node management information 323 from the physical node D (100-4).

0167. FIG. 10A illustrates the state of the domain 15 when the processing up to Step S112 is done.

0168. In FIG. 10A, the GRE tunnels 600-5 and 600-7 are represented by dotted lines, which means that the GRE tunnels 600-5 and 600-7 are present but cannot be used to transmit packets. Now, using FIG. 12A, a connection state of communication paths in the GRE converter 1100 of the physical node A (100-1) is explained.

0169. As shown in FIG. 12A, the GRE converter 1100 configures its internal communication paths so as to transfer the packets received from both of the VM C (110-3) and the VM D (110-4) to the VM A (110-1). The GRE converter 1100 also configures its internal communication paths so as to transfer the packets received from the VM A (110-1) only to the VM C (110-3). As previously described, the GRE converter 1100 controls the packets not to be transferred to the GRE tunnel 600-5.

0170. The explanation returns to FIG. 9A.
0171. The domain management server 300 sends an instruction to activate the VM D (110-4) to the physical node D (100-4) (Step S113). Specifically, the instruction to activate the VM D (110-4) is sent to the node management unit 931 of the physical node D (100-4).
0172. The role of this instruction is to prevent the VM D (110-4) from operating before the creation of the virtual links 250.
0173. The node management unit 931 of the physical node D (100-4) instructs the virtualization management unit 932 to activate the VM D (110-4) (Step S114) and sends a notice of completion of activation of the VM D (110-4) to the domain management server 300 (Step S115).
0174. At the time when the service of the virtual node C (200-3) is started by the activation of the VM D (110-4), both of the VM C (110-3) and the VM D (110-4) can provide the function of the virtual node C (200-3). At this time, however, the virtual node C (200-3) that uses the function provided by the VM C (110-3) may still be working on the service in progress. Accordingly, the virtual node C (200-3) using the function provided by the VM C (110-3) continues to execute the service.



0175. However, as shown in FIG. 10A, the virtual node C (200-3) using the function provided by the VM D (110-4) has also started the service. For this reason, even if the virtual links 250 are switched, the service is not interrupted. From the viewpoint of the user using the slice 20, the service appears to be executed by a single virtual node C (200-3).
0176. It should be noted that, in this embodiment, the service executed by the virtual node C (200-3) is stateless. That is to say, if the VM 110 providing the function to the virtual node C (200-3) executing the service is switched to another, the VMs 110 can perform processing independently. If the service executed by the virtual node C (200-3) is not stateless, providing a shared storage to share state information between the migration source VM 110 and the migration destination VM 110 enables continued service.
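If the service were stateful, the shared-storage approach mentioned above could look, in a purely illustrative sketch (the file path and helper names are assumed, not part of this disclosure), like the following: the migration source VM persists its state where the migration destination VM can read it.

    import json

    STATE_FILE = "/shared/virtual_node_c_state.json"  # assumed shared-storage location

    def save_state(state):
        """Migration source VM: persist service state to the shared storage."""
        with open(STATE_FILE, "w") as f:
            json.dump(state, f)

    def load_state():
        """Migration destination VM: resume from the shared service state."""
        with open(STATE_FILE) as f:
            return json.load(f)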

0177. After the domain management server 300 receives the notice of completion of activation of the VM D (110-4), it sends instructions for virtual link switching to the neighboring physical nodes, namely the physical node A (100-1) and the physical node B (100-2) (Steps S116 and S117). Each instruction for virtual link switching includes identification information on the GRE tunnels 600 to be switched.

0178. Upon receipt of the instructions for virtual link switching, the physical node A (100-1) and the physical node B (100-2) switch the virtual links 250 (Steps S118 and S119). Specifically, the following processing is performed. 0179 Upon receipt of an instruction for virtual link switching, the node management unit 931 transfers the received instruction to the GRE converter 1100.

0180. The GRE converter 1100 refers to the path configuration information 1110 to identify the entries for the GRE tunnels 600 to be switched based on the identification information on the GRE tunnels 600 included in the received instruction for virtual link switching. On this occasion, the entries for the GRE tunnel 600 connected to the VM C (110-3) of the migration source and the GRE tunnel 600 connected to the VM D (110-4) of the migration destination are identified.

0181. The GRE converter 1100 swaps the values set to the communication availabilities 1320 of the identified entries. Specifically, it changes the communication availability 1320 of the entry for the GRE tunnel 600 connected to the VM 110 of the migration source to “NO” and the communication availability 1320 of the entry for the GRE tunnel 600 connected to the VM 110 of the migration destination to “OK”.

0182. Through this operation, the path configuration information 1110 is updated to the state shown in FIG. 11B.

0183. The GRE converter 1100 switches the internal communication paths connected to the GRE tunnels 600 in accordance with the updated path configuration information 1110. The GRE converter 1100 sends a notice of completion of switching the communication paths to the node management unit 931.
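A minimal sketch of this switching step, reusing the simplified dictionary representation assumed above (the names are illustrative, not the patent's data structures):

    def switch_virtual_link(path_configuration, source_tunnel, destination_tunnel):
        """Swap the communication availabilities 1320 of the identified entries:
        block the tunnel toward the migration source VM and allow the tunnel
        toward the migration destination VM (cf. FIG. 11B)."""
        path_configuration[source_tunnel]["communication_availability"] = "NO"
        path_configuration[destination_tunnel]["communication_availability"] = "OK"

    # Example at the physical node A (100-1): block the tunnel toward the
    # VM C (110-3) and allow the tunnel 600-5 toward the VM D (110-4).
    config = {
        "tunnel_to_vm_c": {"communication_availability": "OK"},
        "600-5": {"communication_availability": "NO"},
    }
    switch_virtual_link(config, "tunnel_to_vm_c", "600-5")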

0184 Even after switching the internal communication paths, if a packet received by the GRE converter 1100 is a control packet 1210 and the destination of the control packet is the physical node 100 that had been allocated the virtual node 200 before the migration, the GRE converter 1100 can send the control packet 1210 via the internal communication path that had been used before the switching of the virtual links 250.
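An illustrative sketch of this packet-type dependent handling (a simplified assumption, not the disclosed implementation), in which only control packets 1210 addressed to the migration source may still take the pre-switch path:

    CONTROL_PACKET = 1210  # control packets such as management notices
    DATA_PACKET = 1200     # data packets of the slice 20

    def select_path(packet_type, destined_to_migration_source, old_path, new_path):
        """After the virtual links 250 are switched, data packets 1200 must not
        reach the physical node 100 of the migration source; control packets
        1210 addressed to it may still use the old internal path."""
        if packet_type == CONTROL_PACKET and destined_to_migration_source:
            return old_path
        return new_path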


0185. In other words, the GRE converter 1100 controls data packets 1200 so that they are not transferred to the physical node 100 that had been allocated the virtual node 200 before the migration.
0186. Through the processing described above, the internal communication paths are switched as shown in FIG. 12B. At this moment, the migration of the virtual node C (200-3) to the VM D (110-4) is completed. The virtual links 250 in the overall system are switched as shown in FIG. 10B.
0187. In this way, the virtual links 250 are switched after a certain time period has passed in order to obtain the result of the service executed by the virtual node C (200-3) using the function provided by the VM C (110-3). This approach assures the consistency of the service of the slice 20.
0188. Described above is the processing at Steps S118 and S119.
0189. After switching the virtual links 250, the virtual node C (200-3) that uses the function provided by the VM D (110-4) executes the service. At this time, however, the node management unit 931 of the physical node C (100-3) maintains the VM C (110-3) active since the requirements for deactivation of the VM C (110-3) are not yet satisfied.
0190. After switching the connection of the GRE tunnels 600 for implementing the virtual links 250, the physical nodes A (100-1) and B (100-2) send notices of completion of virtual link switching to the physical node C (100-3) (Steps S120 and S121). Specifically, the following processing is performed.
0191. The node management unit 931 of each physical node 100 inquires of the GRE converter 1100 about the result of switching the virtual link 250 to identify the GRE tunnel 600 to which the connection is switched. The GRE converter 1100 outputs information on the entry newly added to the path configuration information 1110 to identify the GRE tunnel to which the connection is switched.
0192. The node management unit 931 of each physical node 100 identifies the physical node 100 which runs the VM 110 to which the identified GRE tunnel 600 is connected with reference to the identifier of the VM 110. For example, the node management unit 931 of each physical node 100 may send an inquiry including the identifier of the identified VM 110 to the domain management server 300. In this case, the domain management server 300 can identify the physical node 100 that runs the identified VM 110 with reference to the mapping information 322.
0193. The method of identifying the physical node 100 to send a notice of completion of virtual link switching is not limited to the above-described one. For example, the node management unit 931 may originally hold information associating GRE tunnels 600 with connected physical nodes 100.
0194. The node management unit 931 creates a notice of completion of virtual link switching including the identifier of the connected physical node 100 and sends it to the GRE converter 1100. It should be noted that the notice of completion of virtual link switching is a control packet 1210.
0195. The GRE converter 1100 sends the notice of completion of virtual link switching to the connected physical node 100 via the GRE tunnel 600.
0196. Described above is the processing at Steps S120 and S121.
0197. Next, upon receipt of the notices of completion of virtual link switching from the physical nodes A (100-1) and B (100-2), the physical node C (100-3) deactivates the VM C (110-3) and the connections of the GRE tunnels 600 (Step S122).
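As a minimal sketch of the inquiry described above (the dictionary below is an assumed, simplified stand-in for the mapping information 322), the domain management server 300 resolves a VM identifier to the physical node 100 that runs it:

    # Assumed, simplified mapping information 322: VM identifier -> physical node.
    mapping_information = {
        "VM A (110-1)": "physical node A (100-1)",
        "VM B (110-2)": "physical node B (100-2)",
        "VM C (110-3)": "physical node C (100-3)",
        "VM D (110-4)": "physical node D (100-4)",
    }

    def resolve_physical_node(vm_id):
        """Answer a node management unit's inquiry: which physical node 100
        runs the VM 110 identified in the inquiry?"""
        return mapping_information[vm_id]

    # Example: resolve the physical node 100 that runs a given VM 110.
    destination = resolve_physical_node("VM C (110-3)")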


0198. This is because the node management unit 931 of the physical node C (100-3) has determined that the requirements for deactivation of the VM C (110-3) are satisfied.
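A minimal illustrative sketch of such a requirement check (helper names are assumed; the patent does not prescribe this structure): the node management unit 931 deactivates the VM only once notices of completion of virtual link switching have arrived from all neighboring physical nodes.

    def deactivation_requirement_satisfied(neighbors, notices_received):
        """Return True once every neighboring physical node 100 has sent its
        notice of completion of virtual link switching (cf. Step S122)."""
        return neighbors <= notices_received

    neighbors = {"physical node A (100-1)", "physical node B (100-2)"}
    notices_received = set()

    notices_received.add("physical node A (100-1)")
    deactivation_requirement_satisfied(neighbors, notices_received)  # False: still waiting for B
    notices_received.add("physical node B (100-2)")
    deactivation_requirement_satisfied(neighbors, notices_received)  # True: the VM C (110-3) may be deactivated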

0199. As mentioned above, the notices of completion of virtual link switching are transmitted via the GRE tunnels 600-2 and 600-4 used for transmitting data packets 1200. Accordingly, the node management unit 931 of a physical node 100 is assured, by receiving the notices of completion of virtual link switching, that data packets 1200 are no longer sent from the VM A (110-1) or the VM B (110-2) to the VM C (110-3).
0200. If the domain management server 300 is instead configured to send the notice of completion of virtual link switching, the control packet 1210 corresponding to the notice of completion of virtual link switching is transmitted via a communication path different from the communication path for transmitting data packets 1200. Accordingly, there remains a possibility that data packets 1200 may still be transmitted via the GRE tunnel 600-2 or 600-4.

0201. On the other hand, the above configuration makes it possible to recognize that the VM 110 which had provided the function to the virtual node 200 before the migration is no longer necessary, by receiving control packets 1210 from all the physical nodes 100 communicating with the VM 110 running on the physical node 100 of the migration source.
0202. The physical node C (100-3) sends responses to the notices of completion of virtual link switching to the physical nodes A (100-1) and B (100-2) (Steps S123 and S124).
0203. Since these responses are control packets 1210, they are transmitted via the GRE tunnels 600-1 and 600-3. Accordingly, the physical nodes 100 can be assured that packets are no longer sent from the VM 110 that had implemented the functions before the migration.
0204. Upon receipt of the response to the notice of completion of virtual link switching, each of the physical nodes A (100-1) and B (100-2) disconnects the GRE tunnel 600 for communicating with the VM C (110-3) (Steps S125 and S126).
0205. Specifically, the node management unit 931 of each physical node 100 sends the GRE converter 1100 an instruction to disconnect the GRE tunnel 600 for communicating with the VM C (110-3). Upon receipt of the instruction for disconnection, the GRE converter 1100 stops communication via the GRE tunnel 600 for communicating with the VM C (110-3).
0206. The physical nodes A (100-1) and B (100-2) each send a notice of virtual link disconnection to the domain management server 300 (Steps S127 and S128). The physical node C (100-3) notifies the domain management server 300 of the deactivation of the VM C (110-3) and the disconnection from the VM C (110-3) (Step S129).
0207. The domain management server 300 sends instructions to release the computer resources related to the VM C (110-3) to the physical nodes A (100-1), B (100-2), and C (100-3) (Steps S130, S131, and S132).
0208. Specifically, the domain management server 300 instructs the physical node A (100-1) to release the computer resources allocated to the GRE tunnels 600-1 and 600-2 and the physical node B (100-2) to release the computer resources allocated to the GRE tunnels 600-3 and 600-4. The domain management server 300 also instructs the physical node C (100-3) to release the computer resources allocated to the


VM C (110-3) and the GRE tunnels 600-1, 600-2, 600-3, and 600-4. As a result, effective use of computer resources is attained.
0209. In FIGS. 9A and 9B, the instructions and responses exchanged between the domain management server 300 and each physical node 100 may be issued in any sequence within the range of consistency of processing, or may be issued simultaneously. The same instruction or response may be sent a plurality of times. Alternatively, a single instruction or response may be separated into a plurality of instructions or responses to be sent.
0210. FIG. 10C is a diagram illustrating the state of the domain after the processing up to Step S132 is done. FIG. 10C indicates that the virtual node C (200-3) has been transferred from the physical node C (100-3) to the physical node D (100-4). It should be noted that the transfer of the virtual node C (200-3) is not recognized in the slice 20.
0211. Embodiment 1 enables migration of a virtual node 200 in a slice 20 between physical nodes 100 without interrupting the service being executed by the virtual node 200 or changing the network configuration of the slice 20.

Embodiment 2

0212. Embodiment 2 differs from Embodiment 1 in that the created virtual network 20 spans two or more domains 15. Hereinafter, migration of a virtual node 200 between domains 15 is described, focusing mainly on the differences from Embodiment 1.

(System Configuration)

0213. FIG. 13 is an explanatory diagram illustrating a configuration example of the physical network 10 in Embodiment 2 of this invention. Embodiment 2 is described using a physical network 10 under two domains 15 by way of example.
0214. The domain A (15-1) and the domain B (15-2) forming the physical network 10 each include a domain management server 300 and a plurality of physical nodes 100. Embodiment 2 is based on the assumption that the slice 20 shown in FIG. 2 is provided using physical nodes 100 in both domains 15. The slice 20 spanning two domains 15 can be created using a federation function.
0215. The domain management server A (300-1) and the domain management server B (300-2) are connected via a physical link 1300. The domain management server A (300-1) and the domain management server B (300-2) communicate with each other via the physical link 1300 to share the management information (such as the mapping information 322 and the virtual node management information 323) of the domains 15.
0216. The configuration of each domain management server 300 is the same as that of Embodiment 1; accordingly, the explanation thereof is omitted. In addition, the connections among the physical nodes 100 are the same as those of Embodiment 1; the explanation thereof is omitted.
0217. In Embodiment 2, the physical link 400-2 connecting the physical node B (100-2) and the physical node C (100-3) and the physical link 400-3 connecting the physical node A (100-1) and the physical node D (100-4) form the network connecting the domains 15.
0218. For this reason, gateway apparatuses may be installed at the boundaries of the domains 15 depending on the implementation of the physical network 10. This embodiment


is based on the configuration in which direct connection of physical nodes 100 between the two domains 15 is available with GRE tunnels 600; but in the case where gateways are installed, the same processing can be applied.
0219. The configuration of each physical node 100 is the same as that of Embodiment 1; the explanation thereof is omitted.

(Migration)

0220. Hereinafter, like in Embodiment 1, migration of the virtual node C (200-3) from the physical node C (100-3) to the physical node D (100-4) will be described with reference to FIGS. 14A, 14B, 15A, 15B, and 15C. Embodiment 2 is different in that the virtual node 200 is transferred between physical nodes 100 in different domains 15.
0221. FIGS. 14A and 14B are sequence diagrams illustrating a processing flow of migration in Embodiment 2 of this invention. FIGS. 15A, 15B, and 15C are explanatory diagrams illustrating states in the domains 15 during the migration in Embodiment 2 of this invention.
0222. The method of updating the path configuration information 1110 and the method of controlling the internal communication paths in the GRE converter 1100 are the same as those in Embodiment 1; the explanation of these methods is omitted.
0223. This embodiment is based on the assumption that the administrator who operates the domain management server A (300-1) enters a request for start of migration together with the identifier of the virtual node C (200-3) to be the subject of migration. This invention is not limited with respect to the timing to start the migration. For example, the migration may be started when the load on a VM 110 exceeds a threshold.
0224. In this embodiment, the domain management servers A (300-1) and B (300-2) cooperate to execute the migration, but the domain management server A (300-1) takes charge of the migration. The same processing can be applied to the case where the domain management server B (300-2) takes charge of the migration.
0225. The domain management server 300 creates an instruction for VM creation so that the VM D (110-4) to be created will have the same capability as the VM C (110-3). Specifically, the domain management server 300 acquires the configuration information for the VM C (110-3) from the virtualization management unit 932 in the server 900 running the VM C (110-3) to create the instruction for VM creation based on the acquired configuration information.
0226. In Embodiment 2, the sending of the instruction for VM creation to the physical node D (100-4) is different (Step S101).
0227. Specifically, the domain management server A (300-1) sends the instruction for VM creation to the domain management server B (300-2). The instruction for VM creation includes the identifier of the destination physical node D (100-4) as the address information.
0228. The domain management server B (300-2) transfers the instruction to the physical node D (100-4) in accordance with the address information in the received instruction.
0229. This embodiment is based on the assumption that the instruction for VM creation originally includes the identifier of the destination physical node D (100-4); however, this invention is not limited to this. For example, the domain management server A (300-1) may send an instruction for VM creation the same as the one in Embodiment 1, and the domain management server B (300-2) may determine the


physical node 100 to which the instruction is forwarded in consideration of information on the loads of the physical nodes 100 in the domain B (15-2).
0230. In Embodiment 2, the sending of the instructions for virtual link creation to the physical nodes B (100-2) and D (100-4) is different (Steps S103, S104, and S105).
0231. Specifically, the domain management server A (300-1) sends the instructions for virtual link creation to the domain management server B (300-2). Each instruction for virtual link creation includes the identifier of the destination physical node B (100-2) or D (100-4) as the address information. The domain management server A (300-1) can identify that the neighboring physical node 100 of the physical node D (100-4) is the physical node B (100-2) with reference to the mapping information 322.
0232. The domain management server B (300-2) transfers the received instructions for virtual link creation to the physical nodes B (100-2) and D (100-4) in accordance with the address information of the instructions.
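A minimal sketch of this relaying step (the message layout and helper names are assumptions for illustration, not the patent's format): the domain management server that receives an instruction from another domain forwards it to the physical node named in its address information.

    def forward_instruction(instruction, nodes_in_domain, send_to_node):
        """Relay an instruction received from another domain management server
        to the destination physical node 100 named in its address information."""
        destination = instruction["address"]  # e.g. "physical node D (100-4)"
        if destination not in nodes_in_domain:
            raise ValueError("destination is not managed by this domain")
        send_to_node(destination, instruction)

    # Example: the domain management server B (300-2) relays a VM creation
    # instruction addressed to the physical node D (100-4).
    instruction = {"type": "create_vm", "address": "physical node D (100-4)"}
    forward_instruction(
        instruction,
        {"physical node B (100-2)", "physical node D (100-4)"},
        lambda node, msg: print("send", msg["type"], "to", node),
    )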

0233. Upon receipt of the instructions for virtual link creation, the physical nodes A (100-1) and D (100-4) create GRE tunnels 600-5 and 600-6 (refer to FIG. 15A) for implementing the virtual link 250-1 based on the instructions for virtual link creation (Step S108).
0234. The method of creating the GRE tunnels 600-5 and 600-6 is basically the same as the creation method described in Embodiment 1. Since the slice is created to span a plurality of domains by federation in this embodiment, the GRE tunnels are also created between the domains. It should be noted that, depending on the domain and on the implementation scheme of the physical network connecting the domains, the link scheme may be switched to a different one (such as a VLAN) at the boundary between the domains.

0235. After the node management unit 931 of the physical node B (100-2) creates the virtual link 250, it sends a notice indicating that the computer resources have been secured to the domain management server B (300-2) (Step S111). The domain management server B (300-2) transfers this notice to the domain management server A (300-1) (refer to FIG. 15A).
0236. After the node management unit 931 of the physical node D (100-4) creates the VM D (110-4) and the virtual links 250, it sends a notice indicating that the computer resources have been secured to the domain management server B (300-2) (Step S112). The domain management server B (300-2) transfers this notice to the domain management server A (300-1) (refer to FIG. 15A).
0237. The domain management server B (300-2) may merge the notices of securement of computer resources from the physical nodes B (100-2) and D (100-4) and send the merged notice to the domain management server A (300-1).
0238. In Embodiment 2, the instruction for VM activation and the notice of completion of VM activation are transmitted via the domain management server B (300-2) (Steps S113 and S115). The instruction for virtual link switching to the physical node B (100-2) is also transmitted via the domain management server B (300-2) (Step S117), as shown in FIG. 15B.
0239. The notice of completion of link switching sent from the physical node B (100-2) is transmitted via the GRE tunnel 600 created on the physical link 400-2, but not via the domain management server B (300-2) (Step S121). The response to be sent to the physical node B (100-2) is also


transmitted via the GRE tunnel 600 created on the physical link 400-2, but not via the domain management server B (300-2) (Step S124).
0240. The notice of virtual link disconnection sent from the physical node B (100-2) is transmitted to the domain management server A (300-1) via the domain management server B (300-2) (Step S128). The instruction to release computer resources is also transmitted to the physical node B (100-2) via the domain management server B (300-2) (Step S132).
0241. The other processing is the same as that in Embodiment 1; accordingly, the explanation is omitted.
0242. Embodiment 2 enables migration of a virtual node 200 between domains 15 in a slice 20 spanning a plurality of domains 15 without interrupting the service being executed by the virtual node 200.

Embodiment 3

0243. Embodiment 2 generates many communications between the domain management servers 300 as shown in FIGS. 14A and 14B. Since these communications include authentications between the domains 15, the overhead increases. Moreover, the increase in the transmission of control commands raises the overhead in migration.
0244. In view of the above, Embodiment 3 accomplishes migration with less communication between the domain management servers 300. Specifically, the communication between the domain management servers is reduced by transmitting control packets via the physical links 400 between the physical nodes 100.
0245. Hereinafter, differences from Embodiment 2 are mainly described. The configurations of the physical network 10, the domain management servers 300, and the physical nodes 100 are the same as those in Embodiment 1; the explanation is omitted.

(Migration)

0246. Hereinafter, like in Embodiment 2, migration of the virtual node C (200-3) from the physical node C (100-3) in the domain A (15-1) to the physical node D (100-4) in the domain B (15-2) will be described with reference to FIGS. 16A and 16B.
0247. FIGS. 16A and 16B are sequence diagrams illustrating a processing flow of migration in Embodiment 3 of this invention.
0248. The domain management server A (300-1) notifies the domain management server B (300-2) of an instruction for VM creation and requirements for VM activation (Step S201).
0249. Since the virtual link 250 for the physical node D (100-4) has not been created yet at this time, the instruction for VM creation and the requirements for VM activation are transmitted to the physical node D (100-4) via the domain management server B (300-2). This is because the link to the added node has not been created yet.
0250. The requirements for VM activation represent the requirements to activate the VM 110 created on the physical node 100 of the migration destination. Upon receipt of the requirements for VM activation, the node management unit 931 of the physical node D (100-4) starts determining whether the requirements for activation are satisfied.
0251. This embodiment is based on the assumption that the requirements for VM activation are predetermined so as to activate the VM D (110-4) when notices of completion of virtual link creation are received from the neighboring physical nodes, namely, the physical nodes A (100-1) and B (100-2).
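An illustrative sketch of how such predetermined requirements for VM activation might be evaluated at the migration destination (the class and helper names are assumed, not taken from this disclosure): the VM is activated only after reports of virtual link creation have arrived from all neighboring physical nodes.

    class MigrationDestinationNode:
        """Assumed sketch of the node management unit 931 behavior at the
        physical node D (100-4) in Embodiment 3."""

        def __init__(self, expected_neighbors, activate_vm):
            self.pending = set(expected_neighbors)
            self.activate_vm = activate_vm

        def on_report_of_virtual_link_creation(self, sender):
            """Handle a report received via a GRE tunnel 600 (Steps S202, S203);
            activate the VM D (110-4) once all neighbors have reported (Step S114)."""
            self.pending.discard(sender)
            if not self.pending:
                self.activate_vm()

    node_d = MigrationDestinationNode(
        {"physical node A (100-1)", "physical node B (100-2)"},
        activate_vm=lambda: print("activate the VM D (110-4)"),
    )
    node_d.on_report_of_virtual_link_creation("physical node A (100-1)")
    node_d.on_report_of_virtual_link_creation("physical node B (100-2)")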


0252. In Embodiment 3, none of the node management units of the physical nodes A (100-1), B (100-2), and D (100-4) sends a notice of securement of computer resources to the domain management server A (300-1). Embodiment 3 is different in that the node management units of the physical nodes A (100-1) and B (100-2) send reports of virtual link creation to the physical node D (100-4) via the GRE tunnels 600 (Steps S202 and S203).
0253. Through these operations, the communication between the domain management servers 300, and between the domain management servers 300 and the physical nodes 100, required to activate the VM D (110-4) can be reduced. Accordingly, the overhead in the migration can be reduced.
0254. In Embodiment 3, when the node management unit 931 of the physical node D (100-4) receives the reports of virtual link creation from the neighboring physical nodes 100, it instructs the virtualization management unit 932 to activate the VM D (110-4) (Step S114).
0255. After activating the VM D (110-4), the node management unit 931 of the physical node D (100-4) sends notices of start of service to the neighboring physical nodes 100 (Steps S204 and S205). The notice of start of service is a notice indicating that the virtual node C (200-3) has started the service using the function provided by the VM D (110-4).
0256. Specifically, the notices of start of service are transmitted to the physical nodes A (100-1) and B (100-2) via the GRE tunnels 600.
0257. Upon receipt of the notices of start of service, the physical nodes A (100-1) and B (100-2) switch the virtual links 250 (Steps S118 and S119).
0258. Embodiment 3 is different in that the physical nodes A (100-1) and B (100-2) switch the virtual links 250 in response to the notices of start of service sent from the physical node D (100-4). In other words, transmission of the notice of completion of VM activation and the instructions for virtual link switching is replaced by transmission of the notices of start of service.
0259. Although Embodiment 2 requires communication between the physical nodes 100 and the domain management servers 300 to switch the virtual links 250, Embodiment 3 encourages direct communication between the physical nodes, so that the communication via the domain management servers 300 can be reduced.
0260. The other processing is the same as that in Embodiment 2; the explanation is omitted.
0261. Embodiment 3 can reduce the communication with the domain management servers 300 by using communication via the links (GRE tunnels 600) connecting the physical nodes 100. Consequently, the overhead in migration can be reduced.
0262. The variety of software used in the embodiments can be stored in various storage media, such as electro-magnetic, electronic, and optical types of non-transitory storage media, or can be downloaded to computers via a communication network such as the Internet.
0263. The embodiments have described examples using control by software, but part of the control can be implemented by hardware.
0264. As set forth above, this invention has been described in detail with reference to the accompanying drawings, but this invention is not limited to these specific configurations


but includes various modifications and equivalent configurations within the scope of the appended claims.
What is claimed is:
1. A network system including physical nodes having computer resources,
the physical nodes being connected to one another via physical links,
the network system providing a virtual network system including virtual nodes allocated computer resources of the physical nodes to execute predetermined service, and
the network system comprising:
a network management unit for managing the virtual nodes;
at least one node management unit for managing the physical nodes; and
at least one link management unit for managing connections of the physical links connecting the physical nodes and connections of virtual links connecting the virtual nodes,
wherein the network management unit holds mapping information indicating correspondence relations between the virtual nodes and the physical nodes allocating the computer resources to the virtual nodes and virtual node management information for managing the virtual links,
wherein the at least one link management unit holds path configuration information for managing connection states of the virtual links, and
wherein, in a case where the network system performs migration of a first virtual node for executing service using computer resources of a first physical node to a second physical node,
the network management unit sends the second physical node an instruction to secure computer resources to be allocated to the first virtual node;
the network management unit identifies neighboring physical nodes allocating computer resources to neighboring virtual nodes connected to the first virtual node via virtual links in the virtual network;
the network management unit sends the at least one link management unit an instruction to create communication paths for implementing virtual links for connecting the first virtual node and the neighboring virtual nodes on physical links connecting the second physical node and the neighboring physical nodes;
the at least one link management unit creates the communication paths for connecting the second physical node and the neighboring physical nodes on the physical links based on the instruction to create the communication paths;
the at least one node management unit starts the service executed by the first virtual node using the computer resources secured by the second physical node;
the network management unit sends the at least one link management unit an instruction to switch the virtual links; and
the at least one link management unit switches communication paths to the created communication paths for switching the virtual links.

2. The network system according to claim 1,
wherein the at least one link management unit controls data transmission and reception between virtual nodes based on the path configuration information,
wherein, in the creating the communication paths on the physical links connecting the second physical node and the neighboring physical nodes,
the at least one link management unit creates the communication paths configured so as to permit data transmission from the first virtual node allocated the computer resources of the second physical node to the neighboring virtual nodes and prohibit data transmission from the neighboring virtual nodes to the first virtual node allocated the computer resources of the second physical node, and adds configuration information associating identification information on the created communication paths with information indicating whether to permit data transmission to the path configuration information, and
wherein, upon receipt of the instruction to switch the virtual links, the at least one link management unit updates the configuration information added to the path configuration information so as to permit data transmission from the neighboring virtual nodes to the first virtual node allocated the computer resources of the second physical node.

3. The network system according to claim 2,
wherein the network management unit sends the at least one node management unit a requirement for stopping the service executed by the first virtual node allocated the computer resources of the first physical node,
wherein the at least one node management unit determines whether the received requirement for stopping the service is satisfied, and
wherein, when it is determined that the received requirement for stopping the service is satisfied, the at least one node management unit stops the service executed by the first virtual node allocated the computer resources of the first physical node.

4. The network system according to claim 3, wherein the requirement for stopping the service is reception of notices of completion of the switching of the virtual links from the neighboring physical nodes.

5. The network system according to claim 4, wherein the at least one node management unit releases the computer resources of the first physical node allocated to the first virtual node after stopping the service executed by the first virtual node allocated the computer resources of the first physical node.

6. The network system according to claim 2,
wherein each of the physical nodes includes the node management unit and the link management unit,
wherein the link management unit of the second physical node and the link management units of the neighboring physical nodes create the communication paths,
wherein the link management unit of the second physical node adds first configuration information to permit data transmission and reception via the communication paths to the path configuration information,
wherein the link management units of the neighboring nodes add second configuration information to permit data reception via the communication paths and prohibit data transmission via the communication paths to the path configuration information,
wherein the node management units of the neighboring physical nodes send the second physical node first control information indicating completion of the creating the communication paths via the created communication paths,
wherein, after receipt of the first control information, the node management unit of the second physical node allocates the secured computer resources to the first virtual node and starts the service executed by the first virtual node,
wherein the node management unit of the second physical node sends the neighboring physical nodes second control information indicating the start of the service executed by the first virtual node via the communication paths, and
wherein, after receipt of the second control information, the link management units of the neighboring physical nodes change the second configuration information so as to permit data transmission via the communication paths to switch the virtual links.

7. A method for migration of a virtual node included in a virtual network provided by a network system including physical nodes having computer resources,
the physical nodes being connected to one another via physical links,
the virtual network including virtual nodes allocated computer resources of the physical nodes to execute predetermined service,
the network system including:
a network management unit for managing the virtual nodes;
at least one node management unit for managing the physical nodes; and
at least one link management unit for managing connections of physical links connecting the physical nodes and connections of virtual links connecting the virtual nodes,
the network management unit holding mapping information indicating correspondence relations between the virtual nodes and the physical nodes allocating the computer resources to the virtual nodes and virtual node management information for managing the virtual links,
the at least one link management unit holding path configuration information for managing connection states of the virtual links,
the method, in a case of migration of a first virtual node for executing service using computer resources of a first physical node to a second physical node, comprising:
a first step of sending, by the network management unit, the second physical node an instruction to secure computer resources to be allocated to the first virtual node;
a second step of identifying, by the network management unit, neighboring physical nodes allocating computer resources to neighboring virtual nodes connected to the first virtual node via virtual links in the virtual network;
a third step of sending, by the network management unit, the at least one link management unit an instruction to create communication paths for implementing virtual links for connecting the first virtual node and the neighboring virtual nodes on physical links connecting the second physical node and the neighboring physical nodes;
a fourth step of creating, by the at least one link management unit, the communication paths for connecting the second physical node and the neighboring physical nodes on the physical links based on the instruction to create the communication paths;
a fifth step of starting, by the at least one node management unit, the service executed by the first virtual node using the computer resources secured by the second physical node;
a sixth step of sending, by the network management unit, the at least one link management unit an instruction to switch the virtual links; and
a seventh step of switching, by the at least one link management unit, communication paths to the created communication paths for switching the virtual links.

8. The method for migration of a virtual node according to claim 7,
wherein the at least one link management unit controls data transmission and reception between virtual nodes based on the path configuration information,
wherein the fourth step includes:
a step of creating the communication paths configured so as to permit data transmission from the first virtual node allocated the computer resources of the second physical node to the neighboring virtual nodes and prohibit data transmission from the neighboring virtual nodes to the first virtual node allocated the computer resources of the second physical node; and
a step of adding configuration information associating identification information on the created communication paths with information indicating whether to permit data transmission to the path configuration information, and
wherein the seventh step includes a step of updating, upon receipt of the instruction to switch virtual links, the configuration information added to the path configuration information so as to permit data transmission from the neighboring virtual nodes to the first virtual node allocated the computer resources of the second physical node.

9. The method for migration of a virtual node according to claim 8, further comprising:
a step of sending, by the network management unit, the node management unit a requirement for stopping the service executed by the first virtual node allocated the computer resources of the first physical node,
a step of determining, by the node management unit, whether the received requirement for stopping the service is satisfied, and
a step of stopping, by the node management unit, the service executed by the first virtual node allocated the computer resources of the first physical node in a case of determination that the received requirement for stopping the service is satisfied.

10. The method for migration of a virtual node according to claim 9, wherein the requirement for stopping the service is reception of notices of completion of the switching of virtual links from the neighboring physical nodes.

11. The method for migration of a virtual node according to claim 10, further comprising a step of releasing, by the node management unit, the computer resources of the first physical node allocated to the first virtual node after stopping the service executed by the first virtual node allocated the computer resources of the first physical node.

12. The method for migration of a virtual node according to claim 8,


wherein each of the physical nodes includes the node management unit and the link management unit,
wherein the fourth step includes:
a step of creating, by the link management unit of the second physical node and the link management units of the neighboring physical nodes, the communication paths;
a step of adding, by the link management unit of the second physical node, first configuration information to permit data transmission and reception via the communication paths to the path configuration information;
a step of adding, by the link management units of the neighboring nodes, second configuration information to permit data reception via the communication paths and prohibit data transmission via the communication paths to the path configuration information; and
a step of sending, by the node management units of the neighboring physical nodes, the second physical node first control information indicating completion of the creating the communication paths via the created communication paths,
wherein the fifth step includes:
a step of allocating, by the node management unit of the second physical node which has received the first control information, the secured computer resources to the first virtual node to start the service executed by the first virtual node,
a step of sending, by the node management unit of the second physical node, the neighboring physical nodes second control information indicating the start of the service executed by the first virtual node via the communication paths, and
wherein the seventh step includes a step of changing, by the link management units of the neighboring physical nodes which have received the second control information, the second configuration information so as to permit data transmission via the communication paths to switch the virtual links.

* * * * *