Enterprise Private Cloud OpenStack Deployment in Minutes

Introduction

This lab will take you through the basics of how to configure OpenStack on Oracle Solaris 11. OpenStack is a popular open source cloud infrastructure that has been integrated into Oracle Solaris 11.2. OpenStack includes a number of services that help you manage the compute, storage and network resources in your data center through a central web-based dashboard. These services can be summarized as follows:

Service Name   Description
Nova           Compute virtualization
Cinder         Block storage
Neutron        Software Defined Networking (SDN)
Keystone       Authentication between cloud services
Glance         Image management and deployment
Horizon        Web-based dashboard

For this lab and the time allocated to us, we will simply set up OpenStack as a single-node instance. In a typical enterprise deployment, these services would be spread across multiple nodes with load balancing and other high availability capabilities.

With the Oracle Solaris 11.2 release, a new archive format was introduced called Unified Archives. Unified Archives provide easy golden-image style deployment, allowing administrators to quickly snapshot a running system and deploy it as clones within a cloud environment.
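As a minimal sketch of that snapshot-and-clone workflow (the archive path and zone name here are hypothetical; archiveadm(1M) and zoneadm(1M) are the Oracle Solaris 11.2 tools involved):

# archiveadm create /var/tmp/golden.uar               # snapshot the running system
# zoneadm -z newzone install -a /var/tmp/golden.uar   # deploy the archive as a zone clone
                                                      # (assumes newzone was already defined with zonecfg)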

Using this technology, an OpenStack-based Unified Archive was created and made available, which makes deploying this complex software on a single node easy:

http://www.oracle.com/technetwork/server-storage/solaris11/downloads/unified-archives-2245488.html

However, for this lab we will choose a manual route to give you more experience with the OpenStack services and how they are configured.

Lab Setup

This lab has the following setup:

- Oracle Solaris 11.2 (root password is solaris11)
- OpenStack configuration script located in /root/hol_single_host.py

1. Installing the OpenStack packages

First we will install the OpenStack packages from the IPS package repository as follows:

# pkg install openstack rabbitmq rad-evs-controller
           Packages to install: 182
            Services to change:   3
       Create boot environment:  No
Create backup boot environment: Yes

DOWNLOAD                             PKGS         FILES    XFER (MB)    SPEED
Completed                         182/182   23198/23198  116.4/116.4   806k/s

PHASE                                          ITEMS
Installing new actions                   26599/26599
Updating package state database                 Done
Updating package cache                           0/0
Updating image state                            Done
Creating fast lookup database                   Done
Updating package cache                           1/1

Now that we have successfully installed these packages, we will need to restart the rad:local SMF service. RAD (the Remote Administration Daemon) provides programmatic access to the administrative interfaces on Oracle Solaris 11 that the Oracle Solaris plugins for OpenStack rely on.

# svcadm restart rad:local

We will also need to enable the RabbitMQ service. RabbitMQ is a messaging system that enables communication between the core OpenStack services.

# svcadm enable rabbitmq
# svcs rabbitmq
STATE          STIME    FMRI
online         23:58:04 svc:/application/rabbitmq:default

2. Configuring Keystone

Keystone provides authentication between the core OpenStack services. It will be the first service that we configure and enable. OpenStack uses a series of configuration files with defined sections that include key/value pairs. For configuration of the OpenStack services we will use a script for convenience; in a multi-node environment you would typically configure these files manually to suit your needs.
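As a purely illustrative sketch of that file format (the section and key names below are typical of OpenStack configuration files of this era, but they are assumptions here, not necessarily the exact values hol_single_host.py writes; section names also vary across OpenStack releases):

[DEFAULT]
# bootstrap token for administrative access; note it matches the
# SERVICE_TOKEN=ADMIN value we export later in this step
admin_token = ADMIN

[database]
# where Keystone keeps its service catalog and user database
connection = sqlite:////var/lib/keystone/keystone.sqlite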

# ./hol_single_host.py keystone
configuring keystone

Now enable the Keystone service:

# svcadm enable -rs keystone
# svcs keystone
STATE          STIME    FMRI
online         23:59:31 svc:/application/openstack/keystone:default

In order to allow for successful authentication, we will need to populate the Keystone database with a number of users across different tenants that reflect the core OpenStack services. In our case we will use sample data provided by a script; in a production deployment you would associate Keystone with a directory service such as LDAP or Active Directory.

User      Tenant    Password
admin     demo      secrete
nova      service   nova
cinder    service   cinder
neutron   service   neutron
glance    service   glance

Let's run this script now:

# /usr/demo/openstack/keystone/sample_data.sh
+-------------+---------------------------------------+
| Property    | Value                                 |
+-------------+---------------------------------------+
| adminurl    | http://localhost:$(admin_port)s/v2.0  |
| id          | cdd38de578ffe450a4ebd17e6345ed72      |
| internalurl | http://localhost:$(public_port)s/v2.0 |
| publicurl   | http://localhost:$(public_port)s/v2.0 |
| region      | RegionOne                             |
| service_id  | db9909b96b916b6ed04a818c6f407df0      |
+-------------+---------------------------------------+
+-------------+------------------------------------------------------+
| Property    | Value                                                |
+-------------+------------------------------------------------------+
| adminurl    | http://localhost:$(compute_port)s/v1.1/$(tenant_id)s |
| id          | 48d62b0291f44c258f0bef5fe72024b9                     |
| internalurl | http://localhost:$(compute_port)s/v1.1/$(tenant_id)s |
| publicurl   | http://localhost:$(compute_port)s/v1.1/$(tenant_id)s |
| region      | RegionOne                                            |
| service_id  | c38ced19a4894a5bc61cbb77e9868bbf                     |
+-------------+------------------------------------------------------+
+-------------+----------------------------------------+
| Property    | Value                                  |
+-------------+----------------------------------------+
| adminurl    | http://localhost:8776/v1/$(tenant_id)s |
| id          | 975e3db88eb56836e779e1b0e8d2dd21       |
| internalurl | http://localhost:8776/v1/$(tenant_id)s |
| publicurl   | http://localhost:8776/v1/$(tenant_id)s |
| region      | RegionOne                              |
| service_id  | 39daf3d31c0348f0ae32b04a2ed3dbc4       |
+-------------+----------------------------------------+
+-------------+----------------------------------+
| Property    | Value                            |
+-------------+----------------------------------+
| adminurl    | http://localhost:9292            |
| id          | a77c1ed7d1a44751afeed55e2e0bbc99 |
| internalurl | http://localhost:9292            |
| publicurl   | http://localhost:9292            |
| region      | RegionOne                        |
| service_id  | 903f1738fc066deed8a8c4a38925d1e5 |
+-------------+----------------------------------+
+-------------+--------------------------------------+
| Property    | Value                                |
+-------------+--------------------------------------+
| adminurl    | http://localhost:8773/services/Admin |
| id          | 86d0e7f081d7e512b6638534f391b6ee     |
| internalurl | http://localhost:8773/services/Cloud |
| publicurl   | http://localhost:8773/services/Cloud |
| region      | RegionOne                            |
| service_id  | 86b96889f88be522abf19d7ff8e7db18     |
+-------------+--------------------------------------+
+-------------+---------------------------------------------+
| Property    | Value                                       |
+-------------+---------------------------------------------+
| adminurl    | http://localhost:8080/v1                    |
| id          | 756642548112e822be94a5da3a73588e            |
| internalurl | http://localhost:8080/v1/AUTH_$(tenant_id)s |
| publicurl   | http://localhost:8080/v1/AUTH_$(tenant_id)s |
| region      | RegionOne                                   |
| service_id  | 6d22986ee9c76880e0f0c0da4aa8fe0f            |
+-------------+---------------------------------------------+
+-------------+----------------------------------+
| Property    | Value                            |
+-------------+----------------------------------+
| adminurl    | http://localhost:9696/           |
| id          | bbe5bf886bff4c089c0dbc42a65fa521 |
| internalurl | http://localhost:9696/           |
| publicurl   | http://localhost:9696/           |
| region      | RegionOne                        |
| service_id  | f5c6aeb5a53bceb6f022b85e0b63956f |
+-------------+----------------------------------+

Let's verify this result by setting the environment variables SERVICE_ENDPOINT and SERVICE_TOKEN, and running the keystone client-side command:

# export SERVICE_ENDPOINT=http://localhost:35357/v2.0/
# export SERVICE_TOKEN=ADMIN
# keystone user-list
+----------------------------------+---------+---------+-------+
| id                               | name    | enabled | email |
+----------------------------------+---------+---------+-------+
| 5bdefb773d3c61fed79d96c5540f9766 | admin   | True    |       |
| 8b54a70c235ee1179f15a198a70be099 | cinder  | True    |       |
| 7949ac987dd5c514e778ba3932586109 | ec2     | True    |       |
| d79d19dc2945ed758747c2e2d8ab7e89 | glance  | True    |       |
| ac11eb0e1aed68f2c45085797c8bade5 | neutron | True    |       |
| d9e6d0ddfbaf4ca6a6ee9bb951877d3d | nova    | True    |       |
| eb3237eea75ae619aba6cf75a49f798f | swift   | True    |       |
+----------------------------------+---------+---------+-------+
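Incidentally, the same kind of accounts can be created by hand with the keystone client rather than the sample-data script. A hedged sketch (testuser and its password are hypothetical; the tenant id would come from the keystone tenant-list output):

# keystone tenant-list
# keystone user-create --name testuser --tenant-id <demo_tenant_id> --pass testpass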

3. Configuring Glance

Glance is a service that provides image management in OpenStack. It is responsible for storing the array of images that you use to install onto the compute nodes when you create new VM instances. It is comprised of a few different services that we will need to configure first. For convenience we have provided a script to do this quickly:

# ./hol_single_host.py glance
configuring glance

Let's now enable the Glance services:

# svcadm enable -rs glance-api glance-db glance-registry glance-scrubber

We can check that this configuration is correct with the following:

# export OS_AUTH_URL=http://localhost:5000/v2.0/
# export OS_PASSWORD=glance
# export OS_USERNAME=glance
# export OS_TENANT_NAME=service
# glance image-list
+----+------+-------------+------------------+------+--------+
| ID | Name | Disk Format | Container Format | Size | Status |
+----+------+-------------+------------------+------+--------+
+----+------+-------------+------------------+------+--------+

As we can see from the above, we have successfully contacted the image registry, but there are no images currently loaded into Glance. We will populate the Glance database with an image in the second part of this lab.
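As a preview of that second part, an image would be loaded with glance image-create. A sketch only, assuming a Unified Archive saved at a hypothetical path; the right disk and container formats for Solaris archives may differ from what is shown here:

# glance image-create --name "Solaris 11.2" \
      --container-format bare --disk-format raw \
      --file /var/tmp/golden.uar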

4. Configuring Nova

Nova is the compute service in OpenStack, responsible for scheduling and deploying new instances when required. Like Glance, it is comprised of several different services that need to be configured and enabled. We will use our script again to do this quickly:

# ./hol_single_host.py nova
configuring nova

Nova does require a little more care in terms of the start order of its services, so we will first enable the conductor service (which essentially proxies access to the Nova database from the compute nodes), and then the rest of the services:

# svcadm enable -rs nova-conductor
# svcadm enable -rs nova-api-ec2 nova-api-osapi-compute nova-scheduler nova-cert nova-compute

Let's check that Nova is functioning correctly by setting up some environment variables and viewing the endpoints:

# export OS_AUTH_URL=http://localhost:5000/v2.0/
# export OS_PASSWORD=nova
# export OS_USERNAME=nova
# export OS_TENANT_NAME=service
# nova endpoints
+-------------+-------------------------------------------------------------+
| nova        | Value                                                       |
+-------------+-------------------------------------------------------------+
| adminURL    | http://localhost:8774/v1.1/f17341f0a2a24ec9ec5f9ca497e8c0cc |
| id          | 08eb495c11864f67d4a0e58c8ce53e8b                            |
| internalURL | http://localhost:8774/v1.1/f17341f0a2a24ec9ec5f9ca497e8c0cc |
| publicURL   | http://localhost:8774/v1.1/f17341f0a2a24ec9ec5f9ca497e8c0cc |
| region      | RegionOne                                                   |
| serviceName | nova                                                        |
+-------------+-------------------------------------------------------------+
+-------------+----------------------------------+
| neutron     | Value                            |
+-------------+----------------------------------+
| adminURL    | http://localhost:9696/           |
| id          | 96e693c539c0ca3ee5f0c04e958c33fe |
| internalURL | http://localhost:9696/           |
| publicURL   | http://localhost:9696/           |
| region      | RegionOne                        |
+-------------+----------------------------------+
+-------------+----------------------------------+
| glance      | Value                            |
+-------------+----------------------------------+
| adminURL    | http://localhost:9292            |
| id          | 121ad7a65c0fce83834583b2c0c7c3fb |
| internalURL | http://localhost:9292            |
| publicURL   | http://localhost:9292            |
| region      | RegionOne                        |
+-------------+----------------------------------+
+-------------+-----------------------------------------------------------+
| cinder      | Value                                                     |
+-------------+-----------------------------------------------------------+
| adminURL    | http://localhost:8776/v1/f17341f0a2a24ec9ec5f9ca497e8c0cc |
| id          | ee83dab8b39d4d0ad480a75cadb965dc                          |
| internalURL | http://localhost:8776/v1/f17341f0a2a24ec9ec5f9ca497e8c0cc |
| publicURL   | http://localhost:8776/v1/f17341f0a2a24ec9ec5f9ca497e8c0cc |
| region      | RegionOne                                                 |
+-------------+-----------------------------------------------------------+
+-------------+--------------------------------------+
| ec2         | Value                                |
+-------------+--------------------------------------+
| adminURL    | http://localhost:8773/services/Admin |
| id          | 1558b719141ae2fed54ff0bfe80cb646     |
| internalURL | http://localhost:8773/services/Cloud |
| publicURL   | http://localhost:8773/services/Cloud |
| region      | RegionOne                            |
+-------------+--------------------------------------+
+-------------+----------------------------------------------------------------+
| swift       | Value                                                          |
+-------------+----------------------------------------------------------------+
| adminURL    | http://localhost:8080/v1                                       |
| id          | 51f1908de52f68af984c849985924e0b                               |
| internalURL | http://localhost:8080/v1/AUTH_f17341f0a2a24ec9ec5f9ca497e8c0cc |
| publicURL   | http://localhost:8080/v1/AUTH_f17341f0a2a24ec9ec5f9ca497e8c0cc |
| region      | RegionOne                                                      |
+-------------+----------------------------------------------------------------+
+-------------+----------------------------------+
| keystone    | Value                            |
+-------------+----------------------------------+
| adminURL    | http://localhost:35357/v2.0      |
| id          | 371c73559bd842d6b961d021eeeaa2e5 |
| internalURL | http://localhost:5000/v2.0       |
| publicURL   | http://localhost:5000/v2.0       |
| region      | RegionOne                        |
+-------------+----------------------------------+

It looks to be functioning properly, so we can continue.

5. Configuring Cinder

Cinder provides block storage in OpenStack: typically the storage that you would attach to compute instances. As before, we will need to configure and enable several services:

# ./hol_single_host.py cinder
configuring cinder
# svcadm enable -rs cinder-api cinder-db cinder-scheduler cinder-volume:setup cinder-volume:default

Again, let's double-check that everything is working OK:

# export OS_AUTH_URL=http://localhost:5000/v2.0/
# export OS_PASSWORD=cinder
# export OS_USERNAME=cinder
# export OS_TENANT_NAME=service
# cinder list
+----+--------+--------------+------+-------------+----------+-------------+
| ID | Status | Display Name | Size | Volume Type | Bootable | Attached to |
+----+--------+--------------+------+-------------+----------+-------------+
+----+--------+--------------+------+-------------+----------+-------------+

This looks correct, as we have not allocated any block storage to date.
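If you want to see the volume workflow end to end at this point, a small test volume could be created and then listed (a sketch; the display name is hypothetical, and we otherwise leave volume creation to part two of the lab):

# cinder create --display-name testvol 1   # create a 1 GB volume
# cinder list                              # it should now appear in the table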

6. Configuring Neutron

Neutron provides networking capabilities in OpenStack, enabling VMs to talk to each other within the same tenants and subnets, and directly to the outside world. This is achieved using a number of different services. Behind the Oracle Solaris implementation is the Elastic Virtual Switch (EVS), which provides the necessary plumbing to span multiple compute nodes and route traffic appropriately. We will need to do some configuration outside OpenStack to provide a level of trust between EVS and Neutron using SSH keys and RAD. Let's first generate SSH keys for the evsuser, neutron and root users:

# su - evsuser -c "ssh-keygen -N '' -f /var/user/evsuser/.ssh/id_rsa -t rsa"
Generating public/private rsa key pair.
Your identification has been saved in /var/user/evsuser/.ssh/id_rsa.
Your public key has been saved in /var/user/evsuser/.ssh/id_rsa.pub.
The key fingerprint is:
13:cb:06:c4:88:5e:10:7d:84:8b:c8:38:30:83:89:9f evsuser@solaris
# su - neutron -c "ssh-keygen -N '' -f /var/lib/neutron/.ssh/id_rsa -t rsa"
Generating public/private rsa key pair.
Created directory '/var/lib/neutron/.ssh'.
Your identification has been saved in /var/lib/neutron/.ssh/id_rsa.
Your public key has been saved in /var/lib/neutron/.ssh/id_rsa.pub.
The key fingerprint is:
13:d6:ef:22:4b:f0:cf:9f:14:e3:ee:50:05:1a:c7:a5 neutron@solaris
# ssh-keygen -N '' -f /root/.ssh/id_rsa -t rsa
Generating public/private rsa key pair.
Created directory '/root/.ssh'.
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
c1:6f:a5:38:fc:11:85:16:ad:1d:ad:cd:2f:38:ce:26 root@solaris

We then need to take the various SSH public keys and include them in authorized_keys to provide passwordless access between these services:

# cat /var/user/evsuser/.ssh/id_rsa.pub /var/lib/neutron/.ssh/id_rsa.pub /root/.ssh/id_rsa.pub >> /var/user/evsuser/.ssh/authorized_keys

Finally, we need to quickly log in as each of these users and answer the one-time prompt:

# su - evsuser -c "ssh evsuser@localhost true"
The authenticity of host 'localhost (::1)' can't be established.
RSA key fingerprint is 36:9b:74:4b:e9:57:11:70:bc:71:d6:4d:77:b4:74:b3.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'localhost' (RSA) to the list of known hosts.
# su - neutron -c "ssh evsuser@localhost true"
The authenticity of host 'localhost (::1)' can't be established.
RSA key fingerprint is 36:9b:74:4b:e9:57:11:70:bc:71:d6:4d:77:b4:74:b3.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'localhost' (RSA) to the list of known hosts.
# ssh evsuser@localhost true
The authenticity of host 'localhost (::1)' can't be established.
RSA key fingerprint is 36:9b:74:4b:e9:57:11:70:bc:71:d6:4d:77:b4:74:b3.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'localhost' (RSA) to the list of known hosts.

EVS uses the concept of a controller to manage the elastic virtual switch across the resources in the data center. We need to point this configuration at our single host and initialize the EVS database:

# evsadm set-prop -p controller=ssh://evsuser@localhost
# evsadm
# evsadm show-prop
PROPERTY            PERM VALUE                    DEFAULT
controller          rw   ssh://evsuser@localhost  --

For this setup, we will use VXLANs to appropriately tag our network traffic and provide isolation. We can do this configuration as follows:

# evsadm set-controlprop -p l2-type=vxlan
# evsadm set-controlprop -p vxlan-range=200-300

We will also need to set the uplink port for the controller to be net0 (the only NIC available to us):

# evsadm set-controlprop -p uplink-port=net0
# evsadm show-controlprop
PROPERTY            PERM VALUE      DEFAULT    HOST
l2-type             rw   vxlan      vlan       --
uplink-port         rw   net0       --         --
vlan-range          rw   --         --         --
vlan-range-avail    r-   --         --         --
vxlan-addr          rw   0.0.0.0    0.0.0.0    --
vxlan-ipvers        rw   v4         v4         --
vxlan-mgroup        rw   0.0.0.0    0.0.0.0    --
vxlan-range         rw   200-300    --         --
vxlan-range-avail   r-   200-300    --         --

Now that we have done the basic configuration of EVS, we can go ahead and configure Neutron to use it. We will use the script for convenience:

# ./hol_single_host.py neutron
configuring neutron
# svcadm enable -rs neutron-server neutron-dhcp-agent

Let's test Neutron and make sure things are working:

# export OS_AUTH_URL=http://localhost:5000/v2.0/
# export OS_PASSWORD=neutron
# export OS_USERNAME=neutron
# export OS_TENANT_NAME=service
# neutron net-list

We see an empty result. This is expected, since we haven't created any networks yet.

7. Configuring Horizon

Finally we can configure Horizon, which is the web dashboard for OpenStack, providing self-service capabilities in a multi-tenant environment. Let's go ahead and do that:

# ./hol_single_host.py horizon
configuring horizon
# cp /etc/apache2/2.2/samples-conf.d/openstack-dashboard-http.conf /etc/apache2/2.2/conf.d
# svcadm enable apache22
# svcs apache22
STATE          STIME    FMRI
online          1:53:42 svc:/network/http:apache22
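Before leaving the command line, you can sanity-check that Apache is serving the dashboard (a hedged aside: curl ships with Oracle Solaris 11, and depending on the Horizon version you may see a redirect to the login page rather than a 200):

# curl -sI http://localhost/horizon | head -1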

8. Logging into Horizon

Within the host environment, open up a browser and navigate to http://localhost/horizon. Use admin/secrete as the user/password combination.

After signing in you will see the main dashboard for the OpenStack administrator. On the left part of the screen you will see two tabs: one shows the administration panel, the other shows the project panel, which gives the list of projects that the current user is a member of. We can think of projects as a way to provide organizational groupings. Instead of launching an instance as an administrator, let's go and create a new user under the Admin tab. Select the Users menu entry to display the following screen.

We can see that there are a few users already defined; these users either represent the administrator or are for the various OpenStack services. Let's go ahead and click on the Create User button and fill in some details for this user. We will include them in the demo project for now, but we could equally have created a new project if we wanted to.

We will use this user to provision an instance in the next part of this OpenStack lab.

9. More Information

Download Oracle Solaris 11
http://www.oracle.com/technetwork/server-storage/solaris11/downloads/

Download OpenStack Unified Archive
http://www.oracle.com/technetwork/server-storage/solaris11/downloads/unified-archives-2245488.html

Oracle OpenStack on Oracle Solaris Technology Page
http://www.oracle.com/technetwork/server-storage/solaris11/technologies/openstack-2135773.html

Getting Started with OpenStack on Oracle Solaris
http://www.oracle.com/technetwork/articles/servers-storage-admin/getting-started-openstack-os11-2-2195380.html