Moving your physical Red Hat Enterprise Linux servers to Azure or AWS Dan Kinkead Technical Integrated Support Manager May 2018



Moving your physical Red Hat Enterprise Linux servers to Azure or AWS

Dan Kinkead, Technical Integrated Support Manager, May 2018


[email protected]

Overview

● Before the migration
● Preparing the server for migration to the cloud provider
● Physical2Virtual (P2V) process
● Convert and upload the virtual machine (VM) to AWS
● Convert and upload the virtual machine (VM) to Azure
● Demo of image creation and deployment in Azure

Before the Migration

Moving to the cloud: but how to get there?

● In-place
● Containerize apps
● Migrate server


MAKE A BACKUP!!!


Register for Red Hat Cloud Access

Before moving the server to the cloud, migrate the subscriptions that will be used in the cloud to the Cloud Access program

● Introduction to the Red Hat Cloud Access program
  ○ Red Hat Cloud Access and migrating your subscriptions between clouds

● Go to the enrollment form to migrate your subscriptions
  ○ You will need your Red Hat account number as well as your cloud provider’s account number

● For AWS, you will also get access to the Red Hat Gold Images in your AWS account
  ○ How to Locate Red Hat Cloud Access Gold Images on AWS EC2

Set Up the Conversion Server

● For the P2V conversion to be supported by Red Hat, the conversion server must be a Red Hat Enterprise Linux 7.3 or later system

● Needs to allow SSH access for a root or sudo user
● Needs to have ‘virt-v2v’ installed
  ○ ‘yum install virt-v2v’
● Needs enough storage space for the image
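Checking that the conversion server actually has room for the incoming image can be scripted; this is a minimal sketch, and the /var/tmp image directory is an assumption, not something specified in the deck:

```shell
# Sketch: report free space on the directory that will hold converted images.
# /var/tmp is an assumed location; adjust for your conversion server layout.
IMAGE_DIR=/var/tmp
avail_bytes=$(df --output=avail -B1 "$IMAGE_DIR" | tail -n 1 | tr -d ' ')
echo "Available in $IMAGE_DIR: $avail_bytes bytes"
```

Compare the reported figure against the physical machine's used disk space before starting a conversion.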

Prepare Physical Machine for AWS

Prepare the Physical Machine for AWS

● Do not use LVM or encryption on the OS disk. Only the following file systems are supported: EXT2, EXT3, EXT4, Btrfs, JFS, or XFS
● SSH needs to be installed and enabled
● The software firewall needs to allow inbound SSH
● Install the AWS CLI
  ○ ‘curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip"’
  ○ ‘unzip awscli-bundle.zip’
  ○ ‘./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws’
● Set networking to DHCP and disable IPv6
● Reboot the system with the P2V bootable media

Prepare Physical Machine for Azure

● Do not use LVM or encryption on the OS disk. Only the following file systems are supported: EXT3, EXT4, or XFS
● Azure supports Red Hat Enterprise Linux 6.7+ and Red Hat Enterprise Linux 7.1+
● SSH needs to be installed and enabled
● The software firewall needs to allow inbound SSH
● Add the following to /etc/dracut.conf:

add_drivers+=" hv_vmbus "
add_drivers+=" hv_netvsc "
add_drivers+=" hv_storvsc "

Prepare the Physical Machine for Azure

● Regenerate the initramfs image
  ○ ‘dracut -f -v’

● Enable the extras repos and install the Windows Azure Linux Agent (WALinuxAgent)
  ○ ‘subscription-manager repos --enable rhel-7-server-extras-rpms’
  ○ ‘yum install WALinuxAgent’

● Enable the waagent service○ ‘systemctl enable waagent.service’

● Edit the /etc/waagent.conf file:

Provisioning.DeleteRootPassword=n
ResourceDisk.EnableSwap=y
ResourceDisk.SwapSizeMB=<size>

Prepare the Physical Machine for Azure - Cont’d

● Edit grub to enable the following kernel options:

earlyprintk=ttyS0
console=ttyS0
rootdelay=300
numa=off <only needed in RHEL 6>

● Remove (if present):
  ○ rhgb or quiet
● Rebuild grub (for RHEL 7)
  ○ ‘grub2-mkconfig -o /boot/grub2/grub.cfg’
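The waagent.conf edits above can be scripted with sed; this is a sketch against a scratch copy, assuming the stock file carries those keys with default values (a real run would target /etc/waagent.conf, and the 2048 MB swap size is a placeholder for <size>):

```shell
# Sketch: apply the waagent.conf settings from the slide to a scratch copy.
# The temp file and the assumed default values are for illustration only.
cfg=$(mktemp)
printf '%s\n' \
  'Provisioning.DeleteRootPassword=y' \
  'ResourceDisk.EnableSwap=n' \
  'ResourceDisk.SwapSizeMB=0' > "$cfg"
sed -i \
  -e 's/^Provisioning.DeleteRootPassword=.*/Provisioning.DeleteRootPassword=n/' \
  -e 's/^ResourceDisk.EnableSwap=.*/ResourceDisk.EnableSwap=y/' \
  -e 's/^ResourceDisk.SwapSizeMB=.*/ResourceDisk.SwapSizeMB=2048/' "$cfg"
cat "$cfg"
```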

Prepare the Physical Machine for Azure - Cont’d

● Edit ‘/etc/sysconfig/network-scripts/ifcfg-eth0’ so it only contains:

DEVICE="eth0"
BOOTPROTO="dhcp"
ONBOOT="yes"
TYPE="Ethernet"
USERCTL="no"
PEERDNS="yes"
IPV6INIT="no"

● Remove any persistent network device rules
  ○ ‘rm -f /etc/udev/rules.d/70-persistent-net.rules’
  ○ ‘rm -f /etc/udev/rules.d/75-persistent-net-generator.rules’

● Stop and remove cloud-init (if installed)
  ○ ‘systemctl stop cloud-init’
  ○ ‘yum remove cloud-init’

● Reboot the system with the P2V bootable media
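The minimal ifcfg-eth0 above can be written in one step with a heredoc; this sketch writes to a scratch file for illustration, where the real target is /etc/sysconfig/network-scripts/ifcfg-eth0:

```shell
# Sketch: generate the minimal ifcfg-eth0 from the slide.
# Writes to a temp file for illustration only.
out=$(mktemp)
cat > "$out" <<'EOF'
DEVICE="eth0"
BOOTPROTO="dhcp"
ONBOOT="yes"
TYPE="Ethernet"
USERCTL="no"
PEERDNS="yes"
IPV6INIT="no"
EOF
cat "$out"
```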

P2V Process

P2V Process

https://access.redhat.com/articles/2702281 - Converting physical machines to KVM virtual machines using virt-p2v in RHEL 7

● Download the P2V image from the Red Hat Customer Portal
● Create bootable media from the ISO and boot it on the physical machine
● Input connection details for the conversion server
● Set output options
● Start the conversion process

P2V Process - Cont’d

1. Conversion server location
2. Port number for SSH
3. Username
   a. If not root, then select sudo
4. Password
5. Path name of the user’s private SSH key file on the conversion server
6. Test the connection

P2V Process - Cont’d

1. Details of the converted server
2. Output options
   a. Output to ‘qemu’
   b. Change output storage if necessary
   c. Output format to qcow2
3. Select drives to be converted
4. Start conversion

P2V Process - Cont’d

● Logs stored on conversion server in /tmp

Uploading to AWS

Uploading to AWS

● Install qemu-img
  ○ ‘yum install qemu-img’
● Convert the image to VHD
  ○ ‘qemu-img convert -f qcow2 -O vpc <image-name> <output-name>.vhd’
● Install the AWS CLI
● Configure the AWS CLI
  ○ ‘aws configure’
    ■ You will need your AWS access key and secret key
    ■ You can set up your default region
    ■ Set your preferred output style (JSON, text, or table)
● Create an S3 bucket
  ○ ‘aws s3 mb s3://<bucket-name>’
● Upload the image to the S3 bucket
  ○ ‘aws s3 cp <image-name> s3://<bucket-name>’

Uploading to AWS - Cont’d

● Create a file named ‘trust-policy.json’ with the following:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "vmie.amazonaws.com" },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": { "sts:Externalid": "vmimport" }
      }
    }
  ]
}
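The trust policy can be written and sanity-checked from the shell before it is handed to the AWS CLI; a sketch that uses python3's json.tool purely as a validator:

```shell
# Sketch: write trust-policy.json and confirm it parses as JSON.
cat > trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "vmie.amazonaws.com" },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": { "sts:Externalid": "vmimport" }
      }
    }
  ]
}
EOF
python3 -m json.tool trust-policy.json > /dev/null && echo "trust-policy.json is valid JSON"
```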

Uploading to AWS - Cont’d

● Create a file named ‘role-policy.json’ with the following:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetBucketLocation",
        "s3:GetObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::<bucket-name>",
        "arn:aws:s3:::<bucket-name>/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "ec2:ModifySnapshotAttribute",
        "ec2:CopySnapshot",
        "ec2:RegisterImage",
        "ec2:Describe*"
      ],
      "Resource": "*"
    }
  ]
}

Uploading to AWS - Cont’d

● Create a role and grant VM Import/Export access
  ○ ‘aws iam create-role --role-name vmimport --assume-role-policy-document file://trust-policy.json’
● Attach the policy to the role
  ○ ‘aws iam put-role-policy --role-name vmimport --policy-name vmimport --policy-document file://role-policy.json’

Uploading to AWS - Cont’d

● Create a file named ‘containers.json’ with the following:

[
  {
    "Description": "<description>",
    "Format": "VHD",
    "UserBucket": {
      "S3Bucket": "<bucket-name>",
      "S3Key": "<image-name>"
    }
  }
]

● Start the import process
  ○ ‘aws ec2 import-image --description "<description>" --license-type BYOL --disk-containers file://containers.json’
● Monitor the import process and note the imageID when it completes
  ○ ‘aws ec2 describe-import-image-tasks’
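The containers.json file can be templated from shell variables so the same snippet serves repeated imports; a sketch with hypothetical bucket, key, and description values (not values from the deck):

```shell
# Sketch: fill in containers.json from variables.
# BUCKET, KEY, and DESC are hypothetical example values.
BUCKET="my-import-bucket"
KEY="rhel-server.vhd"
DESC="RHEL server imported via virt-p2v"
cat > containers.json <<EOF
[
  {
    "Description": "$DESC",
    "Format": "VHD",
    "UserBucket": {
      "S3Bucket": "$BUCKET",
      "S3Key": "$KEY"
    }
  }
]
EOF
python3 -m json.tool containers.json > /dev/null && echo "containers.json is valid JSON"
```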

Uploading to AWS - Cont’d

● Deploy the AMI using the imageID
  ○ ‘aws ec2 run-instances --image-id <imageID> --instance-type <instance-type>’
● Check on the status of your deployment
  ○ ‘aws ec2 describe-instances’
● If you need to stop an instance
  ○ ‘aws ec2 stop-instances --instance-ids <InstanceID>’
● If you need to terminate an instance
  ○ ‘aws ec2 terminate-instances --instance-ids <InstanceID>’

Uploading more disks to AWS

● Convert the disk image to VHD
  ○ ‘qemu-img convert -f qcow2 -O vpc <image-name> <output-name>.vhd’
● Upload the disk image
  ○ ‘aws s3 cp <image-name> s3://<bucket-name>’
● Edit the ‘containers.json’ file, changing the <image-name> and <description>
● Import the disk image
  ○ ‘aws ec2 import-snapshot --description "<description>" --disk-container file://containers.json’
● Check status and note the snapshot-id when complete
  ○ ‘aws ec2 describe-import-snapshot-tasks’
● Create a volume and note the volume-id when complete
  ○ ‘aws ec2 create-volume --availability-zone <location> --snapshot-id <snapshot-id>’
● Attach the volume
  ○ ‘aws ec2 attach-volume --volume-id <volume-id> --instance-id <instance-id> --device <device-name>’

Uploading to Azure

Uploading to Azure

● All images uploaded to Azure need to be in fixed VHD format, and the qcow2 image must be aligned on a 1 MB boundary before being converted to VHD
● qemu-img needs to be installed
● To verify the size of the image, save the following as ‘align.sh’ and run it with the image name:

#!/bin/bash
MB=$((1024 * 1024))
size=$(qemu-img info -f raw --output json "$1" | gawk 'match($0, /"virtual-size": ([0-9]+),/, val) {print val[1]}')
rounded_size=$((($size/$MB + 1) * $MB))
if [ $(($size % $MB)) -eq 0 ]
then
  echo "Your image is already aligned. You do not need to resize."
  exit 1
fi
echo "rounded size = $rounded_size"
export rounded_size
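The rounding in align.sh is plain integer arithmetic: if the size is not already a multiple of 1 MB, it is rounded up to the next multiple. As a worked example, an assumed image size of 3,221,226,000 bytes (an illustration value, not from the deck) rounds up as follows:

```shell
# Worked example of the 1 MB alignment math from align.sh.
# The size below is an assumed illustration value.
MB=$((1024 * 1024))
size=3221226000
if [ $((size % MB)) -eq 0 ]; then
  rounded_size=$size   # already aligned, no resize needed
else
  rounded_size=$(( (size / MB + 1) * MB ))
fi
echo "rounded size = $rounded_size"
```

Here 3,221,226,000 is 528 bytes past the 3072 MB mark, so the script reports a rounded size of 3,222,274,048 bytes (3073 MB).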

Uploading to Azure - Cont’d

● To align the image, convert the image to RAW format
  ○ ‘qemu-img convert -f qcow2 -O raw <image-name>.qcow2 <image-name>.raw’
● Resize the image to the rounded value that the align.sh script provided
  ○ ‘qemu-img resize -f raw <image-name>.raw <rounded-value>’
● Convert the RAW image to a fixed-size VHD
  ○ If using a Red Hat Enterprise Linux server with qemu-img version 1.5.3
    ■ ‘qemu-img convert -f raw -o subformat=fixed -O vpc <image-name>.raw <image-name>.vhd’
  ○ If using Fedora with qemu-img version 2.2.1 or greater
    ■ ‘qemu-img convert -f raw -o subformat=fixed,force_size -O vpc <image-name>.raw <image-name>.vhd’

Uploading to Azure - Cont’d

● Install the Azure CLI
  ○ ‘rpm --import https://packages.microsoft.com/keys/microsoft.asc’
  ○ ‘sh -c 'echo -e "[azure-cli]\nname=Azure CLI\nbaseurl=https://packages.microsoft.com/yumrepos/azure-cli\nenabled=1\ngpgcheck=1\ngpgkey=https://packages.microsoft.com/keys/microsoft.asc" > /etc/yum.repos.d/azure-cli.repo'’
  ○ ‘yum install azure-cli’
● Log in to the Azure CLI
  ○ ‘az login’
● Configure defaults
  ○ ‘az configure --defaults location=<azure-region> group=<resource-group>’
● Create a resource group
  ○ ‘az group create -n <resource-group>’
● Create a storage account
  ○ ‘az storage account create -n <storage-account-name> --sku <sku-type>’
● Get the storage account connection string and export it
  ○ ‘az storage account show-connection-string -n <storage-account-name>’
  ○ ‘export AZURE_STORAGE_CONNECTION_STRING="<storage-connection-string>"’

Uploading to Azure - Cont’d

● Create a storage container
  ○ ‘az storage container create -n <container-name>’
● Upload the VHD to the storage container
  ○ ‘az storage blob upload --account-name <storage-account-name> -c <container-name> -t page -f <image-name> -n <image-name>’
● Get the URL for the uploaded image
  ○ ‘az storage blob url -c <container-name> -n <image-name>’
● Create an image from the VHD that was uploaded
  ○ ‘az image create -n <image-name> --source <URL> --os-type linux’
● Create the VM
  ○ ‘az vm create -n <vm-name> --size <vm-size> --image <image-name>’
● Stop a VM
  ○ ‘az vm stop -n <vm-name>’
● Delete a VM
  ○ ‘az vm delete -n <vm-name>’

Uploading more disks to Azure

● Convert the disk image
  ○ ‘qemu-img convert -f qcow2 -O vpc <image-name> <image-name>.vhd’
● Upload the disk image to Azure
  ○ ‘az storage blob upload --account-name <storage-account-name> -c <container-name> -t page -f <image-name> -n <image-name>’
● Get the URL for the uploaded disk image
  ○ ‘az storage blob url -c <container-name> -n <image-name>’
● Create a disk and note its id when complete
  ○ ‘az disk create -n <disk-name> --sku <sku-type> --source <URL>’
● Attach the disk
  ○ ‘az vm disk attach --vm-name <vm-name> --disk <disk-id>’
● On the VM, locate and mount the disk
  ○ ‘dmesg | grep SCSI’
  ○ ‘mount <disk-dev> <dir>’
● Get the UUID of the drive for mounting in /etc/fstab
  ○ ‘blkid’
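To make the mount survive reboots, the UUID reported by blkid goes into an /etc/fstab entry; a sketch that builds such a line, with a hypothetical UUID, mount point, and file system (the deck leaves these to you):

```shell
# Sketch: build an /etc/fstab entry from a blkid UUID.
# The UUID, mount point, and xfs file system are hypothetical placeholders.
uuid="0a3b8c1d-1111-2222-3333-444455556666"
mountpoint="/data"
fstab_line="UUID=$uuid  $mountpoint  xfs  defaults  0 0"
echo "$fstab_line"
```

Append the resulting line to /etc/fstab (e.g. with ‘echo "$fstab_line" >> /etc/fstab’ as root) once the values match your disk.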

Demo Time!

Want to get more hands-on practice creating a custom image for Azure?

● Building a Red Hat Enterprise Linux gold image for Azure lab (L1071)
  ○ May 9th from 10am-12pm
  ○ Room 155
  ○ Moscone South

THANKS Y’ALL

plus.google.com/+RedHat

linkedin.com/company/red-hat

youtube.com/user/RedHatVideos

facebook.com/redhatinc

twitter.com/RedHatNews