
Post-Installation Tasks

This chapter contains the following topics:

• Validating an MSX Installation, on page 1
• MSX Data Backups, on page 6
• Backing Up the CockroachDB, on page 6
• Restoring the CockroachDB, on page 7
• Logging into the Portal, on page 8
• Using Google Maps with MSX, on page 9
• Removing an MSX Installation, on page 9
• Restricting Access to Inception VM, on page 10

Validating an MSX Installation

Before You Begin

• NSO has the correct configuration settings. See Verifying Network Service Orchestrator Configurations, on page 1.

• All the microservices are up and running. See Validating the Status of MSX, on page 5.

Verifying Network Service Orchestrator Configurations

Before logging in to the Portal, it is important to ensure that the Cisco Network Service Orchestrator (NSO) has been loaded with the correct configuration settings. You can verify the configuration settings using the following procedure.

Note: There are multiple NSO instances if you are deploying more than one service pack. Therefore, these steps must be performed on the service pack-specific NSO node, for example, nso-manageddevice, and so on.

Step 1 Log in to one of the Kubernetes master nodes.

# grep master inventory/inventory


[kube-master]
kubernetes-master-ctsai-east-2-1 ansible_host=<master_1_ip_address> ansible_user=centos ansible_become=true
kubernetes-master-ctsai-east-2-2 ansible_host=<master_2_ip_address> ansible_user=centos ansible_become=true
kubernetes-master-ctsai-east-2-3 ansible_host=<master_3_ip_address> ansible_user=centos ansible_become=true

# ssh -F ssh.cfg centos@<master_1_ip_address>

Step 2 Access the NSO node using this command:

kubectl -n vms exec -it nso-<servicepack_name>-0 -c nso-<servicepack_name> /bin/sh

For example:

$ kubectl -n vms exec -it nso-vbranch-0 -c nso-vbranch /bin/sh

Or

$ kubectl -n vms exec -it nso-manageddevice-0 -c nso-manageddevice /bin/sh

Step 3 Switch to the vmsnso user:

su vmsnso

Step 4 Run NSO CLI:

ncs_cli -u admin

Step 5 Verify the following:

• The NACM groups for vmsnso:

admin@ncs% show nacm groups
group ncsadmin {
    user-name [ private vmsnso ];
}
group ncsoper {
    user-name [ public vmsnso ];
}
[ok][2017-01-26 17:19:59]

• The aaa user entry:

admin@ncs% show aaa authentication users user
user vmsnso {
    uid 1000;
    gid 1000;
    password $6$XfC.UmxZoxMGq58Y$Re4XKlYNHm2Ws2WkjWL09H9VNGoJqNG7TzQhtVPDZfjTY6amBxiAdafKl7iu4HQM2/uPy/2irtu/vRvJANJb//;
    ssh_keydir '';
    homedir '';
}
[ok][2017-01-26 17:21:10][edit]

Step 6 Verify that the service packs were successfully installed (exit configure mode).


admin@ncs> show packages package oper-status

Step 7 Verify the day0 device configurations.

Use one of these commands: cfg-selector or pnp day0-common (for the Managed Device service pack) to verify that the day0 configuration is updated with globals and provider. Exit configure mode before executing these commands.

• Using the cfg-selector command:

admin@ncs> show configuration cfg-selector
globals {
    ncs-service-node 10.3x.1.xx;
    ip-address <fully-qualified-domain-name of VMS portal>;
    mgmt-ipv6-type false;
    sa-encryption-key $4$SngMGroVL+76nI4dGb496GBHn1uWZILUVR0FTjturZSDMZ4thbtG5mcMftAfGszx;
}
provider VZ {
    variables {...}
    service-assurance {...}
    offering IWAN {...}
}

• Using the pnp day0-common command (for the Cisco MSX Managed Device service pack):

admin@ncs> show configuration pnp day0-common
day0-common config-mgmt {
    variable CPE_HOSTNAME {
        value "";
    }
    variable CPE_SNMP_V3_AUTH_PASS {
        value CiscoVMS100%;   <-- actual value obtained from passwords.yml
    }
    variable CPE_SNMP_V3_PRIV_PASS {
        value CiscoVMS100%;   <-- actual value obtained from passwords.yml
    }
    variable CPE_SNMP_V3_USER {
        value vmsuser;
    }
    variable DEV_CUSTOMER_DNS_1 {
        value 8.8.8.8;
    }
    variable DEV_CUSTOMER_DNS_2 {
        value 8.8.4.4;
    }
    variable DEV_MGMT_HUB1 {
        value 173.39.80.209;
    }
    variable DEV_MGMT_HUB2 {
        value "";   <-- populated only in case of Dual DC deployment
    }
    variable DEV_MGMT_IP_ADDRESS {
        value "10.255.0.1";
    }
    variable DEV_MGMT_LOCAL_KEY {
        value cisco123;   <-- actual value obtained from passwords.yml
    }
    variable DEV_MGMT_REMOTE_IDENTITY {


        value cisco.com;
    }
    variable DEV_MGMT_REMOTE_KEY {
        value cisco123;   <-- actual value obtained from passwords.yml
    }
    variable DEV_MGMT_ROUTE {
        value "0.0.0.0 0.0.0.0";
    }
    variable DEV_MGMT_TUNNEL_INTERFACE {
        value 0;
    }
    variable DEV_NAT_KEEPALIVE {
        value 60;
    }
    variable ONBOARDING_INTERFACE {
        value "";   <-- set to the value of the onboarding interface (e.g. GigabitEthernet 0/0/1). This is selected when the device model is configured during the add-device flow in the portal UI/API.
    }
}

Step 8 Verify PNP server interface map settings.

admin@ncs% show pnp
server {
    port 443;
    use-ssl true;
}
interface-map "(C29[0-9][0-9])|(CISCO29[0-9][0-9])" {
    wan GigabitEthernet0/1;
    lan GigabitEthernet0/0;
    config-restore-file flash:day--1-config;
}
interface-map "(C39[0-9][0-9])|(CISCO39[0-9][0-9])" {
    wan GigabitEthernet0/1;
    lan GigabitEthernet0/0;
    config-restore-file flash:day--1-config;
}
interface-map ASR10[0-9][0-9] {
    wan GigabitEthernet0/0/1;
    lan GigabitEthernet0/0/2;
    config-restore-file bootflash:day--1-config;
}
interface-map ISR4[0-9][0-9][0-9] {
    wan GigabitEthernet0/0/1;
    lan GigabitEthernet0/0/2;
    config-restore-file bootflash:day--1-config;
}
[ok][2017-01-26 19:34:57]

Note: For the Cisco MSX Managed Device service pack, there is no preconfigured pnp interface-map. Instead, the value of the Tunnel0 source interface is obtained based on the value of the onboarding interface and the device model configured when adding a site/device in the portal UI provisioning flow.

Verification for Managed Device:

admin@ncs% show pnp
server {
    port 443;
    use-ssl true;
}
proxy-servers {
    allow-any;
}
logging {
    directory /var/log/ncs;
    serial all;
}

Step 9 Verify that the provider name is correctly set. To verify, run show provider-infrastructure in the configuration mode. For example:

admin@ncs% show provider-infrastructure
provider-infrastructure CiscoSystems {
    catalog vBranch;
}
[ok][2017-10-18 19:54:57]

To find the list of supported VNFs and physical CPEs, run show catalog in the configuration mode. For example:

vmsnso@ncs% show catalog
catalog vBranch {
    branch-cpe ENCS {
        physical false;
        read-timeout 90;
        write-timeout 90;
        enable-commit-queue false;
        branch-cpe-template pnp-map-vCPE;
        nfvis-tenant admin;
        password $8$M5naF0NizWpvaJf8wqK5nGPtnX3PJyUs/AFn5EVt/tE=;
        day0 {
            file nfvis_day0.cfg;
        }
        cpe-onboarding {
            device-type netconf;
            port 830;
        }
        network GE0-0-SRIOV-1;
        network GE0-0-SRIOV-2;
        network GE0-1-SRIOV-1;
        network GE0-1-SRIOV-2;
        network LAN-SRIOV-1;
        network LAN-SRIOV-2;
        network LAN-SRIOV-3;
        network LAN-SRIOV-4;
        network LAN-SRIOV-5;
        network LAN-SRIOV-6;
        network int-mgmt-net;
        network lan-net;

Step 10 Log in to the MSX Portal and verify that the service packs are now available. For more information, see Logging into the Portal, on page 8.

Validating the Status of MSX

After verifying the NSO configuration, verify that all microservices are up and running.

Use the procedure below to check the status of all microservices available in the MSX platform:


Step 1 Change to the ansible folder and run the following ad-hoc command to verify the Kubernetes pod status.

cd /msx-3.8.0/ansible
ansible kube-master -m command -a "kubectl get pod -n vms -o wide"
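When scanning the pod listing by hand is impractical, a small filter helps. The helper below is an illustrative sketch, not part of the MSX tooling; it assumes the default kubectl table layout, where STATUS is the third column.

```shell
# Hypothetical helper: print pods whose STATUS column is neither Running nor
# Completed, so a healthy deployment prints nothing. NR > 1 skips the header.
not_running() {
  awk 'NR > 1 && $3 != "Running" && $3 != "Completed" { print $1, $3 }'
}

# Example with captured output; on the master you would pipe the live
# listing instead: kubectl get pod -n vms -o wide | not_running
printf '%s\n' \
  'NAME              READY   STATUS             RESTARTS   AGE' \
  'consul-0          1/1     Running            0          2d' \
  'vault-1           0/1     CrashLoopBackOff   12         2d' | not_running
# → vault-1 CrashLoopBackOff
```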

Step 2 Export the ANSIBLE_VAULT_PASSWORD_FILE environment variable, setting it to the path of the password file.

export ANSIBLE_VAULT_PASSWORD_FILE=<vault_pwd_path>

Step 3 Verify the health status of the MSX Platform.

ansible-playbook checks/platform-health.yml

MSX Data Backups

As a best practice, you should periodically create a backup of your system once it is in operation. Ensure that the certificates (ca.pem and ca-key.pem) are also backed up along with your other data. They are located at /etc/ssl/vms-certs on the Inception and kube-master nodes. Use the following playbook to perform the backup:

ansible-playbook vms-backup.yml --extra-vars backup_tag=msx-backup-tag

Note: Whenever you need to restore MSX, ensure that you use the same tag that was specified in the backup operation.
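The certificate backup mentioned above can be scripted. The sketch below is illustrative only: the helper name and the destination tarball are placeholders, while /etc/ssl/vms-certs is the location given in the text.

```shell
# Hypothetical helper: archive a certificate directory (e.g. the one holding
# ca.pem and ca-key.pem) so it can be stored alongside the tagged backup.
backup_certs() {
  src="$1"   # directory to archive, e.g. /etc/ssl/vms-certs
  dest="$2"  # output tarball, e.g. /tmp/msx-backup-tag-certs.tar.gz
  tar -czf "$dest" -C "$(dirname "$src")" "$(basename "$src")"
}

# Usage on a kube-master or Inception node (paths are placeholders):
# backup_certs /etc/ssl/vms-certs /tmp/msx-backup-tag-certs.tar.gz
```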

Backing Up the CockroachDB

When making a backup of the MSX system, in addition to running the vms-backup.yml playbook, you must run the following steps to back up the CockroachDB.

Step 1 Log in to the installer container.

Step 2 Run the cockroachdb-backup.yml playbook.

cd /msx-3.8.0/ansible
export ANSIBLE_VAULT_PASSWORD_FILE=<path to pwd file>
ansible-playbook cockroachdb-backup.yml --extra-vars backup_target_dbs=serviceconfigmanager

Step 3 Verify that the roachdump job completed successfully.

ssh -F ssh.cfg centos@<kubernetes-master-1-VM_IP>
sudo su
kubectl get po | grep roachdump

Verified command output example:

roachdump-9qprn   0/1   Completed   0   83s
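The Completed check can also be scripted. The helper below is an illustrative sketch with a hypothetical name; it assumes the default kubectl table layout, where STATUS is the third whitespace-separated column.

```shell
# Hypothetical check: succeed only when a "kubectl get po" line reports
# the Completed status in its third column.
job_completed() {
  echo "$1" | awk '{ exit ($3 == "Completed") ? 0 : 1 }'
}

if job_completed 'roachdump-9qprn 0/1 Completed 0 83s'; then
  echo 'backup job finished'
fi
# On the master you would feed the live line instead, e.g.:
# job_completed "$(kubectl get po | grep roachdump)"
```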

Step 4 View the job logs to ensure that the backup was successful.


kubectl logs roachdump-9qprn

Log command output example:

+ exec /bin/bash /backup.sh nfviallsps
2020-02-22T07:13:04Z INFO Starting backup of database nfviallsps
2020-02-22T07:13:04Z INFO Backup successfully completed. Beginning commpression.
2020-02-22T07:13:04Z INFO Backup succesfully compressed. Sending to long term storage.
`/backup/2020-02-22T07:13:04Z-serviceconfigmanager.sql.gz` -> `backupstore/backups/2020-02-22T07:13:04Z-nfviallsps.sql.gz`
Total: 56 B, Transferred: 56 B, Speed: 493 B/s
2020-02-22T07:13:04Z INFO Backup successfully stored offline. Removing local copy.
2020-02-22T07:13:04Z INFO Backup job complete!

Restoring the CockroachDB

Use the following procedure to locate and restore a CockroachDB backup.

Step 1 Log in to the installer container.

cd /msx-3.8.0/ansible
ssh -F ssh.cfg centos@<kubernetes-master-1-VM_IP>
sudo su

Step 2 Find the value for "MC_HOST_backupstore".

vi /etc/kube-manifests/cockroachdb/roachdump-job.yml

Step 3 Export the value of that environment variable.

export MC_HOST_backupstore=<value_of_MC_HOST_backupstore>
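Steps 2 and 3 can be done non-interactively. The sketch below assumes the variable appears in the Job manifest as a standard Kubernetes env entry (a "name:" line followed by a "value:" line); that layout is an assumption, so verify it against your manifest before relying on it.

```shell
# Hypothetical helper: pull the value that follows the MC_HOST_backupstore
# env entry out of a Job manifest, without opening it in an editor.
extract_backupstore() {
  grep -A1 'name: MC_HOST_backupstore' "$1" | sed -n 's/.*value:[[:space:]]*//p'
}

# Then, on the master (path taken from step 2 above):
# export MC_HOST_backupstore="$(extract_backupstore /etc/kube-manifests/cockroachdb/roachdump-job.yml)"
```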

Step 4 List the available backup files.

For OpenStack: /usr/local/bin/mc ls backupstore/backups

For AWS: /data/vms/minio/mc ls backupstore/{{ vms_subdomain }}-msx-bucket-{{ vms_domain }}/backups

Step 5 Drop the Cockroach database using the following commands.

kubectl exec -it cockroachdb-0 -c cockroachdb bash
./cockroach sql --certs-dir cockroach-certs
drop database serviceconfigmanager cascade;

Step 6 To restore, choose a file from the list of backups.

Step 7 Log in to the installer container.

cd /msx-3.8.0/ansible
export ANSIBLE_VAULT_PASSWORD_FILE=<path to pwd file>
ansible-playbook cockroachdb-restore.yml --extra-vars '{"restoreTarget":{ "database":"serviceconfigmanager", "user":"serviceconfigmanager", "service":"serviceconfigmanager", "backupfile":"<backup file name>" }}'

Step 8 Verify that the restore operation was successful.

ssh -F ssh.cfg centos@<kubernetes-master-1-VM_IP>
sudo su
kubectl get po | grep roachrestore


Step 9 View the output to verify that the roachrestore job completed successfully.

roachrestore-ph65b   0/1   Completed   0   67s

Step 10 View the log to make sure that the restore operation was successful.

kubectl logs roachrestore-ph65b

Verified command output example:

+ exec /bin/bash /restore.sh 2020-02-25T18:58:00Z-serviceconfigmanager.sql.gz
2020-02-25T19:17:58Z INFO Starting restore of database file 2020-02-25T18:58:00Z-serviceconfigmanager.sql.gz
`backupstore/ei-infra-aws-msx-bucket.qa.ciscovms.com/backups/2020-02-25T18:58:00Z-serviceconfigmanager.sql.gz` -> `/backup/2020-02-25T18:58:00Z-serviceconfigmanager.sql.gz`
Total: 2.56 KiB, Transferred: 2.56 KiB, Speed: 59.12 KiB/s
2020-02-25T19:17:59Z INFO Successfully retrieved backup archive
2020-02-25T19:17:59Z INFO Successfully extracted sql from archive
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE TABLE
INSERT 4
INSERT 1
INSERT 5
INSERT 3
ALTER TABLE
ALTER TABLE
2020-02-25T19:17:59Z INFO Restore job complete!

Logging into the Portal

The portal passwords are stored in the passwords.yml file. The default portal credentials are available in the main.yml file.

Note: The default credentials are valid only if the auto_generate_password variable in the main.yml file is set to FALSE.

If you have set the auto_generate_password variable in the main.yml file to TRUE, the system automatically generates random passwords in the passwords.yml file. In this scenario, to retrieve the credentials, run this command:

ansible-vault --vault-password-file vault view group_vars/all/passwords.yml

Step 1 Verify your DNS entries. If you have used Route53 to register the FQDN of the MSX portal to the ciscovms.com domain, ensure that the AWS Route53 entries have propagated to your server. You can verify this on the following website:

https://dnschecker.org/

Your FQDN setting from the main.yml file (host.ciscovms.com) must match the values of your OpenStack instances, for example, edge-instance-host-1.
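Propagation can also be checked from the command line. The helper below is a sketch, assuming dig is installed (for example, from the bind-utils package on CentOS); the function name, FQDN, and IP shown are placeholders you supply yourself.

```shell
# Hypothetical check: succeed when the first A record returned for the
# portal FQDN ($1) matches the IP address you expect ($2).
check_dns() {
  [ "$(dig +short "$1" | head -n1)" = "$2" ]
}

# check_dns host.ciscovms.com 203.0.113.10 && echo 'DNS has propagated'
```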


Step 2 Log in to the portal with the following URL:

https://<msx_subdomain>.<msx_domain>

or

https://<your_portal_fqdn>

Add this FQDN and IP address to your local machine's hosts file.
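A minimal sketch of building such a hosts entry (the IP address, FQDN, and helper name below are placeholders):

```shell
# Hypothetical helper: format an "IP<TAB>FQDN" line for the hosts file.
hosts_entry() {
  printf '%s\t%s\n' "$1" "$2"
}

hosts_entry 203.0.113.10 host.ciscovms.com
# Append it for real with:
# hosts_entry <portal_ip> <your_portal_fqdn> | sudo tee -a /etc/hosts
```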

Using Google Maps with MSX

To use the Address Form and Google Maps, you must use a Google Maps API key. If you are using your own API key, you can enable Google Maps using the procedure below. If you do not have your own API key, you can ask Cisco TAC to provide one; the Cisco team will generate a key for you and allow your domain.

Use the following procedure to update the Google Maps API Key in MSX.

Step 1 Log in to the kubernetes-master-1 node.

ssh -i id_rsa centos@_INCEPTION_FLOATING_IP_ADDRESS_ -t ssh _kubernetes-master-1_IP_ADDRESS_

Step 2 In the file /data/vms/skyfallui/gconfig.js, replace the GOOGLE_API_KEY line with the following:

var GOOGLE_API_KEY = 'AIzaSyGKCGans9q5vrZNtngc2D5vOIrpEXAMPLE'

Removing an MSX Installation

Step 1 Export the ANSIBLE_VAULT_PASSWORD_FILE environment variable, setting it to the path of the password file.

export ANSIBLE_VAULT_PASSWORD_FILE=<path to the file>

Step 2 Invoke the following playbook:

ansible-playbook destroy-infra.yml       (OpenStack)
ansible-playbook destroy-infra-aws.yml   (AWS)

This playbook removes the MSX VMs; cleans up the cinder volumes, sec-groups, keys, floating IP addresses, and the neutron router; and de-registers the vms_subdomain from Route53. It does not delete the CSR VM, or remove the Security Groups, Images, or Key Pairs from OpenStack.


Restricting Access to Inception VM

The Inception VM allows SSH access from any source IP address. This is primarily for debugging purposes and is required for the deployment to succeed. After the deployment is complete, restrict this access by adding the required source IP addresses and updating the security group attached to the Inception VM.
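On OpenStack deployments, one way to scope the rule is a security group rule limited to your admin network. The sketch below only builds and prints the command for review before you run it where the openstack CLI is configured; the group name and CIDR are placeholders, and the flag names follow the common openstack CLI form, so verify them against your client version.

```shell
# Hypothetical builder: print an openstack CLI command that allows SSH
# (TCP/22) only from the admin CIDR ($1) on the given security group ($2).
restrict_ssh_rule() {
  printf 'openstack security group rule create --proto tcp --dst-port 22 --remote-ip %s %s\n' "$1" "$2"
}

restrict_ssh_rule 198.51.100.0/24 inception-secgroup
# → openstack security group rule create --proto tcp --dst-port 22 --remote-ip 198.51.100.0/24 inception-secgroup
```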
