
Using the Binary Installer to Scaleout the ViPR SRM Environment

Authors

Georgi Todorov, Jim Stringer

Abstract

This document describes how to manually install a ViPR SRM solution that consists of a frontend, backend, additional backend, and one collector.

January 2015


Copyright © 2015 EMC Corporation. All Rights Reserved. EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice. The information in this publication is provided “as is.” EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license. For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com. VMware is a registered trademark of VMware, Inc. in the United States and/or other jurisdictions. All other trademarks used herein are the property of their respective owners.


Table of Contents

Overview

Audience

Requirements
    General requirements
    Linux requirements
    Server requirements

Procedure
    Install the base 4-host binary deployment
    Configure backend hosts
    Install and configure the additional backend
    Install and configure the frontend
    Install and configure the collector
    Install and configure the Load Balancer Connectors
    Configure the load balancer on the primary backend
    Install the health collectors

Summary


Overview This document describes how to manually install and scale out a ViPR SRM binary deployment solution with the following components:

Frontend server

Primary backend server — A typical backend server installation with one backend instance named "Default" attached to a MySQL database named "apg"

Additional backend server with four instances (apg1, apg2, apg3 and apg4). Each instance includes a database (apg1, apg2, apg3, and apg4 respectively).

Collector server

In addition, this solution includes a properly configured Load Balancer Arbiter and a Load Balancer Connector installed and configured on each server.

The end result is a distributed solution capable of handling five million metrics using the load balancer.

Audience This article is intended for ViPR SRM installers, administrators, and anyone who manages the ViPR SRM application.

Requirements

General requirements

The requirements in this section are for a minimal deployment. In a production environment, the requirements will vary depending on the provisioned load and must include careful planning and sizing before beginning the deployment. The ViPR SRM Planner and the EMC ViPR SRM Performance and Scalability Guidelines document associated with your specific release will provide guidance for SolutionPacks and object discovery.

General requirements include:

64-bit operating system (Linux or Windows)

8 to 24 GB RAM for each host:

    Frontend: 16 GB RAM

    Backends: 24 GB RAM

    Collectors: 8 to 16 GB RAM

150 GB disk storage dedicated to ViPR SRM


4 CPUs per host for all ViPR SRM hosts

Forward and reverse DNS lookups (name to IP and IP to name) must work on each server
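A quick way to verify both directions before installing (the host name and IP address below are placeholders for your own servers):

nslookup frontend.example.com   # forward lookup: must return the host's IP address
nslookup 10.0.0.11              # reverse lookup: must return the matching host name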

Linux requirements

Specific Linux requirements include:

/tmp folder larger than 2.5 GB

The swap space should be at least equal to the RAM size

On CentOS or Red Hat-like Linux distributions, SELinux should be disabled or reconfigured

The graphical desktop environment is not required

On some Linux distributions:

The MySQL server requires libaio1, libaio-dev, or libaio to start

The installation process requires unzip

The apg services do not start automatically after a system restart

Server requirements

Server resources by type of server:

Backend default installation with Load Balancer Arbiter: 24 GB RAM, 150 GB HDD

Additional backend with 4 backend instances: 24 GB RAM, 150 GB HDD

Frontend: 16 GB RAM, 150 GB HDD

Collector: 8 to 16 GB RAM, 150 GB HDD

Each ViPR SRM server should have a health collector to monitor and report its state. The Load Balancer Arbiter is installed on the Primary backend. On each of the Collector Hosts, the Load Balancer Connector is installed.


Procedure

Install the base 4-host binary deployment

Install the ViPR SRM components on the four hosts to establish the basic 4-host ViPR SRM solution using the “Install EMC ViPR SRM Using the Binary Installer” article found at: https://community.emc.com/docs/DOC-40525.

The base binary deployment will be completed by selecting these options from the setup program:

1 Frontend host

2 Backend hosts

1 Collector Host

Configure the four hosts in the following order:

1. Select one of the backend hosts to be the Primary Backend

2. Select the other backend host to be the Additional Backend

3. Frontend

4. Collector

Configure backend hosts

After completing the default installation for each of the four hosts, configure the backend hosts as follows:

Step 1: Increase the maximum memory allocation pool for the Java Virtual Machine of the topology service to 6 GB. Edit the following file:

/opt/APG/Backends/Topology-Service/Default/conf/unix-service.properties

a. Set memory.max=6144

b. Reconfigure the topology service to commit the change:

/opt/APG/bin/manage-modules.sh update topology-service
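As a minimal scripted sketch of step 1 (assuming the memory.max property already exists in the file, which is the usual case):

sed -i 's/^memory.max=.*/memory.max=6144/' /opt/APG/Backends/Topology-Service/Default/conf/unix-service.properties   # set the JVM maximum to 6 GB
/opt/APG/bin/manage-modules.sh update topology-service   # commit the change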

Step 2: Configure MySQL:

a. Start the service: /opt/APG/bin/manage-modules.sh service start mysql Default

b. To allow remote access to the MySQL database, run the following command for each host in this ViPR SRM installation:

/opt/APG/bin/mysql-command-runner.sh -c /opt/APG/Tools/MySQL-Maintenance-Tool/Default/conf/mysql-root-mysql.xml -Q "GRANT ALL PRIVILEGES ON *.* TO apg@<HOST NAME> IDENTIFIED BY PASSWORD '*FA71926E39A02D4DA4843003DF34BEADE3920AF3'"


The <HOST NAME> must be the DNS name returned by the command nslookup <CLIENT IP> when executed on the operating system where the backend is installed.
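For example, if the other hosts in this installation resolve as frontend.example.com, backend2.example.com, and collector1.example.com (placeholder names), the grant can be issued in a loop; this is only a sketch of the command above:

for h in frontend.example.com backend2.example.com collector1.example.com; do
    /opt/APG/bin/mysql-command-runner.sh -c /opt/APG/Tools/MySQL-Maintenance-Tool/Default/conf/mysql-root-mysql.xml \
        -Q "GRANT ALL PRIVILEGES ON *.* TO apg@${h} IDENTIFIED BY PASSWORD '*FA71926E39A02D4DA4843003DF34BEADE3920AF3'"
done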

Step 3: Start the backend services: /opt/APG/bin/manage-modules.sh service start all

Step 4: Check that all the services are running: /opt/APG/bin/manage-modules.sh service status all

The remaining backend installation steps will be completed from the frontend GUI.

Install and configure the additional backend

Step 1: Configure MySQL

a. Start the service: /opt/APG/bin/manage-modules.sh service start mysql

b. Create four new databases with the names apg1, apg2, apg3, apg4 by running the following command four times (replacing [1..n] with a number):

/opt/APG/bin/mysql-command-runner.sh -c /opt/APG/Tools/MySQL-Maintenance-Tool/Default/conf/mysql-root-mysql.xml -Q "create database apg[1..n]"

c. Register the apg databases to the APG Backend server by running the following command for each apg database, responding with yes to the questions:

/opt/APG/bin/manage-modules.sh install backend apg[1..n]

Step 2: Install the maintenance tool for each new backend by running the following command for each apg database, responding with yes to the questions.

/opt/APG/bin/manage-modules.sh install mysql-maintenance-tool apg[1..n]
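As a sketch, steps 1b through 2 can be combined into a single loop; this assumes that piping yes into manage-modules.sh is an acceptable way to answer its interactive questions, which may not hold for every prompt:

for i in 1 2 3 4; do
    /opt/APG/bin/mysql-command-runner.sh -c /opt/APG/Tools/MySQL-Maintenance-Tool/Default/conf/mysql-root-mysql.xml -Q "create database apg${i}"   # step 1b
    yes | /opt/APG/bin/manage-modules.sh install backend apg${i}                    # step 1c
    yes | /opt/APG/bin/manage-modules.sh install mysql-maintenance-tool apg${i}     # step 2
done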

Step 3: Allow remote access to the new databases. Each database must be configured to be accessible from the collectors, frontends, primary backend, and localhost:

/opt/APG/bin/mysql-command-runner-apg[1..n].sh -c /opt/APG/Tools/MySQL-Maintenance-Tool/apg[1..n]/conf/mysql-root-mysql.xml -Q "GRANT ALL PRIVILEGES ON apg[1..n].* TO apg@<Hostname of the client> IDENTIFIED BY PASSWORD '*FA71926E39A02D4DA4843003DF34BEADE3920AF3'"


Step 4: Reconfigure the JDBC connections of the new backends:

a. Find all of the XML files in the /opt/APG/Backends/APG-Backend/apg[1..n] folders that contain JDBC connection URLs.

For example …localhost:53306/apg

b. Update the URLs to include the correct APG database numbers.

For example …localhost:53306/apg[1..n]

c. To complete this step with a script (for Linux), run the following command:

find /opt/APG/Backends/APG-Backend/apg[1..n] -name '*.xml' | xargs sed -i "s|jdbc:mysql://localhost:53306/apg|\\0[1..n]|"
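Expanded for the four instances, the scripted variant could look like the loop below; it uses sed's & (the matched string) instead of \0, which has the same effect here:

for i in 1 2 3 4; do
    find /opt/APG/Backends/APG-Backend/apg${i} -name '*.xml' \
        | xargs sed -i "s|jdbc:mysql://localhost:53306/apg|&${i}|"
done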

Step 5: The Telnet interface port for each new apg database must be different. The default port is 2001. To change the port numbers, edit the following files and set the port numbers to 2[1..n]01

/opt/APG/Backends/APG-Backend/apg[1..n]/conf/telnetinterface.xml

Step 6: The socket interface port for each new apg database must be different. The default port is 2000. To change the port numbers, edit the following files and set the port numbers to 2[1..n]00

/opt/APG/Backends/APG-Backend/apg[1..n]/conf/socketinterface.xml
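The resulting port layout follows directly from the 2[1..n]01 and 2[1..n]00 patterns; the loop below only prints the expected values and changes no files:

for i in 1 2 3 4; do
    echo "apg${i}: telnet interface port 2${i}01, socket interface port 2${i}00"
done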

Step 7: Point the MySQL maintenance tools to the new local databases by editing the following files and replacing “apg” with the correct names (like apg1).

/opt/APG/Tools/MySQL-Maintenance-Tool/apg[1..n]/conf/mysql.xml
/opt/APG/Tools/MySQL-Maintenance-Tool/apg[1..n]/conf/mysql-root-apg.xml

To complete this step with a script (for Linux):

find /opt/APG/Tools/MySQL-Maintenance-Tool/apg[1..n] -name '*.xml' | xargs sed -i "s|jdbc:mysql://localhost:53306/apg|\\0[1..n]|"

Step 8: Start the backend services:

/opt/APG/bin/manage-modules.sh service start all

Step 9: Check that all the services are running:

/opt/APG/bin/manage-modules.sh service status all

The remaining backend installation steps will be completed from the frontend GUI.

Install and configure the frontend

Step 1: Complete the frontend installation as described in the "Install EMC ViPR SRM Using the Binary Installer" article found at: https://community.emc.com/docs/DOC-40525.


Step 2: Set the connections that point to a local MySQL server to point to the new primary backend. In the following files, locate all of the strings like localhost:53306 and replace localhost with the primary backend DNS name (it must be resolvable); a scripted sketch follows the list of files:

/opt/APG/Web-Servers/Tomcat/Default/conf/server.xml

/opt/APG/Tools/Frontend-Search/Default/conf/frontend-search.xml

/opt/APG/Tools/Frontend-Report-Generator/Default/conf/report-generation-config.xml

/opt/APG/Tools/Administration-Tool/Default/conf/master-accessor-service-conf.xml

/opt/APG/Tools/WhatIf-Scenario-CLI/Default/conf/whatif-scenario-cli-conf.xml
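A scripted sketch of this replacement, assuming the primary backend resolves as backend.example.com (a placeholder) and that localhost:53306 appears only in the MySQL connection strings of these files:

BACKEND=backend.example.com   # placeholder: primary backend DNS name
for f in \
    /opt/APG/Web-Servers/Tomcat/Default/conf/server.xml \
    /opt/APG/Tools/Frontend-Search/Default/conf/frontend-search.xml \
    /opt/APG/Tools/Frontend-Report-Generator/Default/conf/report-generation-config.xml \
    /opt/APG/Tools/Administration-Tool/Default/conf/master-accessor-service-conf.xml \
    /opt/APG/Tools/WhatIf-Scenario-CLI/Default/conf/whatif-scenario-cli-conf.xml; do
    sed -i "s|localhost:53306|${BACKEND}:53306|g" "$f"
done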

Step 3: Set the non-MySQL localhost connections to point to the new primary backend:

a. Locate the string localhost:48443 and replace the hostname in the following file:

/opt/APG/Web-Servers/Tomcat/Default/conf/Catalina/localhost/APG.xml

b. Locate the string localhost:52569 and replace the hostname in the following file:

/opt/APG/Web-Servers/Tomcat/Default/conf/Catalina/localhost/alerting-frontend.xml

c. Locate the string localhost:52569 and replace the hostname in the following file:

/opt/APG/Tools/Frontend-Report-Generator/Default/conf/report-generation-config.xml

Step 4: Increase the maximum memory that the Tomcat server can use by locating memory.max= in the following file and setting it to something like memory.max=6144:

/opt/APG/Web-Servers/Tomcat/Default/conf/unix-service.properties

Step 5: For each apg database, the following line must be added to give access to the timeseries database:

<ResourceLink name="jdbc/APG-DB[1..n]:{ cachegrp=DB }" global="jdbc/APG-DB[1..n]" type="javax.sql.DataSource"/>

For example:

<!-- Gives access to the timeseries database. -->
<ResourceLink name="jdbc/APG-DB:{ cachegrp=DB }" global="jdbc/APG-DB" type="javax.sql.DataSource"/>
<ResourceLink name="jdbc/APG-DB4:{ cachegrp=DB }" global="jdbc/APG-DB4" type="javax.sql.DataSource"/>
<ResourceLink name="jdbc/APG-DB3:{ cachegrp=DB }" global="jdbc/APG-DB3" type="javax.sql.DataSource"/>
<ResourceLink name="jdbc/APG-DB2:{ cachegrp=DB }" global="jdbc/APG-DB2" type="javax.sql.DataSource"/>
<ResourceLink name="jdbc/APG-DB1:{ cachegrp=DB }" global="jdbc/APG-DB1" type="javax.sql.DataSource"/>

This update must be made in each of the following files:

/opt/APG/Web-Servers/Tomcat/Default/conf/Catalina/localhost/APG.xml
/opt/APG/Web-Servers/Tomcat/Default/conf/Catalina/localhost/centralized-management.xml
/opt/APG/Web-Servers/Tomcat/Default/conf/Catalina/localhost/APG-WS.xml
/opt/APG/Web-Servers/Tomcat/Default/conf/Catalina/localhost/alerting-frontend.xml

Step 6: For the data management tool to communicate with the timeseries backends, define one entry per available timeseries database in the /opt/APG/Web-Servers/Tomcat/Default/conf/Catalina/localhost/APG.xml file.

For example:

<Resource name="mgmt/APG-DB[1..n]" factory="org.apache.naming.factory.BeanFactory" disableSSLValidation="true" type="com.watch4net.apg.v2.modules.admin.WebServiceCommunication" url="https://ADDITIONAL BACKEND DNS NAME:48443/Backends/APG-Backend/apg[1..4]" user="admin" password="{691BF52FA42525C8E1EE3FA889C50B0E0DCFB3F14A9B94201A83FA814627027B46967830489991 839509D907F9A533D1B778C4B7E5562130E2A8D5E318F8F892}"/>

Note: The default credentials for the Gateway Web Services are “admin” with the password “changeme.”

Step 7: In the /opt/APG/Tools/Frontend-Search/Default/conf/frontend-search.xml file, add an entry like the following for each apg database:

<datasource id="APG-DB[1..n]:{ cachegrp=DB }" driverClassName="com.mysql.jdbc.Driver" url="jdbc:mysql://BACKEND DNS NAME:53306/apg[1..4]?autoReconnect=true" username="apg" password="watch4net" maxActive="30" maxIdle="5" validationQuery="SELECT 1" testOnBorrow="false" testWhileIdle="true" timeBetweenEvictionRunsMillis="10000" minEvictableIdleTimeMillis="60000" maxWait="10000" removeAbandoned="true" removeAbandonedTimeout="60" logAbandoned="true"/>

Step 8: In the /opt/APG/Tools/Frontend-Report-Generator/Default/conf/report-generation-config.xml file, add an entry like the following for each apg database:

<datasource id="APG-DB[1..n]:{ cachegrp=DB }"

Page 11: Using the Binary Installer to Scaleout the ViPR SRM ......Primary backend server — A typical backend server installation with one backend instance named "Default" attached to a MySQL

11 Using the Binary Installer to Scaleout the ViPR SRM Environment

driverClassName="com.mysql.jdbc.Driver" url="jdbc:mysql://BACKEND DNS NAME:53306/apg[1..n]?autoReconnect=true" username="apg" password="watch4net" maxActive="30" maxIdle="5" validationQuery="SELECT 1" testOnBorrow="false" testWhileIdle="true" timeBetweenEvictionRunsMillis="10000" minEvictableIdleTimeMillis="60000" maxWait="10000" removeAbandoned="true" removeAbandonedTimeout="60" logAbandoned="true"/>

Step 9: In the /opt/APG/Web-Servers/Tomcat/Default/conf/server.xml file, add an entry like the following for each apg database:

<Resource name="jdbc/APG-DB[1..n]" auth="Container" type="javax.sql.DataSource" maxActive="10" maxIdle="10" validationQuery="SELECT 1" testOnBorrow="false" testWhileIdle="true" validationQueryTimeout="5" timeBetweenEvictionRunsMillis="10000" minEvictableIdleTimeMillis="60000" maxWait="10000" username="apg" password="watch4net" driverClassName="com.mysql.jdbc.Driver" removeAbandoned="true" removeAbandonedTimeout="60" logAbandoned="true" url="jdbc:mysql://BACKEND DNS NAME:53306/apg[1..n]?autoReconnect=true"/>

Step 10: Start the frontend:

service apg-services start
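To confirm that the frontend modules came up, the status command used on the backends applies here as well:

/opt/APG/bin/manage-modules.sh service status all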

Step 11: Register the servers:

a. On the frontend, go to Centralized Management > Configuration, and then click Register a Server. Register all of the APG servers that are up and running. The important settings are:

Server HostName = the DNS name of the server

Gateway URL = https://<server DNS name>:48443/

user name = admin


password = changeme

b. Click Save.

c. Once you have registered all of the servers, they should display in Centralized Management > Physical Overview without any error messages.

Install and configure the collector

Step 1: Complete the default collector installation as described in the "Install EMC ViPR SRM Using the Binary Installer" article found at: https://community.emc.com/docs/DOC-40525.

Step 2: Start the collector:

/opt/APG/bin/manage-modules.sh service start all

Install and configure the Load Balancer Connectors

For the configuration described in this document, you must install and configure Load Balancer Connectors (LBCs) on all of the ViPR SRM Collector servers.

The Load Balancer Arbiter is installed on the Primary Backend with the listener port of 2020.

The Load Balancer Connectors are installed on the Collector hosts.

The LBC listens locally on port 2020 on the collector server and communicates with the Load Balancer Arbiter on the Primary Backend, which also uses port 2020.
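Once the LBC is running, a quick generic Linux check (not a ViPR SRM tool) confirms that something is listening on port 2020 on the collector:

netstat -tlnp | grep 2020   # or: ss -tlnp | grep 2020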

Step 1: During the installation of the LBC, the script asks for the following information:

You must enter the DNS name of the primary backend server where the Arbiter is installed.

Step 2: The collector server has the LBC installed by default, but it must be reconfigured to point to the Primary Backend and its Arbiter:

a. In Centralized Management > SolutionPacks > Other Components, filter on the collector name, and click the reconfigure button for the LBC.

b. Update all of the host names so they point to the primary backend.

c. Ensure that the Arbiter Web-Service Instance name is the same name that the Arbiter is using.


Step 3: After you have installed the LBC and set up the Arbiter, check the logs under /opt/APG/Collecting/Collector-Manager/Load-Balancer/logs to verify that all of the APG backend instances were correctly installed. For example:

# cat /opt/APG/Collecting/Collector-Manager/Load-Balancer/logs/collecting-0-0.log | grep "LoadFactorDecision::isReady"

INFO -- [2014-06-05 12:28:44 EDT] -- LoadFactorDecision::isReady(): Starting decision with current load factor of:

INFO -- [2014-06-05 12:28:44 EDT] -- LoadFactorDecision::isReady(): Backend1 0.0/750000.0 (0.0)

INFO -- [2014-06-05 12:28:44 EDT] -- LoadFactorDecision::isReady(): Backend2 0.0/750000.0 (0.0)

INFO -- [2014-06-05 12:28:44 EDT] -- LoadFactorDecision::isReady(): Backend3 0.0/750000.0 (0.0)

INFO -- [2014-06-05 12:28:44 EDT] -- LoadFactorDecision::isReady(): Backend4 0.0/750000.0 (0.0)

INFO -- [2014-06-05 12:28:44 EDT] -- LoadFactorDecision::isReady(): Backend0 595.0/750000.0 (7.933333333333333E-4)

Step 4: Look for any error messages that might indicate that there is something wrong with the configuration. If you find errors, redo the reconfiguration procedure.
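One way to scan the same log for problems, assuming the usual SEVERE, WARNING, and ERROR level strings appear in its entries:

grep -iE "severe|warning|error" /opt/APG/Collecting/Collector-Manager/Load-Balancer/logs/collecting-0-0.log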

Configure the load balancer on the primary backend

Requirements for the LBC:

Instance name must be changed from the default. Changing the name to “LBC” will make the instance easy to identify.

Connection must point to localhost or the fully qualified DNS (FQDN) name

Listening port is 2020

The Arbiter must have a list of apg (timeseries) databases and their management services. To create the list, reconfigure the load-balancer-arbiter on the primary backend.



The following table describes each field in the SolutionPack Reconfiguration window for the load-balancer-arbiter:

Socket Connector port: The TCP port on the Primary Backend on which the Arbiter accepts remote connections from all LBCs.

APG Backend hostname: The hostname of the server where the apg database and its backend service are running. In this installation the possible options are backend and backend2. Do not use "localhost" for the default apg on the primary backend.

APG Backend data port: Each apg database has a backend, and each backend has its own TCP port for receiving raw data. The port must be unique only within the server. Refer to the "Install and configure the additional backend" section. In this installation the ports are 2000, 2100, 2200, 2300, and 2400.

Web-Service Gateway: Each APG server has a Web-Service Gateway. This hostname must point to the APG server running the backend service.

Backend Web-Service Instance: The backend instance name, that is, the name of the backend service. In this deployment, the possible values are Default (the default backend instance name on the primary backend), apg1, apg2, apg3, and apg4.

Backend database type: MySQL by default.

Backend database hostname: The hostname where the MySQL database is running. By default, it is equal to the "APG Backend hostname."

Backend database port: The port on which MySQL is accepting remote TCP connections. By default, it is 53306.

Backend database name: The database name used in MySQL. For example, apg, apg1, apg2.

Backend database username: The user configured in MySQL. The default is "apg".

Backend database password: The default password for the MySQL user is "watch4net".

Add an entry with these settings for each apg database.

Install the health collectors

The Health Collector on the primary backend must point to the local LBC and its port. In this installation, the port is 2020.

The first installation of the SolutionPack will install the Health Reports on the frontend server.

In the “Select the components to install” window:

The alerts server should be where the collector is located. In some cases, this module can be skipped.

The data collection is the local LBC.

The reports are on the frontend server.

In the “Data collection” window:

Data collection is the local LBC on port 2020.

Alerting is on the primary backend.


Summary

Using the information in this article you should be able to manually install a ViPR SRM solution that consists of a frontend, backend, additional backend, and one collector.