Acision Message Application Framework MAF_R1.0-02
Installation and Configuration Manual

Document Version: 1.0
Document Status: ISSUED
Document Release Date: January 2010
Approved by:



Page 2: MAF_R1.0-02_ICMAN
Status: ISSUED
Page 2 of 64
Version: 1.0

Copyright © Acision BV 2009-2010

All rights reserved. This document is protected by international copyright law and may not be reprinted, reproduced, copied or utilised in whole or in part by any means including electronic, mechanical, or other means without the prior written consent of Acision BV.

Whilst reasonable care has been taken by Acision BV to ensure the information contained herein is reasonably accurate, Acision BV shall not, under any circumstances be liable for any loss or damage (direct or consequential) suffered by any party as a result of the contents of this publication or the reliance of any party thereon or any inaccuracy or omission therein. The information in this document is therefore provided on an "as is" basis without warranty and is subject to change without further notice and cannot be construed as a commitment by Acision BV.

The products mentioned in this document are identified by the names, trademarks, service marks and logos of their respective companies or organisations and may not be used in any advertising or publicity or in any other way whatsoever without the prior written consent of those companies or organisations and Acision BV.


Table of Contents

Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

1 Installation Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11

1.1 Prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11

1.1.1 Required Documentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11

1.2 Tasks Overview. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11

2 Installing NGP-R1.2-01 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13

2.1 Installing NGP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13

2.2 Setting Up External Storage. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14

2.3 Post NGP Installation Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16

2.4 Open Files Limit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17

2.5 SNMP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17

3 Installing the Base Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19

3.1 Updating the NGP YUM Repository . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20

3.2 Creating the MAF User and Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21

3.3 Installing the 32-bit glibc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21

3.4 Installing JDK. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21

3.5 Installing the GlassFish V2.1 Application Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22

3.5.1 Cluster Tuning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22

3.5.2 Initializing GlassFish Single Node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23

3.5.3 Initializing a GlassFish Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24

3.5.3.1 Initializing the Primary Cluster Node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24

3.5.3.2 Secondary Cluster Nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24

3.6 Password File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25

3.7 The Admin Console . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26

3.8 GlassFish Startup Script. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26

3.8.1 Enabling the GlassFish Startup Script . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26

4 Setting Up the Temporal Data Store . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27

4.1 Installing the TDS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27

4.2 Starting the TDS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28

4.3 Setting Up the TDS Statistics Script . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28

4.4 Setting Up the TDS Backup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29


4.4.1 Setting Up Periodic TDS Backups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29

5 Continuing the Installation of MAF R1.0-02 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31

5.1 Preparing GlassFish for Additional MAF Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31

5.1.1 Set Up the GBG JMS Resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32

5.2 Installing the MAF Components. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33

5.3 Deploying the maf-core EAR File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34

5.4 The Default maf.properties File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35

5.5 Configuring the HAS Client . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36

5.6 Setting Up Libraries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36

5.7 Setting Up Connectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37

5.8 Configuring the HTTP Binding Component . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38

5.9 Deploying and Configuring the JBI Binding Component . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39

5.10 Setting Up the Service Assemblies for JBI Binding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40

6 Installing and Configuring MAF SNMP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43

7 Verifying the MAF Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45

7.1 Running the RFA Tool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45

7.1.1 Sample RFA Tool Outputs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47

7.1.2 Error Messages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49

A. Configuration Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51

A.1 The maf.properties File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51

A.2 The ldap.xml File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58

Abbreviations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61

Glossary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62

References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63

Version History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64


List of Tables

Table 3-1: List of required RPM files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19

Table 7-1: Possible Web Server Error Messages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49

Table 7-3: RFA Tool Error Messages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49

Table A-1: MAF Foundational Component Parameters in maf.properties File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51


Preface

Purpose

A number of items must be installed and configured on a system to establish a GlassFish service ready for Acision Message Application Framework MAF_R1.0-02 services. The purpose of this document is to describe the installation procedure for the base and vendor environment software components that make up the core platform of the Message Application Framework (MAF) product. The manual covers both cluster and single-node configurations.

Audience

The target audience of this document includes the following personnel.

• Personnel responsible for installing, configuring, and integrating the Acision Message Application Framework product.

• MAF and other product development teams

• System testers

Scope

This document describes detailed, step-by-step procedures to install and configure the MAF Services applications and the underlying foundational components on which they depend.

Organisation

This document contains the following chapters.

• Chapter 1: “Installation Overview”—Provides a brief introduction to the MAF installation and configuration process.

• Chapter 2: “Installing NGP-R1.2-01”—Provides instructions for installing the Next Generation Platform (NGP) for MAF.

• Chapter 3: “Installing the Base Components”—Provides instructions for installing the base software used by MAF, such as JDK and Glassfish.

• Chapter 4: “Setting Up the Temporal Data Store”—Provides instructions for installing the temporal data store and starting the memcachedb process.

• Chapter 5: “Continuing the Installation of MAF R1.0-02”—Provides instructions for completing the installation and configuration of MAF.

• Chapter 6: “Installing and Configuring MAF SNMP”—Provides instructions for installing and configuring SNMP for MAF.

• Chapter 7: “Verifying the MAF Installation”—Provides instructions for running the RFA tool to verify your installation.

• Appendix A: “Configuration Files”—Contains descriptions of the parameters in the maf.properties and the ldap.xml configuration files.


In addition to the above chapters, this document contains the following back matter:

• A list of abbreviations

• A glossary

• A list of references

• A version history

Typographic Conventions

This document uses the typographic conventions listed in the following table:

Typeface/Symbol Meaning/Used For

Bold type Used in procedures for buttons, tabs, user entries, and some other elements in graphical user interfaces (GUIs), and for the names of keyboard and telephone keys. It is also used for text references to commands that a user enters.

Text enclosed in [square brackets] Used in text references to present optional elements in command input.

Italic text Used in text references to command input to represent non-literal text to be filled in by the user (for example, phone number or host name). It is also used for emphasis and for the names of documents.

Quotation marks Used for the names of sections within a document and to set off words used in a special sense.

Courier fixed-width type Used in screen samples to present system prompts, and for text file content.

Courier fixed-width bold type Used in screen samples to present text that a user enters at a system prompt.

<Courier fixed-width> in angle brackets Used in screen samples to represent non-literal text to be filled in by the user (for example, <password> or <host name>).


Note Means reader take note. Notes contain helpful suggestions or references to material not covered in the manual.

Caution Means reader be careful. In this situation, you might do something that could result in equipment damage or loss of data.

\ Denotes line continuation; the character should be ignored as the user types the example, and Enter should only be pressed after the last line.

Examples:

% grep searchforthis \
data/*.dat

- Bridges two keystrokes that should be pressed simultaneously.

If Ctrl-C does not work, use Ctrl-Alt-Del.



1 Installation Overview

The Acision Message Application Framework (MAF) forms the execution environment for Acision product services. MAF is deployed on the Next Generation Platform (NGP-R1.2-01), which provides a common operating system and network structure for Acision products, and can be set up in a clustered or single-node configuration. The MAF software is installed in a GlassFish Application Server environment; the Acision MAF software includes an RPM package that installs the required GlassFish server. After the MAF GlassFish server is configured and operational, the remaining MAF software, all of which depends on that server, is installed on top of it.

1.1 Prerequisites

Before you set up and install MAF, you must perform the following tasks.

• Ensure that the hardware necessary for the deployment has been set up and cabled properly.

• Obtain the required documentation.

1.1.1 Required Documentation

In addition to this manual, you will need the following documents to install and configure MAF.

• Next Generation Platform NGP-R1.2-01 Installation & Configuration Manual

• Next Generation Platform NGP-R1.2-01: Installation, Configuration & Operation Manual for Peripherals

• Next Generation Platform NGP-R1.2-01: Installation, Configuration & Operation Manual for HP Servers

• Next Generation Platform NGP-R1.2-01 Release Notes Document.

1.2 Tasks Overview

You perform the following tasks to install and configure MAF:

• Install the Next Generation Platform (NGP) software. See Chapter 2: “Installing NGP-R1.2-01” starting on page 13.

• Install the base components used by MAF. This includes updating the NGP YUM repository, creating the MAF user and group, installing the 32-bit glibc, installing JDK, and installing and initializing the GlassFish server. See Chapter 3: “Installing the Base Components” starting on page 19.

• Set up the data store, which includes installing the temporal data store and starting the memcachedb process. See Chapter 4: “Setting Up the Temporal Data Store” starting on page 27.

• Complete the installation of the MAF components in the Glassfish environment. See Chapter 5: “Continuing the Installation of MAF R1.0-02” starting on page 31.

• Install and configure the MAF SNMP components. See Chapter 6: “Installing and Configuring MAF SNMP” starting on page 43.

• Verify the installation by running the RFA tool. See Chapter 7: “Verifying the MAF Installation” starting on page 45.


2 Installing NGP-R1.2-01

The installation and setup of the Acision Message Application Framework is initiated with the YUM/CMA host setup using NGP documentation. In conjunction with the NGP documentation, you perform some additional steps for the Acision Message Application Framework on the NGP.

2.1 Installing NGP

To install NGP-R1.2-01 on each MAF host, follow the Next Generation Platform NGP-R1.2-01 Installation & Configuration Manual. As you follow the instructions in that manual, apply the following additional notes that are specific to MAF.

• Disk partitioning

MAF R1.0-02 works with the default NGP-R1.2-01 disk partitioning.

The Acision Directory Server (LDAP) may optionally be deployed on the same machine as MAF. In this case, it is necessary to create a separate disk partition for the LDAP data (/ds) and reserve sufficient disk space for it. A rough estimate is 6 GB per million Message Plus subscribers; for example, 5 million subscribers would require roughly 30 GB.

• Network interfaces

Before you start deploying NGP-R1.2-01, configure the network interfaces in the configuration file ngp-node.ini. See the example below.

The IP network of MAF consists of the following two logical segments.

– The Messaging segment (name=traffic) carries service-affecting traffic, for instance messages in any format: SMS, e-mails, and so forth.

– The OAM segment (name=oam) carries management traffic, for example, SNMP, Billing file transfer, provisioning, etc.

Both logical segments must be implemented redundantly. Ethernet bonding is used to provide a highly available network connection: two network interface cards (NICs) are bonded into a single virtual interface to provide redundancy for each configured LAN.

The following example from the ngp-node.ini file is for a server with 6 Ethernet cards.

[NIC_00]
name=oam
device=bond0
bonding=master
ipaddr=10.227.159.15
netmask=255.255.255.192

[NIC_01]
name=traffic
device=bond1
bonding=master
ipaddr=10.227.159.134
netmask=255.255.255.224

[NIC_02]
device=eth0
bonding=slave
bonding_master=bond0

[NIC_03]
device=eth1
bonding=slave
bonding_master=bond0

[NIC_04]
device=eth2
bonding=slave
bonding_master=bond1

[NIC_05]
device=eth3
bonding=slave
bonding_master=bond1

[NIC_06]
name=br
device=bond0.2
ipaddr=10.227.159.199
netmask=255.255.255.192
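Once NGP has brought the bonded interfaces up, the kernel's bonding state can be inspected under /proc/net/bonding. The following sketch is a verification aid only, not part of the official NGP procedure; the bond0/bond1 names are taken from the example above.

```shell
# Print key bonding details for an interface, or a notice if the bond
# does not exist on this host (names follow the ngp-node.ini example).
check_bond() {
  local f="/proc/net/bonding/$1"
  if [ -f "$f" ]; then
    grep -E 'Bonding Mode|Slave Interface|MII Status' "$f"
  else
    echo "$1: not configured on this host"
  fi
}

check_bond bond0   # oam segment
check_bond bond1   # traffic segment
```

Each configured bond should report its mode and two slave interfaces with MII status "up".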

2.2 Setting Up External Storage

External storage is required for a clustered TDS. When you are setting up external storage, use the following TDS-specific instructions and notes.

• Using the NGP MSA2000 procedures, make two virtual volumes available to both HBA WWPNs of the TDS server hosts as follows.

volume name: tdsdbs<n> - size: <see-dimensioning-model> (example: 10 GB)

volume name: tdsdblog<n> - size: <see-dimensioning-model> (example: 5 GB)

• Use a RAID level and number of disks appropriate to the IOPS required by the dimensioning model, for example, RAID 10 with 8 disks.

• A clustered TDS requires that you follow the NGP Fiber Storage procedure on the nodes running the TDS server (by convention, the lowest instance numbered nodes).

• Prior to performing the NGP storage (device mapper) multipathing procedure, confirm that the qla driver is operating in failover mode with the following command as root:

cat /proc/scsi/qla2xxx/[01] | grep Driver

Expect output including a driver version suffix of -fo, which indicates the failover mode of the driver:

Firmware version 4.06.03 [IP] [84XX], Driver version 8.02.23-fo

Firmware version 4.06.03 [IP] [84XX], Driver version 8.02.23-fo

• On the TDS server nodes, perform the device mapper multipathing and storage-config procedures using the following configuration:

– Use tds_dbs as the friendly name for the tdsdbs virtual volume LUN WWID.

– Use tds_dblog as the friendly name for the tdsdblog virtual volume LUN WWID.

Page 15: MAF_R1.0-02_ICMAN

Status: ISSUEDPage 15 of 64

MAF_R1.0-02_ICMANCopyright © Acision BV 2009-2010

Version: 1.0

Note The qla HBA driver is set up using NGP procedures to operate in failover (fo) mode, meaning that the qla driver manages failover between the primary and secondary paths to the LUNs. This failover mode presents only one SCSI device node for each LUN to the host. Thus, the device mapper multipathing will only detect and manage a single path. This is working as designed. A future release of NGP will set up the device mapper multipathing to manage more than a single path per LUN.

• Use the following storage configuration in the ngp-storage-phys.conf file (as used by the ngp-storage tool).

storage:msa = {
devices:/dev/mapper/tds_dbs|/dev/mapper/tds_dblog
shared:false
}
local:/dev/mapper/tds_dbs = {
tdsdbs:ext3(-E stride=64 -O dir_index):1+
}
local:/dev/mapper/tds_dblog = {
tdsdblog:ext3(-E stride=64 -O dir_index):1+
}

Note The "-E stride=64" is designed for use with a LUN corresponding to a volume that resides on an 8-disk RAID10 virtual disk of an HP MSA 2000 storage array. The stride value is 16*n, where n is the number of independent spindles in the virtual disk. For an 8-disk RAID10, n is 4, so 16*4 yields 64. You must use a stride value appropriate to the design of the virtual disk upon which your LUN's volume resides.
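The note's arithmetic can be sketched directly; the spindle count below is the 8-disk RAID10 case from the example and must be replaced with the value for your own virtual disk.

```shell
# stride = 16 * n, where n is the number of independent spindles.
# An 8-disk RAID10 has 4 independent spindles, giving stride 64.
spindles=4
stride=$((16 * spindles))
echo "mkfs option: -E stride=$stride"
```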

• After you have completed the storage configuration procedure, perform the following steps:

– As root, create the mount points with the following command:

mkdir -p /tds/dbs /tds/dblog

– Add the following lines to the end of /etc/fstab:

/dev/mapper/tds_dbs1 /tds/dbs ext3 noatime,nodiratime 1 2

/dev/mapper/tds_dblog1 /tds/dblog ext3 noatime,nodiratime 1 2

– Mount each with

mount /tds/dbs

mount /tds/dblog

Note Later during the MAF procedures you will need to recursively change the user/group ownership of the /tds/ tree to mafadmin:mafgrp, using the following command as root: chown -R mafadmin:mafgrp /tds/.
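Before moving on, it is worth confirming that both TDS filesystems are actually mounted. This sketch (not part of the official procedure) looks the mount points up in /proc/mounts:

```shell
# Report whether a path is currently a mounted filesystem by looking
# for it in /proc/mounts (mount points from the steps above).
check_mount() {
  if grep -qs " $1 " /proc/mounts; then
    echo "$1: mounted"
  else
    echo "$1: NOT mounted"
  fi
}

check_mount /tds/dbs
check_mount /tds/dblog
```

Both paths should report "mounted" before you continue with the MAF procedures.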

Page 16: MAF_R1.0-02_ICMAN

Status: ISSUEDPage 16 of 64

MAF_R1.0-02_ICMANCopyright © Acision BV 2009-2010

Version: 1.0

2.3 Post NGP Installation Tasks

After you have deployed NGP on all target MAF nodes, perform the following procedure.

Step 1 Log in to the MAF host(s) as admin with the password system indicated in the NGP installation guide. From there, you can su (switch user) to root / Lcmanager as needed.

Step 2 On all MAF nodes, verify that the /etc/hosts file contains IP addresses and hostnames of all network interfaces, for example, oam and traffic.

When NGP-R1.2-01 is successfully deployed on the target MAF node, the ngp-firstboot script automatically updates the /etc/hosts file with the values previously configured in the ngp-node.ini file.

Here is an example of an automatically generated /etc/hosts file:

# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 localhost.intinfra.com localhost
10.227.131.142 bcsib62.intinfra.com bcsib62-oam.intinfra.com bcsib62 bcsib62-oam
172.17.8.142 bcsib62-traffic.intinfra.com bcsib62-traffic

Step 3 On each MAF cluster node, update the /etc/hosts file.

You must manually synchronize the /etc/hosts file on each cluster node so that it contains the IP addresses and hostnames of all other nodes in the same cluster. For each node, add two lines to the /etc/hosts file: one for oam and one for traffic.

In the following example, we have one primary (bcsib61) and one secondary node (bcsib62). The IP addresses and hostnames of the primary and secondary node should be added to the /etc/hosts file of both nodes (primary/secondary).

# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 localhost.intinfra.com localhost
# primary cluster node
10.227.131.141 bcsib61.intinfra.com bcsib61-oam.intinfra.com bcsib61 bcsib61-oam
172.17.8.141 bcsib61-traffic.intinfra.com bcsib61-traffic
# secondary cluster node
10.227.131.142 bcsib62.intinfra.com bcsib62-oam.intinfra.com bcsib62 bcsib62-oam
172.17.8.142 bcsib62-traffic.intinfra.com

Note that:

• The localhost entry in the /etc/hosts is mandatory.

• Short hostnames and fully qualified domain names (FQDN) are both mandatory in the /etc/hosts file.

• The lines for all network interfaces (for example, oam and traffic) must be copied to all nodes.
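After synchronizing /etc/hosts, each hostname should resolve on every node. A quick check with getent can be sketched as follows; the hostnames are the example's and are assumptions for your deployment.

```shell
# Verify that a hostname resolves via the configured name sources
# (including /etc/hosts); hostnames are from the example above.
check_host() {
  if getent hosts "$1" >/dev/null 2>&1; then
    echo "$1: resolves"
  else
    echo "$1: does NOT resolve"
  fi
}

check_host localhost
check_host bcsib61-oam
check_host bcsib61-traffic
```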

Page 17: MAF_R1.0-02_ICMAN

Status: ISSUEDPage 17 of 64

MAF_R1.0-02_ICMANCopyright © Acision BV 2009-2010

Version: 1.0

Step 4 Verify the network interfaces.

a. On the primary node, use the ping command to verify that you can access all secondary nodes via their default hostnames, for example:

# ping hostname1

# ping hostname2

b. On the secondary node, use the ping command to verify that you can access the primary node via the default hostname, for example:

# ping hostname1

c. Repeat step b for all secondary nodes.

2.4 Open Files Limit

Increase the open files limit from 1024 to 8192 on all nodes. As user root, add the following lines to /etc/security/limits.conf:

mafadmin soft nofile 8192
mafadmin hard nofile 8192
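The limits.conf change takes effect for new login sessions, so a fresh shell as mafadmin should then report the raised limit. A minimal check, run in the session you want to verify:

```shell
# Report the per-process open files limit of the current shell; after
# the limits.conf change and a new login as mafadmin, expect 8192.
nofile_limit=$(ulimit -n)
echo "open files limit: $nofile_limit"
```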

2.5 SNMP

MAF SNMP features depend on the NGP-R1.2-01 SNMP agent, so the optional SNMP procedure in the Next Generation Platform NGP-R1.2-01 Installation & Configuration Manual is mandatory for a MAF installation. Performing the optional HP PSP procedure is also a best practice if you are using HP hardware.


3 Installing the Base Components

You can install MAF either on one machine (single-node version) or on multiple machines (cluster version). This chapter is common to both versions; in the cluster version, perform all steps in this chapter on all nodes. The installation steps that follow assume you have all the required RPM files listed in Table 3-1 available for subsequent transfer into the YUM repository of the NGP management server.

Table 3-1 List of required RPM files

RPM File Contents

glibc-2.3.4-2.43.i686.rpm Standard GNU C libraries (32-bit) required for Java

maf_jdk1.6.0_32bit-<version>.rpm Java installation RPM (must be the 32-bit version)

maf_glassfishEsb_2.1GA-<version>.rpm Glassfish installation RPM

maf-bdb-<version>.rpm Database backend for MemcacheDB for TDS

maf-memcachedb-<version>.rpm MemcacheDB installation RPM for TDS

maf-components-core-<version>.rpm Core MAF Components (e.g. Configuration)

maf-components-cf-<version>.rpm MAF Components Connector Framework

maf-components-ejb-<version>.rpm MAF Components EJB Module

maf-connectivity-common-<version>.rpm MAF Connectivity Common Libraries

maf-connectivity-api-<version>.rpm MAF Connectivity API Libraries

maf-connectivity-ldap-<version>.rpm MAF Connectivity LDAP Protocol Core

maf-connectivity-ldap-ejb-<version>.rpm MAF Connectivity LDAP EJB Interface

maf-has-<version>.rpm MAF High Availability Server Core

maf-has-api-<version>.rpm MAF High Availability Server API

maf-core-common-<version>.rpm MAF Connector Core Libraries

maf-core-gbg-<version>.rpm MAF GBG Connector

maf-core-hf-<version>.rpm MAF Connector Handler Framework

maf-core-tds-ejb-<version>.rpm EJB and Web Service for the TDS

maf-core-smpp-<version>.rpm MAF SMPP Connector

maf-core-smtp-<version>.rpm MAF SMTP Connector

maf-core-hf-ejb-<version>.rpm MAF EJB Handler Framework

maf-core-jbi-binding-<version>.rpm MAF Core JBI Bindings


3.1 Updating the NGP YUM Repository

Step 1 Log in as root to the management server of your NGP installation. The following three NGP YUM repositories are already configured and accessible over the network: RHEL4.8, acision, 3rdparty.

Step 2 Go to the directory of the acision YUM repository:

# cd /var/yum/4AS/acision

Step 3 If needed, create the MAF subdirectory. This is a repository container underneath the acision repository.

# mkdir MAF

If this directory already exists, then make sure that the directory does not contain RPM packages from older MAF releases.

Step 4 Under the MAF subdirectory, create a subdirectory called R1.0-02

# cd MAF

# mkdir R1.0-02

If this directory already exists, then make sure that the directory (/var/yum/4AS/acision/MAF/<release>) does not contain RPM packages from older MAF releases.

Step 5 Copy all MAF packages (maf-*, maf_*, glibc-*) to the directory /var/yum/4AS/acision/MAF/<release>. This includes all the RPM files needed for the installation.
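The copy itself is a simple glob over the three file-name patterns from Table 3-1. The sketch below demonstrates the pattern on stand-in temporary directories (the real source is wherever you downloaded the RPMs; the real destination is /var/yum/4AS/acision/MAF/R1.0-02):

```shell
# SRC stands in for the directory holding the downloaded RPMs;
# DEST stands in for /var/yum/4AS/acision/MAF/R1.0-02.
SRC=$(mktemp -d)
DEST=$(mktemp -d)
# Mock files representing the three name patterns from Table 3-1.
touch "$SRC/maf-components-core-1.0.rpm" \
      "$SRC/maf_jdk1.6.0_32bit-1.0.rpm" \
      "$SRC/glibc-2.3.4-2.43.i686.rpm"
# Copy every MAF and glibc package into the repository directory.
cp "$SRC"/maf-*.rpm "$SRC"/maf_*.rpm "$SRC"/glibc-*.rpm "$DEST"/
ls "$DEST" | wc -l    # all packages should now be present
```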

Step 6 Rebuild NGP YUM repositories.

# ngp-repo-update -u --repo acision

ngp-repo-update: Updating index of repository 4AS/acision
ngp-repo-update: Creating repository control file in "/var/yum/4AS/acision".

Step 7 Log out from the NGP management server.

Step 8 Log in as root to the target MAF node(s).

Step 9 Make sure that the repository is properly set up. Execute the following commands on all nodes.

# ngp-yum list | egrep 'glibc.i686|maf'

Expect to see all the RPM packages (maf-*, maf_*, glibc-*). Here is an example:

glibc.i686 2.3.4-2.43 acision
maf-bdb.x86_64 1.0-02.00.B01 acision
maf-components-api.x86_64 1.0-02.00.B01 acision
maf-components-cf.x86_64 1.0-02.00.B01 acision
maf-components-core.x86_64 1.0-02.00.B01 acision
maf-components-ejb.x86_64 1.0-02.00.B01 acision
maf-connectivity-api.x86_64 1.0-02.00.B01 acision
maf-connectivity-common.x86_64 1.0-02.00.B01 acision
maf-connectivity-ldap.x86_64 1.0-02.00.B01 acision
maf-connectivity-ldap-ejb.x86_64 1.0-02.00.B01 acision
maf-core-common.x86_64 1.0-02.00.B01 acision
maf-core-gbg.x86_64 1.0-02.00.B01 acision
maf-core-hf.x86_64 1.0-02.00.B01 acision
maf-core-hf-ejb.x86_64 1.0-02.00.B01 acision
maf-core-jbi-binding.x86_64 1.0-02.00.B01 acision
maf-core-smpp.x86_64 1.0-02.00.B01 acision
maf-core-smtp.x86_64 1.0-02.00.B01 acision
maf-has.x86_64 1.0-02.00.B01 acision
maf-has-api.x86_64 1.0-02.00.B01 acision
maf-memcachedb.x86_64 1.0-02.00.B01 acision


maf_glassfishEsb_2.1GA.x86_64 1.0-02.00.B01 acision
maf_jdk1.6.0_32bit.x86_64 1.0-02.00.B01 acision
maf-connectivity-gbg.x86_64 1.0-02.00.B01 acision
maf-connectivity-gbg-ejb.x86_64 1.0-02.00.B01 acision
maf-jaf-core-hf-ejb.x86_64 1.0-02.00.B01 acision

3.2 Creating the MAF User and Group

Perform the following procedure on all MAF nodes.

Step 1 Log in as root on the MAF node.

Step 2 Create the mafgrp group.

# groupadd -g 1600 mafgrp

Step 3 Create the mafadmin user.

# useradd -d /home/mafadmin -m -g mafgrp -s /bin/bash -u 16000 mafadmin

# passwd mafadmin

<enter-the-password-as-prompted>

3.3 Installing the 32-bit glibc

Perform this procedure on all MAF nodes.

Step 1 Log in to the MAF host(s) as root.

Step 2 Install the i686 (32-bit) glibc library RPM by executing the following command. If the library is already installed, the command outputs "Nothing to do.", which is expected.

# ngp-yum install glibc.i686

3.4 Installing JDK

Perform this procedure on all MAF nodes.

Step 1 Log in to the MAF host(s) as admin with the password indicated in the Next Generation Platform NGP-R1.2-01 Installation & Configuration Manual. From there, you can su (switch user) to root / Lcmanager as needed.

Step 2 Install the JAVA VM from the RPM package by executing the following command.

# ngp-yum install maf_jdk1.6.0_32bit

The directory /opt/jdk1.6.0_14 is created and populated with the JDK contents; all files are owned by 'mafadmin:mafgrp'.

Step 3 Set up the JAVA environment variables.

a. As the root user, use the vi editor to edit (or create) the /etc/profile.d/java.sh script with the following contents:


export JAVA_HOME=/opt/jdk1.6.0_14/

export PATH=$JAVA_HOME/bin:$PATH

Subsequent logins will set the JAVA environment variables to their correct values.

b. Log out and then log back in again to source the new variables.
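A quick sanity check after logging back in can confirm the variables resolve as intended (this checks only the environment, not the JDK itself; running java -version additionally confirms the installed runtime):

```shell
# Re-create the java.sh settings and verify them.
export JAVA_HOME=/opt/jdk1.6.0_14/
export PATH=$JAVA_HOME/bin:$PATH
echo "$JAVA_HOME"
# Confirm the JDK bin directory is actually on PATH.
case ":$PATH:" in
  *":$JAVA_HOME/bin:"*) echo "PATH OK" ;;
  *) echo "PATH missing JDK bin" ;;
esac
```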

3.5 Installing the GlassFish V2.1 Application Server

To install the GlassFish Application Server, perform the procedure in this section on all MAF nodes.

Step 1 Install the GlassFishESB RPM as the root user:

# ngp-yum install maf_glassfishEsb_2.1GA

Step 2 Configure the profile script. As the root user, use the vi editor to edit the /etc/profile.d/glassfish.sh script with the following contents.

For single-node environment:

export GLASSFISH_HOME=/opt/glassfishEsb_2.1GA/glassfish

export GLASSFISH_TARGET=server

export PATH=$PATH:$GLASSFISH_HOME/bin

For all nodes in clustered environment:

export GLASSFISH_HOME=/opt/glassfishEsb_2.1GA/glassfish

export GLASSFISH_TARGET=cluster1

export PATH=$PATH:$GLASSFISH_HOME/bin

The /opt/glassfishEsb_2.1GA directory is created and populated with the GlassFish contents, all files owned by 'mafadmin:mafgrp', along with the other requisite files and directories. On subsequent logins, the PATH variable will contain the correct path to the GlassFish executables.

3.5.1 Cluster Tuning

Use the following procedure to tune Glassfish in a clustered environment.

Step 1 As the mafadmin user on the first cluster node, issue the following commands to tune glassfish:

asadmin delete-jvm-options -target=cluster1 "\-client"

asadmin delete-jvm-options -target=cluster1 "\-Xmx512m"

asadmin delete-jvm-options -target=cluster1 "\-XX\:MaxPermSize=128m"

asadmin create-jvm-options -target=cluster1 "\-server"

asadmin create-jvm-options -target=cluster1 "\-Xms3000m"

asadmin create-jvm-options -target=cluster1 "\-Xmx3000m"

asadmin create-jvm-options -target=cluster1 "\-XX\:+AggressiveHeap"


asadmin create-jvm-options -target=cluster1 "\-XX\:+AggressiveOpts"

asadmin create-jvm-options -target=cluster1 "\-XX\:+UseParallelGC"

asadmin create-jvm-options -target=cluster1 "\-XX\:+UseParallelOldGC"

asadmin create-jvm-options -target=cluster1 "\-XX\:ParallelGCThreads=5"

asadmin create-jvm-options -target=cluster1 "\-XX\:MaxPermSize=192m"

Step 2 Tune the maximum number of HTTP request-processing threads to 50.

asadmin set cluster1-config.http-service.request-processing.thread-count=50

Step 3 As the mafadmin user on each cluster node, issue the following commands to tune OpenESB.

• Tune HTTP BC Threads to 50 using the following commands on each cluster node:

sed -i -e 's/^OutboundThreads=.*/OutboundThreads=50/' \

/opt/glassfishEsb_2.1GA/glassfish/nodeagents/agent_*/instance_*/jbi/components/sun-http-binding/install_root/workspace/config.properties

• Tune BPEL SE Threads to 8. This value should match the number of processor cores in the server.

sed -i -e 's/^ThreadCount=.*/ThreadCount=8/' \

/opt/glassfishEsb_2.1GA/glassfish/nodeagents/agent_*/instance_*/jbi/components/sun-bpel-engine/install_root/workspace/config.properties
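Since the ThreadCount value should match the server's core count, it can be determined and substituted programmatically. A sketch, demonstrated on a stand-in file (on a real node the target is the sun-bpel-engine config.properties path shown above):

```shell
# Determine the number of online processor cores (standard on Linux).
CORES=$(getconf _NPROCESSORS_ONLN)
# CFG stands in for .../sun-bpel-engine/install_root/workspace/config.properties.
CFG=$(mktemp)
echo "ThreadCount=8" > "$CFG"
# Apply the same sed substitution as above, using the detected core count.
sed -i -e "s/^ThreadCount=.*/ThreadCount=${CORES}/" "$CFG"
grep '^ThreadCount=' "$CFG"
```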

Step 4 Execute the following commands as mafadmin on the primary cluster node to make the above configuration tuning effective.

asadmin stop-cluster cluster1

asadmin start-cluster cluster1

3.5.2 Initializing GlassFish Single Node

For cluster installations, skip this section.

Step 1 Log in as the mafadmin user and go to Glassfish home directory.

# su - mafadmin (if not already mafadmin)

> cd $GLASSFISH_HOME

>

Step 2 Create the default domain:

$ ./create-domain.sh --cluster false

>

Step 3 Log in as root and start the default domain:

# service maf-glassfish start

Starting maf-glassfish: [ OK ]

Step 4 Check the status of the GlassFish server:

# service maf-glassfish status


domain1 running.

The domain is running; the administration web console is available at http://<your_node_name>:4848

The default domain directory $GLASSFISH_HOME/domains/domain1 is created.

The browser should present the log-in page of the GlassFish administration web console.

Default administrator's user name = "admin"

Password = "adminadmin"

3.5.3 Initializing a GlassFish Cluster

For single node installation, skip this section.

3.5.3.1 Initializing the Primary Cluster Node

Step 1 Log in as the mafadmin user and go to the Glassfish home directory.

# su - mafadmin

> cd $GLASSFISH_HOME

>

Step 2 Create the default domain for the cluster and primary node.

$ ./create-domain.sh --cluster primary

>

The script asks you:

Default hostname of this machine is: bcsib61
You can choose different hostname for GlassFish node agent, for example: hostname-cluster in NGP environment.
Please enter hostname or accept default [bcsib61]:

Use the defaults, even in an NGP environment.

Step 3 Start the domain using the following asadmin command.

# su - mafadmin

> asadmin start-domain

Step 4 Ensure the domain is started using following asadmin command.

> asadmin list-domains

domain1 running
Command list-domains executed successfully.

3.5.3.2 Secondary Cluster Nodes

Repeat this section for all secondary nodes.

Step 1 Make sure that the domain on the primary cluster node is started.


Step 2 Log in as the mafadmin user and go to the Glassfish home directory.

# su - mafadmin

> cd $GLASSFISH_HOME

>

Step 3 Create the secondary node:

$ ./create-domain.sh --cluster secondary

>

The script asks you:

• If there are more than 2 nodes, you need to specify the sequential number of this secondary node. Enter a unique number between 1 and 16: 1 for the first secondary node, 2 for the second secondary node, and so on.

• Specify the hostname of the primary cluster node.

Enter the short hostname; do not enter the FQDN (fully qualified domain name).

• Default hostname of this machine is: bcsib62
You can choose different hostname for GlassFish node agent, for example: hostname-cluster in NGP environment.
Please enter hostname or accept default [bcsib62]:

Accept the default (the plain, unsuffixed short hostname) rather than entering a suffixed short hostname.

3.6 Password File

During the installation process, a user name and password are needed many times. Instead of using interactive mode and typing them each time, a password file with default credentials is installed from the RPM package. The default passwords.txt file is created automatically on installation.

If it is not present, perform the following procedure on all nodes.

Step 1 Edit the file passwords.txt in $GLASSFISH_HOME directory to change the default passwords:

$ vi $GLASSFISH_HOME/passwords.txt

Step 2 Change the following default passwords to new values:

AS_ADMIN_MASTERPASSWORD=changeit

AS_ADMIN_PASSWORD=adminadmin

AS_ADMIN_USERPASSWORD=xxx
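Since the file stores credentials in clear text, it is sensible (though not mandated by the installation) to restrict it to its owner. The chmod below is demonstrated on a stand-in file; on a real node apply the same mode to $GLASSFISH_HOME/passwords.txt:

```shell
# F stands in for $GLASSFISH_HOME/passwords.txt.
F=$(mktemp)
printf 'AS_ADMIN_MASTERPASSWORD=changeit\nAS_ADMIN_PASSWORD=adminadmin\n' > "$F"
# Owner read/write only; no access for group or others.
chmod 600 "$F"
stat -c '%a' "$F"    # shows the resulting octal mode
```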


3.7 The Admin Console

The Domain Administration Console (DAS), which is the common web interface to GlassFish, is now available on port 4848. You can use this interface to check various installation and configuration elements of the GlassFish Server as needed. Note that all the functionality available in the DAS is also accessible via the Linux command line.

3.8 GlassFish Startup Script

3.8.1 Enabling the GlassFish Startup Script

To configure the maf-glassfish server so that it starts automatically on each reboot of the system, you need to enable the maf-glassfish startup script. You must enable that script on all MAF nodes.

Step 1 Log in to the MAF node as root.

Step 2 For clustered systems only, modify the /etc/maf-1.0/config/maf-glassfish.options file on both nodes to include the DAS_HOSTNAME, AGENT_NAME and INSTANCE_NAME.

Example 3-1 Example of Primary Cluster Node

# Glassfish simplex and clustering roles.
DAS_HOSTNAME="<primaryClusterHostName>"
AGENT_NAME="agent_PRIM0"
INSTANCE_NAME="instance_PRIM0"
# NOTE: Non-cluster must use empty values for INSTANCE_NAME and AGENT_NAME.

# A setting of unlimited here will allow the Glassfish
# processes to dump core files. NOTE: Kernel level tuning
# also can affect this for suid processes. As such a setting of
# /sbin/sysctl -w kernel.suid_dumpable=2 >/dev/null 2>&1
# may also be needed.
DAEMON_COREFILE_LIMIT="unlimited"

Example 3-2 Example of Secondary Cluster Node

# Glassfish simplex and clustering roles.
DAS_HOSTNAME="<primaryClusterHostName>"
AGENT_NAME="agent_SEC1"
INSTANCE_NAME="instance_SEC1"
# NOTE: Non-cluster must use empty values for INSTANCE_NAME and AGENT_NAME.

# A setting of unlimited here will allow the Glassfish
# processes to dump core files. NOTE: Kernel level tuning
# also can affect this for suid processes. As such a setting of
# /sbin/sysctl -w kernel.suid_dumpable=2 >/dev/null 2>&1
# may also be needed.
DAEMON_COREFILE_LIMIT="unlimited"

Step 3 Execute the following command on both nodes.

# /sbin/chkconfig --add maf-glassfish


4 Setting Up the Temporal Data Store

As part of the MAF platform, a Temporal Data Store (TDS) server provides the service accessed via the High Availability Store (HAS) client API. The TDS server is an abstract service that is currently physically implemented by the memcachedb application server process using a bdb (Berkeley DB) embedded database. The HAS layer is installed on TDS clients later in this procedure.

Do not confuse the term HAS with Sun's High Availability DataBase (HADB) product or Sun's JavaDB (Derby) product. Neither HADB (part of the Glassfish Enterprise Profile product) nor JavaDB (included in the JDK and Glassfish) is specifically required by the MAF layer, not even when clustering is used. MAF uses the Cluster Profile of Glassfish, which uses ring-topology in-memory session replication rather than HADB.

JavaDB (Derby) is not a supported part of the MAF platform. Nonetheless, JavaDB is installed and available for use by higher-level applications that run on the MAF platform; however, engineering a suitable JavaDB solution is those applications' responsibility, not MAF's.

4.1 Installing the TDS

The first instance is the master; the second instance (if installed) and any subsequent instances are clients (slaves). Typical deployment scenarios include:

• Single-node environment: install only 1 TDS instance.

• Clustered environment with 2 MAF nodes:

– install master TDS instance on the primary MAF node

– install client (slave) TDS instance on the secondary MAF node

• Clustered environment with more than 2 MAF nodes:

– install master TDS instance on the primary MAF node

– install client (slave) TDS instance on the first secondary MAF node

– install client (slave) TDS instances on each subsequent MAF node, if needed

To install TDS, follow this procedure.

Step 1 Log in to the TDS node as root.

Step 2 Install the downloaded RPMs in the following order. Execute this command on all TDS nodes:

# ngp-yum install maf-bdb maf-memcachedb

Step 3 Run the ldconfig command to inform ld.so of the newly installed shared libraries so that it has the proper link bindings. Execute this command on all TDS nodes.

# /sbin/ldconfig


4.2 Starting the TDS

To start the TDS, you must start the memcachedb process using the following procedure on all TDS nodes.

Step 1 Log in as root.

Step 2 For clustered systems only, modify the /etc/maf-1.0/config/maf-tds.options file to include the NORMAL_MASTER_HOSTNAME and NORMAL_SLAVE_HOSTNAMES.

Example 4-1 Sample maf-tds.options File

# Configuration file for the maf-tds (memcachedb) service.

# TDS simplex and clustering roles.
NORMAL_MASTER_HOSTNAME="cdusio61"
NORMAL_SLAVE_HOSTNAMES="cdusio62"
# NOTE: Non-cluster must use empty value for NORMAL_SLAVE_HOSTNAMES.

# A setting of unlimited here will allow the HAS (memcachedb)
# processes to dump core files. NOTE: Kernel level tuning
# also can affect this for suid processes. As such a setting of
# /sbin/sysctl -w kernel.suid_dumpable=2 >/dev/null 2>&1
# may also be needed.
DAEMON_COREFILE_LIMIT="unlimited"

Step 3 Enable the maf-tds startup script by executing the following command.

/sbin/chkconfig --add maf-tds

Step 4 If you are running a clustered TDS, as root run the following command.

chown -R mafadmin:mafgrp /tds/

Step 5 If you are running a clustered TDS, as mafadmin run the following commands.

mkdir -p /tds/dbs/21201 /tds/dblog/21201

echo "set_lg_dir ../../dblog/21201" > /tds/dbs/21201/DB_CONFIG

Step 6 If you are running a clustered TDS, execute the following commands as root.

cd /

ln -s tds has

Step 7 Start the TDS service.

service maf-tds start

4.3 Setting Up the TDS Statistics Script

The memcache-stats.pl script retrieves the statistics that are built into the memcachedb data store and monitors the status of the TDS as a whole. You should set up a cron job to run this script automatically on a periodic basis.


When the memcache-stats.pl script runs, its output is logged to stats-<month>-<day>.log files. Refer to the Message Application Framework MAF_R1.0-01 Operator Manual for more information about the log files.

To automatically run the memcache-stats.pl script, perform the following procedure:

Step 1 Log in to each TDS node as root.

Step 2 Set up a cron job to periodically run the memcache-stats.pl script.

a. crontab -e

b. Add a crontab entry similar to the following.

0 * * * * /opt/memcachedb-1.2.0/script/memcache-stats.pl

This change is effective immediately. In the above example, the memcache-stats.pl script runs automatically at the start of every hour, every day of the week.

4.4 Setting Up the TDS Backup

Use the procedure in this section to set up the TDS to run backups.

Step 1 Log in to each TDS node as root.

Step 2 For single-node environments, perform the following steps.

mkdir -p /opt/tds-backup

chown mafadmin:mafgrp /opt/tds-backup

Step 3 For clustered environments, perform the following steps.

a. Confirm that the file /tds/dbs/21201/DB_CONFIG contains the proper relative-path based configuration for separate transaction log files using the following command as mafadmin.

cat /tds/dbs/21201/DB_CONFIG | egrep "^set_lg_dir"

Expect the following output.

set_lg_dir ../../dblog/21201

b. If you do not get the expected output in the above step, correct the configuration before proceeding. Then create the backup directories:

mkdir -p /opt/tds-backup/dbs /opt/tds-backup/dblog

chown -R mafadmin:mafgrp /opt/tds-backup

4.4.1 Setting Up Periodic TDS Backups

Perform the procedure in this section on each TDS node to ensure that automatic backups are created periodically. This is important because the backup procedure safely removes transaction logs that have been fully processed into backed-up db files; this safe removal of transaction logs is critical to avoid filesystem overflow.

Step 1 Log in to each TDS node as mafadmin.

Step 2 Enter the following command:


crontab -e

Step 3 Enter the following content as a single line into the crontab.

For standalone TDS, enter this line:

0 3 * * * PROCESSED=$(/opt/bdb-4.7.25.NC/bin/db_archive -h /var/memcachedb-1.2.0/data/21201 -a 2> /var/memcachedb-1.2.0/data/21201.backup.log) && /opt/bdb-4.7.25.NC/bin/db_hotbackup -h /var/memcachedb-1.2.0/data/21201 -b /opt/tds-backup/21201 -v >> /var/memcachedb-1.2.0/data/21201.backup.log 2>&1 && [[ "$PROCESSED" ]] && rm -f $PROCESSED

For a clustered TDS, enter this line:

0 3 * * * /opt/bdb-4.7.25.NC/bin/db_hotbackup -h /tds/dbs/21201 -b /opt/tds-backup/dbs/21201 -D -c -v > /opt/tds-backup/dbs/21201.backup.log 2>&1

The cron jobs above run the backup at 3 AM local time each day. Adjust this time as necessary so that backups do not run on multiple TDS servers at the same time and do not run during peak system usage times.

Note It is not permissible to use the backslash (\) character to escape a newline in a crontab, so a single logical crontab line cannot be spread over multiple physical lines.
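One way to apply the staggering advice in a two-node cluster is to schedule the backup an hour apart on each node; the 03:00/04:00 times below are illustrative, and the command is the clustered entry from Step 3:

```
# crontab entry on the master TDS node (backup at 03:00)
0 3 * * * /opt/bdb-4.7.25.NC/bin/db_hotbackup -h /tds/dbs/21201 -b /opt/tds-backup/dbs/21201 -D -c -v > /opt/tds-backup/dbs/21201.backup.log 2>&1

# crontab entry on the slave TDS node (backup at 04:00)
0 4 * * * /opt/bdb-4.7.25.NC/bin/db_hotbackup -h /tds/dbs/21201 -b /opt/tds-backup/dbs/21201 -D -c -v > /opt/tds-backup/dbs/21201.backup.log 2>&1
```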


5 Continuing the Installation of MAF R1.0-02

This chapter outlines the steps to take to install and configure the MAF Components into the GlassFish environment.

5.1 Preparing GlassFish for Additional MAF Components

The MAF layer is a common peer to Acision's messaging products. It provides the upper-layer application products with the services common to them. The MAF_ROOT setting provides the pathway for common access to those MAF services. Note that MAF_ROOT is a GlassFish variable; it is not a system environment variable.

Note The MAF_ROOT GlassFish variable will be set to /etc/maf-1.0. This directory must have a config subdirectory (/etc/maf-1.0/config), which is created in a step below. The central applications configuration file, known as maf.properties, is stored inside the /etc/maf-1.0/config directory.

Note The following procedure will add a MAF_ROOT system property and change the current max-files-count parameter on a standalone or a cluster.

Use the following procedure to set up the GlassFish environment for MAF:

Step 1 Log in to the GlassFish Server as mafadmin user.

$ whoami

mafadmin

Step 2 For a single node environment, edit the server-config configuration section in the $GLASSFISH_HOME/domains/domain1/config/domain.xml file.

a. Stop the Glassfish server.

> asadmin stop-domain

b. Edit the $GLASSFISH_HOME/domains/domain1/config/domain.xml file and make the changes shown below (the max-files-count value and the MAF_ROOT system property).

…
<config dynamic-reconfiguration-enabled="true" name="server-config">
…

<http-file-cache file-caching-enabled="true" file-transmission-enabled="false" globally-enabled="true" hash-init-size="0" max-age-in-seconds="30" max-files-count="8192" medium-file-size-limit-in-bytes="537600" medium-file-space-in-bytes="10485760" small-file-size-limit-in-bytes="2048" small-file-space-in-bytes="1048576"/>

…
</management-rules>
<system-property name="MAF_ROOT" value="/etc/maf-1.0"/>
</config>
…


c. Save the file with the changes.

d. Start the Glassfish server.

> asadmin start-domain

Step 3 For a clustered environment, execute the following command as mafadmin on the primary cluster node.

$ asadmin create-system-properties --target $GLASSFISH_TARGET "MAF_ROOT=/etc/maf-1.0"

$ asadmin set cluster1-config.http-service.http-file-cache.max-files-count=8192

5.1.1 Set Up the GBG JMS Resources

If your deployment requires GBG, you must perform the steps of this section; otherwise, skip this section.

Step 1 As root on the Glassfish Server, add the JMS Service.

Note The usage of the Java Messaging Service API is required for communicating with the GBG component.

a. Add the JMS Service Physical Destination for MDBQueue. As mafadmin, execute the following command on the single/primary node to create the MDBQueue name:

$ asadmin create-jmsdest --desttype queue --target $GLASSFISH_TARGET \
  --passwordfile $GLASSFISH_HOME/passwords.txt MDBQueue

Result: Command create-jmsdest executed successfully.

b. Add the JMS Service Physical Resource for MDBQueue. As mafadmin, execute the following command on the single/primary node to add JMS service physical resource for MDBQueue.

$ asadmin create-jms-resource --restype javax.jms.Queue --target $GLASSFISH_TARGET \
  --passwordfile $GLASSFISH_HOME/passwords.txt --enabled=true MDBQueue

Result: Command create-jms-resource executed successfully.

Step 2 As mafadmin, execute the following command on the single/primary node to add maf-components connection factory:

$ asadmin create-jms-resource --target $GLASSFISH_TARGET \
  --restype javax.jms.QueueConnectionFactory \
  maf-components/GbgMdbWSClient/connectionFactory

Result: Command create-jms-resource executed successfully.


Step 3 Continuing as mafadmin, execute the following command on the single node or primary node to make your configuration changes effective.

For standalone:

> asadmin stop-domain

> asadmin start-domain

For cluster:

> asadmin stop-cluster cluster1

> asadmin start-cluster cluster1

5.2 Installing the MAF Components

The MAF software is delivered as a set of RPM packages having mutual dependencies. The RPM files that are to be installed now were previously copied into the YUM repository at /var/yum/4AS/acision/MAF/<release>.

Note You must perform the installation process for the MAF RPM packages on all nodes (single node, primary node, secondary nodes).

Step 1 Prepare to install rpms by becoming the root user:

su -

Step 2 Install the MAF core RPM which contains SNMP, logger, and configuration.

# ngp-yum install maf-components-core

Step 3 Install the MAF JCA framework RPM containing the JCA framework and component management.

# ngp-yum install maf-components-cf maf-components-api

Step 4 Install the maf-components-ejb rpm to allow access to the components from outside the EAR file.

# ngp-yum install maf-components-ejb

Step 5 Install the common part of all the MAF connectors and MAF connectivity interfaces.

The MAF connectivity components are inbound/outbound connectors (resource adapters) to the backends. These connectors are used by most Acision products and thus they are part of MAF so that they can be easily shared across the products.

# ngp-yum install maf-connectivity-common maf-connectivity-api

Step 6 If your deployment requires LDAP, install the LDAP connector and ejb components.

# ngp-yum install maf-connectivity-ldap maf-connectivity-ldap-ejb

Step 7 If your deployment requires GBG, install the GBG connector and ejb components.

# ngp-yum install maf-connectivity-gbg maf-connectivity-gbg-ejb

Step 8 If your deployment requires HAS/TDS, install the TDS ejb components.

# ngp-yum install maf-core-tds-ejb


Step 9 To ensure that the MAF files are accessible through both versioned and unversioned directory paths, create the following symbolic links:

# cd /opt/

# ln -s /opt/maf-1.0 /opt/maf

# cd /etc/

# ln -s /etc/maf-1.0 /etc/maf
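These symlinks let scripts and configuration refer to the stable /opt/maf and /etc/maf paths while the versioned directories change across releases. The pattern, sketched on a stand-in base directory:

```shell
# BASE stands in for /opt (or /etc); same versioned-dir-plus-symlink pattern as Step 9.
BASE=$(mktemp -d)
mkdir "$BASE/maf-1.0"
ln -s "$BASE/maf-1.0" "$BASE/maf"
readlink "$BASE/maf"    # the unversioned name resolves to the versioned directory
```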

Step 10 Install the MAF core RPM files (this pulls in several such RPMs via dependencies).

# ngp-yum install maf-core-hf-ejb

5.3 Deploying the maf-core EAR File

The maf-core EAR (Enterprise Archive) file contains all the upstream dependencies needed to enable MAF services and applications.

Step 1 As mafadmin on the single node or primary node only, execute the make-ear.sh script to build the EAR file from the installed components.

# su - mafadmin

$ cd /opt/maf/script/

$ ./make-ear.sh maf-core.ear

Step 2 As mafadmin, execute the following command on the single or primary node to deploy the EAR file.

$ asadmin deploy --host localhost --target $GLASSFISH_TARGET \
  --passwordfile $GLASSFISH_HOME/passwords.txt --enabled=true \
  --name maf-core /opt/maf/bin/maf-core.ear

Result: Command deploy executed successfully.

The EAR file is now successfully deployed.

Step 3 As mafadmin, execute the following command on the single or primary node to validate the installation of the maf-core EAR file:

$ asadmin list-sub-components --host localhost --user admin \
  --passwordfile=$GLASSFISH_HOME/passwords.txt maf-core

maf-core-hf-ejb-R1.0-02.00.B03.jar <EJBModule>
maf-components-ejb-R1.0-02.00.B03.jar <EJBModule>
maf-core-tds-ejb-R1.0-02.00.B03.jar <EJBModule>
has-memcachedb-R1.0-02.00.B03.jar <EJBModule>
maf-connectivity-ldap-ejb-R1.0-02.00.B03.jar <EJBModule>
maf-connectivity-gbg-ejb-R1.0-02.00.B03.jar <EJBModule>
maf-connectivity-gbg-R1.0-02.00.B03.rar <ResourceAdapterModule>
maf-components-adapter-R1.0-02.00.B03.rar <ResourceAdapterModule>
maf-connectivity-ldap-R1.0-02.00.B03.rar <ResourceAdapterModule>

Command list-sub-components executed successfully.


5.4 The Default maf.properties File

The maf.properties file contains many service-specific configuration settings that govern much of the functionality of MAF. It is also available for use by upper-layer applications that are installed on the MAF platform. The file resides in the /etc/maf-1.0/config/ directory. A default maf.properties file is installed during system installation. It is a minimal configuration file and evolves to match implementation-specific settings as the installation continues in this document.

Example 5-1 shows the default contents of the maf.properties file after installation. These settings are valid for all systems and represent the suggested starting values. We recommend that you do not change any of these default settings.

Step 1 In a clustered environment, use the following convention to keep maf.properties consistent on all cluster nodes.

Each time you edit the maf.properties file, do so on the primary cluster node. After you have edited the file, copy the updated maf.properties from the primary node to each secondary node of the cluster. Then, restart the cluster to make the changes effective. An example command sequence for doing so is:

On primary cluster node as mafadmin user:

vi /etc/maf-1.0/config/maf.properties

(edit the file)

scp /etc/maf-1.0/config/maf.properties cdusio62:/etc/maf-1.0/config/maf.properties

scp /etc/maf-1.0/config/maf.properties cdusio63:/etc/maf-1.0/config/maf.properties

asadmin stop-cluster cluster1

asadmin start-cluster cluster1

Example 5-1 Default maf.properties File

# Logging
log4j.rootLogger=WARN, file
log4j.appender.file=org.apache.log4j.RollingFileAppender
log4j.appender.file.File=/opt/glassfishEsb_2.1GA/glassfish/domains/domain1/logs/maf.log
log4j.appender.file.MaxFileSize=10000KB
log4j.appender.file.MaxBackupIndex=3
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=[%d | %t | %c{3} | %p] %m%n
#log4j.logger.com.acision=ALL

# SNMP
snmp.enabled=true
snmp.agent.port=1896
snmp.agent.report.interval=3600
snmp.agent.working.dir=/tmp
snmp.trapreciever.host=localhost
snmp.trapreciever.port=162

# MAF Management
management.period=40000

jbi.marshaller.threadpool.size=20
jbi.unmarshaller.threadpool.size=50
ioThreadsMax=50


5.5 Configuring the HAS Client

Earlier in the installation process, the TDS was installed and set up for operation. To provide high availability service for temporal data, the HAS client software is made available, via MAF, to higher-layer applications.

Step 1 To use the TDS in conjunction with HAS for MAF, add the following lines to the maf.properties file as mafadmin.

Supply your real TDS server address(es) and listening port(s).

• For single-node environment:

# HA Data Store MemcacheDB settings

has.memcachedb.servers=10.29.41.149:21201

• For clustered environment add a space-separated list of memcachedb servers. This must be configured on all nodes:

# HA Data Store MemcacheDB settings

has.memcachedb.servers=10.29.41.149:21201 10.29.41.150:21201

The has.memcachedb.servers property specifies the IP address and port where the service is operating, either as a single server or as two servers operating redundantly in a master/slave relationship. When more than one entry is provided, the values are space-delimited.

Step 2 Restart the Glassfish Server to put the HAS configuration into service.

5.6 Setting Up Libraries

This section provides information about how to set up the spymemcachedb and log4j libraries to work with HAS.

Step 1 Log in to the MAF nodes as mafadmin.

Step 2 To use memcachedb as High Availability Storage for MAF, place a spymemcachedb library into the Glassfish library directory.

• For single-node (standalone) environment:

This installation step is not needed. The required library is already in the $GLASSFISH_HOME/domains/domain1/lib directory.

• For primary node of clustered environment:

$ cp $GLASSFISH_HOME/domains/domain1/lib/spymemcachedb-2.3.1.1.jar \
    $GLASSFISH_HOME/nodeagents/agent_PRIM0/instance_PRIM0/lib

• For secondary nodes of clustered environment:

Note Remember to replace <seqnumber> with the sequential number of the secondary node.

$ cp $GLASSFISH_HOME/domains/domain1/lib/spymemcachedb-2.3.1.1.jar \
    $GLASSFISH_HOME/nodeagents/agent_SEC<seqnumber>/instance_SEC<seqnumber>/lib


Step 3 To redirect the HAS client logging information into MAF logging, place a log4j library into the Glassfish library directories. Otherwise, the logging information will be written into the Glassfish log file.

• For single-node environment:

$ cp /opt/maf/contents/lib/log4j-1.2.15.jar $GLASSFISH_HOME/domains/domain1/lib

• For the primary node of a clustered environment:

$ cp /opt/maf/contents/lib/log4j-1.2.15.jar $GLASSFISH_HOME/domains/domain1/lib

$ cp /opt/maf/contents/lib/log4j-1.2.15.jar \
    $GLASSFISH_HOME/nodeagents/agent_PRIM0/instance_PRIM0/lib

• For secondary nodes of clustered environment:

Note Remember to replace <seqnumber> with the sequential number of the secondary node.

$ cp /opt/maf/contents/lib/log4j-1.2.15.jar $GLASSFISH_HOME/domains/domain1/lib

$ cp /opt/maf/contents/lib/log4j-1.2.15.jar \
    $GLASSFISH_HOME/nodeagents/agent_SEC<seqnumber>/instance_SEC<seqnumber>/lib

Note After updating the maf.properties file and copying libraries, you must restart GlassFish Server.

Step 4 As mafadmin, restart Glassfish Server on a standalone or a cluster with the following commands.

For standalone, use:

asadmin stop-domain

asadmin start-domain

For cluster on the primary node, use:

asadmin stop-cluster cluster1

asadmin start-cluster cluster1

5.7 Setting Up Connectors

This section provides information about adding connector connection pools and creating connector resources.

Step 1 For a cluster in the GlassFish administration console on the primary node, disable the transaction service recovery on startup by executing the following commands as mafadmin:

asadmin set cluster1-config.transaction-service.automatic-recovery=false

asadmin get cluster1-config.transaction-service.automatic-recovery

Expect a value of false.

Step 2 Log in as mafadmin.

Step 3 On the single node or primary node, add the LDAP connector-connection-pool.


$ cd /opt

$ asadmin create-connector-connection-pool \
    --raname maf-core#maf-connectivity-ldap-R<version> \
    --connectiondefinition javax.resource.cci.ConnectionFactory \
    --passwordfile $GLASSFISH_HOME/passwords.txt ldap-pool

Result: Command create-connector-connection-pool executed successfully.

Step 4 Verify that the ldap-pool connector-connection-pool was created.

$ asadmin list-connector-connection-pools | grep ldap-pool

Result: ldap-pool

Step 5 On the single node or primary node, create the LDAP connector resource.

$ cd /opt

$ asadmin create-connector-resource --poolname ldap-pool --host localhost \
    --target $GLASSFISH_TARGET --passwordfile $GLASSFISH_HOME/passwords.txt \
    --enabled=true ldap

Step 6 Verify that the LDAP connector-resource was created.

$ asadmin list-connector-resources $GLASSFISH_TARGET | grep ldap

Result: ldap

5.8 Configuring the HTTP Binding Component

You must add the MAFFeaturePort to the sun-http-binding component so that the listening port of the EJB Feature Beans can be configured. This is important because in a single-node environment the EJB Feature Beans listen on port 8080, but in a clustered environment they listen on a different port.

Step 1 Log in to the MAF node as mafadmin.

Step 2 Find your HTTP Listener Port.

• For a single-node environment: Port = 8080

• For the primary node of a clustered environment: Port = 1110

• For secondary nodes of a clustered environment: Port = 1110 + sequence number

For example, the first secondary node has an HTTP Listener Port of 1111 (1110 + 1).
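The numbering rule above can be sketched as simple shell arithmetic (the node count of 3 is illustrative):

```shell
# Secondary node N listens on base port 1110 + N.
base=1110
for seq in 1 2 3; do
  echo "instance_SEC${seq} HTTP Listener Port: $((base + seq))"
done
```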

Note HTTP Listener Ports for a clustered environment are set by the createdomain.sh installation script. If you have modified the default configuration, you can find the up-to-date values in the $GLASSFISH_HOME/domains/domain1/config/domain.xml configuration file. See the HTTP_LISTENER_PORT system property in the following sections:

<server config-ref="cluster1-config"...name="instance_PRIM0"...>
<server config-ref="cluster1-config"...name="instance_SEC1"...>

Step 3 For a single-node environment, use the following steps to set up the variables.

a. Create the variables by issuing the following commands as mafadmin:

$ asadmin create-jbi-application-variable --component sun-http-binding MAFFeaturePort=8080


Command create-jbi-application-variable executed successfully.

b. Confirm the sun-http-binding application variable is set.

$ asadmin list-jbi-application-variables --component sun-http-binding

MAFFeaturePort = [STRING]8080

Command list-jbi-application-variables executed successfully.

Step 4 For a clustered environment, set up the variables on the primary cluster node. The following commands set up the variables for the first two nodes. If your deployment has more nodes in the cluster, you will need to execute additional commands for those nodes.

Note Execute all of the following commands on the primary node.

a. Create the variables by issuing the following commands.

asadmin create-jbi-application-variable --target instance_PRIM0 \
    --component sun-http-binding MAFFeaturePort=1110

asadmin create-jbi-application-variable --target instance_SEC1 \
    --component sun-http-binding MAFFeaturePort=1111

b. Confirm the sun-http-binding application variables are set. The following commands verify the variables for the first two nodes. If your deployment has more nodes in the cluster, you will need to execute additional commands to verify those nodes.

asadmin list-jbi-application-variables --component sun-http-binding --target instance_PRIM0

asadmin list-jbi-application-variables --component sun-http-binding --target instance_SEC1
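For larger clusters, the per-node commands above can be generated rather than typed by hand. A minimal sketch, assuming instances follow the instance_SEC<n> naming and the 1110 + n port rule described earlier (the commands are printed for review, not executed):

```shell
# Print the create commands for secondary nodes 1..N (N=3 is illustrative).
N=3
for n in $(seq 1 "$N"); do
  echo "asadmin create-jbi-application-variable --target instance_SEC${n} \
    --component sun-http-binding MAFFeaturePort=$((1110 + n))"
done
```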

5.9 Deploying and Configuring the JBI Binding Component

The JBI Binding Component acts as an interface between external messaging systems (VM, SMSC, Telepath, MMSC, SMTP, GBG, etc.) and services deployed into the JBI environment. For each connection to an external messaging system, you must create a service unit. You must package all service units into a service assembly and deploy that assembly to the application server.

Step 1 Log in to the MAF node as root.

Step 2 Deploy the jar file. Execute the following commands on all nodes:

# ngp-yum install maf-core-jbi-binding

You will find the zip file in the location /opt/maf/bin/maf-core-jbi-binding-<version>-installer.zip after installing the JBI package.

Step 3 Log in as mafadmin on the single node or primary node.

Step 4 Install the zip file into Glassfish.

$ asadmin install-jbi-component --host localhost --target $GLASSFISH_TARGET \
    --passwordfile $GLASSFISH_HOME/passwords.txt \
    /opt/maf/bin/maf-core-jbi-binding-R<version>-installer.zip

Installed component maf-core-jbi-binding.

Step 5 Execute the following command to start the JBI Binding Component.


$ asadmin start-jbi-component --host localhost --target $GLASSFISH_TARGET \
    --passwordfile $GLASSFISH_HOME/passwords.txt maf-core-jbi-binding

Result: Started component maf-core-jbi-binding.

Step 6 Verify that the JBI Binding Component is in place.

$ asadmin list-jbi-binding-components --target $GLASSFISH_TARGET | grep maf-core-jbi-binding

Result: JBI Binding Component maf-core-jbi-binding is present in the list.

5.10 Setting Up the Service Assemblies for JBI Binding

The JBI binding component installed and deployed into GlassFish in the procedure above provides the binding between the SMPP, SMTP, and GBG resources and the upper-layer GlassFish applications that will ultimately use those resources. Access to the resources is provided via a unique service assembly (SA) for each resource. Because those resources have implementation-specific network and environmental characteristics, configuration settings must be built into the SAs so that the appropriate connection and usage settings are available to GlassFish after deployment.

To accomplish the unique configuration of each SA, three independent actions are required. First, edit the maf.properties file, located in the /etc/maf/config directory, to include the settings for each of the MAF connector resources (SMPP, SMTP, and GBG). Second, execute a script that takes that maf.properties file as input and generates each of the service assemblies, with the appropriate settings and files, into SA zip files. Lastly, deploy each of the SAs into the GlassFish server and start them.

Step 1 Configure the SMPP, SMTP, and GBG connector resource settings in the maf.properties file.

The following examples show some sample settings outlining the attributes required by each of the individual SAs that must be included in the maf.properties file.

Note You can configure multiple backend connector resources in the same maf.properties file. For example, if a second SMPP backend connector resource needs to be configured into service, the prefix for that section of the maf.properties file would be smpp1. The same is true for the second and subsequent backend connector resources for SMTP and GBG.
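As a sketch of the note above, a second SMPP section would use the smpp1 prefix; the host, component id, and resource name below are hypothetical illustration values, not defaults:

```properties
# ----- Second SMPP Outbound connector settings (hypothetical values) -----
smpp1.component.id=SMPP_2
smpp1.resourceName=smppout2
smpp1.hostname=10.226.90.55
smpp1.port=9000
smpp1.protocol=SMPP
smpp1.bindingType=JBI
smpp1.serviceUnit.name=Service Assembly for connector SMPP_2
```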

• Adding the following lines for the SMPP Outbound Connector Resource to the maf.properties file will result in an SA being built named SMPP_1.zip.

# ----- SMPP Outbound connector settings -----
smpp.service.id=SMPP connector
smpp.resourceName=smppout
smpp.hostname=10.226.90.54
smpp.port=9000
smpp.protocol=SMPP
smpp.bindingType=JBI
smpp.component.id=SMPP_1
smpp.systemId=APP
smpp.systemType=SRV
smpp.password=secret
smpp.addressRange.ton=8
smpp.addressRange.npi=2
smpp.shortCode.pattern=\d{1}
smpp.queue.capacity=10000


smpp.failsafe=true
smpp.interactionMode=TROMBONING
smpp.heartbeat.period=30000
smpp.throttling.enable=false
smpp.throttling.group=SMSC_1
smpp.throttling.timeout.ms=30000
smpp.throttling.client.permits=3
smpp.management.enable=true
smpp.serviceUnit.name=Service Assembly for connector SMPP_1

• Adding the following lines for the SMTP Connector Resource to the maf.properties file will result in an SA being built named SMTP_1.zip.

# ----- SMTP connector -----
smtp.hostname=10.226.90.42
smtp.port=25
smtp.resourceName=smtp
smtp.service.id=SMTP connector
smtp.protocol=SMTP
smtp.bindingType=JBI
smtp.component.id=SMTP_1
smtp.security.auth=false
smtp.username=acision@acision.com
smtp.password=acision
smtp.debug.enable=false
smtp.debug.file=/tmp/smtp_debug_mail.log
smtp.serviceUnit.name=Service Assembly for connector SMTP_1

• Adding the following lines for the GBG Connector Resource to the maf.properties file will result in an SA being built named GBG_1.zip.

# ----- GBG connector -----
gbg.bindingType=JBI
gbg.protocol=GBG
gbg.component.id=GBG_1
gbg.hostname=treehouse.us.intinfra.com
gbg.mdb.connectionFactory=maf-components/GbgMdbWSClient/connectionFactory
gbg.mdb.queue=MDBQueue
gbg.peerIncompleteTransaction.startLimit=10000
gbg.peerIncompleteTransaction.stopLimit=30000
gbg.peerRequestQueue.startLimit=50000
gbg.peerRequestQueue.stopLimit=100000
gbg.port=9876
gbg.resourceName=gbgout
gbg.service.id=GBG connector
gbg.transmitter.url=http://localhost:9080/GbgTransmitterService/GbgTransmitterPort
gbg.serviceUnit.name=Service Assembly for connector GBG_1
gbg.debug.enable=false
gbg.debug.file=/tmp/gbg_debug_file.log
gbg.offline=false

Step 2 Prepare the service assembly for deployment. Execute the following command as mafadmin:

$ /opt/maf/script/genSA.sh -i /etc/maf-1.0/config/maf.properties -d /opt/maf/bin/

where

-i: Points to the maf.properties file.


-d: This argument is optional and defines a directory where new service assemblies will be created. The recommended value is /opt/maf/bin/.

The script searches the maf.properties configuration file for connectors with the JBI binding type and asks whether each connector should have a service assembly created. By default, answer yes to all questions; this generates each of the SAs.

Step 3 Deploy the service assemblies as mafadmin on the standalone or primary cluster node:

$ asadmin deploy-jbi-service-assembly --host localhost --target $GLASSFISH_TARGET \
    --passwordfile $GLASSFISH_HOME/passwords.txt /opt/maf/bin/SMPP_1.zip

$ asadmin deploy-jbi-service-assembly --host localhost --target $GLASSFISH_TARGET \
    --passwordfile $GLASSFISH_HOME/passwords.txt /opt/maf/bin/SMTP_1.zip

$ asadmin deploy-jbi-service-assembly --host localhost --target $GLASSFISH_TARGET \
    --passwordfile $GLASSFISH_HOME/passwords.txt /opt/maf/bin/GBG_1.zip

Step 4 As mafadmin, restart GlassFish to make the above changes effective.

For standalone:

asadmin stop-domain

asadmin start-domain

For cluster, do the following steps on the primary node:

asadmin stop-cluster cluster1

asadmin start-cluster cluster1

Step 5 For cluster only, verify that all instances are running by using the following command as mafadmin on the primary node.

asadmin list-instances

Step 6 Execute the following commands as mafadmin on the single/primary node to start the service assemblies.

$ asadmin start-jbi-service-assembly --host localhost --target $GLASSFISH_TARGET \
    --passwordfile $GLASSFISH_HOME/passwords.txt 'Service Assembly for connector SMPP_1'

$ asadmin start-jbi-service-assembly --host localhost --target $GLASSFISH_TARGET \
    --passwordfile $GLASSFISH_HOME/passwords.txt 'Service Assembly for connector SMTP_1'

$ asadmin start-jbi-service-assembly --host localhost --target $GLASSFISH_TARGET \
    --passwordfile $GLASSFISH_HOME/passwords.txt 'Service Assembly for connector GBG_1'

Result:

Started service assembly Service Assembly for connector SMPP_1.

Started service assembly Service Assembly for connector SMTP_1.

Started service assembly Service Assembly for connector GBG_1.

Step 7 Verify that service assemblies are in place:

$ asadmin list-jbi-service-assemblies --target $GLASSFISH_TARGET

Result:

Service Assembly for connector GBG_1
Service Assembly for connector SMPP_1
Service Assembly for connector SMTP_1
Command list-jbi-service-assemblies executed successfully.


6 Installing and Configuring MAF SNMP

MAF supports detection and reporting of stateful alarm conditions. These alarm notifications include the following:

• Alarm Condition: Identifies the nature of the problem to which the alarm relates

• Alarm Entity: Identifies the system component/module to which the alarm condition relates

• Alarm Severity: Identifies the perceived severity of the alarm condition for the associated alarm entity

• Context-Specific Parameters: Provides optional additional parameters on the specific details of the alarm

Logging of alarm notifications to an alarm log file is also provided. Finally, alarm transitions are reported as SNMP traps.

The SnmpIo class wraps the underlying SNMP libraries and supports variable bindings in traps. Condition, Entity, Severity, and Parameters information is provided in traps via variable bindings, and the same information is logged at the error level when an alarm is raised.

This section describes how to integrate GlassFish/snmp4j as an SNMP subagent of the Net-SNMP agent provided by NGP (and thereby set up to access RHEL RPMs with yum). Port 1896/udp is the port GlassFish listens on for SNMP get/set traffic, which the Net-SNMP agent (snmpd) proxies to GlassFish/snmp4j when the OID falls within the MAF subtree (1.3.6.1.4.1.3830.1.1.30).
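The routing decision snmpd makes for the proxy entries can be expressed as a simple OID prefix test. A sketch (the second OID is a generic system OID, shown only for contrast):

```shell
# A request is proxied to GlassFish only when its OID lies inside the
# MAF subtree; everything else is handled locally by snmpd.
subtree=".1.3.6.1.4.1.3830.1.1.30"
check_oid() {
  case "$1" in
    "$subtree"|"$subtree".*) echo "proxied to GlassFish on localhost:1896" ;;
    *)                       echo "handled locally by snmpd" ;;
  esac
}
check_oid ".1.3.6.1.4.1.3830.1.1.30.0.22"
check_oid ".1.3.6.1.2.1.1.1.0"
```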

To install and configure the proxy of SNMP Get/Getnext/Set from Net SNMP agent to GlassFish, perform the following procedure:

Step 1 Log in to the MAF node as root.

Step 2 Edit the /etc/snmp/snmpd.conf file, adding the following lines at its end.

view systemview included .1.3.6.1.4

proxy -v 1 -c public localhost:1896 .1.3.6.1.4.1.3830.1.1.27

proxy -v 1 -c public localhost:1896 .1.3.6.1.4.1.3830.1.1.30

Step 3 As root, execute the following command to activate the auto-restart of snmptrapd on reboot:

chkconfig --level 2345 snmptrapd on

Step 4 Launch snmptrapd by executing one of the following commands.

service snmptrapd start

or

service snmptrapd restart

Note The above configuration allows access to the enterprises subtree of the MIB and proxies MAF OID subtree requests (particularly get/getnext) to GlassFish, which listens on localhost:1896 for such requests.

Step 5 Configure maf.properties on all GlassFish nodes to reflect the settings on a per installation basis.


Note There is no place to configure the community name for SNMP get/getnext, but the community name expected by MAF must be the same as the community name configured in the -c option of the proxy lines in snmpd.conf (public by default). The snmp.agent.port parameter should default to the designated port 1896 and must agree with the port number after the hostname in the proxy lines of snmpd.conf (1896 in the sample above). The standard trap receiver port of 162 should be used for the snmp.trapreciever.port setting in the maf.properties file. Traps are not proxied through the Net-SNMP agent; rather, they are sent from MAF (GlassFish) directly to the snmp.trapreciever.host on its snmp.trapreciever.port. For testing convenience, snmp.trapreciever.host=127.0.0.1 has been used above so that traps can be sent to a local snmptrapd for lab testing purposes. The choice of /tmp for snmp.agent.working.dir is arbitrary and is not a required part of this design.

Step 6 Start the NET SNMP agent as root.

service snmpd start

service snmpd status

Step 7 Configure maf.properties for the SNMP traps and counters on all GlassFish nodes.

Step 8 Restart the GlassFish Server as root. For a single-node environment or clustered environment on the primary node, execute the following command.

For standalone, use:

asadmin stop-domain

asadmin start-domain

For cluster on the primary node, use:

asadmin stop-cluster cluster1

asadmin start-cluster cluster1


7 Verifying the MAF Installation

This chapter provides instructions for using the RFA tool to verify that you have correctly installed a MAF system. If your MAF system is correctly installed, the following entities/connections should be working: LDAP Connector, SNMP Connector, SMPP Connector, SMTP Connector, BPEL Engine, and POJO-SE Engine.

The RFA tool is packaged in the maf_testtool.rpm. It requires the Perl module perl-libwww-perl to be installed. This dependency should be part of the NGP installation.

There is no configuration in the traditional sense, but the RFA tool reads the following required data file: /opt/maf/script/RFA/soap_req.dat. Ensure that this file exists.

7.1 Running the RFA Tool

To verify your MAF installation, perform the following steps:

Step 1 As root on all MAF nodes, install the RFA tool.

ngp-yum install maf-testtool

Step 2 As mafadmin on the standalone node or on the primary node of a cluster, perform the following steps.

> cd /opt/maf/bin

> asadmin deploy --host localhost --user admin --target $GLASSFISH_TARGET \
    --passwordfile=$GLASSFISH_HOME/passwords.txt \
    maf-testtool-features-ldapConnection-R1.0-02.00.B04.jar

> asadmin deploy --host localhost --user admin --target $GLASSFISH_TARGET \
    --passwordfile=$GLASSFISH_HOME/passwords.txt \
    maf-testtool-features-tds-R1.0-02.00.B04.jar

> asadmin deploy --host localhost --user admin --target $GLASSFISH_TARGET \
    --passwordfile=$GLASSFISH_HOME/passwords.txt \
    maf-testtool-features-snmp-R1.0-02.00.B04.jar

> asadmin deploy --host localhost --user admin --target $GLASSFISH_TARGET \
    --passwordfile=$GLASSFISH_HOME/passwords.txt \
    maf-testtool-features-gbg-R1.0-02.00.B04.jar

> asadmin list-components $GLASSFISH_TARGET | grep testtool

Expect the following output:

maf-testtool-features-ldapConnection-R1.0-02.00.B04 <ejb-module>
maf-testtool-features-tds-R1.0-02.00.B04 <ejb-module>
maf-testtool-features-snmp-R1.0-02.00.B04 <ejb-module>
maf-testtool-features-gbg-R1.0-02.00.B04 <ejb-module>
maf-testtool-features-ldapConnection-R1.0-02.00.B04#LdapConnectionService <webservice>
maf-testtool-features-gbg-R1.0-02.00.B04#GBGTestWebService <webservice>
maf-testtool-features-snmp-R1.0-02.00.B04#SNMPService <webservice>
maf-testtool-features-tds-R1.0-02.00.B04#TDSTestWebService <webservice>

> asadmin deploy-jbi-service-assembly --host localhost --user admin --target $GLASSFISH_TARGET --passwordfile=$GLASSFISH_HOME/passwords.txt MessageTestsCA.zip


> asadmin list-jbi-service-assemblies --target $GLASSFISH_TARGET | grep MessageTestsCA

Expect the following output:

MessageTestsCA

> asadmin start-jbi-service-assembly --host localhost --user admin --target $GLASSFISH_TARGET --passwordfile=$GLASSFISH_HOME/passwords.txt MessageTestsCA

Step 3 As mafadmin on all MAF nodes, edit the /opt/maf/script/RFA/soap_req.dat file to contain test specifications only for the MAF components you are using in your deployment.

chmod u+w /opt/maf/script/RFA/soap_req.dat

vi /opt/maf/script/RFA/soap_req.dat

Remove sections for components you are not using in your deployment; a section starts with a comment of # <component> and ends with a line prior to the comment of the next component or the end of the file.
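The section layout described above lends itself to a filter rather than manual editing. A minimal sketch, under the assumption that each section begins with a "# <component>" comment line (the component names and request lines below are hypothetical placeholders); it drops the SMTP section and keeps the rest:

```shell
# Build a small sample file in the described layout (placeholder content).
cat > /tmp/soap_req.sample <<'EOF'
# LDAP
<ldap test request>
# SMTP
<smtp test request>
# SNMP
<snmp test request>
EOF

# On each "# <component>" header decide whether to keep the section;
# the bare "keep" pattern then prints lines while the flag is set.
awk '/^# /{ keep = ($0 != "# SMTP") } keep' /tmp/soap_req.sample
```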

Step 4 For cluster only, determine the HTTP SOAP BC HttpDefaultPort and MAFFeaturePort on the current cluster node as follows.

grep HttpDefaultPort /opt/glassfishEsb_2.1GA/glassfish/nodeagents/agent_*/instance_*/jbi/components/sun-http-binding/install_root/workspace/config.properties

For a primary node:

asadmin list-jbi-application-variables --target instance_PRIM0 \
    --component sun-http-binding | grep MAFFeaturePort

For a secondary node:

asadmin list-jbi-application-variables --target instance_SEC<seqnumber> \
    --component sun-http-binding | grep MAFFeaturePort

Step 5 Using the values you have obtained above, in the /opt/maf/script/RFA/soap_req.dat file change each :1110 to :<MAFFeaturePort-value> and change each :2110 to :<HttpDefaultPort-value>. Note that for the primary node, the sample values in the file should already match the values you obtained in step 4.
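The substitution in Step 5 can be done with sed rather than by hand. A sketch, shown on a sample line instead of the real file; 1111 and 2111 stand in for whatever values you obtained in Step 4:

```shell
# Replace the sample cluster ports with the node's actual ports.
echo "POST http://localhost:1110/svc and http://localhost:2110/bc" \
  | sed -e 's/:1110/:1111/g' -e 's/:2110/:2111/g'
```

Running the same two -e expressions with `sed -i` against /opt/maf/script/RFA/soap_req.dat applies the change in place.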

Step 6 If your deployment includes the SMTP component, edit /opt/maf/script/RFA/soap_req.dat file and replace each occurrence of the sample string [email protected] with the email address of a user in the SMTP server that you have configured for the SMTP component in maf.properties in the following property:

smtp.hostname

Step 7 If your deployment includes the LDAP component, create/edit the /etc/maf-1.0/config/ldap.xml file to contain the following content.

<ldap>
  <defaultLocation>o=ricuc.com</defaultLocation>
  <types>
    <type id="mplussubscriber">
      <location>ou=subscribers,ou=[ou]</location>
      <dn ref="uniqueidentifier"/>
      <objectClass>subscriber</objectClass>
      <objectClass>mailrecipient</objectClass>
      <objectClass>inetorgperson</objectClass>
      <objectClass>person</objectClass>
      <objectClass>mplussubscriber</objectClass>
      <attribute id="prepaid"><value>no</value></attribute>
      <attribute id="userpassword"><value>abcd</value></attribute>
      <attribute id="mobiletype"><value>mobiletype</value></attribute>


      <attribute id="blocked"><value>no</value></attribute>
      <attribute id="language"><value>eng</value></attribute>
    </type>
  </types>
</ldap>

Step 8 Manually run the RFA tool.

/opt/maf/script/RFA/RFATest.pl [-s | -v]

where

-s During the test, the SOAP requests are sent to the default URL with cluster port numbers. Use the -s switch to override this for standalone server processing.

-v Use the -v switch to view the HTTP requests and responses.

Step 9 Remove the RFA tool so that the MAF web services are not openly exposed.

ngp-yum remove maf-testtool

7.1.1 Sample RFA Tool Outputs

Example 7-1 displays what you will see if the RFA tool verifies a successful installation.

Example 7-1 Example of Successful Output from the RFA Tool

1 [SNMP_REGISTER] SUCCESS
2 [SNMP_INCREMENT] SUCCESS
3 [SNMP_GET_VARIABLE] SUCCESS
4 [SNMP_UNREGISTER] SUCCESS
5 [LDAP_CREATE] SUCCESS
6 [LDAP_READ] SUCCESS
7 [LDAP_UPDATE] SUCCESS
8 [LDAP_DELETE] SUCCESS
9 [TDS_SET_IN_DB] SUCCESS
10 [TDS_GET_FROM_DB] SUCCESS
11 [TDS_DELETE_FROM_DB] SUCCESS
12 [SMTP] SUCCESS
13 [SMPP] SUCCESS

Example 7-2 is an example of output from a failed installation. In this case, the RFA tool was run using the -sv switches (standalone and verbose).


Example 7-2 Example of Output from a Failed Case

8 [LDAP_DELETE] FAILED + 404 Not Found +

Sent [POST http://localhost:8080/LdapConnectionWebService/LdapConnectionService
User-Agent: libwww-perl/5.79
Content-Type: text/xml

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
    xmlns:prov="http://provisioning.testing.acision.com-v0.1">
  <soapenv:Header/>
  <soapenv:Body>
    <prov:deleteLdapEntry>
      <objectClass>mplussubscriber</objectClass>
      <searchCriteria>(telephonenumber='8988889999')</searchCriteria>
    </prov:deleteLdapEntry>
  </soapenv:Body>
</soapenv:Envelope>]


7.1.2 Error Messages

For a failed execution, FAILED is shown, followed by one or two error messages separated by a plus sign (+). Table 7-3 describes the error messages.

Table 7-3 RFA Tool Error Messages

Error Message:
  Error 1: "Response did not contain [<text>]" is displayed.
  Error 2: null - not applicable
Reason: The HTTP request was serviced properly; however, the expected result string was not found in the response.

Error Message:
  Error 1: A response from the web server (see Table 7-1 for more details).
  Error 2: The fault reason.
Reason: The SOAP request caused a SOAP fault.

Error Message:
  Error 1: A response from the web server (see Table 7-1 for more details).
  Error 2: null - not applicable
Reason: The HTTP request was not serviced properly; the HTTP response message is displayed.

Table 7-1 Possible Web Server Error Messages

Error Message: 404 Not Found
Reason: The URL is not available.
Action: Verify that the web service is installed and properly configured.

Error Message: 500 Can't connect to localhost:<port>
Reason: The GlassFish web server is not accepting connections on <port>.
Action: Verify that the server is running and listening on <port>.

Error Message: 500 Internal Server Error
Reason: SOAP request fault.
Action: Verify that the web service is properly configured.


Appendix A. Configuration Files

This appendix contains information about the following MAF configuration files:

• maf.properties—Contains service-specific configuration settings.

• ldap.xml—Contains configuration for the LDAP connector.

A.1 The maf.properties File

The maf.properties file contains many service-specific configuration settings that govern much of the functionality of MAF. This file resides in the /etc/maf-1.0/config directory.

A minimal default maf.properties file is installed during system installation. This file may be edited in deployments where upper-layer applications are installed on top of the MAF platform to match implementation-specific settings necessary for the deployment. This appendix provides information about the available parameters in the maf.properties file.

Table A-1 MAF Foundational Component Parameters in maf.properties File

<component prefix>.component.id
  Description: Identifies a failover management group. Binds together mutually related subcomponents.
  Valid Values/Examples: SMPP_1

<component prefix>.management.enable
  Description: Defines whether the handler attaches to the failover management.
  Valid Values/Examples: true, false

<component prefix>.serviceUnit.name
  Description: Defines the name of the JBI Service Unit to which the connector handler belongs.
  Valid Values/Examples: Service Assembly for connector SMPP_1

<connector name>.throttling.client.permits
  Description: The number of permits to ask for in one request to the throttling master. The higher the count, the lower the network traffic produced by the throttling framework. Setting the value too high will cause uneven distribution of permits among the nodes in the group.
  Valid Values/Examples: 3

<connector name>.throttling.enable
  Description: Enables or disables throttling.
  Valid Values/Examples: true, false

<connector name>.throttling.group
  Description: The throttling group to which the connector belongs. Each group has a limited number of permits per time period, and this limit is shared by all connectors in the group.
  Valid Values/Examples: SMSC_1

<connector name>.throttling.timeout.ms
  Description: Maximum waiting time when asking the throttling master for permits, in milliseconds.
  Valid Values/Examples: 30000

alarmLog.ldapUnavailable.connector.error.enabled
  Description: Enables/disables the LDAP unavailable traps. If this parameter is not included in the maf.properties file, the system defaults to yes.
  Valid Values/Examples: yes (enable), no (disable)


alarmLog.ldapWriteUnavailable.connector.error.enabled
  Description: Enables/disables the LDAP Write unavailable traps. If this parameter is not included in the maf.properties file, the system defaults to yes.
  Valid Values/Examples: yes (enable), no (disable)

alarmLog.primaryLdapUnavailable.connector.error.enabled
  Description: Enables/disables the Primary LDAP unavailable traps. If this parameter is not included in the maf.properties file, the system defaults to yes.
  Valid Values/Examples: yes (enable), no (disable)

alarmLog.primaryLdapWriteUnavailable.connector.error.enabled
  Description: Enables/disables the Primary LDAP Write unavailable traps. If this parameter is not included in the maf.properties file, the system defaults to yes.
  Valid Values/Examples: yes (enable), no (disable)

clearTrap.billingException.billing.error.oid
  Description: The OID for the mafClearBillingException trap.
  Valid Values/Examples: 1.3.6.1.4.1.3830.1.1.27.0.23

clearTrap.configurationError.core.error.oid
  Description: The OID for the mafClearConfigurationError trap.
  Valid Values/Examples: 1.3.6.1.4.1.3830.1.1.27.0.21

clearTrap.connectionException.core.error.oid
  Description: The OID for the mafClearConnectionException trap.
  Valid Values/Examples: 1.3.6.1.4.1.3830.1.1.27.0.24

clearTrap.haStorageUnavailable.hasClient.error.oid
  Description: The OID for the mafClearHaStorageUnavailable trap.
  Valid Values/Examples: 1.3.6.1.4.1.3830.1.1.27.0.27

clearTrap.ldapInterfaceError.provisioning.error.oid
  Description: The OID for the mafClearLdapInterfaceError trap.
  Valid Values/Examples: 1.3.6.1.4.1.3830.1.1.27.0.29

clearTrap.ldapUnavailable.connector.oid
  Description: The OID for the mafClearLdapUnavailable trap.
  Valid Values/Examples: 1.3.6.1.4.1.3830.1.1.30.0.22

clearTrap.ldapWriteUnavailable.connector.error.oid
  Description: The OID for the mafClearLdapWriteUnavailable trap.
  Valid Values/Examples: 1.3.6.1.4.1.3830.1.1.30.0.39

clearTrap.prepaidUnavailable.core.error.oid
  Description: The OID for the mafClearPrepaidUnavailable trap.
  Valid Values/Examples: 1.3.6.1.4.1.3830.1.1.27.0.26

clearTrap.primaryLdapUnavailable.connector.oid
  Description: The OID for the mafClearPrimaryLdapUnavailable trap.
  Valid Values/Examples: 1.3.6.1.4.1.3830.1.1.30.0.36

clearTrap.primaryLdapWriteUnavailable.connector.error.oid
  Description: The OID for the mafClearPrimaryLdapWriteUnavailable trap.
  Valid Values/Examples: 1.3.6.1.4.1.3830.1.1.30.0.40

clearTrap.smtpUnavailable.core.error.oid
  Description: The OID for the mafClearSmtpUnavailable trap.
  Valid Values/Examples: 1.3.6.1.4.1.3830.1.1.27.0.25

clearTrap.unknownServiceName.core.error.oid
  Description: The OID for the mafClearUnknownServiceName trap.
  Valid Values/Examples: 1.3.6.1.4.1.3830.1.1.27.0.28

eventTrap.configurationError.core.error.oid
  Description: The OID for the mafEventConfigurationError trap.
  Valid Values/Examples: 1.3.6.1.4.1.3830.1.1.27.0.31

eventTrap.deserializationError.core.warning.oid
  Description: The OID for the mafEventDeserializationError trap.
  Valid Values/Examples: 1.3.6.1.4.1.3830.1.1.27.0.32

eventTrap.genericDataStoreError.core.warning.oid
  Description: The OID for the mafEventGeneralDataStoreError trap.
  Valid Values/Examples: 1.3.6.1.4.1.3830.1.1.27.0.34

eventTrap.serializationError.core.warning.oid
  Description: The OID for the mafEventSerializationError trap.
  Valid Values/Examples: 1.3.6.1.4.1.3830.1.1.27.0.33



gbg.bindingType The type of binding Example: JBI

gbg.component.id GBG identification Example: GBG_3

gbg.debug.enable Enables or disables debugging. Example: false

gbg.debug.file Defines a path to gbg_debug_file.log Example: /tmp/gbg_debug_file.log

gbg.hostname GBG server hostname. Example: clausen.cz.intinfra.com

gbg.mdb.connectionFactory Connection factory JNDI name for responses from the GBG connector.

Example: maf-components/GbgMdbWSClient/connectionFactory

gbg.mdb.queue Name of the MDB queue for sending responses using the web service. The value must be MDBQueue.

Example: MDBQueue

gbg.offline Indicates whether the GBG connector is offline (true means offline).

Example: false

gbg.peerIncompleteTransaction.startLimit The number of requests sent to the GBG server and still awaiting an answer below which the GBG connector resumes accepting requests.

The value must not be higher than gbg.peerIncompleteTransaction.stopLimit.

Example: 10000

gbg.peerIncompleteTransaction.stopLimit Maximum number of requests sent to the GBG server and awaiting an answer; when reached, the GBG connector stops accepting requests.

Example: 30000

gbg.peerRequestQueue.startLimit The number of waiting requests below which the GBG connector resumes accepting requests.

The value must not be higher than gbg.peerRequestQueue.stopLimit.

Example: 50000

gbg.peerRequestQueue.stopLimit Maximum number of requests waiting to be sent to the GBG server; when reached, the GBG connector stops accepting requests.

Example: 100000
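The two start/stop pairs above form a flow-control hysteresis: the connector stops accepting requests when a stop limit is reached and resumes only once the backlog falls back to the corresponding start limit. Using the example values from the table (illustrative only):

```properties
# GBG connector flow control (example values from the table above)
gbg.peerIncompleteTransaction.startLimit=10000
gbg.peerIncompleteTransaction.stopLimit=30000
gbg.peerRequestQueue.startLimit=50000
gbg.peerRequestQueue.stopLimit=100000
```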

gbg.port Port to connect to GBG. Example: 9876

gbg.protocol Defines the protocol that is used. Example: GBG

gbg.resourceName JNDI name. Name of the resource in GlassFish.

Example: gbgout

gbg.service.id GBG component ID displayed in log files.

Example: GBG connector

gbg.transmitter.url Web service for GBG response. Example: http://localhost:9080/GbgTransmitterService/GbgTransmitterPort

has.memcachedb.retryCount Number of retries for a failed memcached request

Example: 2

has.memcachedb.servers The list of memcachedb servers that the MAF platform should use for the TDS feature

Example: 10.226.150.202:21201



has.memcachedb.timeout The timeout for a memcached request to the memcachedb backends.

Example: 10

ioThreadsMax Defines the maximum number of outgoing threads.

Example: 50

jbi.marshaller.threadpool.size JAXB marshaller pool size for JBI. Do not change the default value.

Default: 10

jbi.unmarshaller.threadpool.size JAXB unmarshaller pool size for JBI. Do not change the default value.

Default: 30

ldap.outbound.cache.capacity Obsolete. Example: 30

ldap.outbound.config.path Path to ldap.xml Example:

/etc/maf-1.0/config/ldap.xml

ldap.outbound.endpoint Base of the LDAP tree Example: o=ricuc.com

ldap.outbound.hosts.read LDAP Read hosts, primary listed first Example:

10.226.90.91,10.226.90.92

ldap.outbound.hosts.write LDAP Write hosts, primary listed first Example:

10.226.90.91,10.226.90.92

ldap.outbound.password LDAP password Example: spmaster123

ldap.outbound.resourceName Name used when creating connector resources

Example: ldap

ldap.outbound.retry.interval Defines the reconnection retry interval in milliseconds. If omitted from the maf.properties file, the system default interval of 1 minute is used.

Example: 60

ldap.outbound.service.id Used by logging to identify the name of the service responsible for writing a message in the log files. For example:

[2010-01-14 15:44:54,073 | Timer-27 | resource.spi.ManagedConnectionImpl | DEBUG] LDAP-A-MATRON ID=1 destroyed

Example: LDAP-A-MATRON

ldap.outbound.username LDAP User DN Example:

cn=DirectoryManager,o=ricuc.com



log4j.rootLogger

log4j.appender.file

log4j.appender.file.File

log4j.appender.file.MaxFileSize

log4j.appender.file.MaxBackupIndex

log4j.appender.file.layout

log4j.appender.file.layout.ConversionPattern

log4j.logger.com.acision

log4j.logger.com.acision.maf.jbi

log4j.logger.com.acision.maf.snmp

log4j.logger.com.acision.router

log4j.logger.com.sun.jbi

log4j.logger.org.snmp4j

See the Log4j documentation for information about logging settings:

http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/DailyRollingFileAppender.html

Examples

log4j.rootLogger=WARN, file

log4j.appender.file=org.apache.log4j.RollingFileAppender

log4j.appender.file.File=/opt/glassfishEsb_2.1GA/glassfish/domains/domain1/logs/maf.log

log4j.appender.file.MaxFileSize=10000KB

log4j.appender.file.MaxBackupIndex=3

log4j.appender.file.layout=org.apache.log4j.PatternLayout

log4j.appender.file.layout.ConversionPattern=[%d | %t | %c{3} | %p] %m%n

#log4j.logger.com.acision=ALL


management.period Defines the time interval after which the component status is checked. This value is in milliseconds.

Example: 40000

mm7.endpointAddress URL of MMSC SOAP interface. Example: http://mmsc:10021/vas_soap

mm7.password MMSC password. Example: 12345

mm7.protocol The protocol used.

mm7.resourceName URL of MMSC client. Example: mm7

mm7.service.id Identifies the connector. Example: MM7 connector

mm7.serviceCodeType Information supplied by the VASP which may be included in charging information.

Example: gold-sp33-im42

mm7.username MMSC username (VAS number) Example: 31600000010

mm7.vasId Identifier of the VASP for this MMS Relay/Server.

Example: News

mm7.vasPid Identifier of the originating application. Example: 31600000016

mm7.wsdlUrl Path to the MM7 WSDL file. Example: /etc/maf/wsdl/MM7Transmitter.wsdl

raiseTrap.billingException.billing.error.oid The OID for the mafRaiseBillingException trap.

1.3.6.1.4.1.3830.1.1.27.0.13

raiseTrap.configurationError.core.error.oid

The OID for the mafRaiseConfigurationError trap.

1.3.6.1.4.1.3830.1.1.27.0.11

raiseTrap.connectionException.core.error.oid

The OID for the mafRaiseConnectionException trap.

1.3.6.1.4.1.3830.1.1.27.0.14

raiseTrap.haStorageUnavailable.hasClient.error.oid

The OID for the mafRaiseHaStorageUnavailable trap.

1.3.6.1.4.1.3830.1.1.27.0.17

raiseTrap.ldapInterfaceError.provisioning.error.oid

The OID for the mafRaiseLdapInterfaceError trap.

1.3.6.1.4.1.3830.1.1.27.0.19



raiseTrap.ldapUnavailable.connector.oid The OID of the mafRaiseLdapUnavailable trap.

1.3.6.1.4.1.3830.1.1.30.0.12

raiseTrap.ldapWriteUnavailable.connector.error.oid

The OID of the mafRaiseLdapWriteUnavailable trap.

1.3.6.1.4.1.3830.1.1.30.0.37

raiseTrap.prepaidUnavailable.core.error.oid

The OID for the mafRaisePrepaidUnavailable trap.

1.3.6.1.4.1.3830.1.1.27.0.16

raiseTrap.primaryLdapUnavailable.connector.oid

The OID of the mafRaisePrimaryLdapUnavailable trap.

1.3.6.1.4.1.3830.1.1.30.0.35

raiseTrap.primaryLdapWriteUnavailable.connector.error.oid

The OID of the mafRaisePrimaryLdapWriteUnavailable trap.

1.3.6.1.4.1.3830.1.1.30.0.38

raiseTrap.smtpUnavailable.core.error.oid The OID for the mafRaiseSmtpUnavailable trap.

1.3.6.1.4.1.3830.1.1.27.0.15

raiseTrap.unknownServiceName.core.error.oid

The OID for the mafRaiseUnknownServiceName trap.

1.3.6.1.4.1.3830.1.1.27.0.18

router.max.threads Defines the maximum number of threads used by the Service Endpoint Router. The value must be lower than the Max Thread Pool Size in the configuration of openesb-pojo-engine.

Example: 10

smpp.addressRange.npi Defines the Numbering Plan Indicator. Example: 1

smpp.addressRange.ton Defines the Type of Number (TON). Example: 2

smpp.bindingType The type of binding Example: JBI

smpp.component.id A unique identifier of the component Example: SMPP_1

smpp.failsafe Obsolete Example: false

smpp.heartbeat.period Defines the interval, in milliseconds, at which a ping is sent to the SMPP server. This value must be lower than the value set in management.period.

Example: 2000

smpp.hostname The SMPP server host name Example: SMPP.intinfra.com

smpp.interactionMode Defines the type of interaction with other components. Only tromboning is currently supported.

TROMBONING

smpp.management.enable Defines whether the SMPP attaches to the failover management.

Example: true

smpp.password The password for logging to the SMSC. Example: secret

smpp.port The SMPP server port Example: 8500

smpp.protocol The protocol used. Example: SMPP

smpp.queue.capacity Defines the maximum queue length of the SMSC responses.

Example: 1000

smpp.resourceName The JNDI name Example: smppout

smpp.service.id Identifies the connector. Example: SMPP connector

smpp.serviceUnit.name Defines the name of the JBI Service Unit to which the connector handler belongs.

Example: Service Assembly for connector SMPP_1



smpp.shortCode.pattern Defines the patterns for the short codes. Example: \d{1}

smpp.systemId ID of an application in the SMSC. Must be configured in the SMSC.

Example: APP

smpp.systemType The system type as defined in the SMSC. Must be configured in the SMSC.

Example: SRV

smpp.threads Example: 150
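A minimal SMPP connector configuration can be assembled from the example values above (illustrative only; real values depend on your SMSC):

```properties
# Hypothetical SMPP connector fragment (values from the table examples)
smpp.component.id=SMPP_1
smpp.hostname=SMPP.intinfra.com
smpp.port=8500
smpp.systemId=APP
smpp.systemType=SRV
smpp.password=secret
# Must be lower than management.period (e.g. 40000 ms)
smpp.heartbeat.period=2000
```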

smtp.bindingType The type of binding. Example: JBI

smtp.component.id Defines the component's unique identifier.

Example: SMTP_2

smtp.debug.enable Defines whether communication between the SMTP connector and the server is logged.

Example: false

smtp.debug.file Defines a path to a file for debugging. This configuration item is required if debug.enable=true.

Example: /tmp/smtp_debug_mail.log

smtp.hostname SMTP server hostname Example: 10.29.41.155

smtp.password Defines a password. This configuration item is required if security.auth=true

Example: josefmrtvy

smtp.port Port to connect to SMTP server Example: 9001

smtp.protocol The protocol used Example: SMTP

smtp.resourceName JNDI name. Example: smtp

smtp.security.auth Defines whether the SMTP server requires authentication.

Example: true

smtp.service.id Connector identification Example: SMTP connector

smtp.username Defines a username. This configuration item is required if security.auth=true

Example: [email protected]

snmp.agent.port Defines a port the SNMP agent listens on. This parameter is mandatory.

Example: 1896

snmp.agent.report.interval The time period, in seconds, after which the current state of the SNMP agent variables and counters is written to a file.

Default: 3600

snmp.agent.working.dir Defines a working folder for the SNMP Agent. Contains runtime files.

Example: /tmp

snmp.enabled Defines whether the SNMP Agent is enabled or disabled. Any value other than true is treated as false.

If false, the SNMP Agent does not start and no SNMP traps are sent.

Example: true

snmp.trapreciever.host Defines the host that receives SNMP traps.

Example: 127.0.0.1

snmp.trapreciever.port Defines the port on the trap receiver host for SNMP communications.

Example: 162
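An SNMP agent fragment assembled from the example values above (illustrative only):

```properties
# Hypothetical SNMP fragment (values from the table examples)
snmp.enabled=true
snmp.agent.port=1896
snmp.agent.working.dir=/tmp
snmp.trapreciever.host=127.0.0.1
snmp.trapreciever.port=162
```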

throttling.config.path Path to the throttling.xml configuration file.

Example: /etc/maf/config/throttling.xml



A.2 The ldap.xml File

The ldap.xml file contains the configuration for the LDAP connector. This file should not be modified. The ldap.xml file is located in the /etc/maf-1.0/config directory. In a cluster environment, ldap.xml must be located in the directory defined by MAF_ROOT in domain.xml on the primary node.

The ldap.xml file describes the mapping of entities to LDAP object classes. The following template must correspond to the object classes in your LDAP database:

<ldap>
  <defaultLocation>c=en</defaultLocation>
  <types>
    <type id="entity1">
      <location>ou={unit},o=[org],c=cz</location>
      <dn ref="pk"/>
      <attribute id="sn">
        <value>first</value>
        <value>second</value>
      </attribute>
      <attribute id="cn">...</attribute>
      <objectClass>oc1</objectClass>
      <objectClass>oc2</objectClass>
      <objectClass>...</objectClass>
    </type>
    <type id="entity2">...</type>
  </types>
</ldap>

Explanation:

• <defaultLocation>: a DN, specified only once; entities without a location of their own are stored here.

• entity1, entity2, and so forth: the entity type identifiers; all known entities must be defined.

• ou={unit},o=[org],c=cz: template of the entity's base DN.

• unit: refers to the entity's field; its value replaces '{unit}', and the field is also stored as an LDAP attribute of the entity's LDAP representation.

• org: refers to the entity's field; its value replaces '[org]'.

• pk: reference to the entity's field that holds the DN attribute value; the DN attribute name will be pk.

• sn, cn, and so forth: static attributes (and their values) added to the entity's LDAP representation; they must be supported by the object classes.

• oc1, oc2, and so forth: all object classes of the entity's LDAP representation.
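As a hypothetical illustration of the template (all entity and attribute values invented), an entity1 instance with unit=sales, org=acme and pk=42 would be stored under a DN built from the location template, roughly as the following LDIF-style entry:

```
dn: pk=42,ou=sales,o=acme,c=cz
objectClass: oc1
objectClass: oc2
ou: sales
pk: 42
sn: first
sn: second
```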


The content of the ldap.xml file appears below:

<ldap>
  <defaultLocation>o=ricuc.com</defaultLocation>
  <types>
    <type id="mpluscos">
      <location>ou=mpluscos,ou=[ou]</location>
      <dn ref="mpluscosid"/>
      <objectClass>mpluscos</objectClass>
    </type>
    <type id="mplussubscriber">
      <location>ou=subscribers,ou=[ou]</location>
      <dn ref="uniqueidentifier"/>
      <objectClass>subscriber</objectClass>
      <objectClass>mailrecipient</objectClass>
      <objectClass>inetorgperson</objectClass>
      <objectClass>person</objectClass>
      <objectClass>mplussubscriber</objectClass>
      <attribute id="prepaid"><value>no</value></attribute>
      <attribute id="userpassword"><value>abcd</value></attribute>
      <attribute id="mobiletype"><value>mobiletype</value></attribute>
      <attribute id="blocked"><value>no</value></attribute>
      <attribute id="language"><value>eng</value></attribute>
    </type>
    <type id="mafservicesubscription">
      <location>uniqueidentifier=[uniqueidentifier],ou=subscribers,ou=[ou]</location>
      <dn ref="mafservicename"/>
      <attribute id="mafenabled"><value>yes</value></attribute>
      <objectClass>mafservicesubscription</objectClass>
    </type>
    <type id="mpluscoi">
      <location></location>
      <dn ref="ou"/>
      <objectClass>mpluscoi</objectClass>
      <objectClass>organizationalunit</objectClass>
      <objectClass>lccommunity</objectClass>
    </type>
    <type id="mplusvpngroup">
      <location>ou=[ou]</location>
      <dn ref="mplusvpngroupid"/>
      <objectClass>mplusvpngroup</objectClass>
    </type>
    <type id="mplusvpnentry">
      <location>mplusvpngroupid=[mplusvpngroupid],ou=[ou]</location>
      <dn ref="mplusvpnext"/>
      <objectClass>mplusvpnentry</objectClass>
    </type>
    <type id="molargeaccountfilterlist">
      <location>ou=molargeaccountfilterlist</location>
      <dn ref="mafkey"/>
      <objectClass>mafkeyentity</objectClass>
    </type>
    <type id="mtlargeaccountfilterlist">
      <location>ou=mtlargeaccountfilterlist</location>
      <dn ref="mafkey"/>
      <objectClass>mafkeyentity</objectClass>
    </type>
    <type id="reservedaliaslist">
      <location>ou=reservedaliaslist</location>
      <dn ref="mafkey"/>
      <objectClass>mafkeyentity</objectClass>
    </type>
    <type id="globalaliasoptoutlist">
      <location>ou=globalaliasoptoutlist</location>
      <dn ref="mafkey"/>
      <objectClass>mafkeyentity</objectClass>
    </type>
  </types>
</ldap>


Abbreviations

BPEL Business Process Execution Language

DCS Data Coding Scheme

DAS Domain Administration Server

GBG Generic Billing Gateway

HADB High Availability Database (Sun)

HAS High Availability Storage (client-side API)

SNMP Simple Network Management Protocol

TDS Temporal Data Store (server-side service)

WS Web Service

WSDL Web Service Definition (Description) Language


Glossary

Cluster version Multiple instances of the GlassFish application server are installed on one machine or across multiple machines (nodes).

EAR Enterprise Archive File. This type of file contains everything necessary to deploy an enterprise application on a GlassFish server.

EJB Enterprise Java Beans

JAR Java Archive File. This type of file contains class files and auxiliary resources associated with applications.

LDAP Lightweight Directory Access Protocol

Node Computer machine or virtual machine that represents one server instance.

OID SNMP Object Identifier

Single node version A single instance of the GlassFish application server is installed on one machine.


References

The following references are used in this manual:

• Acision Next Generation Platform NGP_R1.2-01 Installation and Configuration Manual, August 2009

• Log4J Official Home page, http://logging.apache.org/log4j/

• Acision MAF R1.0-02 Product Architectural Document, January 2010

• Glassfish Home page, http://docs.sun.com/app/docs/doc/820-4332/


Version History

Version Date Details of Changes

1.0 29-Jan-2010 Initial version of this document for the Acision Message Application Framework MAF_R1.0-02 release.