
IBM Tivoli®

Candle Management Server 360, CandleNet Portal 196

Historical Data Collection Guide for IBM Tivoli OMEGAMON XE Products

GC32-9429-01

OMEGAMON Platform



IBM Tivoli® OMEGAMON Platform

Historical Data Collection Guide for IBM Tivoli OMEGAMON XE Products

GC32-9429-01


Fourth Edition (June 2005)

This edition replaces GC32-9181-00.

© Copyright Sun Microsystems, Inc. 1999

© Copyright International Business Machines Corporation 1996, 2005. All rights reserved.

Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

Note

Before using this information and the product it supports, read the information in "Notices" on page 105.

Contents

Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7

Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .9

Preface . . . . . . . . . . . . 11
About This Guide . . . . . . . . . . . . 12
Documentation Conventions . . . . . . . . . . . . 15

What's New . . . . . . . . . . . . 17

Chapter 1. Overview of Historical Data Collection . . . . . . . . . . . . 19
About Historical Data Collection . . . . . . . . . . . . 20
Historical Collection Options . . . . . . . . . . . . 22
Performance Impact of Historical Data Requests . . . . . . . . . . . . 23

Chapter 2. Planning Collection of Historical Data . . . . . . . . . . . . 25
Developing a Strategy for Historical Data Collection . . . . . . . . . . . . 26

Chapter 3. Configuring Historical Data Collection on CandleNet Portal . . . . . . . . . . . . 29
Overview . . . . . . . . . . . . 30
Configuring Historical Data Collection . . . . . . . . . . . . 32
Starting and Stopping Historical Data Collection . . . . . . . . . . . . 35

Chapter 4. Configuring Historical Data Collection on CMW . . . . . . . . . . . . 37
Invoking the HDC Configuration Program . . . . . . . . . . . . 38
Using the Configuration Dialog to Control Historical Data Collection . . . . . . . . . . . . 40
Defining Data Collection Rules . . . . . . . . . . . . 41
Using the Advanced History Configuration Options Dialog . . . . . . . . . . . . 44

Chapter 5. Warehousing Your Historical Data . . . . . . . . . . . . 47
Prerequisites to Warehousing Historical Data . . . . . . . . . . . . 48
Configuring Your Warehouse . . . . . . . . . . . . 49
Preventing Historical Data File Corruption . . . . . . . . . . . . 50
Error Logging for Warehoused Data . . . . . . . . . . . . 52



Chapter 6. Converting History Files to Delimited Flat Files (Windows and OS/400) . . . . . . . . . . . . 53
Conversion Process . . . . . . . . . . . . 54
Archiving Procedure using LOGSPIN . . . . . . . . . . . . 55
Archiving Procedure using the Windows AT Command . . . . . . . . . . . . 57
Converting Files Using krarloff . . . . . . . . . . . . 58
AS/400 Considerations . . . . . . . . . . . . 60
Location of the Windows Executables and Historical Data Collection Table Files . . . . . . . . . . . . 61

Chapter 7. Converting History Files to Delimited Flat Files (z/OS) . . . . . . . . . . . . 63
Automatic Conversion and Archiving Process . . . . . . . . . . . . 64
Location of the z/OS Executables and Historical Data Table Files . . . . . . . . . . . . 67
Manual Archiving Procedure . . . . . . . . . . . . 68

Chapter 8. Converting History Files to Delimited Flat Files (UNIX Systems) . . . . . . . . . . . . 69
Understanding History Data Conversion . . . . . . . . . . . . 70
Performing the History Data Conversion . . . . . . . . . . . . 71

Chapter 9. Converting History Files to Delimited Flat Files (HP NonStop Kernel Systems) . . . . . . . . . . . . 73
Conversion Process . . . . . . . . . . . . 74

Appendix A. Maintaining the Persistent Data Store (CT/PDS) . . . . . . . . . . . . 75
About the Persistent Data Store . . . . . . . . . . . . 76
Components of the CT/PDS . . . . . . . . . . . . 77
Overview of the Automatic Maintenance Process . . . . . . . . . . . . 79
Making Archived Data Available . . . . . . . . . . . . 82
Exporting and Restoring Persistent Data . . . . . . . . . . . . 85
Data Record Format of Exported Data . . . . . . . . . . . . 87
Extracting CT/PDS Data to Flat Files . . . . . . . . . . . . 91
Command Interface . . . . . . . . . . . . 94

Appendix B. Support Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .99

Appendix C. Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .105

Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .109

Figures

Figure 1. CandleNet Portal History Collection Configuration, Configuration Tab . . . . . . . . . . . . 33
Figure 2. CandleNet Portal History Collection Configuration, Status Tab . . . . . . . . . . . . 36
Figure 3. The Configure History Icon in the Administration Window . . . . . . . . . . . . 38
Figure 4. CMW History Configuration Dialog . . . . . . . . . . . . 39
Figure 5. CMS Selection Portion of Dialog . . . . . . . . . . . . 41
Figure 6. Table or Group Selection Portion of Dialog . . . . . . . . . . . . 42
Figure 7. Advanced History Configuration Options Dialog . . . . . . . . . . . . 44



Tables

Table 1. Symbols in Command Syntax . . . . . . . . . . . . 15
Table 2. Logfile parameter values . . . . . . . . . . . . 56
Table 3. krarloff Parameters . . . . . . . . . . . . 59
Table 4. DD Names Required . . . . . . . . . . . . 65
Table 5. KPDXTRA parameters . . . . . . . . . . . . 65
Table 6. History conversion parameters . . . . . . . . . . . . 72
Table 7. Determining the medium for dataset backup . . . . . . . . . . . . 80
Table 8. Section 1 Data Record Format . . . . . . . . . . . . 87
Table 9. Section 2 Data Record Format . . . . . . . . . . . . 88

Table 10. Section 2 Table Description Record . . . . . . . . . . . . 89
Table 11. Section 2 Column Description Record . . . . . . . . . . . . 89
Table 12. Section 3 Record Format . . . . . . . . . . . . 90



Preface

This document describes the use of the historical data collection capability in CandleNet Portal®, the user interface for OMEGAMON® XE products. It also describes the use of the historical data collection capability in the Candle Management Workstation®.

Before you can use any of the procedures or tools documented in this book, OMEGAMON Platform version 3.6.0 must have been installed, including the following components:

• Candle Management Server® (CMS)

• CandleNet Portal client (desktop or browser)

• CandleNet Portal Server

• CMS and CandleNet Portal support for any IBM Tivoli® OMEGAMON XE monitoring products

For instructions, see installation and configuration books on the OMEGAMON Platform and CandleNet Portal Documentation CD and the IBM Tivoli OMEGAMON XE product documentation CDs.

If you intend to warehouse historical data, you must also have installed Microsoft's MS SQL Server relational database and the Candle® Warehouse Proxy agent on Windows.



About This Guide

Who should read this guide
This guide is intended for those responsible for planning or configuring historical data collection for resources monitored by OMEGAMON XE products, and for those responsible for maintaining the collected data.

The historical data collection configuration, warehousing, and archiving tasks require a working knowledge of

• Windows®, and MVS®, OS/390®, or z/OS® operating systems

• Microsoft's MS SQL Server relational database

Document set information
This book is part of the OMEGAMON XE Platform library. This section lists the other publications in the library and related documents. It also describes how to access Tivoli publications online and how to order Tivoli publications.

OMEGAMON XE Platform library
The following documents are available in the OMEGAMON XE Platform library:

• Administering OMEGAMON Products: CandleNet Portal, GC32-9180

This document describes the support tasks and functions required for the OMEGAMON Platform, including CandleNet Portal user administration.

• Using OMEGAMON Products: CandleNet Portal, GC32-9182

This guide describes the features of CandleNet Portal and how best to use them with your OMEGAMON products.

• Installing and Setting up OMEGAMON Platform and CandleNet Portal on Windows and UNIX, SC32-1768

Provides instructions for installing and configuring the components of the OMEGAMON Platform and the CandleNet Portal interface.

• Configuring Candle Management Server (CMS) on z/OS, GC32-9414

Provides instructions for configuring and customizing the Candle Management Server on z/OS.

The following books document the Candle Management Workstation interface to the OMEGAMON products:

• Candle Management Workstation Administrator's Guide, GC32-9175-00

• Candle Management Workstation Quick Reference, GC32-9176-00

• Candle Management Workstation User's Guide, GC32-9177-00


The following books document the messages issued by the OMEGAMON Platform components and products that run on it:

• IBM Tivoli Candle Products Messages Volume 1 (AOP–ETX), SC32-9416-00

• IBM Tivoli Candle Products Messages Volume 2 (EU–KLVGM), SC32-9417-00

• IBM Tivoli Candle Products Messages Volume 3 (KLVHS–KONCT), SC32-9418-00

• IBM Tivoli Candle Products Messages Volume 4 (KONCV–OC), SC32-9419-00

• IBM Tivoli Candle Products Messages Volume 5 (ODC–VEB and Appendixes), SC32-9420-00

The online glossary for the CandleNet Portal includes definitions for many of the technical terms related to OMEGAMON XE software.

Accessing publications online
The OMEGAMON Platform and CandleNet Portal Documentation CD contains the publications that are in the product library. The format of the publications is PDF. Refer to the readme file on the CD for instructions on how to access the documentation.

IBM posts publications for this and all other Tivoli products, as they become available and whenever they are updated, to the Tivoli software information center Web site. Access the Tivoli software information center by first going to the Tivoli software library at the following Web address:

http://www.ibm.com/software/tivoli/library/

Scroll down and click the Product manuals link. In the Tivoli Technical Product Documents Alphabetical Listing window, click the OMEGAMON XE Platform link to access the product library at the Tivoli software information center.

If you print PDF documents on paper other than letter-sized, set the option in the File > Print window that allows Adobe Reader to print letter-sized pages on your local paper.

Ordering publications
You can order many Tivoli publications online at the following Web site:

http://www.elink.ibmlink.ibm.com/public/applications/publications/cgibin/pbi.cgi

You can also order by telephone by calling one of these numbers:

� In the United States: 800-879-2755

� In Canada: 800-426-4968

In other countries, see the following Web site for a list of telephone numbers:

http://www.ibm.com/software/tivoli/order-lit

Tivoli technical training
For Tivoli technical training information, refer to the following IBM Tivoli Education Web site:

http://www.ibm.com/software/tivoli/education


Support information
If you have a problem with your IBM software, you want to resolve it quickly. IBM provides the following ways for you to obtain the support you need:

• Searching knowledge bases: You can search across a large collection of known problems and workarounds, Technotes, and other information.

• Obtaining fixes: You can locate the latest fixes that are already available for your product.

• Contacting IBM Software Support: If you still cannot solve your problem and you need to work with someone from IBM, you can use a variety of ways to contact IBM Software Support.

For more information about these three ways of resolving problems, see "Support Information" on page 99.

Participating in newsgroups
User groups provide software professionals with a forum for communicating ideas, technical expertise, and experiences related to the product. They are located on the Internet and are available using standard news reader programs. These groups are primarily intended for user-to-user communication and are not a replacement for formal support.

To access a newsgroup, use the instructions appropriate for your browser.


Documentation Conventions

Overview
This guide uses several conventions for special terms and actions, and for operating system-dependent commands and paths.

Panels and figures
The panels and figures in this document are representations. Actual product panels may differ.

Required blanks
The slashed-b (!) character in examples represents a required blank. The following example illustrates the location of two required blanks.

!!!!eBA*ServiceMonitor!!!!0990221161551000

Revision bars
Revision bars (|) may appear in the left margin to identify new or updated material.

Variables and literals
In examples of z/OS® command syntax, uppercase letters are actual values (literals) that the user should type; lowercase letters are used for variables that represent data supplied by the user. Default values are underscored.

LOGON APPLID (cccccccc)

In the above example, you type LOGON APPLID followed by an application identifier (represented by cccccccc) within parentheses.

Symbols
The following symbols may appear in command syntax:

Table 1. Symbols in Command Syntax

Symbol   Usage

|        The "or" symbol denotes a choice: either the argument on the left or the argument on the right may be used. Example:

             YES | NO

         In this example, YES or NO may be specified.

[ ]      Denotes optional arguments. Arguments not enclosed in square brackets are required. Example:

             APPLDEST DEST [ALTDEST]

         In this example, DEST is a required argument and ALTDEST is optional.


{ }      Some documents use braces to denote required arguments, or to group arguments for clarity. Example:

             COMPARE {workload} -REPORT={SUMMARY | HISTOGRAM}

         The workload variable is required. The REPORT keyword must be specified with a value of SUMMARY or HISTOGRAM.

_        Default values are underscored. Example:

             COPY infile outfile - [COMPRESS={YES | NO}]

         In this example, the COMPRESS keyword is optional. If specified, the only valid values are YES or NO. If omitted, the default is YES.



What's New

Disk space requirements have moved
Information about space requirements for the historical tables for OMEGAMON XE products, formerly contained in an appendix to this book, has been moved to the user's guide or getting started guide for the appropriate products.



Chapter 1. Overview of Historical Data Collection

Introduction
This chapter introduces historical data collection.

Chapter Contents
About Historical Data Collection . . . . . . . . . . . . 20
Historical Collection Options . . . . . . . . . . . . 22
Performance Impact of Historical Data Requests . . . . . . . . . . . . 23



About Historical Data Collection

Overview
The Historical Data Collection (HDC) Configuration program, invoked from either CandleNet Portal or the Candle Management Workstation (CMW), begins the collection of historical data. The program allows you to specify the collection of historical data either at the Candle Management Server (CMS) or at the remote system where the OMEGAMON XE monitoring agent is installed.

For Candle Management Servers, you can optionally specify historical data to be warehoused. Candle monitoring agents can also warehouse data as long as they are connected to a CMS. The warehoused data is written to Microsoft's MS SQL Server relational database on Windows. See "Warehousing Your Historical Data" on page 47.

Alternatively, you can continue to convert your historical data to delimited flat files or datasets using programs distributed with CandleNet Portal and with the CMW. You can then use the converted historical data with any reporting tool from a third-party vendor such as SAS® or Microsoft® Excel, or with other popular PC application tools to produce trend analysis reports and graphics.

You can also load the converted data into relational databases such as DB2®, ORACLE®, Sybase®, Microsoft SQL Server, or others and produce customized history reports.
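As a sketch of that last step, the following Python fragment loads rows from a converted history file into a relational table and produces a simple customized report. The semicolon delimiter, the column layout, and the sample values are illustrative assumptions only, not the actual output format of the conversion programs, which varies by product and platform.

```python
import csv
import io
import sqlite3

# Illustrative sample of converted history rows (semicolon-delimited).
# Assumed columns: write time, managed system name, % disk busy.
sample = io.StringIO(
    "1050621093000000;sys1;12.5\n"
    "1050621094500000;sys1;40.0\n"
    "1050621093000000;sys2;7.5\n"
)

conn = sqlite3.connect(":memory:")  # stand-in for DB2, Oracle, SQL Server, ...
conn.execute(
    "CREATE TABLE disk_history (writetime TEXT, system TEXT, busy_pct REAL)"
)
for row in csv.reader(sample, delimiter=";"):
    conn.execute(
        "INSERT INTO disk_history VALUES (?, ?, ?)",
        (row[0], row[1], float(row[2])),
    )

# A customized history report: average busy percentage per system.
report = dict(conn.execute(
    "SELECT system, AVG(busy_pct) FROM disk_history GROUP BY system"
))
print(report)  # {'sys1': 26.25, 'sys2': 7.5} (key order may vary)
```

The same shape of script applies whichever database or reporting tool you feed the converted files into; only the connection and load statements change.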

Managing your historical data
It is vital that you either warehouse your historical data or convert it to delimited flat files or datasets. Otherwise, your history data files will grow unchecked, using up valuable disk space. On the mainframe, datasets will fill and historical data will no longer be written.

If you choose not to warehouse your data, you must institute rolloff jobs to regularly convert and empty out the history data files. This task is in addition to the main function of the rolloff programs, which is to convert the binary history data into readable text files. See the Converting Files to Delimited Flat Files chapters, as appropriate for your platform, for instructions.

Collecting Short Term History
In addition to the historical data collection reports, for which collection and conversion procedures are documented in this manual, CandleNet Portal and the CMW provide a short term history reporting capability.

You can find information on how to request short term history reports, and how to specify the time interval for which you want short term history displayed, in the individual product manuals in the discussion of product reports. Information about, and illustrations of, the available short term status history reports is in the Candle Management Workstation User's Guide. You can also find information on requesting history reports and on specifying time intervals in CandleNet Portal in the online help.

To collect the data required for the generation of short term history reporting, you must start historical data collection as documented in "Configuring Historical Data Collection on CMW" on page 37 or in "Configuring Historical Data Collection on CandleNet Portal" on page 29.


Historical Collection Options

Overview
To provide flexibility in using historical data collection, you can:

• turn history collection on, or turn off all history collection, for multiple selected Candle Management Servers and multiple selected tables for a product

• save the history file at the CMS or at the remote agent

• define what data to save; that is, select which columns of a history table should be collected

• define the periodic time interval at which to save data (5, 15, 30, or 60 minutes)

• define the number of intervals of history to retain before the data is warehoused to a relational database using ODBC, or use product-provided scripts to convert historical data to delimited flat files (these options are mutually exclusive)
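The options above can be pictured as a small per-table configuration record. The sketch below is illustrative only (the field names are invented, not actual product options); it encodes the supported intervals and the rule that warehousing and flat-file conversion are mutually exclusive.

```python
from dataclasses import dataclass

VALID_INTERVALS = (5, 15, 30, 60)  # supported collection intervals, in minutes


@dataclass
class HistoryCollectionConfig:
    """Illustrative model of per-table collection options (not a real API)."""
    table: str                    # history table, e.g. "UNIXDPERF"
    collect_at_agent: bool        # save the history file at the agent vs. the CMS
    interval_minutes: int         # periodic time interval at which data is saved
    warehouse: bool               # roll data off to a relational database via ODBC
    convert_to_flat_files: bool   # or convert with the product-provided scripts

    def validate(self) -> None:
        if self.interval_minutes not in VALID_INTERVALS:
            raise ValueError(f"interval must be one of {VALID_INTERVALS}")
        if self.warehouse and self.convert_to_flat_files:
            raise ValueError(
                "warehousing and flat-file conversion are mutually exclusive"
            )


cfg = HistoryCollectionConfig(
    "UNIXDPERF",
    collect_at_agent=True,
    interval_minutes=15,
    warehouse=True,
    convert_to_flat_files=False,
)
cfg.validate()  # passes; setting convert_to_flat_files=True as well would raise
```

A record like this exists per table, which mirrors the restriction described next: every agent of the same type reporting to the same CMS shares the table's options.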

Historical data collection can be specified for individual Candle Management Servers, products, and tables. However, all agents of the same type that report directly to the same CMS must have the same history collection options. Also, for a given history table, the same history collection options are applied to all Candle Management Servers for which that history table's collection is currently enabled.

For example, if collection of UNIX® Disk Performance (UNIXDPERF) is specified at the remote agent level, each UNIX agent running on a remote managed system collects historical data on that remote managed system.

For Candle Management Servers, you can optionally specify historical data to be warehoused. Candle monitoring agents can also warehouse data as long as they are connected to a CMS. The warehoused data is written to Microsoft's SQL Server database on Windows.

Note: This document describes using Version 360 of the Warehouse Proxy Agent to warehouse your historical data.

Some Candle agents do not provide history data for all of their tables and attribute groups. This is because the applications group for that agent has determined that collecting history data for certain tables is not appropriate, or would have a detrimental effect on performance. This could be due to the vast amount of data that would be generated.

Therefore, for each product, only tables that are available for history collection are listed in the History Collection Configuration dialog.

If, after you configure history data for a table and start history collection, you still do not see history data for that table, there is a problem either with the agent's collection of that data or with the history mechanism.


Performance Impact of Historical Data Requests

Overview
The impact of historical data collection and warehousing on OMEGAMON Platform components depends on multiple factors, including the collection interval, the number and size of the historical tables collected, the amount of data, system size, and so on. This section describes some of these factors.

Impact on the CMS or the agent of large amounts of historical data
The component specified for collecting or warehousing history data (either the CMS or the agent) can be negatively impacted when processing large amounts of data. This can occur because the historical warehouse process on the CMS or the agent must read the large row set from the history data file; the data must then be transmitted to the Warehouse Proxy agent. For large datasets, this sometimes impacts memory and CPU resources. Because of its ability to handle numerous requests simultaneously, the CMS is not impacted as greatly as the agent.

Impact on the agent
An agent processing a large data request may be prevented from processing other requests until the time-consuming request has completed. This is important because most agents can process only one report, one situation, or one warehousing request at a time.

Requests for historical data from large tables
Requests for historical data from tables that collect a large amount of data have a negative impact on the performance of the OMEGAMON Platform components involved. To reduce the performance impact on your system, we recommend setting a longer collection interval for tables that collect a large amount of data. You specify this setting from the Configuration tab of the History Collection Configuration dialog. For the disk space requirements of the historical data tables in your OMEGAMON XE product, see the user's guide or getting started guide for that product.

When you are viewing a report or a workspace for which you would like (short term) historical data, you can set the Time Span interval to obtain data for previous samplings. Selecting a long time span interval for the report time span increases the amount of data being processed, and may have a negative impact on performance. The program must dedicate more memory and CPU cycles to process a large volume of report data. In this instance, we recommend specifying a shorter time span setting, especially for tables that collect a large amount of data.

If a report row set is too large, the report request may drop the task and return to CandleNet Portal or the CMW with no rows because the agent took too long to process the request. However, the agent continues to process the report data to completion, and remains blocked, even though the report data is not viewable.


There could also be cases where historical report data from the Persistent Data Store is not available. This can occur while the Persistent Data Store maintenance job is running.

Scheduling the warehousing of historical data
The same issues that apply to requesting large reports apply to scheduling the warehousing of historical data only once a day. The more data that is warehoused at once, the more resources are required to read the data into memory and transmit it to the Warehouse Proxy agent. If possible, we recommend making the warehousing row set smaller by spreading the warehousing load over each hour, that is, by setting the warehouse interval to 1 hour.
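The effect of the warehouse interval on batch size can be sketched with the same kind of back-of-the-envelope arithmetic. The collection rate below is an assumed figure, not a measurement:

```python
# Comparing warehouse batch sizes for a once-a-day roll-off versus an
# hourly roll-off. The collection rate is an illustrative assumption.

def batch_rows(rows_per_hour, warehouse_interval_hours):
    """Rows the agent must read and transmit per warehousing request."""
    return rows_per_hour * warehouse_interval_hours

rows_per_hour = 1200                     # assumed collection rate
daily = batch_rows(rows_per_hour, 24)    # 28800 rows in one large request
hourly = batch_rows(rows_per_hour, 1)    # 1200 rows in each of 24 small requests
```

The total volume moved per day is the same, but the hourly schedule keeps each individual request, and therefore the memory and transmission spike, 24 times smaller.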


Planning Collection of Historical Data

Introduction
This chapter provides information about:

• selecting a strategy for historical data collection in your enterprise

• the components used by various platforms to accomplish historical data collection

• the tables used to collect historical data and their space requirements

Chapter Contents
Developing a Strategy for Historical Data Collection



Developing a Strategy for Historical Data Collection

Overview
When developing a strategy for historical data collection, you must determine:

• the rules under which data will be collected; for example:

  – How often do I want to collect historical data?

  – Where do I want to collect the data: at the Candle Management Server or at the location where the OMEGAMON XE monitoring agent is running?

  – What data do I want to collect?

• how often you want to warehouse collected data

• whether scheduling of data conversion to delimited flat files should be automatic or manual

Defining data collection rules
Among the factors that should govern the frequency of historical data collection are:

• How much disk storage will be required to store the data being collected?

• What use will be made of the collected data?

For information about using the History Configuration dialog to establish the rules under which data is collected, see "Defining Data Collection Rules" on page 41.

Warehousing collected data
The History Configuration program used by the Candle Management Workstation permits you to warehouse collected historical data to a database using ODBC. For additional information, see "Specifying collection options" on page 42.

CandleNet Portal also allows you to warehouse collected historical data to a database using ODBC. See "Configuring collection of attribute data" on page 32. For instructions on configuring a database, see "Warehousing Your Historical Data" on page 47.

Note: This document describes using Version 360 of the Candle Warehouse Proxy Agent to warehouse your historical data.

Defining the data conversion process
Data can be scheduled for conversion to delimited flat files either manually or automatically. If you choose to continue converting data to delimited flat files, we strongly recommend scheduling the conversion to run automatically. Perform data conversion on a regular basis even if you are collecting historical data only to support the short-term history displayed in product reports, because any historical data collection consumes system resources.


Data conversion programs
Programs are called to execute the conversion of history files to delimited flat files. The program that performs the conversion depends on your system environment.

• UNIX components

  – The program that converts the binary history file to a delimited flat file is called krarloff.

• Windows 2000, ME, or XP components

  – The program that converts the binary history file to a delimited flat file is called krarloff.

  – The program used to simulate the UNIX crontab command to archive historical data collection files on Windows Candle Management Servers and remote managed systems is called LOGSPIN.EXE.

• MVS, OS/390, or z/OS components

  – The program that converts the binary history file to a delimited flat file is called KPDXTRA.

Columns added to history data files and to meta description files
Four columns are automatically added to the history data files and to the meta description files. These columns are:

• TMZDIFF. The time zone difference from Universal Time (GMT). This value is shown in seconds.

• WRITETIME. The CT timestamp when the record was written. This is a 16-character value in the format cyymmddhhmmssttt, where:

  – c = century

  – yymmdd = year, month, day

  – hhmmssttt = hours, minutes, seconds, milliseconds

• SAMPLES. Incremental counter for the number of samples written since the agent started. All rows written during the same interval have the same number.

• INTERVAL. The time between samples, shown in milliseconds.

Note: The warehousing process adds only two columns (TMZDIFF and WRITETIME) to the warehouse database. See "Warehousing Your Historical Data" on page 47.

For a sample meta description file, see "Sample *.hdr meta description file" on page 28.
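A WRITETIME value can be unpacked positionally. The following Python sketch assumes the century digit counts centuries from 1900 (so c=1 with yy=05 means 2005); verify this convention against timestamps from your own environment before relying on it:

```python
from datetime import datetime

def parse_writetime(ts: str) -> datetime:
    """Decode a 16-character CT timestamp of the form cyymmddhhmmssttt.

    Assumption: the century digit c counts centuries from 1900, so
    c=1, yy=05 yields 2005. Check against your own data first.
    """
    if len(ts) != 16 or not ts.isdigit():
        raise ValueError("expected 16 digits: cyymmddhhmmssttt")
    year = 1900 + 100 * int(ts[0]) + int(ts[1:3])
    return datetime(
        year, int(ts[3:5]), int(ts[5:7]),              # month, day
        int(ts[7:9]), int(ts[9:11]), int(ts[11:13]),   # hours, minutes, seconds
        int(ts[13:16]) * 1000,                         # ttt milliseconds -> microseconds
    )

# parse_writetime("1050615143021500") -> 2005-06-15 14:30:21.500000
```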

Meta description files
A meta description file describes the format of the data in the source files. Meta description files are generated at the start of the historical data collection process.

The various platforms use different file naming conventions. Here are the rules for some platforms:


• AS/400 and HP NonStop Kernel (formerly Tandem)

  Description files use the name of the data file as the base. The last character of the name is "M". For example, for table QMLHB, the history data file name is QMLHB and the description file name is QMLHBM.

• z/OS and earlier

  Description records are stored in the PDS facility, along with the data.

• UNIX

  Uses the *.hdr file naming convention.

• Windows

  Uses the *.hdr file naming convention.

Sample *.hdr meta description file

TMZDIFF(int,0,4)
WRITETIME(char,4,16)
QM_APAL.ORIGINNODE(char,20,128)
QM_APAL.QMNAME(char,148,48)
QM_APAL.APPLID(char,196,12)
QM_APAL.APPLTYPE(int,208,4)
QM_APAL.SDATE_TIME(char,212,16)
QM_APAL.HOST_NAME(char,228,48)
QM_APAL.CNTTRANPGM(int,276,4)
QM_APAL.MSGSPUT(int,280,4)
QM_APAL.MSGSREAD(int,284,4)
QM_APAL.MSGSBROWSD(int,288,4)
QM_APAL.INSIZEAVG(int,292,4)
QM_APAL.OUTSIZEAVG(int,296,4)
QM_APAL.AVGMQTIME(int,300,4)
QM_APAL.AVGAPPTIME(int,304,4)
QM_APAL.COUNTOFQS(int,308,4)
QM_APAL.AVGMQGTIME(int,312,4)
QM_APAL.AVGMQPTIME(int,316,4)
QM_APAL.DEFSTATE(int,320,4)
QM_APAL.INT_TIME(int,324,4)
QM_APAL.INT_TIMEC(char,328,8)
QM_APAL.CNTTASKID(int,336,4)
SAMPLES(int,340,4)
INTERVAL(int,344,4)

For example, an entry may have the form:

attribute_name(int,75,20)

where int identifies the data as an integer, 75 is the starting column in the data file, and 20 is the length of the field for this attribute in the file.
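Given the entry format above, a meta description file can be parsed mechanically and used to slice fixed-width history records. The Python sketch below is illustrative only: the regular expression follows the name(type,start,length) pattern shown in the sample, and the binary decoding assumes 4-byte big-endian integers and blank-padded ASCII text, which may not match every platform (z/OS history data, for example, may be EBCDIC):

```python
import re
import struct

# Entry pattern per the documented format: name(type,start,length).
ENTRY = re.compile(r"([\w.]+)\((\w+),(\d+),(\d+)\)")

def parse_hdr(text):
    """Return a list of (name, type, start, length) column descriptors."""
    return [(n, t, int(s), int(l)) for n, t, s, l in ENTRY.findall(text)]

def decode_record(record: bytes, columns):
    """Slice one fixed-width history record using the descriptors.

    Assumes 4-byte big-endian integers and blank/NUL-padded ASCII text;
    verify byte order and encoding against your own history files.
    """
    row = {}
    for name, ctype, start, length in columns:
        field = record[start:start + length]
        if ctype == "int" and length == 4:
            row[name] = struct.unpack(">i", field)[0]   # assumed big-endian
        else:
            row[name] = field.decode("ascii", "replace").rstrip(" \x00")
    return row

cols = parse_hdr("TMZDIFF(int,0,4)WRITETIME(char,4,16)")
# cols == [('TMZDIFF', 'int', 0, 4), ('WRITETIME', 'char', 4, 16)]
```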

Estimating Space Required to Hold Historical Data Tables
Historical data is written to performance attribute tables. Refer to the product documentation for assistance in determining the names of the tables in which historical data is stored and their size, as well as which tables are defaults. Most products provide worksheets to assist you in estimating the disk storage required to hold your enterprise's historical data.
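In the absence of a product worksheet, a first-order estimate multiplies row length by collection rate and retention. The figures below are hypothetical placeholders; substitute the row sizes and rates documented for your own tables:

```python
# A worksheet-style estimate of disk space for one historical table.
# The row length, rates, and retention are illustrative assumptions.

def table_bytes(row_length, rows_per_sample, samples_per_day, days_kept):
    """Approximate bytes of short-term history retained on disk."""
    return row_length * rows_per_sample * samples_per_day * days_kept

# 348-byte rows, 50 rows per sample, a 15-minute collection interval
# (96 samples per day), and 1 day of retention:
size = table_bytes(348, 50, 96, 1)    # 1,670,400 bytes, roughly 1.6 MB
```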


Configuring Historical Data Collection on CandleNet Portal

Introduction
This chapter describes how to configure and manage the collection of historical data from CandleNet Portal.

See "Configuring Historical Data Collection on CMW" on page 37 for instructions for configuring and managing historical data collection from the Candle Management Workstation.

Before you begin
CMS start-up must be complete and the CMS must be running before you attempt to configure historical data collection. If you choose to warehouse your historical data rather than convert it to delimited flat files, you must have installed and configured the relational database to which you will roll off the data via ODBC.

Refer to the Installing and Setting up OMEGAMON Platform and CandleNet Portal on Windows and UNIX for details on installing the database to which you will write historical data. See "Configuring Your Warehouse" on page 49 for configuration information.

Chapter Contents
Overview
Configuring Historical Data Collection
Starting and Stopping Historical Data Collection


Overview

Historical setup
Configuring historical data collection involves specifying the attribute groups for which data is collected, the collection interval, the roll-off interval (to a data warehouse), if any, and where the collected data is stored (at the agent or the CMS).

To ensure that data samplings are saved to populate your predefined historical workspaces, you must first configure and start historical data collection. This requirement does not apply to workspaces using attribute groups that are historical in nature and show all their entries without your starting data collection separately.

Some agents do not provide history data for all of their attribute group tables. This is because the application development group for that agent has determined that collecting history data for certain tables is not appropriate or would have a detrimental effect on performance, for example because of the vast amount of data that would be generated. Therefore, for each product, only tables that are available for history collection are listed in the History Collection Configuration dialog. See "Configuring Historical Data Collection" on page 32.

Requirements for invoking the HDC configuration program
In order to invoke the HDC Configuration program, you must have Configure History authority. The system administrator can grant this authority using the Administer Users, Permissions tab in CandleNet Portal. If you do not have the proper authority, you will not see the menu option or the toolbar option for historical configuration. See the Using OMEGAMON Products: CandleNet Portal document for more information.

Data roll off
On Windows and UNIX systems, historical data is collected in binary files. These files grow as new data is added at every sampling interval. Their size can increase quickly and take up a great deal of space on the hard drive, and the larger a history file is, the longer it takes to retrieve historical data into views. On z/OS systems, historical data is stored in data sets. If these data sets fill up and no empty data sets are available, future attempts to write data to any data set in the group will fail.

On z/OS, you can configure the persistent data store (CT/PDS) to maintain the historical data sets. In addition, the OMEGAMON Platform has file conversion programs that move data out of the historical files or datasets to delimited text files and delete the stored information. See the chapter on converting files to delimited flat files appropriate for your platform for instructions.

The long-term history feature offers a more permanent solution. The history files are maintained automatically because the data is periodically rolled off to a historical database (also called the Candle Data Warehouse or data warehouse). To use long-term history, you must have configured your environment to include the Warehouse Proxy agent and the Candle Data Warehouse (historical database) for storing long-term historical data. See Installing and Setting up OMEGAMON Platform and CandleNet Portal on Windows and UNIX and "Warehousing Your Historical Data" on page 47 for instructions.

Viewing historical data
The table view and the bar, pie, and plot charts in CandleNet Portal have a tool for setting a time span. This Time Span tool causes previously collected data samples to be reported up to the time specified. Your product may also have predefined workspaces with historical views.

If, after you configure history data for a table and start history collection, you still do not see history data for that table, there is a problem either with the agent's collection of that data or with the history mechanism.


Configuring Historical Data Collection

Overview
You use the History Collection Configuration dialog to:

• review the current configuration of historical data collection for a specific CMS or product

• start or stop historical data collection

• specify how historical data is to be collected for a specific product on a specific CMS

• change existing specifications for data collection

Accessing the History Collection Configuration dialog
You access the History Collection Configuration dialog by clicking the icon on the toolbar or by selecting History Configuration from the Edit menu (Ctrl+H). If you do not see the icon or the menu option, your user ID does not have the proper authority.

Configuring collection of attribute data
The groups for which you want to collect data must be configured before you can start data collection. You use the Configuration tab to set up historical data collection (see Figure 1 on page 33).

From the Configuration tab, you can specify:

• the product for which data is to be collected

• the attribute group or groups for which data is to be collected

• the interval at which data for a particular attribute group is collected

• the location at which the data is stored (either the agent or the CMS)

• the interval at which data is warehoused, if any

If short-term history data is not being warehoused, it accumulates indefinitely unless it is rolled off using the provided file conversion programs. If it is being warehoused, data older than 24 hours is automatically deleted.


Figure 1. CandleNet Portal History Collection Configuration dialog: Configuration tab

You can view the attribute groups for a selected product for which data collection is recommended by clicking Show Default Groups.

Note: You cannot configure data collection for individual attributes from CandleNet Portal. If you want to exclude or include specific attributes in a group, you must configure collection from the CMW. See "Configuring Historical Data Collection on CMW" on page 37.

Configuration tab
To configure data collection for an attribute group or groups:

1. On the Configuration tab, select the product (agent type) for which you want to collect data. The attribute groups for which you can collect historical data appear in a list box.


Note: When you select a product type, you are configuring collection for all agents of that type that report to the selected CMS.

2. Select one or more attribute groups, then use the radio buttons to select the interval for data collection, the location of data collection, and the interval for warehousing, if any.

Note: The controls show the default settings when you first open the dialog. As you select attribute groups from the list, the controls do not change for the selected group. If you change the settings for a group, those changes continue to display no matter which group you select while the dialog is open. This enables you to adjust the configuration controls once and apply the same settings to any number of attribute groups (one after the other, or use Ctrl+click to select multiples or Shift+click to select all groups from the first one selected to this point). The true configuration settings show in the group list and in the Status tab.

3. Click Configure Group(s) to apply the configuration selections to the attribute group or groups. The values do not take effect unless you click this button.

Changes made to the configuration of any group are automatically reflected on the Status tab for all Candle Management Servers on which collection for the changed groups is already started. It is not necessary to stop and then restart collection for a group whose configuration has changed.

Note: Clicking Unconfigure Group(s) automatically stops collection for that group on all Candle Management Servers first.

Configuring data collection for logs
The CCC Logs apply to all applications. If you want to save the information in these logs, you should configure them for warehousing. You can configure historical data collection for any of the CCC Logs.

Note: Although you can set up historical data collection for any of these logs, you can create a chart or table view for only TNODESTS (Managed System Change Log) and Situations Status Log. CandleNet Portal currently does not provide query support for KRAMESG (Universal Message Log), OPLOG (Operations Log), TEIBLOG (Enterprise Information Base Changes Log), or TWORKLST (Worklist Log).


Starting and Stopping Historical Data Collection

Overview
You start and stop historical data collection for a specific CMS from the Status tab of the History Collection Configuration dialog.

The attribute groups for which you want to collect data must be configured before you can start data collection. See "Configuring collection of attribute data" on page 32.

Starting historical data collection
Use the Status tab of the History Collection Configuration dialog to view the configuration and collection status for each attribute group of a selected product on a selected CMS (see Figure 2 on page 36). You also use the Status tab to start and to stop collection.

To start data collection for configured attribute groups:

1. On the Status tab, select a CMS from the dropdown list.

2. Select a product.

3. Select the attribute group or groups for which you want to start data collection. The attribute groups for which historical data collection has been configured are listed in the Collection Status table.

Shift-click to select contiguous groups, or Ctrl-click to select noncontiguous groups.

4. Click Start Collection.

On distributed systems, two files are created for every attribute group selected: a configuration file with a .hdr extension and a binary history file with no extension. For example, if you select the Address Space CPU Utilization attribute group, the two history files are ASCPUUTIL.hdr and ASCPUUTIL.


Figure 2. CandleNet Portal History Collection Configuration dialog: Status tab

Stopping data collection
To stop data collection:

1. On the Status tab, select a CMS from the dropdown list.

2. Select a product.

3. Select the attribute group or groups for which you want to stop data collection.

Shift-click to select contiguous groups, or Ctrl-click to select noncontiguous groups.

4. Click Stop Collection.


Configuring Historical Data Collection on CMW

Introduction
You invoke the Historical Data Collection (HDC) Configuration program to start or to stop the collection of historical data. You define the rules for running the program using the History Configuration dialog, illustrated in this chapter.

For information on configuring historical data collection on CandleNet Portal, see "Configuring Historical Data Collection on CandleNet Portal" on page 29.

Before you begin
The CMS start-up must be complete and the CMS must be running before you attempt to configure historical data collection. If you choose to warehouse your historical data rather than convert it to delimited flat files, you must have installed and configured the relational database to which you will roll off the data via ODBC.

Refer to the Installing and Setting up OMEGAMON Platform and CandleNet Portal on Windows and UNIX for details on installing the database to which you will write historical data. See Chapter 4, "Configuring Your Warehouse" on page 49 for configuration information.

Chapter Contents
Invoking the HDC Configuration Program
Using the Configuration Dialog to Control Historical Data Collection
Defining Data Collection Rules
Using the Advanced History Configuration Options Dialog


Invoking the HDC Configuration Program

Requirements for invoking the HDC Configuration program
In order to invoke the HDC Configuration program, you must have the appropriate authority to launch the program. The system administrator can grant this authority using the Authority Settings window. If you do not have appropriate authority to launch the Configure History program, the associated icon will not appear in the Administration - Icons window.

Steps to invoke the HDC Configuration Program
To invoke the HDC Configuration program:

1. Access the CMW Administration - Icons window (Figure 3).

2. From the CMW Administration - Icons window, double-click the Configure History icon. CMW displays the CCC History Configuration dialog.

Figure 3. The Configure History Icon in the Administration Window

About the History Configuration dialog
Using the History Configuration dialog, you can:

• review current settings for historical data collection for a specific CMS or product

• start or stop historical data collection

• specify how historical data is to be collected for a specific product on a specific CMS or on multiple Candle Management Servers. You can configure history for multiple servers and multiple tables at one time.

• change existing specifications for data collection


Figure 4. CMW History Configuration Dialog


Using the Configuration Dialog to Control Historical Data Collection

Specifying configuration options
On the CCC History Configuration dialog, you can select:

• Display current configuration to display the collection status for each table for the currently selected product.

If you have selected multiple Candle Management Servers, the Tables list box will show the collection status for the first selected CMS. A button labelled Next... will be visible which, if selected, updates the Tables list box with the status for the next selected CMS. You can continue to select the Next... button until you have displayed the status for each selected server.

Note: If you use this dialog to change your current configuration, the changes you make may not be immediately reflected in the Tables list box, since the request must be transmitted to and processed by each CMS. You may need to refresh the Tables list box after a few seconds by selecting the Display Configuration button before your changes become evident.

• Start default collection to begin historical data collection for those product tables defined as defaults. A confirmation message box pops up, giving you the option of cancelling your request. If you select Cancel, the Tables list box is updated to show those tables that have been designated as defaults.

Refer to "Disk Space Requirements for Historical Data Tables" on page 141 for information about the default historical tables for your installed Candle products.

• Stop all history collection to stop all historical data collection for the selected product on all selected Candle Management Servers.

• Start collection to begin collection for the tables that are currently selected.

Note: Historical information will not be recorded unless you press Start collection.

• Advanced configuration to display a dialog that permits you to specify the subset of a table's attributes that are to be collected. (By default, all of a table's attributes are collected.) You can also access the Advanced History Configuration Options dialog by double-clicking a table or tables displayed in the Select Table(s) box.

• Help to receive information about panel options.

• Quit to exit historical data collection. Selecting Quit stops the configuration program.


Defining Data Collection Rules

Overview
You can specify these historical data collection values:

• one or more Candle Management Servers you wish to configure for a product you will select from a pulldown menu. Servers must be online to be configured.

• the product for which historical data is to be collected

• the name of the group(s) or table(s) for which historical data is to be collected

• the collection interval

• the location where data is to be collected: either at the CMS or at the location where the agent is running

• how often data is to be rolled off to a warehouse

Selecting the target Candle Management Server(s)
On the CCC History Configuration dialog, the Select CMS target(s) field displays the identifier for the hub CMS and any Candle Management Servers attached to that hub. You can refresh the list of target Candle Management Servers by selecting the Rebuild CMS List pushbutton.

Selecting Rebuild CMS List refreshes the displayed list of available Candle Management Servers with any CMS started or stopped since the list was last displayed.

Figure 5. CMS Selection Portion of Dialog

Selecting a product
The pulldown menu in the Select a Product field shows all of the Candle products installed in your environment. From this pulldown list, select the product or application for which you want historical data collected.

Selecting Group(s)
In the Select Table(s) field, you can control whether the list box displays the actual table name or the Group name (the default) for each table. By clicking the appropriate button, you can view the list by Group name or by Table name. Depending on your selection, a list is displayed that contains the available groups or tables for which historical data can be collected.


For each entry in the list, the following are displayed (Figure 6):

• Group Name or Table Name: Name of the group or table for which historical data will be collected

• Collection Interval: Collection interval currently specified for the named group or table, or OFF

• Collection Location: Collection location currently specified for the named group or table

• Warehouse Interval: The frequency at which historical data is rolled off to your Candle data warehouse

• Filename: Name of the binary file to which raw historical data is written at each collection interval

Figure 6. Table or Group selection portion of dialog

Specifying collection options
Using the Table or Group selection portion of the dialog (Figure 6), you can specify the following collection options for historical data.

- Collection Interval: The interval at which historical data is collected. For example, specifying 5 causes historical data to be collected at the end of every 5-minute period. You can specify values of 5, 15, or 30 minutes, or 1 hour. Select Off to turn off collection for the selected CMS target(s) and associated product without affecting historical data collection on other Candle Management Servers or agents.

- Collect Data At: The location at which data is to be collected: either at the remote agent or at the CMS to which the agent is connected.

Note: If you use the Advanced Configuration button to provide a custom definition, and collection is started while that custom definition is in place, the history data is collected at the CMS regardless of the setting of the Collection Location radio button.

- Warehouse every: The frequency at which historical data is rolled off to your Candle data warehouse. If you do not want to warehouse your historical data, select Off.

- Filename: Name of the binary file to which raw historical data is written at each collection interval.

Note: Historical information will not be recorded unless you press Start collection.

Note: Warehousing data to an ODBC database is mutually exclusive with running data conversion programs on your historical data. If you choose to continue to run your data conversion scripts, select Off for the Warehouse every option.

Runtime Information

The message field at the bottom of the CCC History Configuration dialog can display status information pertaining to the current or most recently completed request.

Using the Advanced History Configuration Options Dialog

Overview

If you select Advanced configuration from the CCC History Configuration dialog, or if you double-click on a table or tables displayed in the Select Table(s) box, the Advanced History Configuration Options dialog displays. Use this dialog to select the attributes for which you want historical data to be collected.

Note: To avoid the corruption of historical data files, you must roll off and delete existing history data files and meta files prior to modifying the Advanced History Configuration options when storing history data at the CMS. See "Preventing Historical Data File Corruption" on page 50.

Figure 7. Advanced History Configuration Options dialog

Use the Add and Remove buttons to add attributes to or remove attributes from the Selected and Available Attributes lists respectively. Add All and Remove All move the entire contents of one list to the other. You can also double-click an attribute in one list to move it to the other.

To obtain a list of the attributes currently being collected, click Current settings. Reset deletes any customized attribute subset you may have created, so that the next time collection is started for the table the default (all attributes) is selected.

Page 45: Historical Data Collection Guide for IBM Tivoli OMEGAMON ...publib.boulder.ibm.com/tividd/td/ITCNetCC/GC32... · 12 Historical Data Collection Guide for IBM Tivoli OMEGAMON XE Products

Configuring Historical Data Collection on CMW 45

Using the Advanced History Configuration Options Dialog

When the Selected Attributes list is complete, select OK. This creates a local, custom configuration definition for the selected table that exists until the history configuration application terminates or you select the Reset button. This custom definition takes effect when historical data collection is next started for that table.

Every product other than CCC Logs requires that you specify at least the System_Name attribute as well as one other column.

Special considerations for CCC Logs

The CCC Logs, a group of enterprise information base (EIB) tables for which history is available, require that you specify the Global_Timestamp attribute and at least one other column. The collection interval and location, as well as the Warehouse interval, are fixed for the Status_History, EIB_Changes, Policy_Status, and System_Status logs, as follows:

- Collection Interval: once a day

- Collection Location: at the CMS

- Warehouse Interval: once a day

See the Candle Management Workstation User's Guide for additional information on the CCC Logs. See also the Candle Management Workstation Administrator's Guide for a detailed description of the Display Item attribute. This attribute is used to more easily differentiate situations. You can view the results in the Status History log.

Universal Agent history configuration

Generally, each product is shipped with a file that is installed into the CMW's SQLLIB directory. This file contains all of the definitions required by the Historical Data Collection Configuration program to start and stop historical data collection. Because the tables and attributes collected by Universal Agents are defined by you, this history definition file is not available to the CMW. For Universal Agents, history definitions are created dynamically from the agent's attribute file, which the CMW retrieves from the agent when the agent comes online. There are no default tables for Universal Agents.

If a new Universal Agent comes online after the Historical Data Collection Configuration application has started, you will need to restart this application before history collection can be configured for the new agent.

Warehousing Your Historical Data

Introduction

Several steps are required to warehouse your historical data to a supported relational database using ODBC, and several other considerations must also be addressed. This chapter provides guidance on warehousing historical data.

Note: This document describes using Version 360 of the Candle Warehouse Proxy Agent to warehouse your historical data.

Before you begin

Refer to Installing and Setting up OMEGAMON Platform and CandleNet Portal on Windows and UNIX for details on installing the database to which you will write historical data. That database must be installed before you can begin rolling off historical data to it.

Also, review "Configuring Historical Data Collection on CMW" on page 37 or "Configuring Historical Data Collection on CandleNet Portal" on page 29 for information about using the Historical Data Collection program on the appropriate user interface and using the history configuration dialogs.

Chapter Contents

Prerequisites to Warehousing Historical Data . . . 48
Configuring Your Warehouse . . . 49
Preventing Historical Data File Corruption . . . 50
Error Logging for Warehoused Data . . . 52


Prerequisites to Warehousing Historical Data

Overview

In order to use ODBC to warehouse historical data, your enterprise must first:

1. Install Microsoft SQL Server.

2. Define a user ID and password.

Important: In SQL Server, the user ID must be a member of the db_owner "Fixed Database Role", located in the Database/Roles menu. When the user ID is in db_owner, all of the Warehouse Proxy objects in the database have that user ID as the owner ID, and new tables and columns are correctly inserted into the database.

3. Use the Windows ODBC Administrator to add and configure a data source called Candle Data Warehouse. The data source must be named Candle Data Warehouse; no other name is acceptable. Configure the data source to point to the SQL Server that is to be used for warehousing historical data.

4. Start the Warehouse Proxy Agent on a Windows system in the network. Configure the Candle Data Warehouse ODBC data source on the same system.

You are now ready to use data warehousing.

Note: For mainframe products, in addition to configuring ODBC and SQL Server, you must set up historical data collection by defining Persistent Data Store (CT/PDS) datasets. You must also set up the required maintenance tasks to ensure the availability of these datasets. See "Maintaining the Persistent Data Store (CT/PDS)" on page 75.

Historical data collection can be configured to store data at the CMS, at the agents, or at any combination of the two. To ensure that history data is received from all sources, you must configure a common shared network protocol between the Warehouse Proxy agent and each component that sends history data to it (whether a CMS or an agent).

For example, you might have a CMS configured to use both IP and IP.PIPE. In addition, one agent might be configured with IP and a second agent with IP.PIPE. In this example, the Warehouse Proxy agent must be configured to use both IP and IP.PIPE.

About the Warehouse Proxy agent

The Warehouse Proxy agent uses ODBC to write the historical data to a supported relational database. Only one Warehouse Proxy agent can be configured and running in your enterprise at a time. This proxy agent can handle warehousing requests from all managed systems in the enterprise. The proxy agent should be connected to the hub CMS. We recommend installing the proxy agent, if possible, on the same machine on which the warehouse database resides.

See Installing and Setting up OMEGAMON Platform and CandleNet Portal on Windows and UNIX for details regarding installation of the Warehouse Proxy agent.

Configuring Your Warehouse

Overview

You use the history data collection configuration program in Candle Management Workstation (CMW) and in CandleNet Portal to specify how often data is rolled off to a relational database.

Naming of warehoused history tables

Warehoused history tables in the database have the same names as the group names of history tables. For example, Windows Servers history for group name NT_System is collected in a binary file named WTSYSTEM. Historical data in this file is warehoused to the database in a table named NT_System.

The following UNIX history tables are exceptions. The User and Disk groups are exported to database tables named UNIXUSER and UNIXDISK, because User and Disk are reserved words in SQL Server. Tables named UNIXUSER and UNIXDISK cannot be queried using MS/Query.
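The naming rules above, including the two UNIX exceptions, can be sketched as a small lookup. The helper function below is hypothetical and for illustration only; it is not part of the product:

```python
# SQL Server reserved words force renamed warehouse tables for these groups.
RESERVED_GROUP_NAMES = {"User": "UNIXUSER", "Disk": "UNIXDISK"}

def warehouse_table_name(group_name: str) -> str:
    """Return the warehouse table name for a history group name."""
    # Default rule: the warehouse table carries the group name unchanged.
    return RESERVED_GROUP_NAMES.get(group_name, group_name)

print(warehouse_table_name("NT_System"))  # -> NT_System
print(warehouse_table_name("Disk"))       # -> UNIXDISK
```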

Columns added to the warehouse database

Two columns are automatically added to the warehouse database:

- TMZDIFF. The time zone difference from Universal Time (GMT). This value is shown in seconds.

- WRITETIME. The CT timestamp when the record was written. This is a 16-character value in the format cyymmddhhmmssttt, where:

  - c = century (1 = 21st century)
  - yymmdd = year, month, day
  - hhmmssttt = hours, minutes, seconds, milliseconds
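As an illustration of the timestamp layout, the following Python sketch (not part of the product) decodes a WRITETIME value and applies TMZDIFF. The 1900 base for c = 0 and the sign convention for TMZDIFF (local time minus GMT) are assumptions not stated in this guide:

```python
from datetime import datetime, timedelta

def parse_writetime(ts: str) -> datetime:
    """Parse a 16-character CT timestamp of the form cyymmddhhmmssttt."""
    century = int(ts[0])                      # per the text: 1 = 21st century
    year = 1900 + 100 * century + int(ts[1:3])  # assumed base of 1900 for c = 0
    return datetime(year, int(ts[3:5]), int(ts[5:7]),
                    int(ts[7:9]), int(ts[9:11]), int(ts[11:13]),
                    int(ts[13:16]) * 1000)    # ttt milliseconds -> microseconds

def to_gmt(ts: str, tmzdiff_seconds: int) -> datetime:
    """Convert a local WRITETIME to GMT, assuming TMZDIFF = local - GMT."""
    return parse_writetime(ts) - timedelta(seconds=tmzdiff_seconds)

print(parse_writetime("1050615143000123"))  # -> 2005-06-15 14:30:00.123000
```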

Attributes formatting

Some attributes need to be formatted for display purposes; for example, floating point numbers that specify a certain number of precision digits to be printed to the right of the decimal point. These display formatting considerations are specified in product attribute files.

The Warehouse Database displays the correct attribute formatting only for those attributes that use integers with floating point number formats.

Logging successful exports of historical data

Every successful export of historical data is logged in Candle Data Warehouse in a table called WAREHOUSELOG. The WAREHOUSELOG contains information such as origin node, table to which the export occurred, number of rows exported, time the export took place, and so forth. You can query this table to learn about the status of your exported history data.

Preventing Historical Data File Corruption

Overview

Because history data storage on non-z/OS platforms uses flat files that are not indexed, corruption of historical data can occur. If history data is stored at either the agent or the CMS, it is important to roll off the existing history data files and meta files into text files. Then delete the history data files and meta files at the agent or at the CMS for the selected tables, to avoid corruption of the warehoused database tables. See "Converting History Files to Delimited Flat Files (Windows and OS/400)" on page 53.

Note: This situation does not apply to z/OS history data as this data is stored in the Persistent Data Store (CT/PDS) facility.

To avoid the corruption of historical data files, you must roll off and delete existing data files prior to:

- modifying the Advanced History Configuration options when storing history data at the CMS. See "Using the Advanced History Configuration Options Dialog" on page 44.

- upgrading an existing monitoring agent to a new release when storing history data at the agent. See Installing and Setting up OMEGAMON Platform and CandleNet Portal on Windows and UNIX for installation instructions.

Preventing corruption when storing data at the CMS

If you store historical data at the CMS, perform the following procedure before using the Advanced History Configuration options:

1. Save, roll off, or export the existing history data that is stored at the CMS for the selected table.

2. Delete the CMS history data files and meta files for the selected table only.

3. If you are warehousing the data, save or rename the existing database table, in case you want to retain the data for later use.

4. Using the SQL DROP command, delete the database table.

You may now make modifications to the Advanced History Configuration options.

Preventing corruption when storing data at the agent

If you store historical data at the monitoring agent, perform the following procedure before upgrading the agent to a new release. Perform it for any product tables that you can identify as having added new attributes; if you are unsure about newly added attributes, perform the procedure for all existing product history tables.

1. Save, roll off, or export the existing history data files that are stored at the agent.

2. Delete the agent history data and the meta files.

3. If you are warehousing the data, save or rename the existing database table, in case you want to retain the data for later use.

4. Using the SQL DROP command, remove the database table.

You may now proceed with the agent upgrade.

If your database is corrupted

If your database is corrupted, you can repair it using this procedure:

1. Stop the Warehouse Proxy agent.

2. Stop the collecting of historical data.

3. Delete the history data files and the meta files.

4. If you are warehousing the data, save or rename the existing database table, in case you want to retain the data for later use.

5. Using the SQL DROP command, delete the database table.

6. Return to the Historical Data Collection program, Advanced History Configuration option, and select your attributes. If you think you might want to add to the table later, select all of the attributes now. You can always go back and remove the attributes that you don't want; after you remove attributes, the table remains large enough for attributes you might want to add later.

Note: You cannot configure data collection for individual attributes from CandleNet Portal. If you want to exclude or include specific attributes in a group, you must configure collection from the CMW. See "Configuring Historical Data Collection on CMW" on page 37.

7. Start collecting data.

8. Restart the Warehouse Proxy agent.

The SQL Server recreates the database tables.

Error Logging for Warehoused Data

Viewing errors in the Event Log

Should an error occur during data rolloff, one or more entries are inserted into the Windows Application Event Log on the system where the Warehouse Proxy is running. To view the Application Event Log, start the Event Viewer by clicking Start > Programs > Administrative Tools > Event Viewer, and select Application from the Log pull-down menu. (On Windows XP, click Start > Control Panel > Administrative Tools > Event Viewer.)

Setting a trace option

You can turn on error tracing to capture additional error messages that can be helpful in detecting problems.

Activating the trace option

To activate the trace option:

1. Click Start > (All) Programs > Candle OMEGAMON XE > Manage Candle Services

2. Right-click Warehouse Proxy and select Advanced > Edit Trace Parms. The Trace Parameters for Warehouse Proxy dialog displays.

3. Select the RAS1 filters. The default setting is ERROR.

4. Enter the path and file name of the RAS1.log file that will contain the error messages for the warehouse proxy. For example:

c:\Candle\CMA\LOGS\khdRas1.log

where khd indicates the product code for the warehouse proxy.

5. Enter the KDC_DEBUG setting. None is the default.

Viewing the Trace Log

To view the trace log containing the error messages:

1. Select Start > Programs > Candle OMEGAMON XE > Manage Candle Services.

2. Right-click Warehouse Proxy and select Advanced > View Trace Log. The Log Viewer window displays the log file for the warehouse proxy agent.

Converting History Files to Delimited Flat Files (Windows and OS/400)

Introduction

Warehousing data to an ODBC database is mutually exclusive with running the file conversion programs described in this chapter. To use these conversion procedures, you must have specified Off for the Warehouse option on the History Configuration panel of the CMW or on the History Collection Configuration dialog of CandleNet Portal.

The history files collected using the rules established in the historical data collection configuration program can be converted to delimited flat files for use in a variety of popular applications, making it easy to manipulate the data and create reports and graphs. Use the LOGSPIN program or the Windows AT command to schedule file conversion automatically, or use the krarloff program to invoke file conversion manually. (The LOGSPIN program invokes krarloff when file conversion is scheduled automatically.) For best results, schedule conversion to run every day; this is especially important on OS/400.

Chapter Contents

Conversion Process . . . 54
Archiving Procedure using LOGSPIN . . . 55
Archiving Procedure using the Windows AT Command . . . 57
Converting Files Using krarloff . . . 58
AS/400 Considerations . . . 60
Location of the Windows Executables and Historical Data Collection Table Files . . . 61


Conversion Process

Overview

When setting up the process that converts the history files you have collected to delimited flat files, you can schedule the process automatically using the LOGSPIN program or the Windows AT command, or run it manually with the krarloff program. The LOGSPIN program invokes krarloff. Before deciding which method to use, see the Microsoft Windows library for full details on the security implications of running a program such as LOGSPIN versus entering the Windows AT command.

Important: Candle recommends running history file conversion every 24 hours.

Archiving Procedure using LOGSPIN

Overview

To convert historical data files on Windows Candle Management Servers and remote managed systems, follow these steps. The parameters for each logfile entry are described in "Logfile parameters" on page 56.

1. Create a text file with each entry corresponding to the history table file to be converted. The text file must be located on each managed system on which data conversion is performed. The format of each line of the text file is:

logfile {SIZE=nnn | TIME=hh:mm} [HEADER=(Y/N) DELIM=c OUTPUT=filestem RFILE=tempname KEEP=(Y/N)]

The parameters in brackets are optional and the parameters in braces are required.

2. To start archiving historical data on the remote managed system, enter the following at the command prompt:

LOGSPIN filename [archpathname]

or

start LOGSPIN filename [archpathname]

where:

- filename is the name of the text file described above and is required.

- archpathname is the name of the path where the archive program is located. This parameter is optional; the default is to use the Windows search sequence.

Note: Entering the start LOGSPIN command automatically opens an additional window and runs the command in the background.

3. To stop archiving historical data on the remote managed system, enter the following at the command prompt:

LOGSPIN STOP

Logfile parameters

The table below describes the parameters that correspond to the krarloff program defaults.

Table 2. Logfile parameter values

logfile: Name of the historical table to be converted/archived.

SIZE: Archive the file at six-hour intervals if it exceeds nnnK bytes. The SIZE and TIME parameters are mutually exclusive.

TIME: Archive the file once a day at the time specified in the format hh:mm. The SIZE and TIME parameters are mutually exclusive.

HEADER: Specify Y to include a descriptive header in the archived file. The default is N.

DELIM: Character to be used as a column delimiter. The default is a TAB character.

OUTPUT: Output filename for archived files. The suffix BK0 through BK6 is appended to each file, with BK0 representing the latest archive and BK6 the earliest. If no output filename is specified, the default is the first part of the log filename for an (8.3) filename, or the first 32 characters for a long filename.

RFILE: Intermediate filename used by the LOGSPIN program. The default is the first part of the log filename followed by .TMP for an (8.3) filename, or the first 32 characters followed by .TMP for a long filename.

KEEP: Specify Y to keep the intermediate file. The default is N.
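Putting the format and defaults together, a single hypothetical entry in the text file might look like the following (the table name wtsystem and the output stem system are examples only, not names shipped with the product). This entry archives the wtsystem table daily at 23:30 into comma-delimited files with a header line:

```
wtsystem TIME=23:30 HEADER=Y DELIM=, OUTPUT=system KEEP=N
```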

Archiving Procedure using the Windows AT Command

Overview

To archive historical data files on Windows Candle Management Servers and on remote managed systems using the AT command, use the following procedure. To see the format of the command, enter AT /? at the MS-DOS command prompt.

1. In order for the AT command to function, you must start the Task Scheduler service. To start it, select Settings > Control Panel > Administrative Tools > Services. Result: The Services window displays.

2. At the Services window, select Task Scheduler, change the service Start Type to Automatic, and click Start.

Result: The Task Scheduler service is started.

An example of using the AT command to archive the history files is as follows:

AT 23:30 /every:M,T,W,Th,F,S,Su c:\sentinel\cms\archive.bat

In this example, Windows executes the archive.bat file located in c:\sentinel\cms every day at 11:30 p.m. An example of the contents of archive.bat is:

krarloff -o memory.txt wtmemory

krarloff -o physdsk.txt wtphysdsk

krarloff -o process.txt wtprocess

krarloff -o system.txt wtsystem

Converting Files Using krarloff

Overview

When initiated by LOGSPIN, the krarloff program makes an intermediate copy of the captured binary history file. This copy is processed while history data continues to be collected in the emptied original file. History file conversion can occur whether or not the CMS or the agent is running. You can also initiate krarloff manually, as described below.

The krarloff program can be run either at the CMS or at the agent, from the directory in which the history files are stored. See "Location of the Windows Executables and Historical Data Collection Table Files" on page 61.

Parameters for the krarloff program are described in "krarloff Parameters" on page 59.

Attributes formatting

Some attributes need to be formatted for display purposes; for example, floating point numbers that specify a certain number of precision digits to be printed to the right of the decimal point. These display formatting considerations are specified in product attribute files.

When you use krarloff to roll off historical data into a text file, any attributes that require format specifiers as indicated in the attribute file are ignored. Only the raw number is seen in the rolled off history text file. Thus, instead of displaying 45.99% or 45.99, the number 4599 appears.

The Warehouse Proxy agent does use the product attribute files to display the correct attribute formatting. However, the Candle Warehouse Database displays the correct attribute formatting only for those attributes that use integers with floating point number formats. See �Warehousing Your Historical Data� on page 47.

Using krarloff on Windows

Run the krarloff command from the directory in which the CMS or the agent runs by entering the following at the command prompt:

krarloff [-h] [-d delimiter] [-g] [-m meta-file] [-r rename-to-file] [-o output-file] {-s source | source-filename}

where the square brackets denote the optional parameters, and the curly braces denote a required parameter.

Note: The command is on a single line when typed.
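For example, a hypothetical invocation that converts the binary history file wtsystem into a comma-delimited text file with a header line might look like this (the file names are examples only):

```
krarloff -h -d , -o wtsystem.txt -s wtsystem
```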

Using krarloff on OS/400

Run the krarloff command on OS/400 from the directory in which the CMS runs by entering the following at the command prompt:

call qautomon/krarloff parm (['-h'] ['-g'] ['-d' 'delimiter'] ['-m' meta-file] ['-r' rename-source-file-to] ['-o' output-file] {'-s' source-file | source-file})

where the square brackets denote the optional parameters, and the curly braces denote a required parameter.

If you run krarloff from an OS/400 in the directory in which the agent is running, replace qautomon with the name of the executable for your agent. For example, the MQ agent would use kmqlib in the command string.

Note: The command is on a single line when typed.

krarloff Parameters

Table 3. krarloff parameters

-h (default: off): Controls the presence or absence of the header in the output file. If present, the header is printed as the first line and identifies the attribute column names.

-d (default: tab): Delimiter used to separate fields in the output text file. Valid values are any single character (for example, a comma).

-g (default: off): Controls the presence or absence of the product group_name in the header of the output file. Add -g to the krarloff invocation line to include group_name.attribute_name in the header.

-m (default: source-file.hdr): Meta-file that describes the format of the data in the source file. If no meta-file is specified on the command line, the default filename is used.

-r (default: source-file.old): Rename-to-filename parameter used to rename the source file. If the renaming operation fails, the program waits two seconds and retries the operation.

-o (default: source-file.nnn, where nnn is the Julian day): Output filename. The name of the file containing the output text.

-s (default: none): Required parameter. Source binary history file that contains the data to be read. Within the curly braces, the vertical bar (|) denotes that you can either use the "-s source" option or specify a name with no option, which is treated as the source filename. No default is assumed for the source file.
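As an illustration of the default -o value, the following Python sketch (not part of the product) builds the default output name, assuming that "Julian day" here means the day of the year padded to three digits; that padding is an assumption:

```python
from datetime import date

def default_output_name(source_file: str, when: date) -> str:
    # Default -o value: source-file.nnn, where nnn is the day of the year.
    return f"{source_file}.{when.timetuple().tm_yday:03d}"

print(default_output_name("wtsystem", date(2005, 6, 15)))  # -> wtsystem.166
```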

Page 60: Historical Data Collection Guide for IBM Tivoli OMEGAMON ...publib.boulder.ibm.com/tividd/td/ITCNetCC/GC32... · 12 Historical Data Collection Guide for IBM Tivoli OMEGAMON XE Products

AS/400 Considerations


Where is the historical data stored on the AS/400?
User data is stored in QUSRSYS. For each table, two files stored on OS/400 are associated with historical data collection. For example, if you are collecting data for the system status attributes, these two files are KA4SYSTS and KA4SYSTSM. The former contains the binary data output by the OMA. The latter is the metafile, a file with a single row that contains the names of the columns. The contents of both files can be displayed using DSPPFM.
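For example, to browse the contents of the metafile for the system status table from an OS/400 command line (file names as in the example above, with the QUSRSYS library):

```
DSPPFM FILE(QUSRSYS/KA4SYSTSM)
```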

What happens after krarloff is run?
Continuing the system status example above, after krarloff runs, file KA4SYSTS becomes KA4SYSTSO. A new KA4SYSTS file is generated when the next row of data becomes available.

KA4SYSTSM remains untouched.

KA4SYSTSH is the file output by krarloff; it contains the data in delimited flat file format. This file can be transferred from the AS/400 to a workstation by means of a file transfer program such as FTP.

Location of the Windows Executables and Historical Data Collection Table Files

Location of Windows executables
Executables are located as follows:

- \candle\cms directory on the CMS, where candle is the directory in which the CMS was installed

- \candle\cma directory on the remote managed systems, where candle is the directory in which the agents were installed

Note: The krarloff conversion program must be located in the same directory as the LOGSPIN.EXE program.

Location of Windows historical data table files
If you run the CMS and agents as processes or as services, the historical data table files are located in the following directories:

- \candle\cms directory on the CMS, where candle is the directory in which the CMS was installed

- \candle\cma\logs directory on the remote managed systems, where candle is the directory in which the agents were installed

Location of history configuration files on Windows
The history configuration files are located in \candle\cms\sqllib.


Converting History Files to Delimited Flat Files (z/OS) 63

Chapter 7. Converting History Files to Delimited Flat Files (z/OS)

Introduction
The history files collected by the rules established in the Historical Data Collection Configuration program, or by your definitions related to historical data collection during product installation, can be converted to delimited flat files automatically as part of your persistent data store maintenance procedures (see "Maintaining the Persistent Data Store (CT/PDS)" on page 75), or manually using a MODIFY command. You can use the delimited flat file as input to a variety of popular applications to easily manipulate the data and create reports and graphs.

Data that has been warehoused cannot be extracted since the warehoused data is deleted from the persistent data store. To use these conversion procedures, you must have specified Off for the Warehouse option on the History Configuration panel for the CMW and on the History Collection Configuration dialog for CandleNet Portal.

Chapter Contents
Automatic Conversion and Archiving Process . . . 64
Location of the z/OS Executables and Historical Data Table Files . . . 67
Manual Archiving Procedure . . . 68



Automatic Conversion and Archiving Process


Overview
When you customized your OMEGAMON environment, you were given the opportunity to specify the EXTRACT option for maintenance. Specifying the EXTRACT option ensures that the process to convert and archive information stored in your history data tables is scheduled automatically; no further action on your part is required. As applications write historical data to the history data tables, the persistent data store detects when a given data set is full, launches the KPDXTRA process to copy the data set, and notifies the Candle Management Server (CMS) that the data set can once again be used to receive historical information. Additional information about the persistent data store can be found in "Maintaining the Persistent Data Store (CT/PDS)" on page 75.

As an alternative to the automatic scheduling of conversion, you can manually issue the command to convert the historical data files. Information about manually converting your files is found in "Manual Archiving Procedure" on page 68.

Converting Files Using KPDXTRA
The conversion program, KPDXTRA, is called by the persistent data store maintenance procedures when the EXTRACT option is specified for maintenance. This program reads a dataset containing the collected historical data and writes out two files for every table that has data collected for it. The processing of this data does not interfere with the continuous collection being performed. Because the process is automatic, a brief overview of the use of KPDXTRA is provided here. For full information about KPDXTRA, review the sample JCL distributed with your OMEGAMON XE product. The sample JCL is found as part of the sample job KPDXTRA contained in the sample libraries RKANSAM and TKANSAM.

Attributes formatting
Some attributes must be formatted for display purposes; for example, floating-point values whose attribute definitions specify a certain number of precision digits to the right of the decimal point. These display formatting considerations are specified in product attribute files.

When you use KPDXTRA to roll off historical data into a text file, any attributes that require format specifiers as indicated in the attribute file are ignored. Only the raw number appears in the rolled-off history text file. Thus, instead of displaying 45.99% or 45.99, the number 4599 appears.
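As a sketch of how such raw values can be rescaled during post-processing, the following assumes an attribute with two implied decimal places; check the scale specified in your product's attribute file before applying any divisor:

```shell
# 4599 in the extracted text file represents 45.99; dividing by 100
# restores the two implied decimal places.
echo "4599" | awk '{ printf "%.2f\n", $1 / 100 }'
# prints 45.99
```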

About KPDXTRA
KPDXTRA runs in the batch environment as part of the maintenance procedures. It accepts a parameter that allows the default column separator to be changed. The z/OS JCL syntax for executing this command is:

// EXEC PGM=KPDXTRA,PARM='PREF=dsn-prefix [DELIM=xx] [NOFF]'


Several files must be allocated for this job to run.

In version 3.0.0 and later, all datasets are kept in a read/write state even when they are not active. This makes the datasets unavailable to other jobs while the CMS is running: jobs cannot be run against the active datasets, and the inactive datasets must first be taken offline. You can dynamically remove a dataset from the CMS by issuing the modify command:

F stcname,KPDCMD QUIESCE FILE=DSN:dataset

If you must run a utility program against an active data store, issue a SWITCH command prior to issuing this QUIESCE command.

DDnames required to be allocated for KPDXTRA
The following is a summary of the DDnames that must be allocated for KPDXTRA. Refer to the sample JCL in the sample libraries distributed with the product for additional information.

KPDXTRA parameters
Table 5, which follows the DDnames in Table 4, specifies the KPDXTRA parameters.

Table 4. DD Names Required

RKPDOUT KPDXTRA log messages

RKPDLOG Persistent data store (CT/PDS) messages

RKPDIN Table definition commands file (input to CT/PDS subtask) as set up by CICAT

RKPDIN1 CT/PDS file from which data is to be extracted

RKPDIN2 Optional control file defined as a DUMMY DD statement
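As a sketch only, a job step combining the EXEC statement above with these DDnames might look like the following. The dataset names, member name, and group qualifier are placeholders; use the KPDXTRA sample job in RKANSAM as the authoritative model:

```
//EXTRACT  EXEC PGM=KPDXTRA,PARM='PREF=CCCHIST.PDSGROUP DELIM=6B'
//STEPLIB  DD DISP=SHR,DSN=hilev.midlev.RKANMOD
//RKPDOUT  DD SYSOUT=*
//RKPDLOG  DD SYSOUT=*
//RKPDIN   DD DISP=SHR,DSN=hilev.midlev.RKANPAR(PDSTABLE)
//RKPDIN1  DD DISP=SHR,DSN=hilev.midlev.GENHIST.PDS1
//RKPDIN2  DD DUMMY
```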

Table 5. KPDXTRA parameters (columns: Parameter, Default Value, Description)

PREF= none Required parameter. Identifies the high level qualifier where the output files will be written.

DELIM= tab Specifies the separator character to use between columns in the output file. The default is a tab character, X'05'. To specify some other character, specify the two-digit hexadecimal representation of that character. For example, to use a comma, specify DELIM=6B.

QUOTE NQUOTE Optional parameter that puts double quotation marks around all character-type fields and removes trailing blanks from the output. This makes the output of KPDXTRA identical in format to the output generated by the distributed krarloff program.


Table 5. KPDXTRA parameters (continued)

NOFF off When NOFF is specified, the separate file (header file) that contains the format of the tables is not created; instead, the header information is included as the first line of the data file created by the extract operation. When NOFF is omitted, the header file is created in addition to the data file. The header information shows the format of the extracted data.

KPDXTRA messages
These messages can be found in the RKPDOUT sysout logs created by the execution of the maintenance procedures:

Persistent datastore Extract program KPDXTRA - Version V130.00
Using output file name prefix: CCCHIST.PDSGROUP
The following characters will be used to delimit output file tokens:
  Column values in data file.............: 0x05
  Parenthesized list items in format file: 0x6b
Note: Input control file not found; all persistent data will be extracted.
Table(s) defined in persistent datastore file CCCHIST.PDSGROUP.PDS#1:
  Appl.      Table      Extract
  Name       Name       Status
  ---------- ---------- --------
  PDSSTATS   PDSCOMM    Excluded
  PDSSTATS   PDSDEMO    Included
  PDSSTATS   PDSLOG     Included
  PDSSTATS   TABSTATS   Included
Checking availability of data in data store file:
  No data found for Appl: PDSSTATS Table: PDSDEMO. Table excluded.
  No data found for Appl: PDSSTATS Table: TABSTATS. Table excluded.
The following 1 table(s) will be extracted:
  Appl.      Table      No.    Oldest                Newest
  Name       Name       Rows   Row                   Row
  ---------- ---------- ------ --------------------- ---------------------
  PDSSTATS   PDSLOG     431    1997/01/10 05:51:20   1997/02/04 02:17:54
Starting extract operation.
Starting extract of PDSSTATS.PDSLOG.
  The output data file, CCCHIST.PDSGROUP.D70204.PDSLOG, does not exist; it will be created.
  The output format file, CCCHIST.PDSGROUP.F70204.PDSLOG, does not exist; it will be created.
Extract completed for PDSSTATS.PDSLOG. 431 data rows retrieved, 431 written.
Extract operation completed.


Location of the z/OS Executables and Historical Data Table Files

Location of z/OS executables
Executables are located in the &hilev.&midlev.RKANMOD or &hilev.&midlev.TKANMOD library, where:

- &hilev is the library in which the CMS was installed

- &midlev is the name you have provided at installation time.

Location of z/OS historical data table files
The historical data files created by the extraction program are located in the following library structure:

- &hilev.&midlev.&dsnlolev.tablename.D

- &hilev.&midlev.&dsnlolev.tablename.H

where:

- &hilev qualifier is the library in which the CMS was installed.

- &midlev is the name you have provided at installation time.

- &dsnlolev is the low-level qualifier of the dataset names as set by the configuration tool.

- tablename can be up to 10 characters. When the tablename is greater than 8 characters, the tablename portion of the dataset contains the first 8 characters followed by a period, with the remaining characters of the name appended.

Datasets with a name ending with "D" represent data output. Datasets with a name ending with "H" represent header or format output.

Manual Archiving Procedure

Converting historical files manually
To manually convert historical data files on the z/OS CMS and on the remote managed systems, issue the following MODIFY command:

F stcname,KPDCMD SWITCH GROUP=cccccccc EXTRACT

where:

- stcname identifies the name of the started task that is running either the CMS or MVS agents.

- cccccccc identifies the group name associated with the persistent data store allocations. The values for cccccccc may vary based on which products are installed. The standard group name is GENHIST.

When this command is executed, only the tables associated with the group identifier are extracted. If multiple products are installed, each can be controlled by separate SWITCH commands.
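For example, assuming a started task named CANSDSST and a second product group named QMHIST (both names hypothetical), each group would be extracted with its own command:

```
F CANSDSST,KPDCMD SWITCH GROUP=GENHIST EXTRACT
F CANSDSST,KPDCMD SWITCH GROUP=QMHIST EXTRACT
```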

This switching can be automated by using either an installation scheduling facility or an automation product.

You can also use the OMEGAMON Platform advanced automation features to execute the SWITCH command. To do so, define a situation that, when it becomes true, executes the SWITCH command as the action.


Converting History Files to Delimited Flat Files (UNIX Systems) 69

Chapter 8. Converting History Files to Delimited Flat Files (UNIX Systems)

Introduction
Data that has been warehoused cannot be extracted, since the warehoused data is deleted from the persistent data store. To use these conversion procedures, you must have specified Off for the Warehouse option on the History Configuration panel for the CMW and on the History Collection Configuration dialog for CandleNet Portal.

This chapter explains how the UNIX CandleHistory script is used to convert the saved historical data contained in the history data files to delimited flat files. You can use the delimited flat files in a variety of popular applications to easily manipulate the data to create reports and graphs.

The procedure described in this chapter empties the history accumulation files, and must be performed periodically so that the history files do not take up needless amounts of disk space.

Chapter Contents
Understanding History Data Conversion . . . 70
Performing the History Data Conversion . . . 71


Understanding History Data Conversion

Overview
In the UNIX environment, you use the CandleHistory script to activate and customize the conversion procedure that turns selected binary historical data tables into a form usable by other software products. The historical data that is collected is in binary format and must be converted to ASCII before it can be used by third-party products. Each binary file is converted independently. The historical data collected by the Candle Management Server (CMS) may be at the host location of the CMS or at the location of the reporting agent. Conversion can be run at any time, whether or not the CMS or agents are active.

Conversion applies to all history data collected under the current CANDLEHOME associated with a single CMS server, whether the data was written by the CMS or by a remote agent.

Additional information about CandleHistory can be found in the online help. When you enter CandleHistory -h at the command line, the following usage output displays:

CandleHistory [ -h CANDLEHOME ] -C [ -L nnn[Kb|Mb] ] [ -t masks*,etc ] [ -D delim ] [ -H|+H ] [ -N n ] [ -p cms_name ] prod_code

CandleHistory -A?

CandleHistory [ -h CANDLEHOME ] -A perday|0 [ -W days ] [ -L nnn[Kb|Mb] ] [ -t masks*,etc ] [ -D delim ] [ -H|+H ] [ -N n ][ -i instance|-p cms_name ] prod_code

Note: Certain parameters are required. The pipe symbol separating items denotes mutual exclusivity (for example, Kb|Mb means enter either Kb or Mb, not both). Enter the entire command as a single line at the UNIX command prompt.

The parameters used with the script are documented in "History conversion parameters" on page 72.


Performing the History Data Conversion

Overview
The CandleHistory script schedules the conversion of historical data to delimited flat files. Both the manual process to perform a one-time conversion and the conversion script that permits you to schedule automatic conversions are documented below.

Important: The CandleHistory script must be executed from CANDLEHOME/bin.

After the conversion has taken place, the resulting delimited flat file has the same name as the input history file with an extension that is a single numerical digit. For example, if the input history file table name is KOSTABLE, the converted file will be named KOSTABLE.0. The next conversion will be named KOSTABLE.1, and so on.
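The converted files are ordinary delimited text, so standard tools can post-process them. The sketch below fabricates a tiny KOSTABLE.0 (tab-delimited, header first; the file name and contents are invented for illustration) and re-delimits it with commas for spreadsheet import:

```shell
# Create a stand-in for a converted history file: tab-delimited, with the
# attribute names as the first (header) line.
printf 'Server_Name\tBusy_Pct\nSYSA\t4599\n' > KOSTABLE.0

# Replace the tab delimiter with commas to produce a CSV.
tr '\t' ',' < KOSTABLE.0 > KOSTABLE.csv
cat KOSTABLE.csv
```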

Performing a one-time conversion
To perform a one-time conversion, type the following at the command prompt:

./CandleHistory -C prod_code

Scheduling basic automatic history conversions
Use CandleHistory to schedule automatic conversions via the UNIX cron facility. To schedule a basic automatic conversion, type the following at the command prompt:

./CandleHistory -A n prod_code

where n is a number from 1 to 24. This number specifies the number of times per day the data conversion program runs, rounded up to the nearest divisor of 24. The product code is required as well.

For example,

CandleHistory -A 7 ux

means run history conversion every three hours, because 7 is rounded up to 8 runs per day.


Customizing your history conversion
You can use the CandleHistory script to further customize your history conversion by specifying additional options. For example, you can choose to convert only files that are above a particular size limit that you have set. You can also choose to perform the history conversion on particular days of the week.

The table that follows describes all of the history conversion parameters.

Table 6. History conversion parameters

-C Identifies this as an immediate one-time conversion call. Required.

-A n Identifies this as a scheduled history conversion call. Automatically runs the conversion the specified number of times per day; absence of -A means run the conversion now. n must be 1-24, the number of runs per day, rounded up to the nearest divisor of 24. For example, -A 7 means run every three hours.

-A 0 Cancels all automatic runs for the tables specified.

-A ? Lists automatic collection status for all tables.

-W Day of the week (0=Sunday, 1=Monday, etc.). Can be a comma-delimited list of numbers or ranges thereof. For example, -W 1,3-5 means Monday, Wednesday, Thursday, and Friday. The default is Monday through Saturday (1-6).

-H Exclude column headers. By default, headers containing the attribute names are included.

+H Include group (long table) names in column headers. The format is "Group_desc.Attribute". The default is the attribute name only.

-L Only converts files whose size is over a specified number of Kb/Mb (suffix can be any of none, K, Kb, M, Mb with none defaulting to Kb).

-h Override for the value of $CANDLEHOME

-t List of tables or mask patterns delimited by commas, colons, or blanks. If the pattern has embedded blanks, it must be surrounded with quotes.

-D Output delimiter to use. The default is the tab character. Quote or escape a blank: -D " "

-N Keep generation 0-n of output (default 9).

-i instance For agent instances (those not using the default queue manager). Directs the program to process historical data collected by the specified agent instance. For example, -i qm1 specifies the instance named "qm1".

-p cms_name Directs the program to process historical data collected by the specified CMS instead of the agent.

Note: A product code of ms must be used with this option. The default action is to process data collected by prod_code agent.

prod_code Two-character product code of the product from which historical data is to be converted. Refer to Installing and Setting up OMEGAMON Platform and CandleNet Portal on Windows and UNIX, for product codes.
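Combining several of these options, a hypothetical customized call (product code ux, as in the earlier example; all values illustrative) might be:

```
./CandleHistory -A 4 -W 1-5 -L 500Kb -D , +H ux
```

This would run the conversion four times a day on weekdays only, convert only files larger than 500 Kb, use a comma as the output delimiter, and include group names in the column headers.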


Converting History Files to Delimited Flat Files (HP NonStop Kernel Systems) 73

Chapter 9. Converting History Files to Delimited Flat Files (HP NonStop Kernel Systems)

Introduction
If you selected the option to warehouse data to an ODBC database, that option is mutually exclusive with running the file conversion programs described in this chapter. To use these conversion procedures, you must have specified Off for the Warehouse option on the History Configuration panel for the CMW and on the History Collection Configuration dialog for CandleNet Portal.

The history files collected using the rules established in the HDC Configuration program can be converted to delimited flat files for use in a variety of popular applications to easily manipulate the data and create reports and graphs. Use the krarloff program to manually invoke file conversion. For best results, you should schedule conversion to run every day.

Support is provided for OMEGAMON XE for WebSphere MQ Configuration and for OMEGAMON XE for WebSphere MQ Monitoring running on the HP NonStop Kernel operating system (formerly Tandem). For information specific to OMEGAMON XE for WebSphere MQ Monitoring relating to historical data collection, see the Customizing Monitoring Options topic found in your version of the product documentation.

Chapter Contents
Conversion Process . . . 74


Conversion Process

Overview
When setting up the process that converts the history files you have collected into delimited flat files, you can run the process manually with the krarloff program. Parameters for the krarloff program are described in "krarloff Parameters" on page 59.

Important: Candle recommends running history file conversion every 24 hours.

Using krarloff on HP NonStop Kernel
The history files are kept on the DATA subvolume, under the default <$VOL>.CCMQDAT. However, the location of the history files depends on where you start the monitoring agent. If you started the monitoring agent using STRMQA from the CCMQDAT subvolume, the files are stored on CCMQDAT.

You can run krarloff from the DATA subvolume by entering the following:

RUN <$VOL>.CCMQEXE.KRARLOFF <parameters>

Note that CCMQDAT and CCMQEXE are defaults. During the installation process, you can assign your own names for these subvolumes.

For a table listing the krarloff parameters, see "krarloff Parameters" on page 59.

Attributes formatting
Some attributes must be formatted for display purposes; for example, floating-point values whose attribute definitions specify a certain number of precision digits to the right of the decimal point. These display formatting considerations are specified in product attribute files.

When you use krarloff to roll off historical data into a text file, any attributes that require format specifiers as indicated in the attribute file are ignored. Only the raw number appears in the rolled-off history text file. Thus, instead of displaying 45.99% or 45.99, the number 4599 appears.


Maintaining the Persistent Data Store (CT/PDS) 75

Appendix A. Maintaining the Persistent Data Store (CT/PDS)

Introduction
The persistent data store (CT/PDS) runs in the same address space as the Candle Management Server (CMS). It provides the ability to record and retrieve tabular relational data on a 24 by 7 basis while maintaining indexes on the recorded data. This appendix describes the procedures you use to maintain the CT/PDS.

See the configuration documentation for your product for instructions on configuring the persistent datastore.

Note: For applications configured to run in the CMS address space, the Configure persistent data store step in the CMS product configuration is required. This step applies to z/OS-based products and to non-z/OS-based products that enable historical data collection in this z/OS CMS. Any started task associated with a product (including the CMS address space itself) that is running prior to configuring the CT/PDS must be stopped.

Chapter Contents
About the Persistent Data Store . . . 76
Components of the CT/PDS . . . 77
Overview of the Automatic Maintenance Process . . . 79
Making Archived Data Available . . . 82
Exporting and Restoring Persistent Data . . . 85
Data Record Format of Exported Data . . . 87
Extracting CT/PDS Data to Flat Files . . . 91
Command Interface . . . 94


About the Persistent Data Store

Overview
The persistent data store (CT/PDS) is used for writing and retrieving historical data. The program is the server portion of a client/server application. The client code either provides data to be inserted into relational tables or makes requests to retrieve the data. The CT/PDS acts as a subset of a database management system that is concerned only with the physical level of recording and retrieving data.

The data being written to the persistent data store is organized by tables, groups, and datasets. Each table is assigned to a group, and a group can have one or more datasets assigned to it. Normally, three datasets are assigned to each group. Because groups can have multiple tables assigned to them, it is not necessary to have a dataset for each table defined to the system. The assignment of tables, groups, and datasets is defined during configuration of your product. See the product configuration documentation for details.

The CMS provides automatic maintenance for the datasets in the CT/PDS. Two procedures and one CLIST, located in &rhilev.&midlev.RKANSAM, provide the maintenance. Their default names are:

- KPDPROC1

- KPDPROCC

- KPDPROC2

If you changed prefix KPDPROC during the configuration process, the suffixes remain 1, C, and 2, respectively. See "Overview of the Automatic Maintenance Process" on page 79.

User ID when running the CT/PDS procedures
The CT/PDS procedures run with the user ID of the person who installed the product.


Components of the CT/PDS

Overview
The components described below make up the CT/PDS.

- KPDMANE

This is the primary executable program. It is a server for other applications running in the same address space. This program is designed to run inside the Engine address space as a separate subtask. Although it is capable of running inside the Engine, it does not make any use of Engine services, because the KPDMANE program is also used in other utility programs that are intended to run in batch mode. This is the program that eventually starts the maintenance task when it performs a switch and determines that no empty datasets are available.

- KPDUTIL

This program is used primarily to initialize one or more datasets for CT/PDS use. The program simply attaches a subtask and starts the KPDMANE program in it. The DD statements used when this program is run dictate what control files are executed by the KPDMANE program.

- KPDARCH

This program acts as a client CT/PDS program that pulls data from the specified dataset and writes it out to a flat file. The program attaches a subtask and starts up the KPDMANE program in it. The output data is still in an internal format, with all the index information excluded.

- KPDREST

This program acts as a client CT/PDS program that reads data created by the KPDARCH program and inserts it back into a dataset in the proper format so that the CT/PDS can use it. This includes the re-building of index information. The program attaches a subtask and starts the KPDMANE program in it.

- KPDXTRA

This is a client CT/PDS program that pulls data from a dataset and writes it to one or more flat files with all column data converted to EBCDIC and separated by tabs. This extracted data can easily be loaded into a DBMS or into spreadsheet programs such as Excel. As with the other client programs, a subtask is attached and the KPDMANE program is loaded and executed in that environment. See "Extracting CT/PDS Data to Flat Files" on page 91.

- KPDDSCO

This program communicates with the started task that is running the CT/PDS and sends it commands to be executed. The typical command is the RESUME command, which tells the CT/PDS that it can once again use a dataset. This program is capable of using two forms of communication. The older version acts as a client application to the CMS; this mode uses SNA to connect to the server and submit the command requests. The later version uses an SVC 34 to issue a modify command to the proper started task. A secondary function of this program is to log information in a general log maintained in the CT/PDS tables.

Operation of the CT/PDS

The KPDMANE program invokes maintenance automatically in two places. The first is on startup, when it is reading and processing every dataset it knows about. It looks at internal data to determine if the dataset is in a known and stable state. If not, it issues a RECOVER command. The second is when it is recording information from applications onto an active dataset for a group. If it detects that it is running out of room on a write operation, it executes the SWITCH command internally.

- RECOVER Logic

This code puts the dataset into a quiesce state and closes the file. Information is set up to request an ARCHIVE, INIT, and RESTORE operation to be performed by the maintenance procedures. An SVC 34 is issued for a START command on KPDPROC1 (or its overridden name). The command exits to the caller with the dataset unusable until a RESUME command is executed.

- SWITCH Logic

The SWITCH command looks at all of the datasets assigned to the group and finds an empty one. Note that if no empty datasets are available, future attempts to write data to any dataset in the group will fail. Normally, an empty dataset will be found and it will be marked as the active dataset.

A test is made on the dataset being deactivated (because it is full) to see if the EXTRACT option was specified. If so, the EXTRACT command for the dataset is executed.

The next test is to check if there are any empty datasets in the current group. If not, the code finds the dataset with the oldest data and marks it for maintenance. With the latest release of the CT/PDS, the code checks to see if any of the maintenance options BACKUP, EXPORT, or EXTRACT were specified for this dataset. If not, the INITDS command is executed. Otherwise, the BACKUP command is executed.
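The switch-time decision described above can be sketched as follows. The function is purely illustrative of the logic (it is not part of the product), with flags corresponding to the Y/N maintenance options set through the configuration tool:

```python
def maintenance_command(backup: bool, export: bool, extract: bool) -> str:
    """Sketch of the SWITCH-time decision: if no maintenance option was
    specified for the oldest dataset, it is simply re-initialized in place
    (INITDS); otherwise the BACKUP path, which starts the KPDPROC1
    maintenance procedure, is taken."""
    if not (backup or export or extract):
        return "INITDS"
    return "BACKUP"

print(maintenance_command(False, False, False))  # INITDS
print(maintenance_command(True, False, False))   # BACKUP
```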

- BACKUP Logic

This code puts the dataset in a quiesce state and closes it. A test is made to see whether the user specified either BACKUP or EXPORT for the dataset, and the appropriate options are set for the started task. The options always include a request to initialize the dataset. An SVC 34 is issued to start the KPDPROC1 procedure. The code returns to the caller with the dataset unavailable until the RESUME command is executed.

- EXTRACT Logic

This is similar to the BACKUP logic, except the only option specified is for an EXTRACT run with no initialization performed on the dataset.

- RESUME Logic

This code opens the specified dataset name and verifies that it is valid. The dataset is taken out of the quiesce state and made once again available for activation during the next SWITCH operation.


Overview of the Automatic Maintenance Process

Overview

When a dataset becomes full, the CT/PDS selects an empty dataset to make it active. Once active, the CT/PDS checks to see if there are any more empty datasets. If there are no more empty datasets, maintenance is started on the oldest dataset, and data recording is suspended.

Prior to launching the KPDPROC1 process, the CT/PDS checks to see if either the BACKUP function or the EXPORT function has been specified. If neither function has been specified, then the dataset is initialized within the CT/PDS started task and KPDPROC1 is not executed.

The maintenance process consists of three files that are generated and tailored by the Configuration tool and invoked by the persistent data store. The files are:

- KPDPROC1

KPDPROC1 is a procedure that is started with an MVS START command. Limited information is passed to this started task, which it uses to drive a CLIST in a TSO environment. The configuration tool creates this file and puts it into the RKANSAM library for each runtime environment (RTE) that has a CT/PDS component. This procedure must be copied to a system-level procedure library so that the command issued to start it can be found.

The parameters passed to KPDPROC1 vary based on the version of the configuration tool and the CT/PDS. This document assumes the latest version is installed. There are three parameters passed to the started task. They are:

- HILEV

This is the high level qualifier for the RTE that configured this version of the CT/PDS. It is obtained by extracting information from the DD statement that points to the CT/PDS control files.

- LOWLEV

This is the low level qualifier for the sample library. It currently contains the RKANSAM field name.

- DATASET

The fully qualified name of the dataset being maintained. It is possible to have a dataset name that does not match the high level qualifier specified in the first parameter.

- KPDPROCC

KPDPROCC is the CLIST that is executed by the KPDPROC1 procedure. The CLIST has the task of obtaining all of the information needed to perform the maintenance and of submitting a job to execute the desired maintenance.


- KPDPROC2

KPDPROC2 is the actual JOB that gets executed to save the data and to initialize the dataset so that the CT/PDS can use it again. This procedure:

- backs up the data

- deletes the dataset

- allocates a new dataset with the same parameters as before

- makes the new dataset available for reading and writing

The configuration tool allows the user to pick the first seven characters of the maintenance procedure names. KPDPROC is the default if the user does not modify it.

What part of maintenance do you control?

Most of the CT/PDS maintenance procedure is automatic and does not require your attention. Through the configuration tool, you have already specified the EXTRACT, BACKUP, and EXPORT options by indicating a Y or N for each dataset group. See "Command Interface" on page 94 for descriptions of additional commands that are used primarily for maintenance.

- BACKUP makes an exact copy of the dataset being maintained.

- EXPORT writes the data to a flat file in an internal format that can be used by external programs to post-process the data. This format is also used for recovery purposes when the CT/PDS detects potential problems with the data.

- EXTRACT writes the data to a flat file in human-readable form, which is suitable for loading into other DBMS systems.

If none of the maintenance options are specified, the data within the dataset being maintained is erased.

You can indicate whether to:

- back up the data for each dataset group

- back up the data to tape or to DASD for all dataset groups

Indicating dataset backup to tape or to DASD

For all dataset groups that you selected to back up, you must indicate whether you want to back up the data to tape or to DASD. This decision will apply to all datasets.

Table 7. Determining the medium for dataset backup

If you are backing up datasets to...   THEN...
tape                                   use KPDPROC2 as shipped
DASD                                   follow the procedure below


Backing up datasets to DASD

Use this procedure to modify KPDPROC2:

1. Access the procedure in &rhilev.&midlev.RKANSAM(KPDPROC2) with any editor.

2. Remove the comment characters from the step that backs up datasets to DASD and insert comment characters in the step that backs up datasets to tape.

3. Save the procedure.

4. Copy procedure KPDPROC2 to your system procedure library, usually SYS1.PROCLIB.

Naming the export datasets

When you choose to export data, you are requesting to write data to a sequential dataset. The names of all exported datasets follow the format

&rhilev.&midlev.&dsnlolev.Annnnnnn

where:

- &rhilev is the high-level qualifier of all datasets in the CT/PDS, as you specified in the CICAT

- &midlev is the mid-level qualifier of all datasets in the CT/PDS, as you specified in the CICAT

- &dsnlolev is the low-level qualifier of the dataset names as set by the CICAT

- A is a required character

- nnnnnnn is a sequential number
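The naming rule above can be expressed as a short helper. This sketch is illustrative only, and the qualifier values in the example are hypothetical:

```python
def export_dataset_name(rhilev: str, midlev: str, dsnlolev: str, seq: int) -> str:
    """Build an export dataset name of the form
    &rhilev.&midlev.&dsnlolev.Annnnnnn, where A is the required
    character and nnnnnnn is a 7-digit sequential number."""
    return f"{rhilev}.{midlev}.{dsnlolev}.A{seq:07d}"

print(export_dataset_name("SYS1", "OMXE", "GENHIST", 12))
# SYS1.OMXE.GENHIST.A0000012
```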


Making Archived Data Available

Overview

This topic shows you how to make data available to those products that use the CT/PDS after the data has been backed up to DASD or to tape.

To make the data available you will dynamically restore a connection between an archived dataset and the CMS.

When the automatic maintenance facility backs up a dataset in the persistent data store, it performs the following activities:

- disconnects the dataset from the CMS

- copies the dataset to tape or DASD in a format readable by the CMS

- deletes and reallocates the dataset

- reconnects the empty dataset to the CMS

To view archived data from the product, you must ensure that the dataset is stored on an accessible DASD volume and reconnect the dataset to the CMS.

Dataset naming conventions

When the maintenance facility backs up a dataset, it uses the following format to name the dataset:

&rhilev.&midlev.&dsnlolev.Bnnnnnnn

where:

- &rhilev is the high-level qualifier of all datasets in the CT/PDS, as you specified during configuration

- &midlev is the mid-level qualifier of all datasets in the CT/PDS, as you specified during configuration

- &dsnlolev is the low-level qualifier of the dataset names as set by the configuration tool

- B is a required character

- nnnnnnn is a sequential number


Prerequisites

Before you begin to restore the connection between the archived dataset and the CMS, you will need the following information:

- the name of the archived dataset that contains the data you want to view. Your systems programmer can help you locate the name of the dataset.

- the name of the CT/PDS group that corresponds to the data you want to view

Finding background information

You can use the Installation and Configuration Assistance Tool to find the name of the CT/PDS group to which the archived dataset belongs by following this procedure:

1. Stop the CMS if it is running.

2. Log onto a TSO session and invoke ISPF.

3. At the ISPF Primary Option menu, enter 6 in the Option field to access the TSO command mode.

4. At the TSO command prompt, type:

EX 'shilev.INSTLIB'

where shilev is the high-level qualifier of the configuration tool installation library at your site.

The configuration tool first displays the copyright panel and then the Main Menu.

5. From the Main Menu, select Configure products and then Select product to configure.

6. From the Product Selection Menu, select the product.

7. On the Runtime Environments (RTEs) panel, specify C to select the RTE where the product you configured resides.

8. On the Configure product panel, select Configure persistent data store and then Modify and review data store specifications.

9. Locate the low-level qualifier of the dataset you want to reconnect and note the corresponding group name.

10. Press F3 until you exit the Configuration tool.

Connecting the dataset to the CMS

To reconnect the archived dataset to the CMS so you can view the data from the product, follow this procedure:

1. If the dataset resides on tape, use a utility such as IEBGENER to copy the dataset to a DASD volume that is accessible by the CMS.

2. Copy job KPDCOMMJ from &hilev.TKANSAM to &rhilev.&midlev.RKANSAM.

3. Access job &rhilev.&midlev.RKANSAM(KPDCOMMJ) with any editor.


4. Substitute site-specific values for the variables in the job, as described in the comments at the beginning of the job. In addition to the comments in the job, you may find the following information helpful:

- Variable &GROUP on the COMM ADDFILE statement is the group name that you identified in "Finding background information" on page 83.

- Variable &PDSN on the COMM ADDFILE statement is the name of the dataset you want to reconnect.

5. Locate the COMM ADDFILE statement near the bottom of the job and remove the comment character (*).

6. Submit KPDCOMMJ to restore the connection between the dataset you specified and the CMS.

7. To verify that the job ran successfully, you can view a report in RKPDLOG that lists all the persistent data store datasets that are connected to the CMS. RKPDLOG is the ddname of a SYSOUT file allocated to the CMS.

Locate the last ADDFILE statement in the log and examine the list of datasets that follows the statement. If the job ran successfully, the name of the dataset you reconnected will appear in the list.

Disconnecting the dataset

The dataset that you connected to the CMS is not permanently connected. The connection will automatically be removed the next time the CMS terminates. If you wish to remove the dataset from the CT/PDS immediately after you view the data, follow this procedure:

1. Access job &rhilev.&midlev.RKANSAM(KPDCOMMJ) with any editor.

2. Retain all site-specific values that you entered when you modified the job to reconnect the dataset in the previous procedure.

3. Locate the COMM ADDFILE statement near the bottom of the job and perform the following steps, if needed:

A. Remove the comment character from the statement, if one exists.
B. Overtype the word ADDFILE with the word DELFILE.
C. Remove the Group parameter together with its value.
D. Remove the RO parameter if it exists.

4. Submit KPDCOMMJ to remove the connection between the dataset and the CMS.

To verify that the job ran successfully, you can view a report in RKPDLOG that lists all datasets connected to the CMS.

Locate the last DELFILE statement in the log and examine the list of datasets that follows the statement. If the job ran successfully, the name of the dataset you disconnected will not appear in the list.

5. If the dataset resides on tape, you may want to conserve space by deleting the dataset from DASD.


Exporting and Restoring Persistent Data

Overview

In addition to the standard maintenance jobs used by the persistent data store, there are sample jobs distributed with the CMS that you can use to export data to a sequential file and then restore the data to the original indexed format.

These jobs are not tailored by the configuration tool at installation time and must be modified to add pertinent information.

Exporting persistent data

Follow this procedure to export persistent data to a sequential file:

1. Stop the CMS if it is running.

2. Copy &thilev.&midlev.RKANSAM(KPDEXPTJ).

3. Update the jobcard with the following values:

&rhilev   high-level qualifier of the runtime environment where the CT/PDS resides
&pdsn     fully qualified name of the CT/PDS dataset to be exported
&expdsn   fully qualified name of the export file you are creating
&unit2    DASD unit identifier for &expdsn
&ssz      record length of output file (You can use the same record length as defined for &pdsn.)
&sct      count of blocks to allocate (You can use the same size as the blocks allocated for &pdsn.)
&bsz      &ssz value plus eight

With the exception of &pdsn, these values can be found in the PDSLOG SYSOUT of the CMS started task.

4. Submit the job.


Restoring exported data

Follow this procedure to restore a previously exported CT/PDS dataset.

1. Copy &thilev.&midlev.RKANSAM(KPDRESTJ).

2. Update the jobcard with the following values:

&rhilev   high-level qualifier of the runtime environment where the CT/PDS resides
&pdsn     fully qualified name of the CT/PDS dataset to be restored
&expdsn   fully qualified name of the file you are creating
&unit2    DASD unit identifier for &expdsn
&group    identifier for the group that the dataset will belong to
&siz      size of the dataset to be allocated, in megabytes

With the exception of &pdsn, these values can be found in the PDSLOG SYSOUT of the CMS started task.

3. Submit the job.


Data Record Format of Exported Data

Overview

This section describes the format of the dictionary entries but not their contents. The actual meaning of the tables and columns is product-specific.

Due to the nature of the data being recorded, the format of a dataset is complex. A single dataset contains descriptions for every table that was recorded in the original dataset, so mapping information in the form of a data dictionary is provided for every table. In many cases, the tables can have variable-length columns as well as rows of data in which some of the columns are not available. The information about missing columns and the lengths of variable columns is embedded in the data records. Some tables have columns that physically overlay each other; this must be taken into account when trying to obtain data for these overlays.

Data in the exported file is kept in internal format, which means that many of the fields are in binary. The output file is made up of three sections, with one or more data rows within each.

- Section 1 describes general information about the data source used to create the exported data.

- Section 2 contains a dictionary needed to map out the data.

- Section 3 contains the actual data rows.

The historical data is maintained in relational tables, therefore the dictionary mappings provide table and column information for every table that had data recorded for it in the CT/PDS.
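Once the record identifiers defined in the tables that follow are known (AA10, DD10, DD20, DD30, ROW1), a reader can dispatch on the first four characters of each record. A minimal sketch, with record framing simplified to strings and the function names purely illustrative:

```python
# Map the 4-character record IDs used in an exported file to the kind of
# record they introduce (Section 1 header, dictionary records, data rows).
# In a real file the IDs are EBCDIC bytes; strings are used here for brevity.
HANDLERS = {
    "AA10": "section 1 header",
    "DD10": "dictionary header",
    "DD20": "table description",
    "DD30": "column description",
    "ROW1": "data row",
}

def classify(records):
    """Classify each record by its leading 4-character record ID."""
    return [HANDLERS.get(rec[:4], "unknown") for rec in records]

print(classify(["AA10....", "DD10....", "ROW1...."]))
# ['section 1 header', 'dictionary header', 'data row']
```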

Section 1

The Section 1 record is not needed to map out the data within the exported file. However, it is useful for determining how to re-allocate a dataset when a CT/PDS file needs to be reconstructed.

Section 1 contains a single data row used to describe information about the source of the data recorded in the export file. The data layout for the record is:

Table 8. Section 1 Data Record Format

Field            Offset  Length  Type    Description
RecID            0       4       Char    Record ID. Contains AA10 for header record 1.
Length           4       4       Binary  Contains the record length of the header record.
Timestamp        8       16      Char    Timestamp of export. Format: CYYMMDDHHMMSSMMM
Group            24      8       Char    Group name to which the data belongs.
Data Store Ver   32      8       Char    Version of KPDMANE used to record original data.
Export Version   40      8       Char    Version of KPDARCH used to create exported file.
Total Slots      48      4       Binary  Number of blocks allocated in original dataset.
Used Slots       52      4       Binary  Number of used blocks at time of export.
Slot Size        56      4       Binary  Block size of original dataset.
Expansion Area   60      20      ---     Unused area.
Data Store Path  80      256     Char    Name of originating dataset.
Export Path      336     256     Char    Name of exported dataset.

Section 2 Records

Section 2 provides information about the tables and columns that are represented in Section 3. This section has a header record followed by a number of table and column description records.

Dictionary Header Record

This is the first Section 2 record (and therefore the second record in the dataset). It provides general information about the format of the dictionary records that follow and describes how many tables are defined in the dictionary section. The data layout for the dictionary header record is:

Table 9. Section 2 Data Record Format

Field           Offset  Length  Type    Description
RecID           0       4       Char    Record ID. Contains DD10 for header record 2.
Dictionary Len  4       4       Binary  Contains the length of the entire dictionary.
Header Len      8       4       Binary  Length of the header record.
Table Count     12      4       Binary  Number of tables in dictionary (1 record per table).
Column Count    16      4       Binary  Total number of columns described.
Table Row Len   20      4       Binary  Size of table row.
Col Row Len     24      4       Binary  Size of column row.
Expansion       28      28      ---     Unused area.

Table description record

Each table within the exported dataset has a table record that provides its name, identifier, and additional information about the columns. All table records are provided before the first column record. The column records and all of the data records in Section 3 use the identifier number to associate them with the appropriate table.


The map length and variable column count fields can be used to determine exactly where the data for each column starts and to properly determine if the column exists in a record. The format of the table description record is described in the table that follows.

Column description record

One record exists for every column in the associated table record. Each record provides the column name, type, and other characteristics. The order of the column rows is the same order in which the columns appear in the output row. However, some columns may be missing on any given row. The mapping structure defined under Section 3 must be used to determine if a column is present.

The format of the column records is:

Table 10. Section 2 Table Description Record

Field           Offset  Length  Type    Description
RecID           0       4       Char    Record ID. Contains DD20 for table record.
Identifier Num  4       4       Binary  Unique number for this table.
Application     8       8       Char    Application name table belongs to.
Table Name      16      10      Char    Table name.
Table Version   26      8       Char    Table version.
Map Length      34      2       Binary  Length of the mapping area.
Column Count    16      4       Binary  Count of columns in the table.
Variable Cols   36      4       Binary  Count of variable name columns.
Row Count       40      4       Binary  Number of rows in exported file for this table.
Oldest Row      44      16      Char    Timestamp for oldest row written for this table.
Newest Row      64      16      Char    Timestamp for newest row written for this table.
Expansion       80      16      ---     Unused area.

Table 11. Section 2 Column Description Record

Field            Offset  Length  Type    Description
RecID            0       4       Char    Record ID. Contains DD30 for column record.
Table Ident      4       4       Binary  Identifier for the table this column belongs to.
Column Name      8       10      Char    Column name.
SQL Type         18      2       Char    SQL type for column.
Column Length    20      4       Binary  Maximum length of this column's data.
Flag             24      1       Binary  Flag byte.
Spare            25      1       ---     Unused.
Overlay Col ID   26      2       Char    Column number if this is an overlay.
Overlay Col Off  28      2       Char    Offset into row for start of overlay column.
Alignment        30      2       ---     Unused.
Spare 1          32      8       ---     Unused.

Section 3 records

Section 3 has one record for every row of every table that was in the original CT/PDS dataset being exported. Each row starts with a fixed portion followed by the actual data associated with the row. The length of the column map can be obtained from the table record (DD20). Each bit in the map represents one column: a 0 in a bit position indicates that the column data is not present, while a 1 indicates that data exists in this row for the column. Immediately following the column map field is an unaligned set of 2-byte length fields; one of these length fields exists for every variable-length column in the table. This mapping information must be used to determine where the starting location for any given column is within the data structure. The actual data starts immediately after the last length field.

If dealing with overlay columns, use the column offset defined in the DD30 records to determine the starting location for this type of column. Typically, you do not need to worry about overlaid columns with extracted data. If you have a real need to look at the actual content of an overlaid column, you will need to expand the data by re-inserting any missing columns and expanding all variable-length columns to the maximum length before doing the mapping.

The table that follows maps the fixed portion of the data.

Table 12. Section 3 Record Format

Field        Offset  Length  Type    Description
RecID        0       4       Char    Record ID. Contains ROW1 for a data row record.
Table Ident  4       4       Binary  Identifier for the table this record belongs to.
Row Length   8       4       Binary  Total length of this row.
Data Offset  12      4       Binary  Offset to start of data.
Data Length  16      4       Binary  Length of data portion of row.
Column Map   20      Varies  Binary  Column availability map plus variable length fields.
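The column-availability map described above can be decoded as in the sketch below. The bit order (most-significant bit first within each byte) is an assumption consistent with typical z/OS bit maps, and the function name is illustrative:

```python
def present_columns(column_map: bytes, column_count: int) -> list[int]:
    """Return the indices of the columns present in a Section 3 row.
    Bit i of the column map (MSB-first within each byte, an assumption)
    is 1 when column i has data in this row."""
    present = []
    for i in range(column_count):
        byte = column_map[i // 8]
        if byte & (0x80 >> (i % 8)):
            present.append(i)
    return present

# Example: map byte 0b10100000 means columns 0 and 2 carry data.
print(present_columns(bytes([0b10100000]), 5))  # [0, 2]
```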


Extracting CT/PDS Data to Flat Files

Overview

This topic explains how to extract data from a CT/PDS dataset into a flat file in EBCDIC format. This information can be loaded into spreadsheets or databases.

The format of the data is converted to tab-delimited columns. The data is written to a separate file for each table, so the data format for all rows in each dataset is consistent. The program also generates a separate format file that contains a single row providing the column names in the order in which the data is organized. This file is also delimited for ease of use. An option (NOFF) on the KPDXTRA program bypasses creating the separate file and places the column information as the first record of the data file.

This job is not tailored by the configuration tool at installation time and must be modified to add pertinent information.

The output from this job is written to files with the following naming standard:

&pref.xymmdd.tablename

where:

- &pref is the high-level qualifier that you designate for the output files

- x is D for data output or F for format output

- ymmdd is the year (y), month (mm), and day (dd) on which the KPDXTRA job is run

- tablename is the identifier for the table being extracted. It is recommended that this name be no more than eight characters.

If this job is run more than once on a given day, data is appended to any data previously extracted for that day.
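The naming convention above can be sketched as follows. The helper is illustrative, the qualifier and table names in the example are hypothetical, and the single-digit year follows the ymmdd description above:

```python
from datetime import date

def extract_file_name(pref: str, kind: str, run_date: date, tablename: str) -> str:
    """Build an extract output name &pref.xymmdd.tablename, where kind is
    'D' (data) or 'F' (format) and ymmdd uses the single-digit year,
    two-digit month, and two-digit day of the run date."""
    ymmdd = f"{run_date.year % 10}{run_date.month:02d}{run_date.day:02d}"
    return f"{pref}.{kind}{ymmdd}.{tablename}"

print(extract_file_name("OMEXTR", "D", date(2005, 6, 9), "QMDATA"))
# OMEXTR.D50609.QMDATA
```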

In Version 300 and later, all datasets are kept in read/write state even when they are not active. This makes the datasets unavailable to other jobs while the CMS is running: jobs cannot be run against the active datasets, and the inactive datasets must first be taken offline.

You can dynamically remove a dataset from the CMS by issuing the modify command:

F stcname,KPDCMD QUIESCE FILE=DSN:dataset

If you must run a utility program against an active data store, issue a SWITCH command prior to issuing this QUIESCE command.


Extracting CT/PDS data to EBCDIC files

Use this job to extract CT/PDS data to EBCDIC files.

1. Copy &thilev.&midlev.RKANSAM(KPDXTRAJ).

2. Update the jobcard with the following values:

&rhilev   high-level qualifier of the runtime environment where the CT/PDS resides
&pdsn     fully qualified name of the CT/PDS dataset to be extracted
&pref     high-level qualifier for the extracted data

3. Add the parameters you want to use for this job:

PREF=     identifies the high-level qualifier for the output file. This field is required.
DELIM=nn  identifies the separator character to be placed between columns. The default is 05.
NOFF=     if used, causes the format file not to be generated. The column names will be placed into the data file as the first record.
QUOTES    use to place quotes around character type of data.

4. Submit the job.

Extracted data format

Header Record

The following is a sample extract header file record:

TMZDIFF(int,0,4) WRITETIME(char,1,16) ORIGINNODE(char,2,128) QMNAME(char,3,48) APPLID(char,4,12) APPLTYPE(int,5,4) SDATE_TIME(char,6,16) HOST_NAME(char,7,48) CNTTRANPGM(int,8,4) MSGSPUT(int,9,4) MSGSREAD(int,10,4) MSGSBROWSD(int,11,4) INSIZEAVG(int,12,4) OUTSIZEAVG(int,13,4) AVGMQTIME(int,14,4) AVGAPPTIME(int,15,4) COUNTOFQS(int,16,4) AVGMQGTIME(int,17,4) AVGMQPTIME(int,18,4) DEFSTATE(int,19,4) INT_TIME(int,20,4) INT_TIMEC(char,21,8) CNTTASKID(int,22,4) SAMPLES(int,23,4) INTERVAL(int,24,4)

Each field is separated by a tab character (by default). The data consists of the column name with a type, column number, and column length field within the parentheses for each column. The information within parentheses is used primarily to describe the internal formatting information, and therefore can be ignored.

Data Record

Each record in the data file for the above header contains data that looks like the following:

0 "1000104003057000" "MQM7:SYSG:MQESA" "MQM7" "XCXS2DPL" 2 "1000104003057434" "SYSG" 1 0 0 0 0 0 2 90007 0 2 0 1 96056 "016: 01" 1 1 900


Using the header file, the fields in the data file match up as follows:

TMZDIFF     0                     Integer
WRITETIME   "1000104003057000 "   Character
ORIGINNODE  "MQM7:SYSG:MQESA "    Character
QMNAME      "MQM7 "               Character
...
SAMPLES     1                     Integer
INTERVAL    900                   Integer
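Once the extract files are available in a readable code page, this pairing can be reproduced with a short script. The following Python sketch is illustrative only (it is not part of the product tooling); the header and data strings are abbreviated versions of the samples in this section, and the tab separator is the documented default.

```python
import re

# Abbreviated header and data records from the samples above,
# tab-separated (the documented default delimiter).
header = ('TMZDIFF(int,0,4)\tWRITETIME(char,1,16)\tORIGINNODE(char,2,128)\t'
          'QMNAME(char,3,48)\tSAMPLES(int,23,4)\tINTERVAL(int,24,4)')
data = '0\t"1000104003057000"\t"MQM7:SYSG:MQESA"\t"MQM7"\t1\t900'

# Each header field is NAME(type,colnum,len); the parenthesized part
# describes internal formatting and can be discarded.
names = [re.match(r'(\w+)\(', field).group(1) for field in header.split('\t')]
values = [v.strip('"') for v in data.split('\t')]

record = dict(zip(names, values))
print(record['QMNAME'])    # MQM7
print(record['INTERVAL'])  # 900
```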

Command Interface

Overview

The CT/PDS uses a command interface to perform many of the tasks needed to maintain the datasets used for historical data. Most of these commands can be invoked externally through a command interface supported in the Engine environment. These commands can be executed using the standard MVS MODIFY interface with the following format:

F stcname,KPDCMD command arguments

where

stcname Started task name of address space where the CT/PDS is running.

command One of the supported dynamic commands.

arguments Valid arguments to the specified command.
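For illustration, assuming a hypothetical started task name of CANSDSST and a hypothetical file group ID of QMDATA (both names differ in a real installation), commands are entered at the MVS console in this form:

```
F CANSDSST,KPDCMD QUERY CONNECT ACTIVE
F CANSDSST,KPDCMD SWITCH GROUP=QMDATA EXTRACT
F CANSDSST,KPDCMD COMMIT
```

The individual commands and their arguments are described in the sections that follow.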

Commands

Many commands are supported by the CT/PDS. The commands described below are used primarily for maintenance.

SWITCH command

This dynamic command causes a data store file switch for a specific file group. At any given time, update-type operations against tables in a particular group are directed to one and only one of the files in the group. That one file is called the "active" file. A file switch changes the active file for a group. In other words, the switch causes a file other than the currently active one to become the new active file.

If the group specified by this command has only one file, or the group currently has no inactive file that is eligible for output, the switch is not performed.

At the conclusion of a switch, CT/PDS starts the maintenance process for a file in the group if no empty files remain in the group.

The [NO]EXTRACT keyword may be used to force or suppress an extract job for the data store file deactivated by the switch.

Syntax:

SWITCH GROUP=groupid [ EXTRACT | NOEXTRACT ]

where

groupid Specifies the id of the file group that is to be switched. The group must have multiple files assigned to it.

EXTRACT: Specifies that the deactivated data store file should be extracted, even if the file's GROUP statement did not request extraction.


NOEXTRACT: Specifies that extraction should not be performed for the deactivated data store file. This option overrides the EXTRACT keyword of the GROUP statement.

Note that if neither EXTRACT nor NOEXTRACT is specified, the presence or absence of the EXTRACT keyword on the file's GROUP statement determines whether extraction is performed as part of the switch.

BACKUP command

This command causes a maintenance task to be started for the data store file named on the command. The maintenance task typically deletes, allocates, and initializes a data store file, optionally backing up or exporting the file before deleting it. (The optional export and backup steps are requested via parameters on the data store file's GROUP command in the RKPDIN file.)

Syntax:

BACKUP FILE=DSN:dsname

where

dsname: Specifies the physical dataset name of the file that is to be maintained.

ADDFILE command

This command is used to dynamically assign a new physical data store file to an existing file group. The command can be issued any time after CT/PDS initialization has completed in the CMS. It can be used to increase the number of files assigned to a group or to bring old data back online. It cannot, however, be used to define a new file group ID. It may be used to add files only to groups that already exist as the result of GROUP commands in the RKPDIN input file.

Syntax:

ADDFILE GROUP=groupid FILE=DSN:dsname [ RO ] [ BACKUP ] [ ARCHIVE ]

where

groupid: Specifies the unique group ID of the file group to which a file is to be added.

dsname: Specifies the fully qualified name (no quotes) of the physical dataset that is to be added to the group specified by groupid.

RO: Specifies that the file is to be read-only (that is, that no new data may be recorded to it). By default, files are not read-only (that is, they are modifiable). This parameter may also be specified as READONLY.

BACKUP: Specifies that the file is to be copied to disk or tape before being reallocated by the automatic maintenance task. (Whether the copy is made to disk or tape is a maintenance process customization option.) By default, files are not backed up during maintenance.

ARCHIVE: Specifies that the file is to be exported before being reallocated by the automatic maintenance task. By default, files are not exported during maintenance.

DELFILE command

This command is used to drop one physical data store file from a file group's queue of files. It can be issued any time after CT/PDS initialization has completed in the CMS.

The file to be dropped can be full, partially full, or empty; it cannot be the "active" (output) file for its group (if it is, the DELFILE command is rejected as invalid).

The DELFILE command is conceptually the opposite of the ADDFILE command, and is intended to be used to manually drop a file that was originally introduced by a GROUP or ADDFILE command. Once a file has been dropped by DELFILE, it is no longer allocated to the CMS task and may be allocated by other tasks. Note that DELFILE does not physically delete a file or alter it in any way. To physically delete and uncatalog a file, use the REMOVE command.

Syntax:

DELFILE FILE=DSN:dsname

where

dsname: Specifies the fully qualified name (without quotes) of the file that is to be dropped.

EXTRACT command

This command causes an extract job to be started for the data store file named on the command. The job converts the table data in the data store file to delimited text format in new files, then signals the originating CMS to resume use of the data store file.

For each table extracted from the data store file, two new files are created: one contains the converted data, and one contains a record describing the format of each row in the first file.

Syntax:

EXTRACT FILE=DSN:dsname

where

dsname: Specifies the physical dataset name of the file to have its data extracted.

INITDS command

This command forces a data store file to be initialized within the address space where the CT/PDS is running.

Syntax:

INITDS FILE=DSN:dsname

where

dsname: Identifies the data set name of the data store file to be initialized.

RECOVER command

This command causes a recovery task to be started for the data store file named on the command. The recovery task attempts to repair a corrupted data store file by exporting it, reallocating and initializing it, and restoring it. The restore operation rebuilds the index information, the data most likely to be corrupted in a damaged file. The recovery is not guaranteed to succeed, however; some severe forms of data corruption are unrecoverable.

Syntax:

RECOVER FILE=DSN:dsname

where

dsname: Specifies the physical name of the dataset to be recovered.

RESUME command

The RESUME command notifies the CT/PDS that it can once again make use of the dataset specified in the arguments. The file identified must be one that was taken offline by the BACKUP, RECOVER, or EXTRACT commands.

Syntax:

RESUME FILE=DSN:dsname

where

dsname: Specifies the physical name of the dataset to be brought online.

Other Useful Commands

QUERY CONNECT command

The QUERY CONNECT command displays a list of applications and tables that are currently defined in the CT/PDS. The output of this command shows the application names, table names, total number of rows recorded for each table, the group each table belongs to, and the current dataset that the data is being written to.

Syntax:

QUERY CONNECT <ACTIVE>

where

ACTIVE - Optional parameter that only displays those tables that are active. An active table is one that has been defined and assigned to an existing group, and the group has datasets assigned to it.


QUERY DATASTORE command

The QUERY DATASTORE command displays a list of datasets known to the CT/PDS. For each dataset, the total number of allocated blocks, the number of used blocks, the number of tables that have data recorded, the block size, and the status are displayed.

Syntax:

QUERY DATASTORE <FILE=DSN:datasetname>

where

FILE - Optional parameter that allows you to specify that you are only interested in the details for a single dataset. When this option is used, the resulting display is changed to show information that is specific to the tables being recorded in the dataset.

COMMIT command

This dynamic command flushes to disk all pending buffered data. For performance reasons, the CT/PDS does not immediately write to disk every update to a persistent table. Updates are buffered in virtual storage, and the buffered updates are eventually "flushed" (that is, written to disk) at an optimal time. However, this architecture makes it possible for persistent data store files to become "corrupted" (invalid) if the files are closed prematurely, before pending buffered updates have been flushed. Such premature closings may leave inconsistent information in the files.

The known circumstances that may cause corruption are:

• Severe abnormal CMS terminations that prevent the CT/PDS recovery routines from executing

• IPLs performed without first stopping the CMS

The COMMIT command is intended to limit the exposure to data store file corruption. Some applications automatically issue this command after inserting data.

Syntax:

COMMIT
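The buffering behavior that COMMIT guards against can be pictured with an analogy from ordinary buffered file I/O. The Python sketch below illustrates only the general pattern, not the CT/PDS implementation: writes accumulate in a memory buffer and reach disk only when a flush (the COMMIT-like step) occurs.

```python
import os
import tempfile

# Open a file with a large write buffer so small writes stay in memory.
path = os.path.join(tempfile.mkdtemp(), "datastore.tmp")
f = open(path, "w", buffering=65536)
f.write("pending update\n")        # buffered in memory, not yet on disk

with open(path) as r:
    before = r.read()              # file on disk is still empty

f.flush()                          # the COMMIT-like step: force buffered data to disk
with open(path) as r:
    after = r.read()
f.close()

print(before == "")                # True: the update existed only in the buffer
print(after.strip())               # pending update
```

If the process ended abnormally before the flush, the on-disk file would be missing the buffered update, which is the same exposure the COMMIT command limits for data store files.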

Support Information

If you have a problem with your IBM software, you want to resolve it quickly. This section describes the following options for obtaining support for IBM software products:

• "Searching knowledge bases" on page 99

• "Obtaining fixes" on page 100

• "Receiving weekly support updates" on page 100

• "Contacting IBM Software Support" on page 101

Searching knowledge bases

You can search the available knowledge bases to determine whether your problem was already encountered and is already documented.

Searching the information center

IBM provides extensive documentation that can be installed on your local computer or on an intranet server. You can use the search function of this information center to query conceptual information, instructions for completing tasks, and reference information.

Searching the Internet

If you cannot find an answer to your question in the information center, search the Internet for the latest, most complete information that might help you resolve your problem.

To search multiple Internet resources for your product, use the Web search topic in your information center. In the navigation frame, click Troubleshooting and support > Searching knowledge bases and select Web search. From this topic, you can search a variety of resources, including the following:

• IBM technotes

• IBM downloads

• IBM Redbooks®

• IBM developerWorks®

• Forums and newsgroups

• Google



Obtaining fixes

A product fix might be available to resolve your problem. To determine what fixes are available for your IBM software product, follow these steps:

1. Go to the IBM Software Support Web site at http://www.ibm.com/software/support.

2. Click Downloads and drivers in the Support topics section.

3. Select the Software category.

4. Select a product in the Sub-category list.

5. In the Find downloads and drivers by product section, select one software category from the Category list.

6. Select one product from the Sub-category list.

7. Type more search terms in the Search within results if you want to refine your search.

8. Click Search.

9. From the list of downloads returned by your search, click the name of a fix to read the description of the fix and to optionally download the fix. For more information about the types of fixes that are available, see the IBM Software Support Handbook at http://techsupport.services.ibm.com/guides/handbook.html.

Receiving weekly support updates

To receive weekly e-mail notifications about fixes and other software support news, follow these steps:

1. Go to the IBM Software Support Web site at http://www.ibm.com/software/support.

2. Click My Support in the upper right corner of the page.

3. If you have already registered for My Support, sign in and skip to the next step. If you have not registered, click register now. Complete the registration form using your e-mail address as your IBM ID and click Submit.

4. Click Edit Profile.

5. In the Products list, select Software. A second list is displayed.

6. In the second list, select a product segment, for example, Application servers. A third list is displayed.

7. In the third list, select a product sub-segment, for example, Distributed Application & Web Servers. A list of applicable products is displayed.

8. Select the products for which you want to receive updates, for example, IBM HTTP Server and WebSphere Application Server.

9. Click Add products.

10. After selecting all products that are of interest to you, click Subscribe to email on the Edit profile tab.

11. Select Please send these documents by weekly email.

Page 101: Historical Data Collection Guide for IBM Tivoli OMEGAMON ...publib.boulder.ibm.com/tividd/td/ITCNetCC/GC32... · 12 Historical Data Collection Guide for IBM Tivoli OMEGAMON XE Products

Support Information 101

12. Update your e-mail address as needed.

13. In the Documents list, select Software.

14. Select the types of documents that you want to receive information about.

15. Click Update.

If you experience problems with the My support feature, you can obtain help in one of the following ways:

Online: Send an e-mail message to [email protected], describing your problem.

By phone: Call 1-800-IBM-4You (1-800-426-4968).

Contacting IBM Software Support

IBM Software Support provides assistance with product defects.

Before contacting IBM Software Support, your company must have an active IBM software maintenance contract, and you must be authorized to submit problems to IBM. The type of software maintenance contract that you need depends on the type of product you have:

• For IBM distributed software products (including, but not limited to, Tivoli, Lotus®, and Rational® products, as well as DB2® and WebSphere® products that run on Windows or UNIX operating systems), enroll in Passport Advantage® in one of the following ways:

  - Online: Go to the Passport Advantage Web page (http://www.lotus.com/services/passport.nsf/WebDocs/Passport_Advantage_Home) and click How to Enroll.

  - By phone: For the phone number to call in your country, go to the IBM Software Support Web site at http://techsupport.services.ibm.com/guides/contacts.html and click the name of your geographic region.

• For customers with Subscription and Support (S & S) contracts, go to the Software Service Request Web site at https://techsupport.services.ibm.com/ssr/login.

• For customers with IBMLink™, CATIA, Linux™, S/390®, iSeries™, pSeries®, zSeries®, and other support agreements, go to the Support Line Web site at http://www.ibm.com/services/us/index.wss/so/its/a1000030/dt006.

• For IBM eServer™ software products (including, but not limited to, DB2 and WebSphere products that run in zSeries, pSeries, and iSeries environments), you can purchase a software maintenance agreement by working directly with an IBM sales representative or an IBM Business Partner. For more information about support for eServer software products, go to the IBM Technical Support Advantage Web site at http://www.ibm.com/servers/eserver/techsupport.html.

If you are not sure what type of software maintenance contract you need, call 1-800-IBMSERV (1-800-426-7378) in the United States. From other countries, go to the contacts page of the IBM Software Support Handbook on the Web at http://techsupport.services.ibm.com/guides/contacts.html and click the name of your geographic region for phone numbers of people who provide support for your location.

To contact IBM Software Support, follow these steps:

1. "Determining the business impact" on page 102
2. "Describing problems and gathering information" on page 102
3. "Submitting problems" on page 103

Determining the business impact

When you report a problem to IBM, you are asked to supply a severity level. Therefore, you need to understand and assess the business impact of the problem that you are reporting. Use the following criteria:

Severity 1: The problem has a critical business impact. You are unable to use the program, resulting in a critical impact on operations. This condition requires an immediate solution.

Severity 2: The problem has a significant business impact. The program is usable, but it is severely limited.

Severity 3: The problem has some business impact. The program is usable, but less significant features (not critical to operations) are unavailable.

Severity 4: The problem has minimal business impact. The problem causes little impact on operations, or a reasonable circumvention to the problem was implemented.

Describing problems and gathering information

When explaining a problem to IBM, be as specific as possible. Include all relevant background information so that IBM Software Support specialists can help you solve the problem efficiently. To save time, know the answers to these questions:

• What software versions were you running when the problem occurred?

• Do you have logs, traces, and messages that are related to the problem symptoms? IBM Software Support is likely to ask for this information.

• Can you re-create the problem? If so, what steps were performed to re-create the problem?

• Did you make any changes to the system? For example, did you make changes to the hardware, operating system, networking software, and so on?

• Are you currently using a workaround for the problem? If so, be prepared to explain the workaround when you report the problem.


Submitting problems

You can submit your problem to IBM Software Support in one of two ways:

• Online: Click Submit and track problems on the IBM Software Support site at http://www.ibm.com/software/support/probsub.html. Type your information into the appropriate problem submission form.

• By phone: For the phone number to call in your country, go to the contacts page of the IBM Software Support Handbook (http://techsupport.services.ibm.com/guides/contacts.html) and click the name of your geographic region.

If the problem you submit is for a software defect or for missing or inaccurate documentation, IBM Software Support creates an Authorized Program Analysis Report (APAR). The APAR describes the problem in detail. Whenever possible, IBM Software Support provides a workaround that you can implement until the APAR is resolved and a fix is delivered. IBM publishes resolved APARs on the Software Support Web site daily, so that other users who experience the same problem can benefit from the same resolution.


Notices

Overview

This information was developed for products and services offered in the U.S.A. IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to:

IBM Director of Licensing
IBM Corporation
North Castle Drive
Armonk, NY 10504-1785
U.S.A.

For license inquiries regarding double-byte (DBCS) information, contact the IBM Intellectual Property Department in your country or send inquiries, in writing, to:

IBM World Trade Asia Corporation
Licensing
2-31 Roppongi 3-chome, Minato-ku
Tokyo 106, Japan

The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law:

INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement might not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice.

Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product and use of those Web sites is at your own risk.

IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you.

Licensees of this program who wish to have information about it for the purpose of enabling: (i) the exchange of information between independently created programs and other programs (including this one) and (ii) the mutual use of the information which has been exchanged, should contact:

IBM Corporation
2Z4A/101
11400 Burnet Road
Austin, TX 78758
U.S.A.

Such information may be available, subject to appropriate terms and conditions, including in some cases payment of a fee.

The licensed program described in this document and all licensed material available for it are provided by IBM under terms of the IBM Customer Agreement, IBM International Program License Agreement or any equivalent agreement between us.

Any performance data contained herein was determined in a controlled environment. Therefore, the results obtained in other operating environments may vary significantly. Some measurements may have been made on development-level systems and there is no guarantee that these measurements will be the same on generally available systems. Furthermore, some measurement may have been estimated through extrapolation. Actual results may vary. Users of this document should verify the applicable data for their specific environment.

Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.

All statements regarding IBM's future direction or intent are subject to change or withdrawal without notice, and represent goals and objectives only.

All IBM prices shown are IBM's suggested retail prices, are current and are subject to change without notice. Dealer prices may vary.

This information is for planning purposes only. The information herein is subject to change before the products described become available.

This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious, and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental.

COPYRIGHT LICENSE:

This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs. You may copy, modify, and distribute these sample programs in any form without payment to IBM for the purposes of developing, using, marketing, or distributing application programs conforming to IBM's application programming interfaces.

Each copy or any portion of these sample programs or any derivative work, must include a copyright notice as follows:

© (your company name) (year). Portions of this code are derived from IBM Corp. Sample Programs. © Copyright IBM Corp. _enter the year or years_. All rights reserved.

If you are viewing this information in softcopy form, the photographs and color illustrations might not display.

TrademarksIBM, the IBM logo, AS/400, Candle, Candle Management Server, Candle Management Workstation, CandleNet, CandleNet Portal, DB2, developerWorks, eServer, IBMLink, iSeries, Lotus, Lotus Notes, MVS, OMEGAMON, OMEGAMON Monitoring Agent, OS/400, Passport Advantage, pSeries, Rational, Redbooks, S/390, Tivoli, the Tivoli logo, VTAM, z/OS, and zSeries are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both.

Intel, Intel Inside (logos), MMX, Celeron, Intel Centrino, Intel Xeon, Itanium, Pentium and Pentium III Xeon are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States, other countries, or both.

Linux is a trademark of Linus Torvalds in the United States, other countries, or both.

Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both.

Java and all Java-based trademarks and logos are trademarks or registered trademarks of Sun Microsystems, Inc. in the United States, other countries, or both.


UNIX is a registered trademark of The Open Group in the United States and other countries.

Other company, product, and service names may be trademarks or service marks of others.

Index

A
advanced history configuration options 44
   CCC Logs product requirements for 45
archiving procedures
   using LOGSPIN 55
archiving procedures using Windows AT command 57
AS/400
   location of historical data 60
AS/400 considerations 60
AT command, Windows 54
attributes
   specifying for historical data collection 45

B
begin or end collection 40
books
   see publications 12, 13

C
Candle Data Warehouse
   configuring 48
   requirements for 48
CandleHistory, running on UNIX 71
CCC Logs
   advanced history configuration options 45
CMS
   rebuild CMS list 41
   requirement for 29, 37
   select target 41
CMS, target
   used to generate historical data collection rules 41
collecting historical data
   for HP NSK systems 73
collection interval, specifying 42
collection location, specifying 42
collection options
   historical data collection rules for 42
collection options, specifying 42
columns added to history data and to meta description files 27
configuration, custom
   for historical data collection 44
configure data collection
   CMW 37
Configure History icon 38
configuring data collection
   CandleNet Portal 32
configuring your warehouse 49
conversion 54
conversion process 74
   MVS 63
   OS/400 54, 74
   overview 54, 74
   Windows 54, 74
conversion, data
   automatic for MVS 64
   defining 26
   HP NSK Systems 73
   mutually exclusive with warehousing 43
   MVS 63
   OS/400 53
   programs to perform 27
   UNIX 70
   using a MODIFY command on MVS 63
   using KPDXTRA on MVS 64
   Windows 53
converting files using krarloff 58
   attributes formatting 58
   on OS/400 58
   parameters 59
   using krarloff on Windows 58
converting files using krarloff
   HP NSK 74
   OS/400 58
   Windows 58
converting historical data
   UNIX 69
CT/PDS 75
   commands 94
customer support
   see Software Support 101
customizing your history conversion 72

D
data conversion
   automatic for MVS 64
   defining 26
   mutually exclusive with warehousing 43
   MVS 63
   OS/400 53
   programs to perform 27


   using a MODIFY command on MVS 63
   using KPDXTRA on MVS 64
   Windows 53
data conversion, performing
   UNIX 70
data conversion, UNIX
   automatic 71
   one time 71
data roll off 30
data warehouse
   configuring 49
data warehousing
   mutually exclusive with data conversion 43
   prerequisites to 48
DDNAMES for KPDXTRA
   on MVS 65
display list of available Candle Management Servers 41
displaying collection status 40
documentation conventions 15

E
end or begin collection 40
error logging for warehoused data 52
exported historical data
   logging of 49
exporting persistent data 85

F
file conversion
   HP NSK systems 73
file corruption 50

G
group
   selecting for Historical Data Collection 41
group, selecting
   historical data collection rules for 41

H
historical data
   components used to collect 25
   location on AS/400 60
   planning to collect 25
   selecting a strategy 25
   warehousing 26
historical data collection
   attribute specifications required for 45
   CCC Logs used with 45
   custom configuration for 44
   defining rules 41
   rules 26
   selecting a product for which to collect data 41
   starting default collection 40
   strategy 26
   using CandleNet Portal 35
historical data collection configuration
   for Universal Agent 45
Historical Data Collection Configuration program
   invoking 30, 38
   prerequisites to running 29, 37
   requirements for invoking 30, 38
historical data collection rules
   selecting a group or table 41
   selecting a product 41
   selecting the target CMS 41
   specifying collection options 42
historical data conversion
   performing on UNIX 69
historical data table files
   location in MVS 67
historical reporting
   configuration 32
   file maintenance required 30
   long-term and short-term 30
   overview 30
history configuration
   Universal Agent 45
History Configuration dialog 32, 38
   used to display collection status 40
history configuration options, advanced 44
history tables
   naming of 49
history, short term 20
HP NSK
   file conversion for 73
   using krarloff on 74

I
icon, Configure History 38
information centers, searching to find software problem resolution 99
invoking the HDC Configuration program
   requirements 30, 38
   steps 30, 38

K
knowledge bases, searching to find software problem resolution 99
KPDXTRA 64
   DDNAMES to be allocated 65
   messages 66
   parameters 65
krarloff
   converting files on HP NSK 74


   converting files on OS/400 58
   converting files on Windows 58
krarloff parameters
   OS/400 59
   Windows 59

L
location of MVS executables 67
logfile parameters
   OS/400 56
   Windows 56
LOGSPIN 54
LOGSPIN program 54
LOGSPIN, archiving procedures using 55

M
manuals
   see publications 12, 13
MVS
   data conversion using KPDXTRA 64
   location of historical data table files 67
   manual archiving procedure 68
MVS executables, location of 67

N
naming of history tables 49
newsgroups 14

O
ODBC 22
   requirement for using 20, 22
   SQL Server database on Windows/NT 11, 12, 20, 22
   used to warehouse historical data 26
ODBC data
   logging of successful exports 49
OMEGAMON XE for WebSphere MQ products
   running on HP NSK systems 73
online publications
   accessing 13
Open Database Connectivity 22
   used to warehouse data 26
ordering publications 13
OS/400
   data conversion 53
   krarloff parameters 56, 59
   logfile parameters 56
   overview of conversion process 54, 74

P
performance impact
   on the agent 23
   on the CMS or the agent 23
   requests for historical data from large tables 23
   warehousing 24
performance impact of large data requests 23
Persistent Data Store
   maintaining 75
   restoring exported data 86
persistent data store
   backing up datasets to DASD 81
   command interface 94
   commands 94
   connecting the dataset to the CMS 83
   data record format of exported data 87
   dataset naming conventions 82
   determining the medium for dataset backup 80
   disconnecting the dataset 84
   exporting and restoring persistent data 85
   exporting persistent data 85
   extracted data format 92
   extracting CT/PDS data to EBCDIC files 92
   extracting CT/PDS data to flat files 91
   extracting data to EBCDIC files 92
   finding background information 83
   introduction 76
   maintaining the persistent data store 75
   making archived data available 82
   naming the export datasets 81
   overview of maintenance process 79
   what part of maintenance do you control 80
planning to collect historical data 25
prerequisites to configuring your historical warehouse 47
prerequisites to running HDC Configuration program 29, 37
prerequisites to warehousing 48
preventing file corruption
   if your database is corrupted 51
   when storing data at the agent 50
   when storing data at the CMS 50
preventing historical data file corruption 50
publications 12
   accessing online 13
   ordering 13

R
reporting tool
   data conversion using 20
restoring exported persistent data 86
roll off, data 30
rules, defining for historical data collection 41
rules, historical data collection
   selecting a group or table 41


   selecting target CMS 41
   selecting the target CMS 41
   specifying collection options 42

S
sample meta description file (.hdr) 28
select CMS targets for data collection 41
selecting a product for Historical Data Collection 41
selecting a table or group for Historical Data Collection 41
short term history 20
Software Support
   contacting 101
SQL Server database on Windows/NT
   access via ODBC 11, 12, 20, 22
starting default collection 40
starting historical data collection
   CandleNet Portal 35
stopping all historical data collection 40
strategy for historical data collection 26

T
table
   selecting for Historical Data Collection 41
table, selecting
   historical data collection rules for 41
Tandem, see HP NSK
Tivoli software information center 13

U
Universal Agent history configuration 45

V
viewing historical data on CandleNet Portal 31

W
warehouse
   configuring 49
   prerequisites to configuring 47
warehouse interval, specifying 42
warehousing
   error logging for 52
   logging of successful exports 49
   mutually exclusive with data conversion 43
   prerequisites to 48
warehousing historical data 26
warehousing, data
   mutually exclusive with data conversion 43
Windows
   krarloff parameters 56, 59
   location of executables 61
   location of historical data table files 61
   location of history configuration files 61
   logfile parameters 56
   overview of conversion process 54, 74
Windows AT command 54
Windows data conversion 53


IBM®

Part Number: GC32-9429-01

Printed in USA

GC32-9429-01
