
Cisco Cloud Object Storage Release 3.12.1 User Guide

May 8, 2017

Cisco Systems, Inc.
www.cisco.com

Cisco has more than 200 offices worldwide. Addresses, phone numbers, and fax numbers are listed on the Cisco website at www.cisco.com/go/offices.


THE SPECIFICATIONS AND INFORMATION REGARDING THE PRODUCTS IN THIS MANUAL ARE SUBJECT TO CHANGE WITHOUT NOTICE. ALL STATEMENTS, INFORMATION, AND RECOMMENDATIONS IN THIS MANUAL ARE BELIEVED TO BE ACCURATE BUT ARE PRESENTED WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED. USERS MUST TAKE FULL RESPONSIBILITY FOR THEIR APPLICATION OF ANY PRODUCTS.

THE SOFTWARE LICENSE AND LIMITED WARRANTY FOR THE ACCOMPANYING PRODUCT ARE SET FORTH IN THE INFORMATION PACKET THAT SHIPPED WITH THE PRODUCT AND ARE INCORPORATED HEREIN BY THIS REFERENCE. IF YOU ARE UNABLE TO LOCATE THE SOFTWARE LICENSE OR LIMITED WARRANTY, CONTACT YOUR CISCO REPRESENTATIVE FOR A COPY.

The Cisco implementation of TCP header compression is an adaptation of a program developed by the University of California, Berkeley (UCB) as part of UCB’s public domain version of the UNIX operating system. All rights reserved. Copyright © 1981, Regents of the University of California.

NOTWITHSTANDING ANY OTHER WARRANTY HEREIN, ALL DOCUMENT FILES AND SOFTWARE OF THESE SUPPLIERS ARE PROVIDED “AS IS” WITH ALL FAULTS. CISCO AND THE ABOVE-NAMED SUPPLIERS DISCLAIM ALL WARRANTIES, EXPRESSED OR IMPLIED, INCLUDING, WITHOUT LIMITATION, THOSE OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OR ARISING FROM A COURSE OF DEALING, USAGE, OR TRADE PRACTICE.

IN NO EVENT SHALL CISCO OR ITS SUPPLIERS BE LIABLE FOR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, OR INCIDENTAL DAMAGES, INCLUDING, WITHOUT LIMITATION, LOST PROFITS OR LOSS OR DAMAGE TO DATA ARISING OUT OF THE USE OR INABILITY TO USE THIS MANUAL, EVEN IF CISCO OR ITS SUPPLIERS HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.

Cisco and the Cisco logo are trademarks or registered trademarks of Cisco and/or its affiliates in the U.S. and other countries. To view a list of Cisco trademarks, go to this URL: www.cisco.com/go/trademarks. Third-party trademarks mentioned are the property of their respective owners. The use of the word partner does not imply a partnership relationship between Cisco and any other company. (1110R)

Any Internet Protocol (IP) addresses and phone numbers used in this document are not intended to be actual addresses and phone numbers. Any examples, command display output, network topology diagrams, and other figures included in the document are shown for illustrative purposes only. Any use of actual IP addresses or phone numbers in illustrative content is unintentional and coincidental.

© 2017 Cisco Systems, Inc. All rights reserved.


Contents

Overview 1-1

Product Description 1-1

COS and Cloud DVR 1-1

COS and V2PC 1-1

COS Components 1-2

Networks 1-3

COS Nodes 1-3

COS Cluster 1-4

Object Store Metadata 1-4

Hardware Platforms 1-4

Features 1-5

Overview 1-6

Server Support 1-10

Upgrade and Downgrade Support 1-12

Automated COS Node Configuration 1-12

Intel Preboot Execution Environment (PXE) Support 1-12

Improved TCP Transmission 1-13

Small Object Support 1-13

Fanout Compaction 1-13

V2PC GUI 1-13

COS Node Telemetry Forwarding 1-14

High Availability (HA) 1-14

Swauth API 1-14

Swift Object Store API 1-15

Fanout API 1-16

Object Store Metadata Resiliency 1-16

Object Store Data Resiliency 1-17

Management Interface Port Bonding 1-17

Service Load Balancing 1-18

CLI Utilities 1-18

COS Cluster Support 1-18

COS AIC Client Management 1-19

Node Decommissioning Paused for Maintenance Mode 1-19

Prerequisites 1-19


Restrictions and Limitations 1-19

Deploying COS 2-1

Hardware Options 2-1

COS Network Architecture 2-2

Configuring End-to-End Quality of Service 2-3

About Priority Flow Control 2-4

Installing V2PC 2-4

Installing and Provisioning the Cisco-COS Application on V2PC 2-4

Confirm Prerequisites 2-4

Create a Provider 2-5

Create a Zone and Worker 2-5

Download and Import the COS Application 2-5

Launch the COS Application on the V2PC Master 2-6

Configuring the COS Application 2-6

Create an IP Pool 2-6

Create a COS Cluster 2-7

Create COS Node Profiles 2-7

Installing COS 2-7

Procedure for New Installations 2-8

Procedure for Pre-Loaded Installations 2-10

Changing COS Node Parameters after Installation 2-11

Initial COS Node Configuration 2-11

Using cosinit1step (Optional) 2-12

Registering the COS Node to V2PC 2-13

Creating User Accounts and Verifying the COS Node 2-13

Verifying Fanout API 2-14

Upgrading the Cisco-COS Application on V2PC 2-15

Download and Import a New COS Application 2-15

Remove the Existing COS Application Instance 2-15

Create a New COS Application Instance 2-15

Automated COS Node Configuration 2-16

Automated Configuration at Installation (Optional) 2-17

Enabling Fanout Compaction (Recommended) 2-18

Behavior and Limitations 2-18

Enablement Procedure 2-19

Configuring Telemetry Forwarding 2-19

Install the RPMs 2-19


Configure Telemetry Forwarding 2-21

Start Telemetry Forwarding 2-23

Troubleshooting the Service 2-23

Using Elasticsearch Index Templates 2-23

Using Kibana Index Patterns 2-24

System Monitoring 3-1

COS Cluster Status Monitoring 3-1

COS Node Status Monitoring 3-1

Viewing COS Node Status 3-2

Viewing Deployment Status 3-2

Viewing COS Alarms and Events 3-3

COS-AIC Alarms and Events 3-3

COS AIC Client Events 3-6

COS AIC Server Events 3-6

Viewing COS Statistics 3-6

COS AIC Client Monitoring 3-7

Troubleshooting Alarms, Events, and Statistics 3-8

COS Node Platform Monitoring with SNMP 3-9

Overview 3-9

Installation 3-10

Configuration 3-10

MIB Extensions 3-10

Monitored Items 3-13

Reference Information A-1

COS Service Model A-1

Using the V2PC GUI A-2

Accessing the V2PC GUI A-2

Dashboard Page A-3

Cisco Cloud Object Store (COS) Page A-3

COS Network Ports and Services A-3

COS Maintenance A-4

Command Line Reboot A-4

Switching Node Admin State from the GUI A-4

Node Decommissioning and Removal A-5

Reinstalling a COS Node in a Cluster A-6

Behavior of COS Services on COS Node Boot A-7

COS Service Reliability A-8

COS Node Disks A-8


COS Services A-8

COS Node Interfaces A-9

Server Reachability A-9

COS Node Hard Drive Replacement A-9

Configuring Resiliency and Management Interface Bonding B-1

Configuring Resiliency B-1

About Mirroring B-2

About Erasure Coding B-2

Defining Resiliency B-2

Configuring Resiliency Using the V2PC GUI B-5

Configuring Local Mirroring Manually B-6

Configuring Local Erasure Coding Manually B-7

Migrating from LM to LEC Manually B-7

Configuring Remote Mirroring Manually B-7

Configuring Distributed Erasure Coding Manually B-8

Finding N:M Values B-9

Replicating Objects During Swift Write Operations B-11

Configuring Management Interface Bonding B-11

To Configure Bonding Manually B-11

COS Command Line Utilities C-1

Hardware Prerequisites C-1

Installing the CLI Utilities C-1

Using the CLI Utilities C-3

cos swauth C-3

cos-swift C-5

PXE Network Installation D-1

Setting up a DHCP or Proxy DHCP Server D-1

Installing and Configuring the DHCP Server for PXE D-2

Proxy DHCP Server for PXE D-4

Configuring TFTP for PXE D-5

Installing TFTP Server D-5

Configuring iptables for TFTP D-6

Setting up the PXELINUX Bootstrap Program D-7

Setting up a Network Installation Server D-9

Setting Up the FTP Server D-9

Setting Up the HTTP Server D-10

Setting Up the NFS Server D-11


Enabling PXE Boot in BIOS D-12

Enabling the PXE Option ROM D-12

BIOS Boot Order D-12

CDDM Management Utility E-1

Utility Name E-1

Synopsis E-1

Description E-1

Options E-2

Return Codes E-5

Examples E-5


Preface

This preface describes who should read the Cisco Cloud Object Storage Release 3.12.1 User Guide and explains its overall organization and document conventions. It contains the following sections:

• Audience, page ix

• Document Organization, page ix

• Document Conventions, page x

• Related Publications, page xi

• Obtaining Documentation and Submitting a Service Request, page xi

Audience

This guide is for the networking professional managing the Cisco Cloud Object Storage (COS) product. Before using this guide, you should have experience working with Linux platforms and be familiar with the concepts and terminology of Ethernet, local area networking, clustering and high availability, and network services such as DNS and NTP.

Document Organization

This document contains the following chapters and appendices:

Chapters or Appendices Descriptions

Chapter 1, “Overview” Describes the COS, its components, features, and prerequisites for deployment.

Chapter 2, “Deploying COS” Gives procedures for installing and configuring COS.

Chapter 3, “System Monitoring” Provides tools and methods for monitoring COS nodes and clusters.

Appendix A, “Reference Information” Describes additional features, maintenance details, and tools for managing COS.

Appendix B, “Configuring Resiliency and Management Interface Bonding”

Describes COS resiliency and management port bonding features and explains how to configure them through the COS setup file or (for resiliency only) V2PC GUI.


Appendix C, “COS Command Line Utilities” Provides instructions for installing and using the COS CLI utilities.

Appendix D, “PXE Network Installation” Provides instructions for setting up remote installation using the Intel Preboot Execution Environment (PXE).

Appendix E, “CDDM Management Utility” Provides instructions for using the COS CDDM utility.

Document Conventions

This document uses the following conventions:

Note Means reader take note. Notes contain helpful suggestions or references to material not covered in the manual.

Tip Means the following information will help you solve a problem. A tip might not describe troubleshooting or even an action, but could still be useful information, similar to a Timesaver.

Caution Means reader be careful. In this situation, you might perform an action that could result in equipment damage or loss of data.

Timesaver Means the described action saves time. You can save time by performing the action described in the paragraph.

Convention Indication

bold font Commands and keywords and user-entered text appear in bold font.

italic font Document titles, new or emphasized terms, and arguments for which you supply values are in italic font.

[ ] Elements in square brackets are optional.

{x | y | z } Required alternative keywords are grouped in braces and separated by vertical bars.

[ x | y | z ] Optional alternative keywords are grouped in brackets and separated by vertical bars.

string A nonquoted set of characters. Do not use quotation marks around the string or the string will include the quotation marks.

courier font Terminal sessions and information the system displays appear in courier font.

< > Nonprinting characters such as passwords are in angle brackets.

[ ] Default responses to system prompts are in square brackets.

!, # An exclamation point (!) or a pound sign (#) at the beginning of a line of code indicates a comment line.


Warning IMPORTANT SAFETY INSTRUCTIONS

This warning symbol means danger. You are in a situation that could cause bodily injury. Before you work on any equipment, be aware of the hazards involved with electrical circuitry and be familiar with standard practices for preventing accidents. Use the statement number provided at the end of each warning to locate its translation in the translated safety warnings that accompanied this device.

SAVE THESE INSTRUCTIONS

Warning Statements using this symbol are provided for additional information and to comply with regulatory and customer requirements.

Related Publications

Refer to the following documents for additional information about COS 3.12.1:

• Release Notes for COS 3.12.1

• Cisco Cloud Object Storage Release 3.12.1 API Guide

• Cisco Cloud Object Storage Release 3.12.1 Troubleshooting Guide

• Cisco CDE6032 Storage Server Installation and Service Guide

• Cisco UCS S3260 Storage Server Installation and Service Guide

• Cisco UCS C3160 Rack Server Installation and Service Guide

• Cisco Content Delivery Engine 465 Hardware Installation Guide

• Cisco Virtualized Video Processing Controller User Guide

• Open Source Used in COS 3.12.1

Obtaining Documentation and Submitting a Service Request

For information on obtaining documentation, using the Cisco Bug Search Tool (BST), submitting a service request, and gathering additional information, see What’s New in Cisco Product Documentation.

To receive new and revised Cisco technical content directly to your desktop, you can subscribe to the What’s New in Cisco Product Documentation RSS feed. The RSS feeds are a free service.


Chapter 1

Overview

Product Description

Cisco Cloud Object Storage (COS) provides distributed, resilient, high-performance storage and retrieval of binary large object (blob) data. Object storage is distributed across a cluster of hardware systems, or nodes. The storage cluster is resilient against hard drive failure within a node and against node failure within a cluster. Nodes can be added to or removed from the cluster to adjust cluster capacity as needed.

The underlying interface for managing COS content is the OpenStack Swift API, with enhancements to improve quality of service when accessing large media objects. COS includes an authentication and authorization service that implements the OpenStack Swauth API. To administer the cluster, COS includes an HTTP-based cluster-management API.
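As a sketch of how a client interacts with these two interfaces, the Python fragment below builds a Swauth token request and a subsequent Swift object PUT. The cluster FQDN, account, user, key, and token shown are placeholders, not real values; the code only constructs the requests and assumes no live cluster:

```python
# Sketch of the Swauth/Swift request flow against a COS cluster.
# All names below (FQDN, account, user, key, token) are placeholders.

CLUSTER = "cos-cluster.example.com"  # hypothetical cluster FQDN

def swauth_request(user: str, key: str) -> dict:
    """Build a Swauth token request: GET /auth/v1.0 with auth headers."""
    return {
        "method": "GET",
        "url": f"http://{CLUSTER}/auth/v1.0",
        "headers": {"X-Auth-User": user, "X-Auth-Key": key},
    }

def swift_put(storage_url: str, token: str, container: str, obj: str) -> dict:
    """Build a Swift object PUT using the token returned by Swauth."""
    return {
        "method": "PUT",
        "url": f"{storage_url}/{container}/{obj}",
        "headers": {"X-Auth-Token": token},
    }

auth = swauth_request("recorder:admin", "secretkey")
put = swift_put(f"http://{CLUSTER}/v1/AUTH_recorder", "AUTH_tk123",
                "recordings", "episode-001.ts")
```

In a real deployment the client sends the first request, reads the X-Auth-Token and X-Storage-Url from the response, and uses them in the second; see the COS 3.12.1 API Guide for the authoritative call reference.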

COS and Cloud DVR

Beginning with Release 3.8.1, COS adds support for API calls that enable COS to manage fanout storage operations for applications such as Cloud DVR (cDVR). Fanout storage efficiently supports unique copies for fair-use compliance. A single fanout request can save many copies of an object, conserving network resources and optimizing storage compute and disk utilization.

The COS Fanout API includes calls to create, retrieve, and delete fanout objects and to create, retrieve, and delete individual copies of content within a fanout object. The Fanout API also enables interoperability between COS 3.12.1 and Cisco Virtual Media Recorder (VMR) as part of a complete cDVR solution.
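The exact Fanout API endpoints and payloads are defined in the Cisco Cloud Object Storage Release 3.12.1 API Guide. Purely as an illustration of the fanout concept (a single request describing one source object and many per-subscriber copies), a hypothetical request body might be assembled like this:

```python
# Illustrative sketch of the fanout concept only: one request, many copies.
# The payload shape and field names here are hypothetical, not the actual
# Fanout API schema; consult the COS 3.12.1 API Guide for the real format.

def build_fanout_request(source_object: str, subscriber_ids: list) -> dict:
    return {
        "object": source_object,
        "copies": [{"copy_id": sid} for sid in subscriber_ids],
    }

req = build_fanout_request("program-4711.ts", ["sub-001", "sub-002", "sub-003"])
```

The point of the structure is that the cluster, not the client, multiplies the object into unique copies, which is what saves network and compute resources.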

COS and V2PC

COS 3.12.1 is installed as a service of Cisco Virtualized Video Processing Controller (V2PC). V2PC provides a common management interface for COS, VMR, and other applications that together form a complete virtualized media processing solution.

V2PC is the control interface for the Cisco Virtualized Video Platform (V2P), an open platform that transforms the way video infrastructure is built, deployed, provisioned, and maintained. V2PC enables a video processing application to run over a cloud or on-premise infrastructure while flexibly orchestrating its media workflows and resources. COS integrates transparently with V2PC, and can be managed through the V2PC graphical user interface (GUI) web application.


Figure 1-1 Cisco Virtualized Video Processing (V2P) Platform

Customers can use V2PC to rapidly create and orchestrate media workflows across video headends and data center environments, and can evolve seamlessly from a hardware-based infrastructure to a hybrid or purely virtualized cloud infrastructure. The software-centric workflows increase the reachability of content across a variety of content consumption platforms.

This transformation has resulted in flexible user experiences and simplified operations, allowing customers to better manage, modify, and scale media workflows to deliver services such as Live, VOD, Time Shift, and Cloud DVR (cDVR) to OTT consumers.

V2PC works with a hierarchy of components that includes platforms, application containers, service containers, providers, zones, nodes, and the logical functions they support, which are configured into media workflows.

For more information on V2PC and its components, see the Cisco Virtualized Video Processing Controller User Guide for your V2PC release.

Note COS 3.12.1 has been tested for compatibility with V2PC Release Candidate 3.2.2 build 10744 and cos-app build cisco-cos-1.0.429.tgz. Later releases of COS are expected to be compatible with later versions of V2PC and cos-app. Contact Cisco for updated compatibility information.

COS Components

COS has a number of subsystems:

• Networks: Interfaces are grouped into distinct networks to isolate management functions from high-volume data traffic.

• Clusters and Nodes: COS services are provided by a cluster of nodes, with both the cluster and the individual nodes as distinctly manageable components.


• Virtualized Video Processing Controller (V2PC): COS 3.12.1 components are managed using services running on the V2PC.

• Hardware Platforms: COS software is currently deployed on selected Cisco Content Delivery Engine (CDE) and Cisco UCS server hardware models.

The following sections further describe each of these components.

Networks

COS divides network interfaces into two groups: the data network and the management network.

• The management network is used to monitor and manage COS clusters and individual COS nodes. The management network also handles traffic from the COS metadata store.

• The data network is used by client applications to interact with the COS authentication and authorization services, and the COS object storage services. Client applications use the Swauth API to interact with the COS authentication and authorization services, and the Swift API to interact with the COS object storage services.

Similarly, the COS installation separates its traffic into data and management traffic, and expects these two types of traffic to be isolated into their own subnets. COS management traffic on 1G management adapters can be combined with other traffic not intended for the COS system. However, COS data traffic on 10G adapters should be isolated on a managed subnet that carries no non-COS traffic. If non-COS traffic is allowed on a COS data subnet, it degrades system performance and can cause availability issues.
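As an illustration of this subnet separation, the sketch below uses example documentation-range subnets (not COS defaults) to check which network an address belongs to:

```python
# Sketch: checking that data-plane addresses stay off the management subnet.
# The two subnets are example (RFC 5737) values, not COS defaults.
import ipaddress

MGMT_NET = ipaddress.ip_network("192.0.2.0/24")     # example 1G management subnet
DATA_NET = ipaddress.ip_network("198.51.100.0/24")  # example 10G data subnet

def on_subnet(addr: str, net: ipaddress.IPv4Network) -> bool:
    """Return True if the address falls inside the given subnet."""
    return ipaddress.ip_address(addr) in net
```

A deployment script could apply such a check to each node's configured adapters before bringing the data services up.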

COS Nodes

The COS software runs on a collection of computing systems called nodes, which are connected via the management and data networks. Currently, there are two types of COS nodes: the cluster controller and the storage nodes.

The storage nodes host software that manages object-store and authentication and authorization service metadata, stores and retrieves object contents, and communicates with the cluster controller. COS storage nodes can be added or removed without disrupting COS service availability. Adding nodes is a way of elastically increasing the storage and bandwidth capacity of the COS cluster.

The COS node software includes a customized Linux distribution, currently based on CentOS 6. This provides the basic framework for the other software applications and modules that run on the node. Each node runs a set of kernel modules and a number of daemons that run in the Linux user-space.

The kernel modules:

• Support real-time management of node hardware resources.

• Provide the distributed, resilient content-store used for object-store data.

• Provide the Swift and Swauth API support via the data network.

The daemons:

• Coordinate service log files.

• Communicate with the cluster controller.

• Provide a distributed database for object-store metadata.

• Communicate with the modules running in the kernel.


While the data-network interfaces communicate directly with the kernel modules, the management network interfaces communicate directly with the user-space daemons.

COS Cluster

COS services are provided by software running on a set of nodes called a COS cluster. The nodes in the cluster are connected by both data and management networks. COS Release 3.12.1 supports one cluster per V2PC deployment. Each cluster has a single fully-qualified domain name (FQDN) that is used by client applications to access COS services.

A COS cluster also has a number of configuration parameters that define the cluster behavior. Some of these parameters include:

• The Swift and Swauth API constraints.

• The IP address pools used to assign IP addresses to individual node network adapters.

• The IP address of the V2PC configuration document server.

For a detailed description of the configuration parameters, see Deploying COS, page 2-1.

Object Store Metadata

COS object store metadata and Swauth service data are stored in the high-performance, resilient NoSQL Cassandra database. The cosd daemon running on each COS node acts as the Cassandra client, and implements the schema for Swift and Swauth metadata documents stored in Cassandra. Each COS storage node runs an instance of the Cassandra server, so metadata storage capacity increases linearly along with content storage capacity as COS storage nodes are added to the cluster.

Hardware Platforms

Currently, COS 3.12.1 software can be deployed on the following hardware models:

• Cisco CDE6032 Dual Node Storage Server with 56 x 10 TB hard drives (560 TB total storage), giving 28 drives (280 TB) to each server node

• Cisco UCSC S3260-4U5 Dual Node Storage Server with 56 x 10 TB hard drives (560 TB total storage), giving 28 drives (280 TB) to each server node

• Cisco UCSC S3260-4U4 Single Node Storage Server with 56 x 6 TB hard drives (336 TB total storage), giving all 56 drives to one server node

• Cisco UCSC S3260-4U3 Dual Node Storage Server with 56 x 6 TB hard drives (336 TB total storage), giving 28 drives (168 TB) to each server node

• Cisco UCSC C3160-4U2 Rack Server with 54 x 6 TB hard drives (324 TB total storage)

• Cisco UCSC C3160-4U1 Rack Server with 54 x 4 TB hard drives (216 TB total storage)

• Cisco Content Delivery Engine CDE465-4R4 with 36 x 6 TB hard drives (216 TB total storage)
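The raw-capacity figures above follow directly from drives per chassis times drive size, split across server nodes; a quick arithmetic cross-check:

```python
# Cross-check of the platform capacity figures quoted above:
# (drives per chassis, TB per drive, server nodes per chassis).
platforms = {
    "CDE6032":    (56, 10, 2),
    "S3260-4U5":  (56, 10, 2),
    "S3260-4U4":  (56, 6, 1),
    "S3260-4U3":  (56, 6, 2),
    "C3160-4U2":  (54, 6, 1),
    "C3160-4U1":  (54, 4, 1),
    "CDE465-4R4": (36, 6, 1),
}

def capacity(name: str) -> tuple:
    """Return (total TB per chassis, TB per server node)."""
    drives, tb_per_drive, nodes = platforms[name]
    total = drives * tb_per_drive
    return total, total // nodes

# capacity("CDE6032") -> (560, 280), matching the figures in the list above.
```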

For information about installing the hardware, see the following:

• Cisco CDE6032 Storage Server Installation and Service Guide

• Cisco UCS S3260 Storage Server Installation and Service Guide

• Cisco UCS C3160 Rack Server Installation and Service Guide


• Cisco Content Delivery Engine 465 Hardware Installation Guide

Features

• Overview, page 1-6

• Server Support, page 1-10
• Upgrade and Downgrade Support, page 1-12

• Automated COS Node Configuration, page 1-12

• Intel Preboot Execution Environment (PXE) Support, page 1-12

• Improved TCP Transmission, page 1-13

• Small Object Support, page 1-13

• Fanout Compaction, page 1-13

• V2PC GUI, page 1-13
• COS Node Telemetry Forwarding, page 1-14

• High Availability (HA), page 1-14

• Swauth API, page 1-14

• Swift Object Store API, page 1-15

• Fanout API, page 1-16

• Object Store Metadata Resiliency, page 1-16

• Object Store Data Resiliency, page 1-17

• Management Interface Port Bonding, page 1-17

• Service Load Balancing, page 1-18

• CLI Utilities, page 1-18

• COS Cluster Support, page 1-18

• COS AIC Client Management, page 1-19

• Node Decommissioning Paused for Maintenance Mode, page 1-19


Overview

The table below provides an overview of the COS features.

Table 1-1 Overview of COS Features

Feature Set Features

Cisco UCS and CDE Server Support • Supports installation on the following hardware:

– Cisco CDE6032 Dual Node Storage Server with 56 x 10 TB hard drives (560 TB total storage), giving 28 drives (280 TB) to each server node

– Cisco UCSC S3260-4U5 Dual Node Storage Server with 56 x 10 TB hard drives (560 TB total storage), giving 28 drives (280 TB) to each server node

– Cisco UCSC S3260-4U4 Single Node Storage Server with 56 x 6 TB hard drives (336 TB total storage), giving all 56 drives to one server node

– Cisco UCSC S3260-4U3 Dual Node Storage Server with 56 x 6 TB hard drives (336 TB total storage), giving 28 drives (168 TB) to each server node

– Cisco UCSC C3160-4U2 Rack Server with 54 x 6 TB hard drives (324 TB total storage)

– Cisco UCSC C3160-4U1 Rack Server with 54 x 4 TB hard drives (216 TB total storage)

– Cisco Content Delivery Engine CDE465-4R4 with 36 x 6 TB hard drives (216 TB total storage)

Upgrade and Downgrade Support • COS 3.12.1 adds support for upgrade from and downgrade to COS Release 3.8.x.

Automated Node Configuration • A single configuration file for all COS nodes can be stored on an FTP or HTTP server and then downloaded by the COS initialization routine (cosinit) during installation.

• A single downloadable configuration file eliminates the need to configure nodes individually, whether manually or via the V2PC GUI.

• COS 3.12.1 lets you specify the URL of a configuration file to be used at installation to automatically configure the node according to a predefined template.
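As a sketch of the kind of check an operator might apply before handing a configuration URL to cosinit, the following validates that the URL uses one of the documented hosting schemes (FTP or HTTP). The server name and file path are placeholders:

```python
# Sketch: pre-flight validation of a cosinit configuration-file URL.
# COS documents FTP or HTTP servers as the hosting options for the shared
# node configuration file; the example URL below is a placeholder.
from urllib.parse import urlparse

def validate_config_url(url: str) -> bool:
    """Accept only ftp:// or http:// URLs that name a host."""
    parsed = urlparse(url)
    return parsed.scheme in ("ftp", "http") and bool(parsed.netloc)
```

For example, `validate_config_url("http://deploy.example.com/cos/node-template.cfg")` passes, while a local `file://` path does not.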

Intel Preboot Execution Environment (PXE) Support

• PXE can be used to download a network bootstrap program (NBP) to remotely install a COS client over a network.

Improved TCP Transmission • COS 3.12.1 includes optimizations to improve TCP transmit performance.

Small Object Support • For cloud DVR and similar applications, COS 3.12.1 introduces Small Object Support to efficiently manage storage of many small files representing media segments.


Fanout Compaction • COS 3.12.1 adds support for compaction of fanout objects to reclaim disk space when copies of a fanout object are deleted before the entire fanout object has been deleted.

V2PC GUI • The V2PC GUI lets you quickly and easily access many COS deployment, monitoring, and alarm functions.

• Displays storage, network bandwidth, session count, and alarms for individual COS disks, nodes, services, and interfaces.

• Includes a graphical display of deployment statistics and trends related to disk, service, and interface status.

• Supports configuration of COS node service interface from the GUI.

• Supports setting of resiliency policies on a per-cluster basis from the GUI.

• Supports configuring COS clusters and (optionally) generating a COS node initialization profile.

COS Node Telemetry Forwarding • COS 3.12.1 lets you configure forwarding of log events and statistics to an Elasticsearch instance or a Cisco Zeus account for centralized log management and statistical analysis.

High Availability (HA) • COS supports HA as implemented in V2PC, providing redundancy for the VMs. V2PC uses both Cisco and third-party components to support HA.

Swauth API • Simple Auth Service API for authentication of Swift operations.

• Based on Swauth Open-Source Middleware API.

• Used to manage accounts, users, and account service endpoints.


Swift Object Store API • An implementation of a subset of the continually evolving OpenStack Swift API.

• Command executions are authenticated using auth tokens provided by Swauth service.

• Used to create and manage containers and objects for persistent storage in a COS cluster.

• Supports archiving of content from Cisco or ARRIS recorders using DataDirect Networks (DDN) Web Object Scaler (WOS) archive objects.

Fanout API • COS 3.12.1 includes support for a Fanout API to enable interactions with other Cisco applications in the Virtualized Video Processing (V2P) suite.

Object Store Metadata Resiliency • Metadata resiliency is provided by a distributed and replicated Cassandra document database.

• Each COS node participates in the persistence of a subset of the Cassandra database.

• Manual administrative intervention is required on node failure.



Object Store Data Resiliency • Data is resilient to both hard drive and COS node failures.

• Local Erasure Coding (LEC), or local COS node data resiliency, is provided by local software RAID.

Note By default, LEC is enabled and is configured for two drive failures. We recommend using this default configuration for resiliency.

• Distributed erasure coding (DEC) provides data resiliency across nodes, protecting stored content from loss due to node failure.

• COS cluster data resiliency is provided by object replication (mirroring). The V2PC GUI allows for configuration of both local and remote mirror copies.

Note When configuring local mirroring for resiliency, we recommend using no more than one local mirror copy.

• Supports configuration of mixed resiliency policies (local erasure with remote mirroring) via the V2PC GUI.

• Alarms are available for loss of storage.

Management Interface Bonding • Supports defining two node management interface ports as a primary-backup pair.

Service Load Balancing • COS cluster load balancing is provided by DNS round-robin of a FQDN to multiple physical IPv4 addresses hosted by COS nodes.

• Optimal load balancing is provided by extensions to the Swift API through the implementation of HTTP redirect.

• Remote smoothing facilitates load balancing by moving content to a new node when it is added to a cluster.


Server Support

Cisco UCS S3260 Storage Server

COS 3.12.1 supports the Cisco UCS S3260 platform, which supports up to two compute nodes and up to 56 storage disks per chassis.

The UCS S3260 is a 4RU chassis that supports a single-node or dual-node server node configuration. When configured for single-node, the chassis includes the following:

• 2 x 480 GB solid-state drives (SSDs) for operating system and COS installation

• 28 or 56 hard drives for content storage

• One server node with 32 x 16 GB RAM, providing 256 GB

• 1 system I/O controller with 2 x 40 Gbps QSFP ports

When configured for dual-node, the chassis includes the following:

• 4 x 480 GB solid-state drives (SSDs) for operating system and COS installation, 2 per node

• 56 hard drives for content storage, with the drives in slots 1-28 dedicated to server node 1 and the drives in slots 29-56 dedicated to server node 2

• Two server nodes with 32 x 16 GB RAM each, providing 256 GB for each node

• 2 system I/O controllers with 2 x 40 Gbps QSFP ports each, one controller dedicated to each server node

COS Cluster Support • Each COS application instance can have one or more clusters created to service that application instance.

• Each cluster can have its own asset redundancy policy, shared by all COS nodes that are members of that cluster.

• If a cluster is disabled, all member COS nodes will have their interfaces removed from the DNS. Likewise, when a cluster is enabled, all member node interfaces will be added back to the DNS.

COS AIC Client Management • The COS application instance controller (AIC) Client process is monitored by the monit process that runs on each COS node, and if not running, is restarted.

• The COS AIC Client process creates a PID file that is added to the monit script so it can be monitored and restarted.

• Command-line scripts support stopping and restarting the AIC Client process manually, bypassing the normal automatic restart process.

Node Decommissioning Paused for Maintenance Mode

• If a node is in the process of being decommissioned and any node in its cluster is placed in Maintenance mode, the decommissioning process is paused.


Note The CDE6032 is a special edition of the UCS S3260 configured specifically for COS and other V2P applications. The CDE6032 comes with COS 3.12.1 preloaded on a boot drive consisting of 4 x 480 GB SSDs in hardware RAID, and ships with the maximum storage currently available (560 TB total).

A pre-installation script is provided to properly configure the S3260 chassis for either single-node or dual-node service. You must run this script before installing COS on any S3260 server node.

After COS installation and during the cosinit sequence on each node, you are prompted to select one of three available storage bundles:

• UCS S3260-4U3 (28 disks per server node): Select this bundle if you configured a single COS node with 28 hard drives installed, or a dual COS node setup with 28 x 6 TB hard drives per server node.

• UCS S3260-4U4 (56 disks per server node): Select this bundle if you configured a single COS node with 56 hard drives.

• UCS S3260-4U5 (56 disks per server node): Select this bundle if you configured a dual COS node setup with 28 x 10 TB hard drives per server node.

Knowing which storage bundle is configured allows the system to more accurately report inventory and disk issues, such as bad or missing disk drives, after the node is up and running.

In a dual-node setup, the GUI displays the status of only those disks assigned to a particular node:

• Node1 will list Cisco Disk 01-28

• Node2 will list Cisco Disk 29-56

On each COS node, eth0 and eth1 are bonded to a bond0 management interface. This differs from the UCS-C3160, where eth0 and eth3 are bonded to a bond0 management interface.

For more information, see Deploying COS, page 2-1.

Cisco UCS C3160 Rack Server

The Cisco UCS C3160 is a modular, high-density server for service providers, enterprises, and industry-specific environments. The C3160 combines highly scalable computing with high-capacity local storage. Designed for cloud-scale applications, the C3160 is simple to deploy and is well suited for use in unstructured data repositories, media streaming, and content distribution applications.

The C3160 is a 4RU server. When configured for COS, the C3160 includes the following:

• 2 x 400 GB solid-state drives (SSDs) in RAID1, typically located in slots 55 and 56, for operating system and COS installation

• 54 hard drives in JBOD mode for 216 TB (4 TB drives) or 324 TB (6 TB drives) total storage

• One rear SSD

• Two system I/O controllers providing a total of four 10 GbE ports

A pre-installation script is provided to properly configure the C3160 chassis for COS. You must run this script before installing COS on the C3160.

For more information, see Deploying COS, page 2-1.


Cisco CDE Family Support

The Cisco Content Delivery Engine (CDE) family of rack servers supports ingest, storage, distribution, delivery, and management functions in the context of systems for delivery of entertainment-grade video content to subscribers. Each CDE contributes one or more support functions as determined by the content delivery applications (CDAs) that run on it.

The Cisco CDE465 Rack Server is designed and tested specifically to work with COS and related applications. The CDE465 provides enhanced storage capacity relative to earlier CDE models, with current models offering either 216 or 324 TB total storage.

Upgrade and Downgrade Support

COS Release 3.12.1 adds support for upgrade from and downgrade to COS Release 3.8.4. This release also supports upgrade from and downgrade to the COS 3.10.1-b26 pre-release build.

Automated COS Node Configuration

Beginning with COS 3.5.1, you can automate node configuration by providing a file to cosinit, the COS initialization routine, that includes a cluster name and IP pool reference address for at least one service interface. COS initialization will then configure the node without further intervention through the GUI or the API. A single configuration file for all COS nodes (or node sets) can be stored on an HTTP server for download by cosinit.

Beginning with COS 3.8.1, you can specify the URL of a configuration file to be used at installation to automatically configure the node according to a predefined template. Configuration of the node then proceeds automatically using the settings provided in the configuration file. This eliminates the need to configure nodes individually via the GUI or the API. This feature saves time by allowing for fully automated PXE installations as well as reduced effort during manual installation. See the Cisco Cloud Object Storage Release 3.8.1 User Guide for details.

Beginning with COS 3.12.1, the COS Node Profile page of the V2PC GUI can be used to configure a profile template for automated configuration of COS Nodes. Using a configuration template means that the following files no longer have to be configured for each COS node:

• /arroyo/test/setupfile

• /arroyo/test/SubnetTable

• /arroyo/test/RemoteServers

• /etc/cassandra/conf/cassandra.yaml

• /etc/cosd.conf

• /opt/cisco/cos/config/cos.cql

Intel Preboot Execution Environment (PXE) Support

PXE can be used to download a network bootstrap program (NBP) to remotely install a COS client over a network.


Improved TCP Transmission

COS 3.12.1 includes optimizations to improve TCP transmit performance.

Small Object Support

For cloud DVR and similar applications, COS 3.8.1 introduces small object support to efficiently manage storage of many small files representing media segments.

In Cloud DVR (cDVR) applications, the use of segmented media recording has the potential to greatly reduce the amount of duplicate video data stored on disk. However, this potential is limited by the large number of individual files created by media segmentation. Because each of these files is much smaller than a single disk allocation unit, it cannot be stored efficiently on the disk. In addition, having a large number of small files risks using up all available system object IDs (OIDs) before the disk is full. The potential for lost storage efficiency is only compounded when mirroring or erasure coding is applied for data resiliency.

Beginning with Release 3.8.1, COS uses small object support to enable more efficient storage of multiple small files. This technique maps multiple small files to a larger virtual container file which, by virtue of its larger size, makes more efficient use of a disk allocation unit. In this context, the small files are called small objects and the container file is called a container object. Each small object can be up to 32 MB in size. Each container object can be up to 256 MB in size, and can hold up to 64K (65535) small objects.
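As a rough illustration of how these limits interact, the packing arithmetic can be sketched in Python. The 32 MB, 256 MB, and 64K limits come from the text above; the greedy first-fit placement and the helper name are purely illustrative and not the actual COS allocator:

```python
SMALL_OBJECT_MAX = 32 * 1024 * 1024        # 32 MB per small object (from text)
CONTAINER_OBJECT_MAX = 256 * 1024 * 1024   # 256 MB per container object
MAX_SMALL_OBJECTS = 65535                  # up to 64K small objects per container

def containers_needed(object_sizes):
    """Greedy first-fit estimate of how many container objects a set of
    small objects (sizes in bytes) would occupy. Illustrative only."""
    containers = []  # list of (used_bytes, object_count)
    for size in object_sizes:
        if size > SMALL_OBJECT_MAX:
            raise ValueError("object exceeds the small-object limit")
        for i, (used, count) in enumerate(containers):
            if used + size <= CONTAINER_OBJECT_MAX and count < MAX_SMALL_OBJECTS:
                containers[i] = (used + size, count + 1)
                break
        else:
            containers.append((size, 1))
    return len(containers)

# Twenty 16 MB media segments pack into two 256 MB container objects.
print(containers_needed([16 * 1024 * 1024] * 20))  # → 2
```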

COS small object support integrates with distributed erasure coding to maintain parity data and fault tolerance as new files are added to or deleted from the container file. While the individual small files remain differentiated in the object database, they are managed at the object storage level as a single large object that can be stored safely and efficiently. A background "garbage collection" process reclaims space in a container object as the small objects it holds are deleted.

Fanout Compaction

In COS Release 3.8.1, as individual copies of a fanout object were deleted, the space allocated to the deleted copies did not become available for reuse until the entire object (that is, all of its copies) was deleted. In solutions that do not guarantee deletion of the entire fanout object within a relatively short period of time, significant unusable disk space could accumulate, reducing overall storage capacity.

COS 3.12.1 introduces fanout compaction, a feature that reclaims the space allocated to deleted copies for use by other object contents before the entire fanout object has been deleted.

Note Enabling fanout compaction changes the way that data is represented on disk. For this reason, every node in a cluster must be running a COS release that supports the feature (currently only 3.12.1) before fanout compaction can be enabled for that cluster.

V2PC GUI

The V2PC GUI enables quick and easy configuration of the COS infrastructure, service domain objects, and services. The GUI also provides valuable monitoring functions, including graphical displays of storage, network bandwidth, session count, and alarms and alarm history for individual COS disks, nodes, services, and interfaces. In addition, the GUI displays system and service diagnostics as well as event logs and log analysis.


The GUI supports configuration of the COS node service interface, setting resiliency policy (erasure coding or mirroring) on a per-cluster basis, and removal of COS nodes from a cluster after the node is decommissioned manually, with improved management of node and cluster maintenance.

For information on using the V2PC GUI to manage COS, see System Monitoring, page 3-1 and Using the V2PC GUI, page A-2.

COS Node Telemetry Forwarding

COS 3.12.1 supports the ability to configure forwarding of log events and statistical information to an Elasticsearch instance or to a Cisco Zeus account. This allows for centralized log management and statistical analysis of the COS service. The current set of information that is forwarded contains:

• A subset of the events from /arroyo/log/http.log.<DATE> and /arroyo/log/cosd.log.<DATE>

• A subset of the statistics from /arroyo/log/protocoltiming.log.<DATE>

• Statistics from /proc/calypso/stats/*_stats

For additional details and instructions for implementing this feature, see Deploying COS, page 2-1.
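The guide does not specify the wire format used for forwarding. Purely as a hedged sketch of what shipping a log line to Elasticsearch involves, a line could be shaped into a bulk-API action/source pair like this; the index name and the "message" field are invented for illustration:

```python
import json

def to_bulk_entry(log_line, index="cos-telemetry"):
    """Shape one forwarded log line into an Elasticsearch bulk-API
    action/source pair. Index and field names are illustrative only;
    the actual format COS forwards is not documented in this guide."""
    action = {"index": {"_index": index}}
    source = {"message": log_line}
    return json.dumps(action) + "\n" + json.dumps(source) + "\n"

entry = to_bulk_entry("cosd: interface eth4 up")
```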

High Availability (HA)

V2PC has two classes of components for HA:

• Third-party components such as Consul, MongoDB, and Redis use their own proprietary clustering and redundancy schemes.

• Cisco components, such as the V2PC GUI and DocServer, use ZooKeeper for leader election.

In an HA environment, multiple VMs provide redundancy for the applications. HA requires three VMs because applications such as Consul, MongoDB, and Redis require at least three components to form a working cluster.

Many of these applications also require a majority in order to form a quorum. That is, a cluster of three components can recover from the failure of a single component, because there are still two components to form a majority. But if two components fail, the single remaining component is not a majority, and the cluster cannot recover until one of the failed components recovers.

Therefore we recommend a configuration of three V2PC Master VMs to ensure recovery in the event of multiple failures, and to support high performance, especially when sharing databases and other applications.
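The majority rule above reduces to simple arithmetic, which can be sketched as follows (a sketch for orientation, not COS or V2PC code):

```python
def quorum(n):
    """Smallest majority of an n-member cluster."""
    return n // 2 + 1

def tolerable_failures(n):
    """Number of members that can fail while a quorum still survives."""
    return n - quorum(n)

# Three Master VMs tolerate one failure; a five-member cluster would
# tolerate two, since three surviving members still form a majority.
print(tolerable_failures(3), tolerable_failures(5))  # → 1 2
```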

Swauth API

COS includes a basic authentication service that can be used when COS is not installed along with other OpenStack services, such as the Keystone Identity service. The API for the COS authentication service is derived from the OpenStack Swauth middleware component API.

The authentication service API provides the following functions for managing accounts, users, and service endpoints:

• Listing Accounts

• Retrieving Account Details

• Creating an Account


• Deleting an Account

• Creating or Updating a User

• Retrieving User Details

• Deleting a User

• Creating or Updating Account Service Endpoints

• Getting an Authentication Token

For details, see the Cisco Cloud Object Storage Release 3.12.1 API Guide.
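For orientation only, the token exchange that Swauth-derived services typically use (a GET to a v1.0-style auth endpoint) can be sketched as below. The endpoint path, account, and credential values are placeholders; consult the API Guide for the exact requests COS accepts:

```python
def auth_request_headers(account, user, key):
    """Headers for a v1.0-style token request (GET /auth/v1.0).
    The account/user/key values here are placeholders."""
    return {"X-Auth-User": f"{account}:{user}", "X-Auth-Key": key}

def parse_auth_response(headers):
    """A successful auth response carries the token and the storage URL
    that subsequent Swift requests must be directed to."""
    return headers["X-Auth-Token"], headers["X-Storage-Url"]

# Example response headers (invented values):
token, storage_url = parse_auth_response({
    "X-Auth-Token": "AUTH_tk_example",
    "X-Storage-Url": "http://cos.example.com/v1/AUTH_test",
})
```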

Swift Object Store API

The COS object storage API is based on the OpenStack Swift API. It is implemented as a set of Representational State Transfer (REST) web services. All account, container, and object operations can be performed with standard HTTP calls. The requests are directed to the host and URL described in the X-Storage-Url HTTP header, which is part of the response to a successful request for an authentication token.

The COS object storage API defines restrictions on HTTP requests. Table 1-2 lists these restrictions, which are borrowed from the Swift API.

Note The container and object names must be UTF-8 encoded and then URL-encoded before inclusion in the HTTP request line. All the length restrictions are enforced against the URL-encoded request line.
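A client can pre-check names against these limits before issuing a request. The sketch below mirrors the encode-then-measure rule from the note (the function name is illustrative; the limits are from Table 1-2):

```python
from urllib.parse import quote

MAX_CONTAINER_NAME = 256    # bytes, measured after encoding (Table 1-2)
MAX_OBJECT_NAME = 1024      # bytes, measured after encoding (Table 1-2)

def check_names(container, obj):
    """UTF-8 encode, then URL-encode, then enforce the length limits,
    matching how the restrictions apply to the encoded request line."""
    c = quote(container.encode("utf-8"))
    o = quote(obj.encode("utf-8"))
    if len(c) > MAX_CONTAINER_NAME:
        raise ValueError("container name too long after encoding")
    if len(o) > MAX_OBJECT_NAME:
        raise ValueError("object name too long after encoding")
    return c, o

# Non-ASCII names grow when encoded: "é" becomes %C3%A9 (6 bytes).
encoded = check_names("recordings", "show/épisode-01.ts")
```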

The COS object store API provides the following functions, some of which provide extended functionality beyond the standard Swift API defined by OpenStack:

• Listing Containers

• Listing Objects

• Creating a Container

• Deleting a Container

• Retrieving an Object

• Retrieving an Archive (DDN WOS) Object

• Creating or Updating an Object

• Creating Unique Object Copies with Variants

• Reserving an Archive (DDN WOS) Object Identifier

Table 1-2 COS API Restrictions

Constraint                               Value
Maximum # of HTTP Headers per request    90
Maximum length of all HTTP Headers       4096 bytes
Maximum length per HTTP request line     8192 bytes
Maximum length of container name         256 bytes
Maximum length of object name            1024 bytes


• Creating an Archive (DDN WOS) Object

• Deleting an Object

• Deleting an Archive (DDN WOS) Object

• Creating or Updating Container Metadata

• Retrieving Container Metadata

• Deleting Container Metadata

• Retrieving Object Metadata

• Discovering COS Cluster Features

For details, see the Cisco Cloud Object Storage Release 3.12.1 API Guide.

Fanout API

COS 3.12.1 includes support for a Fanout API to enable interactions with other Cisco applications in the Virtualized Video Processing (V2P) suite. A typical use case for such interactions is a cloud DVR (cDVR) workflow, which can involve several separate applications each dedicated to a specific part of the workflow such as ingest, recording, and storage.

Note The Fanout API is supported in production environments only for configurations of three or more nodes.

The Fanout API uses a single request to create, get, or delete multiple copies of an object (hence "fanout"). Each copy is not addressed as an individual object; instead, a copy is accessed by specifying the fanout object URL and including a zero-based index to the requested copy in the request header. This approach optimizes storage, compute, and disk utilization and reduces network overhead.

Along with Fanout API support, COS 3.12.1 supports basic authentication for password-protected access to Cisco Virtual Media Recorder (VMR). COS manages a single user name and credentials for use with VMR. This VMR “user” can be created, listed, and have its credentials modified using the COS Configuration API. For details, see the Cisco Cloud Object Storage Release 3.12.1 API Guide.
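As a reminder of standard HTTP Basic semantics, the credential header can be built as shown below. The user name and password here are placeholders; the actual VMR credentials are managed through the COS Configuration API as described above:

```python
import base64

def basic_auth_header(user, password):
    """Build the Authorization header for HTTP Basic access.
    The values are placeholders for the single VMR user COS manages."""
    token = base64.b64encode(f"{user}:{password}".encode("utf-8")).decode("ascii")
    return {"Authorization": "Basic " + token}

hdr = basic_auth_header("vmr", "secret")
```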

Object Store Metadata Resiliency

COS stores metadata for Swift and Swauth accounts, users, containers, and objects as documents in a Cassandra database instance. Cassandra is a distributed document store. In a typical multi-node Cassandra cluster, no single node persists (saves) a copy of the entire database to local disk. Instead, each Cassandra cluster node locally persists a subset of the database. To ensure resiliency of the data in case of node failure, Cassandra has configuration options to specify the number of document replicas to maintain on separate cluster nodes.

For metadata resiliency in COS, each COS cluster node participates in the Cassandra cluster, and each COS node locally persists a part of the Cassandra database. The database cluster is automatically configured to create document replicas to be resilient to a single node failure.

Caution There is a risk of data loss if a second node fails before full metadata resiliency is restored, or before full content resiliency is restored.


Object Store Data Resiliency

COS stores Swift object data to the local drives within the chassis. To maintain data resiliency in the event of a failed local hard drive, COS 3.12.1 enables local erasure coding (LEC) by default. LEC distributes redundant data across local hard drives (two parity blocks for 12 data blocks), enabling full recovery of lost data if any two drives in the set should fail.

Note COS 3.12.1 also supports mirroring of local hard drives as an option. However, LEC is enabled by default, and is the generally recommended choice.

If LEC is enabled and a local hard drive fails, the COS system immediately begins to regenerate the data lost due to the drive failure and place it on the surviving hard drives to regain the intended resiliency. Execution of this recovery process is scheduled with low priority, and recovery time depends on the availability of system resources, available storage capacity, and the amount of data lost.
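The default 12+2 scheme implies a fixed capacity trade-off, which can be sketched as simple arithmetic (a sketch only, not COS code):

```python
DATA_BLOCKS = 12     # per LEC stripe, from the default described above
PARITY_BLOCKS = 2    # tolerates any two simultaneous drive failures

def usable_fraction(data=DATA_BLOCKS, parity=PARITY_BLOCKS):
    """Fraction of raw capacity left for content under erasure coding."""
    return data / (data + parity)

def can_recover(failed_drives, parity=PARITY_BLOCKS):
    """Data is fully recoverable while failures do not exceed parity."""
    return failed_drives <= parity

print(round(usable_fraction(), 3))   # → 0.857, i.e. ~14% parity overhead
```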

COS cluster data resiliency is provided by object replication, or mirroring. The V2PC GUI allows for configuration of both local and remote mirror copies.

For data resiliency in the event of a COS node failure, the COS cluster can be configured to maintain copies of object data on one or more additional COS nodes. Recommended practice is to configure the COS cluster to maintain at least two copies of object data for resiliency.

When configured for multiple object copies, the COS cluster automatically attempts to create the configured object copy count within the cluster in the event of a COS node failure, without manual intervention. As soon as the COS cluster detects a node failure, the cluster begins to create additional copies of objects stored on the failed node. Upon restoring the failed node, the COS cluster purges unnecessary copies to recover storage space.

Note When configuring local mirroring for resiliency, we recommend using no more than one local mirror copy.

As an alternative to mirroring for data resiliency across nodes in a cluster, COS 3.12.1 supports distributed erasure coding (DEC). DEC allows for recovery of corrupted data in the event of loss of up to two nodes in a cluster. If a node fails, COS immediately begins to regenerate the data from the lost node and place the missing data blocks on the surviving nodes. Execution and duration of this recovery process are scheduled with low priority, and recovery time depends on the availability of system resources, network availability, available storage capacity, and the amount of data lost.

COS 3.12.1 also allows for configuration of mixed resiliency policies (local erasure coding with remote mirroring) via the GUI. Additionally, COS notifies the operator if the system gets close to the maximum loss of resiliency as defined by the SLA, and alarms if resiliency is actually lost.

For additional details, see Configuring Resiliency and Management Interface Bonding, page B-1.

Management Interface Port Bonding

Extending resiliency to the network management interface, COS 3.12.1 also supports defining two node ports as a primary-backup pair for management interface bonding. For the C3160, the designated ports are eth0 and eth3, and for the CDE465, the designated ports are eth0 and eth1.

For additional details, see Configuring Resiliency and Management Interface Bonding, page B-1.


Service Load Balancing

The COS cluster is composed of COS nodes, each having limited CPU, network, and disk resources. To ensure best performance and quality of service, the workloads of the Swift and Swauth operations must be distributed effectively among the nodes. The recommended solution for service load balancing is to use a DNS system to round-robin clients to different physical IP addresses hosted by the various nodes. While not perfect, such a DNS round-robin solution should provide sufficient distribution of workloads.

In addition to using DNS to distribute workload, the COS Swift implementation supports intelligently redirecting a Swift client to an optimal location for Swift object create and read operations using standard HTTP redirect semantics. Given a Swift client that supports HTTP redirect semantics, the client can provide an X-Follow-Redirect: true HTTP header in the HTTP PUT and GET requests for Swift object create and read operations. In the event that a more optimal location is used for the operation, the COS node will respond with an HTTP 307 (temporary redirect) status, indicating to the client where the operation should be requested.
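From the client's side, the exchange described above can be sketched as follows. The X-Follow-Redirect header and the 307 status come from the text; resolving the new target from the Location header is standard HTTP redirect semantics, and the helper names are illustrative:

```python
def redirect_opt_in_headers():
    """Ask the COS node to redirect us to the optimal location."""
    return {"X-Follow-Redirect": "true"}

def next_target(status, response_headers, original_url):
    """On a 307, reissue the operation at the node named in Location
    (standard HTTP semantics); otherwise the original URL served it."""
    if status == 307:
        return response_headers["Location"]
    return original_url

url = next_target(
    307,
    {"Location": "http://cos-node2.example.com/v1/AUTH_test/cont/obj"},
    "http://cos.example.com/v1/AUTH_test/cont/obj",
)
```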

For Swift object read operations, COS provides two levels of service and transfer profile: best-effort and committed rate. These levels of service contribute to service load balancing. COS provides extensions to Swift object read that allow the client to request a guaranteed and committed transfer rate as the data is sent from the COS node.

A COS node can reject a read request if the client has requested a committed rate transfer, but the COS node does not have sufficient resources available to satisfy the client request. If a client does not request a committed rate transfer, the COS node attempts to satisfy the request with the system resources available and at a priority lower than that of any in-progress committed rate requests. For more information, see the Cisco Cloud Object Storage Release 3.12.1 API Guide.
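The acceptance rule described above amounts to a simple admission check. The sketch below is illustrative only: the units, names, and the zero-means-best-effort convention are assumptions, not COS internals:

```python
def admit(requested_kbps, committed_in_use_kbps, capacity_kbps):
    """Sketch of the admission decision described above: a committed-rate
    read is accepted only if the node can still guarantee the rate, while
    best-effort requests (requested_kbps=0) are always attempted, at a
    priority below in-progress committed-rate transfers."""
    if requested_kbps == 0:          # best-effort: never rejected outright
        return True
    return committed_in_use_kbps + requested_kbps <= capacity_kbps

print(admit(5000, 96000, 100000))  # → False: would oversubscribe the node
```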

Beginning with COS 3.5.1, a remote smoothing feature facilitates load balancing by shifting content to a new node after it has been added to the cluster.

CLI Utilities

COS provides the following command line utilities for use on Linux:

• cos-swift – provides command-line access to the Swift API.

• cos-swauth – provides command-line access to the Swauth API.

Note These utilities do not work between two COS nodes or between a COS node and a local node, as the HTTP request will be refused.

For more information on the COS command line utilities, see COS Command Line Utilities, page C-1.

COS Cluster Support

Each COS application instance can have one or more clusters created to service that application instance. Each cluster can have its own asset redundancy policy, shared by all COS nodes that are members of that cluster.

If a cluster is disabled, all member COS nodes will have their interfaces removed from the DNS. Likewise, when a cluster is enabled, all member node interfaces will be added back to the DNS.


COS AIC Client Management

The COS AIC Client process is monitored by the monit process that runs on each COS node. The AIC Client process creates a PID file that is added to the monit script so that it can be monitored and restarted automatically if the monit process discovers the AIC Client process not running.

Command line scripts are also available to stop and restart the AIC Client process manually, bypassing the automatic restart process.

Node Decommissioning Paused for Maintenance Mode

If a COS node is in the process of being decommissioned when it or any other node in its cluster is placed in Maintenance mode, the decommissioning process is paused to preserve the intended cluster resiliency.

Prerequisites

The COS management and configuration operations require specific hardware components for deployment. For more information on the hardware requirements, see the Cisco Virtualized Video Processing Controller User Guide for your V2PC release.

The COS system is most effective in engineered networks with separate routes for management and data flow. When designing and provisioning networks, ensure sufficient capacity for the high data-network throughput of the expected COS application, and ensure that the high volume of data traffic generated by COS does not interfere with the management network segment or other important network segments.

Restrictions and Limitations

• COS does not support IPv6.

• The OpenStack Swift and Swauth APIs continue to evolve, and COS does not currently implement all Swift or Swauth API functions. For a list of currently supported functions, see Swift Object Store API and Swauth API in this chapter.

• Secure Sockets Layer (SSL) or other means for providing session security and encryption are not supported with the Swift and Swauth APIs.

• COS 3.12.1 does not support downgrade to any earlier COS release if fanout compaction has been enabled on the node to be downgraded. See Deploying COS, page 2-1 for details.

• See the Release Notes for Cisco Cloud Object Storage 3.12.1 for open caveats and other known issues related to this release.


C H A P T E R 2

Deploying COS

This chapter describes the procedures for installing and configuring COS Release 3.12.1 software. It contains the following sections:

• Hardware Options, page 2-1

• COS Network Architecture, page 2-2

• Configuring End-to-End Quality of Service, page 2-3

• Installing V2PC, page 2-4

• Installing and Provisioning the Cisco-COS Application on V2PC, page 2-4

• Configuring the COS Application, page 2-6

• Installing COS, page 2-7

• Initial COS Node Configuration, page 2-11

• Registering the COS Node to V2PC, page 2-13

• Creating User Accounts and Verifying the COS Node, page 2-13

• Upgrading the Cisco-COS Application on V2PC, page 2-15

• Automated COS Node Configuration, page 2-16

• Automated Configuration at Installation (Optional), page 2-17

• Enabling Fanout Compaction (Recommended), page 2-18

• Configuring Telemetry Forwarding, page 2-19

Hardware Options

The COS 3.12.x release train is designed to be deployed on the following hardware:

• Cisco CDE6032 Dual Node Storage Server with 56 x 10 TB hard drives (560 TB total storage), giving 28 drives (280 TB) to each server node

• Cisco UCSC S3260-4U5 Dual Node Storage Server with 56 x 10 TB hard drives (560 TB total storage), giving 28 drives (280 TB) to each server node

• Cisco UCSC S3260-4U4 Single Node Storage Server with 56 x 6 TB hard drives (336 TB total storage), giving all 56 drives to one server node

• Cisco UCS S3260-4U3 Dual Node Storage Server with 56 x 6 TB hard drives (336 TB total storage, 28 hard drives or 168 TB per server node)

• Cisco UCS C3160-4U2 Rack Server with 54 x 6 TB hard drives (324 TB total storage)

• Cisco UCS C3160-4U1 Rack Server with 54 x 4 TB hard drives (216 TB total storage)

• Cisco Content Delivery Engine CDE465-4R4 with 36 x 6 TB hard drives (216 TB total storage)
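
The per-node figures above are simple drives-per-chassis arithmetic; a quick sketch for the dual-node 10 TB models:

```shell
# Raw-capacity arithmetic for a dual-node chassis with 56 x 10 TB drives.
drives=56; size_tb=10; nodes=2

total_tb=$((drives * size_tb))        # raw TB per chassis
per_node_drives=$((drives / nodes))   # drives per server node
per_node_tb=$((total_tb / nodes))     # raw TB per server node

echo "total=${total_tb}TB per_node=${per_node_drives}x${size_tb}TB=${per_node_tb}TB"
```

This prints total=560TB per_node=28x10TB=280TB, matching the CDE6032 and S3260-4U5 figures listed above.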

Note COS Release 3.12.1 has been tested on the CDE6032 and UCSC S3260 Dual Node Storage Servers. Future COS releases are expected to support all hardware models listed above. Contact Cisco for updated information.

For information about installing the hardware, see the following:

• Cisco CDE6032 Storage Server Installation and Service Guide

• Cisco UCS S3260 Storage Server Installation and Service Guide

• Cisco UCS C3160 Rack Server Installation and Service Guide

• Cisco Content Delivery Engine 465 Hardware Installation Guide

Before you begin, be sure that you have the following:

• Server hardware installed per manufacturer instructions

• A Cisco Integrated Management Controller (CIMC) connection to the server

• ISO image of the COS Software (for systems without COS 3.12.1 pre-installed)

• COS post-installation script (for systems with COS 3.12.1 pre-installed)

Note You can convert a UCS C3160 to a UCS S3260 in the field. For details, see Migrating a Cisco UCS C3160 Server to a Cisco UCS S3260 Server in the Cisco UCS S3260 Storage Server Installation and Service Guide.

COS Network Architecture

Figure 2-1 provides a view of the network architecture of a COS cluster based on the CDE6032, UCS S3260, and UCS C3160 platforms.

Figure 2-1 S3x60 Platform COS Cluster Architecture

Configuring End-to-End Quality of Service

For optimum performance on the data network, you must configure the end-to-end data path between COS clusters, including all intervening switches and routers, for Quality of Service (QoS) with Priority Flow Control no-drop service. This ensures that no data packets are dropped during periods of heavy traffic congestion.

Note Be sure that the Class of Service value configured on the data ports is properly matched on the switch and mapped to a no-drop QoS group for Priority Flow Control.

About Priority Flow Control

Priority flow control (PFC; IEEE 802.1Qbb), also referred to as Class-Based Flow Control (CBFC) or Per-Priority Pause (PPP), is a mechanism that prevents frame loss due to congestion. PFC functions on a per class-of-service (CoS) basis.

When a buffer threshold is exceeded due to congestion, PFC sends a pause frame that indicates which CoS value needs to be paused. A PFC pause frame contains a 2-octet timer value for each CoS that indicates the length of time the traffic needs to be paused. The unit of time for the timer is specified in pause quanta. One quantum is the time required to transmit 512 bits at the speed of the port. The timer range is from 0 to 65535. A pause frame with a pause quanta of 0 indicates a resume frame to restart the paused traffic.

Note Only certain classes of service of traffic can be flow controlled, while other classes are allowed to operate normally.

PFC asks the peer to stop sending frames of a particular CoS value by sending a pause frame to a well-known multicast address. This pause frame is a one-hop frame that is not forwarded when received by the peer. When the congestion is mitigated, PFC can request the peer to restart transmitting.
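
The quanta arithmetic above can be made concrete: multiply the timer value by the quantum duration for the port speed. A small sketch using example values (a 10 Gb/s port and the maximum timer value):

```shell
# One pause quantum = time to transmit 512 bits at the port speed.
# Example: a pause frame carrying the maximum timer (65535) on a 10 Gb/s port.
awk -v speed_gbps=10 -v timer=65535 'BEGIN {
    quanta_ns = 512 / speed_gbps          # ns per quantum at the given speed
    pause_us  = timer * quanta_ns / 1000  # total requested pause, microseconds
    printf "one quantum = %.1f ns, max pause = %.1f us\n", quanta_ns, pause_us
}'
```

At 10 Gb/s, one quantum is 51.2 ns, so even a maximum-length pause request lasts only a few milliseconds before traffic resumes or the pause must be renewed.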

Installing V2PC

For V2PC installation instructions, see the Cisco Virtualized Video Processing Controller Deployment Guide for your V2PC release.

Note COS 3.12.1 has been tested for compatibility with V2PC Release Candidate 3.2.2 build 10744. Later releases of COS are expected to be compatible with later versions of V2PC. Contact Cisco for the latest compatibility information.

Installing and Provisioning the Cisco-COS Application on V2PC

This procedure involves the following steps:

• Confirm Prerequisites, page 2-4

• Create a Provider, page 2-5

• Create a Zone and Worker, page 2-5

• Download and Import the COS Application, page 2-5

• Launch the COS Application on the V2PC Master, page 2-6

Confirm Prerequisites

Before installing the Cisco-COS application, confirm that vCenter 6.0 and V2PC are both installed.

Create a Provider

Step 1 Open a web browser and access the V2PC GUI login page at https://<v2pc-ip>:8443/.

The default login credentials for V2PC GUI are:

• User name: admin

• Password: default

Step 2 From the navigation panel, choose Application Deployment Manager > Resources > Providers.

Step 3 Click the + (Add) icon at the right top corner to open the New Provider dialog.

Step 4 Enter or select the following:

• Provider name

• vCenter login information

• Datastore information

• Image information

Step 5 Click Save to save your entries and return to the Providers page.

Step 6 Locate the new provider in the Providers page, click Edit, and create the management Networks.

Create a Zone and Worker

Step 1 Click the + (Add) button at the top right corner to create a zone.

Step 2 Open the zone page and click the + (Add) button to create a new Worker.

Step 3 Enter the following information for the new Worker:

• Worker name – follow local conventions

• Image flavor – select default-img-flavors

• Image flavor name – select medium

• Admin State – Select Inservice

• Image Name – enter v2p-base-image

• Image Tag – enter cisco-centos-7.0

• Worker Interfaces – enter the same IP address for data-in, data-out, and mgmt

Download and Import the COS Application

Step 1 Download the latest COS application from the COS software download page on www.cisco.com.

Step 2 Copy the cisco-cos application to the V2PC repository node, as shown in the following example:

scp -i ~/Documents/Temp/v2pcssh.key cisco-cos-1.0.429.tgz [email protected]

Launch the COS Application on the V2PC Master

After importing the COS application to the V2PC master node, create the cisco-cos application through the V2PC GUI as follows:

Step 1 From the V2PC GUI navigation panel, choose Application Deployment Manager > Create New Application.

Step 2 Enter cisco-cos as the Package Name, and then click Save.

Step 3 From the V2PC GUI navigation panel, choose Application Deployment Manager > Deployed Applications, and drag region-0 to this page.

Step 4 Drag the created cisco-cos application to the region-0 area.

Step 5 Click the + (Add) icon to create a new application instance, and then enter the following information:

• Instance Name – follow local conventions

• Select VM-Provider

• Select zone-1

• Select v2p-base-image

Step 6 Click Next to continue to the MASTER ROLE Details page, and then enter the following information:

• Min Node – 0

• Max Node – 0

• Image Flavor Info – select default-img-flavor and medium

• Configuration – keep the value loaded by default

Step 7 Click Save to save your entries and return to the Deployed Applications page.

Step 8 Change the Admin State of the new instance to Enable, and then click Update to refresh the settings.

Configuring the COS Application

This procedure involves the following steps:

• Create an IP Pool, page 2-6

• Create a COS Cluster, page 2-7

• Create COS Node Profiles, page 2-7

Create an IP Pool

Step 1 Open a web browser and access the V2PC GUI at https://<v2pc-ip>:8443/.

Step 2 Log in to the V2PC GUI using the following credentials:

• User name: admin

• Password: default

Step 3 From the V2PC GUI navigation panel, choose Cisco Cloud Object Store (COS) > COS IP Pools.

Step 4 Click the + (Add) icon at top right to create an IP pool. This IP pool is used for the COS C/F interface.

Create a COS Cluster

Step 1 From the V2PC GUI navigation panel, choose Cisco Cloud Object Store (COS) > COS Clusters.

Step 2 Click the + (Add) icon at top right to create a COS cluster.

Step 3 Set the Authentication FQDN to be the same as the DNS server configuration.

Step 4 Set the Cluster State to Enabled.

Step 5 Set Node Policy and Cluster Policy to either configure or disable, per your deployment.

Create COS Node Profiles

Step 1 From the V2PC GUI navigation panel, choose Cisco Cloud Object Store (COS) > COS Node Profiles.

Step 2 Click the + (Add) icon at top right to create a COS node profile.

Step 3 Set Device Model to specify the COS hardware type used.

Step 4 Select the cluster created in Create a COS Cluster, page 2-7.

Step 5 Set Cluster Size to the number of COS nodes in a cluster defined for your deployment.

Step 6 Assign an IP pool for Data Interfaces.

Step 7 Check the Profile URL field to confirm that the node profile is generated.

Installing COS

This section provides instructions for installing COS 3.12.1 software. New installations involve installing the full COS 3.12.1 ISO image, and can be performed either via CIMC Virtual Media or using a DVD drive. Installation on CDE6032 storage servers with COS pre-installed is simpler, and only requires running a post-installation script.

The configuration procedures required after installation are the same for either type of installation, and are described in Initial COS Node Configuration, page 2-11.

Note COS Release 3.5.2 and later support remote network installation of the COS client using the Intel Preboot Execution Environment (PXE) in combination with the Red Hat Enterprise Linux network installation feature over NFS, FTP, or HTTP. For details, see PXE Network Installation, page D-1.

Procedure for New Installations

Complete the following steps for systems without COS software pre-loaded:

Step 1 Download the full-image ISO file from the software download area of www.cisco.com to your computer.

Note • If you are installing from a DVD drive, burn the ISO image to a DVD and connect the DVD drive to the USB port of the KVM console connector or dongle.

• If you are installing pre-release software, be sure to download the correct (latest) build. Contact your Cisco representative for assistance, if needed.

Step 2 Extract the preinst_setup_UCSC-C3260.sh script from the ISO and save it on a server that can reach the COS system CIMC interface IP address you plan to use.

From a Linux server:

mount -o loop <COS full-image iso> /mnt/cdrom

Step 3 Connect a monitor to the system console port, and then apply power to the COS system.

Step 4 When the Cisco logo appears onscreen, press F8 to open the Cisco IMC Configuration Utility menu.

Note You may have to enter an administrator CIMC password to access the CIMC configuration screen.

Step 5 Enter the CIMC IP address, mask, and gateway, and then press F10 to save these settings.

Step 6 Press F5 to refresh the network settings.

Step 7 Confirm that the CIMC IP address is reachable from the server on which you plan to run the pre-installation script.

Note Do not press ESC to continue.

Step 8 Prepare the required Baseboard Management Controller (BMC) IP addresses, as well as the COS management and data VLAN IDs.

Step 9 Execute preinst_setup_UCSC-C3260.sh on the Linux server as shown in the following example:

# ./preinst_setup_UCSC-C3260.sh
CIMC IP: 172.22.125.209
Username: admin
Password: <enter your admin password here>
BMC IP(s): 172.22.125.183,172.22.125.184
SIOC IP(s): 172.22.125.181,172.22.125.182
Mgmt VLAN ID: 96
Data VLAN ID: 17
Admin speed ('1x40', '4x10'): 4x10
Class of Service [0]:

Confirm that the script runs without error or warning messages, and ends with the message Logging out..., indicating that the script has executed successfully.

Step 10 Are you installing from CIMC Virtual Media or a DVD drive?

• If CIMC Virtual Media, continue with Installation from CIMC Virtual Media, page 2-9.

• If a DVD drive, continue with Installation from a DVD Drive, page 2-9.

Installation from CIMC Virtual Media

For new COS installations using CIMC Virtual Media, continue from Procedure for New Installations, page 2-8 as follows:

Step 1 Open a web browser and enter the CIMC IP address http://<CIMC-IP>/ to access the CIMC login page.

Step 2 Log on to CIMC using admin login credentials.

Step 3 Click Toggle Navigation at top left of the page, then navigate to Compute > Server 1 > Remote Management > Virtual Media.

Step 4 Confirm that Virtual Media is enabled (or if not, enable it) and click Save Changes.

Step 5 Click Launch KVM, select Server 1, and then click Launch.

Note You may have to click Connect in a pop-up window after the KVM window launches.

Step 6 Select Virtual Media > Activate Virtual Devices > Accept:

Step 7 Select Map CD/DVD, browse to the full COS 3.12.1 ISO image downloaded earlier, and click Map Device.

Step 8 Cycle power to the COS system, and when the Cisco logo appears onscreen, press F6 to boot to the blue Boot menu.

Step 9 Select the Cisco vKVM-mapped vDVD… boot option to launch hands-free CentOS installation.

Step 10 After a message appears indicating that CentOS installation is complete, select Virtual Media, deselect the <iso name> Map CD/DVD option, and press Enter to reboot the system.

Step 11 Confirm that the system reboots and displays the localhost login: prompt.

Step 12 Repeat steps 2-11 of this procedure, replacing Server 1 with Server 2 to install Server 2.

Step 13 Continue to Initial COS Node Configuration, page 2-11.

Installation from a DVD Drive

For new COS installations using a DVD drive, continue from Procedure for New Installations, page 2-8 as follows:

Step 1 Confirm that the DVD drive is connected to the USB port of the KVM console connector or dongle, and that the COS 3.12.1 ISO DVD burned previously is inserted into the drive.

Step 2 Cycle power to the COS system, and when the Cisco logo appears onscreen, press F6 to boot to the blue Boot menu.

Step 3 Select the option associated with the DVD drive model used for Server 1 installation. This launches hands-free CentOS installation.

Step 4 After a message appears indicating that CentOS installation is complete, detach the USB DVD drive, and then press Enter to reboot the system.

Step 5 Confirm that the system reboots and displays the localhost login: prompt.

Step 6 Repeat steps 1-5 of this procedure, replacing Server 1 with Server 2 for Server 2 installation.

Step 7 Continue to Initial COS Node Configuration, page 2-11.

Procedure for Pre-Loaded Installations

Complete the following steps for Cisco CDE6032 or other systems with COS 3.12.1 pre-loaded:

Step 1 Download the postinst_setup_UCSC-C3260.sh file from the software download area of www.cisco.com to a Linux server that can reach the COS system CIMC interface IP you plan to use.

Note If installing pre-release software, be sure to download the correct build for your deployment. Contact your Cisco representative for assistance, if needed.

Step 2 Connect a monitor to the system console port, and then apply power to the COS system.

Step 3 When the Cisco logo appears onscreen, press F8 to open the Cisco IMC Configuration Utility menu.

Note You may have to enter an administrator CIMC password to access the CIMC configuration screen.

Step 4 Enter the CIMC IP address, mask, and gateway, and then press F10 to save these settings.

Step 5 Press F5 to refresh the network settings.

Step 6 Confirm that the CIMC IP address is reachable from the server on which you plan to run the pre-installation script.

Note Do not press ESC to continue.

Step 7 Prepare the required BMC and CMC IP addresses, as well as the COS management and data VLAN IDs.

Step 8 Execute preinst_setup_UCSC-C3260.sh on the Linux server as shown in the following example:

# ./preinst_setup_UCSC-C3260.sh
CIMC IP: 172.22.125.209
Username: admin
Password: <enter your admin password here>
BMC IP(s): 172.22.125.183,172.22.125.184
SIOC IP(s): 172.22.125.181,172.22.125.182
Mgmt VLAN ID: 96
Data VLAN ID: 17
Admin speed ('1x40', '4x10'): 4x10
Class of Service [0]:

Confirm that the script runs without error or warning messages, and ends with the message Logging out..., indicating that the script has executed successfully.

Step 9 On the KVM console window, press ESC key to continue.

Step 10 Confirm that the system reboots and displays the localhost login: prompt.

Step 11 Continue with Initial COS Node Configuration, page 2-11.

Changing COS Node Parameters after Installation

If necessary after running cosinit, you can use the cos-reinit command at the node CLI prompt to change the management IP address or hostname of the node.

To run the cos-reinit script:

Step 1 Log in to the V2PC GUI as described in Accessing the V2PC GUI, page A-2.

Step 2 From the V2PC GUI navigation panel, choose Cisco Cloud Object Store (COS) > COS IP Pools and confirm that the IP Pool configuration has enough IP addresses. These addresses are assigned to the COS node after reinitialization.

Step 3 From the V2PC GUI navigation panel, choose Cisco Cloud Object Store (COS) > COS Nodes, locate the COS node to be reconfigured, and set it to Maintenance mode.

Step 4 Run the cos-reinit script as described in Command Usage, page 2-11.

Step 5 After the script runs, return to the COS Nodes page of the V2PC GUI and delete the old node.

Step 6 Reboot or restart the related services, and confirm that the new configuration takes effect.

Command Usage

/opt/cisco/cos/config/cos-reinit [-h <hostname>] [ -i <mgmtIP> [-s <subnetMask>] [-b <broadcastIP>] [-g <gatewayIP>] ] [ -f <profile> -n <dnsIP> ]

BASIC Re-Configuration Options:
 -h <hostname>     Change COS Node Hostname (kept same if skipped)
 -i <mgmtIP>       Change Management IP Address (kept same if skipped)

Options below valid only with -i <mgmtIP>:
 -s <subnetMask>   Change subnet mask (kept same if skipped)
 -b <broadcastIP>  Broadcast IP Address (calculated if skipped)
 -g <gatewayIP>    Gateway IP Address (calculated if skipped)

ADVANCED Options (unchanged if skipped):
 -f <profile>      URL for initialization profile
 -n <dnsIP>        DNS IP Address (to resolve initialization profile URL)

Example

[root@c3260-10TB-106 ~]# /opt/cisco/cos/config/cos-reinit -h ra-c3260 -i 20.0.118.75

Initial COS Node Configuration

Complete the following steps to configure COS following successful installation:

Step 1 Install and provision the Cisco-COS application as described in Installing and Provisioning the Cisco-COS Application on V2PC, page 2-4.

Step 2 Configure the Cisco-COS application and register the COS node to the V2PC management network as described in Configuring the COS Application, page 2-6.

Step 3 Log on to COS as user root (password rootroot). The system prompts you to execute COS initialization (cosinit), as shown in the following example.

CentOS release 6.4 (Final)
Kernel 2.6.32-3.11.106_cos0.13 on an x86_64

localhost.localdomain login: root
Password:
Executing cosinit for platform configurations only

ATTENTION!!!
cosinit script should be run only to configure the device after an image installation.
This script modifies the network and other critical configurations based on the deployment type. Improper use of this script may result in mis-configuring the device or making it inaccessible.
If a new image is installed on this server, a reboot is required before running cosinit.
If a reboot is already performed, please continue. Otherwise, please exit and execute cosinit after rebooting the server

Do you want to continue ? (yes/no) [y]:

Step 4 At the “Do you want to continue?” prompt, enter yes or no according to how you want to proceed:

• To use cosinit to install and provision the cisco-cos application and register the COS node to the V2PC management network, enter yes, and then complete the remainder of this procedure.

• To use cosinit1step to install and provision the cisco-cos application and register the COS node to the V2PC management network, enter no, complete the remainder of this procedure, and then continue with Using cosinit1step (Optional), page 2-12.

Using cosinit1step (Optional)

If you answered no at the end of step 3 of Initial COS Node Configuration, page 2-11, use the cosinit1step command to configure the COS node management IP address, hostname, and gateway, and to register COS to the V2PC management network, as follows:

Command Usage

/opt/cisco/cos/config/cosinit1step -i <mgmtIP> -s <subnetMask> [-h <hostname>] [-b <broadcastIP>] [-g <gatewayIP>] [-n <dnsIP>] [-f <inputFile>] [-p]

Network Configuration Options:
 -i <mgmtIP>       Management IP Address
 -s <subnetMask>   Subnet Mask
 -h <hostname>     COS Node Hostname (kept same if skipped)
 -b <broadcastIP>  Broadcast IP Address (calculated if skipped)
 -g <gatewayIP>    Gateway IP Address (calculated if skipped)

Node Initialization Options:
 -n <dnsIP>        DNS IP Address (to resolve initialization profile URL)
 -f <file>         Local path or HTTP URL for initialization profile

Misc. Options:
 -p                Preserve DB initialized flag (for upgrades)

Example

/opt/cisco/cos/config/cosinit1step -i 20.0.119.77 -s 255.255.252.0 -h c3260 -g 20.0.116.1 -n 20.0.119.85 -f https://v2pc-ui.v2pc.cisco.com:8443/sm/v2/cosnodeprofiles/c3260-config

Registering the COS Node to V2PC

Step 1 Run cosinit on the COS node with your DNS server and the generated COS profile URL, as shown in the following example:

/opt/cisco/cos/config/cosinit -skipnw -nameserver 20.0.119.85 -input https://v2pc-ui.v2pc.cisco.com:8443/sm/v2/cosnodeprofiles/c3260-config

Step 2 From the V2PC GUI navigation panel, choose Cisco Cloud Object Store > COS Nodes and confirm that the COS node registered successfully.

Step 3 From the V2PC GUI navigation panel, choose Cisco-COS Application > COS Service Status and confirm the status of the COS cluster and the storage status, disk status, interface status, and services status of the individual COS nodes in the cluster.

Step 4 Verify that cassandra is running:

[cos-node@ root] nodetool status
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address      Load       Tokens  Owns (effective)  Host ID                               Rack
UN  10.168.5.2   117.35 KB  256     100.0%            bf862009-40eb-474a-a86b-1700c724755a  rack1
UN  10.168.5.18  768.4 KB   256     100.0%            39e62fbf-d0fb-4a92-9eb9-869b488ed5af  rack1
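
When scripting this check, the same nodetool status output can be parsed to confirm that every node reports UN (Up/Normal). A minimal sketch, shown here against a saved sample of the output (pipe live nodetool status output through the same filter):

```shell
# Sample node-status lines from `nodetool status` (first field = Status+State).
status='UN  10.168.5.2   117.35 KB  256  100.0%  bf862009-40eb-474a-a86b-1700c724755a  rack1
UN  10.168.5.18  768.4 KB  256  100.0%  39e62fbf-d0fb-4a92-9eb9-869b488ed5af  rack1'

# Count any node line whose Status+State is not UN (Up/Normal).
down=$(printf '%s\n' "$status" | awk '$1 ~ /^[UD][NLJM]$/ && $1 != "UN"' | wc -l)
[ "$down" -eq 0 ] && echo "all nodes Up/Normal"
```

Any node reporting DN (Down/Normal) or a transitional state would be counted in down, so the final message prints only when the whole ring is healthy.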

Step 5 Verify that cosd is functional:

[cos-node@ root] curl -v http://fqdn/info

{"cluster":{"config_ver":2,"fqdn":"cos-utah50.lindon.lab.cisco.com","name":"local","enable_wos":true},"swauth":{"reseller_prefix":"AUTH_","path_prefix":"auth/","max_key_len":256,"max_user_len":256,"token_life":86400,"max_token_life":86400,"max_account_len":256},"swift":{"version":"2.2.0","max_account_len":256,"max_container_len":256,"max_object_len":1024,"max_container_list":10000,"max_object_list":10000},"log":{"default":"notice"},"rio":{"path_prefix":"rio/"}}

Creating User Accounts and Verifying the COS Node

Use COS Swift and Swauth API calls as shown below to create an account, user, token, and container, and then confirm that objects can be written to and read from the COS node.

Note For additional API details, see the Cisco Cloud Object Storage Release 3.12.1 API Guide.

Step 1 Create an account as follows:

curl -v -X PUT -H "X-Auth-Admin-User: .super_admin" -H "X-Auth-Admin-Key: rootroot" -H "X-Account-Suffix: 12345" http://authFQDN/auth/v2/test

Step 2 Create a user as follows:

curl -v -X PUT -H "X-Auth-Admin-User: .super_admin" -H "X-Auth-Admin-Key: rootroot" -H "X-Auth-User-Key: rootroot" -H "X-Auth-User-Reseller-Admin: true" http://authFQDN/auth/v2/test/tester

Step 3 Get a token as follows:

curl -v -H "x-auth-user: test:tester" -H "x-auth-key: rootroot" http://authFQDN/v1.0

Save the returned token value in the MyToken variable.
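
One way to save the token is to parse the X-Auth-Token response header from curl -i output. The sketch below uses a sample response with an illustrative token value in place of a live request:

```shell
# Sample of the response headers the auth request returns (illustrative values;
# a live `curl -i` against http://authFQDN/v1.0 returns the real token).
response='HTTP/1.1 200 OK
X-Storage-Url: http://storageFQDN/v1/AUTH_12345
X-Auth-Token: AUTH_tk1234567890abcdef'

# Capture the X-Auth-Token header value into MyToken for the later requests.
MyToken=$(printf '%s\n' "$response" | awk -F': ' 'tolower($1) == "x-auth-token" {print $2}' | tr -d '\r')
echo "$MyToken"    # AUTH_tk1234567890abcdef
```

The tr -d '\r' strips the carriage return that HTTP headers carry, so the token is clean when substituted as $MyToken in the container and object requests below.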

Step 4 Create a container as follows:

curl -v -X PUT -H "x-auth-token: $MyToken" http://storageFQDN/v1/AUTH_12345/container1

Step 5 Verify that the objects are able to write to and read from the COS node as follows:

Write Syntax

curl -v -X PUT -H "x-auth-token: $MyToken" http://storageFQDN/v1/AUTH_12345/container1/object1 -T test.ts

Read Syntax

curl -v -X GET -H "x-auth-token: $MyToken" http://storageFQDN/v1/AUTH_12345/container1/object1

Verifying Fanout API

If the deployment uses fanout objects, use curl commands to confirm fanout object reads and writes, as shown in the following examples:

Example: Write Object via Fanout API

curl -v --basic -u ".riouser:rootroot" -X PUT -H "X-fanout-Copy-Count: 100" http://192.169.219.37/rio/bucket1/abc-123-123/obj1.txt -T test.txt

Example: Read Object via Fanout API

curl -v -X GET -H "X-fanout-Copy-Index: 10" http://192.169.219.37/rio/bucket1/abc-123-123/obj1.txt

Note For additional details on fanout objects, see Enabling Fanout Compaction (Recommended), page 2-18 and the Fanout API section of the Cisco Cloud Object Storage Release 3.12.1 API Guide.

Upgrading the Cisco-COS Application on V2PC

This procedure involves the following steps:

• Download and Import a New COS Application, page 2-15

• Remove the Existing COS Application Instance, page 2-15

• Create a New COS Application Instance, page 2-15

Download and Import a New COS Application

Step 1 Download the latest COS application from the COS software download page on www.cisco.com.

Step 2 Copy the cisco-cos application to the V2PC repository node, as shown in the following example:

scp -i ~/Documents/Temp/v2pcssh.key cisco-cos-1.0.429.tgz [email protected]

Step 3 On the repository node, import the COS application to the master node as shown in the following example:

-bash-4.2$ /opt/cisco/v2p/v2pc/python/v2pPkgMgr.py --import --pkgtype aic --sourcepath ./

Remove the Existing COS Application Instance

Step 1 From the V2PC GUI navigation panel, choose Application Deployment Manager > Deployed Applications.

Step 2 Locate the existing cisco-cos application instance and change its Admin State to Disable.

Step 3 Change the Admin State of the now disabled COS application instance to Delete.

Step 4 Click Update, and then confirm that the existing cisco-cos application is deleted from the GUI.

Step 5 In the Applications menu at right, click the Delete icon to remove cisco-cos from the menu.

Create a New COS Application Instance

Step 1 In the Applications menu at right, click Create New Application to open the Application dialog.

Step 2 Select cisco-cos from the drop-down list and click Save.

Step 3 In the Regions menu at right, click and drag region-0 to about the middle of the page.

Step 4 Drag the newly created cisco-cos application to region-0.

Step 5 Click the + (Add) icon to create an instance. A pop-up window opens.

Step 6 Enter the following information:

• Instance Name – follow local conventions

• Select VM-Provider


• Select zone-1

• Select v2p-base-image

Step 7 Click Next to continue to the MASTER-ROLE Details page and enter the following information:

• Min Node – 0

• Max Node – 0

• Image Flavor Info – default-img-flavor, medium

• Configuration – keep the value loaded by default

Step 8 Click Save, and then confirm that a new COS instance is created.

Step 9 Set the Admin State to Enable, and then click Update.

The new cisco-cos application instance is now enabled and ready to use.

Automated COS Node Configuration

To use automated COS node configuration, use cosinit to specify the nameserver IP address and the URL of a configuration file that includes a ClusterName and an IP Pool reference for at least one service interface. This enables the system to configure the node without manual intervention through either the V2PC GUI or the API.

Caution Using automated configuration results in the deletion of all existing content from the disks in the node being configured. Do not use this option unless you intend to wipe all content from these disks.

Command Usage

cosinit -nameserver <IP-address> -input <URL>

Example

/opt/cisco/cos/config/cosinit -skipnw -nameserver 20.0.119.85 -input https://v2pc-ui.v2pc.cisco.com:8443/sm/v2/cosnodeprofiles/c3260-config

This results in one of the following:

• If ClusterName has a value, automated configuration is triggered. Then, if at least one service interface has an IP Pool configured, the AIC Client sets the adminstate to inService in the smcosnode document before sending cosannounce to DocServer. Otherwise, adminstate is set to Maintenance.

The AIC Server handles the rest of the configuration automatically by assigning an IP address from the specified IP Pool and adding the node to the cluster. The AIC Client then writes the necessary configuration files to the COS node as usual. When configuration is complete, the AIC Client automatically starts the cassandra, cosd, and cserver COS node services.

• If ClusterName has no value, manual configuration is needed. You must manually set the COS node adminstate from Maintenance to inService, assign the node to a cluster, and then enable the service interfaces from the GUI.
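The adminstate decision described above can be sketched as simple shell logic. This is an illustration only; the actual decision is made internally by the AIC Client, and the variable names below are assumptions, not COS configuration keys.

```shell
#!/bin/sh
# Illustrative sketch of the adminstate decision described above.
# CLUSTER_NAME and SVC_IP_POOL stand in for values read from the
# configuration file fetched by cosinit (names are assumptions).
CLUSTER_NAME="cluster1"     # empty string means manual configuration
SVC_IP_POOL="v1860-pool"    # IP Pool for at least one service interface

if [ -n "$CLUSTER_NAME" ]; then
    # Automated configuration is triggered.
    if [ -n "$SVC_IP_POOL" ]; then
        ADMIN_STATE="inService"    # announced to DocServer via cosannounce
    else
        ADMIN_STATE="Maintenance"  # no service interface has an IP Pool
    fi
else
    # No ClusterName: the node stays in Maintenance until configured manually.
    ADMIN_STATE="Maintenance"
fi
echo "adminstate: $ADMIN_STATE"
```

With both values set, as above, the sketch prints adminstate: inService.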

You can verify the input by viewing the .cosnodeinit file as follows:

[root@perf-4t-cos01] # vi /opt/cisco/cos/config/.cosnodeinit
Name : 171491989
Model : C3160-R1
DocServerHost : 10.56.194.152
DocServerPort : 5087
bond0 : 10.56.194.149
bond1 : v1860-pool
bond2 : v1860-pool
bond4 : v1860-pool
bond5 : v1860-pool
ClusterName : cluster1
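Rather than opening the file in vi, you can check programmatically whether ClusterName was picked up. The following check script is hypothetical (not shipped with COS); it assumes only the .cosnodeinit path shown above and the Key : Value layout of the file.

```shell
#!/bin/sh
# Hypothetical check: confirm that .cosnodeinit contains a non-empty
# ClusterName, which is what triggers automated configuration.
INIT_FILE="${1:-/opt/cisco/cos/config/.cosnodeinit}"

# Split each line on the colon (ignoring surrounding spaces) and pull
# the value of the ClusterName field.
cluster=$(awk -F' *: *' '/^ClusterName/ {print $2}' "$INIT_FILE" 2>/dev/null)
if [ -n "$cluster" ]; then
    echo "automated configuration expected for cluster: $cluster"
else
    echo "ClusterName empty: manual configuration required"
fi
```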

Note When using automated configuration of multiple COS nodes, configure the first node, and then wait until the Cassandra database service appears as Running in the GUI of the first node before configuring the second node. Otherwise, there may be unexpected behavior in the Cassandra database seed list configuration of the nodes added after the first node.

Automated Configuration at Installation (Optional)

Beginning with Release 3.8.1, COS adds the ability to specify the location of an Initialization Profiles file that COS can use as a template to configure nodes automatically at the time of node installation. This avoids the need to configure nodes manually following installation.

To enable this feature, you provide kernel command line options at installation time to set up the node network and identify the URL of the Initialization Profiles file during installation. You can either add these options to a node-specific boot configuration file managed by a PXE installation service (see PXE Network Installation, page D-1), or append these options at the Linux installation boot: prompt after the word auto when starting a manual installation.

Caution Using automated configuration results in the deletion of all existing content from the disks in the node being configured. Do not use this option unless you intend to wipe all content from these disks.

To enable automated node configuration at installation:

Step 1 Create the Initialization Profiles file for the node being installed.

Step 2 Test the URL generated for the configuration file to ensure that it is accessible.

Step 3 Add the following command line options to the kernel command line to set up the network:

ip=<IPaddress> netmask=<mask> gateway=<gw> hostname=<name> bond=<definition>

Note The bond command line option is used only for systems that have two management ports.

Step 4 Add the following option to the kernel command line to identify the remote location of the Initialization Profile:

cfg.url=<url>

Note The Initialization Profiles file can be accessed using FTP, HTTP, or NFS.


If necessary, also add the following option to resolve a hostname used in place of a static IP address in the URL above:

dns=<IPAddress>

Example Kernel Command Line

rdblacklist=e1000e,ixgbe ksdevice=eth0 ks=ftp://10.10.10.15/image/COS/latest/ks/ks_auto.cfg repo=ftp://10.10.10.15/image/COS/latest/ ks_zerombr ks_baud_rate=115200 ip=10.10.10.72 netmask=255.255.255.0 gateway=10.10.10.1 dns=10.10.10.15 hostname=Node1 bond=bond0:eth0,eth1:mode=active-backup,primary=eth0 cfg.url=https://getithere.com/c3260
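Assembling the long option string by hand is error-prone, so it can help to build it from variables. The following sketch uses the values from the example above; they are placeholders for your own network settings.

```shell
#!/bin/sh
# Build the node-configuration portion of the kernel command line for
# automated COS installation. Values are the example values from above;
# substitute your own addresses, host name, and profile URL.
IP=10.10.10.72
NETMASK=255.255.255.0
GATEWAY=10.10.10.1
DNS=10.10.10.15
HOSTNAME=Node1
BOND="bond0:eth0,eth1:mode=active-backup,primary=eth0"
CFG_URL="https://getithere.com/c3260"

CMDLINE="ip=$IP netmask=$NETMASK gateway=$GATEWAY dns=$DNS"
CMDLINE="$CMDLINE hostname=$HOSTNAME bond=$BOND cfg.url=$CFG_URL"
echo "$CMDLINE"
```

The resulting string can be appended to the PXE boot configuration or typed after the word auto at the boot: prompt.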

Note • When using automated configuration for multiple COS nodes, configure the first node, and then wait until the Cassandra database service appears as Running in the GUI of the first node before configuring the second node. Otherwise, there may be unexpected behavior in the Cassandra database seed list configuration of the nodes added after the first node.

• When adding multiple nodes to the same cluster using this method, we strongly recommend installing the nodes one at a time, in sequence, while monitoring nodetool status as shown in Step 1 of Enabling Fanout Compaction (Recommended), page 2-18. This ensures that all nodes are in the UN state before you proceed with the next addition.

Enabling Fanout Compaction (Recommended)

COS 3.12.1 introduces fanout compaction, a feature that almost immediately reclaims the space allocated to deleted copies of fanout objects. Without fanout compaction, this space would not become available for reuse until the entire fanout object is deleted. The use of fanout compaction is strongly recommended to maximize storage utilization and performance.

By default, fanout compaction is disabled for both new installations and upgrades, and must be enabled following installation or upgrade. This feature has behaviors and limitations that you should clearly understand before enabling it. This section describes these considerations and provides the procedure for enabling fanout compaction.

Behavior and Limitations

When fanout compaction is enabled, it changes the way that data is represented on disk. Once changed, the data on disk can be accessed only by a COS release that supports fanout compaction (currently COS 3.12.1 only). This has several important consequences:

• To enable fanout compaction, you must do so for every node in a cluster. This means that every node in the cluster must have COS 3.12.1 installed (or be upgraded) before enabling this feature.

• If you install or upgrade to COS 3.12.1 on a node and then enable fanout compaction, you will not be able to downgrade the node to an earlier COS release without losing access to its data stores. This makes it doubly important to check that all other desired features are operating correctly after the COS 3.12.1 installation or upgrade before enabling fanout compaction.

• For lab testing and other limited deployments, it is possible to enable fanout compaction on one cluster in a COS network without enabling it on other clusters in the network.


Enablement Procedure

To enable fanout compaction, run the following script (no input parameters are needed) at the Linux command prompt for each node in the cluster on which this feature is to be enabled:

/opt/cisco/cos/bin/enable_fanout_compaction

Example

[root@Utah734 bin]# /opt/cisco/cos/bin/enable_fanout_compaction
WARNING: After enabling this feature COS file system downgrade will no longer be supported. Also to fully enable the feature you must run this script on every node in the COS cluster.
Proceed? (y/N): y
Enabling fanout object compaction.... DONE
[root@Utah734 bin]#

The script checks that every other active node in the cluster is running COS 3.12.1 before allowing the feature to be enabled on the node. In addition to enabling the feature, the script updates runtime procedures and setup files to make the change persistent on each node.
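Because the script must be run on every node in the cluster, a small wrapper can help in larger deployments. The helper below is hypothetical (not part of COS), assumes password-less SSH as root to each node, and uses yes to answer the script's confirmation prompt; review that behavior before using anything like it in production.

```shell
#!/bin/sh
# Hypothetical helper: enable fanout compaction on each cluster node
# in turn, stopping at the first failure. The node list is an example.
enable_fanout_all() {
    # $1 = whitespace-separated list of node host names
    for node in $1; do
        echo "Enabling fanout compaction on $node ..."
        # 'yes' answers the script's Proceed? (y/N) prompt non-interactively.
        ssh "root@$node" "yes | /opt/cisco/cos/bin/enable_fanout_compaction" \
            || { echo "FAILED on $node"; return 1; }
    done
}

# Example: enable_fanout_all "cos-node1 cos-node2 cos-node3"
```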

Caution As shown by the console response, enabling fanout compaction on a COS node removes support for downgrading to an earlier COS release. This restriction prevents loss of access to content stored on the node.

Configuring Telemetry Forwarding

COS Release 3.12.1 includes support for forwarding log events and statistics to an Elasticsearch instance or to a Cisco Zeus account. This allows centralized log management and statistical analysis of the COS service.

Note • For more information about the Elastic Stack, visit https://www.elastic.co/webinars/introduction-elk-stack.

• For more information about Cisco Zeus, visit https://ciscozeus.io/.

The telemetry forwarding feature currently forwards the following information:

• A subset of the events from /arroyo/log/http.log.<DATE> and /arroyo/log/cosd.log.<DATE>

• A subset of the statistics from /arroyo/log/protocoltiming.log.<DATE>

• Statistics from /proc/calypso/stats/*_stats

This section explains how to install, configure, start, and if necessary, troubleshoot this service. It also describes the use of Elasticsearch index templates and Kibana search patterns.

Install the RPMs

The primary software RPMs for telemetry forwarding are td-agent and td-agent-cos-plugins. These packages must be installed on each COS node that will forward telemetry.


These packages may already be installed, either from a new installation or from a product upgrade. To verify that these RPMs are installed on a node, enter the following at the node Linux prompt:

[root@cos-node ~]# yum list installed | grep td-agent
td-agent.x86_64
td-agent-cos-plugins.x86_64
[root@cos-node ~]#
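The check above can be wrapped in a small function that succeeds only when both RPMs are present. This helper is hypothetical, not part of the COS software:

```shell
#!/bin/sh
# Hypothetical helper: return success only if both telemetry RPMs are
# installed (wraps the yum check shown above).
check_td_agent_rpms() {
    installed=$(yum list installed 2>/dev/null | grep td-agent)
    # The literal dot distinguishes td-agent.<arch> from td-agent-cos-plugins.
    echo "$installed" | grep -q 'td-agent\.'             || return 1
    echo "$installed" | grep -q 'td-agent-cos-plugins\.' || return 1
    echo "telemetry RPMs present"
}
```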

If the packages are already installed, skip forward to Configure Telemetry Forwarding, page 2-21.

If these packages are not already installed, use the YUM software package manager to install them. The telemetry software packages are contained in the YUM repository on the COS 3.12.1 software ISO image.

YUM supports various locations and protocols for software repositories, including a centralized HTTP or FTP repository. The COS 3.12.1 software ISO image provides a YUM software repository image that can be used in a central location for an HTTP or FTP source. If you do not have access to a centralized repository, you can instead copy the COS software ISO image to each COS node, mount the ISO, and use it as a local YUM repository.

Note For more information about YUM, visit https://www.centos.org/docs/4/html/yum/.

Complete the following steps to install the telemetry software packages:

Step 1 Ensure that the COS 3.12.1 software repository has been made available (either locally or through HTTP or FTP).

Step 2 Install the td-agent-cos-plugins RPM using YUM.

Note This step will also install additional requisite RPMs as needed.

[root@cos-node ~]# yum install td-agent-cos-plugins
Loaded plugins: fastestmirror
Setting up Install Process
Loading mirror speeds from cached hostfile
Resolving Dependencies
...
Dependencies Resolved

================================================================
 Package               Arch    Version            Repository Size
================================================================
Installing:
 td-agent-cos-plugins  x86_64  3.9.102-cos0.5.69  cos_sw    463 k
Installing for dependencies:
 td-agent              x86_64  2.3.2-0.el6        cos_sw     58 M

Transaction Summary
================================================================
...
Is this ok [y/N]: y
Downloading Packages:
...
Complete!
[root@cos-node ~]#

Configure Telemetry Forwarding

The td-agent service configuration is supplied in /etc/td-agent/td-agent.conf. This file must be properly configured on each of the COS nodes. Configuration is currently a manual process.

Use the following configuration templates to configure the td-agent. To use these templates, create an /etc/td-agent/td-agent.conf file and replace the fields shown in brackets ([ ]) with the configuration parameters appropriate to your deployment.

Forwarding to a Private Elasticsearch Instance

<system>
  emit_error_log_interval 60
</system>

<match cos.**>
  type elasticsearch
  host [elasticsearch-host-name]
  port 9200
  flush_interval 5s
  include_tag_key true
  tag_key @log_name
  target_index_key @index
  target_type_key @type
</match>

<source>
  @type cos_cosd_log
  cos_cluster [cluster-name]
</source>

<source>
  @type cos_stats
  cos_cluster [cluster-name]
  interval 10
  collated no
  flatten no
</source>

<source>
  @type cos_http_log
  cos_cluster [cluster-name]
  verb_filter RIO|SWIFT
</source>

<source>
  @type cos_proto_log
  cos_cluster [cluster-name]
</source>
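Because configuration is a manual process, a small helper can fill in the bracketed placeholders from a saved template. The function below is a sketch, not part of COS; it assumes a template file containing the literal tokens [elasticsearch-host-name] and [cluster-name] exactly as shown above, and values that contain no sed-special characters.

```shell
#!/bin/sh
# Hypothetical helper: substitute the bracketed placeholders in a
# td-agent configuration template and print the result.
fill_td_agent_conf() {
    # $1 = template file, $2 = Elasticsearch host, $3 = cluster name
    sed -e "s/\[elasticsearch-host-name\]/$2/g" \
        -e "s/\[cluster-name\]/$3/g" "$1"
}

# Example:
# fill_td_agent_conf td-agent.conf.tmpl es1.example.com cluster1 \
#     > /etc/td-agent/td-agent.conf
```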

Forwarding to a Cisco Zeus Account

<system>
  emit_error_log_interval 60
</system>

<match cos.**>
  type record_reformer
  tag logs.${tag}.[zeus-username]-[zeus-token]
  remove_keys @index
  <record>
    timestamp ${time}
  </record>
</match>

<match logs.**>
  type secure_forward
  shared_key cisco_zeus_log_metric_pipline
  self_hostname fluentd-client1.ciscozeus.io
  secure false
  keepalive 10
  <server>
    host [zeus-data-host]
  </server>
</match>

<source>
  @type cos_cosd_log
  cos_cluster [cluster-name]
</source>

<source>
  @type cos_stats
  cos_cluster [cluster-name]
  interval 10
  collated no
  flatten no
</source>

<source>
  @type cos_http_log
  cos_cluster [cluster-name]
  verb_filter RIO|SWIFT
</source>

<source>
  @type cos_proto_log
  cos_cluster [cluster-name]
</source>

Start Telemetry Forwarding

After creating the td-agent configuration file, you can enable and start the td-agent service as follows:

[root@cos-node ~]# chkconfig --add td-agent
[root@cos-node ~]# service td-agent start
Starting td-agent: td-agent [ OK ]
[root@cos-node ~]#

Note The td-agent provides a log file at /var/log/td-agent/td-agent.log.

Troubleshooting the Service

The td-agent service can be executed in the foreground, with logging sent to stdout. To run in the foreground, log in to the appropriate COS node, stop the td-agent service, and execute td-agent from the shell prompt:

[root@cos-node ~]# service td-agent stop
Stopping td-agent: td-agent [ OK ]
[root@cos-node ~]# td-agent
2016-09-23 10:29:53 -0700 [info]: reading config file path="/etc/td-agent/td-agent.conf"
2016-09-23 10:29:53 -0700 [info]: starting fluentd-0.12.26
2016-09-23 10:29:53 -0700 [info]: gem 'fluent-mixin-config-placeholders' version '0.4.0'
2016-09-23 10:29:53 -0700 [info]: gem 'fluent-mixin-plaintextformatter' version '0.2.6'
2016-09-23 10:29:53 -0700 [info]: gem 'fluent-plugin-elasticsearch' version '1 ...

For more verbose output, you can start the td-agent with the -v or -vv options:

• -v: Sets the log level to debug, which prints information about exceptions caught during execution. Exceptions do not necessarily indicate errors, so context is needed to determine whether a reported exception actually indicates an error.

• -vv: Sets the log level to trace, which also displays the records parsed by the COS td-agent plugins.

Using Elasticsearch Index Templates

COS 3.12.1 contains Elasticsearch index templates for use with a private Elasticsearch instance. These templates provide hints to the Elasticsearch database for proper interpretation of field data within records sent from COS nodes.

These index templates do not apply to a Cisco Zeus account. The index templates are bundled in a compressed tar file (cos-elasticsearch-index-templates.tgz), which can be obtained from the same location on www.cisco.com used to download COS 3.12.1 software.


Note For more information about Elasticsearch index templates, visit: https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-templates.html

To apply the index templates, download and extract cos-elasticsearch-index-templates.tgz to a server or workstation that supports Perl, can extract compressed tar files, and has network connectivity to the Elasticsearch instance.

The cos-elasticsearch-index-templates.tgz file contains a Perl script, upload_templates.pl, which you can execute to apply the index templates to the Elasticsearch instance.

To apply the Elasticsearch index templates:

[user@workstation tmp]$ tar -zxvf cos-elasticsearch-index-templates.tgz
elasticsearch-templates/
elasticsearch-templates/upload_templates.pl
elasticsearch-templates/cos_cosd.template
elasticsearch-templates/cos_cserver_proto.template
elasticsearch-templates/cos_cserver_http.template
elasticsearch-templates/cos_cserver_stats.template
[user@workstation tmp]$ cd elasticsearch-templates/
[user@workstation elasticsearch-templates]$ ./upload_templates.pl <elasticsearch-host>
Applying template cos_cosd.template ... OK
Applying template cos_cserver_http.template ... OK
Applying template cos_cserver_proto.template ... OK
Applying template cos_cserver_stats.template ... OK
[user@workstation elasticsearch-templates]$
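If running the Perl script is inconvenient, a single template can also be applied directly against the Elasticsearch _template API. The function below is a sketch (not part of the COS tooling); the host name is a placeholder, and the exact endpoint accepted depends on your Elasticsearch version.

```shell
#!/bin/sh
# Hypothetical single-template equivalent of upload_templates.pl, using
# the legacy Elasticsearch _template API.
upload_template() {
    # $1 = template name, $2 = template file, $3 = Elasticsearch host
    curl -s -XPUT "http://$3:9200/_template/$1" \
         -H 'Content-Type: application/json' \
         --data-binary "@$2"
}

# Example: upload_template cos_cosd cos_cosd.template es1.example.com
```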

Using Kibana Index Patterns

Index patterns are defined in Kibana to allow searching and visualizing the data stored in Elasticsearch. For information stored by COS, we recommend creating the following time-based index patterns, using @timestamp as the time-field name:

• logstash-cos-*-cosd-*

• logstash-cos-*-cserver-proto-*

• logstash-cos-*-cserver-stats-*

• logstash-cos-*-cserver-http-*


C H A P T E R 3

System Monitoring

The COS Service can be monitored through the following:

• COS Cluster Status Monitoring, page 3-1

• COS Node Status Monitoring, page 3-1

• COS Node Platform Monitoring with SNMP, page 3-9

COS Cluster Status Monitoring

The COS service is implemented through an instance of the application instance controller (AIC). Each application instance represents a service instance. Some earlier COS releases supported a single service instance with one endpoint, one cluster, and one redundancy policy. COS Release 3.12.1 deployments can support multiple COS clusters, and each cluster has its own asset redundancy policy.

The Cisco Cloud Object Store (COS) > COS Service Status page of the V2PC GUI reports the status of each cluster. The values for Storage Status, Disk Status, Interface Status, Service Status, and Fault Status can be reported as either:

• Normal – all member nodes report Normal for that status.

• Warning – at least one member node reports a Warning level for that status.

• Critical – at least one member node reports a Critical level for that status.

On this page, you can drill down through a COS cluster to view the status of its individual COS nodes. Drilling down through each node reveals the status of the individual disks, interfaces, and services of the node. Any active alarms for the node are also displayed.

Note COS 3.12.1 does not implement Resiliency Status.
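The status rollup described above, in which a cluster reports the worst status found among its member nodes, can be sketched as follows; the function name and input format here are invented for illustration only.

```shell
#!/bin/sh
# Sketch of the cluster status rollup: the cluster reports the worst
# status (Normal < Warning < Critical) among its member nodes.
worst_status() {
    # $1 = whitespace-separated list of per-node statuses
    result="Normal"
    for s in $1; do
        case "$s" in
            Critical) result="Critical" ;;
            Warning)  [ "$result" = "Normal" ] && result="Warning" ;;
        esac
    done
    echo "$result"
}

# Example: worst_status "Normal Warning Normal"  prints Warning
```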

COS Node Status Monitoring

A COS node is in service if both the associated COS application instance and the cluster to which it belongs are in Enabled state. The V2PC GUI displays the status of each node that is in service and part of a COS cluster. This status is updated once per minute, or sooner if a fault is detected.

If a fault is reported, either an alarm or an event (or both) will be raised and shown in the V2PC GUI. If the fault is serious, the service interfaces for that COS node will be removed from the DNS.


When the fault is no longer present, the service interfaces will be replaced in the DNS and the node will return to normal service.

Viewing COS Node Status

To view a summary of node usage and alarms (if any) using the V2PC GUI, open the GUI as described in Accessing the V2PC GUI, page A-2 and navigate to Cisco Cloud Object Store (COS) > COS Service Status.

Figure 3-1 V2PC GUI, COS Service Status Page

The COS Service Status page lists the service instances and displays the associated node usage along with any alarms. This page displays the following information:

• Any components that are down, disabled, inactive, or otherwise unavailable appear with a red (as opposed to green) icon for ease of identification.

• The Disks table lists the status of a drive as down when the drive is defective or missing. If all disks are down, the node is reported as down.

• If a critical service is down, the associated node is reported as down.

Viewing Deployment Status

To view the status of the overall deployment from the V2PC GUI, open the GUI as described in Accessing the V2PC GUI, page A-2 and navigate to Dashboard > System Overview.

The System Overview page offers information about Bandwidth (Tx/Rx), Sessions (Tx/Rx), Storage usage, available COS nodes, and used COS nodes.


Viewing COS Alarms and Events

The COS Alarms & Events page lists significant COS-related system events and provides details for user evaluation. To view this page in the V2PC GUI, open the GUI as described in Accessing the V2PC GUI, page A-2 and navigate to Dashboard > Alarms & Events.

All the events for the node are listed with the oldest event first. Events belong to one of the following levels of severity:

• Info – The event represents information only and does not require operator intervention.

• Warning – The event represents an issue that is possibly transitory and the operator should investigate the cause.

• Critical – The event represents an issue from which the node may not recover without operator intervention, and the operator must act immediately because the issue may cause service outage.

COS-AIC Alarms and Events

The COS AIC reports alarms and events to the V2PC GUI. The COS-AIC generates alarms and events based on both GUI/REST transactions (user input) and status notifications generated by the AIC client.


GUI/REST Transactions

Events

Table 3-1 GUI/REST Transactions - Events

Event Name – Description – Severity – Details

CosActiveIpPoolEdited – The active IP Pool: "poolName" was edited. – critical – Event is triggered when a user edits an IP Pool that is in use by COS-AIC.

CosActiveIpPoolDeleted – The active IP Pool: "poolName" was deleted. – critical – Event is triggered when a user deletes an IP Pool that is in use by COS-AIC.

CosNodeConfigError – The COS Node: "hostName" physical interface count has been changed. – critical – Event triggered if the number of interfaces changes for a node.

CosNodeConfigError – The COS Node: "hostName" IP Pool: "poolName" does not have sufficient IPs available. – major – Event triggered by an IP Pool running out of IPs.

CosAddNode – New COS Node: "hostName" processed and added to cluster: "clustName". – info – Event triggered by the addition of a new COS Node.

CosDeleteNode – COS Node: "hostName" has been deleted. – warning – Event triggered when a COS Node is deleted.

CosNodeDnsAddError – COS dnsOperation -> addDNSRecord: "Domain Details" returned error: "errorString". – major – Event triggered when a DNS Add/Remove returns other than 200.


Alarms

Table 3-2 GUI/REST Transactions - Alarms

Alarm Name – Description – Severity – Details

CosClusterDeactivated – COS Cluster: "clusterName" has been Deactivated. – critical – Alarm is triggered when a COS Cluster is set to "disabled".

AIC Client Status Notifications

Events

Table 3-3 AIC-Client Status Notifications - Events

Event Name – Description – Severity – Details

CosNodeHeartBeat – COS Node: "hostName" has missed a heartbeat. – critical – Event triggered when a COS Node misses a scheduled heartbeat (aic_cosnodeheartbeat).

CosNodeServiceDown – COS Node: "hostName" non-critical service "Sensu Client" is down. – warning – Event triggered when the Sensu Client is reported down, because the COS Node cannot send events (aic_cosnodestatus).

Alarms

Table 3-4 AIC-Client Status Notifications - Alarms

Alarm Name – Description – Severity – Details

CosNodeDiskDown – COS Node: "hostName" > "disksDown" of the total "disksTotal" disks ("percentageDisksDown"%) are last reported as down. – varies based on % down – Alarm is triggered when disks are reported down (aic_cosnodestatus).

CosNodeInterfaceDown – COS Node: "hostName" interface(s) "list" reported as down. – varies based on % down – Alarm triggered when interfaces are reported as down (aic_cosnodestatus).

CosNodeDown – COS Node: "hostName" is down and removed from DNS. – critical – Alarm triggered by a missed heartbeat, all disks down, or a critical service down.


COS AIC Client Events

The COS AIC client generates events pertaining to storage (disk), network (interface), and service (process) on each node. These events are generated only if AIC client monitoring is enabled. For more information on this monitoring activity, see Viewing Deployment Status, page 3-2.

The events generated by a COS AIC client are listed in Table 3-5.

• An event is generated for every change in the state of operation of a disk, interface, or process.

• Although events are generated only on a state change, monitoring is done every 10 seconds, so events can be generated as often as every 10 seconds.
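In other words, event generation is edge-triggered: an event fires only when the newly observed state differs from the last recorded state. A minimal sketch (illustrative only, not the actual AIC client code):

```shell
#!/bin/sh
# Sketch of edge-triggered event generation: emit an event only when
# the observed state differs from the previously recorded state.
emit_on_change() {
    # $1 = previous state, $2 = observed state, $3 = component name
    if [ "$1" != "$2" ]; then
        echo "event: $3 changed from $1 to $2"
    fi
}

# Called once per 10-second monitoring pass for each disk, interface,
# and process; identical consecutive states produce no event.
```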

COS AIC Server Events

The COS AIC server generates the events listed in Table 3-6.

Viewing COS Statistics

The COS Services Statistics page provides a graphical summary of the status and performance of the node infrastructure. The displays update every 15 minutes to track changes in key system states over time.

To view the COS Statistics page in the V2PC GUI, open the GUI as described in Accessing the V2PC GUI, page A-2 and navigate to Cisco Cloud Object Store (COS) > COS Services Statistics.

Table 3-5 COS AIC Client Events

Event Name – Description – Severity – Event Type – Event Subtype

CosNodeInterfaceDown – Interface if_name down. – Warning – COS-Node – Health

CosNodeInterfaceUp – Interface if_name up. – Info – COS-Node – Health

CosNodeServiceDown – Service if_name down. – Warning – COS-Node – Health

CosNodeServiceUp – Service if_name up. – Info – COS-Node – Health

Table 3-6 COS AIC Server Events

Event Name – Description – Severity – Event Type – Event Subtype

AddCosNode – A new COS node was added. – Info – COS-Node – Accessibility

CosNodeInterfaceError – No IP addresses are available in the IP pool. – Major – COS-Node – Accessibility

CosUpdatedActiveIpPool – An active IP pool was edited. – Critical – COS-Node – Accessibility

CosDeletedActiveIpPool – An active IP Pool was deleted. – Critical – COS-Node – Accessibility


Figure 3-2 V2PC GUI, COS Service Statistics Page

This page displays the following information:

• Windows across the top report the regions, clusters, and nodes in the infrastructure, display any current alarms by severity, and show current storage, bandwidth, and session utilization.

• Graphical displays in the mid-section show current and trending storage, bandwidth, and session utilization for the selected COS component.

• The time zone shown at upper right in the page is that of the server, and can be changed by an Admin user. Individual nodes may be spread across multiple time zones.

• Scrolling to the bottom of the page reveals tables that list the current status of all of the disks, services, and interfaces associated with the selected COS component, along with any alarms.

COS AIC Client Monitoring

The COS AIC client running on a COS node periodically monitors the disks, interfaces, and services (processes) of that node and posts the data to the DocServer as a COS-specific document.

The AIC client begins the monitoring activity when a node is configured and added to a COS cluster. As long as the node is running and is part of a COS cluster, monitoring occurs once every 10 seconds.

Storage Monitoring

The AIC client can monitor and report storage (disk) state and statistics only if the CServer is running on the node. The following information is reported for each disk:

• Disk name

• Bytes read

• Bytes written

• Requests

• State

• S.M.A.R.T. Status


The client also reports the total storage space on all disks and the total storage space currently in use.

Interface Monitoring

For each interface, the AIC client reports the interface state and the transmit and receive statistics. The client can monitor and report the state and statistics of CServer interfaces only when the CServer is running on the node.

Services Monitoring

The AIC client monitors the following services:

• Cisco Cache Server (CServer)

• Cisco Cloud Object Storage Daemon (cosd)

• Cassandra Server

• NTP Daemon

• SNMP Daemon

• Monit

• Consul Agent

• Sensu Client

Troubleshooting Alarms, Events, and Statistics

If one or more COS nodes in a cluster are not generating any alarms, events, or statistics, perform the following steps to ensure that monitoring is configured and working correctly.

Checking COS Nodes

Perform these steps for each COS node attached to the cluster in V2PC master.

Step 1 To confirm that the Sensu client is running on the COS node, connect to the node using SSH, type the command service sensu-client status, and check the response to see whether the client is running. If it is not, type service sensu-client start to start the service.

Step 2 To confirm that the Sensu configurations are present on the COS node, SSH into the node, type cd /etc/sensu/conf.d, and check that the following files are present and configured correctly:

• client.json

• rabbitmq.json

• transport.json

• metrics-cos-nodes.json – confirm that the interval attribute is set to 900 (15 minutes)

Note If helpful, compare the contents of each file with those on another known working COS node.

Additionally, check the plugins directory (cd /etc/sensu/plugins) and confirm that metrics-cos-nodes.js is present.
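The file checks in Steps 1 and 2 can be scripted. The sketch below is illustrative, not a Cisco tool: the check_sensu_conf helper and its output strings are assumptions, while the file names and the 900-second interval come from the steps above.

```shell
# Sketch: verify that the Sensu client configuration files required on a
# COS node are present, and that the metrics interval is 900 seconds.
# The check_sensu_conf helper is illustrative, not part of COS.
check_sensu_conf() {
    dir="$1"
    for f in client.json rabbitmq.json transport.json metrics-cos-nodes.json; do
        if [ -f "$dir/$f" ]; then
            echo "present: $f"
        else
            echo "MISSING: $f"
        fi
    done
    # metrics-cos-nodes.json should poll every 900 seconds (15 minutes)
    if grep -q '"interval"[[:space:]]*:[[:space:]]*900' "$dir/metrics-cos-nodes.json" 2>/dev/null; then
        echo "interval ok"
    else
        echo "interval NOT 900"
    fi
}

# On a COS node, run the check against the live configuration directory:
check_sensu_conf /etc/sensu/conf.d
```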


Step 3 To confirm that the sensu-service log is present on the COS node, SSH into the node and type tail -f /var/log/sensu/sensu-client.log. Sensu checks for this information every 15 minutes.

Step 4 To confirm that the COS node statistics document is present, SSH into the node, type cd /tmp, then type ls -al and check the timestamp on the aic_cosnodestats.json file. This file should update every 15 minutes. If the file is present, type cat /tmp/aic_cosnodestats.json and confirm that it is not empty.
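The timestamp check in Step 4 can also be expressed as a small freshness test. The check_stats_file helper below is a sketch: its name and output strings are assumptions, while the path and the 15-minute (900-second) update window come from the step above.

```shell
# Sketch: report whether the statistics document has been rewritten
# within the expected 15-minute (900-second) window and is non-empty.
# check_stats_file is illustrative, not part of COS.
check_stats_file() {
    f="$1"
    [ -s "$f" ] || { echo "missing or empty: $f"; return 1; }
    now=$(date +%s)
    # GNU stat first, BSD stat as a fallback
    mtime=$(stat -c %Y "$f" 2>/dev/null || stat -f %m "$f")
    age=$((now - mtime))
    if [ "$age" -le 900 ]; then
        echo "fresh (${age}s old)"
    else
        echo "stale (${age}s old)"
    fi
}

# On a COS node, the file below should exist and update every 15 minutes:
check_stats_file /tmp/aic_cosnodestats.json || true
```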

Step 5 To confirm that the rabbitmq messaging file on the COS node can be accessed, SSH into the node and type cat /etc/sensu/conf.d/rabbitmq.json.

Step 6 Try to ping the host from the COS node to confirm that it can be reached.

Note It is normal in an HA environment for the host ping to return different IP addresses.

Checking the Sensu Master

An HA environment can have multiple Sensu masters. Perform the following steps to check each master:

Step 1 Connect via SSH to the Sensu master that you are accessing using the V2PC GUI.

Step 2 Type consul members to list all of the active masters in the HA environment.

Step 3 Check the conf.d directory (cd /etc/sensu/conf.d) to see if handler-metrics-cos-nodes-influxdb.json is present. If not, copy this file from another working master and place it in the conf.d directory.

Step 4 Open influxdb.json and confirm that it has the configuration information needed to access influxdb.

Step 5 Try to ping the influxdb host to confirm that it can be reached.

Note It is normal in an HA environment for the host ping to return different IP addresses.

Step 6 Check the handlers directory (cd /etc/sensu/handlers) to see if metrics-cos-nodes-influxdb.js is present. If not, copy this file from another working master and place it in the handlers directory.

Step 7 Also check to see if the node_modules directory is present and contains influx. If not, copy this directory from another working master.

Step 8 Type systemctl status sensu-server to confirm that the Sensu server is running.

Step 9 Type tail -f /var/log/sensu/sensu-server.log to check the Sensu server logs.

COS Node Platform Monitoring with SNMP

Overview

Each COS node provides hardware statistics, events, and alerts using Simple Network Management Protocol (SNMP). In addition to hardware platform information, basic state information for COS node disk and file system usage is also available through SNMP.


Each COS node executes an instance of the Net-SNMP snmpd service as an SNMP agent. An instance of the Net-SNMP snmptrapd service also executes on each COS node, and can be customized for environment-specific configuration for the handling of traps and notifications generated by the snmpd service.

Note Beginning with Release 3.5.1, COS no longer installs SuperDoctor monitoring software or tests for its hardware statistics. However, customers using SuperMicro servers can still install SuperDoctor and set up its SNMP extension.

Caution If using SuperDoctor, the Intelligent Platform Management Interface (IPMI) kernel driver ipmi_devintf must be loaded before SuperDoctor is executed. Otherwise, SuperDoctor may not execute properly.

Installation

The SNMP services for COS nodes are provided by the following RPMs:

• net-snmp

• net-snmp-libs

• net-snmp-utils

• cos_snmp

These RPMs come pre-installed on each COS node, and are preconfigured with a basic configuration for read-only access to the SNMP service.

Configuration

The cos_snmp RPM provides a basic configuration for the Net-SNMP services snmpd and snmptrapd. The snmpd and snmptrapd service configurations can be customized to accommodate customer-specific environments by manually editing the /etc/snmp/snmpd.conf and /etc/snmp/snmptrapd.conf configuration files, respectively.

Note Customizations made to these files are replaced when the cos_snmp RPM is updated. The cos_snmp RPM backs up the configuration files in the /etc/snmp directory upon upgrade.

MIB Extensions

Net-SNMP is distributed with support for various MIBs that provide OID instances for some hardware and Linux platform information. The definitions for MIBs distributed with the Net-SNMP service are stored in /usr/share/snmp/mibs on each COS node. For more information on the standard MIBs provided by Net-SNMP, see:

http://www.net-snmp.org/docs/mibs/

Extensions have been added to Net-SNMP to add CServer disk and storage information to the following tables:

• HOST-RESOURCES-MIB::hrDeviceTable


• HOST-RESOURCES-MIB::hrDiskStorageTable

• HOST-RESOURCES-MIB::hrStorageTable

• SWRAID-MIB::swRaidTable

Extensions to HOST-RESOURCES-MIB::hrDeviceTable

This table has been extended to include status for the external-facing disk drives owned by CServer. The columns of primary interest are hrDeviceDescr and hrDeviceStatus. Taken together, these two columns identify a disk device and its current operating state.

The following is a snip of the table showing the extension:

[root@utah97 ~]# snmptable -v2c -cpublic -M/usr/share/snmp/mibs -mall localhost HOST-RESOURCES-MIB::hrDeviceTable
SNMP table: HOST-RESOURCES-MIB::hrDeviceTable

hrDeviceIndex hrDeviceType hrDeviceDescr hrDeviceID hrDeviceStatus hrDeviceErrors
...
7680 HOST-RESOURCES-TYPES::hrDeviceDiskStorage Cisco csd1 SAS HDD SNMPv2-SMI::zeroDotZero running 0
7681 HOST-RESOURCES-TYPES::hrDeviceDiskStorage Cisco csd2 SAS HDD SNMPv2-SMI::zeroDotZero running 0
...
7751 HOST-RESOURCES-TYPES::hrDeviceDiskStorage Cisco csd72 SAS HDD SNMPv2-SMI::zeroDotZero running 0

According to the HOST-RESOURCES-MIB definition, valid values of hrDeviceStatus are unknown, running, warning, testing, and down.

[root@utah97 ~]# snmptranslate HOST-RESOURCES-MIB::hrDeviceStatus -Tp
+-- -R-- EnumVal   hrDeviceStatus(5)
         Values: unknown(1), running(2), warning(3), testing(4), down(5)

The SNMP device status is mapped from output of the cddm utility. For more information on cddm, see the cddm man page on any COS node with man cddm.

The mapping of cddm output to SNMP status is as follows:

• cddm state contains DEV_READY and cddm smart_status is OK => SNMP running

• cddm state contains DEV_READY and cddm smart_status is ADVISORY => SNMP warning

• cddm state contains DEV_SUSPENDING, DEV_SUSPENDED, DEV_TRACK_MAP, DEV_USER_PREP, or DEV_ABANDONED => SNMP warning

• cddm state contains DEV_LOG_REMOVED, DEV_SICK, or DEV_REMOVED => SNMP down

All other cddm states are mapped to the SNMP unknown state.
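The mapping above can be sketched as a small shell helper. The map_cddm_status function and its argument layout are illustrative assumptions; the state and smart_status strings are those described in the mapping rules.

```shell
# Sketch of the cddm-to-SNMP status mapping described above.
# map_cddm_status is illustrative; it takes the cddm state string and
# the smart_status value and prints the resulting SNMP device status.
map_cddm_status() {
    state="$1"; smart="$2"
    case "$state" in
        *DEV_LOG_REMOVED*|*DEV_SICK*|*DEV_REMOVED*) echo down; return ;;
        *DEV_SUSPENDING*|*DEV_SUSPENDED*|*DEV_TRACK_MAP*|*DEV_USER_PREP*|*DEV_ABANDONED*)
            echo warning; return ;;
    esac
    case "$state" in
        *DEV_READY*)
            case "$smart" in
                OK)       echo running ;;
                ADVISORY) echo warning ;;
                *)        echo unknown ;;
            esac ;;
        *) echo unknown ;;
    esac
}

map_cddm_status "DEV_READY" "OK"        # running
map_cddm_status "DEV_READY" "ADVISORY"  # warning
map_cddm_status "DEV_SICK" "OK"         # down
```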

Extensions to HOST-RESOURCES-MIB::hrDiskStorageTable

This table has been extended to show disk capacity for the disk drives used by CServer for storing object data.

The following is a snip of the table showing the extension:

[root@utah97 ~]# snmptable -v2c -cpublic -M/usr/share/snmp/mibs -mall localhost HOST-RESOURCES-MIB::hrDiskStorageTable
SNMP table: HOST-RESOURCES-MIB::hrDiskStorageTable

hrDiskStorageAccess hrDiskStorageMedia hrDiskStorageRemoveble hrDiskStorageCapacity
readWrite unknown false 194335744 KBytes
readWrite hardDisk true 3907018583 KBytes
...
readWrite hardDisk true 3907018583 KBytes
readWrite hardDisk true 3907018583 KBytes

Viewing the table in this manner does not easily correlate entries to particular devices, because the table does not define a device description. You must manually correlate the entries from the HOST-RESOURCES-MIB::hrDiskStorageTable and HOST-RESOURCES-MIB::hrDeviceTable tables.
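In the HOST-RESOURCES-MIB definition, hrDiskStorageTable rows augment hrDeviceTable rows through the shared hrDeviceIndex, so the two listings can be merged on that index. A minimal sketch with join(1) follows; the index and capacity values are illustrative, not output from a live node.

```shell
# Sketch: correlate device descriptions with disk capacities by joining
# the two table listings on the device index (first field). The sample
# index and capacity values below are illustrative.
dev=$(mktemp); cap=$(mktemp)
cat > "$dev" <<'EOF'
7680 Cisco csd1 SAS HDD
7681 Cisco csd2 SAS HDD
EOF
cat > "$cap" <<'EOF'
7680 3907018583 KBytes
7681 3907018583 KBytes
EOF
merged=$(join "$dev" "$cap")
echo "$merged"
rm -f "$dev" "$cap"
```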

Extensions to HOST-RESOURCES-MIB::hrStorageTable

This table has been extended to provide information about the CServer file system, including total and used file system storage.

[root@utah97 ~]# snmptable -v2c -cpublic -M/usr/share/snmp/mibs -mall localhost HOST-RESOURCES-MIB::hrStorageTable
SNMP table: HOST-RESOURCES-MIB::hrStorageTable

hrStorageIndex hrStorageType hrStorageDescr hrStorageAllocationUnits hrStorageSize hrStorageUsed hrStorageAllocationFailures
1 HOST-RESOURCES-TYPES::hrStorageRam Physical memory 1024 Bytes 264524232 246219724 ?
3 HOST-RESOURCES-TYPES::hrStorageVirtualMemory Virtual memory 1024 Bytes 268620224 246219724 ?
...
7680 HOST-RESOURCES-TYPES::hrStorageRemovableDisk Cisco CServer Storage 2097152 Bytes 135448398 179575 ?

The file system size and usage in bytes can be obtained by multiplying hrStorageAllocationUnits by hrStorageSize and hrStorageUsed, respectively.
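For example, using the Cisco CServer Storage row above (2097152-byte allocation units, hrStorageSize 135448398, hrStorageUsed 179575), the shell's 64-bit arithmetic gives the byte totals directly:

```shell
# Convert the hrStorageTable sample row above to bytes:
# allocation unit 2097152 bytes, hrStorageSize 135448398, hrStorageUsed 179575
units=2097152
size=135448398
used=179575
echo "total bytes: $((units * size))"   # 284055878762496
echo "used bytes:  $((units * used))"   # 376596070400
```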

Extensions to UCD-SNMP-MIB::dskTable

As with HOST-RESOURCES-MIB::hrStorageTable, this table has been extended to include CServer file system information. Along with size (total) and used, this table includes an available count as well as a percentage used.

[root@utah97 ~]# snmptable -v2c -cpublic -M/usr/share/snmp/mibs -mall localhost UCD-SNMP-MIB::dskTable
SNMP table: UCD-SNMP-MIB::dskTable

dskIndex dskPath dskDevice dskMinimum dskMinPercent dskTotal dskAvail dskUsed dskPercent dskPercentNode dskTotalLow dskTotalHigh dskAvailLow dskAvailHigh dskUsedLow dskUsedHigh dskErrorFlag dskErrorMsg
1 / /dev/sda8 100000 -1 25728092 21299064 3122100 13 2 25728092 0 21299064 0 3122100 0 noError
2 /arroyo/db /dev/sda2 100000 -1 70555056 66755080 215976 0 0 70555056 0 66755080 0 215976 0 noError
...
7680 Cisco CServer Storage -1 -1 ? ? ? 0 ? 2520412160 64 2152642560 64 367769600 0 noError


This table shows sizes in kilobyte units. The SNMP values used to store the size are 32-bit values, and can only represent file systems less than 2 TB in size in a single value. To accommodate file systems of 2 TB or larger, this table has Low and High values that combine to hold a 64-bit value. In the example output above, the total byte size would be computed as:

((64 * 2^32) + 2520412160) * 1024 bytes = 284,055,878,762,496 bytes

Extensions to SWRAID-MIB::swRaidTable

This table, added with COS 3.5.2, reports the configuration, unit count, and status of all software RAID objects in the system.

[root@utah26 mibs]# snmptable -v2c -cpublic -M/usr/share/snmp/mibs -mall localhost SWRAID-MIB::swRaidTable
SNMP table: SWRAID-MIB::swRaidTable

swRaidIndex swRaidDevice swRaidPersonality swRaidUnits swRaidUnitCount swRaidStatus
1 md4 raid1 sdb6[1] sda6[0] 2 active
2 md1 raid1 sdb2[1] sda2[0] 2 active
3 md2 raid1 sda3[0] sdb3[1] 2 active
4 md3 raid1 sdb5[1] sda5[0] 2 active
5 md0 raid1 sdb1[1] sda1[0] 2 active
6 md6 raid1 sdb8[1] sda8[0] 2 active
7 md5 raid1 sda7[0] sdb7[1] 2 active

In this example:

• Under swRaidStatus, all software RAID objects show as Active, meaning that they are both active and operational. Any objects that are operational but have one or more faulty partitions will show as Faulty, with an F appearing beside the faulty partition(s). Any objects that are not operational will show as Inactive.

• Under swRaidUnitCount, all software RAID objects show a count of 2, meaning that they are mirrored. If a mirror were missing, swRaidUnitCount would show a count of 1, and swRaidUnits would show only one partition.

Monitored Items

The default Net-SNMP configuration in /etc/snmp/snmpd.conf includes directives to monitor several items on the host COS node. This includes monitoring for a running cosd service, memory errors, Linux disk errors, and the Linux load average.

The default snmpd configuration has also been extended to include monitor directives to alert in the event of a CServer disk device status change, as well as to alert on overall CServer percentage of file system usage.

In the event that a disk status changes or file system usage percentage changes above or below certain thresholds, Net-SNMP generates a local trap that is received by the local snmptrapd process. The default behavior of snmptrapd is to log the event to /var/log/messages.

Disk device status change monitors are defined as follows:

monitor -r 60 -o hrDeviceIndex -o hrDeviceDescr -o hrDeviceStatus "Cisco device in unknown state" hrDeviceStatus == 1
monitor -S -r 60 -o hrDeviceIndex -o hrDeviceDescr -o hrDeviceStatus "Cisco device in running state" hrDeviceStatus == 2
monitor -r 60 -o hrDeviceIndex -o hrDeviceDescr -o hrDeviceStatus "Cisco device in warning state" hrDeviceStatus == 3
monitor -r 60 -o hrDeviceIndex -o hrDeviceDescr -o hrDeviceStatus "Cisco device in testing state" hrDeviceStatus == 4
monitor -r 60 -o hrDeviceIndex -o hrDeviceDescr -o hrDeviceStatus "Cisco device in down state" hrDeviceStatus == 5

Net-SNMP monitors for disk status changes every 60 seconds with this configuration. The following is an example trap generated by Net-SNMP for a disk device with warning status:

Sep 10 12:15:59 utah97 snmptrapd[23614]: 2014-09-10 12:15:59 localhost.localdomain [UDP: [127.0.0.1]:45956->[127.0.0.1]]:
Sep 10 12:15:59 utah97 DISMAN-EVENT-MIB::sysUpTimeInstance = Timeticks: (131) 0:00:01.31 SNMPv2-MIB::snmpTrapOID.0 = OID: DISMAN-EVENT-MIB::mteTriggerFired DISMAN-EVENT-MIB::mteHotTrigger.0 = STRING: Cisco device in warning state DISMAN-EVENT-MIB::mteHotTargetName.0 = STRING: DISMAN-EVENT-MIB::mteHotContextName.0 = STRING: DISMAN-EVENT-MIB::mteHotOID.0 = OID: HOST-RESOURCES-MIB::hrDeviceStatus.7698 DISMAN-EVENT-MIB::mteHotValue.0 = INTEGER: 3 HOST-RESOURCES-MIB::hrDeviceIndex.7698 = INTEGER: 7698 HOST-RESOURCES-MIB::hrDeviceDescr.7698 = STRING: Cisco csd19 SAS HDD HOST-RESOURCES-MIB::hrDeviceStatus.7698 = INTEGER: warning(3)

Further details for device status can be obtained from the cddm utility, described in CDDM Management Utility, page E-1. For example, in this particular case, the S.M.A.R.T. status of the device is in a warning state, as indicated by a smart_status of ADVISORY rather than OK.

[root@utah97 ~]# cddm -s all 19
--- csd19 ---
. . . . . .
smart_rd_uncorrected_errors 0
smart_startups 10
smart_status ADVISORY
smart_wr_corrected_errors_long 0
smart_wr_corrected_errors_short 0
smart_wr_correction_algorithm_use 0
. . . . . .

CServer file system usage monitors are defined as follows:

monitor -I -r 60 -o UCD-SNMP-MIB::dskPath.7680 -o UCD-SNMP-MIB::dskPercent.7680 "Cisco CServer Storage High" UCD-SNMP-MIB::dskPercent.7680 >= 98
monitor -S -I -r 60 -o UCD-SNMP-MIB::dskPath.7680 -o UCD-SNMP-MIB::dskPercent.7680 "Cisco CServer Storage Normal" UCD-SNMP-MIB::dskPercent.7680 < 98

Net-SNMP generates a high-usage event when the percentage used reaches 98 percent or above, and a normal-usage event when the percentage used falls back below 98 percent.

The following are example traps logged to /var/log/messages for high and normal usage status:

Sep 10 13:28:27 utah97 snmptrapd[23614]: 2014-09-10 13:28:27 localhost.localdomain [UDP: [127.0.0.1]:46969->[127.0.0.1]]:
Sep 10 13:28:27 utah97 DISMAN-EVENT-MIB::sysUpTimeInstance = Timeticks: (6137) 0:01:01.37 SNMPv2-MIB::snmpTrapOID.0 = OID: DISMAN-EVENT-MIB::mteTriggerFired DISMAN-EVENT-MIB::mteHotTrigger.0 = STRING: Cisco CServer Storage High DISMAN-EVENT-MIB::mteHotTargetName.0 = STRING: DISMAN-EVENT-MIB::mteHotContextName.0 = STRING: DISMAN-EVENT-MIB::mteHotOID.0 = OID: UCD-SNMP-MIB::dskPercent.7680 DISMAN-EVENT-MIB::mteHotValue.0 = INTEGER: 98 UCD-SNMP-MIB::dskPath.7680 = STRING: UCD-SNMP-MIB::dskPercent.7680 = INTEGER: 98

Sep 10 13:30:27 utah97 snmptrapd[23614]: 2014-09-10 13:30:27 localhost.localdomain [UDP: [127.0.0.1]:46969->[127.0.0.1]]:
Sep 10 13:30:27 utah97 DISMAN-EVENT-MIB::sysUpTimeInstance = Timeticks: (18138) 0:03:01.38 SNMPv2-MIB::snmpTrapOID.0 = OID: DISMAN-EVENT-MIB::mteTriggerFired DISMAN-EVENT-MIB::mteHotTrigger.0 = STRING: Cisco CServer Storage Normal DISMAN-EVENT-MIB::mteHotTargetName.0 = STRING: DISMAN-EVENT-MIB::mteHotContextName.0 = STRING: DISMAN-EVENT-MIB::mteHotOID.0 = OID: UCD-SNMP-MIB::dskPercent.7680 DISMAN-EVENT-MIB::mteHotValue.0 = INTEGER: 97 UCD-SNMP-MIB::dskPath.7680 = STRING: UCD-SNMP-MIB::dskPercent.7680 = INTEGER: 97

Manual customizations can be made to the snmpd and snmptrapd services to monitor additional items, and to forward traps and notifications to customer-specific operations.


Appendix A

Reference Information

This section contains additional reference material for further understanding the COS system, and information on performing commonly executed tasks and system maintenance.

COS Service Model

The COS service model is shown in the Unified Modeling Language (UML) diagram below:

Figure A-1 COS Service Model

A COS operator can assume one of the following roles:

• System Operator – Provisions the system resources and creates the Tenant. In Figure A-1, the system resources appear in yellow and the entities managed by the Tenant appear in green.

• Tenant – Creates service instances using pre-existing service templates. The Tenant provisions a service instance by assigning system resources to the service instance and configuring it. When the service instance is activated, all related configurations are performed and the service becomes available, producing the outputs appearing in blue in Figure A-1.

Using the V2PC GUI

COS Release 3.12.1 and its content are managed through the Cisco Virtualized Video Processing Controller (V2PC). The V2PC GUI has pages that allow you to monitor and update many aspects of the deployment, including COS nodes and clusters.

This section describes the COS-related operations available from the V2PC GUI. For additional information, see the Cisco Virtualized Video Processing Controller User Guide for your V2PC release.

Accessing the V2PC GUI

Step 1 Open a web browser and access the V2PC GUI at https://<v2pc-ip>:8443/.

Step 2 Log in to the V2PC GUI using the following credentials:

• User name: admin

• Password: default

The V2PC GUI opens to the Dashboard page, System Statistics tab.

Figure A-2 V2PC GUI Dashboard, System Statistics Tab


Dashboard Page

The Dashboard section contains the following pages:

• System Overview – provides general system information.

• Node Statistics – graphically displays CPU, memory, disk, and network utilization for each node and server on 24-hour timelines by region, provider, and zone.

• Alarms & Events – lists alarm and event notifications for each node, showing source, status, category, severity, and relevant details.

Cisco Cloud Object Store (COS) Page

The Cisco Cloud Object Store (COS) section contains the following pages:

• COS IP Pools – Lets you add, edit, or delete IP pools, and assign or update IP address ranges.

• COS Clusters – Lets you define asset redundancy policy and configure resiliency for the cluster.

• COS Node Profiles – Lets you add or delete COS node profiles and assign profiles to a COS cluster.

• COS Nodes – Lets you add COS nodes to a COS cluster, remove already decommissioned nodes from a cluster, and change the node Admin State (Inservice or Maintenance).

• COS Service Status – Lists the nodes in the cluster; each node list item can be expanded to show the current status of the associated disks, interfaces, and services.

• COS Service Statistics – Lets you view statistics associated with all constituent COS services.

COS Network Ports and Services

The following table identifies open network ports for COS nodes and the services that own these ports.

Table A-1 COS Network Ports and Services

Scope/Interface Port Purpose Owning Service

Management TCP 7000 Cassandra internode communication cassandra

Management TCP 7199 Cassandra JMX communication cassandra

Management TCP 9042 Cassandra CQL native transport port cassandra

Management TCP 9160 Cassandra Thrift client API cassandra

Management TCP 9090 Cosd request listener cosd

Data TCP 80 HTTP traffic for Swift and Swauth interfaces

cserver

Data UDP 3478 STUN traffic cserver

Data UDP 48879 Internal COS node communication cserver

Data UDP 57005 Internal COS node communication cserver

Management UDP 123 Network Time Protocol ntpd

Management TCP 25 Postfix mail system traffic postfix

Local TCP 199 Simple Network Management Protocol snmpd


COS Maintenance

It may be necessary to shut down or reboot a COS node for conditions such as routine maintenance. You can shut down a COS node by placing it in Maintenance mode from either the command line or the V2PC GUI. You can also reboot a COS node from the CLI.

Note • You cannot reboot a COS node from the V2PC GUI.

• Putting a COS node in Maintenance mode shuts down the entire cluster to which it belongs. So, when using Maintenance mode, take care to avoid any impact to services provided by the affected COS cluster.

Command Line Reboot

To reboot a COS node, execute the reboot command from a terminal console or remote shell. The system begins a shutdown phase and then reboots.

• Any active HTTP and TCP sessions with the COS node data network interfaces are reset by the node. The client is responsible for retrying operations against the remaining COS nodes in the cluster.

• The COS management system will automatically update the DNS registry to remove listings for the COS node.

• The COS services will automatically be restarted when the system is back online after reboot.

• If the COS cluster has been configured to replicate data, copies of any object data residing on the COS node will be accessible from the remaining COS nodes.

Table A-1 COS Network Ports and Services (continued)

Scope/Interface Port Purpose Owning Service

Management UDP 161 Simple Network Management Protocol snmpd

Management UDP 162 SNMP Traps snmptrapd

Management TCP 22 Secure Shell (SSH) ssh

Management TCP 8301 Consul Service consul

Management TCP 8400 Consul Service consul

Management TCP 8500 Consul Service consul

Management TCP 8600 dnsmasq Service dnsmasq

Management TCP 3030 Sensu Client sensu

Switching Node Admin State from the GUI

To switch a COS node from Inservice to Maintenance mode or vice versa from the V2PC GUI:


Step 1 Open the GUI as described in Accessing the V2PC GUI, page A-2 and navigate to Cisco Cloud Object Store (COS) > COS Nodes.

Step 2 Locate the node in the COS Nodes list and click its Edit icon to open the Edit dialog for the node.

Step 3 In the Edit dialog, select Maintenance or Inservice as the new Admin State for the node.

Step 4 Click Save to save your changes and return to the COS Nodes page.

Node Decommissioning and Removal

COS lets you decommission a node at the CServer level. Decommissioning tells CServer to copy the node's data objects to other nodes in the cluster until the target number of mirror copies is reached. After the node is decommissioned, it can be removed from the cluster using either the V2PC GUI or the API.

Node decommissioning itself is currently a CLI-only operation. To decommission a node, run the script cserver-control.pl decommission, which is installed on the node at /opt/cisco/cos-aic-client/cserver-control.pl.

Because decommissioning can take several hours, the CLI does not monitor the process for completion. To check for completion, enter the command cserver-control.pl decommission --stats periodically until the response confirms that the operation is complete.
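The periodic check can be wrapped in a simple polling loop. The poll_until helper below is a sketch, and the completion string to match is an assumption; use whatever text the --stats response prints when the operation finishes.

```shell
# Sketch: run a command repeatedly until its output contains a pattern.
# poll_until is illustrative, not a Cisco tool.
poll_until() {
    pattern="$1"; interval="$2"; shift 2
    while :; do
        out=$("$@")
        if echo "$out" | grep -q "$pattern"; then
            echo "$out"
            return 0
        fi
        sleep "$interval"
    done
}

# On a COS node (hypothetical completion string, checked every 5 minutes):
# poll_until "complete" 300 \
#     /opt/cisco/cos-aic-client/cserver-control.pl decommission --stats
```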

After decommissioning is complete, you can safely remove the node using the GUI or the API. For instructions on removing a node from a cluster using the GUI, see Node Decommissioning and Removal, page A-5. For API information, see the Cisco Cloud Object Storage Release 3.12.1 API Guide.

Note • A node cannot be decommissioned after it has been removed from a cluster using the GUI or API. So, you must decommission a node before removing it.

• If a node is in the process of being decommissioned, decommissioning pauses if the node or any node in its cluster is placed in Maintenance mode. Decommissioning resumes when all nodes in the cluster are returned to Inservice mode.

• Decommissioning will not start if you try to decommission a node when it or any node in its cluster is already in Maintenance mode. Decommissioning can start only when every node in the cluster is returned to Inservice mode.

Verifying Node Removal from a Cluster

When you remove a node from a multi-node cluster through the GUI, the node is first decommissioned from the Cassandra database cluster, and then the Cassandra service and CServer are shut down. If you shut down the node before the Cassandra-level decommissioning completes, the node may still be considered part of the Cassandra cluster and listed in the nodetool status output of the remaining nodes, but in the down (DN) state. This state prevents you from adding new nodes to the cluster.

To avoid this issue, we recommend opening the COS AIC Client log before removing the node through the GUI. Inspect the log periodically to confirm that Cassandra decommissioning is completed before shutting down the node.

To inspect the log for node decommissioning from the Cassandra cluster:


Step 1 Use the Linux tail command to print new lines being added to the COS AIC Client log, followed by the Linux grep command to search for db-remove:

[root@Colusa-4T-72 ~]# tail -f /arroyo/log/cos-aic-client.log.20160506 | grep 'db-remove'

Step 2 Remove the node using the GUI and inspect the log for db-remove:

2016-05-06 23:01:29 UTC 127.0.0.1 aicc - Starting db-remove

Step 3 Inspect the log for Completed db-remove, which shows that the node has been removed from the Cassandra cluster:

2016-05-06 23:02:49 UTC 127.0.0.1 aicc - Completed db-remove

Step 4 To verify that CServer has also been shut down, inspect the log using tail (or cat) followed by grep for cserverControl-shutdown:

[root@Colusa-4T-72 ~]# tail /arroyo/log/cos-aic-client.log.20160506 | grep cserverControl-shutdown
2016-05-06 23:01:45 UTC 127.0.0.1 aicc - Completed cserverControl-shutdown

Step 5 To confirm completion of the removal process, inspect the log to ensure that no new messages are printed:

[root@Colusa-4T-72 ~]# tail -f /arroyo/log/cos-aic-client.log.20160506

2016-05-06 23:01:45 UTC 127.0.0.1 aicc - Deleted /arroyo/test/setupfile
2016-05-06 23:01:45 UTC 127.0.0.1 aicc - Deleted /arroyo/test/RemoteServers
2016-05-06 23:01:45 UTC 127.0.0.1 aicc - Deleted /var/tmp/.clusterId
2016-05-06 23:01:45 UTC 127.0.0.1 aicc - Deleted /tmp/.cosnodeinit
2016-05-06 23:02:49 UTC 127.0.0.1 aicc - Completed db-remove
2016-05-06 23:02:49 UTC 127.0.0.1 aicc - Deleted /var/tmp/.dbinitflag

Step 6 Run the command nodetool status cos on one of the remaining nodes in the cluster to confirm that the removed node is no longer listed as part of the cluster.
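The log checks in the steps above can be combined into a small script. This is a sketch run against sample log lines copied from this appendix; on an actual node, point LOG at the current /arroyo/log/cos-aic-client.log file for the date in question instead.

```shell
# Sketch: confirm both Cassandra-level removal and CServer shutdown completed
# before powering the node down. Sample log lines are from this guide.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
2016-05-06 23:01:29 UTC 127.0.0.1 aicc - Starting db-remove
2016-05-06 23:01:45 UTC 127.0.0.1 aicc - Completed cserverControl-shutdown
2016-05-06 23:02:49 UTC 127.0.0.1 aicc - Completed db-remove
EOF

status="incomplete"
if grep -q 'Completed db-remove' "$LOG" && \
   grep -q 'Completed cserverControl-shutdown' "$LOG"; then
    status="removed"
fi
echo "$status"
rm -f "$LOG"
```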

Reinstalling a COS Node in a Cluster

It may become necessary to reinstall a COS node in a cluster in certain situations, such as:

• A change in the Linux partition size in a new COS version to allow for a larger database or log size.

• Linux file system corruption caused by a reboot (observed at least once on a CDE 465).

If necessary, reinstall a COS node as follows:

Step 1 Log into the V2PC GUI as described in Accessing the V2PC GUI, page A-2.

Step 2 From the V2PC GUI navigation panel, choose Cisco Cloud Object Store (COS) > COS Nodes.

Step 3 Locate the node to be removed and place the node in Maintenance mode as described in Switching Node Admin State from the GUI, page A-4.

Step 4 Decommission the node and remove it from the cluster as described in Node Decommissioning and Removal, page A-5.

Step 5 Is Linux still running on the node just removed?

• If yes, perform steps 1-6 of Verifying Node Removal from a Cluster, page A-5 to verify removal.

• If no, perform only step 6 of Verifying Node Removal from a Cluster, page A-5 to verify removal.


Step 6 Perform a fresh installation of the node using the full COS ISO image as described in the appropriate section of Deploying COS, page 2-1.

Step 7 On the Cisco Cloud Object Store (COS) > COS Nodes page of the V2PC GUI, locate the node and add it to the desired cluster.

Note • If you must reinstall a COS node immediately after removing it from a cluster, first verify that the node removal has completed in the associated Cassandra database cluster. To check the status of the Cassandra database cluster, execute nodetool status cos on the console of a remaining COS node. When removal has completed, the list will no longer contain the management IP address of the removed node.

• When adding more than one node to a COS cluster, wait at least 5 minutes after adding one COS node before adding the next. This ensures that the first addition to the associated Cassandra database cluster has completed. To check the status of the Cassandra database cluster, execute nodetool status cos on the console of a previously added COS node. When the node addition has completed, the list will include the management IP address of the new node, and its status will be UN, indicating that the node is up and operating normally.

Behavior of COS Services on COS Node Boot

There are four primary system services that provide COS functionality on a COS node: cassandra, cosd, cos_aicc, and cserver. These services can be manipulated using standard Linux system service tools.

To prevent these services from starting automatically on a COS node boot, execute the following commands as the root user from a shell on that node:

[root@cos-node-1 ~]# chkconfig cassandra off
[root@cos-node-1 ~]# chkconfig cosd off
[root@cos-node-1 ~]# chkconfig cos_aicc off
[root@cos-node-1 ~]# chkconfig cserver off

To enable automatic service loading on node boot, execute the following commands:

[root@cos-node-1 ~]# chkconfig cassandra on
[root@cos-node-1 ~]# chkconfig cosd on
[root@cos-node-1 ~]# chkconfig cos_aicc on
[root@cos-node-1 ~]# chkconfig cserver on

To view the current state of service loading, execute the following command:

[root@cos-node-1 ~]# chkconfig --list

atd             0:off 1:off 2:off 3:on 4:on 5:on 6:off
auditd          0:off 1:off 2:on 3:on 4:on 5:on 6:off
cassandra       0:off 1:off 2:on 3:on 4:on 5:on 6:off
consul          0:off 1:off 2:on 3:on 4:on 5:on 6:off
consul-template 0:off 1:off 2:on 3:on 4:on 5:on 6:off
cos_aicc        0:off 1:off 2:on 3:on 4:on 5:on 6:off
cosd            0:off 1:off 2:on 3:on 4:on 5:on 6:off
crond           0:off 1:off 2:on 3:on 4:on 5:on 6:off
cserver         0:off 1:off 2:on 3:on 4:on 5:on 6:off
dnsmasq         0:off 1:off 2:on 3:on 4:on 5:on 6:off
ip6tables       0:off 1:off 2:on 3:on 4:on 5:on 6:off
iptables        0:off 1:off 2:on 3:on 4:on 5:on 6:off
jexec           0:off 1:on 2:on 3:on 4:on 5:on 6:off
kdump           0:off 1:off 2:off 3:on 4:on 5:on 6:off


mcelogd         0:off 1:off 2:off 3:on 4:off 5:on 6:off
mdmonitor       0:off 1:off 2:on 3:on 4:on 5:on 6:off
monit           0:off 1:off 2:on 3:on 4:on 5:on 6:off
netconsole      0:off 1:off 2:off 3:off 4:off 5:off 6:off
netfs           0:off 1:off 2:off 3:on 4:on 5:on 6:off
network         0:off 1:off 2:on 3:on 4:on 5:on 6:off
nginx           0:off 1:off 2:off 3:off 4:off 5:off 6:off
ntpd            0:off 1:off 2:on 3:on 4:on 5:on 6:off
ntpdate         0:off 1:off 2:off 3:off 4:off 5:off 6:off
postfix         0:off 1:off 2:on 3:on 4:on 5:on 6:off
rdisc           0:off 1:off 2:off 3:off 4:off 5:off 6:off
restorecond     0:off 1:off 2:off 3:off 4:off 5:off 6:off
rsyslog         0:off 1:off 2:off 3:off 4:off 5:off 6:off
salt-minion     0:off 1:off 2:off 3:off 4:off 5:off 6:off
saslauthd       0:off 1:off 2:off 3:off 4:off 5:off 6:off
sensu-api       0:off 1:off 2:off 3:off 4:off 5:off 6:off
sensu-client    0:off 1:off 2:off 3:off 4:off 5:off 6:off
sensu-server    0:off 1:off 2:off 3:off 4:off 5:off 6:off
snmpd           0:off 1:off 2:on 3:on 4:on 5:on 6:off
snmptrapd       0:off 1:off 2:on 3:on 4:on 5:on 6:off
sshd            0:off 1:off 2:on 3:on 4:on 5:on 6:off
syslog-ng       0:off 1:off 2:on 3:on 4:on 5:on 6:off
udev-post       0:off 1:on 2:on 3:on 4:on 5:on 6:off

To manually start the services, execute the following commands:

[root@cos-node-1 ~]# service cassandra start
[root@cos-node-1 ~]# service cosd start
[root@cos-node-1 ~]# service cos_aicc start
[root@cos-node-1 ~]# service cserver start

COS Service Reliability

This section describes the response of the COS AIC server to changes in the states of disks, interfaces, and services on COS nodes. The COS AIC client and the Service Monitor convey the changes in state by sending appropriate events to the COS AIC server.

If a change is critical and indicates that the node cannot service requests, the AIC server ensures that the node interfaces are not part of the DNS so that service requests are not addressed to the node.

COS Node Disks

When the COS AIC server receives a status from the COS AIC client indicating the failure of one or more disks on a COS node, it processes that status as follows:

• If more than 50% of the disks are down, the node interfaces are removed from the DNS server. The Disk Status for that node is then set to Critical and an appropriate alarm is raised.

• If less than 50% of the disks are down, the Disk Status for that node is set to Warning and an appropriate alarm is raised.
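The threshold rule above can be sketched as a simple check. The helper below is illustrative, not part of COS; the guide does not specify how a node with exactly 50% of its disks down is classified, so this sketch treats that boundary case as Warning.

```shell
# Sketch: classify Disk Status from the fraction of failed disks.
# disk_status <disks down> <total disks>
disk_status() {
    awk -v down="$1" -v total="$2" 'BEGIN {
        # More than half down => Critical; otherwise Warning.
        # Exactly 50% is unspecified in the guide (assumed Warning here).
        if (down / total > 0.5) print "Critical"; else print "Warning"
    }'
}

disk_status 13 24   # more than half down
disk_status 3 24    # fewer than half down
```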

COS Services

If the COS AIC server receives a status from the COS AIC client indicating that one or more of the following COS services is down, the Service Status for the node is set to Critical, the node is marked as down, an appropriate alarm is raised, and its service interfaces are removed from the DNS.


• Cisco Cache Server (CServer)

• Cisco Cloud Object Storage Daemon (cosd)

• Cassandra Server

If the COS AIC server receives a status from the COS AIC client indicating that one or more of the following COS services is down, the Service Status for the node is set to Warning and an appropriate alarm is raised.

• NTP Daemon

• Cisco SNMP Agent

• Consul Agent

• Monit

• Sensu Client

COS Node Interfaces

When the COS AIC server receives a status from the COS AIC client indicating that one or more service interfaces are not functional, those interfaces are removed from the DNS.

Additionally, if 50% or more of the node's interfaces are reported as down, the node is considered non-operational and all of its interfaces are removed from the DNS. An appropriate alarm is also raised, and the Interface Status is set to either Critical (for 50% or more of the interfaces down) or Warning (for less than 50% down).

Server Reachability

The COS AIC server expects to receive a heartbeat message from the COS AIC client running on each node every 2 seconds. Abnormalities are processed as follows:

• If the heartbeat message ceases to arrive, the node is considered down, an alarm is raised, and all of its interfaces are removed from the DNS.

• If heartbeats still arrive but are slower than expected, the delay is noted in the COS AIC server log and the node is kept in service.
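The two heartbeat rules can be sketched as a classifier on the age of the last received heartbeat. The 2-second interval comes from this guide; the numeric "down" threshold below is purely an assumption for illustration, as the guide does not state how many missed heartbeats mark a node down.

```shell
# Sketch: classify a node from seconds elapsed since its last heartbeat.
# The 2 s expected interval is from the guide; the 10 s "down" cutoff is an
# assumed value, not a documented COS setting.
heartbeat_state() {
    local age="$1"                    # seconds since last heartbeat
    if [ "$age" -gt 10 ]; then
        echo "down"                   # heartbeats ceased: remove from DNS
    elif [ "$age" -gt 2 ]; then
        echo "slow"                   # delayed: log only, keep in service
    else
        echo "ok"
    fi
}

heartbeat_state 1
heartbeat_state 5
heartbeat_state 30
```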

COS Node Hard Drive Replacement

For instructions on replacing hard drives on the platforms that COS 3.12.1 supports, see the appropriate hardware installation guide.

Note Before replacing a hard drive that is not listed as sick, you must first logically remove the drive to stop any data transfer in progress and spin down the drive. To do this, execute the command cddm -r n, where n is the drive number. Do not proceed until a response confirms that it is safe to remove the drive.


Appendix B

Configuring Resiliency and Management Interface Bonding

To help protect against loss of assets due to network failure, COS Release 3.12.1 supports data resiliency:

• At the node level, enabling recovery of data lost due to drive failure within a node.

• At the cluster level, enabling recovery of data lost due to node failure within a cluster.

In addition, COS supports bonding of two management interface ports to provide redundant connectivity in the event of a network interface card (NIC) failure.

This section describes these features and provides instructions for configuring them through the COS setup file or (for data resiliency only) through the V2PC GUI.

Configuring Resiliency

COS provides resiliency at the node and cluster level using one of two methods: mirroring or erasure coding. Resiliency at the node level is achieved using either local mirroring (LM) or local erasure coding (LEC), which is the default. Similarly, resiliency at the cluster level is achieved using either remote mirroring (RM), which is the default, or distributed erasure coding (DEC).

You can configure resiliency either directly, by updating the COS file /arroyo/test/aftersetupfile, or indirectly, by creating and applying asset redundancy policies through the V2PC GUI. COS also allows for the configuration of mixed resiliency policies through the GUI.

Asset redundancy policies are assigned at the service endpoint level. Because an endpoint can only be associated with a single cluster and policy, asset redundancy policies are always applied at the cluster level, rather than at the node level. When you apply a policy to an endpoint, the COS AIC client writes the appropriate line(s) to the setup file for each node in the cluster.

Note We recommend using either remote mirroring or DEC, but not both. COS 3.12.1 does not support migration from one scheme to another while preserving stored content. COS treats remote mirroring and DEC as mutually exclusive options. DEC is preferred, so if both are enabled, COS uses DEC.

An endpoint can be configured in the GUI, but it must remain in Disabled state until it is associated with both a cluster and an asset redundancy policy. There is no default policy, policy type, or rule associated with an endpoint in the GUI. After an asset redundancy policy is applied to an endpoint, you can change to a different policy simply by applying it to the endpoint instead.


About Mirroring

Mirroring configures the disks in a node, or the nodes in a cluster, to hold a specified number of exact copies of the object data. The number of copies can be set from 1 to 4, with 1 representing the original object data only, and 4 representing the object data plus three copies. Thus, for example, a value of 2 specifies one copy plus the original object data.

To configure mirroring in the GUI, you specify the number of desired copies differently for local and remote mirroring:

• For local mirroring, the number you specify includes the original plus the number of local copies. For example, setting the number to 3 in the GUI specifies the original and two local copies.

• For remote mirroring, the number you specify includes only the number of remote copies. For example, setting the number to 3 in the GUI specifies three remote copies in addition to the (local) original.
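The two counting conventions can be made concrete with a small helper. This function is hypothetical, for illustration only; the GUI itself performs this translation.

```shell
# Sketch: total object instances (original + copies) implied by a GUI setting,
# per the counting rules above. Illustrative helper, not part of COS.
effective_copies() {
    local mode="$1" gui_value="$2"
    case "$mode" in
        local)  echo "$gui_value" ;;         # GUI value already includes the original
        remote) echo $((gui_value + 1)) ;;   # GUI value counts remote copies only
    esac
}

effective_copies local 3    # original + 2 local copies
effective_copies remote 3   # original + 3 remote copies
```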

Note For clusters of two or more nodes, always use DEC instead of remote mirroring.

About Erasure Coding

Erasure coding, or software RAID, is a method of data protection in which cluster data is redundantly encoded, divided into blocks, and distributed or striped across different locations or storage media. The goal is to enable any data lost due to a drive or node failure in the cluster to be reconstructed using data stored in a part of the cluster that was not affected by the failure.

COS 3.12.1 supports both local erasure coding (LEC), in which data is striped across the disks in a node, and distributed erasure coding (DEC), in which data is striped across nodes in a cluster.

For both LEC and DEC, data recovery is performed as a low-priority background task to avoid possible impact on performance. Even so, erasure coding provides faster data reconstruction than hardware RAID. Speed is important in maintaining resiliency, because a failed node cannot help to recover other failed nodes until it has fully recovered.

Defining Resiliency

Two key factors define the degree of resiliency of a given cluster configuration:

• Resiliency ratio (RR) functionally divides the disks or nodes in a cluster into data blocks and parity blocks. The RR of a cluster indicates how much of its total storage is devoted to protection against data loss. Conventionally, the RR is expressed as an N:M value, where N represents the number of data blocks and M is the number of parity blocks.

• Resiliency factor (RF) is the number of data (N) blocks that can fail at one time before losing the ability to fully recover any lost data. The RF is equivalent to the parity (M) value in the RR for the cluster, and can be selected in the GUI as described in About Mirroring, page B-2.

For LEC, COS assigns a default RR of 12:2, but you can choose another RR up to 18:4 maximum. For DEC, the cos-aic-client calculates the RR for a chosen cluster size and RF, but you can choose another RR up to 18:18 maximum. As you consider whether to use the defaults or assign new values, you must weigh a number of factors specific to your deployment, such as:

• The amount of storage that can be devoted to resiliency

• The degree of redundancy (RF) that must be achieved


• How quickly failed data blocks can be recovered

The following examples illustrate how these factors can be used to determine the best resiliency scheme for a particular COS node or cluster.

Example 1: LEC at Node Level

To see how the LEC resiliency ratio affects individual node behavior, consider a cluster in which each node has 4 TB disks, runs at 80% full, and can maintain a data read rate of 20 Gbps (2.5 GB/s).

If each node in the cluster has an assigned LEC RR of 10:1:

• Each node can recover from a single disk failure on its own, but if more than one concurrent disk failure occurs, DEC must be used to rebuild the data.

• 1 MB of parity data is created for every 10 MB of data written to each node, representing a storage overhead of 1/10 or 10%. Also, in the event of a lost disk, the rebuild logic must read in 10 MB of data for every 1 MB recovered, representing a rebuild overhead of 10:1.

• If a node loses a single disk when running at 80% full, then (4 TB x 0.8 =) 3.2 TB of data must be rebuilt. At a 20 Gbps data read rate and a 10:1 rebuild overhead, the data rebuild rate is (20 Gbps / 10 =) 2 Gb (256 MB) per second, or roughly 0.9 TB per hour. This means that 3.2 TB of data can be rebuilt in under 4 hours.

By comparison, if the same cluster were assigned the default LEC RR of 12:2 instead of 10:1:

• Each node can recover from two concurrent disk failures on its own, before using DEC.

• Each node stores 2 MB of parity for every 12 MB of data, representing a 17% storage overhead and a 12:1 rebuild overhead.

• At a data rebuild rate of (20 Gbps / 12 =) 0.75 TB per hour, recovery takes about 4.3 hours for one lost disk (3.2 TB), or 8.6 hours for two lost disks.
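The rebuild-time arithmetic in both scenarios follows a single pattern: rebuild rate = read rate divided by N, and rebuild time = data to rebuild divided by rebuild rate. A sketch of that calculation (using 1 GB = 8 Gb and 1 TB = 1000 GB, as in the figures above):

```shell
# Sketch of the rebuild-time arithmetic above.
# rebuild_hours <TB to rebuild> <read rate in Gbps> <N data blocks>
rebuild_hours() {
    awk -v tb="$1" -v gbps="$2" -v n="$3" 'BEGIN {
        rate_tb_per_hr = (gbps / n) * 3600 / 8 / 1000   # Gbps -> TB per hour
        printf "%.1f hours\n", tb / rate_tb_per_hr
    }'
}

rebuild_hours 3.2 20 10   # LEC 10:1 case: "under 4 hours"
rebuild_hours 3.2 20 12   # LEC 12:2 case: about 4.3 hours per lost disk
```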

Note • Actual data rebuild times are highly dependent on the capacity of the hardware components in the server. Servers that can deliver significantly more throughput reduce the rebuild time, while servers with less capacity increase the rebuild time. LEC uses local disk reads when recovering data, so the speed of the disk channel directly impacts recovery time.

• Rebuild activity runs at a lower priority than regular object reads and writes, in an effort to maintain normal performance during rebuild periods. However, because normal reads and writes take priority over repair activity, a busy server takes longer to rebuild data from a lost disk than an idle server, which can devote more bandwidth to repair.

• An exception to the lower priority for rebuild activity is made for on-demand repair work. This occurs when a client requests data that is waiting to be rebuilt due to a disk failure. In this case, the requested data is rebuilt immediately so that the repair work is transparent to the clients.

Example 2: DEC at Cluster Level

To see how the DEC resiliency ratio affects cluster configuration and behavior, consider a cluster of 11 nodes, with each node running 80% full, maintaining a data read rate of 16 Gbps (2 GB/s), and having a total storage capacity of 100 TB.

If the cluster has an assigned DEC RR of 8:2:

• The cluster requires a minimum of 10 active nodes (8 data plus 2 parity) to continue writing data at the assigned 8:2 resiliency ratio. With 11 nodes, this cluster can lose one node and still continue to provide the same resiliency for newly added objects.


• The cluster can recover from two concurrent node failures, but to do so, must have enough free disk space to hold the recovered data for two nodes. For an 11-node cluster, this means that 2/11, or just under 20%, of the total disk space must be kept free. This can be reduced to 10% if it is safe to assume that at least one replacement node can be brought online.

• 2 MB of DEC parity data is created for every 8 MB of data written to the cluster, representing a storage overhead of 2/8 or 25%. This compares favorably to mirroring, whose storage overhead is four times greater (100%) but only enables recovery from one node failure, rather than two. Also, in the event of a lost node, the rebuild logic must read in 8 MB of data for every 1 MB recovered, representing a rebuild overhead of 8:1.

• If one node is lost when running at 80% full, then (100 x 0.8 =) 80 TB of data must be rebuilt by the 10 remaining nodes in the cluster. Dividing the task equally among the remaining 10 servers, each must rebuild 8 TB of data. At a 16 Gbps data read rate and an 8:1 rebuild overhead, the data rebuild rate is (16 Gbps / 8 =) 2 Gb (256 MB) per second, or 0.9 TB per hour. At this rate, each node can rebuild its 8 TB share of the total lost data in 8.9 hours.

By comparison, if the same cluster were assigned a DEC RR of 8:1 instead of 8:2:

• The cluster can fully recover data from only a single failed node, but requires only half the free disk space (10%) of the 8:2 RR case to support this recovery.

• Each node stores 1 MB of parity for every 8 MB of data, representing a 12.5% storage overhead and an 8:1 rebuild overhead.

• Because the rebuild rate is determined by the N value (8), it is the same for 8:1 as for 8:2. At a data rebuild rate of (16 Gbps / 8 =) 0.9 TB per hour, recovery of one lost node (8 TB) takes 8.9 hours.

Note • Actual data rebuild times are directly affected by the number of nodes in the cluster. If there are fewer nodes in the cluster, the work is divided among fewer servers, resulting in longer rebuild times. DEC uses remote data reads across the network when recovering data, so the speed of the network channel directly impacts the recovery time.

• Actual data rebuild times are highly dependent on the capacity of the hardware components in the server. Servers that can deliver significantly more throughput reduce the rebuild time, while servers with less capacity increase the rebuild time.

• Rebuild activity runs at a lower priority than regular object reads and writes, in an effort to maintain normal performance during rebuild periods. However, because normal reads and writes take priority over repair activity, a busy cluster takes longer to rebuild data from a lost node than an idle cluster, which can devote more bandwidth to repair.

• An exception to the lower priority for rebuild activity is made for on-demand repair work. This occurs when a client requests data that is waiting to be rebuilt due to a disk failure. In this case, the requested data is rebuilt immediately so that the repair work is transparent to the clients.

Example 3: Using LEC and DEC Together

To see how LEC and DEC work together, consider the cluster configuration described in the previous examples, but with a LEC RR of 10:1 for each node and a DEC RR of 8:2 for the cluster as a whole.

With this configuration:

• The cluster can recover from two concurrent node failures as well as a single disk failure per node at the same time. A LEC RR of 10:1 means that a node must lose at least two disks concurrently before DEC is required to recover the lost data.


• Because the DEC parity data is distributed across the nodes in the cluster, it is further protected by LEC at the node level. This allows recovery from disk failure to be performed locally, including the DEC parity data stored on the node.

• The DEC RR of 8:2 means that every 10 MB block of data includes 8 MB for content storage and 2 MB for DEC parity. In addition, the LEC RR of 10:1 means that every 10 MB block of (content + DEC parity) data requires another 1 MB of parity data. So in total, every 8 MB of content storage requires (2 DEC + 1 LEC =) 3 MB of parity data, giving a total parity overhead of (3 / 8 =) 37.5%.

For any cluster configuration using both DEC and LEC, the total parity overhead is found by first applying the DEC overhead and then applying the LEC overhead, as follows:

Total Parity Overhead = (M1/N1) + ((1 + M1/N1) X (M2/N2))

where N1:M1 represents DEC parity and N2:M2 represents LEC parity.

Using the values from this example to illustrate:

Total Parity Overhead = (2/8) + ((1 + 2/8) X (1/10)) = 37.5%
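The formula can also be evaluated with a small helper, where N1:M1 is the DEC ratio and N2:M2 is the LEC ratio:

```shell
# Sketch of the combined DEC + LEC parity overhead formula above:
#   overhead = M1/N1 + (1 + M1/N1) * (M2/N2)
# total_parity_overhead <N1> <M1> <N2> <M2>
total_parity_overhead() {
    awk -v n1="$1" -v m1="$2" -v n2="$3" -v m2="$4" 'BEGIN {
        o = m1 / n1 + (1 + m1 / n1) * (m2 / n2)
        printf "%.1f%%\n", 100 * o
    }'
}

total_parity_overhead 8 2 10 1   # DEC 8:2 with LEC 10:1, per Example 3
```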

Note The parity calculations just described do not account for the additional free disk space needed for storage of recovered data. Be sure to include this requirement in your total overhead calculations. For example, a DEC RR of 8:2 requires free disk space for up to two failed nodes.

Configuring Resiliency Using the V2PC GUI

To configure resiliency through the V2PC GUI, open the GUI as described in Accessing the V2PC GUI, page A-2 and navigate to Cisco Cloud Object Store (COS) > COS Clusters.

Figure B-1 V2PC GUI, COS Clusters Page

Locate the cluster to be updated and click its Edit icon to enable it for editing. Choose the desired resiliency policy from the Asset Redundancy Policy drop-down list for the endpoint.


Note • If the desired policy does not appear, choose Service Domain Objects > Asset Redundancy Policies to confirm the policy exists. If not, click Add Row and create a new policy. Enter a profile name and expected cluster size, select a model (appropriate interfaces and names auto-populate), assign the profile to a cluster, configure a DNS server, and assign IP pools to each interface.

• The GUI lets you enter an expected cluster size when creating the COS node initialization profile. For DEC, the cos-aic-client then uses this cluster size to calculate a suitable N value given the expected cluster size and the selected resiliency factor M. Both values can still be overridden as described in Finding N:M Values, page B-9 by editing or creating the /arroyo/test/aftersetupfile COS file to hold them.

Caution When manually setting N:M (or any other) values that must persist, be sure to use aftersetupfile and not setupfile. The settings in setupfile can be overwritten by changes made via the GUI. Also, you must configure the aftersetupfile before registering the new node to V2PC. Otherwise, DEC settings within the cluster will be inconsistent, requiring at least one service-disrupting reboot to correct.

Configuring Local Mirroring Manually

To configure local mirroring on a node manually:

Step 1 Open (or if not present, create) the COS file /arroyo/test/aftersetupfile for editing.

Step 2 Include the line vault local copy count in the file and set the value to 2, 3, or 4 as appropriate.

Note Setting the value to 1 simply maintains the original data and creates no additional copies.

Step 3 Disable local erasure coding by setting allow vault raid to 0 (or simply omit or remove this line).

Example

# CServer core configuration. Changes to this file require a server reboot.
serverid 1
groupid 3333
arrayid 6666
. . . .
allow vault raid 0
vault local copy count 2
vault mirror copies 2
allow server raid 0
allow tcp traffic 1
. . . .
er_enable 0
rtp_enable 0


Configuring Local Erasure Coding Manually

To enable local erasure coding manually:

Step 1 Open (or if not present, create) the COS file /arroyo/test/aftersetupfile for editing.

Step 2 Set allow vault raid to 1 to enable LEC.

Step 3 Disable local mirroring by setting vault local copy count to 1 (or simply omit or remove this line).

Example

# CServer core configuration. Changes to this file require a server reboot.
serverid 1
groupid 3333
arrayid 6666
. . . .
allow vault raid 1
vault local copy count 1
vault mirror copies 2
allow server raid 0
allow tcp traffic 1
. . . .
er_enable 0
rtp_enable 0

Migrating from LM to LEC Manually

To migrate a service endpoint from local mirroring to local erasure coding:

Step 1 Temporarily leave local mirroring enabled for the service endpoint.

Step 2 Enable LEC for the service endpoint and let it establish the parity needed for each data object.

Step 3 When parity is established, disable local mirroring.

Configuring Remote Mirroring Manually

Note For clusters of two or more nodes, always use DEC instead of remote mirroring.

To enable and configure remote mirroring manually:

Step 1 Open (or if not present, create) the COS file /arroyo/test/aftersetupfile for editing.

Step 2 Set vault mirror copies to the value 2, 3, or 4 as appropriate to enable remote mirroring. The value you enter specifies the object data plus the number of exact copies desired.

Note Setting the value to 1 simply maintains the original data and creates no additional copies.


Step 3 Disable distributed erasure coding by setting allow server raid to 0 (or simply omit or remove this line).

Example:

# CServer core configuration. Changes to this file require a server reboot.
serverid 1
groupid 3333
arrayid 6666
. . . .
allow vault raid 0
vault local copy count 2
vault mirror copies 2
allow server raid 0
allow tcp traffic 1
. . . .
er_enable 0
rtp_enable 0

Configuring Distributed Erasure Coding Manually

To enable and configure distributed erasure coding manually:

Step 1 Open (or if not present, create) the COS file /arroyo/test/aftersetupfile for editing.

Step 2 Set allow server raid to 1 and add the following lines immediately below:

• target server raid data blocks <value>

This controls the number of data blocks used. The default <value> is 8, and the valid range is 1-18.

• target server raid parity blocks <value>

This controls the number of parity blocks used. The default <value> is 1, and the valid range is 1-18.

Note See Finding N:M Values, page B-9 to determine appropriate data block and parity block values.

Step 3 Disable remote mirroring by setting vault mirror copies to 0 (or simply omit or remove this line).

Example:

# CServer core configuration. Changes to this file require a server reboot.
serverid 1
groupid 3333
arrayid 6666
. . . .
allow vault raid 0
vault local copy count 2
vault mirror copies 1
allow server raid 1
target server raid data blocks 8
target server raid parity blocks 1
allow tcp traffic 1
. . . .
er_enable 0
rtp_enable 0
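Both target server raid values must fall within the documented 1-18 range. A small sketch of that bounds check (validate_dec_blocks is a hypothetical helper for illustration, not a COS utility):

```python
def validate_dec_blocks(data_blocks, parity_blocks):
    """Check DEC block counts against the documented valid range (1-18)."""
    for name, value in (("data", data_blocks), ("parity", parity_blocks)):
        if not 1 <= value <= 18:
            raise ValueError(
                f"target server raid {name} blocks must be 1-18, got {value}")
    return data_blocks, parity_blocks

# The documented defaults are 8 data blocks and 1 parity block
assert validate_dec_blocks(8, 1) == (8, 1)
```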


Finding N:M Values

To configure DEC, you must specify the number of data blocks (N) and parity blocks (M) used for data encoding. Table B-1 shows the corresponding data-to-parity block (N:M) values for a given number of nodes in a cluster and for a given degree of resiliency desired for the cluster. For details, see Defining Resiliency, page B-2.

Note COS does not currently support configuration of new N:M (data:parity) block values through the V2PC GUI. If you need to configure new N:M values, you must do so in the aftersetup file.

In this table:

• Nodes is the number of nodes in the cluster.

• RF is the desired resiliency factor, or number of nodes that can fail without data loss.

• Min is the minimum number of nodes required to achieve a given resiliency factor.

The ratios appearing in the cells of the table are N:M values, where N is the number of data blocks and M is the number of parity blocks needed to achieve the desired resiliency factor for a given node count.

To use the table to find the N:M values for a cluster:

Step 1 In the Nodes column, locate the row corresponding to the number of nodes in the cluster.

Note For COS 3.12.1, you must select the N:M configuration based upon the initial nodes in the cluster. COS does not currently support adding nodes to a cluster after DEC is configured for the cluster.

Step 2 Locate the column in the table whose header represents the desired RF value for the cluster.

Step 3 Find the corresponding N:M value at the intersection of the row and column just located.

Step 4 Configure DEC using N as the number of data blocks and M as the number of parity blocks.
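Assuming Table B-1 is transcribed into a lookup keyed by node count and resiliency factor, Steps 1 through 4 amount to a dictionary access. A sketch with a few representative table entries (the subset shown is illustrative; consult the full table for other node counts):

```python
# Subset of Table B-1: (nodes, resiliency factor) -> (data blocks N, parity blocks M)
NM_TABLE = {
    (5, 1): (3, 1),  (5, 2): (2, 2),
    (10, 1): (8, 1), (10, 2): (7, 2),
    (20, 1): (12, 1), (20, 2): (13, 2),
}

def find_nm(nodes, rf):
    """Return the (N, M) pair for a cluster size and resiliency factor."""
    if rf == 0:
        return (1, 0)  # RF = 0 never adds parity blocks
    try:
        return NM_TABLE[(nodes, rf)]
    except KeyError:
        raise ValueError(f"no supported N:M for {nodes} nodes at RF={rf}")

assert find_nm(10, 2) == (7, 2)
```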

Table B-1 lists the possible N:M values for DEC for 1-20 nodes and a resiliency factor (RF) of 0-4.

Caution Table B-1 includes two resiliency factors, RF = 3 and RF = 4, which are supported in COS 3.8.1 but not in COS 3.12.1, due to a change in the method used to calculate resiliency requirements for metadata. To ensure the desired resiliency for both objects and metadata with COS 3.12.1, use RF = 0, 1, or 2.

Table B-1 Possible N:M Values for DEC

Nodes RF = 0 RF = 1 RF = 2 RF = 3 * RF = 4 *

1 1:0 — — — —

2 1:0 — — — —

3 1:0 1:1 — — —

4 1:0 2:1 — — —

5 1:0 3:1 2:2 — —

6 1:0 4:1 3:2 — —


7 1:0 5:1 4:2 3:3 —

8 1:0 6:1 5:2 4:3 —

9 1:0 7:1 6:2 5:3 4:4

10 1:0 8:1 7:2 6:3 5:4

11 1:0 8:1 8:2 7:3 6:4

12 1:0 9:1 9:2 8:3 7:4

13 1:0 9:1 9:2 9:3 8:4

14 1:0 10:1 10:2 10:3 9:4

15 1:0 10:1 10:2 11:3 10:4

16 1:0 11:1 11:2 12:3 11:4

17 1:0 11:1 11:2 12:3 12:4

18 1:0 12:1 12:2 13:3 13:4

19 1:0 12:1 12:2 13:3 14:4

20 1:0 12:1 13:2 14:3 15:4

* These RF options are not supported in COS Release 3.12.1.

Table B-2 shows the total parity overhead (as described in Defining Resiliency, page B-2) for a given number of nodes and resiliency factor for each of two LEC values, 12:1 and 12:2.
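The values in the 12:1 columns of Table B-2 are consistent with combining the LEC and DEC overheads multiplicatively, that is, (1 + M_LEC/N_LEC) x (1 + M/N) - 1; the 12:2 columns appear to use the rounded 17% LEC overhead instead. A sketch of that calculation (an inference from the table values, not a formula documented by COS):

```python
def total_parity_overhead(lec_n, lec_m, dec_n, dec_m):
    """Combined LEC+DEC parity overhead as a fraction of usable data."""
    return (1 + lec_m / lec_n) * (1 + dec_m / dec_n) - 1

# 3 nodes at RF = 1 uses N:M = 1:1; with LEC 12:1 the table shows 117%
assert round(total_parity_overhead(12, 1, 1, 1) * 100) == 117
# 10 nodes at RF = 2 uses N:M = 7:2; with LEC 12:1 the table shows 39%
assert round(total_parity_overhead(12, 1, 7, 2) * 100) == 39
```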

Table B-2 Total Parity Overhead for LEC 12:1 and LEC 12:2

Nodes RF=0/12:1 RF=0/12:2 RF=1/12:1 RF=1/12:2 RF=2/12:1 RF=2/12:2 RF=3/12:1 RF=3/12:2 RF=4/12:1 RF=4/12:2

1 8% 17% — — — — — — — —

2 8% 17% — — — — — — — —

3 8% 17% 117% 134% — — — — — —

4 8% 17% 62% 76% — — — — — —

5 8% 17% 44% 56% 117% 134% — — — —

6 8% 17% 35% 46% 80% 95% — — — —

7 8% 17% 30% 41% 62% 76% 117% 134% — —

8 8% 17% 26% 37% 52% 64% 90% 105% — —

9 8% 17% 23% 34% 44% 56% 73% 87% 117% 134%

10 8% 17% 22% 32% 39% 51% 62% 76% 95% 111%

11 8% 17% 22% 32% 35% 46% 55% 67% 80% 95%

12 8% 17% 20% 30% 32% 43% 49% 61% 70% 84%

13 8% 17% 20% 30% 32% 43% 44% 56% 62% 76%

14 8% 17% 19% 29% 30% 41% 41% 52% 56% 69%

15 8% 17% 19% 29% 30% 41% 38% 49% 52% 64%


Replicating Objects During Swift Write Operations

While an object is being created or modified using Swift write operations, copies of the object data can be stored in real time on the local COS node and its peer nodes in the COS cluster. This functionality works only if the RAID feature on the node is disabled.

To disable the RAID feature on the node, open /arroyo/test/setupfile and set allow vault raid to 0.

To replicate object data on the node, open /arroyo/test/setupfile and set vault local copy count to a value greater than 1. This value specifies how many copies of the object data are stored on the node.

To replicate object data on the peer nodes, open /arroyo/test/setupfile and set vault mirror copies to a value greater than 1. This value specifies how many remote copies of the object data are maintained.
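Taken together, a Swift-write replication setup with RAID disabled comes down to three setupfile lines. A sketch that builds them (the helper and the copy-count values are illustrative, not a COS tool):

```python
def replication_settings(local_copies, remote_copies):
    """Build /arroyo/test/setupfile lines for Swift-write replication (RAID disabled)."""
    if local_copies < 1 or remote_copies < 1:
        raise ValueError("copy counts must be at least 1")
    return [
        "allow vault raid 0",                      # replication requires RAID disabled
        f"vault local copy count {local_copies}",  # copies kept on the local node
        f"vault mirror copies {remote_copies}",    # copies kept on peer nodes
    ]

print("\n".join(replication_settings(2, 2)))
```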

Configuring Management Interface Bonding

COS supports the ability to bond two NICs so that they appear to the host node as a single logical management interface. With management interface bonding, two ports on a node are defined as a primary-backup pair.

• For the CDE6032 and UCS S3260, the designated management ports are eth0 and eth1.

• For the UCS S3160, the designated management ports are eth0 and eth3.

• For the CDE465, the designated management ports are eth0 and eth1.

Note COS 3.12.1 supports bonding of management interfaces for resiliency, but not for improved performance.

If the two NICs in a node are bonded, the management link is maintained if one logical interface or its physical link is lost. The management link is also maintained if one physical link is disconnected and reconnected, and then the other physical link is disconnected. The management interface bonding feature itself remains enabled if the node is rebooted.

Table B-2 (continued) Total Parity Overhead for LEC 12:1 and LEC 12:2

Nodes RF=0/12:1 RF=0/12:2 RF=1/12:1 RF=1/12:2 RF=2/12:1 RF=2/12:2 RF=3/12:1 RF=3/12:2 RF=4/12:1 RF=4/12:2

16 8% 17% 18% 28% 28% 38% 35% 46% 48% 60%

17 8% 17% 18% 28% 28% 38% 35% 46% 44% 56%

18 8% 17% 17% 27% 26% 37% 33% 44% 42% 53%

19 8% 17% 17% 27% 26% 37% 33% 44% 39% 51%

20 8% 17% 17% 27% 25% 35% 31% 42% 37% 48%

Configuring Bonding Manually

To configure management interface bonding manually:

Step 1 Open (or if not present, create) the COS file /arroyo/test/aftersetupfile for editing.

Step 2 Add the line management bond <value>, where <value> is 0 to disable or 1 to enable the feature.

When enabled, one NIC serves as the primary management interface and the other as the backup interface.
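The management bond edit can be scripted. A minimal sketch that sets or appends the line in a list of aftersetupfile lines (set_management_bond is a hypothetical helper, not shipped with COS):

```python
def set_management_bond(lines, enable):
    """Return aftersetupfile lines with 'management bond' set to 1 or 0."""
    value = "management bond %d" % (1 if enable else 0)
    out, found = [], False
    for line in lines:
        if line.strip().startswith("management bond"):
            out.append(value)   # rewrite an existing setting in place
            found = True
        else:
            out.append(line)
    if not found:
        out.append(value)       # otherwise append the setting
    return out

assert set_management_bond(["serverid 1"], True) == ["serverid 1", "management bond 1"]
```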

APPENDIX C

COS Command Line Utilities

COS provides two command line utilities (CLIs), installed on a separate Linux system, that leverage the APIs exposed by the COS node:

• cos-swift — provides command-line access to the Swift API

• cos-swauth — provides command-line access to the Swauth API

This section explains how to install and use these CLI utilities.

Note For details on the Swift and Swauth APIs, see the Cisco Cloud Object Storage Release 3.12.1 API Guide.

Hardware Prerequisites

Before installing the COS CLI utilities, you must have the following hardware available:

• A COS node, release 2.1.1 or later, staged in either standalone or cluster mode

• A CentOS 6 Linux workstation on which to install the CLI utilities

Installing the CLI Utilities

Perform the following steps to install the COS CLI utilities.

Note In the examples shown below, commands that you enter are shown in bold, while system responses are shown in normal type. System responses are typical examples, and may vary by installation.

Step 1 Copy the COS 3.12.1 ISO image to the Linux workstation that will host the CLI utilities.

ll cos_repo-3.12.1.iso

-rw-r--r-- 1 root root 250613760 Dec 12 20:39 cos_repo-3.12.1.iso

Step 2 Mount the ISO image.

mount -oloop cos_repo-3.12.1.iso /mnt/cdrom

ls /mnt/cdrom

cos-3.12.1-0b8-local.repo local_repo_setup Packages repodata


Step 3 Install the utilities on the local workstation.

./local_repo_setup

Loaded plugins: fastestmirror, security
Loading mirror speeds from cached hostfile
 * base: mirror.cogentco.com
 * extras: mirrors.advancedhosters.com
 * updates: centos.mirror.nac.net
base                     | 3.7 kB  00:00
base/group_gz            | 216 kB  00:00
base/filelists_db        | 6.1 MB  00:00
base/other_db            | 2.8 MB  00:00
cos-COS-0b8              | 3.6 kB  00:00 ...
cos-COS-0b8/group_gz     | 536 B   00:00 ...
cos-COS-0b8/filelists_db | 66 kB   00:00 ...
cos-COS-0b8/primary_db   | 43 kB   00:00 ...
cos-COS-0b8/other_db     | 20 kB   00:00 ...
extras                   | 3.4 kB  00:00
extras/filelists_db      | 31 kB   00:00
extras/prestodelta       | 605 B   00:00
extras/other_db          | 37 kB   00:00
updates                  | 3.4 kB  00:00
updates/filelists_db     | 1.5 MB  00:00
updates/prestodelta      | 194 kB  00:00
updates/other_db         | 19 MB   00:02
Metadata Cache Created
Loaded plugins: fastestmirror, security
Loading mirror speeds from cached hostfile
 * base: mirror.cogentco.com
 * extras: mirrors.advancedhosters.com
 * updates: centos.mirror.nac.net
repo id          repo name                      status
base             CentOS-6 - Base                6,518
cos-COS-0b8      Cisco Cloud Object Store COS   57
extras           CentOS-6 - Extras              37
updates          CentOS-6 - Updates             748
repolist: 7,360

Step 4 Run the YUM installation for the COS CLI utilities.

yum install cos-cli

Loaded plugins: fastestmirror, security
Loading mirror speeds from cached hostfile
 * base: mirror.cogentco.com
 * extras: mirrors.advancedhosters.com
 * updates: centos.mirror.nac.net
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package cos-cli.x86_64 0:3.12.1-cos0.1 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package    Arch      Version          Repository     Size
================================================================================
Installing:
 cos-cli    x86_64    3.12.1-cos0.1    cos-COS-0b8    142 k


Transaction Summary
================================================================================
Install       1 Package(s)

Total download size: 142 k
Installed size: 635 k
Is this ok [y/N]: y
Downloading Packages:
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing : cos-cli-3.12.1-cos0.1.x86_64    1/1
  Verifying  : cos-cli-3.12.1-cos0.1.x86_64    1/1

Installed: cos-cli.x86_64 0:3.12.1-cos0.1

Complete!

The CLI utilities are now ready for use.

Using the CLI Utilities

cos-swauth

The cos-swauth CLI utility is used to manage authentication accounts and users in a COS cluster. This utility has the following form:

cos-swauth [-a <auth-ip>] [-u <admin-user>] [-k <admin-key>] [-h/--help <subcommand>]
[-v/--verbose] [subcommand <options>]

You can use the following parameters with this utility:

• -a <auth-ip>

– The IP address of the COS authentication service.

– The IP address can also be specified by using the COS_AUTH_IP environment variable.

• -u <admin-user>

– Name of the user authorizing the command.

– When unspecified, the user name defaults to .super_admin.

– The user name, except in the case of the super admin, must be specified in the form <account>:<user>.

– The user name can also be specified by using the COS_ADMIN_USER environment variable.

• -k <admin-key>

– The authorization key (password) of the user.

– The key can also be specified by using the COS_ADMIN_KEY environment variable.

• -h/--help <subcommand>

– Display help information for the specified subcommand.

• -v/--verbose

– Display detailed information as part of the output.

• subcommand <options>

– The subcommands include -l/list, -sa/set-account, -su/set-user, -g/get, -i/info, -d/delete.

– Each subcommand is associated with a set of relevant options. Table C-1 lists the subcommands and their options.
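The precedence between command-line flags and their environment-variable fallbacks can be sketched as follows (assumed behavior: the flag wins when both are set; the resolve helper is illustrative, not part of the utilities):

```python
import os

def resolve(flag_value, env_name):
    """Command-line value takes precedence; otherwise fall back to the environment."""
    return flag_value if flag_value is not None else os.environ.get(env_name)

os.environ["COS_AUTH_IP"] = "10.0.0.21"
assert resolve(None, "COS_AUTH_IP") == "10.0.0.21"         # env fallback
assert resolve("10.0.0.22", "COS_AUTH_IP") == "10.0.0.22"  # flag wins
```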

Table C-1 cos-swauth Subcommands and Their Options

Subcommand/Option Description

-l/list

No account specified List the accounts for the specified admin user.

<account> List the users belonging to the specified account.

-sa/set-account

<account> Create or update the specified account.

-s/--suffix Specify the suffix to be used in creating an account ID.

If a suffix is not provided, a random UUID will be used.

Note A suffix may only be specified for new accounts.

-e/--endpoint <name>:[<IP>] [-t/--type <service type>]

Specify a service endpoint for the account.

If the IP address is omitted, the specified endpoint will be deleted.

For the reserved endpoint name default, the name of the default endpoint for the service type must be specified instead of the IP address.

If unspecified, the service type will default to storage.

-su/set-user

<user> Create or update the details of the specified user.

-K/--key <user key> Specify the user’s key (password).

This option can be used by admin or reseller-admin users to assign a password to a new user. It can also be used by existing users to change their password.

-A/--admin Assign the user admin privileges for the specified account.

Account admins can create, delete, or modify users within an account, and have admin privileges in the services associated with the account.

The user privileges default to -N/--normal if the -A/--admin and -R/--reseller_admin options are omitted.


-R/--reseller_admin Assign the user reseller-admin privileges for the COS cluster.

Reseller-admins can create, delete, or modify accounts, and have all the privileges of an account admin in the accounts defined in the COS authentication service.

The user privileges default to -N/--normal if the -A/--admin and -R/--reseller_admin options are omitted.

-N/--normal Specify that the user has no special privileges for the specified account.

This option can also be used to withdraw the existing privileges of a user.

The user privileges default to -N/--normal if a privilege option is not specified.

-g/get

<account>:<user> Get an authentication token and storage service URL for the specified user.

-K/--key <user key> Specify the user’s key (password).

This option may be omitted if the user does not have a key.

-e/--endpoint Retrieve the service endpoint catalog associated with the user account.

-i/info

<account> Retrieve account information, including the service endpoint catalog.

<account>:<user> Retrieve detailed user information.

-d/delete

<account> Delete the specified account.

Note Only an empty account can be deleted.

<account>:<user> Delete the specified user.

cos-swift

The cos-swift CLI utility can be used to manage storage accounts, storage containers, and storage objects in a COS cluster. This utility has the following form:

cos-swift [-t <auth-token>] [-a <storage-url>] [-v/--verbose] [-h/--help <subcommand>] [subcommand <options>]

You can use the following parameters with this utility:

• -t <auth-token>

– The authentication token returned by the cos-swauth get operation.


– The authentication token can also be specified by using the COS_AUTH_TOKEN environment variable.

• -a <storage-url>

– The storage URL returned by the cos-swauth get operation.

– The storage URL can also be specified by using the COS_STORAGE_URL environment variable.

• -v/--verbose

– Display detailed information as part of the output.

• -h/--help <subcommand>

– Display help information for the specified subcommand.

• subcommand <options>

– The subcommands include -i/info, -l/list, -s/set, -g/get, -u/update, -d/delete.

– Each subcommand is associated with a set of relevant options. Table C-2 lists the subcommands and their options.

Table C-2 cos-swift Subcommands and Their Options

Subcommand/Option Description

-i/info

No container or object specified. Display metadata for the storage account in the storage URL.

<container name> Display metadata for the specified storage container.

<container name>/<object name> Display metadata for the specified storage object.

-l/list

No container specified. List the containers of the storage account in the storage URL.

<container name> List the objects of the specified storage container.

<container name>/<object path> Form a prefix by appending a slash to the specified object path if the path does not end with a slash.

Then, list the objects of the specified storage container whose names have this prefix.

-L/--limit <value> Restrict the number of entries listed to the specified value or fewer.

-m/--marker <value> List the storage containers or objects with names greater than the specified marker value.

-e/--end_marker <value> List the storage containers or objects with names less than the specified marker value.

-p/--prefix <value> List the storage containers or objects whose names begin with the specified prefix.

-d/--delimiter <character> List the storage containers or objects whose names do not contain the specified delimiting character.

Note If the -p/--prefix <value> option is used, the delimiting character may be present in the specified prefix.


-V/--Verbose If no container name is specified, display the metadata of the containers belonging to the storage account in the storage URL. For each container, the container name, the number of subordinate objects, and the total content size of all subordinate objects are displayed.

If a container name is specified, display the metadata of the objects in that storage container. For each object, the object name, MD5 checksum, content type, content size, and modification timestamp are displayed.

-s/set

<container name> Create a container with the specified name.

<container name>/<object name> Create an object with the specified name within the specified container.

-m/--meta <name>:<value> Create a custom metadata item with the specified name and value. This option may be repeated multiple times.

-r/--read <ACL> Specify a read access control list (ACL) for a container. Specifying an empty string ("") as the ACL deletes the existing ACL.

-w/--write <ACL> Specify a write access control list (ACL) for a container.

-f/--file <name> Upload the named file as object content. If an existing object is specified, this file replaces the existing object content.

-T/--type <content type> Specify the content type of an object. The specified content type must be a valid MIME type.

-u/update

No container or object specified. Update the metadata of the storage account in the storage URL.

<container name> Update the metadata of the specified container.

<container name>/<object name> Update the metadata of the specified object in the container.

-m/--meta <name>:<value> Create a custom metadata item with the specified name and value. This option may be repeated multiple times.

-r/--read <ACL> Specify a read access control list (ACL) for a container. Specifying an empty string ("") as the ACL deletes the existing ACL.

-w/--write <ACL> Specify a write access control list (ACL) for a container.

-g/get

<container name>/<object name> Retrieve the content of the specified object.

-f/--file <name> Store the retrieved object content in the specified file.

Note If this option is not used, the object content is directed to stdout.

-d/delete


<container name> Delete the specified container.

Note Only an empty container can be deleted.

<container name>/<object name> Delete the specified object.
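The listing filters described for -l/list (-L/--limit, -m/--marker, -e/--end_marker, -p/--prefix) follow common Swift listing semantics. Assuming that behavior (markers and prefix filter by name, limit truncates), they can be sketched as:

```python
def list_names(names, limit=None, marker=None, end_marker=None, prefix=None):
    """Apply Swift-style listing filters to a collection of names."""
    out = []
    for name in sorted(names):
        if prefix is not None and not name.startswith(prefix):
            continue                 # -p/--prefix: keep only matching names
        if marker is not None and name <= marker:
            continue                 # -m/--marker: names strictly greater
        if end_marker is not None and name >= end_marker:
            continue                 # -e/--end_marker: names strictly less
        out.append(name)
    return out[:limit] if limit is not None else out

names = ["a1", "a2", "b1", "b2"]
assert list_names(names, prefix="a") == ["a1", "a2"]
assert list_names(names, marker="a2") == ["b1", "b2"]
assert list_names(names, limit=3) == ["a1", "a2", "b1"]
```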


APPENDIX D

PXE Network Installation

Beginning with Release 3.5.2, COS supports network installation using the Intel Preboot Execution Environment (PXE). PXE provides a way to download a network bootstrap program (NBP) to remotely boot or install a client. When combined with the Red Hat Enterprise Linux network installation feature using NFS, FTP, or HTTP, PXE can be used to install a COS client over the network.

Setting up a PXE installation for COS requires the following steps:

• Setting up a DHCP or Proxy DHCP Server, page D-1

• Configuring TFTP for PXE, page D-5

• Setting up a Network Installation Server, page D-9

• Enabling PXE Boot in BIOS, page D-12

These procedures are based on a Red Hat Enterprise Linux (RHEL) or CentOS Linux 6.4 distribution, and assume that yum has already been configured to install the necessary packages.

Note Each network service may require that you apply filter rules to the iptables firewall service. The steps described below account for any firewall filters needed. In a lab or other environment where the iptables service has been disabled, you do not need to apply filter rules.

Setting up a DHCP or Proxy DHCP Server

A DHCP or Proxy DHCP server is needed to supply the necessary PXE boot options. These options include the name of the NBP and the location of the TFTP server from which the NBP can be downloaded.

A Proxy DHCP server is an alternative to a DHCP server for PXE when an existing DHCP server cannot be directly administered. A DHCP server and a Proxy DHCP server can coexist on the same network. A Proxy DHCP server only responds to PXE client requests, and only supplies PXE boot information to a PXE client. A DHCP server is still needed to provide an IP address to the PXE client.

If there is an existing DHCP server on the network segment where the COS nodes are being configured, skip the DHCP server installation and continue with the configuration of DHCP PXE options. If you cannot modify the DHCP server, you must configure a Proxy DHCP server for PXE.


Installing and Configuring the DHCP Server for PXE

A PXE client acquires its IP address from a DHCP server. The DHCP server package (dhcp) provided by RHEL 6.4 contains the Internet Systems Consortium (ISC) DHCP server, version 4.1.1.

Install the ISC DHCP Server

Install the DHCP server package dhcp as follows:

# yum install dhcp

Configure DHCP Address Pool and Options

Edit the DHCP configuration file /etc/dhcp/dhcpd.conf to define lease options and an address pool for DHCP clients. Use values appropriate to the network segment where the DHCP server will reside.

The following example shows how to define a subnet along with a router and address pool.

#
# DHCP Server Configuration file.
# see /usr/share/doc/dhcp*/dhcpd.conf.sample
# see 'man 5 dhcpd.conf'
#

authoritative;

option domain-name "cisco.com";
option domain-name-servers 10.1.0.10, 10.2.0.20;
default-lease-time 1800;
max-lease-time 7200;

subnet 10.0.0.0 netmask 255.255.255.0 {
    option subnet-mask 255.255.255.0;
    option routers 10.0.0.1;

    pool {
        range 10.0.0.100 10.0.0.199;
    }
}

Defining DHCP PXE Options and Class

Add support for PXE clients to the DHCP configuration file as shown in the following example:

# define options for the PXE class
option space PXE;
option PXE.discovery-control code 6 = unsigned integer 8;

class "PXE" {
    match if substring (option vendor-class-identifier, 0, 9) = "PXEClient";

    # Set "vendor-class-identifier" (option 60) to "PXEClient" in the DHCP answer.
    # If this option is not set, the PXE client will ignore the answer.
    option vendor-class-identifier "PXEClient";

    # Set TFTP server address (option 66) and bootstrap file name (option 67).
    option tftp-server-name "10.0.0.5";
    option bootfile-name "pxelinux.0";

    # Set PXE discovery control to tell the client to use the boot parameters
    # supplied by the DHCP offer and not try to discover the boot service.
    vendor-option-space PXE;
    option PXE.discovery-control 11;
}

This code defines the four DHCP options that must be provided to the PXE client in order to boot:

• Vendor class identifier

• Bootstrap file name

• Location of TFTP server

• PXE discovery control

It also defines a PXE class, which is applied to incoming DHCP requests from PXE clients that use the vendor class identifier PXEClient.

Defining Known Clients (Optional)

It is possible to configure DHCP to only respond to known clients.

To declare a known client:

Step 1 Add a host statement to the DHCP configuration file.

host utah10 {
    hardware ethernet 00:25:90:e4:50:a2;
    option host-name "utah10";
    #fixed-address 10.0.0.10;
}

Step 2 Add the statement allow known-clients to the address pool declaration.

pool {
    range 10.0.0.100 10.0.0.199;
    allow known-clients;
}

Note To use a static address for a host, use the fixed-address statement in the host declaration.

Configuring IPTables for DHCP

The ISC DHCP service uses raw sockets that bypass IP filtering, so you do not need to add a rule for iptables.

However, if you need a rule for DHCP, add the rule

-A INPUT -p udp -m state --state NEW -m udp --dport 67:68 -j ACCEPT

to the iptables rules table /etc/sysconfig/iptables to allow UDP ports 67 and 68, as shown in the following example:

*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
-A INPUT -p udp -m state --state NEW -m udp --dport 67:68 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT

Note You must insert any new rules before the final INPUT and FORWARD chain rules. Otherwise, the new rules are ignored.

Enabling and Starting DHCP Service

Enable and start the DHCP service as follows:

Step 1 Change the SELinux enforcing mode from Enforcing to Permissive.

# setenforce Permissive

Note • This setting allows the DHCP server to set file permissions on its lease files.

• This setting reverts to the default value after the server reboots. To avoid this, open the file /etc/sysconfig/selinux in an editor and change the related line to SELINUX=permissive.
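The edit described in the second bullet can be scripted. The following sketch (the helper name is hypothetical, not part of COS) rewrites the SELINUX line in place; it defaults to the /etc/sysconfig/selinux path used above:

```shell
# Force SELINUX=permissive in an SELinux config file so the setting
# survives a reboot. Pass an alternate path as the first argument.
set_selinux_permissive() {
    cfg=${1:-/etc/sysconfig/selinux}
    # Replace the existing SELINUX= line, whatever its current value.
    sed -i 's/^SELINUX=.*/SELINUX=permissive/' "$cfg"
}
```

Lines such as SELINUXTYPE= are left untouched because the pattern anchors on SELINUX= exactly.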

Step 2 As root, enable the DHCP service dhcpd and start it.

# chkconfig dhcpd on
# service dhcpd start

Proxy DHCP Server for PXE

A proxy DHCP server is needed only when an existing DHCP server cannot be installed or configured for PXE. The ISC DHCP server cannot be configured to act as a proxy server for PXE, so another solution must be used. One such solution is the Python implementation found on GitHub at:

https://github.com/gmoro/proxyDHCPd

This older Python source requires the following edit to run on a RHEL or CentOS 6.4 (x86_64) installation.

Installing proxyDHCPd

Step 1 Download the proxyDHCPd-master.zip file from GitHub and unzip it on the server that will act as the proxy DHCP server.

Step 2 In the proxyDHCPd-master/proxydhcpd subdirectory, edit the file net.py to change the line

return [namestr[i:i+32].split('\0', 1)[0] for i in range(0, outbytes, 32)]

to

return [namestr[i:i+40].split('\0', 1)[0] for i in range(0, outbytes, 40)]


Note This change is needed because the C structure that Python unpacks from the ioctl call used to get the list of interfaces changed in size with the 64-bit Intel build of RHEL 6.4.

Configuring proxyDHCPd

Step 1 Copy the sample configuration file proxy.ini to /etc/proxyDHCPd/proxy.ini:

# cp proxyDHCPd-master/proxy.ini /etc/proxyDHCPd/proxy.ini

Step 2 In the proxy.ini configuration file, edit the lines for listen_address, tftpd, and filename, and comment out or remove the vendor_specific_information line.

; None of the configuration options below are optional
[proxy]
listen_address=10.0.0.5
tftpd=10.0.0.5
filename=pxelinux.0
;vendor_specific_information="proxyDHCPd"

Here, listen_address is the IP address of the local interface on which the service listens, tftpd is the address of the TFTP server, and filename is the name of the bootstrap program.

Running proxyDHCPd

To start proxyDHCPd as a daemon, execute the following command:

# python proxyDHCPd-master/proxydhcpd.py -d

Configuring iptables for Proxy DHCP

In addition to using UDP ports 67 and 68 for DHCP, the Proxy DHCP server also listens on UDP port 4011. So, open port 4011 with iptables as well.
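Following the same pattern as the DHCP rule shown earlier, the additional rule for UDP port 4011 in /etc/sysconfig/iptables would look like this:

```
-A INPUT -p udp -m state --state NEW -m udp --dport 4011 -j ACCEPT
```

As with the other rules, insert it before the final INPUT and FORWARD REJECT rules.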

Configuring TFTP for PXE

PXE clients use TFTP to download the network bootstrap program (NBP).

Installing TFTP Server

Step 1 Install the xinetd and tftp-server packages, and then reload the xinetd service to pick up the changes.

# yum install tftp-server
# chkconfig tftp on
# service xinetd reload

The configuration file for the TFTP service is /etc/xinetd.d/tftp.


# cat /etc/xinetd.d/tftp
# default: off
# description: The tftp server serves files using the trivial file transfer \
#       protocol. The tftp protocol is often used to boot diskless \
#       workstations, download configuration files to network-aware printers, \
#       and to start the installation process for some operating systems.
service tftp
{
        disable         = no
        socket_type     = dgram
        protocol        = udp
        wait            = yes
        user            = root
        server          = /usr/sbin/in.tftpd
        server_args     = -s /var/lib/tftpboot
        per_source      = 11
        cps             = 100 2
        flags           = IPv4
}

Step 2 Use the default settings and set up the SysLinux boot loader pxelinux.0 in the TFTP root directory, which is /var/lib/tftpboot.

Configuring iptables for TFTP

Step 1 Add the filter rules

-A INPUT -p tcp -m state --state NEW -m tcp --dport 69 -j ACCEPT
-A INPUT -p udp -m state --state NEW -m udp --dport 69 -j ACCEPT

to the iptables rules file /etc/sysconfig/iptables to allow TCP and UDP port 69, as shown in the following example:

*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
-A INPUT -p udp -m state --state NEW -m udp --dport 67:68 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 69 -j ACCEPT
-A INPUT -p udp -m state --state NEW -m udp --dport 69 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT

Note You must insert any new rules before the final INPUT and FORWARD chain rules. Otherwise, the new rules are ignored.

Step 2 Once the new filter rules have been added, restart the iptables service:

# service iptables restart


Setting up the PXELINUX Bootstrap Program

After the TFTP server is installed, the next step is to set up the SysLinux PXELINUX bootstrap program.

Installing the Syslinux Package

Step 1 Install the Syslinux package, or extract its contents into a directory using rpm2cpio.

# yum install syslinux

Step 2 Copy the PXELINUX bootstrap program pxelinux.0 and the text menu program menu.c32 (or the graphical menu program vesamenu.c32) from the Syslinux package to the TFTP root directory at /var/lib/tftpboot/.

# cp /usr/share/syslinux/pxelinux.0 /var/lib/tftpboot/
# cp /usr/share/syslinux/menu.c32 /var/lib/tftpboot/

Configuring PXELINUX

Under /var/lib/tftpboot/, create the PXELINUX configuration directory pxelinux.cfg.

In this directory, create a boot configuration file for the client. This file is read by the PXELINUX boot loader and defines the install options for the client. The configuration file can be specific to a client, as determined by the Ethernet MAC address, IP address, or UUID string, or there can be a default configuration file named default.

The PXELINUX documentation at http://www.syslinux.org/wiki/index.php/PXELINUX shows the order in which the PXE client requests configuration files.

The following example shows the query order for configuration files for a client with UUID b8945908-d6a6-41a9-611d-74a6ab80b83d, IP address 192.168.2.91, and Ethernet MAC address 88:99:AA:BB:CC:DD:

mybootdir/pxelinux.cfg/b8945908-d6a6-41a9-611d-74a6ab80b83d
mybootdir/pxelinux.cfg/01-88-99-aa-bb-cc-dd
mybootdir/pxelinux.cfg/C0A8025B
mybootdir/pxelinux.cfg/C0A8025
mybootdir/pxelinux.cfg/C0A802
mybootdir/pxelinux.cfg/C0A80
mybootdir/pxelinux.cfg/C0A8
mybootdir/pxelinux.cfg/C0A
mybootdir/pxelinux.cfg/C0
mybootdir/pxelinux.cfg/C
mybootdir/pxelinux.cfg/default
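The IP-based names in this list are the client's dotted-quad address in uppercase hex (192.168.2.91 becomes C0A8025B), progressively truncated, and the MAC-based name is the ARP hardware type 01 plus the lowercased MAC with dashes. The derivation can be sketched as follows (the function names are hypothetical helpers, not part of PXELINUX):

```shell
# Convert a dotted-quad IPv4 address to the uppercase hex form that
# PXELINUX uses for configuration file names.
ip_to_pxe_hex() {
    IFS=. read -r a b c d <<EOF
$1
EOF
    printf '%02X%02X%02X%02X\n' "$a" "$b" "$c" "$d"
}

# Convert an Ethernet MAC address to the "01-" prefixed, lowercased,
# dash-separated file name form (01 is the ARP type for Ethernet).
mac_to_pxe_name() {
    printf '01-%s\n' "$1" | tr 'A-F:' 'a-f-'
}
```

For example, ip_to_pxe_hex 192.168.2.91 prints C0A8025B and mac_to_pxe_name 88:99:AA:BB:CC:DD prints 01-88-99-aa-bb-cc-dd, matching the entries above.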

The following example shows a PXELINUX configuration file for COS:

default menu.c32
prompt 0
timeout 100
ontimeout local

label cos
  menu label Install COS 3.12.1
  kernel cos/vmlinuz
  initrd cos/initrd.img
  append rdblacklist=e1000e,ixgbe ksdevice=eth0 ks=http://10.0.0.15/image/COS/3.12.1/ks/ks_auto.cfg repo=http://10.0.0.15/image/COS/3.12.1/ ks_zerombr text

label local
  menu label Boot from ^local drive
  localboot 0xffff

This PXELINUX configuration defines two menu labels. The cos label installs COS 3.12.1, and the local label causes the client to boot from the local hard drive. The timeout value is in tenths of a second; after the timeout expires, the ontimeout entry, local, is booted.

Kernel Boot Options

In the configuration example above:

• The append statement describes the kernel command-line boot options passed to the vmlinuz kernel.

• The rdblacklist option tells the initrd not to load the e1000e and ixgbe drivers. This allows the igb management port driver to load first as network device eth0.

• The ksdevice option tells anaconda which device to use for the network install. On a CDE460 with two Intel gigabit ports (eth0 and eth1), the ksdevice boot option is necessary to keep anaconda from prompting for the network device to use for the installation. This also applies to the UCS S3160 server, which has multiple enic network devices.

• The ks boot option indicates the location of the COS kickstart file, ks_auto.cfg. The repo option tells anaconda the location of the COS installation media.

• The ks_zerombr boot option tells anaconda to initialize the MBR of any drives with invalid partition tables. This is necessary when installing new hardware with uninitialized drives.

• The ks_baud_rate and text boot options are optional. ks_baud_rate is used to specify the serial terminal baud rate; the default rate is 9600. The text option tells anaconda to use a text-based rather than a GUI-based installation.

Setting up the Linux Kernel Executable

Finally, set up the Linux kernel executable vmlinuz and the initial RAM disk initrd that the PXELINUX boot loader uses to start Linux, as follows:

Step 1 In /var/lib/tftpboot/, create the directory cos.

Step 2 Copy the vmlinuz and initrd.img files from the COS full install DVD into this directory.

# mount -o loop cos_full-3.12.1-0b39-x86_64.iso /mnt/
# mkdir -p /var/lib/tftpboot/cos
# cp /mnt/images/pxeboot/vmlinuz /var/lib/tftpboot/cos/
# cp /mnt/images/pxeboot/initrd.img /var/lib/tftpboot/cos/

The directory structure of /var/lib/tftpboot/ should now appear as shown below:

/var/lib/tftpboot/cos
/var/lib/tftpboot/cos/initrd.img
/var/lib/tftpboot/cos/vmlinuz
/var/lib/tftpboot/menu.c32
/var/lib/tftpboot/pxelinux.0
/var/lib/tftpboot/pxelinux.cfg
/var/lib/tftpboot/pxelinux.cfg/default


Setting up a Network Installation Server

After the PXELINUX bootstrap program loads and starts the COS installation, anaconda installs the required packages from the COS installation media hosted on a remote NFS, FTP, or HTTP server.

The repo boot option, passed to the Linux kernel during installation, indicates which method to use for network installation. The format of this boot option is described in the Red Hat Enterprise Linux 6 Installation Guide.

The formats for the FTP, HTTP, and NFS installation methods are as follows:

FTP

repo=ftp://username:password@host/path

HTTP

repo=http://host/path

NFS

repo=nfs:host:/path

After choosing an installation method:

Step 1 Add the repo=xxx:// parameter for the method to the append keyword in the PXELINUX menu configuration file /var/lib/tftpboot/pxelinux.cfg/default.

Step 2 Set up the necessary installation server.

Setting Up the FTP Server

The FTP server can be accessed using anonymous FTP or with an account name and password. Unless you require an FTP account name and password, anonymous FTP is the easiest option.

To set up the FTP server:

Step 1 Install the FTP server package vsftpd.

# yum install vsftpd

Step 2 Enable the vsftpd service and start it.

# chkconfig vsftpd on
# service vsftpd start

Step 3 Create a directory for the COS installation media under /var/ftp.

Step 4 Either copy or mount the installation media to this directory.

The following example creates the media directory /var/ftp/image/COS/3.12.1/. To change the anonymous root location, edit /etc/vsftpd/vsftpd.conf and add an anon_root=/path option.
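For example, to relocate the anonymous root (the path below is illustrative, not from this guide), add a line of this form to /etc/vsftpd/vsftpd.conf:

```
anon_root=/data/ftp
```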

# mount -o loop cos_full-3.12.1-0b39-x86_64.iso /mnt/
# mkdir -p /var/ftp/image/COS/3.12.1
# cp -r /mnt/* /var/ftp/image/COS/3.12.1/


Step 5 Configure iptables for FTP by adding the filter rules

-A INPUT -p tcp -m state --state NEW -m tcp --dport 20 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 21 -j ACCEPT

to the iptables rules file /etc/sysconfig/iptables to allow TCP ports 20 and 21, as shown in the following example:

*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
-A INPUT -p udp -m state --state NEW -m udp --dport 67:68 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 69 -j ACCEPT
-A INPUT -p udp -m state --state NEW -m udp --dport 69 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 20 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 21 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT

Step 6 Restart the iptables service to read the new changes.

# service iptables restart

Setting Up the HTTP Server

To set up the HTTP server:

Step 1 Install the HTTP server package httpd.

# yum install httpd

Step 2 Enable the httpd service and start it.

# chkconfig httpd on
# service httpd start

Step 3 Create a directory for the COS installation media under /var/www/html.

Step 4 Copy or mount the installation media to this directory.

The following example creates the media directory /var/www/html/image/COS/3.12.1.

# mount -o loop cos_full-3.12.1-0b39-x86_64.iso /mnt/
# mkdir -p /var/www/html/image/COS/3.12.1
# cp -r /mnt/* /var/www/html/image/COS/3.12.1/

Step 5 Configure iptables for HTTP by adding the filter rule

-A INPUT -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT

to the iptables rules file /etc/sysconfig/iptables to allow TCP port 80, as shown in the following example:

*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
-A INPUT -p udp -m state --state NEW -m udp --dport 67:68 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 69 -j ACCEPT
-A INPUT -p udp -m state --state NEW -m udp --dport 69 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT

Step 6 Restart the iptables service to read the new changes.

# service iptables restart

Setting Up the NFS Server

To set up the NFS server:

Step 1 Install the NFS server package nfs-utils.

# yum install nfs-utils

Step 2 Edit /etc/sysconfig/nfs and uncomment the lines for LOCKD_TCPPORT, LOCKD_UDPPORT, MOUNTD_PORT, and STATD_PORT.

LOCKD_TCPPORT=32803
LOCKD_UDPPORT=32769
MOUNTD_PORT=892
STATD_PORT=662

Step 3 Enable the nfs service and start it.

# chkconfig nfs on
# service nfs start

Step 4 Create a directory for the COS installation media under /var/nfs.

Step 5 Copy or mount the installation media to this directory.

The following example creates the media directory /var/nfs/image/COS/3.12.1.

# mount -o loop cos_full-3.12.1-0b39-x86_64.iso /mnt/
# mkdir -p /var/nfs/image/COS/3.12.1
# cp -r /mnt/* /var/nfs/image/COS/3.12.1/

Step 6 Configure NFS exports by adding a read-only entry for the installation media path to the NFS exports file /etc/exports.

/var/nfs/image/COS/3.12.1 *(ro)

Step 7 Instruct NFS to re-read the /etc/exports file:

# exportfs -ra

Step 8 Configure iptables for NFS by adding the filter rules

-A INPUT -p tcp -m state --state NEW -m tcp --dport 2049 -j ACCEPT
-A INPUT -p udp -m state --state NEW -m udp --dport 2049 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 111 -j ACCEPT
-A INPUT -p udp -m state --state NEW -m udp --dport 111 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 32803 -j ACCEPT
-A INPUT -p udp -m state --state NEW -m udp --dport 32769 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 892 -j ACCEPT
-A INPUT -p udp -m state --state NEW -m udp --dport 892 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 662 -j ACCEPT
-A INPUT -p udp -m state --state NEW -m udp --dport 662 -j ACCEPT

to the iptables rules file /etc/sysconfig/iptables to allow TCP and UDP port 2049 for NFS. Also allow TCP and UDP port 111 (sunrpc), TCP port 32803 and UDP port 32769 (lockd), TCP and UDP port 892 (mountd), and TCP and UDP port 662 (statd).

*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
-A INPUT -p udp -m state --state NEW -m udp --dport 67:68 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 69 -j ACCEPT
-A INPUT -p udp -m state --state NEW -m udp --dport 69 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 2049 -j ACCEPT
-A INPUT -p udp -m state --state NEW -m udp --dport 2049 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 111 -j ACCEPT
-A INPUT -p udp -m state --state NEW -m udp --dport 111 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 32803 -j ACCEPT
-A INPUT -p udp -m state --state NEW -m udp --dport 32769 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 892 -j ACCEPT
-A INPUT -p udp -m state --state NEW -m udp --dport 892 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 662 -j ACCEPT
-A INPUT -p udp -m state --state NEW -m udp --dport 662 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT

Step 9 Restart the iptables service to read the new changes.

Enabling PXE Boot in BIOS

Enabling the PXE Option ROM

Server platforms vary in how the PXE option ROM is enabled for a network interface in BIOS. Only the management interface should have PXE enabled; disable PXE on all other network interfaces to speed up the boot process.

BIOS Boot Order

The boot order is typically arranged to boot from any bootable disks first, followed by any interfaces that have PXE enabled. In this case, a new machine from manufacturing with no bootable disks will attempt to boot using PXE. Once COS is installed, the machine will then continue to boot from disk.


Another option, possibly useful for a lab environment, is to allow a machine to always boot from the network first. In this case, the PXELINUX default menu option is set to boot from the local disk after the menu timer has expired. This allows a tester to choose whether to reinstall COS every time the machine restarts. After a small delay (10 seconds or so), the machine boots to disk.

Forcing PXE Boot Using IPMI

If the IPMI management interface is configured on the server, another option is to use the ipmitool utility that ships with COS to control the boot order. IPMI can also be used to force a PXE boot the next time the server is restarted. Using this method, a deployment service can push a new install image.

ipmitool -I lanplus -H 10.0.0.1 -U admin -P rootroot chassis bootdev pxe
ipmitool -I lanplus -H 10.0.0.1 -U admin -P rootroot chassis power reset


Appendix E

CDDM Management Utility

This appendix describes the management utility for COS Content Delivery Devices. The topics covered in this appendix include:

• Utility Name

• Synopsis

• Description

• Options

• Return Codes

• Examples

Utility Name

cddm - Content Delivery Device Management

Synopsis

cddm [options]

cddm [devnum]

cddm [options] [devnum]

Description

The management utility for COS Content Delivery Devices (CDDs), cddm, reports a variety of device information and can be used to manage device configuration, failure, and replacement.

For options requiring a device number, devnum can be set to any of the following:

• A single value 1 to n, where n is the number of storage devices

• A set of device numbers, such as 1,3,5,7

• A range of device numbers such as 13-24, where the second number must be greater than the first

• A set of ranges such as 1-8, 16-24


• The word all, which indicates that the option is to be applied to all devices

Note Only some options support multiple devices. Options that support multiple devices are indicated by an asterisk following the devnum in the option prototype.

Option --show=state is the default when no other options are specified.
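The devnum forms listed above can be illustrated with a small expansion sketch (expand_devnums is a hypothetical helper, not part of cddm; the keyword all is handled by cddm itself):

```shell
# Expand a cddm-style devnum specification ("1", "1,3,5,7", "13-24",
# "1-8, 16-24") into a space-separated list of device numbers.
expand_devnums() {
    # Strip any spaces, then split the spec on commas.
    spec=$(printf '%s' "$1" | tr -d ' ')
    out=
    oldIFS=$IFS; IFS=,
    for part in $spec; do
        case $part in
            *-*) out="$out $(seq -s ' ' "${part%-*}" "${part#*-}")" ;;
            *)   out="$out $part" ;;
        esac
    done
    IFS=$oldIFS
    printf '%s\n' "${out# }"
}
```

For example, expand_devnums "1-8, 16-24" prints the sixteen device numbers 1 through 8 and 16 through 24.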

Options

-a <attribute>, --attrib=<attribute> <devnum>*

This option reports the value of the specified attribute. If devnum is provided, attribute must be an attribute of the device. If devnum is not provided, attribute is considered a global attribute. For example, cddm --attrib=vendor 1 returns the name of the device vendor of device 1; and cddm --attrib=max_error_rate reports the global value for the maximum error rate health threshold applicable to all devices. See the --show option for sets of available attributes.

-C <on|off>,--slot_check=<on|off>

This option is used to check the proper connection of device cables and the working order of device lights. It causes the identify light (red LED) of the devices to illuminate in sequence, starting with the first slot and progressing to the last slot in roughly half-second steps. After all lights are on, all lights are turned off in the same order. This behavior repeats until this option is turned off.

-F, --noformat

This option is used to remove the default output formatting performed by other options. Only options subsequent to this option on the command line are affected.

-I, --interrogate devnum

This option creates a report of events for the specified device. The events in the report may come from one or multiple sources, all of which are identified in the report.

-i [on|off], --identify[=on|off] devnum

Turns on or off the identifying indicator of the specified device. For identify, if on or off is not specified, this option toggles the state. The identification is typically a slow blinking red light (roughly 1-second period) located at the front of the slot associated with the device. The identifying indicator remains on until turned off, or until the device fails, at which time the indicator stays on continuously.

-m devnum, --make_well devnum

Sets well again a device that was previously marked sick or removed.

Caution Be careful when using this option, as a sick device that is made well can negatively affect the data.

-r devnum, --remove devnum

Removes the specified device logically prior to being physically removed from the chassis. A logically removed device is dismounted from the file system and then either spun-down (if an HDD) or placed on standby (if an SSD).

When the logical removal process is completed, a notification is posted to the console indicating that the device can now be pulled. After being logically removed but before being pulled, the red identifying indicator on the device displays a fast blink (roughly half-second period or faster).


Note • It is always best to logically remove a device before pulling it from the chassis. A failed device does not need to be logically removed.

• A logically removed device cannot be placed back online until it has been removed from the chassis. All contents of a logically removed device are deleted when it is reinserted into the chassis.

-s <set,set,...>, --show=<set,set,...> <devnum>*

This option is used to show selected sets of global or device information.

Note • Sets that require a devnum cannot be mixed with sets that do not require a devnum.

• Collection of some device information can negatively affect device performance. To mitigate such impact, this information is automatically cached and updated, usually when the device is idle.

set selects one of the different categories of information defined as follows:

all devnum

Shows all available device information.

dev_spec

Shows device type specific information. This reports various points of information specific to the device technology, and is a function of vendor implementation.

errors devnum

Shows all error counters.

globals

Shows the global settings used to monitor devices.

location devnum

Displays the location of the given device within the chassis.

health devnum

Shows information related to a device's health.

phys devnum

Shows physical information about the device such as make, model, vendor, capacity, serial number, and so on.

raw

Causes some information sets to be reported in raw data format. For example, this option causes the SATA S.M.A.R.T. attributes reported in the dev_spec set to be reported as collected from the device, rather than interpreted.

smart devnum

Shows available Self-Monitoring, Analysis and Reporting Technology (S.M.A.R.T.) values. These values are technology specific (SCSI, ATA), and implementation varies from vendor to vendor.

state devnum

Shows device state information only.


stats devnum

Shows device statistical information.

update

This suboption is not an information set; instead, it forces a refresh of the device information cache, possibly causing a momentary impact on device performance.

-V, --versions

Reports version information for cddm and the CDD drivers.

-v, --version

Reports the version of cddm.

--VIOLATE_POLICY

This option disables a policy enforced by cddm. This option must precede all other options on the command line for which a policy is to be violated.

Note Recommended practice is to use this option only for troubleshooting or other temporary purposes.

-X, --unsuspend devnum

Note The -X option applies only to CDE470 systems. Its use with other platforms results in the display of an error message. Do not use this option with COS releases that do not support the CDE470.

This option cancels suspension of a suspended but not-yet-pulled device. (See the suspend option.)

-x [minutes], --suspend[=minutes] devnum

Note The -x option applies only to CDE470 systems. Its use with other platforms results in the display of an error message. Do not use this option with COS releases that do not support the CDE470.

The suspend option prepares a healthy device to be safely removed from the chassis, allowing it to be reinserted later without incurring data loss.

The volume on the suspended device is not dismounted from the file system, but is placed in a quiescent state. The file system does not attempt to access the volume during the suspend period. While a device is suspended, any necessary data on the device is reconstructed from data on other devices.

A device cannot remain suspended indefinitely due to a potential performance impact to the system. The default amount of time that a device may remain suspended is 20 minutes. A suspended and removed device that is reinserted before the suspend period expires is automatically and immediately placed online, and its volume is returned to full operation.

If the suspend period expires before the device is reinserted, the device is considered lost, and its volume is abandoned by the file system. A suspended device reinserted after the suspend period expires is considered a new device, and its current contents are discarded. If a suspended and removed device is replaced with another device, the suspended volume is discarded by the file system. The replacement device is considered new, and its current contents are discarded.

If a suspended device is not pulled from the chassis, it returns to online and normal operation when the suspend period expires. A suspended but not yet removed device can be unsuspended with the unsuspend option.


The minutes parameter may be provided to override the default suspend period. The suspend option with the minutes parameter may also be used on an already suspended device to modify the current suspend period.

Return Codes

cddm returns 0 if the command is successful. A negative return value indicates a system failure. A positive return value indicates one of the following errors generated within cddm:

• 101 — Invalid device

• 102 — Invalid option

• 103 — Invalid value

• 104 — CDD drivers are not loaded

• 105 — Incompatible drivers

• 106 — Device not found

• 107 — Policy violation

• 108 — Unsupported option

• 109 — System error
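A wrapper script can map these codes to messages. The sketch below (cddm_strerror is a hypothetical helper, not shipped with COS) mirrors the table above:

```shell
# Map a cddm return code to its meaning, per the return code table above.
cddm_strerror() {
    case $1 in
        0)   echo "Success" ;;
        101) echo "Invalid device" ;;
        102) echo "Invalid option" ;;
        103) echo "Invalid value" ;;
        104) echo "CDD drivers are not loaded" ;;
        105) echo "Incompatible drivers" ;;
        106) echo "Device not found" ;;
        107) echo "Policy violation" ;;
        108) echo "Unsupported option" ;;
        109) echo "System error" ;;
        *)   # Negative values indicate a system failure.
             if [ "$1" -lt 0 ] 2>/dev/null; then
                 echo "System failure"
             else
                 echo "Unknown code $1"
             fi ;;
    esac
}
```

A calling script might run cddm, capture $?, and pass it to this helper when reporting errors.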

Examples

cddm --show=all,dev_spec,smart 3

This command produces the following output, showing all available attributes:

alleged_media_errors 0
bytes_read 110080
bytes_written 4775341056
connection 21.2
device_type SAS HDD
dev_link_rate 6000
dev_max_operating_temp 69
dev_max_operational_starts 0
dev_mfg_wk
dev_mfg_yr
dev_temp 23
direct_submits 20766
dubious_LBAs 0
errors 0
errors_reported 0
errors_reported_rate 0
errors_to_reset 0
eval_wait_time 0 ms
hard_resets 0
location 1.3.1.0
max_transfersize 262144
media_error_rate 0
media_errors 0
model WD4001FYYG-01SL3
name csd3
print_flags 0x0
proc_flags 0x20
reqs_free 20


reqs_in_cb_queue 0
reqs_in_progress 0
reqs_lost 0
reqs_queued 0
requests 468662
reset_rate 0
resets 0
retries 0
rev VR07
sector_size 512
serial WMC1F1989829
sick_cnt 0
slot 3
smart_age 00.09.45
smart_glist_count 0
smart_nonmedium_errors 3466
smart_rd_corrected_errors_long 1
smart_rd_corrected_errors_short 114490
smart_rd_correction_algorithm_use 1
smart_rd_retries 1
smart_rd_total_bytes_processed 47071765710848
smart_rd_total_corrected_errors 114491
smart_rd_uncorrected_errors 0
smart_startups 0
smart_status OK
smart_wr_corrected_errors_long 124
smart_wr_corrected_errors_short 59641
smart_wr_correction_algorithm_use 124
smart_wr_retries 124
smart_wr_total_bytes_processed 10133345524736
smart_wr_total_corrected_errors 59765
smart_wr_uncorrected_errors 0
state 0x800007; DEV_ALLOCATED DEV_ATTACHED DEV_READY
timeout_comp_err 0
timeout_comp_max 0 ms
timeout_comp_min 0 ms
timeout_comp_ok 0
timeout_rate 0
timeouts 0
total_sectors 7814037167
vendor WD
smart_age 00.09.45
smart_glist_count 0
smart_nonmedium_errors 3466
smart_rd_corrected_errors_long 1
smart_rd_corrected_errors_short 114490
smart_rd_correction_algorithm_use 1
smart_rd_retries 1
smart_rd_total_bytes_processed 47071765710848
smart_rd_total_corrected_errors 114491
smart_rd_uncorrected_errors 0
smart_startups 0
smart_status OK
smart_wr_corrected_errors_long 124
smart_wr_corrected_errors_short 59641
smart_wr_correction_algorithm_use 124
smart_wr_retries 124
smart_wr_total_bytes_processed 10133345524736
smart_wr_total_corrected_errors 59765
smart_wr_uncorrected_errors 0
dev_link_rate 6000
dev_max_operating_temp 69
dev_max_operational_starts 0
dev_mfg_wk
dev_mfg_yr
dev_temp 23

cddm --attrib=bytes_written 1

Reports the number of bytes written to device 1 since the last system start; for example, 85269151744.

cddm --suspend=30 8

Suspends device 8 with a suspend period of 30 minutes.

cddm --remove 17

cddm --identify=on 42

cddm --slot_check=on
