Cisco Unified Computing System and Oracle RAC 11gR2 with Hitachi Virtual Storage Platform G1000

Cisco Data Center Solution for Oracle RAC 11gR2 (11.2.0.4) on Oracle Linux 6.4 using Cisco Unified Computing System and Hitachi Virtual Storage Platform G1000

Last Updated: October 14, 2014

Building Architectures to Solve Business Problems


About the Authors

Niranjan Mohapatra, Technical Marketing Engineer, CSPG, Cisco Systems

Niranjan Mohapatra is a Technical Marketing Engineer in Cisco Systems CSPG UCS Product Management and Data Center Solutions Engineering Group, and a specialist on Oracle RAC RDBMS. He has over 15 years of extensive experience on Oracle RAC Database and associated tools. Niranjan has worked as a TME and a DBA handling production systems in various organizations. He holds a Master of Science (MSc) degree in Computer Science and is also an Oracle Certified Professional (OCP-DBA). Niranjan also has a strong background in Cisco UCS, Hitachi Storage, and Virtualization.

Kishore Daggubati, Senior Oracle Solution Architect, Hitachi Data Systems

Kishore Daggubati's main focus is developing Unified Compute Platform based solutions for Oracle applications. In addition, Kishore defines Oracle reference architectures and implementation guides and supports Oracle technical presales. Kishore has 16+ years of experience in Oracle and Storage.

Acknowledgments

The following people were part of the team that made this Cisco Validated Design solution possible:

• Tushar Patel—Cisco Systems, Inc.

• YC Chu—Hitachi Systems

About Cisco Validated Design (CVD) Program

The CVD program consists of systems and solutions designed, tested, and documented to facilitate faster, more reliable, and more predictable customer deployments. For more information visit http://www.cisco.com/go/designzone.

ALL DESIGNS, SPECIFICATIONS, STATEMENTS, INFORMATION, AND RECOMMENDATIONS (COLLECTIVELY, "DESIGNS") IN THIS MANUAL ARE PRESENTED "AS IS," WITH ALL FAULTS. CISCO AND ITS SUPPLIERS DISCLAIM ALL WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OR ARISING FROM A COURSE OF DEALING, USAGE, OR TRADE PRACTICE. IN NO EVENT SHALL CISCO OR ITS SUPPLIERS BE LIABLE FOR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, OR INCIDENTAL DAMAGES, INCLUDING, WITHOUT LIMITATION, LOST PROFITS OR LOSS OR DAMAGE TO DATA ARISING OUT OF THE USE OR INABILITY TO USE THE DESIGNS, EVEN IF CISCO OR ITS SUPPLIERS HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.

THE DESIGNS ARE SUBJECT TO CHANGE WITHOUT NOTICE. USERS ARE SOLELY RESPONSIBLE FOR THEIR APPLICATION OF THE DESIGNS. THE DESIGNS DO NOT CONSTITUTE THE TECHNICAL OR OTHER PROFESSIONAL ADVICE OF CISCO, ITS SUPPLIERS OR PARTNERS. USERS SHOULD CONSULT THEIR OWN TECHNICAL ADVISORS BEFORE IMPLEMENTING THE DESIGNS. RESULTS MAY VARY DEPENDING ON FACTORS NOT TESTED BY CISCO.

CCDE, CCENT, Cisco Eos, Cisco Lumin, Cisco Nexus, Cisco StadiumVision, Cisco TelePresence, Cisco WebEx, the Cisco logo, DCE, and Welcome to the Human Network are trademarks; Changing the Way We Work, Live, Play, and Learn and Cisco Store are service marks; and Access Registrar, Aironet, AsyncOS, Bringing the Meeting To You, Catalyst, CCDA, CCDP, CCIE, CCIP, CCNA, CCNP, CCSP, CCVP, Cisco, the Cisco Certified Internetwork Expert logo, Cisco IOS, Cisco Press, Cisco Systems, Cisco Systems Capital, the Cisco Systems logo, Cisco Unity, Collaboration Without Limitation, EtherFast, EtherSwitch, Event Center, Fast Step, Follow Me Browsing, FormShare, GigaDrive, HomeLink, Internet Quotient, IOS, iPhone, iQuick Study, IronPort, the IronPort logo, LightStream, Linksys, MediaTone, MeetingPlace, MeetingPlace Chime Sound, MGX, Networkers, Networking Academy, Network Registrar, PCNow, PIX, PowerPanels, ProConnect, ScriptShare, SenderBase, SMARTnet, Spectrum Expert, StackWise, The Fastest Way to Increase Your Internet Quotient, TransPath, WebEx, and the WebEx logo are registered trademarks of Cisco Systems, Inc. and/or its affiliates in the United States and certain other countries.

All other trademarks mentioned in this document or website are the property of their respective owners. The use of the word partner does not imply a partnership relationship between Cisco and any other company. (0809R)

© 2014 Cisco Systems, Inc. All rights reserved.


Executive Summary

This Cisco Validated Design describes how the Cisco Unified Computing System™ can be used in conjunction with Hitachi Virtual Storage Platform (VSP) G1000 storage systems to implement an Oracle Real Application Clusters (RAC) database. The Cisco Unified Computing System provides the compute, network, and storage access components of the cluster, deployed as a single cohesive system. The result is an implementation that addresses many of the challenges that database administrators and their IT departments face today, including needs for a simplified deployment and operation model, high performance for Oracle 11gR2 RAC software, and lower total cost of ownership (TCO). This document introduces the Cisco Unified Computing System and provides instructions for implementing it.

Historically, enterprise database management systems have run on costly symmetric multiprocessing servers that use a vertical scaling (or scale-up) model. However, as the cost of one-to-four-socket x86-architecture servers continues to drop while their processing power increases, a new model has emerged. Oracle RAC uses a horizontal scaling, or scale-out, model, in which the active-active cluster uses multiple servers, each contributing its processing power to the cluster, increasing performance, scalability, and availability. The cluster balances the workload across the servers in the cluster, and the cluster can provide continuous availability in the event of a failure.

One approach used by database, system, and storage administrators to meet the I/O performance needs of applications is to deploy faster CPUs and high-performance drives. This may be a solution in environments with smaller databases and minimal movement in hot data sets. However, as databases grow they require more computing power, and the frequently accessed data sets change constantly. It becomes more difficult to identify data based on access frequency and redistribute it to the correct storage media.

Hitachi Virtual Storage Platform G1000 addresses these challenges with Hitachi Dynamic Provisioning software and Hitachi Dynamic Tiering software. Hitachi Dynamic Provisioning software provides efficient and cost-effective mechanisms to address capacity planning and database utilization management challenges. Hitachi Dynamic Tiering software extends these mechanisms to maximize the utilization of high-cost, high-performance storage media and supports automatic migration of frequently accessed data to address the data life-cycle management challenge.


Cisco is the undisputed leader in providing network connectivity in enterprise data centers. With the introduction of the Cisco Unified Computing System, Cisco is now equipped to provide the entire clustered infrastructure for Oracle RAC deployments. The Cisco Unified Computing System provides compute, network, virtualization, and storage access resources that are centrally controlled and managed as a single cohesive system. With the capability to centrally manage both blade and rack-mount servers, the Cisco Unified Computing System provides an ideal foundation for Oracle RAC deployments.

One key benefit of the Cisco Unified Computing System with Hitachi Virtual Storage Platform G1000 is the ability to customize the environment to suit a customer's requirements. For this reason, the reference architecture detailed in this document highlights the resiliency, cost benefit, and ease of deployment of an FC-based storage solution. A storage system capable of dynamic tiering and dynamic provisioning gives customers both performance and investment protection.

Target Audience

This document is intended to assist solution architects, project managers, infrastructure managers, sales engineers, field engineers, and consultants in planning, designing, and deploying Oracle Database 11g R2 RAC hosted on Cisco Unified Computing System and Hitachi Virtual Storage Platform G1000. This document assumes that the reader has an architectural understanding of the Cisco Unified Computing System, Oracle Database 11gR2 GRID Infrastructure, Oracle Real Application Clusters, Hitachi storage systems, and related software.

Purpose of this Guide

This Cisco Validated Design demonstrates how enterprises can apply best practices to deploy Oracle Database 11g R2 RAC using Oracle Linux, Cisco Unified Computing System, Cisco Nexus family switches, and Hitachi storage. This design solution shows the deployment and scaling of a four-node Oracle Database 11g R2 RAC database in a bare metal environment, using typical OLTP and DSS workloads to validate the stability, performance, and resiliency demanded by mission-critical data center deployments.

Benefits of the Configuration

Cisco and Oracle are working together to promote interoperability of Oracle's next-generation database and application solutions with the Cisco Unified Computing System, helping make the Cisco Unified Computing System a simple and reliable platform on which to run Oracle software.

Database administrators no longer need to painstakingly configure each element in the hardware stack independently, as the entire cluster runs on a single cohesive system. Cisco UCS Manager dynamically provisions network, compute, and storage access resources statelessly. This role-based and policy-based embedded management system handles every aspect of system configuration, from a server's firmware and identity settings to the network connections that carry storage traffic to the destination storage system. This capability dramatically simplifies the process of scaling an Oracle RAC configuration or rehosting an existing node on an upgraded server. Cisco UCS Manager uses the concept of service profiles and service profile templates to consistently and accurately configure resources. The system automatically configures and deploys servers in minutes, rather than the hours or days required by traditional systems composed of discrete, separately managed components. Indeed, Cisco UCS Manager can simplify server deployment to the point where it can automatically discover, provision, and deploy a new blade server when it is inserted into a chassis.


The system is based on a 10-Gbps unified network fabric that radically simplifies cabling at the rack level by consolidating both IP and Fiber Channel traffic onto the same rack-level 10-Gbps converged network. This "wire-once" model allows in-rack network cabling to be configured once, with network features and configurations all implemented by changes in software rather than by error-prone changes in physical cabling. This Cisco Validated Configuration not only supports physically separate public and private networks; it provides redundancy with automatic failover.

The Cisco UCS B-Series Blade Servers used in this configuration feature Intel Xeon E5-2697 v2 series processors that deliver intelligent performance, automated energy efficiency, and flexible virtualization. Intel Turbo Boost Technology automatically boosts processing power through increased frequency and use of Hyper-Threading to deliver high performance when workloads demand and thermal conditions permit.

The Cisco Unified Computing System's 10-Gbps unified fabric delivers standards-based Ethernet and Fiber Channel over Ethernet (FCoE) capabilities that simplify and secure rack-level cabling while speeding network traffic compared to traditional Gigabit Ethernet networks. The balanced resources of the Cisco Unified Computing System allow the system to easily process an intensive online transaction processing (OLTP) and decision-support system (DSS) workload with no resource saturation.

Technology Overview

Cisco Unified Computing System

The Cisco Unified Computing System is a third-generation data center platform that unites computing, networking, storage access, and virtualization resources into a cohesive system. When used as the foundation for Oracle RAC database and software the system brings lower total cost of ownership (TCO), greater performance, improved scalability, increased business agility, and Cisco's hallmark investment protection. The system integrates a low-latency, lossless 10 Gigabit Ethernet (10GbE) unified network fabric with enterprise-class, x86-architecture servers. The system is an integrated, scalable, multi-chassis platform in which all resources participate in a unified management domain that is controlled and managed centrally.


The system represents a major evolutionary step away from the current traditional platforms in which individual components must be configured, provisioned, and assembled to form a solution. Instead, the system is designed to be stateless. It is installed and wired once, with its entire configuration—from RAID controller settings and firmware revisions to network configurations—determined in software using integrated, embedded management.

Cisco Unified Computing System is designed to be form-factor neutral. The core of the system is a pair of Fabric Interconnects that links all the computing resources together and integrates all system components into a single point of management. Today, blade server chassis are integrated into the system through Fabric Extenders that bring the system's 10-Gbps unified fabric to each chassis.

The Fibre Channel over Ethernet (FCoE) protocol collapses Ethernet-based networks and storage networks into a single common network infrastructure, thus reducing CapEx by eliminating redundant switches, cables, networking cards, and adapters, and reducing OpEx by simplifying administration of these networks. Other benefits include:

• I/O and server virtualization

• Transparent scaling of all types of content, either block or file based

• Simpler and more homogeneous infrastructure to manage, enabling data center consolidation


Figure 1 Cisco UCS Components

The main components of the Cisco Unified Computing System are:

• Compute—The system is based on an entirely new class of computing system that incorporates blade servers based on Intel Xeon® E5-2600 Series Processors. Cisco UCS B-Series Blade Servers work with virtualized and non-virtualized applications to increase performance, energy efficiency, flexibility and productivity.


• Network—The system is integrated onto a low-latency, lossless, 80-Gbps unified network fabric. This network foundation consolidates LANs, SANs, and high-performance computing networks which are separate networks today. The unified fabric lowers costs by reducing the number of network adapters, switches, and cables, and by decreasing the power and cooling requirements.

• Storage access—The system provides consolidated access to both storage area network (SAN) and network-attached storage (NAS) over the unified fabric. By unifying storage access, the Cisco Unified Computing System can access storage over Ethernet, Fiber Channel, Fiber Channel over Ethernet (FCoE), and iSCSI. This gives customers choice in how they provision storage access, along with investment protection. Additionally, server administrators can reassign storage-access policies for system connectivity to storage resources, thereby simplifying storage connectivity and management for increased productivity.

• Management—The system uniquely integrates all system components, enabling the entire solution to be managed as a single entity by Cisco UCS Manager. Cisco UCS Manager has an intuitive graphical user interface (GUI), a command-line interface (CLI), and a robust application programming interface (API) to manage all system configuration and operations.

Cisco UCS Blade Chassis

The Cisco UCS 5100 Series Blade Server Chassis is a crucial building block of the Cisco Unified Computing System, delivering a scalable and flexible blade server chassis.

The Cisco UCS 5108 Blade Server Chassis is six rack units (6RU) high and can mount in an industry-standard 19-inch rack. A single chassis can house up to eight half-width Cisco UCS B-Series Blade Servers and can accommodate both half-width and full-width blade form factors.

Four single-phase, hot-swappable power supplies are accessible from the front of the chassis. These power supplies are 92 percent efficient and can be configured to support non-redundant, N+1 redundant, and grid-redundant configurations. The rear of the chassis contains eight hot-swappable fans, four power connectors (one per power supply), and two I/O bays for Cisco UCS 2208XP Fabric Extenders.

A passive mid-plane provides up to 40 Gbps of I/O bandwidth per server slot and up to 80 Gbps of I/O bandwidth for two slots. The chassis is capable of supporting future 80 Gigabit Ethernet standards.

Figure 2 Cisco Blade Server Chassis (Front, Rear, and Populated with Blades View)


Cisco UCS B200 M3 Blade Server

The Cisco UCS B200 M3 Blade Server is a half-width, two-socket blade server. The system uses two Intel Xeon® E5-2600 Series Processors, up to 768 GB of DDR3 memory, two optional hot-swappable small form factor (SFF) serial attached SCSI (SAS) disk drives, and two VIC adapters that provide up to 80 Gbps of I/O throughput. The server balances simplicity, performance, and density for production-level virtualization and other mainstream data center workloads.

Figure 3 Cisco UCS B200 M3 Blade Server

Cisco UCS Virtual Interface Card 1240

A Cisco innovation, the Cisco UCS VIC 1240 is a four-port 10 Gigabit Ethernet, FCoE-capable modular LAN on motherboard (mLOM) designed exclusively for the M3 generation of Cisco UCS B-Series Blade Servers. When used in combination with an optional port expander, the Cisco UCS VIC 1240 capabilities can be expanded to eight ports of 10 Gigabit Ethernet.


Figure 4 Cisco Virtual Interface Card 1240

Cisco UCS 6296UP Fabric Interconnect

The Fabric interconnects provide a single point for connectivity and management for the entire system. Typically deployed as an active-active pair, the system's fabric interconnects integrate all components into a single, highly-available management domain controlled by Cisco UCS Manager. The fabric interconnects manage all I/O efficiently and securely at a single point, resulting in deterministic I/O latency regardless of a server or virtual machine's topological location in the system.

Cisco UCS 6200 Series Fabric Interconnects support the system's 80-Gbps unified fabric with low-latency, lossless, cut-through switching that supports IP, storage, and management traffic using a single set of cables. The fabric interconnects feature virtual interfaces that terminate both physical and virtual connections equivalently, establishing a virtualization-aware environment in which blade, rack servers, and virtual machines are interconnected using the same mechanisms. The Cisco UCS 6296UP is a 2-RU fabric interconnect that features up to 96 universal ports that can support 10 Gigabit Ethernet, Fiber Channel over Ethernet, or native Fiber Channel connectivity.

Figure 5 Cisco UCS 6296UP 96-Port Fabric Interconnect


Cisco UCS Manager

Cisco UCS Manager is an embedded, unified manager that provides a single point of management for Cisco UCS. Cisco UCS Manager can be accessed through an intuitive GUI, a command-line interface (CLI), or the comprehensive open XML API. It manages the physical assets of the server and storage and LAN connectivity, and it is designed to simplify the management of virtual network connections through integration with several major hypervisor vendors. It provides IT departments with the flexibility to allow people to manage the system as a whole, or to assign specific management functions to individuals based on their roles as managers of server, storage, or network hardware assets. It simplifies operations by automatically discovering all the components available on the system and enabling a stateless model for resource use.

Some of the key elements managed by Cisco UCS Manager include:

• Cisco UCS Integrated Management Controller (IMC) firmware

• RAID controller firmware and settings

• BIOS firmware and settings, including server universal user ID (UUID) and boot order

• Converged network adapter (CNA) firmware and settings, including MAC addresses and worldwide names (WWNs) and SAN boot settings

• Virtual port groups used by virtual machines, using Cisco Data Center VM-FEX technology

• Interconnect configuration, including uplink and downlink definitions, MAC address and WWN pinning, VLANs, VSANs, quality of service (QoS), bandwidth allocations, Cisco Data Center VM-FEX settings, and Ether Channels to upstream LAN switches

Cisco Unified Computing System is designed from the start to be programmable and self-integrating. A server's entire hardware stack, ranging from server firmware and settings to network profiles, is configured through model-based management. With Cisco virtual interface cards (VICs), even the number and type of I/O interfaces is programmed dynamically, making every server ready to power any workload at any time.

With model-based management, administrators manipulate a desired system configuration and associate a model's policy driven service profiles with hardware resources, and the system configures itself to match requirements. This automation accelerates provisioning and workload migration with accurate and rapid scalability. The result is increased IT staff productivity, improved compliance, and reduced risk of failures due to inconsistent configurations. This approach represents a radical simplification compared to traditional systems, reducing capital expenditures (CAPEX) and operating expenses (OPEX) while increasing business agility, simplifying and accelerating deployment, and improving performance.


Cisco UCS Service Profiles

Figure 6 Traditional Provisioning Approach

A server's identity is made up of many properties, such as UUID, boot order, IPMI settings, BIOS firmware, BIOS settings, RAID settings, disk scrub settings, number of NICs, NIC speed, NIC firmware, MAC and IP addresses, number of HBAs, HBA WWNs, HBA firmware, FC fabric assignments, QoS settings, VLAN assignments, and remote keyboard/video/monitor settings. This is a long list of points of configuration that must be set to give a server its identity and make it unique from every other server in the data center. Some of these parameters are kept in the hardware of the server itself (such as BIOS firmware version, BIOS settings, boot order, and FC boot settings), while others are kept on the network and storage switches (such as VLAN assignments, FC fabric assignments, QoS settings, and ACLs). This results in the following server deployment challenges:

Lengthy Deployment Cycles

• Every deployment requires coordination among server, storage, and network teams

• Need to ensure correct firmware & settings for hardware components

• Need appropriate LAN & SAN connectivity

Response Time to Business Needs

• Tedious deployment process

• Manual, error prone processes, that are difficult to automate

• High OPEX costs, outages caused by human errors

Limited OS and Application Mobility

• Storage and network settings tied to physical ports and adapter identities

• Static infrastructure leads to over-provisioning, higher OPEX costs


Cisco Unified Computing System uniquely addresses these challenges with service profiles, which enable integrated, policy-based infrastructure management. Cisco UCS Service Profiles hold the DNA for nearly all configurable parameters required to set up a physical server. A set of user-defined policies (rules) allows quick, consistent, repeatable, and secure deployments of Cisco UCS servers.

Cisco UCS Service Profiles contain values for a server's property settings, including virtual network interface cards (vNICs), MAC addresses, boot policies, firmware policies, fabric connectivity, external management, and high availability information. By abstracting these settings from the physical server into a Cisco Service Profile, the Service Profile can then be deployed to any physical compute hardware within the Cisco UCS domain. Furthermore, Service Profiles can, at any time, be migrated from one physical server to another. This logical abstraction of the server personality separates the dependency of the hardware type or model and is a result of Cisco's unified fabric model (rather than overlaying software tools on top).
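The hardware-abstraction idea behind service profiles can be illustrated with a minimal sketch. The class and field names below are illustrative assumptions, not Cisco UCS Manager object names; the point is that the logical server identity travels with the profile, not with the blade:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ServiceProfile:
    """Hypothetical model of a logical server identity, decoupled from hardware."""
    name: str
    uuid: str
    boot_order: list = field(default_factory=list)
    vnic_macs: dict = field(default_factory=dict)   # vNIC name -> MAC address
    vhba_wwpns: dict = field(default_factory=dict)  # vHBA name -> WWPN
    vlan_ids: list = field(default_factory=list)
    associated_blade: Optional[str] = None          # e.g. "chassis-1/blade-1"

def associate(profile: ServiceProfile, blade: str) -> None:
    """Push the profile's identity onto a physical blade."""
    profile.associated_blade = blade

def migrate(profile: ServiceProfile, new_blade: str) -> None:
    """Move the same logical server to different hardware; the UUID,
    MACs, and WWPNs travel with the profile, not the blade."""
    profile.associated_blade = new_blade

# Example: the identity survives a move between blades.
sp = ServiceProfile(
    name="oracle-rac-node1",
    uuid="00000000-0000-0000-0000-000000000001",
    boot_order=["san-primary", "san-secondary"],
    vnic_macs={"eth0-public": "00:25:B5:00:00:01",
               "eth1-private": "00:25:B5:00:00:02"},
    vhba_wwpns={"fc0": "20:00:00:25:B5:00:00:01"},
    vlan_ids=[134, 10],
)
associate(sp, "chassis-1/blade-1")
migrate(sp, "chassis-1/blade-5")
```

After the migration, every identity attribute of `sp` is unchanged; only the association to physical hardware moves, which is what lets an Oracle RAC node be rehosted without touching the OS or storage configuration.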

This innovation is still unique in the industry despite competitors claiming to offer similar functionality. In most cases, these vendors must rely on several different methods and interfaces to configure these server settings. Furthermore, Cisco is the only hardware provider to offer a truly unified management platform, with Cisco UCS Service Profiles and hardware abstraction capabilities extending to both blade and rack servers.

Some of the key features and benefits of Cisco UCS service profiles are discussed below.

Service Profiles and Templates

In summary, service profiles represent all the attributes of a logical server in Cisco UCS data model. These attributes have been abstracted from the underlying attributes of the physical hardware and physical connectivity. Using logical servers that are disassociated from the physical hardware removes many limiting constraints around how servers are provisioned. Using logical servers also makes it easy to repurpose physical servers for different applications and services.

Figure 7 represents how Server, Network, and Storage Policies are encapsulated in a service profile.


Figure 7 Service Profile Inclusions

The Cisco UCS Manager provisions servers utilizing service profiles. The Cisco UCS Manager implements a role-based and policy-based management focused on service profiles and templates. A service profile can be applied to any blade server to provision it with the characteristics required to support a specific software stack. A service profile allows server and network definitions to move within the management domain, enabling flexibility in the use of system resources.

Service profile templates are stored in the Cisco UCS 6200 Series Fabric Interconnects for reuse by server, network, and storage administrators. Service profile templates consist of server requirements and the associated LAN and SAN connectivity. Service profile templates allow different classes of resources to be defined and applied to a number of resources, each with its own unique identities assigned from predetermined pools.

The Cisco UCS Manager can deploy the service profile on any physical server at any time. When a service profile is deployed to a server, the Cisco UCS Manager automatically configures the server, adapters, Fabric Extenders, and Fabric Interconnects to match the configuration specified in the service profile. A service profile template parameterizes the UIDs that differentiate between server instances.

This automation of device configuration reduces the number of manual steps required to configure servers, Network Interface Cards (NICs), Host Bus Adapters (HBAs), and LAN and SAN switches.
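The template-plus-pools model described above can be sketched as follows. The pool formats and policy names here are hypothetical, but the pattern — one template, many profiles, each stamped with unique identities drawn from predefined pools — is the one the text describes:

```python
import itertools

class IdentityPool:
    """Hypothetical pool of pre-allocated identities (MACs, WWPNs, UUIDs)."""
    def __init__(self, values):
        self._it = iter(values)
    def take(self):
        return next(self._it)

# Placeholder identity formats, modeled loosely on Cisco OUI-prefixed pools.
mac_pool  = IdentityPool(f"00:25:B5:00:00:{i:02X}" for i in itertools.count(1))
wwpn_pool = IdentityPool(f"20:00:00:25:B5:00:00:{i:02X}" for i in itertools.count(1))
uuid_pool = IdentityPool(f"0000-0000-0000-{i:012d}" for i in itertools.count(1))

def instantiate(template_name, count):
    """Stamp out `count` profiles from one template: each gets unique
    identities from the pools; policies are shared from the template."""
    profiles = []
    for n in range(1, count + 1):
        profiles.append({
            "name": f"{template_name}-{n}",
            "uuid": uuid_pool.take(),
            "mac":  mac_pool.take(),
            "wwpn": wwpn_pool.take(),
            "boot_policy": "san-boot",  # shared template policy
        })
    return profiles

# Example: a four-node RAC cluster provisioned from one template.
rac_nodes = instantiate("oracle-rac", 4)
```

Scaling the cluster to a fifth node is then a single call against the same template, which is the automation the surrounding text attributes to service profile templates.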

Programmatically Deploying Server Resources

Cisco UCS Manager provides centralized management capabilities, creates a unified management domain, and serves as the central nervous system of the Cisco Unified Computing System. Cisco UCS Manager is embedded device management software that manages the system from end to end as a single logical entity through an intuitive GUI, CLI, or XML API. Cisco UCS Manager implements role- and policy-based management using service profiles and templates. This construct improves IT productivity and business agility. Infrastructure can now be provisioned in minutes instead of days, shifting IT's focus from maintenance to strategic initiatives.
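As a concrete illustration of the XML API, requests are plain XML documents POSTed to the manager's `/nuova` endpoint. The sketch below only constructs two request payloads — a login and a class query — without making any network call; the method and attribute names follow the UCS Manager XML API, while the credential and cookie values are placeholders:

```python
# Build (but do not send) two Cisco UCS Manager XML API request payloads.
# In a live system these would be POSTed to http://<ucsm-ip>/nuova.

def aaa_login(username: str, password: str) -> str:
    """Login request; the response would contain a session cookie."""
    return f'<aaaLogin inName="{username}" inPassword="{password}" />'

def resolve_class(cookie: str, class_id: str) -> str:
    """Query all objects of one class, e.g. classId="computeBlade"
    lists every blade server in the management domain."""
    return (f'<configResolveClass cookie="{cookie}" '
            f'classId="{class_id}" inHierarchical="false" />')

login_xml = aaa_login("admin", "example-password")
query_xml = resolve_class("example-cookie", "computeBlade")
```

Because the same API drives everything the GUI and CLI can do, tooling built on it can script the full provisioning flow — the programmability the surrounding text describes.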


Dynamic Provisioning

Cisco UCS resources are abstract in the sense that their identity, I/O configuration, MAC addresses and WWNs, firmware versions, BIOS boot order, and network attributes (including QoS settings, ACLs, pin groups, and threshold policies) all are programmable using a just-in-time deployment model. A service profile can be applied to any blade server to provision it with the characteristics required to support a specific software stack. A service profile allows server and network definitions to move within the management domain, enabling flexibility in the use of system resources. Service profile templates allow different classes of resources to be defined and applied to a number of resources, each with its own unique identities assigned from predetermined pools.

Cisco Nexus 5548UP Switch

The Cisco Nexus 5548UP is a 1RU 1 Gigabit and 10 Gigabit Ethernet switch offering up to 960 gigabits per second of throughput and scaling up to 48 ports. It offers 32 fixed 1/10 Gigabit Ethernet enhanced Small Form-Factor Pluggable (SFP+) Ethernet/FCoE or 1/2/4/8-Gbps native Fibre Channel unified ports and one expansion slot, which supports a combination of Ethernet/FCoE and native Fibre Channel ports.

Figure 8 Cisco Nexus 5548UP Switch

The Cisco Nexus 5548UP Switch delivers innovative architectural flexibility, infrastructure simplicity, and business agility, with support for networking standards. For traditional, virtualized, unified, and high-performance computing (HPC) environments, it offers a long list of IT and business advantages, including:

Architectural Flexibility

• Unified ports that support traditional Ethernet, Fibre Channel (FC), and Fibre Channel over Ethernet (FCoE)

• Synchronizes system clocks with accuracy of less than one microsecond, based on IEEE 1588

• Supports secure encryption and authentication between two network devices, based on Cisco TrustSec IEEE 802.1AE

• Offers converged Fabric extensibility, based on emerging standard IEEE 802.1BR, with Fabric Extender (FEX) Technology portfolio, including:

– Cisco Nexus 2000 FEX

– Adapter FEX

– VM-FEX

Infrastructure Simplicity

• Common high-density, high-performance, data-center-class, fixed-form-factor platform

• Consolidates LAN and storage


• Supports any transport over an Ethernet-based fabric, including Layer 2 and Layer 3 traffic

• Supports storage traffic, including iSCSI, NAS, FC, RoE, and IBoE

• Reduces management points with FEX Technology

Business Agility

• Meets diverse data center deployments on one platform

• Provides rapid migration and transition for traditional and evolving technologies

• Offers performance and scalability to meet growing business needs

Specifications At-a-Glance

• A 1-rack-unit, 1/10 Gigabit Ethernet switch

• 32 fixed Unified Ports on base chassis and one expansion slot totaling 48 ports

• The slot can support any of three modules: Unified Ports, 1/2/4/8-Gbps native Fibre Channel, and Ethernet or FCoE

• Throughput of up to 960 Gbps

Hitachi Virtual Storage Platform G1000 Technologies and Benefits

Hitachi Virtual Storage Platform G1000 (VSP G1000) provides the always-available, agile, and automated foundation you need for a continuous cloud infrastructure. This platform delivers enterprise-ready software-defined storage, advanced global storage virtualization, and high performance storage.

Supporting always-on operations, VSP G1000 includes self-service, non-disruptive migration and active-active storage clustering for zero recovery time objectives. Automate your operations with self-optimizing, policy-driven management.


Figure 9 Hitachi Virtual Storage Platform G1000

A VSP G1000 is configured as a collection of these major elements:

• 1 or 2 controller chassis containing controller boards, power supplies, and fans. Each controller may be configured with a mixture of the following controller boards: processors (Virtual Storage Directors), cache switches (Cache Path Control Adapters), front-end directors (FED), and back-end directors (BED)

• Up to 12 drive chassis (DC) supporting up to 2,304 drives

• Up to 2,048 GB of cache

• 1 to 6 19-inch racks

With certain configurations, a VSP G1000 can deliver extremely high single system performance:

• Up to 4 million IOPS, 8KB block, 100% random read cache miss

• Up to 50GB/sec in sustained 100% sequential read (256 KB blocks)

Hitachi Storage Virtualization Operating System

Hitachi Storage Virtualization Operating System spans and integrates multiple platforms. It integrates storage system software to provide system element management and advanced storage system functions. Used across multiple platforms, Storage Virtualization Operating System includes storage virtualization, thin provisioning, storage service level controls, dynamic provisioning, and performance instrumentation.

Storage Virtualization Operating System includes standards-based management software on a Hitachi Command Suite base, providing storage configuration and control capabilities.


This solution uses Hitachi Dynamic Tiering, a part of the Storage Virtualization Operating System. Separately licensed, Dynamic Tiering virtualizes and automates mobility between tiers for maximum performance and efficiency.

Hitachi Dynamic Tiering

Hitachi Dynamic Tiering (HDT) simplifies storage administration by automatically optimizing data placement in 1, 2 or 3 tiers of storage that can be defined and used within a single virtual volume. Tiers of storage can be made up of internal or external (virtualized) storage, and the use of HDT can lower capital costs. The intuitive unified management of HDT allows for lower operational costs and reduces the challenges of ensuring applications are placed on the appropriate classes of storage.

Oracle Database 11g R2 RAC

Oracle Database 11g Release 2 provides the foundation for IT to successfully deliver more information with higher quality of service, reduce the risk of change within IT, and make more efficient use of IT budgets.

Oracle Database 11g R2 Enterprise Edition provides industry-leading performance, scalability, security, and reliability on a choice of clustered servers or single servers, with a wide range of options to meet user needs. Grid computing relieves users from concerns about where data resides and which computer processes their requests. Users request information or computation and have it delivered - as much as they want, whenever they want. For a DBA, the grid is about resource allocation, information sharing, and high availability. Oracle Database with Real Application Clusters provides the infrastructure for your database grid. Automatic Storage Management provides the infrastructure for a storage grid. Oracle Enterprise Manager Grid Control provides you with holistic management of your grid.

Oracle Database 11g Release 2 Enterprise Edition comes with a wide range of options to extend the world's #1 database to help grow your business and meet your users' performance, security and availability based service level expectations.

Key Features

• Protects from server failure, site failure, human error, and reduces planned downtime

• Secures data and enables compliance with unique low-level security, fine-grained auditing, transparent data encryption, and total recall of data

• High-performance data warehousing, online analytic processing, and data mining

• Easily manages entire lifecycle of information for the largest of databases

Design Topology

This section presents high-level physical and logical design considerations for Cisco UCS networking, computing, and Hitachi Virtual Storage Platform G1000 for Oracle Database 11g R2 RAC deployments.


Hardware and Software Used for this Solution

Table 1 Hardware and Software used for Oracle Database 11g R2 GRID Infrastructure with RAC Option Deployment

Vendor  | Name                             | Version/Model                                          | Description
Cisco   | Cisco 6296UP                     | UCSM 2.2(2c)                                           | Cisco UCS 6200 UP Series Fabric Interconnects
Cisco   | Cisco UCS Chassis                | 5108                                                   | Chassis
Cisco   | Cisco UCS IOM                    | 2204XP                                                 | IO Module
Cisco   | Nexus 5548UP Switch              | NX-OS                                                  | Nexus 5500 series Unified Port switch
Cisco   | UCS Blade Server                 | B200 M3 / Intel Xeon E5-2697 v2 / 16 x 16 GB DDR3 1866 MHz memory | Half-width blade server (database server)
Cisco   | Cisco UCS VIC Adapter            | 1240 mLOM                                              | Virtual Interface Card
Oracle  | Oracle Linux 6.4                 | 6.4 64-bit UEK                                         | Operating System
Oracle  | Oracle 11g R2 GRID               | 11.2.0.4                                               | GRID Infrastructure software
Oracle  | Oracle 11g R2 Database           | 11.2.0.4                                               | Database software
Oracle  | Oracle SwingBench                | 2.4.0.845                                              | Oracle benchmark kit
Hitachi | Hitachi Virtual Storage Platform | VSP G1000                                              | Hitachi Virtual Storage Platform
Hitachi | Hitachi Device Manager           | D/N:Isv-47.49                                          | Hitachi Device Manager to manage the Hitachi storage
Hitachi | Firmware                         | 80-01-24-00/00                                         | Hitachi firmware version
Hitachi | Hitachi Disk Drives              | 1600 GB FMD; 800 GB 10k RPM SAS; 4 TB 7.2k RPM NL-SAS  | Flash Module Drives; SAS drives; NL-SAS drives

Cisco UCS Networking and Hitachi Storage Connectivity Topology

This section explains the Cisco UCS networking and computing design considerations for deploying Oracle Database 11g R2 RAC with Hitachi VSP G1000. In this design, FC traffic is isolated from the regular management and application data network on the same Cisco UCS infrastructure by defining logical VLANs and VSANs, providing better data security. Table 2 shows the hardware details used for this solution.


Table 2 Details about Cisco Unified Computing System and Hitachi Storage

Physical Cisco Unified Computing System Configuration

Description | Quantity
Cisco UCS 5108 Blade Server Chassis, with 4 power supply units, 8 fans and 2 fabric extenders | 2
Cisco UCS B200-M3 half-width blades | 4
Two-socket, twelve-core Intel Xeon E5-2697 v2 series 2.70 GHz processors | 96 cores
16 GB DDR3 DIMM, 1866 MHz (16 per server, totaling 256 GB per blade server) | 64
Cisco UCS VIC 1240 Virtual Interface Card, 256 PCI devices, dual 4 x 10G (1 per server) | 4
Cisco UCS 6296UP 96-port Fabric Interconnect | 2
16-port 8 Gbps Fibre Channel expansion module | 2
Cisco Nexus 5548UP Switch | 2

Physical Hitachi Virtual Storage Platform G1000 Configuration

Description | Quantity
Hitachi Accelerated Flash chassis | 1
1.6 TB Flash Module Drives (FMD) | 20
800 GB SAS 10k RPM drives | 80
4 TB NL-SAS 7.2k RPM drives | 24
Front-end connectivity modules, 4 x 8 Gb/sec Fibre Channel ports each (16 ports total) | 4
Back-end connectivity modules, 8 x 6 Gb/sec SAS links each (32 links total) | 4
Storage cache (GB) | 466

Figure 10 presents a detailed view of the physical topology and some of the main components of the Cisco Unified Computing System.


Figure 10 Cisco UCS Networking and Hitachi Virtual Storage Platform G1000 Architecture

As shown in Figure 10, a pair of Cisco UCS 6296UP fabric interconnects carries both storage and network traffic from the blades with the help of the Cisco Nexus 5548UP switches. The 10 Gb FCoE traffic leaves the UCS fabrics through the Nexus 5548UP switches to the Hitachi Virtual Storage Platform G1000. To effectively handle the higher I/O requirements, FC boot is the better solution.

Both the fabric interconnects and the Cisco Nexus 5548UP switches are clustered with peer links between them to provide high availability. Two virtual PortChannels (vPCs) are configured to provide public network and private network paths for the blades to the northbound switches. Each vPC carries the VLANs created for the application data and management data paths. For more information about vPC configuration on the Cisco Nexus 5548UP Switch, refer to: http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9670/configuration_guide_c07-543563.html.

Table 3 vPC Details

Network | vPC | VLAN ID
Public  | 33  | 10, 191
Private | 34  | 10, 191
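The vPC and VLAN scheme above can be sketched as the following Cisco Nexus 5548UP configuration. The peer-keepalive addresses and the peer-link port-channel number are illustrative assumptions, not values taken from this design; only the vPC numbers (33 and 34) and VLANs (10 and 191) come from Table 3.

```
feature vpc
feature lacp

vpc domain 1
  peer-keepalive destination 10.29.135.2 source 10.29.135.1

interface port-channel 10
  switchport mode trunk
  switchport trunk allowed vlan 10,191
  vpc peer-link

interface port-channel 33
  switchport mode trunk
  switchport trunk allowed vlan 10,191
  vpc 33

interface port-channel 34
  switchport mode trunk
  switchport trunk allowed vlan 10,191
  vpc 34
```

A matching configuration is applied on the vPC peer switch, with the peer-keepalive source and destination addresses reversed.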

As illustrated in Figure 10, eight links (four per chassis) go to Fabric Interconnect A (ports 1 through 8). Similarly, eight links go to Fabric Interconnect B. The Fabric Interconnect A links are used for the Oracle public network and FC storage access, and the Fabric Interconnect B links are used for Oracle private interconnect traffic and FC storage access.

Note For Oracle RAC configurations on the Cisco Unified Computing System, we recommend keeping all private interconnects local on a single fabric interconnect. In that case, the private traffic stays local to that fabric interconnect and is not routed via the northbound network switch. In other words, all inter-blade (RAC node private) communication is resolved locally at the fabric interconnect, which significantly reduces latency for Oracle Cache Fusion traffic.

Hitachi Virtual Storage Platform G1000 Storage Layout

This section describes the storage architecture of Hitachi Virtual Storage Platform G1000 environment. The architecture takes into consideration Hitachi Data Systems and Oracle recommended practices for the deployment of database storage design.

Figure 11 illustrates the storage provisioning for this solution.

Figure 11 Hitachi Virtual Storage Platform G1000 Storage Layout


Cisco UCS Manager Configuration Overview

Detailed information about configuring the Cisco Unified Computing System is available at http://www.cisco.com/en/US/products/ps10281/products_installation_and_configuration_guides_list.html

Note It is beyond the scope of this document to cover every configuration detail; however, the steps most relevant to this solution are included.

High-Level Steps to Configure Cisco Unified Computing System

The following are the high-level steps involved in a Cisco UCS configuration:

1. Configure Fabric Interconnects for Chassis and Blade Discovery

a. Configure Global Policies

b. Configure Server Ports

2. Configure LAN and SAN on Cisco UCS Manager

a. Configure and Enable Ethernet LAN uplink Ports

b. Configure and Enable FC SAN uplink Ports

c. Configure VLAN

d. Configure VSAN

3. Configure UUID, MAC, WWNN and WWPN Pools

a. UUID Pool Creation

b. IP Pool and MAC Pool Creation

c. WWNN Pool and WWPN Pool Creation

4. Configure vNIC and vHBA Template

a. Create vNIC templates

b. Create Public vNIC template

c. Create Private vNIC template

d. Create Storage vNIC template

e. Create HBA templates

5. Configure Ethernet Uplink Port-Channels

6. Create Server Boot Policy for SAN Boot

Details for each step are discussed in the subsequent sections.

Configuring Fabric Interconnects for Blade Discovery

The Cisco UCS 6296UP Fabric Interconnects are configured for redundancy, which provides resiliency in case of failure. The first step is to establish connectivity between the blades and the fabric interconnects.


Configure Global Policies

In the Cisco UCS Manager GUI, navigate to Equipment > Policies (right pane) > Global Policies. As shown in Figure 12, select "4-link" as the Chassis/FEX discovery policy from the drop-down list.

Figure 12 Configure Global Policy

Configuring Server Ports

Click Equipment > Fabric Interconnects > Fabric Interconnect A > Fixed Module > Ethernet Ports and select the desired number of ports. Right-click and choose "Configure as Server Port" as shown in Figure 13.

Figure 13 Configuring Server Ports


Note We selected Port 9 to Port 16 to configure as server ports. After configuring the server ports, you can see their details as shown in Figure 14.

Figure 14 All Configured Server Ports

Configuring LAN and SAN on Cisco UCS Manager

Perform LAN and SAN configuration steps in the Cisco UCS Manager as shown in Figure 15.

Configure and Enable Ethernet LAN Uplink Ports

From Equipment > Fabric Interconnects > Fabric Interconnect A > Fixed Module > Ethernet Ports menu, select the desired number of ports and right-click to "Configure as Uplink Port" as shown in Figure 15.

Figure 15 Configure Ethernet LAN Uplink Ports

As shown in Figure 15, we have selected Port 1 and 2 on Fabric interconnect A and configured them as Ethernet Uplink ports. Repeat the same step on Fabric interconnect B to configure Port 1 and 2 as Ethernet uplink ports.

These ports will be used to create Virtual Port-channels in later sections.


Configure and Enable FC Ports

From the Equipment > Fabric Interconnects > Fabric Interconnect A > Fixed Module > FC Ports menu, select the desired ports and enable them. Figure 16 shows the configuration of the FC ports.

Figure 16 Configure FC ports

Configure VLAN

In Cisco UCS Manager, click LAN > LAN Cloud > VLANs and right-click to Create VLANs. In this solution, you need to create two VLANs: one for the private network (VLAN 191) and one for the public network (VLAN 10). These two VLANs will be used in the vNIC templates discussed later.


Figure 17 Create VLAN for Public Network

Figure 17 highlights the creation of VLAN 10 for the public network. It is very important that you create both VLANs as global across both fabric interconnects; this way, VLAN identity is maintained across the fabric interconnects in case of NIC failover.

Create VLANs for the public and private networks. If you are using the Oracle HAIP feature, you may need to configure additional VLANs associated with additional vNICs as well.

The following is the summary of VLANs once you complete VLAN creation:

• VLAN ID 10 for public interfaces.

• VLAN ID 191 for Oracle RAC private interconnect interfaces.

Note Although the private VLAN traffic stays local within the Cisco UCS domain during normal operating conditions, it is necessary to configure entries for these private VLANs in the northbound network switch. This allows the switch to route interconnect traffic appropriately in case of partial link failures. These scenarios and traffic routing are discussed in detail in later sections.
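For example, the corresponding northbound entries on each Cisco Nexus 5548UP switch can be sketched as follows; the VLAN names are illustrative assumptions, and the VLANs must also be allowed on the trunks facing the fabric interconnects.

```
vlan 10
  name Oracle_Public
vlan 191
  name Oracle_RAC_Private
```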

Figure 18 summarizes all the VLANs for Public and Private network.


Figure 18 VLAN Summary

Configure VSAN

In Cisco UCS Manager, click SAN > SAN Cloud > VSANs and right-click to Create VSAN. In this study, we created VSANs 101 and 102 for SAN boot and storage access.

Figure 19 Configuring VSAN in Cisco UCS Manager


Figure 20 Creating VSAN for Fabric A

Note We created a VSAN on each fabric. It is also very important that you create both VSANs as global across both fabric interconnects; this way, VSAN identity is maintained across the fabric interconnects in case of HBA failover. The VSAN ID is 101 on Fabric A and 102 on Fabric B.

Figure 21 shows the created VSANs in Cisco UCS Manager.

Figure 21 VSAN Summary

Configure Pools

After the VLANs and VSANs are created, configure the pools for UUIDs, MAC addresses, management IP addresses, and WWNs.

UUID Pool Creation

In Cisco UCS Manager, click Servers > Pools > UUID Suffix Pools and right-click "Create UUID Suffix Pool" to create a new pool.


Figure 22 Create UUID Pools

Figure 23 shows the "Oracle-HDS-UUID" Pool.

Figure 23 UUID Pool Summary

IP Pool and MAC Pool Creation

In Cisco UCS Manager, click LAN > Pools > IP Pools and right-click "Create IP Pool Ext-mgmt".

Figure 24 Create IP Pool

Next, click MAC Pools and choose "Create MAC Pools". We created Oracle-HDS-MAC-A and Oracle-HDS-MAC-B for all vNIC MAC addresses.


Figure 25 Create MAC Pool

The IP pool will be used for console management, while the MAC pools supply addresses for the vNICs carved out later.
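As an aside, the span of addresses a MAC pool block covers can be enumerated with a short sketch. 00:25:B5 is the Cisco OUI conventionally used in UCS MAC pools; the block start and size here are arbitrary examples rather than values from this setup.

```python
# Illustrative sketch: expanding a UCS MAC pool block into its addresses.
# 00:25:B5 is the Cisco OUI conventionally used for UCS MAC pools; the
# starting address and block size below are arbitrary examples.

def mac_pool_block(start: str, size: int) -> list[str]:
    """Expand a MAC pool block of `size` addresses beginning at `start`."""
    value = int(start.replace(":", ""), 16)  # 48-bit integer form of the MAC
    return [
        ":".join(f"{(value + i) >> (8 * b) & 0xFF:02X}" for b in range(5, -1, -1))
        for i in range(size)
    ]

pool = mac_pool_block("00:25:B5:00:0A:00", 32)
# pool[0]  -> "00:25:B5:00:0A:00"
# pool[-1] -> "00:25:B5:00:0A:1F"
```

Sizing blocks this way up front makes it easy to keep fabric A and fabric B pools (such as Oracle-HDS-MAC-A and Oracle-HDS-MAC-B) in non-overlapping ranges.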

WWNN Pool and WWPN Pool Creation

In Cisco UCS Manager, click SAN > Pools > WWNN Pools and right-click "Create WWNN Pools". Next, click WWPN Pools and choose "Create WWPN Pools". These WWNN and WWPN entries will be used for the boot-from-SAN configuration. We created the Oracle-HDS-WWNN pool for World Wide Node Names, and the Oracle-HDS-WWPN-A and Oracle-HDS-WWPN-B pools for World Wide Port Names, as shown below.

Figure 26 Create WWNN and WWPN Pool

At this point pool creation is complete for this setup. Next, create vNIC and vHBA templates.

Set Jumbo Frames in Both Cisco UCS Fabrics

To configure jumbo frames and enable quality of service in the Cisco UCS Fabric, follow these steps:

1. In Cisco UCS Manager, click the LAN tab in the navigation pane.

2. Choose LAN > LAN Cloud > QoS System Class.

3. In the right pane, click the General tab.

4. On the Best Effort row, enter 9216 in the box under the MTU column.


5. Click Save Changes.

6. Click OK.
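Once the operating system is installed, the end-to-end jumbo frame path can be verified from a RAC node with a don't-fragment ping. The payload size 8972 = 9000 (host interface MTU) - 20 (IP header) - 8 (ICMP header); the interface name and peer address below are placeholders for your private interconnect, and this assumes the host vNIC MTU is set to 9000.

```
# Verify jumbo frames end-to-end across the private interconnect.
# -M do sets the don't-fragment flag; -s 8972 fills a 9000-byte MTU frame.
ping -M do -s 8972 -c 3 -I eth1 192.168.191.102
```

If the fabric or switch path does not carry jumbo frames, the ping fails with a "message too long" error instead of replies.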

Figure 27 Setting up Jumbo Frame on Fabric Interconnect

Configure vNIC and vHBA Template

Create vNIC Templates

In Cisco UCS Manager, click LAN > Policies > vNIC templates and right-click to "Create vNIC Template"

Figure 28 Create vNIC Template

Two vNIC templates have been created for this Oracle RAC on Cisco Unified Computing System with Hitachi Storage configuration: one for Fabric A and another for Fabric B.


Figure 29 vNIC Template for Fabric A


Figure 30 vNIC Template for Fabric B

Figure 31 shows the created vNIC templates for Fabric A and Fabric B.

Figure 31 vNIC Template Summary


Create HBA Templates

In the Cisco UCS Manager, click SAN > Policies > vHBA templates and right-click to "Create vHBA Template".

Figure 32 Create vHBA templates

Figure 33 vHBA Template for Fabric A


Figure 34 vHBA Template for Fabric B

Two vHBA templates have been created, named Oracle-HDS-HBA-A and Oracle-HDS-HBA-B, as shown above.

Next, configure the Ethernet uplink port-channels.

Configure Ethernet Uplink Port-Channels

To configure port channels, click LAN > LAN Cloud > Fabric A > Port Channels and right-click to "Create Port-Channel". Select the desired Ethernet uplink ports configured earlier. Repeat the same steps to create the port channel on Fabric B. In the current setup, we used ports 1 and 2 on Fabric A and configured them as port channel 1. Similarly, ports 1 and 2 on Fabric B are configured as port channel 2.


Figure 35 Configuring Port Channels

Figure 36 Fabric A Ethernet Port-Channel Details

Figure 37 shows the configured port channels on Fabric A and Fabric B.


Figure 37 Port-Channels on Fabric A and Fabric B

When the above preparation steps are completed, create a service profile template from which the service profiles can be easily derived.

Create Local Disk Configuration Policy (Optional)

A local disk configuration policy is necessary for the Cisco UCS environment if the servers in the environment do not have a local disk.

Note This policy should not be used on servers that contain local disks.

In Cisco UCS Manager, click the Servers tab in the navigation pane.

1. Choose Policies > root.

2. Right-click Local Disk Config Policies.

3. Choose Create Local Disk Configuration Policy.

4. Enter SAN-Boot as the local disk configuration policy name.

5. Change the mode to No Local Storage.

6. Click OK to create the local disk configuration policy.


Figure 38 Creating Local Disk Configuration Policy

7. Click OK.

Create SAN Boot Policies

This procedure applies to a Cisco UCS environment in which the storage SAN ports are configured in the following ways:

• The SAN ports 1A, 3A, 5A and 7A of Hitachi storage cluster-1 are connected to the Cisco Nexus 5548 switch A.

• The SAN ports 1C, 3C, 5C and 7C of Hitachi storage cluster-1 are connected to the Cisco Nexus 5548 switch B.

• The SAN ports 2A, 4A, 6A and 8A of Hitachi storage cluster-2 are connected to the Cisco Nexus 5548 switch A.

• The SAN ports 2C, 4C, 6C and 8C of Hitachi storage cluster-2 are connected to the Cisco Nexus 5548 switch B.

Two SAN boot policies are configured in this procedure, one named SAN-BOOT-A and the other named SAN-BOOT-B.

The SAN-BOOT-A boot policy configures the SAN primary's primary target to be FC port 1A on storage cluster 1 and the SAN primary's secondary target to be FC port 2A on storage cluster 2. Similarly, the SAN secondary's primary target is FC port 3C on storage cluster 1 and the SAN secondary's secondary target is FC port 4C on storage cluster 2.


The SAN-BOOT-B boot policy configures the SAN primary's primary target to be FC port 7C on storage cluster 1 and the SAN primary's secondary target to be FC port 8C on storage cluster 2. Similarly, the SAN secondary's primary target is FC port 5A on storage cluster 1 and the SAN secondary's secondary target is FC port 6A on storage cluster 2.
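The target mapping of the two boot policies can be summarized in a small data sketch. The cluster and port values come from the text above; the dictionary itself is an illustration, not a UCS Manager artifact, and the actual WWPNs are read from the Hitachi VSP G1000 ports when each boot policy is configured.

```python
# Summary of the SAN boot target mapping described above. Each tuple is
# (storage cluster, FC port); the real WWPNs for these ports are looked up
# on the Hitachi VSP G1000 when the boot policies are created.
SAN_BOOT_POLICIES = {
    "SAN-BOOT-A": {
        "hba0": {"primary": (1, "1A"), "secondary": (2, "2A")},
        "hba1": {"primary": (1, "3C"), "secondary": (2, "4C")},
    },
    "SAN-BOOT-B": {
        "hba1": {"primary": (1, "7C"), "secondary": (2, "8C")},
        "hba0": {"primary": (1, "5A"), "secondary": (2, "6A")},
    },
}

def boot_targets(policy: str, vhba: str) -> list[tuple[int, str]]:
    """Return the ordered (cluster, port) targets for a vHBA in a policy."""
    order = SAN_BOOT_POLICIES[policy][vhba]
    return [order["primary"], order["secondary"]]
```

Note that each vHBA always has one target on each storage cluster, so a server can boot even if an entire cluster-side path is lost.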

To create boot policies for the Cisco UCS environment, follow these steps:

1. In Cisco UCS Manager, click the Servers tab in the navigation pane.

2. Choose Policies > root.

3. Right-click Boot Policies.

4. Choose Create Boot Policy.

5. Enter SAN-BOOT-A as the name of the boot policy.

6. (Optional) Enter a description for the boot policy.

7. Keep the Reboot on Boot Order Change check box unchecked.

8. Expand the Local Devices drop-down menu and Choose Add CD-ROM.

9. Expand the vHBAs drop-down menu and Choose Add SAN Boot.

10. In the Add SAN Boot dialog box, enter "hba0" in the vHBA field.

11. Make sure that the Primary radio button is selected as the SAN boot type.

12. Click OK to add the SAN boot initiator.

Figure 39 Adding SAN Boot Initiator for Fabric A

13. From the vHBA drop-down menu, choose Add SAN Boot Target.

14. Keep 0 as the value for Boot Target LUN.

15. Enter the WWPN for FC port 1A on storage cluster 1.

Note To obtain this information, log in to storage cluster 1 and get the wwpn of port 1A. Make sure you enter the port name and not the node name.

16. Keep the Primary radio button selected as the SAN boot target type.

17. Click OK to add the SAN boot target.


Figure 40 Adding SAN Boot Target for Fabric A

18. From the vHBA drop-down menu, choose Add SAN Boot Target.

19. Keep 0 as the value for Boot Target LUN.

20. Enter the WWPN for FC port 2A on storage cluster 2.

Note To obtain this information, log in to storage controller and get the wwpn for port 2A. Make sure you enter the port name and not the node name.

21. Click OK to add the SAN boot target.

Figure 41 Adding Secondary SAN Boot Target for Fabric A

22. From the vHBA drop-down menu, choose Add SAN Boot.

23. In the Add SAN Boot dialog box, enter "hba1" in the vHBA box.

24. The SAN boot type should automatically be set to Secondary, and the Type option should be unavailable.

25. Click OK to add the SAN boot initiator.


Figure 42 Adding SAN Boot Initiator for Fabric B

26. From the vHBA drop-down menu, choose Add SAN Boot Target.

27. Keep 0 as the value for Boot Target LUN.

28. Enter the WWPN for FC port 3C on storage cluster 1.

Note To obtain this information, log in to storage controller and get the wwpn for port 3C. Make sure you enter the port name and not the node name.

29. Keep Primary as the SAN boot target type.

30. Click OK to add the SAN boot target.

Figure 43 Adding Primary SAN Boot Target for Fabric B

31. From the vHBA drop-down menu, choose Add SAN Boot Target.

32. Keep 0 as the value for Boot Target LUN.

33. Enter the WWPN for FC port 4C on storage cluster 2.

Note To obtain this information, log in to storage controller and get the wwpn for port 4C. Make sure you enter the port name and not the node name.

34. Click OK to add the SAN boot target.


Figure 44 Adding Secondary SAN Boot Target

35. Click OK, and then OK again to create the boot policy.

36. Right-click Boot Policies again.

37. Choose Create Boot Policy.

38. Enter SAN-BOOT-B as the name of the boot policy.

39. (Optional) Enter a description of the boot policy.

40. Keep the Reboot on Boot Order Change check box unchecked.

41. From the Local Devices drop-down menu choose Add CD-ROM.

42. From the vHBA drop-down menu choose Add SAN Boot.

43. In the Add SAN Boot dialog box, enter "hba1" in the vHBA box.

44. Make sure that the Primary radio button is selected as the SAN boot type.

45. Click OK to add the SAN boot initiator.

Figure 45 Adding SAN Boot Initiator for Fabric B

46. From the vHBA drop-down menu, choose Add SAN Boot Target.

47. Keep 0 as the value for Boot Target LUN.

48. Enter the WWPN for FC port 7C on storage cluster 1.


Note To obtain this information, log in to storage controller and get the wwpn for port 7C. Make sure you enter the port name and not the node name.

49. Keep Primary as the SAN boot target type.

50. Click OK to add the SAN boot target.

Figure 46 Adding Primary SAN Boot Target for Fabric B

51. From the vHBA drop-down menu, choose Add SAN Boot Target.

52. Keep 0 as the value for Boot Target LUN.

53. Enter the WWPN for FC port 8C on storage cluster2.

Note To obtain this information, log in to storage controller and get the wwpn for port 8C. Make sure you enter the port name and not the node name.

54. Click OK to add the SAN boot target.

Figure 47 Adding Secondary SAN Boot Target for Fabric B

55. From the vHBA menu, choose Add SAN Boot.

56. In the Add SAN Boot dialog box, enter Fabric-A in the vHBA box.


57. The SAN boot type should automatically be set to Secondary, and the Type option should be unavailable.

58. Click OK to add the SAN boot initiator.

Figure 48 Adding SAN Boot for Fabric A

59. From the vHBA menu, choose Add SAN Boot Target.

60. Keep 0 as the value for Boot Target LUN.

61. Enter the WWPN for FC port 5A on storage cluster1.

Note To obtain this information, log in to storage controller and get the wwpn for port 5A. Make sure you enter the port name and not the node name.

62. Keep Primary as the SAN boot target type.

63. Click OK to add the SAN boot target.

Figure 49 Adding Primary SAN Boot Target for Fabric A

64. From the vHBA drop-down menu, choose Add SAN Boot Target.

65. Keep 0 as the value for Boot Target LUN.

66. Enter the WWPN for FC port 6A on storage cluster2.


Note To obtain this information, log in to storage controller and get the wwpn for port 6A. Make sure you enter the port name and not the node name.

67. Click OK to add the SAN boot target.

Figure 50 Adding Secondary SAN Boot Target for Fabric A

68. Click OK, and then click OK again to create the boot policy.

After creating the FC boot policies for Fabric A and Fabric B, you can view the boot order in the Cisco UCS Manager GUI. To view the boot order, navigate to Servers > Policies > Boot Policies. Click the boot policy SAN-BOOT-A to view the boot order for Fabric A in the right pane of UCS Manager; similarly, click the boot policy SAN-BOOT-B to view the boot order for Fabric B.

Figure 51 SAN Boot Details for Fabric A


Figure 52 SAN Boot Details for Fabric B

Service Profile Creation and Association to Cisco UCS Blade Servers

Service profile templates enable policy-based server management, helping ensure consistent server resource provisioning that meets predefined workload needs.

Create Service Profile Template

To create a service profile template, complete the following steps:

1. In Cisco UCS Manager, click Servers > Service Profile Templates > root, right-click root, and choose Create Service Profile Template.


Figure 53 Create Service Profile Template

2. Enter a template name, select the UUID pool that was created earlier, and click Next.


Figure 54 Creating Service Profile Template - Identify

3. In the networking window, select the Dynamic vNIC that was created earlier.


Figure 55 Creating Service Profile Template - Networking

4. In the Networking page, create vNICs, one on each fabric, and associate them with the VLAN policies created earlier. Select expert mode, and click Add to add one or more vNICs that the server should use to connect to the LAN.

5. In the Create vNIC page, select "Use vNIC Template", choose the adapter policy Oracle-HDS-vNICA, and enter "eth0" as the vNIC name.


Figure 56 Creating Service Profile Template - Create vNIC

6. Create the vNIC "eth1" with the appropriate vNIC template mapping.

7. When the vNICs are created, create the vHBAs. In the Storage page, select expert mode, choose the WWNN pool created earlier, and click Add to create the vHBAs.


Figure 57 Creating Service Profile Template - Storage

The following four vHBAs have been created:

• hba0 using template Oracle-HDS-HBA-A.

• hba1 using template Oracle-HDS-HBA-B.

• hba2 using template Oracle-HDS-HBA-A.

• hba3 using template Oracle-HDS-HBA-B.


Figure 58 Creating Service Profile Template - Create vHBA

For this Oracle RAC configuration, the Cisco Nexus 5548UP is used for zoning, so skip the zoning section and use the default vNIC/vHBA placement. Also skip the vMedia Policy.

Server Boot Policy

To create the server boot policy, complete the following steps:

1. In the Server Boot Order page, choose the Boot Policy we created for SAN boot and click Next.


Figure 59 Configure Server Boot Policy during Service Profile template Creation

The remaining maintenance and assignment policies were left at their defaults in this configuration. However, they may vary from site to site depending on workloads, best practices, and policies.

2. Create one more service profile template, "Oracle-HDS-Fabric-B", using boot policy "SAN-BOOT-B". Two service profile templates are now created: one using boot policy "SAN-BOOT-A" and the other using boot policy "SAN-BOOT-B".

Figure 60 Service Profile template Creation Details


Create Service Profiles from Service Profile Templates

To create service profiles from a template, complete the following steps:

1. In Cisco UCS Manager, click Servers > Service Profile Templates, select the desired template, and click "Create Service Profiles from Template."

Figure 61 Create Service profile from Service Profile Template

Figure 62 Create Service Profile from Service Profile Template "Oracle-HDS-Fabric-A"


Figure 63 Create Service Profile from Service Profile Template "Oracle-HDS-Fabric-B"

Four service profiles have been created, as listed below; two using the template "Oracle-HDS-Fabric-A" and two using the template "Oracle-HDS-Fabric-B":

• Oracle-HDS-SP-A1

• Oracle-HDS-SP-A2

• Oracle-HDS-SP-B1

• Oracle-HDS-SP-B2

Associating Service Profile to the Servers

To associate service profiles to the servers, complete the following steps:

1. Under the Servers tab, select the desired service profile.

Figure 64 Associating Service Profile to Cisco UCS Blade Servers

2. Right-click the name of the service profile (for example, Oracle-HDS-SP-A1) that you want to associate with the server and select the option "Change Service Profile Association".

3. In the Change Service Profile Association page, from the Server Assignment drop-down list, select the existing server that you would like to assign, and click OK.


Figure 65 Changing Service Profile Association

4. Repeat the same steps to associate the remaining three service profiles with the blade servers.

5. Make sure all the service profiles are associated as shown below.

Figure 66 Associated Service profiles Summary

Configuring Hitachi Virtual Storage Platform (G1000)

The following procedures to configure the storage for this solution assume that you have installed all the appropriate licenses on your storage system.


Configure Fibre Channel Port Settings

To configure your storage Fibre Channel ports using Hitachi Device Manager software, do the following:

1. Log on to Hitachi Device Manager.

Note You must have modify privileges when using Hitachi Device Manager software to complete this process.

2. Click the Array Name link to open the Oracle database server environment storage system.

3. Expand the Settings heading and click the Ports/HostGroups link.

4. Click the Ports tab.

5. Click Edit Ports.

6. Check the ports that are zoned to connect to the Oracle database server on the SAN.

7. Click Enable from the Port Security list.

8. Click Auto from the Port Speed list.

9. Click ON from the Fabric list.

10. Click P-to-P from the Connection Type list, and then click OK.

A message displays saying that the change will interrupt I/O to any host currently connected to the port. Figure 67 shows the changes in the Edit Ports window.

Figure 67 Hitachi Virtual Storage Platform G1000 Edit Ports

11. Click Confirm and wait a few seconds for the change to take place.


After establishing the connection between the storage system and the host, the Ports window shows all ports in an ON status, as shown in Figure 68 below.

Figure 68 Hitachi Virtual Storage Platform G1000 FC Port details

Create Parity Groups

This solution uses twenty-eight Parity Groups created on the Hitachi Virtual Storage Platform G1000.

Table 4 Details of the Parity Groups

To create a RAID group using Hitachi Device Manager software, do the following:

1. Log on to Hitachi Device Manager.

You must have modify privileges when using Hitachi Device Manager software to complete this process.

2. Click the Array Name link to open the storage system.

3. Expand the Groups heading in the storage system pane and then click the Volumes link.

The right pane displays three tabs: Volumes, Parity Groups, and DP Pools.

Parity Group              Purpose                            RAID Level         Drive Type                          No. of Drives   Capacity (GB)
1-1                       Operating system for Oracle RAC    RAID-10 (2D+2D)    800 GB SAS 10K RPM drives           4               1,610
                          Database server
1-2 – 1-14, 2-1 – 2-7     Oracle RAC Database                RAID-10 (2D+2D)    800 GB SAS 10K RPM drives           4               1,610
9-1 – 9-3, 10-1 – 10-2    Oracle RAC Database                RAID-10 (2D+2D)    1.6 TB Flash Module Drives (FMD)    4               3,276
5-1 – 5-3                 Oracle RAC Database                RAID-6 (6D+2P)     4 TB NL-SAS 7.2K RPM drives         8               21,883


4. Click the Parity Groups tab and then click Create RG.

The Create Raid Group window opens.

5. Use Table 4 to configure the RAID Level and Combination for each RAID group in the Create Raid Group window.

The Number of Parity Groups changes based on your RAID level and combination choices.

6. Click the Automatic Selection option.

If you have different drives installed in the storage system (differing in either type or capacity), select the Drive Type value and Drive Capacity value from each list.

Using automatic selection is the recommended practice from Hitachi Data Systems. Hitachi Device Manager uses the next available drives of the selected type and capacity.

7. Click OK.

A message confirms the successful creation of the RAID group.

8. Click Close.

The formatting process to create the RAID group starts immediately in the background.

Figure 69 shows the screenshot of the Parity Groups in the Hitachi Virtual Storage Platform G1000 used in this solution.

Figure 69 Hitachi Virtual Storage Platform G1000 Parity Group

Create Hitachi Logical Devices (LDEVs)

This procedure creates the following:

• 28 logical devices used for the Oracle RAC database

• 4 logical devices used for the operating system of the Oracle RAC database servers

Table 5 lists the details of the logical devices created in Hitachi Virtual Storage Platform G1000.


Table 5 Details of Logical Devices (LDEVs)

To configure the LDEVs using Hitachi Device Manager, do the following:

1. Log on to Hitachi Device Manager.

2. Click the Array Name link to open the storage system.

3. Expand the Logical Devices heading in the storage system pane and then click Create LDEVs.

4. From the Provisioning Type list, select Basic.

5. For System Type, choose OPEN.

6. Select OPEN-V for Emulation Type.

7. Choose Any for Drive Type/RPM.

8. From the RAID Level list, select 2D+2D.

RAID Group     LDEVs                   LDEV Size (GB)   Purpose
1-1            00:00:00                200              O/S boot for first node in a four-node Oracle RAC database server
               00:00:01                200              O/S boot for second node in a four-node Oracle RAC database server
               00:00:02                200              O/S boot for third node in a four-node Oracle RAC database server
               00:00:03                200              O/S boot for fourth node in a four-node Oracle RAC database server
9-1 – 9-3      00:00:08 – 00:00:0A     3,276            Oracle RAC Database
10-1 – 10-2    00:00:0B – 00:00:0C     3,276            Oracle RAC Database
1-2 – 1-14     00:00:0D – 00:00:19     1,610            Oracle RAC Database
2-1 – 2-7      00:00:1A – 00:00:20     1,610            Oracle RAC Database
5-1 – 5-3      00:00:28 – 00:00:2A     3,072            Oracle RAC Database


Figure 70 Hitachi Virtual Storage Platform G1000 Create LDEVs

9. Type the LDEV Capacity and choose GB.

10. Type 1 in the Number of LDEVs per Free Space box.

11. Enter the name of the LDEV in the LDEV Name box.

12. Choose Normal Format from the Format Type list.

13. Click Add.

14. Click Finish.

15. The Create LDEV pane refreshes, populated with the new LDEV information. Click Finish.

16. The Confirm window opens. Click Apply.
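If you prefer to script LDEV creation instead of using the GUI, the Hitachi Command Control Interface (CCI) raidcom command line can perform the same operations. The following is a sketch only, not part of the validated procedure: it assumes CCI is installed with a HORCM instance 0 already configured against the G1000, and uses the first O/S boot LDEV from Table 5.

```
# Sketch: create the 200 GB O/S boot LDEV (00:00:00) in parity group 1-1
# Assumes CCI/raidcom with HORCM instance 0 (-I0) already configured
raidcom add ldev -parity_grp_id 1-1 -ldev_id 0 -capacity 200g -I0
# Verify the new LDEV
raidcom get ldev -ldev_id 0 -I0
```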


Figure 71 Hitachi Virtual Storage Platform G1000 LDEVs

Create Hitachi Dynamic Tiering Pools

This solution uses one Hitachi Dynamic Tiering pool. Table 6 lists the details.

Table 6 Details of a Hitachi Dynamic Tiering Pool

To create the Hitachi Dynamic Tiering Pool, complete the following:

1. Create the Parity Groups.

2. Create the LDEVs.

To create a dynamic tiering pool using Hitachi Device Manager software, do the following.

1. Log on to Hitachi Device Manager.

2. Click the Array Name link to open the storage system.

3. Click Pools in the left pane of the storage system.

4. Click Create Pool.

The Create Pools window opens.

5. Choose Dynamic Provisioning from the Pool Type list.

6. Select Open from the System Type radio buttons.

7. Select Enable from the Multi-Tier Pool radio buttons.

8. Choose Manual from the Pool Volume Selection list.

9. Click Select Pool VOLs.

The Select Pool VOLs window appears.

Pool Name        Number of Pool VOLs   Number of V-VOLs   RAID Level   Capacity in GB   Pool Type
Ora_hdt_pool_    28                    113                Mixed        56781.53         DT


10. In the Available Pool Volumes table, select the pool-VOL row to be associated with the pool, and then click Add.

The selected pool-VOL is registered in the Selected Pool Volumes table.

11. In the Pool Name text box, type the prefix and initial number of the pool.

12. Click Add.

13. Click Finish. The Confirm window appears.

14. In the Confirm window, click Apply to register the setting in the task.

Figure 72 Hitachi Virtual Storage Platform G1000 Pools

Figure 73 Hitachi Virtual Storage Platform G1000 Entire Pool in Tier Properties


Figure 74 Hitachi Virtual Storage Platform G1000 Tiering Policy in Tier Properties

Figure 75 Hitachi Virtual Storage Platform G1000 View Pool Management Status

Create Virtual Volumes

This procedure creates 113 storage virtual volumes used for the Oracle RAC Database. All the storage virtual volumes are mapped to the storage ports 1A, 1C, 2A, 2C, 3A, 3C, 4A, 4C, 5A, 5C, 6A, 6C, 7A, 7C, 8A and 8C. Table 7 lists the details of the virtual volumes.


Table 7 VVOLs for Oracle RAC Database

To create volumes using Hitachi Device Manager, follow these steps:

1. Log on to Hitachi Device Manager

2. Click the Array Name link to open the storage system.

3. Click Pools in the left pane of the storage system.

4. Click the Virtual Volumes tab, which appears when a pool is selected in Pools.

5. Click Create LDEVs. The Create LDEVs window appears.

6. From the Provisioning Type list, select Dynamic Provisioning.

7. For System Type, choose OPEN.

8. Select OPEN-V for Emulation Type.

9. Choose Mixed for Drive Type/RPM.

10. From the RAID Level list, select Mixed.

11. Click Select Pool and choose the pool from the Available Pools table. Click OK.

12. Type the LDEV Capacity and choose GB.

13. Type 113 in the Number of LDEVs text box.

14. Type the name of the LDEV in the LDEV Name. In the Initial LDEV ID field, type the initial number, which can be up to 9 digits.

15. Click Add.

16. Click Finish.

17. The Confirm window appears. Click Finish.

18. Click Apply. If the Go to tasks window for status check box is selected, the Tasks window appears.

Pool Name        LDEV Name                              LDEV Size (GB)   Purpose               Storage Port
ora_hdt_pool_    ora_vvol_hdt_001 – ora_vvol_hdt_113    500              Oracle RAC Database   1A, 1C, 2A, 2C, 3A, 3C, 4A, 4C, 5A, 5C, 6A, 6C, 7A, 7C, 8A, 8C


Figure 76 Hitachi Virtual Storage Platform G1000 Virtual Volumes

Create Host Groups

To create host groups, you will create and configure the Fibre Channel zoning.

To create the hosts groups, complete the following steps:

1. Display the Create Host Groups window by performing one of the following:

2. In Device Manager - Storage Navigator, select Create Host Groups from the General Tasks menu and display the Create Host Groups window.

3. From the Actions menu, choose Ports/Host Groups, and then Create Host Groups.

4. From the Storage Systems tree, click Ports/Host Groups. In the Host Groups page of the displayed window, click Create Host Groups.

5. From the Storage Systems tree, expand the Ports/Host Groups node, and then click the relevant port. In the Host Groups page of the displayed window, click Create Host Groups.

6. Enter the host group name in the Host Group Name box.


Figure 77 Hitachi Virtual Storage Platform G1000 Create Host Groups

7. Select a host mode from the Host Mode list.

8. Select hosts to be registered in a host group.

9. If the desired host has ever been connected with a cable to another port in the storage system, select the desired host bus adapter from the Available Hosts list.

10. If there is no host to be registered, skip this step and move to the next step; in that case, a host group with no host is created.

11. If the desired host has never been connected via a cable to any port in the storage system, perform the following steps:

12. Click Add New Host under the Available Hosts list.

13. The Add New Host dialog box opens.

14. Enter the desired WWN in the HBA WWN box.

15. If necessary, enter a nickname for the host bus adapter in the Host Name box.

16. Click OK to close the Add New Host dialog box.

17. Select the desired host bus adapter from the Available Hosts list.

18. Select the port to which you want to add the host group.

19. Click Add to add the host group.

20. By repeating the preceding steps, you can create multiple host groups.

21. Click Finish to display the Confirm window.

22. Click Apply in the Confirm window.

If the Go to tasks window for status check box is selected, the Tasks window appears.
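Host group registration can also be scripted with CCI's raidcom as an alternative to the GUI. This is a hedged sketch, not the validated procedure: the host group ID suffix (-1) is illustrative, the host group name and WWPN come from Table 8 and the device-alias list later in this document, and a HORCM instance 0 is assumed.

```
# Sketch: create host group 1A-server1 on port CL1-A and register one HBA WWPN
raidcom add host_grp -port CL1-A-1 -host_grp_name 1A-server1 -I0
raidcom modify host_grp -port CL1-A-1 -host_mode LINUX -I0
raidcom add hba_wwn -port CL1-A-1 -hba_wwn 20000025b510a00c -I0
```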


Figure 78 Hitachi Virtual Storage Platform G1000 Host Groups

Add LUN Paths

To add LUN paths, complete the following steps:

1. From the Storage Systems tree, click Ports/Host Groups. From the Actions menu, select Logical Device, and then Add LUN Paths.

2. Select the desired LDEVs from the Available LDEVs table, and then click Add.

3. Selected LDEVs are listed in the Selected LDEVs table.

4. Click Next.

5. Select the desired host groups from the Available Host Groups table, and then click Add.

6. Selected host groups are listed in the Selected Host Groups table.

7. Click Next.

8. Confirm the defined LU paths.

9. To change the LU path settings, click Change LUN IDs and type the LUN ID that you want to change.

10. To change the LDEV name, click Change LDEV Settings. In the Change LDEV Settings window, change the LDEV name.

11. Click Finish.

12. In the Confirm window, confirm the settings, in Task Name type a unique name for this task or accept the default, and then click Apply.

If Go to tasks window for status is checked, the Tasks window opens.
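LU path assignment can likewise be scripted with raidcom; the sketch below uses the same CCI assumptions as the earlier raidcom examples (HORCM instance 0, illustrative host group ID), mirroring the O/S boot mapping in Table 8.

```
# Sketch: map the O/S boot LDEV 00:00:00 as LUN 0 to host group 1A-server1
raidcom add lun -port CL1-A-1 -ldev_id 0 -lun_id 0 -I0
# Confirm the defined LU paths on the port
raidcom get lun -port CL1-A-1 -I0
```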


Figure 79 Hitachi Virtual Storage Platform G1000 Add LUN Paths

For the operating system, this solution makes use of two paths; however, you could use more paths to meet your requirements. Table 8 lists the Hitachi logical devices configured for the operating system and mapped to the Oracle RAC database servers on the Cisco Unified Computing System.

Table 8 Operating System LDEVs

Port Connectivity of Hitachi VSP G1000 and Cisco Nexus 5548 UP Switch

Sixteen ports from the Hitachi Virtual Storage Platform G1000 are used in this solution to connect to the two Cisco Nexus 5548 switches. The storage ports are equally distributed between the two storage clusters. Table 9 lists the port connectivity between the Hitachi Virtual Storage Platform G1000 and the Cisco Nexus 5548 switches.

Server Name on Cisco
Unified Computing System    LDEV Name on VSP G1000    LDEV Size (GB)    Host Group on VSP G1000
oracle-hds-srv1             Oracle_Srv_OS1            200               1A-server1, 2A-server1
oracle-hds-srv2             Oracle_Srv_OS2            200               3C-server2, 4C-server2
oracle-hds-srv3             Oracle_Srv_OS3            200               5A-server3, 6A-server3
oracle-hds-srv4             Oracle_Srv_OS4            200               7C-server4, 8C-server4


Table 9 Port Connectivity between Hitachi Virtual Storage Platform G1000 and Cisco Nexus 5548

Configuring Cisco Nexus 5548 UP

Enable Licenses

Cisco Nexus A

To license the Cisco Nexus A switch on <<var_nexus_A_hostname>>, follow these steps:

1. Log in as admin.

2. Run the following commands:

config t
feature fcoe
feature npiv
feature lacp
feature vpc
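To verify that the required features are enabled on the switch, you can run the following NX-OS show commands (output wording varies slightly by NX-OS release):

```
show feature | include fcoe
show feature | include npiv
show feature | include lacp
show feature | include vpc
```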

Cisco Nexus B

To license the Cisco Nexus B switch on <<var_nexus_B_hostname>>, follow these steps:

1. Log in as admin.

2. Run the following commands:

config t
feature fcoe
feature npiv
feature lacp

Hitachi Virtual Storage Platform G1000              Cisco Nexus 5548
Cluster    Port     WWPN                            Switch                  Switch Port

Cluster 1

CL1-A 50:06:0E:80:07:C3:DA:00 Cisco Nexus 5548 A fc2/13

CL1-C 50:06:0E:80:07:C3:DA:02 Cisco Nexus 5548 B fc2/13

CL3-A 50:06:0E:80:07:C3:DA:20 Cisco Nexus 5548 A fc2/15

CL3-C 50:06:0E:80:07:C3:DA:22 Cisco Nexus 5548 B fc2/15

CL5-A 50:06:0E:80:07:C3:DA:40 Cisco Nexus 5548 A fc2/17

CL5-C 50:06:0E:80:07:C3:DA:42 Cisco Nexus 5548 B fc2/17

CL7-A 50:06:0E:80:07:C3:DA:60 Cisco Nexus 5548 A fc2/19

CL7-C 50:06:0E:80:07:C3:DA:62 Cisco Nexus 5548 B fc2/19

Cluster 2

CL2-A 50:06:0E:80:07:C3:DA:10 Cisco Nexus 5548 A fc2/14

CL2-C 50:06:0E:80:07:C3:DA:12 Cisco Nexus 5548 B fc2/14

CL4-A 50:06:0E:80:07:C3:DA:30 Cisco Nexus 5548 A fc2/16

CL4-C 50:06:0E:80:07:C3:DA:32 Cisco Nexus 5548 B fc2/16

CL6-A 50:06:0E:80:07:C3:DA:50 Cisco Nexus 5548 A fc2/18

CL6-C 50:06:0E:80:07:C3:DA:52 Cisco Nexus 5548 B fc2/18

CL8-A 50:06:0E:80:07:C3:DA:70 Cisco Nexus 5548 A fc2/20

CL8-C 50:06:0E:80:07:C3:DA:72 Cisco Nexus 5548 B fc2/20


feature vpc

Set Global Configurations

Cisco Nexus 5548 A and Cisco Nexus 5548 B

To set global configurations, follow these steps on both switches:

1. Log in as admin user.

2. Run the following commands to set global configurations and jumbo frames in QoS:

conf t
spanning-tree port type network default
spanning-tree port type edge bpduguard default
port-channel load-balance ethernet source-dest-port
policy-map type network-qos jumbo
class type network-qos class-default
mtu 9216
exit
class type network-qos class-fcoe
pause no-drop
mtu 2158
exit
exit
system qos
service-policy type network-qos jumbo
exit
copy run start
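You can confirm the jumbo-frame policy is applied with the following show commands; Ethernet 1/3 is used here as an example because it uplinks to Fabric Interconnect A in this topology:

```
show policy-map type network-qos
show queuing interface ethernet 1/3
```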

Create VLANs

Cisco Nexus 5548 A and Cisco Nexus 5548 B

To create the necessary virtual local area networks (VLANs), follow these steps on both switches:

1. Log in as admin user.

2. From the global configuration mode, run the following commands:

conf t
vlan 10
name Public-VLAN
exit
vlan 191
name Private-VLAN
exit
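To verify that the VLANs exist on both switches:

```
show vlan id 10
show vlan id 191
```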


Add Individual Port Descriptions for Troubleshooting

Cisco Nexus 5548 A

To add individual port descriptions for troubleshooting activity and verification for switch A, follow these steps:

1. Log in as admin user.

2. From the global configuration mode, run the following commands:

conf t
interface Eth1/1
description Nexus5k-B-Cluster-Interconnect
exit
interface Eth1/2
description Nexus5k-B-Cluster-Interconnect
exit
interface Eth1/3
description Fabric_Interconnect_A:1/1
exit
interface Eth1/4
description Fabric_Interconnect_B:1/1
exit

Cisco Nexus 5548 B

To add individual port descriptions for troubleshooting activity and verification for switch B, follow these steps:

1. Log in as admin user.

2. From the global configuration mode, run the following commands:

conf t
interface Eth1/1
description Nexus5k-A-Cluster-Interconnect
exit
interface Eth1/2
description Nexus5k-A-Cluster-Interconnect
exit
interface Eth1/3
description Fabric_Interconnect_A:1/2
exit
interface Eth1/4
description Fabric_Interconnect_B:1/2
exit

Create Port Channels

Cisco Nexus 5548 A and Cisco Nexus 5548 B

To create the necessary port channels between devices, follow these steps on both switches:

1. From the global configuration mode, run the following commands:


2. Log in as admin user.

3. Run the following commands:

conf t
interface Po1
description vPC peer-link
exit
interface Eth1/1-2
channel-group 1 mode active
no shutdown
exit
interface Po3
description Fabric_Interconnect_A
exit
interface Eth1/3
channel-group 3 mode active
no shutdown
exit
interface Po4
description Fabric_Interconnect_B
exit
interface Eth1/4
channel-group 4 mode active
no shutdown
exit
copy run start
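Once the member interfaces are added, verify that ports Eth1/1-4 joined the expected bundles:

```
show port-channel summary
```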

Configure Port Channels

Cisco Nexus 5548 A and Cisco Nexus 5548 B

To configure the port channels, follow these steps on both switches:

1. Log in as admin user.

2. From the global configuration mode, run the following commands:

conf t
interface Po1
switchport mode trunk
switchport trunk native vlan 1
switchport trunk allowed vlan 1,10,191
spanning-tree port type network
no shutdown
exit
interface Po3
switchport mode trunk
switchport trunk native vlan 1
switchport trunk allowed vlan 10,191
spanning-tree port type edge trunk
no shutdown
exit
interface Po4
switchport mode trunk
switchport trunk native vlan 1
switchport trunk allowed vlan 10,191


spanning-tree port type edge trunk
no shutdown
exit
copy run start

Configure Virtual Port Channels

Cisco Nexus 5548 A

To configure virtual port channels (vPCs) for switch A, follow these steps:

1. Log in as admin user.

2. From the global configuration mode, run the following commands:

conf t
vpc domain 1
role priority 10
peer-keepalive destination <<var_nexus_B_mgmt0_ip>> source <<var_nexus_A_mgmt0_ip>>
auto-recovery
exit
interface Po1
vpc peer-link
exit
interface Po3
vpc 3
exit
interface Po4
vpc 4
exit
copy run start

Cisco Nexus 5548 B

To configure vPCs for switch B, follow these steps:

1. Log in as admin user.

2. From the global configuration mode, run the following commands:

conf t
vpc domain 1
role priority 20
peer-keepalive destination <<var_nexus_A_mgmt0_ip>> source <<var_nexus_B_mgmt0_ip>>
auto-recovery
exit
interface Po1
vpc peer-link
exit
interface Po3
vpc 3
exit
interface Po4
vpc 4


exit
copy run start

Create and Configure Fibre Channel Zoning

This procedure sets up Fibre Channel connections between the Cisco Nexus 5548 switches, the Cisco UCS Fabric Interconnects, and the Hitachi storage systems.

Before going into the zoning details, decide how many paths are needed for each LUN and extract the WWPNs of each HBA from each server. We used four HBAs per server: two HBAs (hba0 and hba2) are connected to the Cisco Nexus A switch, and the other two (hba1 and hba3) are connected to the Cisco Nexus B switch.

1. Log in to Cisco UCS Manager and navigate to Equipment > Chassis > Servers, then select the desired server. In the right pane, click the Inventory tab and then the HBAs subtab to get the WWPNs of the HBAs.

Figure 80 WWPN of Servers

2. Connect to the Hitachi storage system using Hitachi Device Manager and extract the WWPNs of the FC ports connected to the Cisco Nexus switches. We connected 16 FC ports from the Hitachi storage system: FC ports 1A, 2A, 3A, 4A, 5A, 6A, 7A, and 8A are connected to the Nexus A switch, and FC ports 1C, 2C, 3C, 4C, 5C, 6C, 7C, and 8C are connected to the Nexus B switch.

Figure 81 WWPN of Hitachi Virtual Storage Platform G1000


Create Device Aliases for FC Zoning

Cisco Nexus 5548 A

To configure device aliases and zones for the primary boot paths of switch A on <<var_nexus_A_hostname>>, follow these steps:

1. Log in as the admin user.

2. Enter the global configuration mode.

3. Run the following commands:

conf t
device-alias database
device-alias name Storage1-1A pwwn 50:06:0e:80:07:c3:da:00
device-alias name Storage1-3A pwwn 50:06:0e:80:07:c3:da:20
device-alias name Storage1-5A pwwn 50:06:0e:80:07:c3:da:40
device-alias name Storage1-7A pwwn 50:06:0e:80:07:c3:da:60
device-alias name Storage2-2A pwwn 50:06:0e:80:07:c3:da:10
device-alias name Storage2-4A pwwn 50:06:0e:80:07:c3:da:30
device-alias name Storage2-6A pwwn 50:06:0e:80:07:c3:da:50
device-alias name Storage2-8A pwwn 50:06:0e:80:07:c3:da:70
device-alias name Oracle-Srv1-hba0 pwwn 20:00:00:25:b5:10:a0:0c
device-alias name Oracle-Srv1-hba2 pwwn 20:00:00:25:b5:10:a0:0d
device-alias name Oracle-Srv2-hba0 pwwn 20:00:00:25:b5:10:a0:0a
device-alias name Oracle-Srv2-hba2 pwwn 20:00:00:25:b5:10:a0:0b
device-alias name Oracle-Srv3-hba0 pwwn 20:00:00:25:b5:10:a0:06
device-alias name Oracle-Srv3-hba2 pwwn 20:00:00:25:b5:10:a0:07
device-alias name Oracle-Srv4-hba0 pwwn 20:00:00:25:b5:10:a0:14
device-alias name Oracle-Srv4-hba2 pwwn 20:00:00:25:b5:10:a0:05
exit
device-alias commit

Cisco Nexus 5548 B

To configure device aliases and zones for the boot paths of switch B on <<var_nexus_B_hostname>>, follow these steps:

1. Log in as the admin user.

2. Enter the global configuration mode.

3. Run the following commands:

conf t
device-alias database
device-alias name Storage1-1C pwwn 50:06:0e:80:07:c3:da:02
device-alias name Storage1-3C pwwn 50:06:0e:80:07:c3:da:22
device-alias name Storage1-5C pwwn 50:06:0e:80:07:c3:da:42
device-alias name Storage1-7C pwwn 50:06:0e:80:07:c3:da:62
device-alias name Storage2-2C pwwn 50:06:0e:80:07:c3:da:12
device-alias name Storage2-4C pwwn 50:06:0e:80:07:c3:da:32
device-alias name Storage2-6C pwwn 50:06:0e:80:07:c3:da:52
device-alias name Storage2-8C pwwn 50:06:0e:80:07:c3:da:72
device-alias name Oracle-Srv1-hba1 pwwn 20:00:00:25:b5:20:b0:0e
device-alias name Oracle-Srv1-hba3 pwwn 20:00:00:25:b5:20:b0:0f
device-alias name Oracle-Srv2-hba1 pwwn 20:00:00:25:b5:20:b0:0c
device-alias name Oracle-Srv2-hba3 pwwn 20:00:00:25:b5:20:b0:0d
device-alias name Oracle-Srv3-hba1 pwwn 20:00:00:25:b5:20:b0:0a


device-alias name Oracle-Srv3-hba3 pwwn 20:00:00:25:b5:20:b0:0b
device-alias name Oracle-Srv4-hba1 pwwn 20:00:00:25:b5:20:b0:08
device-alias name Oracle-Srv4-hba3 pwwn 20:00:00:25:b5:20:b0:09
exit
device-alias commit

Create Zones

Cisco Nexus 5548 A

To create zones for the service profiles on switch A, follow these steps:

1. Log in as the admin user.

2. Create a zone for each service profile.

3. Run the following commands:

conf t
zone name oracle-hds-srv1-hba0 vsan 101
member device-alias Oracle-Srv1-hba0
member device-alias Storage1-1A
exit

zone name oracle-hds-srv1-hba2 vsan 101
member device-alias Oracle-Srv1-hba2
member device-alias Storage2-2A
exit

zone name oracle-hds-srv2-hba0 vsan 101
member device-alias Oracle-Srv2-hba0
member device-alias Storage1-3A
exit

zone name oracle-hds-srv2-hba2 vsan 101
member device-alias Oracle-Srv2-hba2
member device-alias Storage2-4A
exit

zone name oracle-hds-srv3-hba0 vsan 101
member device-alias Oracle-Srv3-hba0
member device-alias Storage1-5A
exit

zone name oracle-hds-srv3-hba2 vsan 101
member device-alias Oracle-Srv3-hba2
member device-alias Storage2-6A
exit

zone name oracle-hds-srv4-hba0 vsan 101
member device-alias Oracle-Srv4-hba0
member device-alias Storage1-7A
exit

zone name oracle-hds-srv4-hba2 vsan 101
member device-alias Oracle-Srv4-hba2
member device-alias Storage2-8A
exit


zone name server1-boot-hba0 vsan 101
member device-alias Storage1-1A
member device-alias Storage2-2A
member device-alias Oracle-Srv1-hba0
exit

zone name server2-boot-hba0 vsan 101
member device-alias Oracle-Srv2-hba0
member device-alias Storage1-1A
member device-alias Storage2-2A
exit

zone name server3-boot-hba0 vsan 101
member device-alias Storage1-5A
member device-alias Storage2-6A
member device-alias Oracle-Srv3-hba0
exit

zone name server4-boot-hba0 vsan 101
member device-alias Oracle-Srv4-hba0
member device-alias Storage1-5A
member device-alias Storage2-6A
exit

4. After the zones for the Cisco UCS service profiles have been created, create the zone set and add the necessary members.

zoneset name Oracle-HDS-A vsan 101
member oracle-hds-srv1-hba0
member oracle-hds-srv1-hba2
member oracle-hds-srv2-hba0
member oracle-hds-srv2-hba2
member oracle-hds-srv3-hba0
member oracle-hds-srv3-hba2
member oracle-hds-srv4-hba0
member oracle-hds-srv4-hba2
member server1-boot-hba0
member server2-boot-hba0
member server3-boot-hba0
member server4-boot-hba0

exit

5. Activate the zone set.

zoneset activate name Oracle-HDS-A vsan 101
exit
copy run start

Cisco Nexus 5548 B

To create zones for the service profiles on switch B, follow these steps:

1. Log in as the admin user.

2. Create a zone for each service profile.

3. Run the following commands:

conf t
zone name oracle-hds-srv1-hba1 vsan 102


member device-alias Oracle-Srv1-hba1
member device-alias Storage1-1C
exit

zone name oracle-hds-srv1-hba3 vsan 102
member device-alias Oracle-Srv1-hba3
member device-alias Storage2-2C
exit

zone name oracle-hds-srv2-hba1 vsan 102
member device-alias Oracle-Srv2-hba1
member device-alias Storage1-3C
exit

zone name oracle-hds-srv2-hba3 vsan 102
member device-alias Oracle-Srv2-hba3
member device-alias Storage2-4C
exit

zone name oracle-hds-srv3-hba1 vsan 102
member device-alias Oracle-Srv3-hba1
member device-alias Storage1-5C
exit

zone name oracle-hds-srv3-hba3 vsan 102
member device-alias Oracle-Srv3-hba3
member device-alias Storage2-6C
exit

zone name oracle-hds-srv4-hba1 vsan 102
member device-alias Oracle-Srv4-hba1
member device-alias Storage1-7C
exit

zone name oracle-hds-srv4-hba3 vsan 102
member device-alias Oracle-Srv4-hba3
member device-alias Storage2-8C
exit

zone name server1-boot-hba1 vsan 102
member device-alias Storage1-3C
member device-alias Storage2-4C
member device-alias Oracle-Srv1-hba1
exit

zone name server2-boot-hba1 vsan 102
member device-alias Oracle-Srv2-hba1
member device-alias Storage1-3C
member device-alias Storage2-4C
exit

zone name server3-boot-hba1 vsan 102
member device-alias Oracle-Srv3-hba1
member device-alias Storage1-7C
member device-alias Storage2-8C
exit

zone name server4-boot-hba1 vsan 102


member device-alias Oracle-Srv4-hba1
member device-alias Storage1-7C
member device-alias Storage2-8C
exit

4. After the zones for the Cisco UCS service profiles have been created, create the zone set and add the necessary members.

zoneset name Oracle-HDS-B vsan 102
member oracle-hds-srv1-hba1
member oracle-hds-srv1-hba3
member oracle-hds-srv2-hba1
member oracle-hds-srv2-hba3
member oracle-hds-srv3-hba1
member oracle-hds-srv3-hba3
member oracle-hds-srv4-hba1
member oracle-hds-srv4-hba3
member server1-boot-hba1
member server2-boot-hba1
member server3-boot-hba1
member server4-boot-hba1

exit

5. Activate the zone set.

zoneset activate name Oracle-HDS-B vsan 102
exit
copy run start

When configuring the Cisco Nexus 5548UP with vPCs, verify that the status of all vPCs is "Up" for the connected Ethernet ports by running the commands shown in Figure 82 from the CLI on the Cisco Nexus 5548UP switch.

Figure 82 Port-Channel Status on Cisco Nexus 5548UP

The show vpc status command should show the following output for a successful configuration.


Figure 83 Virtual PortChannel Status on Cisco Nexus 5548UP Fabric A Switch

Verify the virtual port channel status on the Cisco Nexus B switch by running the same command used for the Cisco Nexus A switch.

Connectivity of Cisco Unified Computing System and Hitachi VSP G1000

This section describes the connectivity layout of the solution components on completion of the SAN zoning.

Table 10 shows the connectivity of the solution components.


Table 10 Fibre Channel Connectivity of the Solution Components

Chassis#  Blade (Service Profile)  HBA   Cisco Nexus Switch  Switch Port  Storage Port  Registered Host Name  Host Group Name
1         oracle-hds-srv1          hba0  Cisco Nexus 5548-A  fc2/13       CL1-A         server1-hba0          1A-server1
1         oracle-hds-srv1          hba1  Cisco Nexus 5548-B  fc2/13       CL1-C         server1-hba1          1C-server1
1         oracle-hds-srv1          hba2  Cisco Nexus 5548-A  fc2/14       CL2-A         server1-hba2          2A-server1
1         oracle-hds-srv1          hba3  Cisco Nexus 5548-B  fc2/14       CL2-C         server1-hba3          2C-server1
1         oracle-hds-srv2          hba0  Cisco Nexus 5548-A  fc2/15       CL3-A         server2-hba0          3A-server2
1         oracle-hds-srv2          hba1  Cisco Nexus 5548-B  fc2/15       CL3-C         server2-hba1          3C-server2
1         oracle-hds-srv2          hba2  Cisco Nexus 5548-A  fc2/16       CL4-A         server2-hba2          4A-server2
1         oracle-hds-srv2          hba3  Cisco Nexus 5548-B  fc2/16       CL4-C         server2-hba3          4C-server2
2         oracle-hds-srv3          hba0  Cisco Nexus 5548-A  fc2/17       CL5-A         server3-hba0          5A-server3
2         oracle-hds-srv3          hba1  Cisco Nexus 5548-B  fc2/17       CL5-C         server3-hba1          5C-server3
2         oracle-hds-srv3          hba2  Cisco Nexus 5548-A  fc2/18       CL6-A         server3-hba2          6A-server3
2         oracle-hds-srv3          hba3  Cisco Nexus 5548-B  fc2/18       CL6-C         server3-hba3          6C-server3
2         oracle-hds-srv4          hba0  Cisco Nexus 5548-A  fc2/19       CL7-A         server4-hba0          7A-server4
2         oracle-hds-srv4          hba1  Cisco Nexus 5548-B  fc2/19       CL7-C         server4-hba1          7C-server4
2         oracle-hds-srv4          hba2  Cisco Nexus 5548-A  fc2/20       CL8-A         server4-hba2          8A-server4
2         oracle-hds-srv4          hba3  Cisco Nexus 5548-B  fc2/20       CL8-C         server4-hba3          8C-server4

Map all the logical devices and the virtual volumes created during the storage configuration for the Oracle databases to the host groups. We created four logical devices (200 GB each) using a parity group outside the dynamic tiering pool. These four logical devices are used as OS LUNs to install the operating system.


Install Oracle Linux 6.4 from Image

To install Oracle Linux 6.4 from an image, follow these steps:

1. Download the Oracle Linux 6.4 image from https://edelivery.oracle.com/linux (or another appropriate source) to a staging area. Launch the KVM console for the desired server, click Virtual Media > Activate Virtual Devices, accept the session, then click Virtual Media > Map CD/DVD, add the downloaded image, and reset the server.

Figure 84 and Figure 85 show the KVM console of the server and the mapping to virtual media.

Figure 84 Launching KVM Console


Figure 85 Mapping Virtual Media

2. When the server comes up, it launches the Oracle Linux installer. Select the appropriate LUN on which to install the Oracle Linux operating system.

3. At the Oracle Linux package selection, select "Customize now" to add additional packages to the installation, as shown in Figure 86.

Figure 86 Customize Oracle Linux Packages

4. In the Servers menu of the package customization screen, select "System administration" and the Oracle ASM support tools, as shown in Figure 87.


Figure 87 Select Package System Administration


When the installation completes, reboot the server, accept the license information, register the system as needed, and synchronize the time with NTP. If NTP is not configured, the Oracle Cluster Synchronization Services daemon (OCSSD) kicks in on the Oracle RAC nodes to synchronize the time between the cluster nodes and maintain the mean cluster time. NTP and OCSSD time synchronization are mutually exclusive.
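Where NTP is used, Oracle Clusterware expects ntpd to run with the slewing option so the clock is never stepped backward. A minimal sketch of the relevant Oracle Linux 6 file, assuming the stock ntp package (the OPTIONS line shown is illustrative, not taken from this test bed):

```shell
# /etc/sysconfig/ntpd -- hypothetical example for Oracle Linux 6.
# The -x (slew) option prevents backward time jumps, which Oracle
# Clusterware requires when ntpd is used instead of OCSSD/CTSS.
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"
```

After making the change, restart ntpd (service ntpd restart) and enable it at boot (chkconfig ntpd on).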

This completes the OS Install.

Miscellaneous Post-install Steps

Note: Not all of these changes may be needed on your setup. Validate and change as needed. The following changes were made on the test bed where the Oracle RAC install was done.

Disable SELinux

It is recommended to disable SELinux. Edit /etc/selinux/config and change:

SELINUX=disabled
#SELINUXTYPE=targeted

Disable Firewalls

service iptables stop
service ip6tables stop
chkconfig iptables off
chkconfig ip6tables off

Make sure /etc/sysconfig/network has an entry for the hostname. Preferably add NETWORKING_IPV6=no.

Set Up the yum Repository

cd /etc/yum.repos.d
wget http://public-yum.oracle.com/public-yum-ol6.repo

Edit the downloaded file public-yum-ol6.repo and set enabled=1. Run yum update. You may have to set the http_proxy environment variable if the server accesses the internet via a proxy.

Make sure that the following RPM packages are available after the yum update. Alternatively, install them with yum install.

oracleasmlib-2.0.4-1.el6.x86_64
oracleasm-support-2.1.5-1.el6.x86_64

The exact versions of the packages could differ depending on the UEK kernel being used.


Install Linux Driver for Cisco 10G FCoE HBA

Go to http://software.cisco.com/download/navigator.html

On the download page, select Servers - Unified Computing. In the right menu, select your class of servers, for example Cisco UCS B-Series Blade Server Software, and then select Unified Computing System (UCS) Drivers on the following page.

Select your firmware version under All Releases, for example 2.2, and download the ISO image of Cisco UCS-related drivers for your matching firmware, for example ucs-bxxx-drivers.2.2.2.iso.

Extract the fnic RPM from the ISO.

Alternatively, you can mount the ISO file, or use the KVM console and map the ISO. After mapping the virtual media, log in to the host to copy the RPM:

[root@oracle-hds-srv1 ~]# mount -o loop,ro /download/ucs-bxxx-drivers.2.2.2.iso /mnt
[root@oracle-hds-srv1 ~]# cd /mnt
[root@oracle-hds-srv1 ~]# cd /mnt/Linux/Storage/Cisco/MLOM/Oracle/OL6.4
[root@oracle-hds-srv1 ~]# ls

kmod-fnic-1.6.0.10-3.8.13.13.el6uek.x86_64.rpm
README-Oracle Linux Driver for Cisco 10G FCoE HBA.docx

Follow the instructions in README-Oracle Linux Driver for Cisco 10G FCoE HBA.docx. If you are running the Oracle Linux Red Hat compatible kernel, install the appropriate driver for your Linux version.

The following steps were followed for the UEK2 kernel.

[root@oracle-hds-srv1 ~]# rpm -ivh kmod-fnic-1.6.0.10-3.8.13.13.el6uek.x86_64.rpm
Preparing...                ########################################### [100%]
   1:kmod-fnic              ########################################### [100%]

[root@oracle-hds-srv1 ~]# modinfo fnic
filename:       /lib/modules/2.6.39-400.17.1.el6uek.x86_64/weak-updates/fnic/fnic.ko
version:        1.6.0.10
license:        GPL v2
author:         Abhijeet Joglekar <[email protected]>, Joseph R. Eykholt <[email protected]>
description:    Cisco FCoE HBA Driver
srcversion:     BE0100FCB58E1FF9AC935C4
alias:          pci:v00001137d00000045sv*sd*bc*sc*i*
depends:        libfcoe,libfc,scsi_transport_fc
vermagic:       2.6.39-400.209.1.el6uek.x86_64 SMP mod_unload modversions
parm:           fnic_log_level:bit mask of fnic logging levels (int)
parm:           fnic_trace_max_pages:Total allocated memory pages for fnic trace buffer (uint)
parm:           fnic_fc_trace_max_pages:Total allocated memory pages for fc trace buffer (uint)


parm:           fnic_max_qdepth:Queue depth to report for each LUN (uint)

For more details on the install, follow the README document found in the ISO above.

It is good practice to install the latest drivers. If you plan to run the RHEL compatible kernel, check for any additional drivers in the enic/fnic category that need to be installed.

Reboot the host after making the changes and verify that the fnic driver is updated. Create the appropriate operating system users to own the Oracle Clusterware and Oracle Database binaries. We used "grid" as the operating system user to own the Clusterware binary and "oracle" as the operating system user to own the Oracle Database binary.

Configure Multipath

Use the Oracle Linux multipath software to configure multiple paths to the LUNs presented from the Hitachi storage. Modify the /etc/multipath.conf file to assign an alias to each LUN ID presented from the Hitachi storage, as shown below. Run the multipath -ll command to view all the LUN IDs.

[root@oracle-hds-srv1 etc]# cat /etc/multipath.conf
# multipath.conf written by anaconda

defaults {
        polling_interval 5
        path_grouping_policy multibus
        failback immediate
        user_friendly_names yes
        no_path_retry 6
        max_fds 8192
}

devices {
        device {
                vendor "HITACHI"
                product "OPEN-V"
                path_grouping_policy multibus
                getuid_callout "/lib/udev/scsi_id --whitelisted --device=/dev/%n"
                path_selector "round-robin 0"
                path_checker tur
                features "0"
                hardware_handler "0"
                prio const
                rr_weight uniform
                no_path_retry 6
                rr_min_io_rq 8
        }
}
blacklist_exceptions {
        wwid "360060e8007c3da000030c3da00000000"
}
multipaths {
        multipath {
                wwid 360060e8007c3da000030c3da0000004b
                alias DATADISK1
        }


        multipath {
                wwid 360060e8007c3da000030c3da0000004a
                alias DATADISK2
        }
        multipath {
                wwid 360060e8007c3da000030c3da0000004d
                alias DATADISK3
        }
}

Add all the LUNs presented from the Hitachi storage in the /etc/multipath.conf file. Reload the multipath daemon.

Configure Oracle ASM

Oracle ASM is installed as part of the Oracle Linux 6 installation and needs to be configured. Create and verify the Oracle users and groups on each cluster node.

groupadd -g 1000 oinstall
groupadd -g 1200 dba
useradd -u 2000 -g oinstall -G dba grid
passwd grid
useradd -u 1100 -g oinstall -G dba oracle
passwd oracle

Configure the ASM library as "root" user and give the ownership to "grid" user as follows.

[root@oracle-hds-srv1 ~]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.

Default user to own the driver interface [oracle]: grid
Default group to own the driver interface [oinstall]:
Start Oracle ASM library driver on boot (y/n) [y]:
Scan for Oracle ASM disks on boot (y/n) [y]:
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver: [ OK ]
Scanning the system for Oracle ASMLib disks: [ OK ]

This should create the mount point /dev/oracleasm/disks.


Configure OS LUNs and Create ASM Disks

Create LUN Partitions

Partition LUNs

Partition the LUNs with an offset of 1 MB. While it is necessary to create partitions on disks for Oracle ASM (to prevent any accidental overwrite), it is equally important to create an aligned partition. Setting this offset aligns host I/O operations with the back-end storage I/O operations.
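As a quick sanity check on the arithmetic, a 1 MB offset corresponds to a partition starting at sector 2048 with 512-byte sectors. The values below are the conventional ones, not taken from this test bed:

```shell
#!/bin/sh
# Verify the 1 MB alignment arithmetic: a partition starting at sector
# 2048 with 512-byte sectors begins exactly 1024 KB (1 MiB) into the LUN.
START_SECTOR=2048
SECTOR_BYTES=512
OFFSET_KB=$(( START_SECTOR * SECTOR_BYTES / 1024 ))
echo "partition offset: ${OFFSET_KB} KB"
```

On a live system, fdisk -lu on the multipath device shows the partition's start sector for the same check.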

Use host utilities like "fdisk" to create a partition on the disk.

Create an input file, "fdisk.input", as shown below.

d
n
p
1
          <- Leave a double space here
x
b
1
2048
p
w

Execute fdisk /dev/mapper/asmdisk1 < fdisk.input. This creates a partition starting at sector 2048. This can be scripted for all the LUNs as well.

Now all the pseudo partitions should be available in /dev/mapper as asmdisk1p1, asmdisk2p1, asmdisk3p1, and so on.
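The scripting mentioned above can be sketched as a small loop. The device names asmdisk1 through asmdisk3 are examples only, and the echo makes this a dry run; pipe the printed commands to sh (as root) to actually partition the devices:

```shell
#!/bin/sh
# Hypothetical sketch: apply the same fdisk.input to several multipath
# aliases. Echo is a dry run; pipe the output to sh to execute it.
for disk in asmdisk1 asmdisk2 asmdisk3; do
    echo "fdisk /dev/mapper/${disk} < fdisk.input"
done
```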

Reload the multipath daemon to rescan all the multipath devices.

Create ASM Disks

When the partitions are created, create ASM disks with the oracleasm utility.

oracleasm createdisk -v asm_1 /dev/mapper/asmdisk1p1

This will create a disk label asm_1 on the partition. This can be queried with the Oracle-supplied kfed/kfod utilities as well.

Repeat the process for all the /dev/mapper partitions and create ASM disks for all your database and RAC files.
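Repeating the createdisk step can likewise be scripted. The partition paths and disk labels below are illustrative only; pipe the printed commands to sh (as root) to create the ASM labels:

```shell
#!/bin/sh
# Hypothetical sketch: generate an oracleasm createdisk command for each
# multipath partition. Echo is a dry run; pipe the output to sh to run it.
i=1
for part in /dev/mapper/asmdisk1p1 /dev/mapper/asmdisk2p1 /dev/mapper/asmdisk3p1; do
    echo "oracleasm createdisk -v asm_${i} ${part}"
    i=$((i + 1))
done
```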

Scan the ASM disks with oracleasm on all the Oracle RAC nodes; the disks should be visible under the /dev/oracleasm/disks mount point created by oracleasm earlier, as shown below.

[root@oracle-hds-srv1 ~]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...

Now the system is ready for the Oracle install.


Oracle Database 11g R2 GRID Infrastructure with RAC Option Deployment

This section describes the high-level steps for the Oracle Database 11g R2 RAC install. Prior to the GRID and database install, verify that all the prerequisites are completed. Alternatively, you can install the Oracle Validated RPM, which ensures all prerequisites are met before the Oracle GRID install. We will not cover the step-by-step install for Oracle GRID in this document, but will provide a partial summary of details that might be relevant.

Use the following Oracle document for pre-installation tasks, such as setting up the kernel parameters, RPM packages, user creation, etc.

(http://download.oracle.com/docs/cd/E11882_01/install.112/e10812/prelinux.htm#BABHJHCJ)

1. Create and verify the required oracle users and groups on each Oracle RAC node.

groupadd -g 1000 oinstall
groupadd -g 1200 dba
useradd -u 2000 -g oinstall -G dba grid
passwd grid
useradd -u 1100 -g oinstall -G dba oracle
passwd oracle

2. Create the following local directory structure and ownerships on each RAC node.

mkdir -p /u01/app/11.2.0/grid
mkdir -p /u01/app/oracle
chown -R oracle:oinstall /u01/app/oracle
chmod -R 775 /u01/app/oracle
chown -R grid:oinstall /u01/app
chmod -R 775 /u01/app

3. Configure the private and public NICs with the appropriate IP addresses across all the nodes that are part of the Oracle Clusterware installation.

4. Identify the virtual IP addresses and SCAN IPs and have them set up in DNS per Oracle's recommendation. Alternatively, you can update the /etc/hosts file with all the details (private, public, SCAN and virtual IP) if you do not have DNS services available.

5. Configure ssh option (with no password) for the Oracle user and grid user. For more information about ssh configuration, refer to the Oracle installation documentation.

Note Oracle Universal Installer also offers automatic SSH connectivity configuration and testing.

6. Configure/verify "/etc/sysctl.conf" and update the shared memory and semaphore parameters required for the Oracle GRID installation. Also configure the "/etc/security/limits.conf" file by adding user limits for the oracle and grid users.

Note You do not have to perform these steps if Oracle Validated RPM is installed.

7. Configure hugepages.


HugePages is a method to have a larger page size, which is useful for working with very large memory. For Oracle Databases, using HugePages reduces the operating system maintenance of page states and increases the Translation Lookaside Buffer (TLB) hit ratio.

Advantages of HugePages

• HugePages are not swappable so there is no page-in/page-out mechanism overhead.

• HugePages uses fewer pages to cover the physical address space, so the size of the "bookkeeping" (mapping from the virtual to the physical address) decreases, requiring fewer entries in the TLB, which improves the TLB hit ratio.

• Hugepages reduces page table overhead.

• Eliminated page table lookup overhead: Since the pages are not subject to replacement, page table lookups are not required.

• Faster overall memory performance: On virtual memory systems each memory operation is actually two abstract memory operations. Since there are fewer pages to work on, the possible bottleneck on page table access is clearly avoided.

For our configuration, we used HugePages for both the OLTP and DSS workloads. Refer to Oracle Metalink document 361323.1 for HugePages configuration details.
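The core of the calculation in that note can be sketched as simple arithmetic; the SGA size below is an assumed example value, not a figure from this configuration:

```shell
#!/bin/sh
# Hypothetical sketch: derive vm.nr_hugepages from a total SGA size, in
# the spirit of Oracle note 361323.1. SGA_MB is an example value.
SGA_MB=8192                               # total SGA of all instances, in MB
HUGEPAGE_KB=2048                          # Hugepagesize from /proc/meminfo, in KB
# Number of hugepages = SGA size in KB / hugepage size, rounded up.
NR_HUGEPAGES=$(( (SGA_MB * 1024 + HUGEPAGE_KB - 1) / HUGEPAGE_KB ))
echo "vm.nr_hugepages = ${NR_HUGEPAGES}"  # add this line to /etc/sysctl.conf
```

The resulting vm.nr_hugepages value is persisted in /etc/sysctl.conf and takes effect after a reboot (or sysctl -p, memory permitting).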

When HugePages are configured, you are ready to install Oracle Database 11g R2 GRID Infrastructure with the RAC option and the database.

Installing Oracle Database 11g R2 RAC

It is not within the scope of this document to include the specifics of an Oracle RAC installation; you should refer to the Oracle installation documentation for specific installation instructions for your environment.

To install Oracle, follow these steps:

1. Download the Oracle Database 11g Release 2 Grid Infrastructure (11.2.0.4.0) and Oracle Database 11g Release 2 (11.2.0.4.0) for Linux x86-64. Extract the zip files for both the Grid Infrastructure and the database software.

2. For this configuration, we used Oracle ASM for the OCR and voting disks. For more details, see the Grid Infrastructure Installation Guide for Linux.

(http://www.oracle.com/pls/db112/to_toc?pathname=install.112/e10812/toc.htm).

3. Launch the installer as the "grid" user from the staging area where the Oracle 11g R2 Grid Infrastructure software is stored.


4. Click Next.

5. Select "Install and Configure Oracle Grid infrastructure for a Cluster" and click Next to continue the installation.


6. Select "Advanced Installation" and click Next to continue with the installation.

7. Provide the cluster name, scan name and scan port. Click Next.

8. Add all the node names (public host name and virtual host name, as provided in your "/etc/hosts" file). Click Next.


9. Select the appropriate network interface for public and private interconnect use. Click Next.

10. Select "Oracle Automatic Storage Management (Oracle ASM)" and click Next.


11. Provide the ASM disk group name and check all the ASM disks created in the previous step to store the OCR file and voting disks. Click Next.

12. Provide the Oracle Base path and the software installation location. Click Next.


13. Run both executables as the root user on each node, starting from node 1. Click OK to move to the next step after both executables have been run on all the nodes that are part of the cluster installation.


When the configuration completes successfully, click Next to complete the installation.

Installing Oracle 11g R2 Database Binary

When the Oracle Grid install is complete, install Oracle Database 11g Release 2 Database "Software Only" as the oracle user. Do not create the database during the database binary installation. See the Real Application Clusters Installation Guide for Linux and UNIX for detailed installation instructions: http://www.oracle.com/pls/db112/to_toc?pathname=install.112/e10813/toc.htm.

1. Launch the installer as the "oracle" user from the staging area where the Oracle 11g R2 database binary software is stored.


2. Click Next.

3. Select the option "Install database software only" and click Next.


4. Select the option "Oracle Real Application Clusters database Installation" and select all the nodes to install the database binary. Click Next.


5. Run the "root.sh" script as the root user on all the nodes, starting with node 1. Once "root.sh" has run successfully on all the nodes, click OK to complete the installation.

Create ASM DiskGroups and Create Databases

Run the "asmca" command as the "grid" operating system user to create the ASM disk groups that store the databases. We created three different disk groups to store the OLTP database, the DSS database, and the redo log files of both databases, as shown in Table 11.
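Disk group creation can also be scripted with asmca in silent mode; a minimal sketch for one of the groups below (the disk-string path and the exact flag spellings are assumptions — verify against asmca -help on your installation):

```shell
# Hypothetical silent-mode creation of the OLTPDG disk group, run as the grid user
asmca -silent -createDiskGroup \
  -diskGroupName OLTPDG \
  -redundancy EXTERNAL \
  -disk '/dev/mapper/oltp_lun*'
```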

Table 11  ASM Diskgroups

Disk Group Name   Number of ASM Disks   Size of Disk Group   Purpose
OLTPDG            24                    12 TB                OLTP database
DSSDG             24                    12 TB                DSS database
REDODG            10                    5 TB                 Redo log files of the OLTP and DSS databases

Run the "dbca" tool as the "oracle" user to create the OLTP and DSS databases. Make sure to place the datafiles, redo logs, and control files in the proper disk groups created in the steps above. Additional details about OLTP and DSS schema creation are discussed in the workload section.

Workloads and Database Configuration

We used Swingbench for workload testing. Swingbench is a simple-to-use, free, Java-based tool that generates database workloads and performs stress testing using different benchmarks in Oracle database environments. Swingbench provides four separate benchmarks: Order Entry, Sales History, Calling Circle, and Stress Test. For the tests described in this paper, the Swingbench Order Entry benchmark


was used for OLTP workload testing, and the Sales History benchmark was used for DSS workload testing. The Order Entry benchmark is based on the SOE schema and is TPC-C-like in its transaction types. The workload uses a balanced read/write ratio of around 60/40 and can be designed to run continuously, testing the performance of a typical Order Entry workload against a small set of tables and producing contention for database resources. The Sales History benchmark is based on the SH schema and is TPC-H-like. The workload is query (read) centric and is designed to test the performance of queries against large tables.

As discussed in the previous section, two independent databases were created for the Oracle Swingbench OLTP and DSS workloads. The next step is to pre-create the Order Entry and Sales History schemas for the OLTP and DSS workloads. The Swingbench Order Entry (OLTP) workload uses the SOE tablespace, and the Sales History workload uses the SH tablespace. We pre-created the SOE schema on the OLTP database and the SH schema on the DSS database.

For our setup, we created the SOE tablespace "soetbs" with 276 datafiles of 30 GB each on the OLTP database, and the SH tablespace "shtbs" with 342 datafiles of 30 GB each on the DSS database. Assign "soetbs" as the default tablespace for the SOE schema on the OLTP database and "shtbs" as the default tablespace for the SH schema on the DSS database.
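A quick back-of-the-envelope check of the resulting tablespace sizes (the datafile counts and the 30 GB datafile size are from the setup above):

```shell
# 276 and 342 datafiles of 30 GB each
awk 'BEGIN { printf "soetbs: %d GB, shtbs: %d GB\n", 276 * 30, 342 * 30 }'
```

At 8,280 GB and 10,260 GB respectively, both tablespaces fit within the 12 TB OLTPDG and DSSDG disk groups of Table 11.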

Once the schemas for the workloads were created, we populated both databases with the Swingbench datagenerator, as shown below.

OLTP Database

The OLTP database was populated with the following data:

[oracle@oracle-hds-srv1 ~]$ sqlplus soe/soe
SQL*Plus: Release 11.2.0.4.0 Production on Tue Sep 2 14:29:41 2014
Copyright (c) 1982, 2013, Oracle.  All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> select table_name, num_rows from user_tables;

TABLE_NAME                       NUM_ROWS
------------------------------ ----------
CUSTOMERS                      9999999984
ORDER_ITEMS                    3.4589E+10
ORDERS                         1.1250E+10
LOGON                          2499999984
ORDERENTRY_METADATA                     4
PRODUCT_DESCRIPTIONS                 1000
PRODUCT_INFORMATION                  1000
INVENTORIES                        898372
WAREHOUSES                           1000

DSS (Sales History) Database

The DSS database was populated with the following data:

[oracle@oracle-hds-srv1 ~]$ sqlplus sh/sh
SQL*Plus: Release 11.2.0.4.0 Production on Tue Sep 2 14:37:37 2014
Copyright (c) 1982, 2013, Oracle.  All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> select table_name, num_rows from user_tables;

TABLE_NAME                       NUM_ROWS
------------------------------ ----------
CHANNELS                                5
COUNTRIES                              23
CUSTOMERS                      1.1500E+10
PROMOTIONS                            503
PRODUCTS                               72
SUPPLEMENTARY_DEMOGRAPHICS     1.1500E+10
TIMES                                6209
SALES                          5.7500E+10

As typically encountered in real-world deployments, we tested the following scalability and stress-related scenarios on the 4-node Oracle RAC cluster configuration:

• OLTP user scalability and OLTP cluster scalability representing small and random transactions

• DSS workload representing larger transactions

• Mixed workload featuring OLTP and DSS workloads running simultaneously for 24 hours
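Swingbench runs like those above can be driven from its command-line load generator, charbench; a minimal sketch (the config file name and the -uc/-rt flag usage are assumptions to illustrate the shape of a run — check charbench -h for your Swingbench version):

```shell
# Hypothetical 100-user Order Entry run for 12 hours against the OLTP database
./charbench -c oeconfig.xml -uc 100 -rt 12:00
```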

Performance Data from the Tests

Once the databases were created, we started by calibrating the OLTP database for the number of users and the database configuration. For the Order Entry workload, we used a 96 GB SGA and ensured that hugepages were in use. Each OLTP scalability test was run for at least 12 hours, and we verified that results were consistent for the duration of a full run.
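For reference, the hugepage count needed to back the 96 GB SGA can be computed from the hugepage size (2048 KB is assumed here, the common x86_64 default; check the Hugepagesize line in /proc/meminfo on your hosts):

```shell
# Hugepages needed to back a 96 GB SGA with 2 MB (2048 KB) hugepages
sga_gb=96
hugepage_kb=2048
echo "vm.nr_hugepages = $(( sga_gb * 1024 * 1024 / hugepage_kb ))"
```

The resulting value is what would be set via vm.nr_hugepages in /etc/sysctl.conf.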

OLTP Workload

For OLTP workloads, the common measurement metrics are transactions per minute (TPM), user scalability with IOPS, and CPU utilization. Here are the scalability charts for the Order Entry workload.


We captured the data after running the tests with different numbers of users (100, 150, 200, and 250) across the 4-node cluster. During the tests, we validated that the Oracle SCAN listener fairly and evenly load balanced users across all 4 nodes of the cluster. We also observed appropriate scalability in transactions per minute as the number of users across the cluster increased. The next graph shows increased IO and scalability as the number of users across all cluster nodes increased.

As indicated in the graph, we observed about 85,951 IO/sec across the 4-node cluster. The Oracle AWR report below also summarizes physical reads/sec and physical writes/sec per instance. During the OLTP tests, we observed some resource utilization variation due to the random nature of the workload, as depicted by the 250-user IOPS. We ran each test multiple times to ensure that the numbers presented in this solution are consistent.
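If the load were spread perfectly evenly, the observed cluster-wide rate would put each of the four instances near the figure below (actual per-instance splits vary slightly, as the AWR data shows):

```shell
# Even split of the observed 85,951 IOPS across the 4 RAC nodes
awk 'BEGIN { printf "%.0f IOPS per node\n", 85951 / 4 }'
```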


The following graph shows the CPU utilization in each node after OLTP user scaling from 100 users to 250 users.

We captured the host CPU utilization after running the tests with different numbers of users (100, 150, 200, and 250) across the 4-node cluster. During the tests, we validated that the Oracle SCAN listener fairly and evenly load balanced users across all 4 nodes of the cluster. We also observed that CPU was utilized equally across all the nodes as the number of users increased.

The table below shows the interconnect traffic for the 4-node Oracle RAC cluster during the 400-user run. The average interconnect traffic was 215 MB/sec for the duration of the run.

Interconnect Traffic   Sent (MB/s)   Received (MB/s)
Instance 1             112.1         110.3
Instance 2             124.0         119.2
Instance 3             100.8         105.7
Instance 4             112.7         113.5
Total                  449.6         448.7

We also tested different OLTP user counts while adding Oracle RAC nodes one after another. The following graph shows node scalability as well as user scalability.

DSS Workload

DSS workloads are generally sequential in nature, read-intensive, and exercise large IO sizes. DSS workloads run a small number of users that typically exercise extremely complex queries that run for hours. For our tests, we ran the Swingbench Sales History workload with 8, 16, 24, and 32 users using one, two, three, and four nodes respectively. We captured the throughput from each test run. The charts below show the DSS workload results.

During the DSS test run using the Swingbench SH schema, we confirmed that throughput kept scaling as nodes were added and the number of DSS users increased. We observed 6.12 GB/sec of throughput with the four-node database cluster and 32 DSS users.
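The interconnect totals reported above can be cross-checked by summing the per-instance figures from the interconnect traffic table:

```shell
# Sum the per-instance interconnect traffic (MB/s) from the table above
awk 'BEGIN {
  sent = 112.1 + 124.0 + 100.8 + 112.7
  recv = 110.3 + 119.2 + 105.7 + 113.5
  printf "Total sent: %.1f MB/s, total received: %.1f MB/s\n", sent, recv
}'
```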


Mixed Workload

We ran both OLTP and DSS workloads simultaneously. This test ensures that the configuration can sustain the small random queries presented by the OLTP workload alongside the large sequential transactions submitted by the DSS workload. We ran the tests with different OLTP and DSS user loads while adding Oracle database cluster nodes one after another.

Destructive and Hardware Failover Tests

The goal of these tests is to ensure that the reference architecture withstands commonly occurring failures due to unexpected crashes, hardware failures, or human errors. We conducted many hardware, software (process kills), and OS-specific failure tests that simulate real-world scenarios under stress conditions. In the destructive testing, we also demonstrate the unique failover capabilities of the Cisco VIC 1240 adapter. Some of those test cases are highlighted below.
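During each failure scenario, cluster and path health can be watched from one of the RAC nodes; a minimal sketch (these are standard DM-Multipath and 11gR2 Clusterware commands, but the exact output depends on your configuration):

```shell
# Watch SAN path state while a fabric component is pulled or rebooted
multipath -ll

# Verify that all Clusterware resources stay ONLINE during the failure
crsctl stat res -t
```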

Test 1 – Chassis 1 IOM2 Link Failure test
Test: Run the system on full mixed workload. Disconnect IOM2 by pulling it out from the first chassis and reconnect IOM2 after 5 minutes.
Status: Network traffic from IOM2 failed over to IOM1 without any disruption.

Test 2 – Chassis 2 IOM2 Link Failure test
Test: Run the system on full mixed workload. Disconnect IOM2 by pulling it out from the second chassis and reconnect IOM2 after 5 minutes.
Status: Network traffic from IOM2 failed over to IOM1 without any disruption.

Test 3 – Chassis 1 & 2 IOM2 Link Failure test
Test: Run the system on full mixed workload. Disconnect IOM2 by pulling it out from both chassis and reconnect IOM2 after 5 minutes.
Status: Network traffic from IOM2 failed over to IOM1 without any disruption.

Test 4 – UCS 6248 Fabric-B Failure test
Test: Run the system on full load as above. Reboot Fabric B and let it rejoin the cluster.
Status: The fabric failover did not cause any disruption to the private/public network or storage traffic.

Test 5 – UCS 6248 Fabric-A Failure test
Test: Run the system on full load as above. Reboot Fabric A and let it rejoin the cluster.
Status: The fabric failover did not cause any disruption to the private/public network or storage traffic.

Test 6 – Nexus 5548 Fabric-A Failure test
Test: Run the system on full mixed workload. Reboot the Nexus 5548 Fabric-A switch, wait for 5 minutes, and connect it back.
Status: No disruption to the public/private network or storage traffic.

Test 7 – Nexus 5548 Fabric-B Failure test
Test: Run the system on full mixed workload. Reboot the Nexus 5548 Fabric-B switch, wait for 5 minutes, and connect it back.
Status: No disruption to the public/private network or storage traffic.

Conclusion

The Cisco Unified Computing System is built on leading computing, networking, and infrastructure software components. With the Hitachi Virtual Storage Platform G1000, customers can leverage a secure, integrated, and optimized stack that includes compute, network, and storage resources that are sized, configured, and deployed as a fully tested unit running industry-standard applications such as Oracle Database 11g R2 RAC.

The following list describes why the combination of the Cisco Unified Computing System with the Hitachi Virtual Storage Platform G1000 is so powerful for Oracle environments:

• The stateless computing architecture provided by the Service Profile capability of the Cisco Unified Computing System allows fast, non-disruptive workload changes to be executed simply and seamlessly across the integrated Cisco UCS infrastructure and Cisco x86 servers.

• With the introduction of Hitachi Dynamic Tiering, the complexity and overhead of implementing data lifecycle management and optimizing the use of tiered storage are solved. Dynamic Tiering software simplifies storage administration by eliminating the need for time-consuming manual data classification and movement of data to optimize the usage of tiered storage.

As a result, customers can achieve dramatic cost savings when leveraging Fibre Channel based products and deploy any application on a scalable shared IT infrastructure built on Cisco and Hitachi technologies. This solution, jointly developed by Cisco and Hitachi, is a flexible infrastructure platform composed of pre-sized storage, networking, and server components. It is designed to ease your IT transformation and operational challenges with maximum efficiency and minimal risk.

The Cisco Unified Computing System and Hitachi storage differ from other solutions by providing:

• Simplified storage administration: Hitachi Dynamic Tiering software automatically optimizes data placement.

• The highest efficiency and throughput through granular page-based data movement.

• Simplified management of up to three storage tiers as a single volume, while the most active data is automatically moved to the highest-performing tier.

• Integrated, validated technologies from industry leaders and top-tier software partners.

• A platform, built from unified compute, fabric, and storage technologies, that lets you scale to large-scale data centers without architectural changes.

• Centralized, simplified management of infrastructure resources, including end-to-end automation.

• A flexible cooperative support model that resolves issues rapidly and spans new and legacy products.


Appendix

Appendix A: Cisco Nexus 5548 UP Configuration

The following example shows the Cisco Nexus 5548 fabric zoning configuration for all of the Oracle RAC servers.

Log in to the Cisco Nexus 5548 through SSH and issue the following:

Cisco Nexus 5548 Fabric A Configuration

!Command: show running-config
!Time: Fri Nov 20 20:54:05 2009

version 7.0(2)N1(1)
feature fcoe
hostname Oracle-HDS-N5K-A
feature npiv
feature telnet
cfs eth distribute
feature interface-vlan
feature hsrp
feature lacp
feature vpc
feature lldp
username admin password 5 $1$jwhzf7l2$2wgzBYzVsJnjrVoQI5TL01 role network-admin
ip domain-lookup
policy-map type network-qos jumbo
  class type network-qos class-fcoe
    pause no-drop
    mtu 2158
  class type network-qos class-default
    mtu 9216
    multicast-optimize
system qos
  service-policy type queuing input fcoe-default-in-policy
  service-policy type queuing output fcoe-default-out-policy
  service-policy type qos input fcoe-default-in-policy
  service-policy type network-qos jumbo
slot 2
  port 1-16 type fc
snmp-server user admin network-admin auth md5 0xf23753e0e7c2ec2d83868f5a09b4767f priv 0xf23753e0e7c2ec2d83868f5a09b4767f localizedkey
rmon event 1 log trap public description FATAL(1) owner PMON@FATAL
rmon event 2 log trap public description CRITICAL(2) owner PMON@CRITICAL
rmon event 3 log trap public description ERROR(3) owner PMON@ERROR
rmon event 4 log trap public description WARNING(4) owner PMON@WARNING
rmon event 5 log trap public description INFORMATION(5) owner PMON@INFO
vlan 1
vlan 10
  name Public_Network
vlan 191


  name Private_Network
spanning-tree port type edge bpduguard default
spanning-tree port type network default
vrf context management
  ip route 0.0.0.0/0 173.36.215.1
port-channel load-balance ethernet source-dest-port
vpc domain 1
  peer-keepalive destination 173.36.215.62
  delay restore 150
  auto-recovery
vsan database
  vsan 101 name "Fabric_A"
  vsan 102 name "Fabric_B"
device-alias database
  device-alias name Oracle-Srv4-hba2 pwwn 20:00:00:25:b5:10:a0:05
device-alias commit
fcdomain fcid database
  vsan 101 wwn 20:21:00:2a:6a:61:5f:00 fcid 0x340000 dynamic
  vsan 101 wwn 20:22:00:2a:6a:61:5f:00 fcid 0x340020 dynamic
  vsan 101 wwn 20:23:00:2a:6a:61:5f:00 fcid 0x340040 dynamic
  vsan 101 wwn 20:24:00:2a:6a:61:5f:00 fcid 0x340060 dynamic
  vsan 101 wwn 50:06:0e:80:07:27:9a:00 fcid 0x340080 dynamic
  vsan 101 wwn 50:06:0e:80:07:27:9a:02 fcid 0x3400a0 dynamic
  vsan 101 wwn 50:06:0e:80:07:27:9a:10 fcid 0x3400c0 dynamic
  vsan 101 wwn 50:06:0e:80:07:27:9a:12 fcid 0x340100 dynamic
  vsan 101 wwn 50:06:0e:80:07:c3:da:02 fcid 0x3400a1 dynamic
  vsan 101 wwn 50:06:0e:80:07:c3:da:12 fcid 0x340101 dynamic
  vsan 101 wwn 50:06:0e:80:07:c3:da:10 fcid 0x3400c1 dynamic
!            [Storage2-2A]
  vsan 101 wwn 50:06:0e:80:07:c3:da:00 fcid 0x340081 dynamic
!            [Storage1-1A]
  vsan 101 wwn 20:00:00:25:b5:10:a0:0c fcid 0x340021 dynamic
!            [Oracle-Srv1-hba0]
  vsan 101 wwn 20:00:00:25:b5:10:a0:06 fcid 0x340001 dynamic
!            [Oracle-Srv3-hba0]
  vsan 101 wwn 20:00:00:25:b5:10:a0:0a fcid 0x340061 dynamic
!            [Oracle-Srv2-hba0]
  vsan 101 wwn 20:00:00:25:b5:10:a0:14 fcid 0x340041 dynamic
!            [Oracle-Srv4-hba0]
  vsan 101 wwn 20:00:00:25:b5:10:a0:0d fcid 0x340022 dynamic
!            [Oracle-Srv1-hba2]
  vsan 101 wwn 20:00:00:25:b5:10:a0:0b fcid 0x340062 dynamic
!            [Oracle-Srv2-hba2]
  vsan 101 wwn 20:00:00:25:b5:10:a0:07 fcid 0x340002 dynamic
!            [Oracle-Srv3-hba2]
  vsan 101 wwn 20:00:00:25:b5:10:a0:05 fcid 0x340042 dynamic
!            [Oracle-Srv4-hba2]
  vsan 101 wwn 50:06:0e:80:07:c3:da:40 fcid 0x340140 dynamic
!            [Storage1-5A]
  vsan 101 wwn 50:06:0e:80:07:c3:da:42 fcid 0x340160 dynamic
  vsan 101 wwn 50:06:0e:80:07:c3:da:50 fcid 0x340180 dynamic
!            [Storage2-6A]
  vsan 101 wwn 50:06:0e:80:07:c3:da:52 fcid 0x3401a0 dynamic
  vsan 101 wwn 50:06:0e:80:07:c3:da:20 fcid 0x3401c0 dynamic
!            [Storage1-3A]
  vsan 101 wwn 50:06:0e:80:07:c3:da:60 fcid 0x340181 dynamic
!            [Storage1-7A]
  vsan 101 wwn 50:06:0e:80:07:c3:da:30 fcid 0x340200 dynamic


!            [Storage2-4A]
  vsan 101 wwn 50:06:0e:80:07:c3:da:70 fcid 0x340220 dynamic
!            [Storage2-8A]

interface Vlan1
  no shutdown

interface Vlan10
  no shutdown
  ip address 10.36.215.2/24
  hsrp version 2
  hsrp 10
    preempt
    priority 110
    ip 10.36.215.1

interface Vlan191
  no shutdown
  ip address 191.168.1.2/24
  hsrp version 2
  hsrp 191
    preempt
    priority 110
    ip 191.168.1.1

interface port-channel1
  description vPC peer-link
  switchport mode trunk
  switchport trunk allowed vlan 1,10,191
  spanning-tree port type network
  vpc peer-link

interface port-channel3
  description Fabric_Interconnect_A
  switchport mode trunk
  switchport trunk allowed vlan 1,10,191
  spanning-tree port type edge trunk
  vpc 3

interface port-channel4
  description Fabric_Interconnect_B
  switchport mode trunk
  switchport trunk allowed vlan 1,10,191
  spanning-tree port type edge trunk
  vpc 4
vsan database
  vsan 101 interface fc2/1
  vsan 101 interface fc2/2
  vsan 101 interface fc2/3
  vsan 101 interface fc2/4
  vsan 101 interface fc2/5
  vsan 101 interface fc2/6
  vsan 101 interface fc2/7
  vsan 101 interface fc2/8
  vsan 101 interface fc2/9
  vsan 101 interface fc2/10
  vsan 101 interface fc2/11
  vsan 101 interface fc2/12


  vsan 101 interface fc2/13
  vsan 101 interface fc2/14
  vsan 101 interface fc2/15
  vsan 101 interface fc2/16

interface fc2/1
  no shutdown

interface fc2/2
  no shutdown

interface fc2/3
  no shutdown

interface fc2/4
  no shutdown

interface fc2/5
  no shutdown

interface fc2/6
  no shutdown

interface fc2/7
  no shutdown

interface fc2/8
  no shutdown

interface fc2/9
  no shutdown

interface fc2/10
  no shutdown

interface fc2/11
  no shutdown

interface fc2/12
  no shutdown

interface fc2/13
  no shutdown

interface fc2/14
  no shutdown

interface fc2/15
  no shutdown

interface fc2/16
  no shutdown

interface Ethernet1/1
  description Nexus5k-A-Cluster-Interconnect
  switchport mode trunk
  switchport trunk allowed vlan 1,10,191
  channel-group 1 mode active

interface Ethernet1/2


  description Nexus5k-A-Cluster-Interconnect
  switchport mode trunk
  switchport trunk allowed vlan 1,10,191
  channel-group 1 mode active

interface Ethernet1/3
  description Fabric_Interconnect_A:1/1
  switchport mode trunk
  switchport trunk allowed vlan 1,10,191
  channel-group 3 mode active

interface Ethernet1/4
  description Fabric_Interconnect_B:1/1
  switchport mode trunk
  switchport trunk allowed vlan 1,10,191
  channel-group 4 mode active

interface Ethernet1/5

interface Ethernet1/6

interface Ethernet1/7

interface Ethernet1/8

interface Ethernet1/9

interface Ethernet1/10

interface Ethernet1/11

interface Ethernet1/12
  switchport access vlan 10
  speed 1000

interface Ethernet1/13

interface Ethernet1/14

interface Ethernet1/15

interface Ethernet1/16

interface Ethernet1/17
  description FCoE_FI_A_17

interface Ethernet1/18
  description FCoE_FI_B_17

interface Ethernet1/19

interface Ethernet1/20

interface Ethernet1/21

interface Ethernet1/22

interface Ethernet1/23


interface Ethernet1/24

interface Ethernet1/25

interface Ethernet1/26

interface Ethernet1/27

interface Ethernet1/28

interface Ethernet1/29

interface Ethernet1/30

interface Ethernet1/31
  shutdown
  speed 1000

interface Ethernet1/32
  switchport access vlan 10
  speed 1000

interface mgmt0
  vrf member management
  ip address 173.36.215.61/24
line console
line vty
boot kickstart bootflash:/n5000-uk9-kickstart.7.0.2.N1.1.bin
boot system bootflash:/n5000-uk9.7.0.2.N1.1.bin
interface fc2/1
interface fc2/2
interface fc2/3
interface fc2/4
interface fc2/5
interface fc2/6
interface fc2/7
interface fc2/8
interface fc2/9
interface fc2/10
interface fc2/11
interface fc2/12
interface fc2/13
interface fc2/14
interface fc2/15
interface fc2/16
!Full Zone Database Section for vsan 101
zone name Oracle-HDS-1A vsan 101
zone name oracle-hds-srv1-hba0 vsan 101
  member pwwn 20:00:00:25:b5:10:a0:0c
!            [Oracle-Srv1-hba0]
  member pwwn 50:06:0e:80:07:c3:da:00
!            [Storage1-1A]

zone name oracle-hds-srv1-hba2 vsan 101
  member pwwn 20:00:00:25:b5:10:a0:0d
!            [Oracle-Srv1-hba2]
  member pwwn 50:06:0e:80:07:c3:da:10
!            [Storage2-2A]


zone name oracle-hds-srv2-hba0 vsan 101
  member pwwn 20:00:00:25:b5:10:a0:0a
!            [Oracle-Srv2-hba0]
  member pwwn 50:06:0e:80:07:c3:da:20
!            [Storage1-3A]

zone name oracle-hds-srv2-hba2 vsan 101
  member pwwn 20:00:00:25:b5:10:a0:0b
!            [Oracle-Srv2-hba2]
  member pwwn 50:06:0e:80:07:c3:da:30
!            [Storage2-4A]

zone name Oracle-HDS-SRV1 vsan 101
zone name oracle-hds-srv3-hba0 vsan 101
  member pwwn 20:00:00:25:b5:10:a0:06
!            [Oracle-Srv3-hba0]
  member pwwn 50:06:0e:80:07:c3:da:40
!            [Storage1-5A]

zone name oracle-hds-srv3-hba2 vsan 101
  member pwwn 20:00:00:25:b5:10:a0:07
!            [Oracle-Srv3-hba2]
  member pwwn 50:06:0e:80:07:c3:da:50
!            [Storage2-6A]

zone name oracle-hds-srv4-hba0 vsan 101
  member pwwn 20:00:00:25:b5:10:a0:14
!            [Oracle-Srv4-hba0]
  member pwwn 50:06:0e:80:07:c3:da:60
!            [Storage1-7A]

zone name oracle-hds-srv4-hba2 vsan 101
  member pwwn 20:00:00:25:b5:10:a0:05
!            [Oracle-Srv4-hba2]
  member pwwn 50:06:0e:80:07:c3:da:70
!            [Storage2-8A]

zone name server1-boot-hba0 vsan 101
  member pwwn 50:06:0e:80:07:c3:da:00
!            [Storage1-1A]
  member pwwn 50:06:0e:80:07:c3:da:10
!            [Storage2-2A]
  member pwwn 20:00:00:25:b5:10:a0:0c
!            [Oracle-Srv1-hba0]

zone name server2-boot-hba0 vsan 101
  member pwwn 20:00:00:25:b5:10:a0:0a
!            [Oracle-Srv2-hba0]
  member pwwn 50:06:0e:80:07:c3:da:00
!            [Storage1-1A]
  member pwwn 50:06:0e:80:07:c3:da:10
!            [Storage2-2A]

zone name server3-boot-hba0 vsan 101
  member pwwn 50:06:0e:80:07:c3:da:40
!            [Storage1-5A]
  member pwwn 50:06:0e:80:07:c3:da:50
!            [Storage2-6A]


  member pwwn 20:00:00:25:b5:10:a0:06
!            [Oracle-Srv3-hba0]

zone name server4-boot-hba0 vsan 101
  member pwwn 20:00:00:25:b5:10:a0:14
!            [Oracle-Srv4-hba0]
  member pwwn 50:06:0e:80:07:c3:da:40
!            [Storage1-5A]
  member pwwn 50:06:0e:80:07:c3:da:50
!            [Storage2-6A]

zoneset name Oracle-HDS-A vsan 101
  member oracle-hds-srv1-hba0
  member oracle-hds-srv1-hba2
  member oracle-hds-srv2-hba0
  member oracle-hds-srv2-hba2
  member oracle-hds-srv3-hba0
  member oracle-hds-srv3-hba2
  member oracle-hds-srv4-hba0
  member oracle-hds-srv4-hba2
  member server1-boot-hba0
  member server2-boot-hba0
  member server3-boot-hba0
  member server4-boot-hba0

zoneset activate name Oracle-HDS-A vsan 101
!Full Zone Database Section for vsan 102
zone name oracle-hds-srv1-hba1 vsan 102

Cisco Nexus 5548 Fabric B Configuration

!Command: show running-config
!Time: Thu Nov 19 21:04:08 2009

version 7.0(2)N1(1)
feature fcoe
hostname Oracle-HDS-N5K-B
feature npiv
feature telnet
cfs eth distribute
feature interface-vlan
feature hsrp
feature lacp
feature vpc
feature lldp
username admin password 5 $1$W3eLJoVN$YSq0BYeEM42vWyMwdjguY. role network-admin
ip domain-lookup
policy-map type network-qos jumbo
  class type network-qos class-fcoe
    pause no-drop
    mtu 2158
  class type network-qos class-default
    mtu 9216
    multicast-optimize
system qos


  service-policy type queuing input fcoe-default-in-policy
  service-policy type queuing output fcoe-default-out-policy
  service-policy type qos input fcoe-default-in-policy
  service-policy type network-qos jumbo
slot 2
  port 1-16 type fc
snmp-server user admin network-admin auth md5 0x15de5eb8b495705ef3fea8c58fdbdbee priv 0x15de5eb8b495705ef3fea8c58fdbdbee localizedkey
rmon event 1 log trap public description FATAL(1) owner PMON@FATAL
rmon event 2 log trap public description CRITICAL(2) owner PMON@CRITICAL
rmon event 3 log trap public description ERROR(3) owner PMON@ERROR
rmon event 4 log trap public description WARNING(4) owner PMON@WARNING
rmon event 5 log trap public description INFORMATION(5) owner PMON@INFO
vlan 1
vlan 10
  name Public_Network
vlan 191
  name Private_Network
spanning-tree port type edge bpduguard default
spanning-tree port type network default
vrf context management
  ip route 0.0.0.0/0 173.36.215.1
vpc domain 1
  role priority 20
  peer-keepalive destination 173.36.215.61 source 173.36.215.62
  delay restore 150
  auto-recovery
vsan database
  vsan 101 name "Fabric_A"
  vsan 102 name "Fabric_B"
device-alias database
  device-alias name Oracle-Srv4-hba3 pwwn 20:00:00:25:b5:20:b0:09
device-alias commit
fcdomain fcid database
  vsan 102 wwn 20:22:00:2a:6a:6c:2c:80 fcid 0x210000 dynamic
  vsan 102 wwn 20:23:00:2a:6a:6c:2c:80 fcid 0x210020 dynamic
  vsan 102 wwn 20:24:00:2a:6a:6c:2c:80 fcid 0x210040 dynamic
  vsan 102 wwn 20:21:00:2a:6a:6c:2c:80 fcid 0x210060 dynamic
  vsan 102 wwn 50:06:0e:80:07:27:9a:20 fcid 0x210080 dynamic
  vsan 102 wwn 50:06:0e:80:07:27:9a:22 fcid 0x2100a0 dynamic
  vsan 102 wwn 50:06:0e:80:07:27:9a:30 fcid 0x2100c0 dynamic
  vsan 102 wwn 50:06:0e:80:07:27:9a:32 fcid 0x210100 dynamic
  vsan 102 wwn 50:06:0e:80:07:c3:da:30 fcid 0x2100c1 dynamic
  vsan 102 wwn 50:06:0e:80:07:c3:da:32 fcid 0x210101 dynamic
!            [Storage2-4C]
  vsan 102 wwn 50:06:0e:80:07:c3:da:20 fcid 0x210081 dynamic
  vsan 102 wwn 50:06:0e:80:07:c3:da:22 fcid 0x2100a1 dynamic
!            [Storage1-3C]
  vsan 102 wwn 20:00:00:25:b5:20:b0:0e fcid 0x210001 dynamic
!            [Oracle-Srv1-hba1]
  vsan 102 wwn 20:00:00:25:b5:20:b0:0a fcid 0x210061 dynamic
!            [Oracle-Srv3-hba1]
  vsan 102 wwn 20:00:00:25:b5:20:b0:0c fcid 0x210041 dynamic
!            [Oracle-Srv2-hba1]
  vsan 102 wwn 20:00:00:25:b5:20:b0:08 fcid 0x210021 dynamic
!            [Oracle-Srv4-hba1]
  vsan 102 wwn 20:00:00:25:b5:20:b0:0f fcid 0x210002 dynamic


!            [Oracle-Srv1-hba3]
  vsan 102 wwn 20:00:00:25:b5:20:b0:0d fcid 0x210042 dynamic
!            [Oracle-Srv2-hba3]
  vsan 102 wwn 20:00:00:25:b5:20:b0:0b fcid 0x210062 dynamic
!            [Oracle-Srv3-hba3]
  vsan 102 wwn 20:00:00:25:b5:20:b0:09 fcid 0x210022 dynamic
!            [Oracle-Srv4-hba3]
  vsan 102 wwn 50:06:0e:80:07:c3:da:60 fcid 0x210140 dynamic
  vsan 102 wwn 50:06:0e:80:07:c3:da:62 fcid 0x210160 dynamic
!            [Storage1-7C]
  vsan 102 wwn 50:06:0e:80:07:c3:da:70 fcid 0x210180 dynamic
  vsan 102 wwn 50:06:0e:80:07:c3:da:72 fcid 0x2101a0 dynamic
!            [Storage2-8C]
  vsan 102 wwn 50:06:0e:80:07:c3:da:02 fcid 0x2100a2 dynamic
!            [Storage1-1C]
  vsan 102 wwn 50:06:0e:80:07:c3:da:42 fcid 0x210141 dynamic
!            [Storage1-5C]
  vsan 102 wwn 50:06:0e:80:07:c3:da:12 fcid 0x210102 dynamic
!            [Storage2-2C]
  vsan 102 wwn 50:06:0e:80:07:c3:da:52 fcid 0x2101c0 dynamic
!            [Storage2-6C]

interface Vlan1
  no shutdown

interface Vlan10
  no shutdown
  ip address 10.36.215.3/24
  hsrp version 2
  hsrp 10
    preempt
    priority 110
    ip 10.36.215.1

interface Vlan191
  no shutdown
  ip address 191.168.1.3/24
  hsrp version 2
  hsrp 191
    preempt
    priority 110
    ip 191.168.1.1

interface port-channel1
  description vPC peer-link
  switchport mode trunk
  switchport trunk allowed vlan 1,10,191
  spanning-tree port type network
  vpc peer-link

interface port-channel3
  description Fabric_Interconnect_A
  switchport mode trunk
  switchport trunk allowed vlan 1,10,191
  spanning-tree port type edge trunk
  vpc 3

interface port-channel4


  description Fabric_Interconnect_B
  switchport mode trunk
  switchport trunk allowed vlan 1,10,191
  spanning-tree port type edge trunk
  vpc 4
vsan database
  vsan 102 interface fc2/1
  vsan 102 interface fc2/2
  vsan 102 interface fc2/3
  vsan 102 interface fc2/4
  vsan 102 interface fc2/5
  vsan 102 interface fc2/6
  vsan 102 interface fc2/7
  vsan 102 interface fc2/8
  vsan 102 interface fc2/9
  vsan 102 interface fc2/10
  vsan 102 interface fc2/11
  vsan 102 interface fc2/12
  vsan 102 interface fc2/13
  vsan 102 interface fc2/14
  vsan 102 interface fc2/15
  vsan 102 interface fc2/16

interface fc2/1
  no shutdown

interface fc2/2
  no shutdown

interface fc2/3
  no shutdown

interface fc2/4
  no shutdown

interface fc2/5
  no shutdown

interface fc2/6
  no shutdown

interface fc2/7
  no shutdown

interface fc2/8
  no shutdown

interface fc2/9
  no shutdown

interface fc2/10
  no shutdown

interface fc2/11
  no shutdown

interface fc2/12
  no shutdown


interface fc2/13
  no shutdown

interface fc2/14
  no shutdown

interface fc2/15
  no shutdown

interface fc2/16
  no shutdown

interface Ethernet1/1
  description Nexus5k-B-Cluster-Interconnect
  switchport mode trunk
  switchport trunk allowed vlan 1,10,191
  channel-group 1 mode active

interface Ethernet1/2
  description Nexus5k-B-Cluster-Interconnect
  switchport mode trunk
  switchport trunk allowed vlan 1,10,191
  channel-group 1 mode active

interface Ethernet1/3
  description Fabric_Interconnect_A:1/2
  switchport mode trunk
  switchport trunk allowed vlan 1,10,191
  channel-group 3 mode active

interface Ethernet1/4
  description Fabric_Interconnect_B:1/2
  switchport mode trunk
  switchport trunk allowed vlan 1,10,191
  channel-group 4 mode active
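Once the uplink and peer-link port channels are configured, their state can be checked from either Nexus switch with the standard NX-OS verification commands below. These commands are part of NX-OS; the hostname prompt is illustrative only:

```
Nexus5k# show port-channel summary
Nexus5k# show vpc brief
```

Both port channels should report their member Ethernet interfaces as up (P), and `show vpc brief` should show the peer-link and vPCs 3 and 4 in a consistent, up state.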

interface Ethernet1/5

interface Ethernet1/6

interface Ethernet1/7

interface Ethernet1/8

interface Ethernet1/9

interface Ethernet1/10

interface Ethernet1/11

interface Ethernet1/12

interface Ethernet1/13

interface Ethernet1/14

interface Ethernet1/15


interface Ethernet1/16

interface Ethernet1/17
  description FCoE_FI_A_18

interface Ethernet1/18
  description FCoE_FI_B_18

interface Ethernet1/19

interface Ethernet1/20

interface Ethernet1/21

interface Ethernet1/22

interface Ethernet1/23

interface Ethernet1/24

interface Ethernet1/25

interface Ethernet1/26

interface Ethernet1/27

interface Ethernet1/28

interface Ethernet1/29

interface Ethernet1/30

interface Ethernet1/31
  shutdown
  switchport mode trunk
  speed 1000

interface Ethernet1/32
  shutdown
  switchport mode trunk
  speed 1000

interface mgmt0
  vrf member management
  ip address 173.36.215.62/24
line console
line vty
boot kickstart bootflash:/n5000-uk9-kickstart.7.0.2.N1.1.bin
boot system bootflash:/n5000-uk9.7.0.2.N1.1.bin
interface fc2/1
interface fc2/2
interface fc2/3
interface fc2/4
interface fc2/5
interface fc2/6
interface fc2/7
interface fc2/8


interface fc2/9
interface fc2/10
interface fc2/11
interface fc2/12
interface fc2/13
interface fc2/14
interface fc2/15
interface fc2/16

!Full Zone Database Section for vsan 102
zone name oracle-hds-srv1-hba1 vsan 102
  member pwwn 20:00:00:25:b5:20:b0:0e
! [Oracle-Srv1-hba1]
  member pwwn 50:06:0e:80:07:c3:da:02
! [Storage1-1C]

zone name oracle-hds-srv1-hba3 vsan 102
  member pwwn 20:00:00:25:b5:20:b0:0f
! [Oracle-Srv1-hba3]
  member pwwn 50:06:0e:80:07:c3:da:12
! [Storage2-2C]

zone name oracle-hds-srv2-hba1 vsan 102
  member pwwn 20:00:00:25:b5:20:b0:0c
! [Oracle-Srv2-hba1]
  member pwwn 50:06:0e:80:07:c3:da:22
! [Storage1-3C]

zone name oracle-hds-srv2-hba3 vsan 102
  member pwwn 20:00:00:25:b5:20:b0:0d
! [Oracle-Srv2-hba3]
  member pwwn 50:06:0e:80:07:c3:da:32
! [Storage2-4C]

zone name oracle-hds-srv3-hba1 vsan 102
  member pwwn 20:00:00:25:b5:20:b0:0a
! [Oracle-Srv3-hba1]
  member pwwn 50:06:0e:80:07:c3:da:42
! [Storage1-5C]

zone name oracle-hds-srv3-hba3 vsan 102
  member pwwn 20:00:00:25:b5:20:b0:0b
! [Oracle-Srv3-hba3]
  member pwwn 50:06:0e:80:07:c3:da:52
! [Storage2-6C]

zone name oracle-hds-srv4-hba1 vsan 102
  member pwwn 20:00:00:25:b5:20:b0:08
! [Oracle-Srv4-hba1]
  member pwwn 50:06:0e:80:07:c3:da:62
! [Storage1-7C]

zone name oracle-hds-srv4-hba3 vsan 102
  member pwwn 20:00:00:25:b5:20:b0:09
! [Oracle-Srv4-hba3]
  member pwwn 50:06:0e:80:07:c3:da:72
! [Storage2-8C]

zone name server1-boot-hba1 vsan 102


  member pwwn 50:06:0e:80:07:c3:da:22
! [Storage1-3C]
  member pwwn 50:06:0e:80:07:c3:da:32
! [Storage2-4C]
  member pwwn 20:00:00:25:b5:20:b0:0e
! [Oracle-Srv1-hba1]

zone name server2-boot-hba1 vsan 102
  member pwwn 20:00:00:25:b5:20:b0:0c
! [Oracle-Srv2-hba1]
  member pwwn 50:06:0e:80:07:c3:da:22
! [Storage1-3C]
  member pwwn 50:06:0e:80:07:c3:da:32
! [Storage2-4C]

zone name server3-boot-hba1 vsan 102
  member pwwn 20:00:00:25:b5:20:b0:0a
! [Oracle-Srv3-hba1]
  member pwwn 50:06:0e:80:07:c3:da:62
! [Storage1-7C]
  member pwwn 50:06:0e:80:07:c3:da:72
! [Storage2-8C]

zone name server4-boot-hba1 vsan 102
  member pwwn 20:00:00:25:b5:20:b0:08
! [Oracle-Srv4-hba1]
  member pwwn 50:06:0e:80:07:c3:da:62
! [Storage1-7C]
  member pwwn 50:06:0e:80:07:c3:da:72
! [Storage2-8C]
  member pwwn 20:00:00:25:b5:20:b0:09
! [Oracle-Srv4-hba3]

zoneset name Oracle-HDS-B vsan 102
  member oracle-hds-srv1-hba1
  member oracle-hds-srv1-hba3
  member oracle-hds-srv2-hba1
  member oracle-hds-srv2-hba3
  member oracle-hds-srv3-hba1
  member oracle-hds-srv3-hba3
  member oracle-hds-srv4-hba1
  member oracle-hds-srv4-hba3
  member server1-boot-hba1
  member server2-boot-hba1
  member server3-boot-hba1
  member server4-boot-hba1

zoneset activate name Oracle-HDS-B vsan 102
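After the zoneset is activated, the active zone configuration and the fabric logins of the server vHBAs and VSP G1000 storage ports can be confirmed with the standard NX-OS show commands for VSAN 102:

```
show zoneset active vsan 102
show zone status vsan 102
show flogi database vsan 102
```

Each zone in the active zoneset should show its member pWWNs with an asterisk (logged in), and the FLOGI database should list every initiator and target WWPN from the fcdomain output earlier in this appendix.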

Appendix B: Cisco UCS Service Profiles

Oracle-HDS-FI-A# show fabric-interconnect

Fabric Interconnect:


ID  OOB IP Addr     OOB Gateway     OOB Netmask     OOB IPv6 Address OOB IPv6 Gateway Prefix Operability
--- --------------- --------------- --------------- ---------------- ---------------- ------ -----------
A   173.36.215.58   173.36.215.1    255.255.255.0   ::               ::               64     Operable
B   173.36.215.59   173.36.215.1    255.255.255.0   ::               ::               64     Operable

Oracle-HDS-FI-A# show fabric version
Fabric Interconnect A:
    Running-Kern-Vers: 5.2(3)N2(2.22c)
    Running-Sys-Vers: 5.2(3)N2(2.22c)
    Package-Vers: 2.2(2c)A
    Startup-Kern-Vers: 5.2(3)N2(2.22c)
    Startup-Sys-Vers: 5.2(3)N2(2.22c)
    Act-Kern-Status: Ready
    Act-Sys-Status: Ready
    Bootloader-Vers: v3.6.0(05/09/2012)

Fabric Interconnect B:
    Running-Kern-Vers: 5.2(3)N2(2.22c)
    Running-Sys-Vers: 5.2(3)N2(2.22c)
    Package-Vers: 2.2(2c)A
    Startup-Kern-Vers: 5.2(3)N2(2.22c)
    Startup-Sys-Vers: 5.2(3)N2(2.22c)
    Act-Kern-Status: Ready
    Act-Sys-Status: Ready
    Bootloader-Vers: v3.6.0(05/09/2012)

Oracle-HDS-FI-A# show server inventory
Server  Equipped PID  Equipped VID  Equipped Serial (SN)  Slot Status  Ackd Memory (MB)  Ackd Cores
------- ------------  ------------  --------------------  -----------  ----------------  ----------
1/1     UCSB-B200-M3  V01           FCH16507NMC           Equipped     262144            24
1/2     UCSB-B200-M3  V01           FCH16507P21           Equipped     262144            24
1/3     UCSB-B200-M3  V01           FCH16217L51           Equipped     131072            24
1/4     UCSB-B200-M3  V01           FCH16287KJ0           Equipped     131072            24
1/5                                                       Empty
1/6                                                       Empty
1/7                                                       Empty
1/8                                                       Empty
2/1     UCSB-B200-M3  V01           FCH1734J49Y           Equipped     262144            24
2/2     UCSB-B200-M3  V01           FCH1734J46P           Equipped     262144            24
2/3     UCSB-B200-M3  V01           FCH16207L12           Equipped     131072            24
2/4                                                       Empty
2/5                                                       Empty
2/6                                                       Empty
2/7                                                       Empty
2/8                                                       Empty


Oracle-HDS-FI-A# show service-profile inventory
Service Profile Name  Type               Server  Assignment  Association
--------------------  -----------------  ------  ----------  ------------
Oracle-HDS-Fabric-A   Updating Template          Unassigned  Unassociated
Oracle-HDS-Fabric-B   Updating Template          Unassigned  Unassociated
Oracle-HDS-SP-A1      Instance           1/1     Assigned    Associated
Oracle-HDS-SP-A2      Instance           2/1     Assigned    Associated
Oracle-HDS-SP-B1      Instance           1/2     Assigned    Associated
Oracle-HDS-SP-B2      Instance           2/2     Assigned    Associated
Oracle_Client1        Instance           1/3     Assigned    Associated
Oracle_Client2        Instance           2/3     Assigned    Associated

Oracle-HDS-FI-A# show service-profile inventory expand
Service Profile Name: Oracle-HDS-Fabric-A
Type: Updating Template
Server:
Description:
Assignment: Unassigned
Association: Unassociated

Service Profile Name: Oracle-HDS-Fabric-B
Type: Updating Template
Server:
Description:
Assignment: Unassigned
Association: Unassociated

Service Profile Name: Oracle-HDS-SP-A1
Type: Instance
Server: 1/1
Description:
Assignment: Assigned
Association: Associated
    Server 1/1:
        Name:
        Acknowledged Serial (SN): FCH16507NMC
        Acknowledged Product Name: Cisco UCS B200 M3
        Acknowledged PID: UCSB-B200-M3
        Acknowledged VID: V03
        Acknowledged Memory (MB): 262144
        Acknowledged Effective Memory (MB): 262144
        Acknowledged Cores: 24
        Acknowledged Adapters: 1

        Bios:
            Model: UCSB-B200-M3
            Revision: 0
            Serial:
            Vendor: Cisco Systems, Inc.

        Motherboard:
            Product Name: Cisco UCS B200 M3
            PID: UCSB-B200-M3
            VID: V01
            Vendor: Cisco Systems Inc
            Serial (SN): FCH16507NMC


HW Revision: 0

            Array 1:
                DIMM  Location  Presence  Overall Status  Type    Capacity (MB)  Clock
                ----  --------  --------  --------------  ------  -------------  -------
                1     A0        Equipped  Operable        DDR3    16384          1866
                2     A1        Equipped  Operable        DDR3    16384          1866
                3     A2        Missing   Removed         Undisc  Unknown        Unknown
                4     B0        Equipped  Operable        DDR3    16384          1866
                5     B1        Equipped  Operable        DDR3    16384          1866
                6     B2        Missing   Removed         Undisc  Unknown        Unknown
                7     C0        Equipped  Operable        DDR3    16384          1866
                8     C1        Equipped  Operable        DDR3    16384          1866
                9     C2        Missing   Removed         Undisc  Unknown        Unknown
                10    D0        Equipped  Operable        DDR3    16384          1866
                11    D1        Equipped  Operable        DDR3    16384          1866
                12    D2        Missing   Removed         Undisc  Unknown        Unknown
                13    E0        Equipped  Operable        DDR3    16384          1866
                14    E1        Equipped  Operable        DDR3    16384          1866
                15    E2        Missing   Removed         Undisc  Unknown        Unknown
                16    F0        Equipped  Operable        DDR3    16384          1866
                17    F1        Equipped  Operable        DDR3    16384          1866
                18    F2        Missing   Removed         Undisc  Unknown        Unknown
                19    G0        Equipped  Operable        DDR3    16384          1866
                20    G1        Equipped  Operable        DDR3    16384          1866
                21    G2        Missing   Removed         Undisc  Unknown        Unknown
                22    H0        Equipped  Operable        DDR3    16384          1866
                23    H1        Equipped  Operable        DDR3    16384          1866
                24    H2        Missing   Removed         Undisc  Unknown        Unknown

        CPUs:
            ID: 1


            Presence: Equipped
            Architecture: Xeon
            Socket: CPU1
            Cores: 12
            Speed (GHz): 2.700000
            Stepping: 4
            Product Name: Intel(R) Xeon(R) E5-2697 v2
            PID: UCS-CPU-E52697B
            VID: V01
            Vendor: Intel(R) Corporation
            HW Revision: 0

            ID: 2
            Presence: Equipped
            Architecture: Xeon
            Socket: CPU2
            Cores: 12
            Speed (GHz): 2.700000
            Stepping: 4
            Product Name: Intel(R) Xeon(R) E5-2697 v2
            PID: UCS-CPU-E52697B
            VID: V01
            Vendor: Intel(R) Corporation
            HW Revision: 0

        RAID Controller 1:
            Type: SAS
            Vendor: LSI Logic Symbios Logic
            Model: LSI MegaRAID SAS 2004 ROMB
            Serial: LSIROMB-0
            HW Revision: B2
            PCI Addr: 01:00.0
            Raid Support: RAID0, RAID1
            OOB Interface Supported: Yes
            Rebuild Rate: 30
            Controller Status: Optimal

            Local Disk 1:
                Product Name:
                PID:
                VID:
                Vendor:
                Model:
                Vendor Description:
                Serial:
                HW Rev: 0
                Block Size: Unknown
                Blocks: Unknown
                Operability: N/A
                Oper Qualifier Reason: N/A
                Presence: Missing
                Size (MB): Unknown
                Drive State: Unknown
                Power State: Unknown
                Link Speed: Unknown
                Device Type: Unspecified

            Local Disk 2:
                Product Name:


                PID:
                VID:
                Vendor:
                Model:
                Vendor Description:
                Serial:
                HW Rev: 0
                Block Size: Unknown
                Blocks: Unknown
                Operability: N/A
                Oper Qualifier Reason: N/A
                Presence: Missing
                Size (MB): Unknown
                Drive State: Unknown
                Power State: Unknown
                Link Speed: Unknown
                Device Type: Unspecified

        Local Disk Config Definition:
            Mode: Any Configuration
            Description:
            Protect Configuration: Yes

Adapter:

Adapter  PID               Vendor             Serial       Overall Status
-------  ----------------  -----------------  -----------  --------------
      1  UCSB-MLOM-40G-01  Cisco Systems Inc  FCH16507D06  Operable

Appendix C: Verify Oracle RAC Cluster Status Command Output

[root@oracle-hds-srv1 ~]# /u01/app/11.2.4/grid/bin/crsctl check cluster -all
**************************************************************
oracle-hds-srv1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
oracle-hds-srv2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
oracle-hds-srv3:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
oracle-hds-srv4:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
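Note that crs_stat, used for the resource listing in this appendix, is deprecated in Oracle Clusterware 11gR2; the same per-resource TARGET/STATE information can be displayed in a more compact tabular form with crsctl, run from the same Grid home:

```
/u01/app/11.2.4/grid/bin/crsctl stat res -t
```

The output groups each resource with its target and current state per node, matching the NAME/TYPE/TARGET/STATE entries shown below.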


**************************************************************
[root@oracle-hds-srv1 ~]# /u01/app/11.2.4/grid/bin/crs_stat
NAME=ora.BACKUPDG.dg
TYPE=ora.diskgroup.type
TARGET=ONLINE
STATE=ONLINE on oracle-hds-srv1

NAME=ora.DATADG.dg
TYPE=ora.diskgroup.type
TARGET=ONLINE
STATE=OFFLINE

NAME=ora.DSSDG.dg
TYPE=ora.diskgroup.type
TARGET=ONLINE
STATE=ONLINE on oracle-hds-srv1

NAME=ora.LISTENER.lsnr
TYPE=ora.listener.type
TARGET=ONLINE
STATE=ONLINE on oracle-hds-srv1

NAME=ora.LISTENER_SCAN1.lsnr
TYPE=ora.scan_listener.type
TARGET=ONLINE
STATE=ONLINE on oracle-hds-srv2

NAME=ora.LISTENER_SCAN2.lsnr
TYPE=ora.scan_listener.type
TARGET=ONLINE
STATE=ONLINE on oracle-hds-srv4

NAME=ora.LISTENER_SCAN3.lsnr
TYPE=ora.scan_listener.type
TARGET=ONLINE
STATE=ONLINE on oracle-hds-srv3

NAME=ora.OCRVOTE.dg
TYPE=ora.diskgroup.type
TARGET=ONLINE
STATE=ONLINE on oracle-hds-srv1

NAME=ora.OLTPDG.dg
TYPE=ora.diskgroup.type
TARGET=ONLINE
STATE=ONLINE on oracle-hds-srv1

NAME=ora.REDODG.dg
TYPE=ora.diskgroup.type
TARGET=ONLINE
STATE=ONLINE on oracle-hds-srv1

NAME=ora.REDOGROUP.dg
TYPE=ora.diskgroup.type
TARGET=ONLINE
STATE=ONLINE on oracle-hds-srv1

NAME=ora.asm


TYPE=ora.asm.type
TARGET=ONLINE
STATE=ONLINE on oracle-hds-srv1

NAME=ora.cvu
TYPE=ora.cvu.type
TARGET=ONLINE
STATE=ONLINE on oracle-hds-srv3

NAME=ora.dss16db.db
TYPE=ora.database.type
TARGET=ONLINE
STATE=OFFLINE

NAME=ora.dssdb.db
TYPE=ora.database.type
TARGET=ONLINE
STATE=ONLINE on oracle-hds-srv1

NAME=ora.gsd
TYPE=ora.gsd.type
TARGET=OFFLINE
STATE=OFFLINE

NAME=ora.net1.network
TYPE=ora.network.type
TARGET=ONLINE
STATE=ONLINE on oracle-hds-srv1

NAME=ora.oc4j
TYPE=ora.oc4j.type
TARGET=ONLINE
STATE=ONLINE on oracle-hds-srv3

NAME=ora.oltpdb.db
TYPE=ora.database.type
TARGET=ONLINE
STATE=ONLINE on oracle-hds-srv1

NAME=ora.ons
TYPE=ora.ons.type
TARGET=ONLINE
STATE=ONLINE on oracle-hds-srv1

NAME=ora.oracle-hds-srv1.ASM1.asm
TYPE=application
TARGET=ONLINE
STATE=ONLINE on oracle-hds-srv1

NAME=ora.oracle-hds-srv1.LISTENER_ORACLE-HDS-SRV1.lsnr
TYPE=application
TARGET=ONLINE
STATE=ONLINE on oracle-hds-srv1

NAME=ora.oracle-hds-srv1.gsd
TYPE=application
TARGET=OFFLINE
STATE=OFFLINE


NAME=ora.oracle-hds-srv1.ons
TYPE=application
TARGET=ONLINE
STATE=ONLINE on oracle-hds-srv1

NAME=ora.oracle-hds-srv1.vip
TYPE=ora.cluster_vip_net1.type
TARGET=ONLINE
STATE=ONLINE on oracle-hds-srv1

NAME=ora.oracle-hds-srv2.ASM2.asm
TYPE=application
TARGET=ONLINE
STATE=ONLINE on oracle-hds-srv2

NAME=ora.oracle-hds-srv2.LISTENER_ORACLE-HDS-SRV2.lsnr
TYPE=application
TARGET=ONLINE
STATE=ONLINE on oracle-hds-srv2

NAME=ora.oracle-hds-srv2.gsd
TYPE=application
TARGET=OFFLINE
STATE=OFFLINE

NAME=ora.oracle-hds-srv2.ons
TYPE=application
TARGET=ONLINE
STATE=ONLINE on oracle-hds-srv2

NAME=ora.oracle-hds-srv2.vip
TYPE=ora.cluster_vip_net1.type
TARGET=ONLINE
STATE=ONLINE on oracle-hds-srv2

NAME=ora.oracle-hds-srv3.ASM3.asm
TYPE=application
TARGET=ONLINE
STATE=ONLINE on oracle-hds-srv3

NAME=ora.oracle-hds-srv3.LISTENER_ORACLE-HDS-SRV3.lsnr
TYPE=application
TARGET=ONLINE
STATE=ONLINE on oracle-hds-srv3

NAME=ora.oracle-hds-srv3.gsd
TYPE=application
TARGET=OFFLINE
STATE=OFFLINE

NAME=ora.oracle-hds-srv3.ons
TYPE=application
TARGET=ONLINE
STATE=ONLINE on oracle-hds-srv3

NAME=ora.oracle-hds-srv3.vip
TYPE=ora.cluster_vip_net1.type


TARGET=ONLINE
STATE=ONLINE on oracle-hds-srv3

NAME=ora.oracle-hds-srv4.ASM4.asm
TYPE=application
TARGET=ONLINE
STATE=ONLINE on oracle-hds-srv4

NAME=ora.oracle-hds-srv4.LISTENER_ORACLE-HDS-SRV4.lsnr
TYPE=application
TARGET=ONLINE
STATE=ONLINE on oracle-hds-srv4

NAME=ora.oracle-hds-srv4.gsd
TYPE=application
TARGET=OFFLINE
STATE=OFFLINE

NAME=ora.oracle-hds-srv4.ons
TYPE=application
TARGET=ONLINE
STATE=ONLINE on oracle-hds-srv4

NAME=ora.oracle-hds-srv4.vip
TYPE=ora.cluster_vip_net1.type
TARGET=ONLINE
STATE=ONLINE on oracle-hds-srv4

NAME=ora.registry.acfs
TYPE=ora.registry.acfs.type
TARGET=ONLINE
STATE=ONLINE on oracle-hds-srv1

NAME=ora.scan1.vip
TYPE=ora.scan_vip.type
TARGET=ONLINE
STATE=ONLINE on oracle-hds-srv2

NAME=ora.scan2.vip
TYPE=ora.scan_vip.type
TARGET=ONLINE
STATE=ONLINE on oracle-hds-srv4

NAME=ora.scan3.vip
TYPE=ora.scan_vip.type
TARGET=ONLINE
STATE=ONLINE on oracle-hds-srv3

[root@oracle-hds-srv1 ~]# /u01/app/11.2.4/grid/bin/ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       3580
         Available space (kbytes) :     258540
         ID                       :  290914339
         Device/File Name         :   +OCRVOTE
                                    Device/File integrity check succeeded


Device/File not configured

Device/File not configured

Device/File not configured

Device/File not configured

Cluster registry integrity check succeeded

Logical corruption check succeeded

[root@oracle-hds-srv1 ~]# /u01/app/11.2.4/grid/bin/crsctl query css votedisk
##  STATE    File Universal Id                File Name                       Disk group
--  -----    -----------------                ---------                       ----------
 1. ONLINE   b00a292a5f614f76bfb48ebfb99e5e37 (/dev/oracleasm/disks/ASMDISK1) [OCRVOTE]
 2. ONLINE   01c6249813e34fe0bfe2e1695b67f99d (/dev/oracleasm/disks/ASMDISK2) [OCRVOTE]
 3. ONLINE   3788279ac1204f06bf09f3e98c5feba0 (/dev/oracleasm/disks/ASMDISK3) [OCRVOTE]
Located 3 voting disk(s).
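As a further health check on the cluster registry verified above, Oracle Clusterware periodically backs up the OCR on one of the nodes; the backup location and timestamps can be listed with ocrconfig, run as root from the same Grid home:

```
/u01/app/11.2.4/grid/bin/ocrconfig -showbackup
```

The output lists the node holding the automatic backups and the paths of the most recent backup copies, which is worth confirming before any maintenance on the OCRVOTE disk group.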

[grid@oracle-hds-srv1 ~]$ cluvfy comp sys -n oracle-hds-srv1,oracle-hds-srv2,oracle-hds-srv3,oracle-hds-srv4 -p crs -verbose

Verifying system requirement

Check: Total memory
  Node Name        Available                    Required             Status
  ---------------  ---------------------------  -------------------  ------
  oracle-hds-srv2  252.4143GB (2.64675576E8KB)  1.5GB (1572864.0KB)  passed
  oracle-hds-srv1  252.4143GB (2.64675576E8KB)  1.5GB (1572864.0KB)  passed
  oracle-hds-srv4  252.4143GB (2.64675576E8KB)  1.5GB (1572864.0KB)  passed
  oracle-hds-srv3  252.4143GB (2.64675576E8KB)  1.5GB (1572864.0KB)  passed
Result: Total memory check passed

Check: Available memory
  Node Name        Available                    Required          Status
  ---------------  ---------------------------  ----------------  ------
  oracle-hds-srv2  105.803GB (1.10942452E8KB)   50MB (51200.0KB)  passed
  oracle-hds-srv1  105.2837GB (1.10397972E8KB)  50MB (51200.0KB)  passed
  oracle-hds-srv4  106.1908GB (1.11349132E8KB)  50MB (51200.0KB)  passed
  oracle-hds-srv3  105.8552GB (1.10997228E8KB)  50MB (51200.0KB)  passed
Result: Available memory check passed

Check: Swap space


  Node Name        Available                  Required              Status
  ---------------  -------------------------  --------------------  ------
  oracle-hds-srv2  79.1016GB (8.2943996E7KB)  16GB (1.6777216E7KB)  passed
  oracle-hds-srv1  79.1016GB (8.2943996E7KB)  16GB (1.6777216E7KB)  passed
  oracle-hds-srv4  79.1016GB (8.2943996E7KB)  16GB (1.6777216E7KB)  passed
  oracle-hds-srv3  79.1016GB (8.2943996E7KB)  16GB (1.6777216E7KB)  passed
Result: Swap space check passed

Check: Free disk space for "oracle-hds-srv2:/u01/app/11.2.4/grid"
  Path                  Node Name        Mount point  Available  Required  Status
  --------------------  ---------------  -----------  ---------  --------  ------
  /u01/app/11.2.4/grid  oracle-hds-srv2  /            97.2861GB  5.5GB     passed
Result: Free disk space check passed for "oracle-hds-srv2:/u01/app/11.2.4/grid"

Check: Free disk space for "oracle-hds-srv1:/u01/app/11.2.4/grid"
  Path                  Node Name        Mount point  Available  Required  Status
  --------------------  ---------------  -----------  ---------  --------  ------
  /u01/app/11.2.4/grid  oracle-hds-srv1  /            35.3184GB  5.5GB     passed
Result: Free disk space check passed for "oracle-hds-srv1:/u01/app/11.2.4/grid"

Check: Free disk space for "oracle-hds-srv4:/u01/app/11.2.4/grid"
  Path                  Node Name        Mount point  Available  Required  Status
  --------------------  ---------------  -----------  ---------  --------  ------
  /u01/app/11.2.4/grid  oracle-hds-srv4  /            97.9219GB  5.5GB     passed
Result: Free disk space check passed for "oracle-hds-srv4:/u01/app/11.2.4/grid"

Check: Free disk space for "oracle-hds-srv3:/u01/app/11.2.4/grid"
  Path                  Node Name        Mount point  Available  Required  Status
  --------------------  ---------------  -----------  ---------  --------  ------
  /u01/app/11.2.4/grid  oracle-hds-srv3  /            97.3613GB  5.5GB     passed
Result: Free disk space check passed for "oracle-hds-srv3:/u01/app/11.2.4/grid"

Check: Free disk space for "oracle-hds-srv2:/tmp"
  Path  Node Name        Mount point  Available  Required  Status
  ----  ---------------  -----------  ---------  --------  ------


  /tmp  oracle-hds-srv2  /tmp         97.2861GB  1GB       passed
Result: Free disk space check passed for "oracle-hds-srv2:/tmp"

Check: Free disk space for "oracle-hds-srv1:/tmp"
  Path  Node Name        Mount point  Available  Required  Status
  ----  ---------------  -----------  ---------  --------  ------
  /tmp  oracle-hds-srv1  /tmp         35.3184GB  1GB       passed
Result: Free disk space check passed for "oracle-hds-srv1:/tmp"

Check: Free disk space for "oracle-hds-srv4:/tmp"
  Path  Node Name        Mount point  Available  Required  Status
  ----  ---------------  -----------  ---------  --------  ------
  /tmp  oracle-hds-srv4  /tmp         97.9219GB  1GB       passed
Result: Free disk space check passed for "oracle-hds-srv4:/tmp"

Check: Free disk space for "oracle-hds-srv3:/tmp"
  Path  Node Name        Mount point  Available  Required  Status
  ----  ---------------  -----------  ---------  --------  ------
  /tmp  oracle-hds-srv3  /tmp         97.3613GB  1GB       passed
Result: Free disk space check passed for "oracle-hds-srv3:/tmp"

Check: User existence for "grid"
  Node Name        Status  Comment
  ---------------  ------  ------------
  oracle-hds-srv2  passed  exists(5002)
  oracle-hds-srv1  passed  exists(5002)
  oracle-hds-srv4  passed  exists(5002)
  oracle-hds-srv3  passed  exists(5002)

Checking for multiple users with UID value 5002
Result: Check for multiple users with UID value 5002 passed
Result: User existence check passed for "grid"

Check: Group existence for "oinstall"
  Node Name        Status  Comment
  ---------------  ------  -------
  oracle-hds-srv2  passed  exists
  oracle-hds-srv1  passed  exists
  oracle-hds-srv4  passed  exists
  oracle-hds-srv3  passed  exists
Result: Group existence check passed for "oinstall"

Check: Group existence for "dba"
  Node Name        Status  Comment
  ---------------  ------  -------
  oracle-hds-srv2  passed  exists
  oracle-hds-srv1  passed  exists
  oracle-hds-srv4  passed  exists


  oracle-hds-srv3  passed  exists
Result: Group existence check passed for "dba"

Check: Membership of user "grid" in group "oinstall" [as Primary]
  Node Name        User Exists  Group Exists  User in Group  Primary  Status
  ---------------  -----------  ------------  -------------  -------  ------
  oracle-hds-srv2  yes          yes           yes            yes      passed
  oracle-hds-srv1  yes          yes           yes            yes      passed
  oracle-hds-srv4  yes          yes           yes            yes      passed
  oracle-hds-srv3  yes          yes           yes            yes      passed
Result: Membership check for user "grid" in group "oinstall" [as Primary] passed

Check: Membership of user "grid" in group "dba"
  Node Name        User Exists  Group Exists  User in Group  Status
  ---------------  -----------  ------------  -------------  ------
  oracle-hds-srv2  yes          yes           yes            passed
  oracle-hds-srv1  yes          yes           yes            passed
  oracle-hds-srv4  yes          yes           yes            passed
  oracle-hds-srv3  yes          yes           yes            passed
Result: Membership check for user "grid" in group "dba" passed

Check: Run level
  Node Name        run level  Required  Status
  ---------------  ---------  --------  ------
  oracle-hds-srv2  5          3,5       passed
  oracle-hds-srv1  5          3,5       passed
  oracle-hds-srv4  5          3,5       passed
  oracle-hds-srv3  5          3,5       passed
Result: Run level check passed

Check: Hard limits for "maximum open file descriptors"
  Node Name        Type  Available  Required  Status
  ---------------  ----  ---------  --------  ------
  oracle-hds-srv2  hard  262144     65536     passed
  oracle-hds-srv1  hard  262144     65536     passed
  oracle-hds-srv4  hard  262144     65536     passed
  oracle-hds-srv3  hard  262144     65536     passed
Result: Hard limits check passed for "maximum open file descriptors"

Check: Soft limits for "maximum open file descriptors"
  Node Name        Type  Available  Required  Status
  ---------------  ----  ---------  --------  ------
  oracle-hds-srv2  soft  262144     1024      passed


  oracle-hds-srv1  soft  262144     1024      passed
  oracle-hds-srv4  soft  262144     1024      passed
  oracle-hds-srv3  soft  262144     1024      passed
Result: Soft limits check passed for "maximum open file descriptors"

Check: Hard limits for "maximum user processes"
  Node Name        Type  Available  Required  Status
  ---------------  ----  ---------  --------  ------
  oracle-hds-srv2  hard  65535      16384     passed
  oracle-hds-srv1  hard  65535      16384     passed
  oracle-hds-srv4  hard  65535      16384     passed
  oracle-hds-srv3  hard  65535      16384     passed
Result: Hard limits check passed for "maximum user processes"

Check: Soft limits for "maximum user processes"
  Node Name        Type  Available  Required  Status
  ---------------  ----  ---------  --------  ------
  oracle-hds-srv2  soft  65535      2047      passed
  oracle-hds-srv1  soft  65535      2047      passed
  oracle-hds-srv4  soft  65535      2047      passed
  oracle-hds-srv3  soft  65535      2047      passed
Result: Soft limits check passed for "maximum user processes"

Check: System architecture
  Node Name        Available  Required  Status
  ---------------  ---------  --------  ------
  oracle-hds-srv2  x86_64     x86_64    passed
  oracle-hds-srv1  x86_64     x86_64    passed
  oracle-hds-srv4  x86_64     x86_64    passed
  oracle-hds-srv3  x86_64     x86_64    passed
Result: System architecture check passed

Check: Kernel version
  Node Name        Available                      Required  Status
  ---------------  -----------------------------  --------  ------
  oracle-hds-srv2  2.6.39-400.17.1.el6uek.x86_64  2.6.32    passed
  oracle-hds-srv1  2.6.39-400.17.1.el6uek.x86_64  2.6.32    passed
  oracle-hds-srv4  2.6.39-400.17.1.el6uek.x86_64  2.6.32    passed
  oracle-hds-srv3  2.6.39-400.17.1.el6uek.x86_64  2.6.32    passed
Result: Kernel version check passed

Check: Kernel parameter for "semmsl"
  Node Name        Current  Configured  Required  Status  Comment
  ---------------  -------  ----------  --------  ------  -------


  oracle-hds-srv2  8192     8192        250       passed
  oracle-hds-srv1  8192     8192        250       passed
  oracle-hds-srv4  8192     8192        250       passed
  oracle-hds-srv3  8192     8192        250       passed
Result: Kernel parameter check passed for "semmsl"

Check: Kernel parameter for "semmns"
  Node Name        Current  Configured  Required  Status  Comment
  ---------------  -------  ----------  --------  ------  -------
  oracle-hds-srv2  48000    48000       32000     passed
  oracle-hds-srv1  48000    48000       32000     passed
  oracle-hds-srv4  48000    48000       32000     passed
  oracle-hds-srv3  48000    48000       32000     passed
Result: Kernel parameter check passed for "semmns"

Check: Kernel parameter for "semopm"
  Node Name        Current  Configured  Required  Status  Comment
  ---------------  -------  ----------  --------  ------  -------
  oracle-hds-srv2  8192     8192        100       passed
  oracle-hds-srv1  8192     8192        100       passed
  oracle-hds-srv4  8192     8192        100       passed
  oracle-hds-srv3  8192     8192        100       passed
Result: Kernel parameter check passed for "semopm"

Check: Kernel parameter for "semmni"
  Node Name        Current  Configured  Required  Status  Comment
  ---------------  -------  ----------  --------  ------  -------
  oracle-hds-srv2  8192     8192        128       passed
  oracle-hds-srv1  8192     8192        128       passed
  oracle-hds-srv4  8192     8192        128       passed
  oracle-hds-srv3  8192     8192        128       passed
Result: Kernel parameter check passed for "semmni"

Check: Kernel parameter for "shmmax"
  Node Name        Current        Configured     Required    Status  Comment
  ---------------  -------------  -------------  ----------  ------  -------
  oracle-hds-srv2  4398046511104  4398046511104  4294967295  passed
  oracle-hds-srv1  4398046511104  4398046511104  4294967295  passed
  oracle-hds-srv4  4398046511104  4398046511104  4294967295  passed
  oracle-hds-srv3  4398046511104  4398046511104  4294967295  passed
Result: Kernel parameter check passed for "shmmax"

Check: Kernel parameter for "shmmni"
  Node Name        Current  Configured  Required  Status  Comment
  ---------------  -------  ----------  --------  ------  -------
  oracle-hds-srv2  4096     4096        4096      passed
  oracle-hds-srv1  4096     4096        4096      passed
  oracle-hds-srv4  4096     4096        4096      passed


oracle-hds-srv3 4096 4096 4096 passedResult: Kernel parameter check passed for "shmmni"

Check: Kernel parameter for "shmall" Node Name Current Configured Required Status Comment ---------------- ------------ ------------ ------------ ------------ ------------ oracle-hds-srv2 1073741824 1073741824 2097152 passed oracle-hds-srv1 1073741824 1073741824 2097152 passed oracle-hds-srv4 1073741824 1073741824 2097152 passed oracle-hds-srv3 1073741824 1073741824 2097152 passedResult: Kernel parameter check passed for "shmall"

Check: Kernel parameter for "file-max" Node Name Current Configured Required Status Comment ---------------- ------------ ------------ ------------ ------------ ------------ oracle-hds-srv2 6815744 6815744 6815744 passed oracle-hds-srv1 6815744 6815744 6815744 passed oracle-hds-srv4 6815744 6815744 6815744 passed oracle-hds-srv3 6815744 6815744 6815744 passedResult: Kernel parameter check passed for "file-max"

Check: Kernel parameter for "ip_local_port_range"
  Node Name        Current               Configured            Required              Status
  ---------------  --------------------  --------------------  --------------------  ------
  oracle-hds-srv2  between 9000 & 65500  between 9000 & 65500  between 9000 & 65500  passed
  oracle-hds-srv1  between 9000 & 65500  between 9000 & 65500  between 9000 & 65500  passed
  oracle-hds-srv4  between 9000 & 65500  between 9000 & 65500  between 9000 & 65500  passed
  oracle-hds-srv3  between 9000 & 65500  between 9000 & 65500  between 9000 & 65500  passed
Result: Kernel parameter check passed for "ip_local_port_range"

Check: Kernel parameter for "rmem_default"
  Node Name        Current        Configured     Required     Status  Comment
  ---------------  -------------  -------------  -----------  ------  -------
  oracle-hds-srv2  4194304        4194304        262144       passed
  oracle-hds-srv1  4194304        4194304        262144       passed
  oracle-hds-srv4  4194304        4194304        262144       passed
  oracle-hds-srv3  4194304        4194304        262144       passed
Result: Kernel parameter check passed for "rmem_default"

Check: Kernel parameter for "rmem_max"
  Node Name        Current        Configured     Required     Status  Comment
  ---------------  -------------  -------------  -----------  ------  -------
  oracle-hds-srv2  16777216       16777216       4194304      passed
  oracle-hds-srv1  16777216       16777216       4194304      passed


  oracle-hds-srv4  16777216       16777216       4194304      passed
  oracle-hds-srv3  16777216       16777216       4194304      passed
Result: Kernel parameter check passed for "rmem_max"

Check: Kernel parameter for "wmem_default"
  Node Name        Current        Configured     Required     Status  Comment
  ---------------  -------------  -------------  -----------  ------  -------
  oracle-hds-srv2  4194304        4194304        262144       passed
  oracle-hds-srv1  4194304        4194304        262144       passed
  oracle-hds-srv4  4194304        4194304        262144       passed
  oracle-hds-srv3  4194304        4194304        262144       passed
Result: Kernel parameter check passed for "wmem_default"

Check: Kernel parameter for "wmem_max"
  Node Name        Current        Configured     Required     Status  Comment
  ---------------  -------------  -------------  -----------  ------  -------
  oracle-hds-srv2  16777216       16777216       1048576      passed
  oracle-hds-srv1  16777216       16777216       1048576      passed
  oracle-hds-srv4  16777216       16777216       1048576      passed
  oracle-hds-srv3  16777216       16777216       1048576      passed
Result: Kernel parameter check passed for "wmem_max"

Check: Kernel parameter for "aio-max-nr"
  Node Name        Current        Configured     Required     Status  Comment
  ---------------  -------------  -------------  -----------  ------  -------
  oracle-hds-srv2  2097152        2097152        1048576      passed
  oracle-hds-srv1  2097152        2097152        1048576      passed
  oracle-hds-srv4  2097152        2097152        1048576      passed
  oracle-hds-srv3  2097152        2097152        1048576      passed
Result: Kernel parameter check passed for "aio-max-nr"

Check: Package existence for "binutils"
  Node Name        Available                      Required              Status
  ---------------  -----------------------------  --------------------  ------
  oracle-hds-srv2  binutils-2.20.51.0.2-5.36.el6  binutils-2.20.51.0.2  passed
  oracle-hds-srv1  binutils-2.20.51.0.2-5.36.el6  binutils-2.20.51.0.2  passed
  oracle-hds-srv4  binutils-2.20.51.0.2-5.36.el6  binutils-2.20.51.0.2  passed
  oracle-hds-srv3  binutils-2.20.51.0.2-5.36.el6  binutils-2.20.51.0.2  passed
Result: Package existence check passed for "binutils"

Check: Package existence for "compat-libcap1"
  Node Name        Available              Required             Status
  ---------------  ---------------------  -------------------  ------
  oracle-hds-srv2  compat-libcap1-1.10-1  compat-libcap1-1.10  passed


  oracle-hds-srv1  compat-libcap1-1.10-1  compat-libcap1-1.10  passed
  oracle-hds-srv4  compat-libcap1-1.10-1  compat-libcap1-1.10  passed
  oracle-hds-srv3  compat-libcap1-1.10-1  compat-libcap1-1.10  passed
Result: Package existence check passed for "compat-libcap1"

Check: Package existence for "compat-libstdc++-33(x86_64)"
  Node Name        Available                                 Required                           Status
  ---------------  ----------------------------------------  ---------------------------------  ------
  oracle-hds-srv2  compat-libstdc++-33(x86_64)-3.2.3-69.el6  compat-libstdc++-33(x86_64)-3.2.3  passed
  oracle-hds-srv1  compat-libstdc++-33(x86_64)-3.2.3-69.el6  compat-libstdc++-33(x86_64)-3.2.3  passed
  oracle-hds-srv4  compat-libstdc++-33(x86_64)-3.2.3-69.el6  compat-libstdc++-33(x86_64)-3.2.3  passed
  oracle-hds-srv3  compat-libstdc++-33(x86_64)-3.2.3-69.el6  compat-libstdc++-33(x86_64)-3.2.3  passed
Result: Package existence check passed for "compat-libstdc++-33(x86_64)"

Check: Package existence for "libgcc(x86_64)"
  Node Name        Available                   Required                Status
  ---------------  --------------------------  ----------------------  ------
  oracle-hds-srv2  libgcc(x86_64)-4.4.7-3.el6  libgcc(x86_64)-4.4.4    passed
  oracle-hds-srv1  libgcc(x86_64)-4.4.7-3.el6  libgcc(x86_64)-4.4.4    passed
  oracle-hds-srv4  libgcc(x86_64)-4.4.7-3.el6  libgcc(x86_64)-4.4.4    passed
  oracle-hds-srv3  libgcc(x86_64)-4.4.7-3.el6  libgcc(x86_64)-4.4.4    passed
Result: Package existence check passed for "libgcc(x86_64)"

Check: Package existence for "libstdc++(x86_64)"
  Node Name        Available                      Required                 Status
  ---------------  -----------------------------  -----------------------  ------
  oracle-hds-srv2  libstdc++(x86_64)-4.4.7-3.el6  libstdc++(x86_64)-4.4.4  passed
  oracle-hds-srv1  libstdc++(x86_64)-4.4.7-3.el6  libstdc++(x86_64)-4.4.4  passed
  oracle-hds-srv4  libstdc++(x86_64)-4.4.7-3.el6  libstdc++(x86_64)-4.4.4  passed
  oracle-hds-srv3  libstdc++(x86_64)-4.4.7-3.el6  libstdc++(x86_64)-4.4.4  passed
Result: Package existence check passed for "libstdc++(x86_64)"

Check: Package existence for "libstdc++-devel(x86_64)"
  Node Name        Available                            Required                       Status
  ---------------  -----------------------------------  -----------------------------  ------
  oracle-hds-srv2  libstdc++-devel(x86_64)-4.4.7-3.el6  libstdc++-devel(x86_64)-4.4.4  passed
  oracle-hds-srv1  libstdc++-devel(x86_64)-4.4.7-3.el6  libstdc++-devel(x86_64)-4.4.4  passed


  oracle-hds-srv4  libstdc++-devel(x86_64)-4.4.7-3.el6  libstdc++-devel(x86_64)-4.4.4  passed
  oracle-hds-srv3  libstdc++-devel(x86_64)-4.4.7-3.el6  libstdc++-devel(x86_64)-4.4.4  passed
Result: Package existence check passed for "libstdc++-devel(x86_64)"

Check: Package existence for "sysstat"
  Node Name        Available             Required       Status
  ---------------  --------------------  -------------  ------
  oracle-hds-srv2  sysstat-9.0.4-20.el6  sysstat-9.0.4  passed
  oracle-hds-srv1  sysstat-9.0.4-20.el6  sysstat-9.0.4  passed
  oracle-hds-srv4  sysstat-9.0.4-20.el6  sysstat-9.0.4  passed
  oracle-hds-srv3  sysstat-9.0.4-20.el6  sysstat-9.0.4  passed
Result: Package existence check passed for "sysstat"

Check: Package existence for "gcc"
  Node Name        Available        Required   Status
  ---------------  ---------------  ---------  ------
  oracle-hds-srv2  gcc-4.4.7-3.el6  gcc-4.4.4  passed
  oracle-hds-srv1  gcc-4.4.7-3.el6  gcc-4.4.4  passed
  oracle-hds-srv4  gcc-4.4.7-3.el6  gcc-4.4.4  passed
  oracle-hds-srv3  gcc-4.4.7-3.el6  gcc-4.4.4  passed
Result: Package existence check passed for "gcc"

Check: Package existence for "gcc-c++"
  Node Name        Available            Required       Status
  ---------------  -------------------  -------------  ------
  oracle-hds-srv2  gcc-c++-4.4.7-3.el6  gcc-c++-4.4.4  passed
  oracle-hds-srv1  gcc-c++-4.4.7-3.el6  gcc-c++-4.4.4  passed
  oracle-hds-srv4  gcc-c++-4.4.7-3.el6  gcc-c++-4.4.4  passed
  oracle-hds-srv3  gcc-c++-4.4.7-3.el6  gcc-c++-4.4.4  passed
Result: Package existence check passed for "gcc-c++"

Check: Package existence for "ksh"
  Node Name        Available            Required      Status
  ---------------  -------------------  ------------  ------
  oracle-hds-srv2  ksh-20100621-19.el6  ksh-20100621  passed
  oracle-hds-srv1  ksh-20100621-19.el6  ksh-20100621  passed
  oracle-hds-srv4  ksh-20100621-19.el6  ksh-20100621  passed


  oracle-hds-srv3  ksh-20100621-19.el6  ksh-20100621  passed
Result: Package existence check passed for "ksh"

Check: Package existence for "make"
  Node Name        Available         Required   Status
  ---------------  ----------------  ---------  ------
  oracle-hds-srv2  make-3.81-20.el6  make-3.81  passed
  oracle-hds-srv1  make-3.81-20.el6  make-3.81  passed
  oracle-hds-srv4  make-3.81-20.el6  make-3.81  passed
  oracle-hds-srv3  make-3.81-20.el6  make-3.81  passed
Result: Package existence check passed for "make"

Check: Package existence for "glibc(x86_64)"
  Node Name        Available                     Required            Status
  ---------------  ----------------------------  ------------------  ------
  oracle-hds-srv2  glibc(x86_64)-2.12-1.107.el6  glibc(x86_64)-2.12  passed
  oracle-hds-srv1  glibc(x86_64)-2.12-1.107.el6  glibc(x86_64)-2.12  passed
  oracle-hds-srv4  glibc(x86_64)-2.12-1.107.el6  glibc(x86_64)-2.12  passed
  oracle-hds-srv3  glibc(x86_64)-2.12-1.107.el6  glibc(x86_64)-2.12  passed
Result: Package existence check passed for "glibc(x86_64)"

Check: Package existence for "glibc-devel(x86_64)"
  Node Name        Available                           Required                  Status
  ---------------  ----------------------------------  ------------------------  ------
  oracle-hds-srv2  glibc-devel(x86_64)-2.12-1.107.el6  glibc-devel(x86_64)-2.12  passed
  oracle-hds-srv1  glibc-devel(x86_64)-2.12-1.107.el6  glibc-devel(x86_64)-2.12  passed
  oracle-hds-srv4  glibc-devel(x86_64)-2.12-1.107.el6  glibc-devel(x86_64)-2.12  passed
  oracle-hds-srv3  glibc-devel(x86_64)-2.12-1.107.el6  glibc-devel(x86_64)-2.12  passed
Result: Package existence check passed for "glibc-devel(x86_64)"

Check: Package existence for "libaio(x86_64)"
  Node Name        Available                      Required                Status
  ---------------  -----------------------------  ----------------------  ------
  oracle-hds-srv2  libaio(x86_64)-0.3.107-10.el6  libaio(x86_64)-0.3.107  passed
  oracle-hds-srv1  libaio(x86_64)-0.3.107-10.el6  libaio(x86_64)-0.3.107  passed
  oracle-hds-srv4  libaio(x86_64)-0.3.107-10.el6  libaio(x86_64)-0.3.107  passed
  oracle-hds-srv3  libaio(x86_64)-0.3.107-10.el6  libaio(x86_64)-0.3.107  passed


Result: Package existence check passed for "libaio(x86_64)"

Check: Package existence for "libaio-devel(x86_64)"
  Node Name        Available                            Required                      Status
  ---------------  -----------------------------------  ----------------------------  ------
  oracle-hds-srv2  libaio-devel(x86_64)-0.3.107-10.el6  libaio-devel(x86_64)-0.3.107  passed
  oracle-hds-srv1  libaio-devel(x86_64)-0.3.107-10.el6  libaio-devel(x86_64)-0.3.107  passed
  oracle-hds-srv4  libaio-devel(x86_64)-0.3.107-10.el6  libaio-devel(x86_64)-0.3.107  passed
  oracle-hds-srv3  libaio-devel(x86_64)-0.3.107-10.el6  libaio-devel(x86_64)-0.3.107  passed
Result: Package existence check passed for "libaio-devel(x86_64)"
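Each package-existence check compares the installed RPM version against a required minimum. A minimal sketch of that comparison, assuming GNU `sort -V` is available (illustrative only; this is not cluvfy's actual implementation):

```shell
#!/bin/sh
# Illustrative approximation of cluvfy's "Available vs. Required"
# package check using a plain version-string comparison.
version_ge() {
    # True when $1 >= $2 under GNU version ordering.
    [ "$(printf '%s\n' "$1" "$2" | sort -V | tail -n1)" = "$1" ]
}

# Values taken from the gcc check above: available 4.4.7, required 4.4.4.
if version_ge 4.4.7 4.4.4; then
    echo "gcc: passed"
else
    echo "gcc: failed"
fi
```

On a live node the available version would come from `rpm -q --qf '%{VERSION}' <package>` rather than a literal.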

Checking for multiple users with UID value 0
Result: Check for multiple users with UID value 0 passed

Starting check for consistency of primary group of root user
  Node Name                             Status
  ------------------------------------  ------------------------
  oracle-hds-srv2                       passed
  oracle-hds-srv1                       passed
  oracle-hds-srv4                       passed
  oracle-hds-srv3                       passed

Check for consistency of root user's primary group passed

Check: Time zone consistency
Result: Time zone consistency check passed

Verification of system requirement was successful.
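The kernel-parameter checks above each compare a node's current value against a required minimum. A hedged sketch of that comparison (the `check_param` helper is illustrative, not part of cluvfy; on a live system the current value would come from `sysctl -n <param>`):

```shell
#!/bin/sh
# Illustrative helper mirroring cluvfy's Current-vs-Required comparison
# for kernel parameters.
check_param() {
    name=$1 current=$2 required=$3
    if [ "$current" -ge "$required" ]; then
        echo "$name: current=$current required=$required passed"
    else
        echo "$name: current=$current required=$required failed"
    fi
}

# Values taken from the verification output above.
check_param shmmni 4096 4096
check_param file-max 6815744 6815744
check_param aio-max-nr 2097152 1048576
```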

Appendix D: Key Linux Parameters

sysctl.conf:
kernel.sem = 8192 48000 8192 8192
net.core.rmem_default = 4194304
net.core.rmem_max = 16777216
net.core.wmem_default = 4194304
net.core.wmem_max = 16777216
vm.nr_hugepages = 72100

limits.conf:
oracle soft nofile 4096
oracle hard nofile 65536
oracle soft nproc 32767
oracle hard nproc 32767
oracle soft stack 10240
oracle hard stack 32768
grid soft nofile 4096
grid hard nofile 65536
grid soft nproc 32767
grid hard nproc 32767
grid soft stack 10240
grid hard stack 32768
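The `vm.nr_hugepages` setting above sizes the hugepage pool to back the database SGA. As an illustration (the 144,200 MB SGA figure is inferred from the 72,100-page setting, not stated in the document), the page count can be derived as follows, assuming the default 2 MB hugepage size on x86_64 Linux:

```shell
#!/bin/sh
# Sketch: derive vm.nr_hugepages from an assumed SGA size.
sga_mb=144200        # assumed aggregate SGA in MB (72100 pages * 2 MB)
hugepage_kb=2048     # default Hugepagesize on x86_64; confirm in /proc/meminfo
# Round up so the SGA fits entirely in hugepages.
pages=$(( (sga_mb * 1024 + hugepage_kb - 1) / hugepage_kb ))
echo "vm.nr_hugepages = $pages"
```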


References

Cisco Unified Computing System:

http://www.cisco.com/en/US/netsol/ns944/index.html

Hitachi Virtual Storage Platform G1000:

http://www.hds.com/products/storage-systems/hitachi-virtual-storage-platform-g1000.html?WT.ac=us_mg_pro_hvspg1000

Cisco Nexus:

http://www.cisco.com/en/US/products/ps9441/Products_Sub_Category_Home.html

Cisco Nexus 5000 Series NX-OS Software Configuration Guide:

http://www.cisco.com/en/US/docs/switches/datacenter/nexus5000/sw/configuration/guide/cli/CLIConfigurationGuide.html
