FlexPod Datacenter with Cisco Secure Enclaves

Last Updated: May 15, 2014

Building Architectures to Solve Business Problems


About the Authors

Chris O'Brien, Technical Marketing Manager, Server Access Virtualization Business Unit, Cisco Systems

Chris O'Brien is currently focused on developing infrastructure best practices and solutions that are designed, tested, and documented to facilitate and improve customer deployments. Previously, O'Brien was an application developer and has worked in the IT industry for more than 15 years.

John George, Reference Architect, Infrastructure and Cloud Engineering, NetApp

John George is a Reference Architect in the NetApp Infrastructure and Cloud Engineering team and is focused on developing, validating, and supporting cloud infrastructure solutions that include NetApp products. Before his current role, he supported and administered Nortel's worldwide training network and VPN infrastructure. John holds a Master's degree in computer engineering from Clemson University.

Lindsey Street, Solutions Architect, Infrastructure and Cloud Engineering, NetApp

Lindsey Street is a Solutions Architect in the NetApp Infrastructure and Cloud Engineering team. She focuses on the architecture, implementation, compatibility, and security of innovative vendor technologies to develop competitive and high-performance end-to-end cloud solutions for customers. Lindsey started her career in 2006 at Nortel as an interoperability test engineer, testing customer equipment interoperability for certification. Lindsey has her Bachelors of Science degree in Computer Networking and her Masters of Science in Information Security from East Carolina University.


About Cisco Validated Design (CVD) Program

The CVD program consists of systems and solutions designed, tested, and documented to facilitate faster, more reliable, and more predictable customer deployments. For more information visit http://www.cisco.com/go/designzone.

ALL DESIGNS, SPECIFICATIONS, STATEMENTS, INFORMATION, AND RECOMMENDATIONS (COLLECTIVELY, "DESIGNS") IN THIS MANUAL ARE PRESENTED "AS IS," WITH ALL FAULTS. CISCO AND ITS SUPPLIERS DISCLAIM ALL WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OR ARISING FROM A COURSE OF DEALING, USAGE, OR TRADE PRACTICE. IN NO EVENT SHALL CISCO OR ITS SUPPLIERS BE LIABLE FOR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, OR INCIDENTAL DAMAGES, INCLUDING, WITHOUT LIMITATION, LOST PROFITS OR LOSS OR DAMAGE TO DATA ARISING OUT OF THE USE OR INABILITY TO USE THE DESIGNS, EVEN IF CISCO OR ITS SUPPLIERS HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.

THE DESIGNS ARE SUBJECT TO CHANGE WITHOUT NOTICE. USERS ARE SOLELY RESPONSIBLE FOR THEIR APPLICATION OF THE DESIGNS. THE DESIGNS DO NOT CONSTITUTE THE TECHNICAL OR OTHER PROFESSIONAL ADVICE OF CISCO, ITS SUPPLIERS OR PARTNERS. USERS SHOULD CONSULT THEIR OWN TECHNICAL ADVISORS BEFORE IMPLEMENTING THE DESIGNS. RESULTS MAY VARY DEPENDING ON FACTORS NOT TESTED BY CISCO.

CCDE, CCENT, Cisco Eos, Cisco Lumin, Cisco Nexus, Cisco StadiumVision, Cisco TelePresence, Cisco WebEx, the Cisco logo, DCE, and Welcome to the Human Network are trademarks; Changing the Way We Work, Live, Play, and Learn and Cisco Store are service marks; and Access Registrar, Aironet, AsyncOS, Bringing the Meeting To You, Catalyst, CCDA, CCDP, CCIE, CCIP, CCNA, CCNP, CCSP, CCVP, Cisco, the Cisco Certified Internetwork Expert logo, Cisco IOS, Cisco Press, Cisco Systems, Cisco Systems Capital, the Cisco Systems logo, Cisco Unity, Collaboration Without Limitation, EtherFast, EtherSwitch, Event Center, Fast Step, Follow Me Browsing, FormShare, GigaDrive, HomeLink, Internet Quotient, IOS, iPhone, iQuick Study, IronPort, the IronPort logo, LightStream, Linksys, MediaTone, MeetingPlace, MeetingPlace Chime Sound, MGX, Networkers, Networking Academy, Network Registrar, PCNow, PIX, PowerPanels, ProConnect, ScriptShare, SenderBase, SMARTnet, Spectrum Expert, StackWise, The Fastest Way to Increase Your Internet Quotient, TransPath, WebEx, and the WebEx logo are registered trademarks of Cisco Systems, Inc. and/or its affiliates in the United States and certain other countries.

All other trademarks mentioned in this document or website are the property of their respective owners. The use of the word partner does not imply a partnership relationship between Cisco and any other company. (0809R)

© 2014 Cisco Systems, Inc. All rights reserved.

FlexPod Datacenter with Cisco Secure Enclaves

Overview

The increased scrutiny on security is being driven by the evolving trends of mobility, cloud computing, and advanced targeted attacks. More than the attacks themselves, a major consideration is the change in what defines a network: it now extends beyond traditional walls to include data centers, endpoints, and virtual and mobile devices, which together make up the extended network.

    Today most converged infrastructures are designed to meet performance and function requirements with little or no attention to security. Furthermore, the movement toward optimal use of IT resources through virtualization has resulted in an environment in which the true and implied security accorded by physical separation has essentially vanished. System consolidation efforts have also accelerated the movement toward co-hosting on converged platforms, and the likelihood of compromise is increased in a highly shared environment. This situation presents a need for enhanced security and an opportunity to create a framework and platform that instills trust.

The FlexPod Data Center with Cisco Secure Enclaves solution is a threat-centric approach to security that allows customers to address the full attack continuum—before, during, and after an attack—on a standard platform with a consistent approach. The solution is based on the FlexPod Data Center integrated system and is augmented with services to address business, compliance, and application requirements. FlexPod Data Center with Cisco Secure Enclaves is a standard approach to delivering a flexible, functional, and secure application environment that can be readily automated.

Solution Components

FlexPod Datacenter with Cisco Secure Enclaves uses the FlexPod Data Center configuration as its foundation. FlexPod Data Center is an integrated infrastructure solution from Cisco and NetApp with validated designs that expedite IT infrastructure and application deployment while simultaneously reducing cost, complexity, and project risk. FlexPod Data Center consists of Cisco Nexus networking, the Cisco Unified Computing System (Cisco UCS), and NetApp FAS series storage systems. One especially significant benefit of the FlexPod architecture is the ability to customize or "flex" the environment to suit a customer's requirements; this includes the hardware previously mentioned as well as the operating systems or hypervisors it supports.


The Cisco Secure Enclaves design extends the FlexPod infrastructure by using the capabilities inherent in the integrated system and augmenting this functionality with services to address the specific business and application requirements of the enterprise. These functional requirements promote uniqueness and innovation in the FlexPod, augmenting the original FlexPod design to support these prerequisites. The result is a region, or enclave—and more likely multiple enclaves—in the FlexPod, built to address the unique workload activities and business objectives of an organization.

    FlexPod Data Center with Cisco Secure Enclaves is developed using the following technologies:

    FlexPod Data Center from Cisco and NetApp

    VMware vSphere

    Cisco Adaptive Security Appliance (ASA)

    Cisco NetFlow Generation Appliance (NGA)

    Cisco Virtual Security Gateway (VSG)

    Cisco Identity Services Engine (ISE)

    Cisco Network Analysis Module

    Cisco UCS Director

    Lancope StealthWatch System

    Note The FlexPod solution is hypervisor agnostic. Please go to the Reference Section of this document for URLs providing more details about the individual components of the solution.

Audience

This document describes the architecture and deployment procedures of a secure FlexPod Data Center infrastructure enabled with Cisco and NetApp technologies. The intended audience for this document includes but is not limited to sales engineers, field consultants, professional services, IT managers, partner engineering, and customers interested in making security an integral part of their FlexPod infrastructure.

    FlexPod Data Center with Cisco Secure Enclaves

FlexPod Data Center with Cisco Secure Enclaves Overview

FlexPod Data Center with Cisco Secure Enclaves is a standardized approach to the integration of security services with a FlexPod Data Center based infrastructure. The design enables features inherent in the FlexPod platform and calls for its extension through dedicated physical or virtual appliance implementations. The main design objective is to help ensure that applications in this environment meet their subscribed service-level agreements (SLAs), including confidentiality requirements, by using the validated FlexPod infrastructure and the security additions it can readily support. The secure enclave framework allows an organization to adapt the FlexPod shared infrastructure to meet the disparate needs of users and applications based on their specific requirements.


    Components of FlexPod Data Center with Cisco Secure Enclaves

FlexPod Data Center

FlexPod Data Center is a unified platform composed of Cisco UCS servers, Cisco Nexus network switches, and NetApp storage arrays. Figure 1 shows the FlexPod base configuration and design elements. The FlexPod modules can be configured to match application requirements by mixing and matching component versions to achieve the optimum capacity, price, and performance targets. The solution can be scaled by augmenting the elements of a single FlexPod instance and by adding multiple FlexPod instances to build numerous solutions for virtualized and non-virtualized data centers.

    Figure 1 FlexPod Datacenter Solution

    Cisco Secure Enclaves

    The Cisco Secure Enclaves design uses the common components of Cisco Integrated Systems along with additional services integration to address business and application requirements. These functional requirements promote uniqueness and innovation in the integrated computing stack that augment the original design to support these prerequisites. These unique areas on the shared infrastructure are referenced as enclaves. The Cisco Integrated System readily supports one or multiple enclaves.

The common foundation of the Cisco Secure Enclaves design is the Cisco Integrated Systems components. Cisco Integrated Systems consist of the Cisco Unified Computing System (Cisco UCS) and Cisco Nexus platforms. Figure 2 illustrates the extension of Cisco Integrated Systems to include features and functions beyond the foundational elements. Access controls, visibility, and threat defense are all elements that can be uniformly introduced into the system as required. The main feature of the enclave framework is the extensibility of the architecture to integrate current and future technologies within and upon its underpinnings, expanding the value of the infrastructure stack to address current and future application requirements.


    Figure 2 Cisco Secure Enclaves Architecture Structure

For more information on the Cisco Secure Enclave Architecture, go to: http://www.cisco.com/c/en/us/products/collateral/servers-unified-computing/ucs-manager/whitepaper-c07-731204.html

    Software Revisions

    Table 1 details the software revisions of various components used in the solution validation.

Table 1 Software Revisions

Layer       Component                                          Software             Risk               Count
Network     Cisco Nexus 5548UP                                 NX-OS 6.0(2)N1(2a)   Low (positioned)   2
Network     Cisco Nexus 7000                                   NX-OS 6.1(2)         Low (positioned)   2
Network     Cisco Nexus 1110X                                  4.2(1)SP1(6.2)       Low (positioned)   2
Network     Cisco Nexus 1000v                                  4.2(1)SV2(2.1a)      Low (positioned)   1
Compute     Cisco UCS Fabric Interconnect 6248                 2.1(3a)              Low (positioned)   2
Compute     Cisco UCS Fabric Extender 2232                     2.1(3a)              Low (positioned)   2
Compute     Cisco UCS C220-M3                                  2.1(3a)              Low (positioned)   2
Compute     Cisco UCS B200-M3                                  2.1(3a)              Low (positioned)   4
Compute     VMware ESXi                                        5.1u1                Low                X
Compute     Cisco eNIC Driver                                  2.1.2.38             Low
Compute     Cisco fNIC Driver                                  1.5.0.45             Low
Compute     VMware vCenter                                     5.1u1                Low                1
Services    Cisco Virtual Security Gateway (VSG)               4.2(1)VSG1(1)        Low (positioned)   X
Services    Cisco UCS Manager (UCSM)                           2.1(3)               Low (positioned)   1
Services    Cisco Network Analysis Module (NAM) VSB            5.1(2)               Low (positioned)   1
Services    Cisco NetFlow Generation Appliance (NGA)           1.0(2)               Low (positioned)   2
Services    Cisco Identity Services Engine (ISE)               1.2                  Low (positioned)   2
Services    Lancope StealthWatch                               6.3                  Low (positioned)
Services    Cisco Intrusion Prevention System Security
            Services Processor (IPS SSP)                       7.2(1)E4             Low (positioned)   2
Services    Cisco Adaptive Security Appliance (ASA) 5585       9.1(2)               Low (positioned)   2
Services    Lancope StealthWatch FlowCollector                 6.3                  Low (positioned)
Services    Citrix NetScaler 1000v                             10.1                 Low (positioned)
Management  Cisco UCS Director                                 4.1                  Low (positioned)   1
Management  Lancope StealthWatch Management Console            6.3                  Low (positioned)
Management  Cisco Security Manager (CSM)                       4.4                  Low (positioned)   1
Management  Cisco Prime Network Services Controller            3.0(2e)              Low (positioned)   1
Management  NetApp OnCommand System Manager                    3.0                  Low (positioned)


    FlexPod Data Center with Cisco Secure Enclaves Architecture and Design

FlexPod Topology

Figure 3 depicts the two FlexPod models validated in this configuration. These are the foundation platforms to be augmented with additional services to instantiate an enclave.

Figure 3 FlexPod Data Center with Cisco Nexus 7000 (Left) and FlexPod Data Center with Cisco Nexus 5000 (Right)

Table 1 Software Revisions (continued)

Layer       Component                                          Software             Risk               Count
Management  NetApp OnCommand Unified Manager                   6.0                  Low (positioned)
Management  NetApp Virtual Storage Console (VSC)               4.2.1                Low (positioned)
Management  NetApp NFS Plug-in for VMware vStorage APIs
            for Array Integration (VAAI)                       1.0.21               Low
Management  NetApp OnCommand Balance                           4.1.1.2R1            Low (positioned)
Storage     NetApp FAS 3250                                    Data ONTAP 8.2P5     Low                2


Note For more information on the FlexPod Data Center configurations used in the design, go to:

FlexPod Data Center with VMware vSphere 5.1 and Nexus 7000 using FCoE Design Guide

FlexPod Data Center with VMware vSphere 5.1 Update 1 Design Guide

FlexPod Design Zone

    The following common features between the FlexPod models are key for the instantiation of the secure enclaves on the FlexPod:

    NetApp FAS Controllers with Clustered Data ONTAP providing Storage Virtual Machine (SVM) and Quality of Service (QoS) capabilities

Cisco Nexus switching providing a unified fabric, Cisco TrustSec, Private VLANs, NetFlow, Switched Port Analyzer (SPAN), VXLAN, and QoS capabilities

    Cisco Unified Computing System (UCS) with centralized management through Cisco UCS Manager, SPAN, QoS, Private VLANs, and hardware virtualization

Adaptive Security Appliance (ASA) Extension

The Cisco ASA provides advanced stateful firewall and VPN concentrator functionality in one device and, for some models, integrated services modules such as IPS. The ASA includes many advanced features, such as multiple security contexts (similar to virtualized firewalls), clustering (combining multiple firewalls into a single logical firewall), transparent (Layer 2) or routed (Layer 3) firewall operation, advanced inspection engines, VPN support, Cisco TrustSec, and many more. The ASA has two physical deployment models, each of which has been validated to support secure enclaves.

The enclave design uses the Security Group Firewall (SGFW) functionality of the ASA to enforce policy to and between servers in the data center. The SGFW objects are centrally defined in the Cisco Identity Services Engine (ISE) and used by the security operations team to create access policies. The Cisco ASA then simply uses the source and destination security groups to make enforcement decisions.
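As an illustration, SGFW rules on the ASA can reference ISE-defined security groups directly in access lists; the following is a hedged sketch, where the security-group names (Web_Tier, DB_Tier), port, and interface name are hypothetical:

```
! Hypothetical SGT-based policy; security-group names are resolved through ISE
access-list ENCLAVE1_SGACL extended permit tcp security-group name Web_Tier any security-group name DB_Tier any eq 1433
access-list ENCLAVE1_SGACL extended deny ip any any log
access-group ENCLAVE1_SGACL in interface enclave1
```

Because the match criteria are group names rather than IP addresses, the policy follows servers as they are added to or moved within an enclave.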

ASA High Availability Pair

Figure 4 shows a traditional Cisco ASA high-availability pair deployment model in which the Cisco Nexus switches of the FlexPod provide a connection point for the appliances. The ASA uses the Virtual Port Channel (vPC) capabilities of the Cisco Nexus switch for link and device fault tolerance. The two units in an HA pair communicate over a failover link to determine the operating status of each unit. The following information is communicated over the failover link:

    The unit state (active or standby)

    Hello messages (keep-alives)

    Network link status

    MAC address exchange

    Configuration replication and synchronization

    The stateful link supports the sharing of session state information between the devices.
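On the ASA, the HA pair described above reduces to a short failover bootstrap; this is a minimal sketch, assuming hypothetical interface and subnet choices (the secondary unit is bootstrapped the same way with `failover lan unit secondary`):

```
! Primary unit bootstrap: FOLINK carries failover control traffic,
! STATELINK carries stateful session replication
failover lan unit primary
failover lan interface FOLINK TenGigabitEthernet0/6
failover link STATELINK TenGigabitEthernet0/7
failover interface ip FOLINK 192.168.101.1 255.255.255.252 standby 192.168.101.2
failover interface ip STATELINK 192.168.102.1 255.255.255.252 standby 192.168.102.2
failover
```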


http://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/UCS_CVDs/esxi51_N7k_fcoe_design.html
http://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/UCS_CVDs/flexpod_esxi51_design.html
http://www.cisco.com/c/en/us/solutions/enterprise/data-center-designs-cloud-computing/landing_flexpod.html


    Figure 4 Physical Security Extension to the FlexPod - ASA HA Pair

ASA Clustering

ASA clustering lets you group multiple ASAs together as a single logical device. A cluster provides all the convenience of a single device (management, integration into a network) while achieving the increased throughput and redundancy of multiple devices. Currently, an ASA cluster supports a maximum of eight nodes. Figure 5 describes the physical connection of the ASA cluster to the Cisco Nexus switches of the FlexPod.

    Figure 5 Physical Extension to the FlexPod - ASA Clustering


    The ASA cluster uses a single vPC to support data traffic and a dedicated vPC per cluster node for control and data traffic redirection within the cluster. Control traffic includes:

    Master election

    Configuration replication

    Health monitoring

    Data traffic includes:

    State replication

    Connection ownership queries and data packet forwarding

The data vPC spans all the nodes of the cluster, a configuration known as a spanned EtherChannel, and is the recommended mode of operation. The Cisco Nexus switches use a consistent port channel load-balancing algorithm to balance traffic distribution in and out of the cluster and to limit and optimize use of the cluster control links.
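On the Nexus side, the spanned-EtherChannel attachment amounts to a vPC plus a deterministic load-balancing hash applied consistently on both switches; a sketch under assumed port-channel and vPC numbers:

```
! Consistent hash on both Nexus switches keeps a given flow pinned
! to the same ASA cluster node, minimizing intra-cluster redirection
port-channel load-balance ethernet source-dest-ip

interface port-channel20
  description ASA cluster data vPC (spanned EtherChannel)
  switchport mode trunk
  vpc 20
```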

    Note The ASA clustering implementation from this validation is captured in a separate CVD titled Cisco Secure Data Center for Enterprise Design Guide.

NetFlow Generation Appliance (NGA) Extension

The Cisco NetFlow Generation Appliance (NGA) introduces a highly scalable, cost-effective architecture for cross-device flow generation. The Cisco NGA generates, unifies, and exports flow data, empowering network operations, engineering, and security teams to boost network operations excellence, enhance services delivery, implement accurate billing, and harden network security. The NGA is a promiscuous device and can accept mirrored traffic from any source to create NetFlow records for export. The export target in this design is the cyber threat detection system, the Lancope StealthWatch platform.

The use of threat defense systems allows an organization to address compliance and other mandates, network and data security concerns, and monitoring and visibility of the data center. Cyber threat defense addresses several use cases, including:

Detecting advanced security threats that have breached the perimeter security boundaries

Uncovering network and security reconnaissance

Identifying malware and botnet activity

Preventing data loss

Figure 6 shows the deployment of the Cisco NGA on the stack to provide these services, accepting mirrored traffic from various sources of the converged infrastructure. As illustrated, the NGAs are dual-homed to the Cisco Nexus switches, which use a static "always on" port channel configuration to mirror traffic from the various monitoring sessions defined on each switch. In addition, the NGAs capture interesting traffic from the Cisco UCS domain. It should be noted that the SPAN traffic originating from each fabric interconnect is rate-limited to 1 Gbps.
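A Nexus 5548 SPAN session feeding the NGA could be sketched as follows; the interface, port-channel, and VLAN numbers are hypothetical:

```
! Destination port toward the NGA must be placed in monitor mode
interface Ethernet1/15
  switchport monitor

! Mirror enclave traffic to the NGA for NetFlow record generation
monitor session 1
  source interface port-channel10 both
  source vlan 3001 rx
  destination interface Ethernet1/15
  no shut
```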


http://www.cisco.com/c/en/us/td/docs/solutions/Enterprise/Security/SDC/DG/SDC_DesignGuide/SDC_DG_2013-11-25_v10.html


    Figure 6 Physical Extension of the FlexPod - NetFlow Generation Appliance Integration

The Enclave

The enclave is a distinct logical entity that encompasses essential constructs including security along with application or customer-specific resources to deliver a trusted platform that meets SLAs. The modular construction and potential to automate delivery help make the enclave a scalable and securely separated layer of abstraction. The use of multiple enclaves delivers increased isolation, addressing disparate requirements of the FlexPod integrated infrastructure stack.

Figure 7 provides a conceptual view that defines the enclave in relation to an n-tier application.

    The enclave provides the following functions:

    Access control point for the secure region (public)

    Access control within and between application tiers (private)

    Cisco Cyber Security and Threat Defense operations to expose and identify malicious traffic

Cisco TrustSec security using secure group access control to identify server roles and enforce security policy

    Out-of-band management for centralized administration of the enclave and its resources

    Optional load-balancing capabilities


    Figure 7 Cisco Secure Enclave Model

Storage Design

Clustered Data ONTAP is an ideal storage operating system to support the Secure Enclave Architecture (SEA). Clustered Data ONTAP is architected so that all data access is done through secure virtual storage partitions. It is possible to have a single partition that represents the resources of the entire cluster, or multiple partitions that are assigned specific subsets of cluster resources for individual enclaves. These secure virtual storage partitions are known as Storage Virtual Machines (SVMs). In the current implementation of SEA, the SVM serves as the storage basis for each enclave.

    Storage Virtual Machines (SVMs)

Introduction to SVMs

The secure logical storage partition through which data is accessed in clustered Data ONTAP is known as a Storage Virtual Machine (SVM). A cluster serves data through at least one and possibly multiple SVMs. An SVM is a logical abstraction that represents a set of physical resources of the cluster. Data volumes and logical network interfaces (LIFs) are created and assigned to an SVM and may reside on any node in the cluster to which the SVM has been given access. An SVM may own resources on multiple nodes concurrently, and those resources can be moved nondisruptively from one node to another. For example, a flexible volume may be nondisruptively moved to a new node and aggregate, or a data LIF could be transparently reassigned to a different physical network port. In this manner, the SVM abstracts the cluster hardware and is not tied to specific physical hardware.

    An SVM is capable of supporting multiple data protocols concurrently. Volumes within the SVM can be junctioned together to form a single NAS namespace, which makes all of an SVM's data available through a single share or mount point to NFS and CIFS clients. SVMs also support block-based protocols, and LUNs can be created and exported using iSCSI, Fibre Channel, or Fibre Channel over Ethernet. Any or all of these data protocols may be configured for use within a given SVM.

    Because it is a secure entity, an SVM is only aware of the resources that have been assigned to it and has no knowledge of other SVMs and their respective resources. Each SVM operates as a separate and distinct entity with its own security domain. Tenants may manage the resources allocated to them through a delegated SVM administration account. Each SVM may connect to unique authentication zones such as Active Directory, LDAP, or NIS.


    An SVM is effectively isolated from other SVMs that share the same physical hardware.

    Clustered Data ONTAP is highly scalable, and additional storage controllers and disks can be easily added to existing clusters in order to scale capacity and performance to meet rising demands. As new nodes or aggregates are added to the cluster, the SVM can be nondisruptively configured to use them. In this way, new disk, cache, and network resources can be made available to the SVM to create new data volumes or migrate existing workloads to these new resources in order to balance performance.

    This scalability also enables the SVM to be highly resilient. SVMs are no longer tied to the lifecycle of a given storage controller. As new hardware is introduced to replace hardware that is to be retired, SVM resources can be nondisruptively moved from the old controllers to the new controllers. At this point the old controllers can be retired from service while the SVM is still online and available to serve data.
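The nondisruptive mobility described above maps to two cluster-shell operations; a sketch with hypothetical SVM, volume, aggregate, node, and port names:

```
# Move a flexible volume to an aggregate on a newly added node,
# while the volume remains online and serving data
volume move start -vserver enclave1_svm -volume enclave_ds1 -destination-aggregate aggr1_node03

# Transparently reassign a data LIF to a port on the new node
network interface migrate -vserver enclave1_svm -lif enclave1_nfs -dest-node node03 -dest-port a0a-3001
```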

    Components of an SVM

Logical Interfaces

All SVM networking is done through logical interfaces (LIFs) that are created within the SVM. As logical constructs, LIFs are abstracted from the physical networking ports on which they reside.

Flexible Volumes

A flexible volume is the basic unit of storage for an SVM. An SVM has a root volume and can have one or more data volumes. Data volumes can be created in any aggregate that has been delegated by the cluster administrator for use by the SVM. Depending on the data protocols used by the SVM, volumes can contain either LUNs for use with block protocols, files for use with NAS protocols, or both concurrently.

Namespace

Each SVM has a distinct namespace through which all of the NAS data shared from that SVM can be accessed. This namespace can be thought of as a map to all of the junctioned volumes for the SVM, no matter on which node or aggregate they might physically reside. Volumes may be junctioned at the root of the namespace or beneath other volumes that are part of the namespace hierarchy.
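Putting the SVM components together, creating an SVM, a junctioned data volume, and a data LIF could look like the following clustered Data ONTAP sketch; the names, aggregate, node, port, and address are hypothetical:

```
# Create the SVM with its root volume
vserver create -vserver enclave1_svm -rootvolume enclave1_root -aggregate aggr1_node01 -rootvolume-security-style unix

# Create a data volume and junction it into the SVM namespace
volume create -vserver enclave1_svm -volume enclave_ds1 -aggregate aggr1_node01 -size 500g -junction-path /enclave_ds1

# Create a data LIF for NFS access to the enclave datastore
network interface create -vserver enclave1_svm -lif enclave1_nfs -role data -data-protocol nfs -home-node node01 -home-port a0a-3001 -address 192.168.30.11 -netmask 255.255.255.0
```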

    Managing Storage Workload Performance Using Storage QoS

Storage QoS (Quality of Service) can help manage risks around meeting performance objectives. You use Storage QoS to limit throughput to workloads and to monitor workload performance. You can reactively limit workloads to address performance problems, and you can proactively limit workloads to prevent performance problems. You can also limit workloads to support SLAs with customers. Workloads can be limited on either an IOPS or a bandwidth (MB/s) basis.

    Storage QoS is supported on clusters that have up to eight nodes.

    A workload represents the input/output (I/O) operations to one of the following storage objects:

    A Storage Virtual Machine (SVM) with FlexVol volumes

    A FlexVol volume

    A LUN

    A file (typically represents a virtual machine)

In the SEA architecture, because an SVM is usually associated with an enclave, a QoS policy group would normally be applied to the SVM, setting an overall storage rate limit for the enclave. Storage QoS is administered by the cluster administrator.


    You assign a storage object to a QoS policy group to control and monitor a workload. You can monitor workloads without controlling them in order to size the workload and determine appropriate limits within the storage cluster.

    For more information on managing workload performance by using Storage QoS, please see "Managing system performance" in the Clustered Data ONTAP 8.2 System Administration Guide for Cluster Administrators.
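For example, a per-enclave rate limit at the SVM level might be applied as follows; the policy-group name and IOPS limit are hypothetical:

```
# Create a policy group capping the enclave's SVM at 5000 IOPS
qos policy-group create -policy-group enclave1_qos -vserver enclave1_svm -max-throughput 5000iops

# Assign the enclave SVM to the policy group
vserver modify -vserver enclave1_svm -qos-policy-group enclave1_qos

# Monitor workload performance against the limit
qos statistics performance show
```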

    NetApp cDOT SVM with Cisco Secure Enclaves

The cDOT SVM is a significant element of the FlexPod Data Center with Cisco Secure Enclaves design. As shown in Figure 8, the physical network resources of two NetApp FAS3200 series controllers have been partitioned into three logical controllers: the Infrastructure SVM, the Enclave1 SVM, and the Enclave2 SVM. Each SVM is allocated to an enclave supporting one or more applications, removing the requirement for dedicated physical storage, as the FAS device logically consolidates and separates the storage partitions. The enclave SVMs have the following characteristics:

Dedicated logical interfaces (LIFs) are created in each SVM from the physical NetApp Unified Target Adapters (UTAs)

SAN LIF presence supporting SAN A (e3) and SAN B (e4) topologies

Zoning provides SAN traffic isolation within the fabric

The NetApp ifgroup aggregates the Ethernet interfaces (e3a, e4a) of the UTA for high availability and supports Layer 2 VLANs

IP LIFs use the ifgroup construct for NFS (enclave_ds1) and/or iSCSI-based LIFs

Management IP LIFs (svm_mgmt) are defined on each SVM for administration of that SVM and its logical resources; management is contained to the SVM

Dedicated VLANs for each LIF assure traffic separation across the Ethernet fabric

    Figure 8 NetApp FAS Enclave Storage Design Using cDOT Storage Virtual Machines
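The ifgroup and VLAN constructs listed above might be created as follows; the node name, ifgroup name, and VLAN ID are hypothetical, while e3a/e4a are the UTA Ethernet ports named in the design:

```
# LACP ifgroup over the UTA Ethernet ports for high availability
network port ifgrp create -node node01 -ifgrp a0a -distr-func ip -mode multimode_lacp
network port ifgrp add-port -node node01 -ifgrp a0a -port e3a
network port ifgrp add-port -node node01 -ifgrp a0a -port e4a

# Dedicated VLAN per LIF on top of the ifgroup for traffic separation
network port vlan create -node node01 -vlan-name a0a-3001
```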

    In addition, each SVM brings other features to support the granular separation and control of the FlexPod storage domain. These include:

    QoS policies allowing the administrator to manage system performance and resource consumption per Enclave through policies based on IOPS or Mbps throughput.

Role-based access control with predefined roles at the cDOT cluster layer and per individual SVM


    Performance monitoring

    Management security through firewall policy limiting access to trusted protocols.

    Figure 9 describes another deployment model for the Cisco Secure Enclave on NetApp cDOT. The Enclaves do not receive a dedicated SVM but share a single SVM with multiple LIFs defined to support specific data stores. This model does not provide the same level of granularity, but it may provide a simpler operational model for larger deployments.

    Figure 9 NetApp FAS Enclave Storage Design Using cDOT Storage Virtual Machines (Service Provider Model)

Compute Design

The Cisco UCS Manager resides on a pair of Cisco UCS 6200 Series Fabric Interconnects using a clustered, active-standby configuration for high availability. The software gives administrators a single interface for performing server provisioning, device discovery, inventory, configuration, diagnostics, monitoring, fault detection, auditing, and statistics collection. Cisco UCS Manager service profiles and templates support versatile role- and policy-based management, and system configuration information can be exported to configuration management databases (CMDBs) to facilitate processes based on IT Infrastructure Library (ITIL) concepts.

Compute nodes are deployed in a Cisco UCS environment by leveraging Cisco UCS service profiles. Service profiles let server, network, and storage administrators treat Cisco UCS servers as raw computing capacity to be allocated and reallocated as needed. The profiles define server I/O properties, personalities, and firmware revisions, and are stored in the Cisco UCS 6200 Series Fabric Interconnects. Using service profiles, administrators can provision infrastructure resources in minutes instead of days, creating a more dynamic environment and more efficient use of server capacity.

    Each service profile consists of a server software definition and the server's LAN and SAN connectivity requirements. When a service profile is deployed to a server, Cisco UCS Manager automatically configures the server, adapters, fabric extenders, and fabric interconnects to match the configuration specified in the profile. The automatic configuration of servers, network interface cards (NICs), host bus adapters (HBAs), and LAN and SAN switches lowers the risk of human error, improves consistency, and decreases server deployment times.


Service profiles benefit both virtualized and non-virtualized environments in the Cisco Secure Enclave deployment. The profiles increase the mobility of non-virtualized servers, such as when moving workloads from server to server or taking a server offline for service or upgrade. Profiles can also be used in conjunction with virtualization clusters to bring new resources online easily, complementing existing virtual machine mobility. The profiles provide a standard template that can be readily deployed and secured.

    Virtual Server Model

Standardizing the host topology through Cisco UCS service profiles improves IT efficiency. Figure 10 shows the uniform deployment of VMware ESXi within the enclave framework.

    The main features include:

    The VMware ESXi host resides in a Cisco converged infrastructure.

    The VMware ESXi host is part of a larger VMware vSphere High Availability (HA) and Distributed Resource Scheduler (DRS) cluster

    Cisco virtual interface cards (VICs) offer multiple virtual PCI Express (PCIe) adapters for the VMware ESXi host for further traffic isolation and specialization.

    Six Ethernet-based virtual network interface cards (vNICs) with specific roles associated with the enclave system, enclave data, and core services traffic are created:

    vmnic0 and vmnic1 for the Cisco Nexus 1000V system uplink support management, VMware vMotion, and virtual service control traffic.

    vmnic2 and vmnic3 support data traffic originating from the enclaves.

    vmnic4 and vmnic5 carry core services traffic.

    Private VLANs isolate traffic to the virtual machines within an enclave, providing core services such as Domain Name System (DNS), Microsoft Active Directory, Domain Host Configuration Protocol (DHCP), and Microsoft Windows updates.

    Two virtual host bus adapters (vHBAs) for multihoming to available block-based storage.

VMkernel ports are created to support the following traffic types:

    vmknic0 supports VMware ESXi host management traffic.

    vmknic1 supports VMware vMotion traffic.

vmknic2 and vmknic3 provide Virtual Extensible LAN (VXLAN) tunnel endpoints (VTEPs), supporting traffic path load balancing through the Cisco UCS fabric.

Additional Network File System (NFS) and Internet Small Computer System Interface (iSCSI) VMkernel NICs may be assigned to individual enclaves as needed to support application and segmentation requirements. These vmknics use the PortChannel dedicated to enclave data.

Note A maximum of 256 VMkernel NICs are available per VMware ESXi host.
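As a sketch of how the VMkernel interfaces above might be created from the ESXi shell, the commands below use an assumed port-group name and addressing for illustration only:

```text
# List existing VMkernel interfaces
esxcli network ip interface list

# Add a vMotion VMkernel NIC on an assumed port group and assign a static address
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=vMotion-PG
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=192.168.99.21 --netmask=255.255.255.0 --type=static
```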

    Cisco Nexus 1000V is deployed on the VMware ESXi host with the following elements:

    PortChannels created for high availability and load balancing

    Segmentation of traffic through dedicated vNICs, VLANs, and VXLANs


    Figure 10 Uniform ESXi Host Topology

    Bare Metal Server Model

The enclave architecture is not restricted to virtualized server platforms. Bare-metal servers persist in many organizations to address various performance and compliance requirements. To address bare-metal operating systems within an enclave (Figure 11), the following features were enabled:

    Cisco UCS fabric failover to provide fabric-based high availability

This feature removes the need for host-based link aggregation or bonding.

    Cisco VICs to provide multiple virtual PCIe adapters to the host for further traffic isolation and specialization

    Ethernet-based vNICs with specific roles associated with the enclave system, enclave data, and core services traffic are created:

vnic-a and vnic-b support data traffic originating from the host. Two vNICs were defined to allow host-based bonding; only one vNIC is required.

    vcore supports core services traffic.

    Private VLANs isolate traffic to the virtual machines within an enclave, providing core services such as DNS, Microsoft Active Directory, DHCP, and Microsoft Windows Updates.

    Two virtual HBAs provide multihoming to available block-based storage.

    Dedicated VLANs per enclave for bare-metal server connections


    Figure 11 Bare Metal Server Model

Network Design

The network fabric knits the previously defined storage and compute domains, with the addition of network services, into a cohesive system. The combination creates an efficient, consistent, and secure application platform: an enclave. The enclave is built using the Cisco Nexus switching platforms already included in the FlexPod Data Center. This section describes two enclave models, their components, and their capabilities.

    Figure 12 depicts an enclave using two VLANs, with one or more VXLANs used at the virtualization layer. The VXLAN solution provides logical isolation within the hypervisor and removes the scale limitations associated with VLANs. The enclave is constructed as follows:

    Two VLANs are consumed on the physical switch for the entire enclave.

    The Cisco Nexus Series Switch provides the policy enforcement point and default gateway (SVI2001).

    Cisco ASA provides the security group firewall for traffic control enforcement.

    Cisco ASA provides virtual context bridging for two VLANs (VLANs 2001 to 3001 in the figure).

    VXLAN is supported across the infrastructure for virtual machine traffic.

    Consistent security policy is provided through universal security group tags (SGTs):

    The import of the Cisco ISE protected access credential (PAC) file establishes a secure communication channel between Cisco ISE and the device.

    Cisco ISE provides SGTs to Cisco ASA, and Cisco ASA defines security group access control lists (SGACLs).

    Cisco ISE provides SGTs and downloadable SGACLs to the Cisco Nexus switch.

    Cisco ISE provides authentication and authorization across the infrastructure.

    An SGT is assigned on the Cisco Nexus 1000V port profile.

    The Cisco Nexus 1000V propagates IP address-to-SGT mapping across the fabric through the SGT Exchange Protocol (SXP) for SGTs assigned to the enclave.

    21FlexPod Datacenter with Cisco Secure Enclaves

  • FlexPod Data Center with Cisco Secure Enclaves Architecture and Design

    The Cisco VSG for each enclave provides Layer 2 firewall functions.

    Load-balancing services are optional but readily integrated into the model.

Dedicated vmknics are available to meet enclave-specific NFS and iSCSI access requirements.

    Figure 12 Enclave Model: Transparent VLAN with VXLAN (Cisco ASA Transparent Mode)
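A minimal configuration sketch of this two-VLAN construction, assuming the VLAN IDs shown in the figure with hypothetical interfaces and addressing; the Cisco ASA fragment runs inside a transparent-mode virtual context bridging the outside and inside VLANs:

```text
! Nexus switch: SVI default gateway on the outside VLAN (addressing assumed)
vlan 2001
vlan 3001
interface Vlan2001
  ip address 10.2.1.1/24
  no shutdown

! Cisco ASA transparent context: bridge VLAN 2001 to VLAN 3001 (interface assumed)
firewall transparent
interface TenGigabitEthernet0/8.2001
  vlan 2001
  nameif outside
  bridge-group 1
interface TenGigabitEthernet0/8.3001
  vlan 3001
  nameif inside
  bridge-group 1
interface BVI1
  ip address 10.2.1.2 255.255.255.0
```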

    Figure 13 illustrates the logical structure of another enclave on the same shared infrastructure employing the Cisco ASA routed virtual context as the default gateway for the web server. The construction of this structure is identical to the previously documented enclave except for the firewall mode of operation.

    Figure 13 Enclave Model: Routed Firewall with VXLAN (Cisco ASA Routed Mode)


    Security Services

Firewall

Firewalls are the primary control point for access between two distinct network segments, commonly referred to as inside and outside, or public and private. The Cisco Secure Enclave Architecture uses two categories of firewalls, zone and edge, for access control into, between, and within the enclave. The enclave model promotes security "proximity," meaning that where possible, traffic patterns within an enclave should remain contiguous to the compute. The use of multiple policy enforcement points promotes optimized paths.

Cisco Virtual Security Gateway

The Cisco Virtual Security Gateway (VSG) protects traffic within the enclave, enforcing security policy at the VM level based on VM or network attributes. Typically this traffic is considered "east-west" in nature, but in reality any traffic into a VM is subject to the VSG security policy. The enclave model calls for a single VSG instance per enclave, allowing the security operations team to develop granular security rules based on the application and associated business requirements.

The Cisco Nexus 1000v Virtual Ethernet Module (VEM) redirects the initial packet destined to a VM to the VSG, where policy evaluation occurs. The redirection of traffic occurs using vPath when the virtual service is defined on the port profile of the VM. The VEM encapsulates the packet and forwards it to the VSG assigned to the enclave. The Cisco VSG processes the packet and forwards the result to the vPath on the VEM, where the policy decision is cached and enforced for subsequent packets. The vPath maintains the cache until the flow is reset (RST), finished (FIN), or times out.

Note The Cisco Virtual Security Gateway may be deployed adjacent to the Cisco Nexus 1000v VEM or across a number of Layer 3 hops.
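On the Cisco Nexus 1000v, the per-enclave VSG binding described above is configured as a vservice; the node name, IP address, org path, and security profile below are hypothetical placeholders:

```text
! Register the enclave's VSG as a vservice node (Layer 3 adjacency assumed)
vservice node VSG-Enclave1 type vsg
  ip address 10.0.101.50
  adjacency l3
  fail-mode close

! Bind the VSG security profile to the enclave port profile
port-profile type vethernet Enclave1-Web
  org root/Enclave1
  vservice node VSG-Enclave1 profile Enclave1-Web-Policy
```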

Cisco Adaptive Security Appliances

The edge of the enclave is protected using the Cisco Adaptive Security Appliance (ASA). The Cisco ASA can be partitioned into multiple security contexts (virtual firewalls).


Cyber Threat Defense

Cyber threats are attacks focused on seizing sensitive data, money, or ideas. The Cisco Cyber Threat Defense (CTD) Solution provides greater visibility into these threats by identifying suspicious traffic patterns within the network, giving security analysts the contextual information necessary to discern the level of threat these suspicious patterns represent. As shown in Figure 14, the solution is easily integrated and readily enabled on the base FlexPod components. The entire FlexPod Data Center with Cisco Secure Enclaves solution is protected.

    The CTD solution employs three primary components to provide this crucial visibility:

    Network Telemetry through NetFlow

    Threat Context through Cisco Identity Services Engine (ISE)

    Unified Visibility, Analysis and Context through Lancope StealthWatch

    Figure 14 Cisco Secure Enclave Cyber Threat Defense Model

Network Telemetry through NetFlow

NetFlow was developed by Cisco to collect network traffic information and enable monitoring of the network. The data collected by NetFlow provides insight into specific traffic flows in the form of records. The enclave framework uses several methods to reliably collect NetFlow data and provide a full picture of the FlexPod Data Center environment, including:

    NetFlow Generation Appliances (NGA)

    Direct NetFlow Sources

    Cisco ASA 5500 NetFlow Secure Event Logging (NSEL)


The effectiveness of any monitoring system is dependent on the completeness of the data it captures. With that in mind, the enclave model does not recommend using sampled NetFlow; ideally, the NetFlow records should reflect the FlexPod traffic in its entirety. To that end, the physical Cisco Nexus switches are relieved of NetFlow responsibilities and implement line-rate SPAN. The NGAs are connected to SPAN destination ports on the Cisco Nexus switches and Cisco UCS Fabric Interconnects. The collection points are described in the NetFlow Generation Appliance (NGA) Extension section. The NGA devices are promiscuous, supporting up to 40 Gbps of mirrored traffic to create NetFlow records for export to the Lancope StealthWatch FlowCollectors.

Direct NetFlow sources generate and send flow records directly to the Lancope FlowCollectors. The Cisco Nexus 1000v virtual distributed switch provides this functionality for the virtual access layer of the enclave. It is recommended to enable NetFlow on the Cisco Nexus 1000v interfaces. In larger environments where the limits of the Cisco Nexus 1000v NetFlow resources are reached, NetFlow should be enabled selectively on the VM interfaces that are significant data sources.

Another source of direct flow data is the Cisco ASA 5500, which generates NSEL records. These records differ from traditional NetFlow but are fully supported by the Lancope StealthWatch system. In fact, the records include the action (permit or deny) taken by the ASA on the flow as well as NAT translation information, adding another layer of depth to the telemetry of the CTD system.
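The direct NetFlow source described above can be sketched on the Cisco Nexus 1000v; the exporter address, UDP port, and names are assumptions, and the monitor is unsampled per the guidance in this section:

```text
! Define an exporter pointing at an assumed FlowCollector address
flow exporter StealthWatch-FC
  destination 172.26.164.50
  transport udp 2055
  version 9

! Unsampled monitor using the default NetFlow record
flow monitor Enclave-Flows
  record netflow-original
  exporter StealthWatch-FC

! Apply per direction on the enclave port profile
port-profile type vethernet Enclave1-Web
  ip flow monitor Enclave-Flows input
  ip flow monitor Enclave-Flows output
```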

Threat Context through Cisco Identity Services Engine (ISE)

In order to provide context, the Lancope StealthWatch system employs the services of the Cisco Identity Services Engine. ISE can provide device and user information, offering more information for the security operations team to use during threat analysis and potential response. In addition to the device profile and user identity, ISE can provide time, location, and network data to create a contextual identity of who and what is on the network.

Unified Visibility, Analysis, and Context through Lancope StealthWatch

The Lancope StealthWatch system collects, organizes, and analyzes all of the incoming data points to provide a cohesive view into the inner workings of the enclave. The StealthWatch Management Console (SMC) is the central point of control, supporting millions of flows. The primary SMC dashboards offer insight into network reconnaissance, malware propagation, command-and-control traffic, data exfiltration, and internal host reputation. The combination of Cisco and Lancope technologies offers protection across the entire FlexPod infrastructure.

Management Design

The communication between the management domain, the hardware infrastructure, and the enclaves is established through traditional paths as well as through the use of private VLANs on the Cisco Nexus 1000V and Cisco UCS fabric interconnects. The use of dedicated out-of-band management VLANs for the hardware infrastructure, including Cisco Nexus switching and the Cisco UCS fabric, is a recommended practice. The enclave model suggests the use of a single isolated private VLAN that is maintained between the bare-metal and virtual environments. This private isolated VLAN allows all virtual machines and bare-metal servers to converse with the services in the management domain, which is a promiscuous region. The private VLAN feature enforces separation between servers within a single enclave and between enclaves.

Figure 15 shows the logical construction of this private VLAN environment, which supports directory, DNS, Microsoft Windows Server Update Services (WSUS), and other common required services for an organization.


    Figure 15 Private VLANs Providing Secure Access to Core Services
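The isolated private VLAN arrangement in Figure 15 can be sketched on the Cisco Nexus 1000v, using the primary/isolated VLAN IDs from the validation sample with hypothetical port-profile names:

```text
! Primary and isolated VLANs for core services access
vlan 3171
  private-vlan primary
  private-vlan association 3172
vlan 3172
  private-vlan isolated

! Isolated host port profile for enclave VMs reaching core services
port-profile type vethernet Core-Services-Host
  switchport mode private-vlan host
  switchport private-vlan host-association 3171 3172
  no shutdown
  state enabled
```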

Figure 16 shows the virtual machine connection points to the management domain and the data domain. As illustrated, the traffic patterns are completely segmented through the use of traditional VLANs, VXLANs, and isolated private VLANs. The figure also shows the use of dedicated PCIe devices and logical PortChannels created on the Cisco Nexus 1000V to provide load balancing, high availability, and additional traffic separation.

    Figure 16 Enclave Virtual Machine Connections

    Management Services

    The FlexPod Data Center with Cisco Secure Enclaves employs numerous domain level managers to provision, organize and coordinate the operation of the enclaves on the shared infrastructure. The domain level managers employed during the validation are listed in Table 2 and Table 3. Table 2 describes the role of the management product while Table 3 indicates the positioning of that product within the architecture.


    Table 2 FlexPod Data Center with Cisco Secure Enclaves Validated Management Platforms

Product and Role

Cisco Unified Computing System Manager (UCSM)

    Provides administrators a single interface for performing server provisioning, device discovery, inventory, configuration, diagnostics, monitoring, fault detection, auditing, and statistics collection.

    Microsoft Active Directory, DNS, DHCP, WSUS, etc.

Microsoft directory services provide centralized authentication and authorization for users and computers.

DNS services are centralized for TCP/IP name resolution.

DHCP provides automated IP address assignment that is coordinated with the DNS records.

Windows Server Update Services, defined and applied through AD Group Policy, keep the Windows operating systems current.

    VMware vSphere vCenter Provides centralized management of the vSphere ESXi hosts, virtual machines and enablement of VMware features such as vMotion and DRS cluster services.

    Cisco Security Manager Provides scalable, centralized management that allows administrators to efficiently manage a wide range of Cisco security devices, gain visibility across the network deployment, and share information with other essential network services, such as compliance systems and advanced security analysis systems, with a high degree of security.

    Lancope StealthWatch System Ingests and processes NetFlow records providing unique insight into network transactions, allowing for greater understanding of the network and fine grained analysis of security incidents under its watch.

    Cisco Identity Services Engine Provides user and device identity and context information to create policies that govern authorized network access. ISE is the policy control point of the Cisco TrustSec deployment allowing for centralized object based security.

    Cisco Prime Network Services Controller Provides centralized device and security policy management of the Cisco Virtual Security (VSG) and other virtual services.

    NetApp OnCommand System Manager Manages individual or clustered storage systems through a browser-based interface

    NetApp OnCommand Unified Manager Provides a single dashboard to view the health of your NetApp storage availability, capacity, and data protection relationships. Unified Manager offers risk identification and proactive notifications and recommendations.


NetApp Virtual Storage Console (VSC)

Provides integrated, comprehensive, end-to-end virtual storage management for the VMware vSphere infrastructure, including discovery, health monitoring, capacity management, provisioning, cloning, backup, restore, and disaster recovery.

NetApp NFS Plug-in for VMware vStorage APIs for Array Integration (VAAI)

VAAI is a set of APIs and SCSI commands allowing VMware ESXi hosts to offload VM operations such as cloning and initialization to the FAS controllers.

NetApp OnCommand Balance

Provides guidance to optimize the performance and capacity of the virtual and physical data center resources, including NetApp storage, physical servers, and VMware virtual machines.

Cisco Nexus 1000v Virtual Supervisor Module for VMware vSphere

Provides a comprehensive and extensible architectural platform for virtual machine (VM) and cloud networking.

Cisco Virtual Security Gateway

Delivers security, compliance, and trusted access for virtual data center and cloud computing environments.

Cisco Prime Network Analysis Module (NAM)

Delivers application visibility and network analytics to the physical and virtual network.

Table 3 FlexPod Data Center with Cisco Secure Enclaves Validated Management Platforms

Product and Position

Microsoft Active Directory, DNS, DHCP, WSUS, etc.: VMware vSphere Management Cluster
VMware vSphere vCenter: VMware vSphere Management Cluster
Cisco Security Manager: VMware vSphere Management Cluster
Lancope StealthWatch System: VMware vSphere Management Cluster
Cisco Identity Services Engine: VMware vSphere Management Cluster
Cisco Prime Network Services Controller: VMware vSphere Management Cluster
NetApp OnCommand System Manager: VMware vSphere Management Cluster
NetApp OnCommand Unified Manager: VMware vSphere Management Cluster
NetApp Virtual Storage Console (VSC): VMware vSphere Management Cluster
NetApp NFS Plug-in for VMware vStorage APIs for Array Integration (VAAI): VMware ESXi Host
NetApp OnCommand Balance: VMware vSphere Management Cluster
Cisco Nexus 1000v Virtual Supervisor Module: Nexus 1110-X Platform
Cisco Virtual Security Gateway: Nexus 1110-X Platform
Cisco Prime Network Analysis Module (NAM): Nexus 1110-X Platform

Unified Management with Cisco UCS Director

Cisco UCS Director provides a central user portal for managing the environment and enables the automation of the manual tasks associated with the provisioning and subsequent operation of the enclave. Cisco UCS Director can directly or indirectly manage the individual FlexPod Data Center components and enclave extensions.


    Figure 17 Cisco UCS Director for FlexPod Management

Figure 18 shows the interfaces that Cisco UCS Director employs. Ideally, the northbound APIs of the various management domains are used, but UCS Director may also directly access devices to create the enclave environment. Note that the Cyber Threat Defense components are not directly accessed, as these protections are overlays encompassing the entire infrastructure.

    Figure 18 Cisco UCS Director Secure Enclave Connections

The instantiation of multiple enclaves on the FlexPod Data Center platform through Cisco UCS Director offers operational efficiency and consistency to the organization. Figure 19 illustrates the automation of the infrastructure through a single-pane-of-glass approach.

    Figure 19 Cisco UCS Director Automating Enclave Deployment


Enclave Implementation

The implementation section of this document builds on the baseline FlexPod Data Center deployment guides and assumes this baseline infrastructure, comprising the Cisco UCS, NetApp FAS, and Cisco Nexus configuration, is in place. Refer to the following documents for FlexPod Data Center deployment with the Cisco Nexus 7000 or Cisco Nexus 5000 series switches.

    VMware vSphere 5.1 on FlexPod Deployment Guide for Clustered ONTAP at http://www.cisco.com/en/US/docs/unified_computing/ucs/UCS_CVDs/esxi51_ucsm2_Clusterdeploy.html

    VMware vSphere 5.1 on FlexPod with the Cisco Nexus 7000 Deployment Guide at http://www.cisco.com/en/US/docs/unified_computing/ucs/UCS_CVDs/flexpod_esxi_N7k.html

    The deployment details provide example configurations necessary to achieve enclave functionality. It is assumed that the reader has installed and has some familiarity with the products.

Cisco Nexus Switching

The FlexPod Data Center solution supports multiple Cisco Nexus family switches, including the Cisco Nexus 9000, Cisco Nexus 7000, Cisco Nexus 6000, and Cisco Nexus 5000 series. This section addresses using either the Cisco Nexus 7000 or Cisco Nexus 5000 series switches as the FlexPod Data Center networking platform.

Cisco Nexus 7000 as FlexPod Data Center Switch

The Cisco Nexus 7000 has three Virtual Device Contexts (VDCs): one admin VDC, one storage VDC, and one LAN (Ethernet) VDC. VDCs are abstractions of the physical switch and offer the operational benefits of fault isolation and traffic isolation. The VDCs were built using the deployment guidance of the FlexPod Data Center with Cisco Nexus 7000 document. The majority of the configuration is identical to the base FlexPod implementation; this section discusses the modifications.

    ISE Integration

Two Identity Services Engines are provisioned in a primary/secondary configuration for high availability. Each ISE node assumes the following personas:

    Administration Node

    Policy Service Node

    Monitoring Node

The ISE provides RADIUS services to each of the Cisco Nexus 7000 VDCs, which are configured as network devices in ISE.


    The following AAA commands were used:

    Cisco TrustSec

    Cisco TrustSec provides an access-control solution that builds upon an existing identity-aware infrastructure to ensure data confidentiality between network devices and integrate security access services on one platform. In the Cisco TrustSec solution, enforcement devices utilize a combination of user attributes and end-point attributes to make role-based and identity-based access control decisions.

Nexus 7000-A (Ethernet VDC)

radius-server key 7 "K1kmN0gy"

    radius distribute

    radius-server host 172.26.164.187 key 7 "K1kmN0gy" authentication accounting

    radius-server host 172.26.164.239 key 7 "K1kmN0gy" authentication accounting

    radius commit

    aaa group server radius ISE-Radius-Grp

    server 172.26.164.187

    server 172.26.164.239

    use-vrf management

    source-interface mgmt0

    ip radius source-interface mgmt0

Nexus 7000-B (Ethernet VDC)

radius-server key 7 "K1kmN0gy"

    radius distribute

    radius-server host 172.26.164.187 key 7 "K1kmN0gy" authentication accounting

    radius-server host 172.26.164.239 key 7 "K1kmN0gy" authentication accounting

    radius commit

    aaa group server radius ISE-Radius-Grp

    server 172.26.164.187

    server 172.26.164.239

    use-vrf management

    source-interface mgmt0

    ip radius source-interface mgmt0

Nexus 7000-A (Ethernet VDC)

aaa authentication login default group ISE-Radius-Grp

    aaa authentication dot1x default group ISE-Radius-Grp

    aaa accounting dot1x default group ISE-Radius-Grp

    aaa authorization cts default group ISE-Radius-Grp

    aaa accounting default group ISE-Radius-Grp

    no aaa user default-role

Nexus 7000-B (Ethernet VDC)

aaa authentication login default group ISE-Radius-Grp

    aaa authentication dot1x default group ISE-Radius-Grp

    aaa accounting dot1x default group ISE-Radius-Grp

    aaa authorization cts default group ISE-Radius-Grp

    aaa accounting default group ISE-Radius-Grp

    no aaa user default-role


    In this release, the ASA integrates with Cisco TrustSec to provide security group based policy enforcement. Access policies within the Cisco TrustSec domain are topology-independent, based on the roles of source and destination devices rather than on network IP addresses.

    The ASA can utilize the Cisco TrustSec solution for other types of security group based policies, such as application inspection; for example, you can configure a class map containing an access policy based on a security group.

The Cisco TrustSec environment is enabled on the Nexus 7000. The Cisco Nexus 7000 aggregates SGT Exchange Protocol (SXP) information and sends it to any listener. In the enclave design, the Cisco Nexus 1000v is a speaker and the Cisco ASA virtual contexts are listener devices.

    Figure 20 Cisco TrustSec Implementation as Validated


Note The SXP information is common across ASA virtual contexts. The SGT mappings are global and should not overlap between contexts.

    Private VLANs

The use of private VLANs allows for the complete isolation of control and management traffic within an enclave. The Cisco Nexus 7000 supports private VLANs; the following structure was used during validation. In this sample, VLAN 3171 is the primary VLAN and VLAN 3172 is an isolated VLAN carried across the infrastructure.

Nexus 7000-A (Ethernet VDC)

! Enable Cisco TrustSec on the Nexus 7000

    feature cts

    ! Name and password shared for ISE device registration

    cts device-id k02-fp-sw-a password 7 K1kmN0gy

    cts role-based counters enable

    !Enable SXP

    cts sxp enable

    ! Default SXP password used for all SXP communications

    cts sxp default password 7 K1kmN0gy

    ! SXP connection to an ASA virtual context N7k in speaker role

    cts sxp connection peer 10.0.101.100 source 172.26.164.218 password default mode listener

    ! SXP connection to the Nexus 1000v N7k in listener mode

    cts sxp connection peer 172.26.164.18 source 172.26.164.218 password default mode speaker

Nexus 7000-B (Ethernet VDC)

! Enable Cisco TrustSec on the Nexus 7000

    feature cts

    ! Name and password shared for ISE device registration

    cts device-id k02-fp-sw-b password 7 K1kmN0gy

    cts role-based counters enable

    !Enable SXP

    cts sxp enable

    !Default SXP password used for all SXP communications

    cts sxp default password 7 K1kmN0gy

    ! SXP connection to an ASA virtual context; N7k in speaker role

    cts sxp connection peer 10.0.101.100 source 172.26.164.217 password default mode listener

    ! SXP connection to the Nexus 1000v; N7k in listener mode

    cts sxp connection peer 172.26.164.18 source 172.26.164.217 password default mode speaker

    Nexus 7000-A (Ethernet VDC) Nexus 7000-B (Ethernet VDC)

    vlan 3171
      name core-services-primary
      private-vlan primary
      private-vlan association 3172
    vlan 3172
      name core-services-isolated
      private-vlan isolated

    vlan 3171
      name core-services-primary
      private-vlan primary
      private-vlan association 3172
    vlan 3172
      name core-services-isolated
      private-vlan isolated


    Port Profiles

    A port profile is a mechanism for simplifying the configuration of interfaces. A single port profile can be assigned to multiple interfaces to give them all the same configuration. Changes to a port profile are propagated to the configuration of any interface that is assigned to it.

    In the validated architecture, three port profiles were created supporting the Cisco UCS, the NetApp FAS controllers, and the Cisco Nexus 1110 Cloud Services Platform. The following details the port profile configurations, which are applied to the logical and physical interfaces on the Cisco Nexus 7000.


    Quality of Service (QoS)

    The Enclave design in the Nexus 7000 uses multiple VDCs, one of which is dedicated to supporting block-based storage through FCoE. As such, the system defaults may be adjusted and the environment optimized to address the complete separation of FCoE from other Ethernet traffic through the Nexus 7000 VDCs. Cisco Modular QoS CLI (MQC) provides this functionality, allowing administrators to:

    Nexus 7000-A (Ethernet VDC) Nexus 7000-B (Ethernet VDC)

    port-profile type port-channel UCS-FI
      switchport
      switchport mode trunk
      switchport trunk native vlan 2
      spanning-tree port type edge trunk
      mtu 9216
      switchport trunk allowed vlan 2,98-99,201-219,666,2001-2019,3001-3019
      switchport trunk allowed vlan add 3170-3173,3175-3179,3250-3251,3253-3255
      description
      state enabled
    port-profile type ethernet Cloud-Services-Platforms
      switchport
      switchport mode trunk
      spanning-tree port type edge trunk
      switchport trunk allowed vlan 98-99,3175-3176,3250
      description
      state enabled
    port-profile type port-channel FAS-Node
      switchport
      switchport mode trunk
      switchport trunk native vlan 2
      spanning-tree port type edge trunk
      mtu 9216
      switchport trunk allowed vlan 201-219,3170
      description
      state enabled

    interface port-channel11
      inherit port-profile FAS-Node
    interface port-channel12
      inherit port-profile FAS-Node
    interface port-channel13
      inherit port-profile UCS-FI
    interface port-channel14
      inherit port-profile UCS-FI
    interface Ethernet4/17
      inherit port-profile Cloud-Services-Platforms
    interface Ethernet4/19
      inherit port-profile Cloud-Services-Platforms

    port-profile type port-channel UCS-FI
      switchport
      switchport mode trunk
      switchport trunk native vlan 2
      spanning-tree port type edge trunk
      mtu 9216
      switchport trunk allowed vlan 2,98-99,201-219,666,2001-2019,3001-3019
      switchport trunk allowed vlan add 3170-3173,3175-3179,3250-3251,3253-3255
      description
      state enabled
    port-profile type ethernet Cloud-Services-Platforms
      switchport
      switchport mode trunk
      spanning-tree port type edge trunk
      switchport trunk allowed vlan 98-99,3175-3176,3250
      description
      state enabled
    port-profile type port-channel FAS-Node
      switchport
      switchport mode trunk
      switchport trunk native vlan 2
      spanning-tree port type edge trunk
      mtu 9216
      switchport trunk allowed vlan 201-219,3170
      description
      state enabled

    interface port-channel11
      inherit port-profile FAS-Node
    interface port-channel12
      inherit port-profile FAS-Node
    interface port-channel13
      inherit port-profile UCS-FI
    interface port-channel14
      inherit port-profile UCS-FI
    interface Ethernet4/17
      inherit port-profile Cloud-Services-Platforms
    interface Ethernet4/19
      inherit port-profile Cloud-Services-Platforms


  • Create traffic classes by classifying the incoming and outgoing packets that match criteria such as IP address or QoS fields.

  • Create policies by specifying actions to take on the traffic classes, such as limiting, marking, or dropping packets.

  • Apply policies to a port, port channel, VLAN, or sub-interface.

    Queues (Optional Modifications)

    Queues are one method to manage network congestion. Ingress and egress queue selection is based on CoS values. The default network-qos queue structure nq-7e-4Q1T-HQoS is shown below for a system with F2 line cards. The F2 line card supports four queues, each supporting specific traffic classes assigned by CoS values.

    Note F2 series line cards were used for validation.

    The Enclave does not require modification of the QoS environment, but the following is provided as an example of optimizing FlexPod resources. The following command copies the default queuing policy of the system, inherited from the admin VDC, to the local Ethernet VDC.

    The new local copy of the ingress queuing policy structure (as shown above) is redefined to address Ethernet traffic. The "no-drop" or FCoE traffic is given the minimal amount of resources, as this traffic will not traverse the Ethernet VDC but rather the VDC dedicated to storage traffic. Essentially, class of service (CoS) 3 no-drop traffic is not defined or expected within this domain.

    In the following example, the c-4q-7e-drop-in class is given 99% of the available resources.

    Nexus 7000-A (Ethernet VDC) Nexus 7000-B (Ethernet VDC)

    qos copy policy-map type queuing default-4q-7e-in-policy prefix FP-

    qos copy policy-map type queuing default-4q-7e-in-policy prefix FP-

    Nexus 7000-A (Ethernet VDC) Nexus 7000-B (Ethernet VDC)

    policy-map type queuing FP-4q-7e-in
      class type queuing c-4q-7e-drop-in
        service-policy type queuing FP-4q-7e-drop-in
        queue-limit percent 99
      class type queuing c-4q-7e-ndrop-in
        service-policy type queuing FP-4q-7e-ndrop-in
        queue-limit percent 1

    policy-map type queuing FP-4q-7e-in
      class type queuing c-4q-7e-drop-in
        service-policy type queuing FP-4q-7e-drop-in
        queue-limit percent 99
      class type queuing c-4q-7e-ndrop-in
        service-policy type queuing FP-4q-7e-ndrop-in
        queue-limit percent 1


    The queuing policy maps are then adjusted to reflect the new percentage totals. For example, the 4q4t-7e-in-q1 class receives 50% of the queue limit within the FP-4q-7e-drop-in class, but that is really 50% of the 99% queue limit available in total, meaning 4q4t-7e-in-q1 receives 49.5% of the total available queue.

    Note Effective queue limit % = assigned queue-limit % from parent class * local queue limit %

    The 4q4t-7e-in-q4 queue under the FP-4q-7e-ndrop-in class receives 100% of the 1% effectively assigned to it. Again, the lab implementation did not expect any CoS 3 no-drop traffic in the Ethernet VDC.
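    The arithmetic above can be verified with a short calculation. This is an informal sketch, not part of the validated configuration; the parent and local percentages mirror the FP-4q-7e-in hierarchy described above:

    ```python
    # Effective queue limit: a child class receives its local queue-limit
    # percentage of whatever its parent class was assigned at the top level.
    def effective_percent(parent_pct: float, local_pct: float) -> float:
        """Effective % = parent queue-limit % x local queue-limit % / 100."""
        return parent_pct * local_pct / 100

    # (parent %, local %) per queuing class, per the FP-4q-7e-in hierarchy
    queues = {
        "4q4t-7e-in-q1 (CoS 5-7)":        (99, 50),
        "4q4t-7e-in-q-default (CoS 0-1)": (99, 25),
        "4q4t-7e-in-q3 (CoS 2,4)":        (99, 25),
        "4q4t-7e-in-q4 (no drop, CoS 3)": (1, 100),
    }

    for name, (parent, local) in queues.items():
        print(f"{name}: {effective_percent(parent, local)}%")
    # q1 effectively receives 49.5%; q-default and q3 receive 24.75%; q4 receives 1.0%
    ```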


    The bandwidth percentages should total 100% across the class queues. The no-drop queue was given the least amount of resources, 1%. Note that zero resources is not an option for any queue.

    Table 4 Effective Queuing Configuration Example

    The queuing policy can be applied to one or more interfaces. To simplify the deployment, the service policy is applied to the relevant port profiles, namely the FAS and Cisco UCS ports.

    Note The egress queue buffer allocations are non-configurable for the F2 line cards used for validation.

    Nexus 7000-A (Ethernet VDC) Nexus 7000-B (Ethernet VDC)

    policy-map type queuing FP-4q-7e-drop-in
      class type queuing 4q4t-7e-in-q1
        queue-limit percent 50
        bandwidth percent 50
      class type queuing 4q4t-7e-in-q-default
        queue-limit percent 25
        bandwidth percent 24
      class type queuing 4q4t-7e-in-q3
        queue-limit percent 25
        bandwidth percent 25
    policy-map type queuing FP-4q-7e-ndrop-in
      class type queuing 4q4t-7e-in-q4
        queue-limit percent 100
        bandwidth percent 1

    policy-map type queuing FP-4q-7e-drop-in
      class type queuing 4q4t-7e-in-q1
        queue-limit percent 50
        bandwidth percent 50
      class type queuing 4q4t-7e-in-q-default
        queue-limit percent 25
        bandwidth percent 24
      class type queuing 4q4t-7e-in-q3
        queue-limit percent 25
        bandwidth percent 25
    policy-map type queuing FP-4q-7e-ndrop-in
      class type queuing 4q4t-7e-in-q4
        queue-limit percent 100
        bandwidth percent 1

    Queuing Class                      Queue-limit %   Effective Queue-limit %   Bandwidth %   Effective Bandwidth %
    4q4t-7e-in-q1 (CoS 5-7)            50              49.5                      50            50
    4q4t-7e-in-q-default (CoS 0-1)     25              24.75                     24            24
    4q4t-7e-in-q3 (CoS 2,4)            25              24.75                     25            25
    4q4t-7e-in-q4 (no drop) (CoS 3)    100             1                         1             1

    Nexus 7000-A (Ethernet VDC) Nexus 7000-B (Ethernet VDC)

    port-profile type port-channel UCS-FI
      service-policy type queuing input FP-4q-7e-in
    port-profile type port-channel FAS-Node
      service-policy type queuing input FP-4q-7e-in

    port-profile type port-channel UCS-FI
      service-policy type queuing input FP-4q-7e-in
    port-profile type port-channel FAS-Node
      service-policy type queuing input FP-4q-7e-in


    Classification

    The NAS traffic originating from the NetApp FAS controllers will be classified and marked to receive the appropriate levels of service across the Enclave architecture. The FP-qos-fas policy map was created to mark all packets with a CoS of 5 (Gold). Marking the traffic from the FAS is a recommended practice. CoS 5 aligns with the policies created on the Cisco UCS and Cisco Nexus 1000v platforms.

    The ability to assign this policy at the VLAN level simplifies the classification of packets and aligns well with the VLAN-to-NetApp Storage Virtual Machine (SVM) relationship, which requires dedicated VLANs for processing on the controller. After this configuration, a CoS of 5 is effectively marked on all frames within the VLANs listed. The VLANs in this example support Enclave NFS traffic.

    Monitoring

    The ability to monitor network traffic within the Nexus platform is key to ensure the efficient operation of the solution. The design calls for the use of Switched Port Analyzer (SPAN) as well as NetFlow services to provide visibility.

    SPAN

    Switched Port Analyzer (SPAN) sends a copy of traffic to a destination port. The network analyzer attached to the destination port analyzes the traffic that passes through the source port. The Cisco Nexus 7000 supports all SPAN sessions in hardware; the supervisor CPU is not involved.

    The source port, also called the monitored port, can be a single port, multiple ports, or a VLAN. You can monitor all the packets on a source port in the receive (rx), transmit (tx), or bidirectional (both) direction. A replication of the packets is sent to the destination port for analysis.

    The destination port is a port that connects to a probe or security device that can receive and analyze the copied packets from one or more source ports. In this design, the SPAN destination ports are the Cisco NetFlow Generation Appliances (NGA). It is important to note that the capacity of the destination SPAN interfaces should equal or exceed the capacity of the source interfaces to avoid potential SPAN drops obscuring network visibility.

    Figure 21 describes the connectivity between the Cisco Nexus 7000 switches and the Cisco NGA devices. Notice that a static port channel is configured on the Cisco Nexus 7000 to the NGAs. The NGAs are promiscuous devices and do not participate in port aggregation protocols such as PAgP or LACP on their data interfaces. Each of the links is 10 Gigabit Ethernet. The port channel may contain up to 16 active interfaces in the bundle, allowing for greater capacity. Because the NGA devices are independent, adding more promiscuous endpoint devices to the port channel is not an issue. SPAN traffic will be redirected and load balanced across the static link members of the port channel.

    Nexus 7000-A (Ethernet VDC) Nexus 7000-B (Ethernet VDC)

    policy-map type qos FP-qos-fas
      class class-default
        set cos 5

    policy-map type qos FP-qos-fas
      class class-default
        set cos 5

    Nexus 7000-A (Storage VDC) Nexus 7000-B (Storage VDC)

    vlan configuration 201-219
      service-policy type qos input FP-qos-fas

    vlan configuration 201-219
      service-policy type qos input FP-qos-fas


    Figure 21 Cisco Nexus 7000 to Cisco NGA Connectivity

    Note SPAN may use the same replication engine as multicast on the module, and there is a physical limit to the amount of replication each replication engine can perform. Nexus 7000 modules have multiple replication engines per module, and under normal circumstances multicast is unaffected by a SPAN session. However, it is possible to impact multicast replication if a large number of high-rate multicast streams are inbound to the module and the port you monitor uses the same replication engine.

    NetFlow

    NetFlow technology efficiently provides accounting for various applications such as network traffic accounting, usage-based network billing, network planning, denial-of-service (DoS) monitoring, network monitoring, outbound marketing, and data mining for both service provider and enterprise organizations. The NetFlow architecture consists of flow records, flow exporters, and flow monitors. NetFlow consumes hardware resources such as TCAM and CPU in the switching environment. It is also not a recommended practice to use NetFlow sampling, as this provides an incomplete view of network traffic.

    Nexus 7000-A (Ethernet VDC) Nexus 7000-B (Ethernet VDC)

    interface port-channel8
      description
      switchport mode trunk
      switchport monitor
    monitor session 1
      description SPAN ASA Data Traffic from Po20
      source interface port-channel20 rx
      destination interface port-channel8
      no shut

    interface port-channel8
      description
      switchport mode trunk
      switchport monitor
    monitor session 1
      description SPAN ASA Data Traffic from Po20
      source interface port-channel20 rx
      destination interface port-channel8
      no shut


    To avoid NetFlow resource utilization in the Nexus switch and potential "blind spots," the NetFlow service is offloaded to dedicated devices, namely the Cisco NetFlow Generation Appliances (NGA). The NGAs consume SPAN traffic from the Nexus 7000 and are promiscuous endpoints of Port Channel 8 described above. See the Cisco NetFlow Generation Appliance section for details on its implementation in the design.

    Cisco Nexus 5000 as FlexPod Data Center Switch

    The switch used in this FlexPod data center architecture is the Nexus 5548UP model. The base switch configuration is based on the FlexPod Data Center with VMware vSphere deployment model. The following configurations describe the significant implementations required to realize the secure enclave architecture.

    ISE Integration

    Two Identity Services Engines are provisioned in a primary/secondary configuration for high availability. Each ISE assumes the following personas:

  • Administration Node

  • Policy Service Node

  • Monitoring Node

    The ISE provides RADIUS services to each of the Nexus 5000 VDCs which are configured as Network Devices. The Cisco Nexus 5000 configuration is identical to the Cisco Nexus 7000 implementation captured in the Cisco Nexus 7000 ISE Integration section.

    Cisco TrustSec

    Cisco TrustSec allows security operations teams to create role-based security policy. The Cisco Nexus 5500 platform supports TrustSec but cannot act as an SXP "listener". This means it cannot aggregate and advertise through SXP the IP-to-SGT mappings learned from the Cisco Nexus 1000v. In light of this, the Nexus 1000v will implement an SXP connection to each ASA virtual context directly to advertise the CTS tag-to-IP information.

    Note The Cisco Nexus 7000 and 5000 support enforcement of Security Group ACLs in the network fabric. This capability was not explored in this design.

    Private VLANs

    The use of private VLANs allows for the complete isolation of control and management traffic within an Enclave. The Cisco Nexus 5548UP supports private VLANs; the following structure was used during validation. In this sample, VLAN 3171 is the primary VLAN and 3172 is an isolated VLAN carried across the infrastructure.


    Port Profiles

    A port profile is a mechanism for simplifying the configuration of interfaces. A port profile can be assigned to multiple interfaces, giving them all the same configuration and providing consistency. Changes to the port profile are propagated automatically to the configuration of any interface assigned to it. Use the port profile guidance provided in the Nexus 7000 Port Profiles section for configuration details.

    Quality of Service (QoS)

    The Nexus 5500 platform inherently trusts the CoS values it receives. The FlexPod Data Center platform makes the same assumption: CoS values are trusted and expected to be properly set prior to egressing the unified computing domain. The NetApp FAS controller traffic will be marked on ingress to the Nexus 5500 platform.

    A system class is uniquely identified by a qos-group value. The Nexus 5500 platform supports six classes, or qos-groups. qos-group 0 is reserved for default drop traffic; the Nexus 5500 by default assigns all traffic to this class with the exception of FCoE, which is reserved for qos-group 1. This essentially leaves groups 2 through 5 for CoS mapping. Each qos-group defines policies and attributes to assign to traffic in that class, such as MTU, CoS value, and bandwidth. The CoS 5 Gold class will be assigned to qos-group 4.
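    As a compact summary (an informal sketch, not device configuration), the CoS-to-qos-group allocation used in this design can be expressed as a lookup table; the FCoE entry assumes the platform's default class-fcoe match on CoS 3:

    ```python
    # CoS-to-qos-group allocation on the Nexus 5500 as described in this design.
    # qos-group 0 is the default drop class; qos-group 1 is reserved for FCoE.
    COS_TO_QOS_GROUP = {
        6: 5,  # Platinum
        5: 4,  # Gold (FAS NAS traffic)
        3: 1,  # FCoE no-drop (assumes the default class-fcoe CoS 3 match)
        2: 3,  # Silver
        1: 2,  # Bronze
    }

    def qos_group_for(cos: int) -> int:
        """Return the qos-group for a CoS value; unmatched traffic maps to group 0."""
        return COS_TO_QOS_GROUP.get(cos, 0)

    print(qos_group_for(5))  # Gold traffic lands in qos-group 4
    ```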

    The NAS traffic originating from the NetApp FAS controllers will be classified and marked to receive the appropriate levels of service across the Enclave architecture. The pm-qos-fas policy map was created to mark all packets with a CoS of 5 (Gold). CoS 5 aligns with the policies created in the remaining QoS enabled infrastructure.

    The Nexus 5000 supports VLAN-based marking. The ability to assign this at the VLAN level simplifies the classification of packets and aligns well with the VLAN-to-NetApp Storage Virtual Machine (SVM) relationship, which requires dedicated VLANs for processing on the FAS controller. The QoS policy is applied to the appropriate VLANs. After this configuration, a CoS of 5 is effectively marked on all frames within the VLANs listed. The VLANs in this example (201-219) support NFS traffic.

    The TCAM tables must be adjusted to support VLAN QoS entries. The limit is user adjustable and should be modified to support the number of CoS 5 (NFS, iSCSI) VLANs required in the environment. The class map cm-qos-fas classifies all IP traffic through the permit "any any" acl-fas ACL as subject to the policy map pm-qos-fas.

    Nexus 5000-A Nexus 5000-B

    feature private-vlan
    vlan 3171
      name core-services-primary
      private-vlan primary
      private-vlan association 3172
    vlan 3172
      name core-services-isolated
      private-vlan isolated

    feature private-vlan
    vlan 3171
      name core-services-primary
      private-vlan primary
      private-vlan association 3172
    vlan 3172
      name core-services-isolated
      private-vlan isolated


    Note Use the show hardware profile tcam feature qos command to display TCAM resource utilization.

    The following configuration speaks to the classifications (type qos) defined on the Nexus switch. A class map defines the CoS value and is subsequently used to assign the CoS to a system class, or qos-group, through the system-applied policy map pm-qos-global.

    Nexus 5000-A Nexus 5000-B

    hardware profile tcam feature interface-qos limit 20
    ip access-list acl-fas
      10 permit ip any any
    class-map type qos match-any cm-qos-fas
      match access-group name acl-fas
    policy-map type qos pm-qos-fas
      class cm-qos-fas
        set qos-group 4
    vlan configuration 201-219
      service-policy type qos input pm-qos-fas

    hardware profile tcam feature interface-qos limit 20
    ip access-list acl-fas
      10 permit ip any any
    class-map type qos match-any cm-qos-fas
      match access-group name acl-fas
    policy-map type qos pm-qos-fas
      class cm-qos-fas
        set qos-group 4
    vlan configuration 201-219
      service-policy type qos input pm-qos-fas


    The queuing and scheduling definitions are defined for ingress and egress traffic on the Nexus platform. The available queues (2-5) are given bandwidth percentages that align with those defined on the Cisco UCS system. The ingress and egress policies are applied at the system level through the service-policy command.
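    As a quick sanity check (an informal sketch; the class names and percentages mirror the validated queuing policy), the bandwidth allocation across all queues must total exactly 100%:

    ```python
    # Bandwidth percentages assigned to each queuing class in pm-que-in-global
    # and pm-que-out-global; the allocations must sum to exactly 100.
    bandwidth_percent = {
        "class-fcoe":         20,
        "cm-que-qos-group-2": 10,  # Bronze
        "cm-que-qos-group-3": 20,  # Silver
        "cm-que-qos-group-4": 30,  # Gold
        "cm-que-qos-group-5": 10,  # Platinum
        "class-default":      10,
    }

    total = sum(bandwidth_percent.values())
    print(total)  # 100
    ```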

    Nexus 5000-A Nexus 5000-B

    class-map type qos match-all cm-qos-gold
      match cos 5
    class-map type qos match-all cm-qos-bronze
      match cos 1
    class-map type qos match-all cm-qos-silver
      match cos 2
    class-map type qos match-all cm-qos-platinum
      match cos 6
    policy-map type qos pm-qos-global
      class cm-qos-platinum
        set qos-group 5
      class cm-qos-gold
        set qos-group 4
      class cm-qos-silver
        set qos-group 3
      class cm-qos-bronze
        set qos-group 2
      class class-fcoe
        set qos-group 1
    system qos
      service-policy type qos input pm-qos-global

    class-map type qos match-all cm-qos-gold
      match cos 5
    class-map type qos match-all cm-qos-bronze
      match cos 1
    class-map type qos match-all cm-qos-silver
      match cos 2
    class-map type qos match-all cm-qos-platinum
      match cos 6
    policy-map type qos pm-qos-global
      class cm-qos-platinum
        set qos-group 5
      class cm-qos-gold
        set qos-group 4
      class cm-qos-silver
        set qos-group 3
      class cm-qos-bronze
        set qos-group 2
      class class-fcoe
        set qos-group 1
    system qos
      service-policy type qos input pm-qos-global


    Nexus 5000-A Nexus 5000-B

    class-map type queuing cm-que-qos-group-2
      match qos-group 2
    class-map type queuing cm-que-qos-group-3
      match qos-group 3
    class-map type queuing cm-que-qos-group-4
      match qos-group 4
    class-map type queuing cm-que-qos-group-5
      match qos-group 5
    policy-map type queuing pm-que-in-global
      class type queuing class-fcoe
        bandwidth percent 20
      class type queuing cm-que-qos-group-2
        bandwidth percent 10
      class type queuing cm-que-qos-group-3
        bandwidth percent 20
      class type queuing cm-que-qos-group-4
        bandwidth percent 30
      class type queuing cm-que-qos-group-5
        bandwidth percent 10
      class type queuing class-default
        bandwidth percent 10
    policy-map type queuing pm-que-out-global
      class type queuing class-fcoe
        bandwidth percent 20
      class type queuing cm-que-qos-group-2
        bandwidth percent 10
      class type queuing cm-que-qos-group-3
        bandwidth percent 20
      class type queuing cm-que-qos-group-4
        bandwidth percent 30
      class type queuing cm-que-qos-group-5
        bandwidth percent 10
      class type queuing class-default
        bandwidth percent 10
    system qos
      service-policy type queuing input pm-que-in-global
      service-policy type queuing output pm-que-out-global