FlexPod Datacenter with Microsoft Exchange 2013, F5 BIG-IP and Cisco Application Centric Infrastructure Design Guide
Last Updated: May 2, 2016




    About Cisco Validated Designs

    The CVD program consists of systems and solutions designed, tested, and documented to facilitate faster,

    more reliable, and more predictable customer deployments. For more information visit:

    http://www.cisco.com/go/designzone.

    ALL DESIGNS, SPECIFICATIONS, STATEMENTS, INFORMATION, AND RECOMMENDATIONS

    (COLLECTIVELY, "DESIGNS") IN THIS MANUAL ARE PRESENTED "AS IS," WITH ALL FAULTS. CISCO AND

    ITS SUPPLIERS DISCLAIM ALL WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE WARRANTY OF

    MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OR ARISING FROM

    A COURSE OF DEALING, USAGE, OR TRADE PRACTICE. IN NO EVENT SHALL CISCO OR ITS SUPPLIERS BE

    LIABLE FOR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, OR INCIDENTAL DAMAGES, INCLUDING,

    WITHOUT LIMITATION, LOST PROFITS OR LOSS OR DAMAGE TO DATA ARISING OUT OF THE USE OR

    INABILITY TO USE THE DESIGNS, EVEN IF CISCO OR ITS SUPPLIERS HAVE BEEN ADVISED OF THE

    POSSIBILITY OF SUCH DAMAGES.

    THE DESIGNS ARE SUBJECT TO CHANGE WITHOUT NOTICE. USERS ARE SOLELY RESPONSIBLE FOR

    THEIR APPLICATION OF THE DESIGNS. THE DESIGNS DO NOT CONSTITUTE THE TECHNICAL OR OTHER

    PROFESSIONAL ADVICE OF CISCO, ITS SUPPLIERS OR PARTNERS. USERS SHOULD CONSULT THEIR

    OWN TECHNICAL ADVISORS BEFORE IMPLEMENTING THE DESIGNS. RESULTS MAY VARY DEPENDING ON

    FACTORS NOT TESTED BY CISCO.

    CCDE, CCENT, Cisco Eos, Cisco Lumin, Cisco Nexus, Cisco StadiumVision, Cisco TelePresence, Cisco

    WebEx, the Cisco logo, DCE, and Welcome to the Human Network are trademarks; Changing the Way We

    Work, Live, Play, and Learn and Cisco Store are service marks; and Access Registrar, Aironet, AsyncOS,

    Bringing the Meeting To You, Catalyst, CCDA, CCDP, CCIE, CCIP, CCNA, CCNP, CCSP, CCVP, Cisco, the

    Cisco Certified Internetwork Expert logo, Cisco IOS, Cisco Press, Cisco Systems, Cisco Systems Capital, the

    Cisco Systems logo, Cisco Unity, Collaboration Without Limitation, EtherFast, EtherSwitch, Event Center, Fast

    Step, Follow Me Browsing, FormShare, GigaDrive, HomeLink, Internet Quotient, IOS, iPhone, iQuick Study,

    IronPort, the IronPort logo, LightStream, Linksys, MediaTone, MeetingPlace, MeetingPlace Chime Sound,

    MGX, Networkers, Networking Academy, Network Registrar, PCNow, PIX, PowerPanels, ProConnect,

    ScriptShare, SenderBase, SMARTnet, Spectrum Expert, StackWise, The Fastest Way to Increase Your

    Internet Quotient, TransPath, WebEx, and the WebEx logo are registered trademarks of Cisco Systems, Inc.

    and/or its affiliates in the United States and certain other countries.

    All other trademarks mentioned in this document or website are the property of their respective owners. The

    use of the word partner does not imply a partnership relationship between Cisco and any other company.

    (0809R)

    © 2016 Cisco Systems, Inc. All rights reserved.



    Table of Contents

    About Cisco Validated Designs ................................................................................................................................................ 2

    Executive Summary ................................................................................................................................................................. 6

    Solution Overview .................................................................................................................................................................... 7

    Introduction ......................................................................................................................................................................... 7

    Audience ............................................................................................................................................................................. 7

    Technology Overview .............................................................................................................................................................. 8

    Cisco Unified Computing System ......................................................................................................................................... 8

    Cisco UCS Blade Chassis .............................................................................................................................................. 10

    Cisco UCS B200 M4 Blade Server ................................................................................................................................. 11

    Cisco UCS Virtual Interface Card 1240 .......................................................................................................................... 11

    Cisco UCS 6248UP Fabric Interconnect ......................................................................................................................... 11

    Cisco Nexus 2232PP 10GE Fabric Extender .................................................................................................................. 12

    Cisco Unified Computing System Manager .................................................................................................................... 12

    Cisco UCS Service Profiles ............................................................................................................................................ 13

    Cisco Nexus 9000 Series Switch ....................................................................................................................................... 16

    Cisco UCS Central ............................................................................................................................................................. 16

    FlexPod ............................................................................................................................................................................. 16

    FlexPod System Overview ............................................................................................................................................. 17

    NetApp FAS and Data ONTAP ....................................................................................................................................... 18

    NetApp Clustered Data ONTAP ..................................................................................................................................... 19

    NetApp Storage Virtual Machines .................................................................................................................................. 19

    VMware vSphere ............................................................................................................................................................... 20

    Firewall and Load Balancer ............................................................................................................................................ 20

    SnapManager for Exchange Server Overview .................................................................................................................... 23

    SnapManager for Exchange Server Architecture ................................................................................................................ 24

    Migrating Microsoft Exchange Data to NetApp Storage ................................................................................................. 25

    High Availability ................................................................................................................................................................. 25

    Microsoft Exchange 2013 Database Availability Group Deployment Scenarios ............................................................... 26

    Exchange 2013 Architecture .............................................................................................................................................. 27

    Client Access Server ..................................................................................................................................................... 27

    Mailbox Server .............................................................................................................................................................. 28

    Client Access Server and Mailbox Server Communication ............................................................................................. 28

    High Availability and Site Resiliency ............................................................................................................................... 29


    Exchange Clients ........................................................................................................................................................... 30

    POP3 and IMAP4 Clients ............................................................................................................................................... 30

    Name Space Planning ........................................................................................................................................................ 30

    Namespace Models ....................................................................................................................................................... 30

    Network Load Balancing .................................................................................................................................................... 31

    Common Name Space and Load Balancing Session Affinity Implementations ................................................................ 32

    Cisco Application Centric Infrastructure ............................................................................................................................. 33

    Cisco ACI Fabric ............................................................................................................................................................ 33

    Solution Design ...................................................................................................................................................................... 35

    FlexPod, Cisco ACI and L4-L7 Services Components ...................................................................................................... 35

    Validated System Hardware Components ...................................................................................................................... 38

    FlexPod Infrastructure Design ............................................................................................................................................ 38

    Hardware and Software Revisions ................................................................................................................................. 38

    FlexPod Infrastructure Physical Build ............................................................................................................................. 39

    Cisco Unified Computing System ................................................................................................................................... 40

    Cisco Nexus 9000 ......................................................................................................................................................... 49

    Application Centric Infrastructure (ACI) Design .................................................................................................................. 52

    ACI Components ........................................................................................................................................................... 52

    End Point Group (EPG) Mapping in a FlexPod Environment ............................................................................................ 54

    Virtual Machine Networking ........................................................................................................................................... 55

    Virtual Machine Networking ........................................................................................................................................... 56

    Onboarding Infrastructure Services ................................................................................................................................ 57

    Onboarding Microsoft Exchange on FlexPod ACI Infrastructure ......................................................................................... 58

    Exchange Logical Topology ........................................................................................................................................... 59

    Microsoft Exchange as Tenant on ACI Infrastructure ...................................................................................................... 59

    Common Services and Storage Management ................................................................................................................ 65

    Connectivity to Existing Infrastructure ............................................................................................................................ 66

    Exchange - ACI Design Recap ....................................................................................................................................... 67

    Exchange Server Sizing ..................................................................................................................................................... 70

    Exchange 2013 Server Requirements Calculator Inputs ................................................................................................. 70

    Exchange 2013 Server Requirements Calculator Output ................................................................................................ 72

    Exchange and Domain Controller Virtual Machines ............................................................................................................ 76

    Namespace and Network Load Balancing ...................................................................................................................... 77

    NetApp Storage Design ..................................................................................................................................................... 77

    Network and Storage Physical Connectivity ................................................................................................................... 77


    Storage Configurations .......................................................................................................................................................... 84

    Aggregate, Volume, and LUN Sizing .................................................................................................................................. 84

    Storage Layout .................................................................................................................................................................. 84

    Exchange 2013 Database and Log LUNs ........................................................................................................................... 85

    Validation ............................................................................................................................................................................... 86

    Validating the Storage Subsystem with JetStress .............................................................................................................. 86

    Validating the Storage Subsystem with Exchange 2013 LoadGen .................................................................................. 87

    Conclusion ............................................................................................................................................................................. 88

    References ........................................................................................................................................................................ 88

    Interoperability Matrixes ................................................................................................................................................ 89

    About the Authors .................................................................................................................................................................. 90

    Acknowledgements ........................................................................................................................................................... 90


    Executive Summary

    Microsoft Exchange 2013 deployed on FlexPod with Cisco ACI and F5 BIG-IP LTM is a predesigned, best practice data center architecture built on the Cisco Unified Computing System (Cisco UCS), the Cisco Nexus 9000 family of switches, the F5 BIG-IP Application Delivery Controller (ADC), and NetApp fabric-attached storage (FAS) or V-Series systems. The key design details and best practices to be followed for deploying this new shared architecture are covered in this design guide.

    This Exchange Server 2013 solution is implemented on top of FlexPod with VMware vSphere 5.5 and Cisco Nexus 9000 Application Centric Infrastructure (ACI). The details of this infrastructure are not covered in this document, but can be found at the following link:

    FlexPod Datacenter with Microsoft Exchange 2013, F5 BIG-IP, and Cisco Application Centric Infrastructure (ACI) Deployment Guide:
    http://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/UCS_CVDs/exchange2013_aci_flexpod_vmware.html


    Solution Overview

    Introduction

    Cisco Validated Designs include systems and solutions that are designed, tested, and documented to

    facilitate and improve customer deployments. These designs incorporate a wide range of technologies and

    products into a portfolio of solutions that have been developed to address the business needs of customers.

    Achieving the vision of a truly agile, application-based data center requires a sufficiently flexible

    infrastructure that can rapidly provision and configure the necessary resources independently of their

    location in the data center.

    This document describes the Cisco solution for deploying Microsoft Exchange on the NetApp FlexPod solution architecture with F5 BIG-IP load balancing, using Cisco Application Centric Infrastructure (ACI). Cisco ACI is a holistic architecture that introduces

    hardware and software innovations built upon the new Cisco Nexus 9000 Series product line. Cisco ACI

    provides a centralized policy-driven application deployment architecture, which is managed through the

    Cisco Application Policy Infrastructure Controller (APIC). Cisco ACI delivers software flexibility with the

    scalability of hardware performance.

    Audience

    The audience of this document includes, but is not limited to, sales engineers, field consultants, professional

    services, IT managers, partner engineers, and customers who want to take advantage of an infrastructure

    that is built to deliver IT efficiency and enable IT innovation.


    Technology Overview

    Cisco Unified Computing System

    The Cisco Unified Computing System is a third-generation data center platform that unites computing,

    networking, storage access, and virtualization resources into a cohesive system designed to reduce TCO

    and increase business agility. The system integrates a low-latency, lossless 10 Gigabit Ethernet (10GbE)

    unified network fabric with enterprise-class, x86-architecture servers. The system is an integrated, scalable,

    multi-chassis platform in which all resources participate in a unified management domain that is controlled

    and managed centrally.

    Figure 1 Cisco Unified Computing System


    Figure 2 Cisco Unified Computing System Components

    Figure 3 Cisco Unified Computing System

    The main components of the Cisco UCS are:

    Compute

    The system is based on an entirely new class of computing system that incorporates blade servers

    based on Intel Xeon E5-2600 Series Processors. Cisco UCS B-Series Blade Servers work with


    virtualized and non-virtualized applications to increase performance, energy efficiency, flexibility and

    productivity.

    Network

    The system is integrated onto a low-latency, lossless, 80-Gbps unified network fabric. This network

    foundation consolidates LANs, SANs, and high-performance computing networks which are separate

    networks today. The unified fabric lowers costs by reducing the number of network adapters, switches,

    and cables, and by decreasing the power and cooling requirements.

    Storage access

    The system provides consolidated access to both storage area network (SAN) and network-attached

    storage (NAS) over the unified fabric. By unifying storage access, Cisco UCS can access storage over

    Ethernet, Fibre Channel, Fibre Channel over Ethernet (FCoE), and iSCSI. This provides customers with options for storage access and investment protection. Additionally, server administrators can

    reassign storage-access policies for system connectivity to storage resources, thereby simplifying

    storage connectivity and management for increased productivity.

    Management

    The system uniquely integrates all system components which enable the entire solution to be managed

    as a single entity by the Cisco UCS Manager. The Cisco UCS Manager has an intuitive graphical user

    interface (GUI), a command-line interface (CLI), and a robust application programming interface (API) to

    manage all system configuration and operations.

    The Cisco UCS is designed to deliver:

    A reduced Total Cost of Ownership (TCO), increased Return on Investment (ROI) and increased

    business agility.

    Increased IT staff productivity through just-in-time provisioning and mobility support.

    A cohesive, integrated system which unifies the technology in the data center. The system is

    managed, serviced and tested as a whole.

    Scalability through a design for hundreds of discrete servers and thousands of virtual machines and

    the capability to scale I/O bandwidth to match demand.

    Industry standards supported by a partner ecosystem of industry leaders.

    Cisco UCS Blade Chassis

    The Cisco UCS 5100 Series Blade Server Chassis is a crucial building block of the Cisco Unified Computing

    System, delivering a scalable and flexible blade server chassis.

    The Cisco UCS 5108 Blade Server Chassis is six rack units (6RU) high and can mount in an industry-

    standard 19-inch rack. A single chassis can house up to eight half-width Cisco UCS B-Series Blade Servers

    and can accommodate both half-width and full-width blade form factors.

    Four single-phase, hot-swappable power supplies are accessible from the front of the chassis. These power supplies are 92 percent efficient and can be configured to support non-redundant, N+1 redundant, and grid-redundant configurations. The rear of the chassis contains eight hot-swappable fans, four power connectors (one per power supply), and two I/O bays for Cisco UCS 2208XP Fabric Extenders.

    A passive mid-plane provides up to 40 Gbps of I/O bandwidth per server slot and up to 80 Gbps of I/O

    bandwidth for two slots. The chassis is capable of supporting future 40 Gigabit Ethernet standards.

    Figure 4 Cisco Blade Server Chassis (Front, Rear and Populated Blades View)

    Cisco UCS B200 M4 Blade Server

    The Cisco UCS B200 M4 Blade Server is a half-width, two-socket blade server. The system uses two Intel

    Xeon E5-2600 v3 Series processors, up to 768 GB of DDR4 memory, two optional hot-swappable small form

    factor (SFF) serial-attached SCSI (SAS) disk drives, and two VIC adapters that provide up to 80 Gbps of I/O

    throughput. The server balances simplicity, performance, and density for production-level virtualization and

    other mainstream data center workloads.

    Figure 5 Cisco UCS B200 M4 Blade Server

    Cisco UCS Virtual Interface Card 1240

    A Cisco innovation, the Cisco UCS VIC 1240 is a four-port 10 Gigabit Ethernet, FCoE-capable modular LAN

    on motherboard (mLOM) designed exclusively for the M3 generation of Cisco UCS B-Series Blade Servers.

    When used in combination with an optional port expander, the Cisco UCS VIC 1240 capabilities can be

    expanded to eight ports of 10 Gigabit Ethernet.

    Cisco UCS 6248UP Fabric Interconnect

    The fabric interconnects provide a single point of connectivity and management for the entire system. Typically deployed as an active-active pair, the fabric interconnects integrate all components into a single, highly available management domain controlled by Cisco UCS Manager. The fabric interconnects manage all I/O efficiently and securely at a single point, resulting in deterministic I/O latency regardless of a server or virtual machine's topological location in the system.

    The Cisco UCS 6200 Series Fabric Interconnects support a unified fabric with low-latency, lossless, cut-through switching that carries IP, storage, and management traffic using a single set of cables. The fabric interconnects feature virtual interfaces that terminate both physical and virtual connections equivalently, establishing a virtualization-aware environment in which blade servers, rack servers, and virtual machines are interconnected using the same mechanisms. The Cisco UCS 6248UP is a 1-RU fabric interconnect that features up to 48 universal ports that can support 10 Gigabit Ethernet, Fibre Channel over Ethernet, or native Fibre Channel connectivity.

    Figure 6 Cisco UCS 6248UP Fabric Interconnect

    Cisco Nexus 2232PP 10GE Fabric Extender

    The Cisco Nexus 2232PP 10G provides 32 10 Gb Ethernet and Fibre Channel Over Ethernet (FCoE) Small

    Form-Factor Pluggable Plus (SFP+) server ports and eight 10 Gb Ethernet and FCoE SFP+ uplink ports in a

    compact 1 rack unit (1RU) form factor.

    When a C-Series Rack-Mount Server is integrated with Cisco UCS Manager, through the Nexus 2232

    platform, the server is managed using the Cisco UCS Manager GUI or Cisco UCS Manager CLI. The Nexus

    2232 provides data and control traffic support for the integrated C-Series server.

    Cisco Unified Computing System Manager

    Cisco UCS Manager provides unified, centralized, embedded management of all Cisco Unified Computing

    System software and hardware components across multiple chassis and thousands of virtual machines.

    Administrators use the software to manage the entire Cisco Unified Computing System as a single logical

    entity through an intuitive GUI, a command-line interface (CLI), or an XML API.

    The Cisco UCS Manager resides on a pair of Cisco UCS 6200 Series Fabric Interconnects using a clustered,

    active-standby configuration for high availability. The software gives administrators a single interface for

    performing server provisioning, device discovery, inventory, configuration, diagnostics, monitoring, fault

    detection, auditing, and statistics collection. Cisco UCS Manager service profiles and templates support

    versatile role- and policy-based management, and system configuration information can be exported to

    configuration management databases (CMDBs) to facilitate processes based on IT Infrastructure Library

    (ITIL) concepts. Service profiles benefit both virtualized and non-virtualized environments and increase the

    mobility of non-virtualized servers, such as when moving workloads from server to server or taking a server

    offline for service or upgrade. Profiles can also be used in conjunction with virtualization clusters to bring

    new resources online easily, complementing existing virtual machine mobility.

    Some of the key elements managed by Cisco UCS Manager include:



    Cisco UCS Integrated Management Controller (IMC) firmware

    RAID controller firmware and settings

    BIOS firmware and settings, including server universal user ID (UUID) and boot order

    Converged network adapter (CNA) firmware and settings, including MAC addresses and worldwide

    names (WWNs) and SAN boot settings

    Virtual port groups used by virtual machines, using Cisco Data Center VM-FEX technology

    Interconnect configuration, including uplink and downlink definitions, MAC address and WWN

    pinning, VLANs, VSANs, quality of service (QoS), bandwidth allocations, Cisco Data Center VM-FEX

    settings, and Ether Channels to upstream LAN switches

    For more Cisco UCS Manager information, refer to:

    http://www.cisco.com/en/US/products/ps10281/index.html
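    As a brief, hedged illustration of the XML API mentioned above, the following Python sketch uses the Cisco UCS Python SDK (ucsmsdk) to log in to UCS Manager and enumerate blades and service profiles. The UCS Manager address and credentials shown are placeholders, not values from this design.

        # Minimal sketch using the Cisco UCS Python SDK (ucsmsdk); the hostname and
        # credentials below are placeholders, not values from this design guide.
        from ucsmsdk.ucshandle import UcsHandle

        handle = UcsHandle("ucsm.example.com", "admin", "password")
        handle.login()

        # Enumerate the physical blades discovered by UCS Manager.
        for blade in handle.query_classid("ComputeBlade"):
            print(blade.dn, blade.model, blade.serial)

        # Enumerate service profiles (class LsServer) and their association state.
        for sp in handle.query_classid("LsServer"):
            print(sp.dn, sp.assoc_state)

        handle.logout()

    The same queries are available through the GUI and CLI; the XML API simply exposes the identical object model for automation and CMDB export.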

    Cisco UCS Service Profiles

    Figure 7 Traditional Provisioning Approach

    BIOS settings, RAID settings, disk scrub settings, number of NICs, NIC speed, NIC firmware, MAC and IP addresses, number of HBAs, HBA WWNs, HBA firmware, FC fabric assignments, QoS settings, VLAN assignments, and many other parameters must be configured individually for every server within your data center. Some of these parameters are kept in the hardware of the server itself (like BIOS firmware version, BIOS settings, boot order, FC boot settings, etc.) while some settings are kept on


    your network and storage switches (like VLAN assignments, FC fabric assignments, QoS settings, ACLs,

    etc.). This results in the following server deployment challenges:

    Lengthy Deployment Cycles

    Every deployment requires coordination among server, storage, and network teams

    Need to ensure correct firmware & settings for hardware components

    Need appropriate LAN and SAN connectivity

    Response Time To Business Needs

    Tedious deployment process

    Manual, error-prone processes that are difficult to automate

    High OPEX costs, outages caused by human errors

    Limited OS And Application Mobility

    Storage and network settings tied to physical ports and adapter identities

    Static infrastructure leads to over-provisioning, higher OPEX costs

    Cisco UCS has uniquely addressed these challenges with the introduction of service profiles (see Figure 8) that enable integrated, policy-based infrastructure management. Cisco UCS Service Profiles hold the DNA for nearly all configurable parameters required to set up a physical server. A set of user-defined policies (rules) allows quick, consistent, repeatable, and secure deployments of Cisco UCS servers.

    Figure 8 Cisco UCS Service Profiles

    Cisco UCS Service Profiles contain values for a server's property settings, including virtual network interface

    cards (vNICs), MAC addresses, boot policies, firmware policies, fabric connectivity, external management,

    and high availability information. By abstracting these settings from the physical server into a Cisco UCS


    Service Profile, the Service Profile can then be deployed to any physical compute hardware within the Cisco

    UCS domain. Furthermore, Service Profiles can, at any time, be migrated from one physical server to

    another. This logical abstraction of the server personality removes the dependency on a particular hardware type or model.

    This innovation is still unique in the industry despite competitors claiming to offer similar functionality. In

    most cases, these vendors must rely on several different methods and interfaces to configure these server

    settings. Furthermore, Cisco is the only hardware provider to offer a truly unified management platform, with

    Cisco UCS Service Profiles and hardware abstraction capabilities extending to both blade and rack servers.

    Some of the key features and benefits of Cisco UCS Service Profiles are:

    Service Profiles and Templates

    A service profile contains configuration information about the server hardware, interfaces, fabric

    connectivity, and server and network identity. The Cisco UCS Manager provisions servers utilizing

    service profiles. The UCS Manager implements role- and policy-based management focused on

    service profiles and templates. A service profile can be applied to any blade server to provision it with

    the characteristics required to support a specific software stack. A service profile allows server and

    network definitions to move within the management domain, enabling flexibility in the use of system

    resources.

    Service profile templates are stored in the Cisco UCS 6200 Series Fabric Interconnects for reuse by

    server, network, and storage administrators. Service profile templates consist of server requirements and

    the associated LAN and SAN connectivity. Service profile templates allow different classes of resources

    to be defined and applied to a number of resources, each with its own unique identities assigned from

    predetermined pools.

    The UCS Manager can deploy the service profile on any physical server at any time. When a service

    profile is deployed to a server, the Cisco UCS Manager automatically configures the server, adapters,

    Fabric Extenders, and Fabric Interconnects to match the configuration specified in the service profile. A

    service profile template parameterizes the UIDs that differentiate between server instances.

    This automation of device configuration reduces the number of manual steps required to configure

    servers, Network Interface Cards (NICs), Host Bus Adapters (HBAs), and LAN and SAN switches.

    Programmatically Deploying Server Resources

    Cisco UCS Manager provides centralized management capabilities, creates a unified management

    domain, and serves as the central nervous system of the Cisco UCS. Cisco UCS Manager is embedded

    device management software that manages the system from end-to-end as a single logical entity

    through an intuitive GUI, CLI, or XML API. Cisco UCS Manager implements role- and policy-based

    management using service profiles and templates. This construct improves IT productivity and business agility because infrastructure can be provisioned in minutes instead of days, shifting the focus of IT staff from maintenance to strategic initiatives.

    Dynamic Provisioning

    Cisco UCS resources are abstract in the sense that their identity, I/O configuration, MAC addresses and

    WWNs, firmware versions, BIOS boot order, and network attributes (including QoS settings, ACLs, pin

    groups, and threshold policies) all are programmable using a just-in-time deployment model. A service


    profile can be applied to any blade server to provision it with the characteristics required to support a

    specific software stack. A service profile allows server and network definitions to move within the

    management domain, enabling flexibility in the use of system resources. Service profile templates allow

    different classes of resources to be defined and applied to a number of resources, each with its own

    unique identities assigned from predetermined pools.

    Cisco Nexus 9000 Series Switch

    The Cisco Nexus 9000 Series Switches offer both modular and fixed 10/40/100 Gigabit Ethernet switch configurations

    with scalability up to 30 Tbps of non-blocking performance with less than five-microsecond latency, 1152

    10 Gbps or 288 40 Gbps non-blocking Layer 2 and Layer 3 Ethernet ports and wire speed VXLAN gateway,

    bridging, and routing support.

    Figure 9 Cisco Nexus 9000 Series Switch

    For more information, see: http://www.cisco.com/c/en/us/products/switches/nexus-9000-series-switches/index.html

    Cisco UCS Central

    For Cisco UCS customers managing growth within a single data center, growth across multiple sites, or

    both, Cisco UCS Central Software centrally manages multiple Cisco UCS domains using the same concepts

    that Cisco UCS Manager uses to support a single domain. Cisco UCS Central Software manages global

    resources (including identifiers and policies) that can be consumed within individual Cisco UCS Manager

    instances. It can delegate the application of policies (embodied in global service profiles) to individual

    domains, where Cisco UCS Manager puts the policies into effect. In its first release, Cisco UCS Central

    Software can support up to 10,000 servers in a single data center or distributed around the world in as many

    domains as are used for the servers.

    For more information on Cisco UCS Central, see:

    http://www.cisco.com/c/en/us/products/servers-unified-computing/ucs-central-software/index.html

    FlexPod

    Cisco and NetApp have carefully validated and verified the FlexPod solution architecture and its many use

    cases while creating a portfolio of detailed documentation, information, and references to assist customers

    in transforming their data centers to this shared infrastructure model. This portfolio includes, but is not

    limited to, the following items:

    Best practice architectural design



    Workload sizing and scaling guidance

    Implementation and deployment instructions

    Technical specifications (rules for what is, and what is not, a FlexPod configuration)

    Frequently asked questions (FAQs)

    Cisco Validated Designs (CVDs) and NetApp Validated Architectures (NVAs) covering a variety of use

    cases

    Cisco and NetApp have also built a robust and experienced support team focused on FlexPod solutions,

    from customer account and technical sales representatives to professional services and technical support

    engineers. The support alliance between NetApp and Cisco gives customers and channel services partners

    direct access to technical experts who collaborate with cross vendors and have access to shared lab

    resources to resolve potential issues.

    FlexPod supports tight integration with virtualized and cloud infrastructures, making it the logical choice for

    long-term investment. FlexPod provides a uniform approach to IT architecture, offering a well-characterized

    and documented pool of shared resources for application workloads. FlexPod delivers operational efficiency

    and consistency with the versatility to meet a variety of SLAs and IT initiatives, including the following:

    Application roll out or application migration

    Business continuity and disaster recovery

    Desktop virtualization

    Cloud delivery models (public, private, hybrid) and service models (IaaS, PaaS, SaaS)

    Asset consolidation and virtualization

    FlexPod System Overview

    FlexPod is a best practice data center architecture that includes three components:

    Cisco Unified Computing System (Cisco UCS)

    Cisco Nexus Switches

    NetApp fabric-attached storage (FAS) systems


    Figure 10 FlexPod Component Families

    These components are connected and configured according to the best practices of both Cisco and NetApp

    and provide the ideal platform for running a variety of enterprise workloads with confidence. FlexPod can

    scale up for greater performance and capacity (adding compute, network, or storage resources individually

    as needed), or it can scale out for environments that require multiple consistent deployments (rolling out

    additional FlexPod stacks). The reference architecture covered in this document leverages the Cisco Nexus

    9000 for the switching element.

    One of the key benefits of FlexPod is the ability to maintain consistency at scale. Each of the component

    families shown in Figure 10 (Cisco UCS, Cisco Nexus, and NetApp FAS) offers platform and resource

    options to scale the infrastructure up or down, while supporting the same features and functionality that are

    required under the configuration and connectivity best practices of FlexPod.

    NetApp FAS and Data ONTAP

    NetApp solutions offer increased availability while consuming fewer IT resources. A NetApp solution includes

    hardware in the form of FAS controllers and disk storage and the NetApp Data ONTAP operating system

    that runs on the controllers. Data ONTAP is offered in two modes of operation: Data ONTAP operating in 7-

    Mode and clustered Data ONTAP. The storage efficiency built into Data ONTAP provides substantial space


    savings, allowing more data to be stored at a lower cost. The NetApp portfolio affords flexibility for selecting

    the controller that best fits customer requirements.

    NetApp offers the NetApp unified storage architecture which simultaneously supports storage area network

    (SAN), network-attached storage (NAS), and iSCSI across many operating environments such as VMware,

    Windows, and UNIX. This single architecture provides access to data by using industry-standard

    protocols, including NFS, CIFS, iSCSI, FCP, SCSI, FTP, and HTTP. Connectivity options include standard

    Ethernet (10/100/1000, or 10GbE) and Fibre Channel (1, 2, 4, or 8Gb/sec). In addition, all systems can be

    configured with high-performance solid state drives (SSDs) or serial-attached SCSI (SAS) disks for primary storage

    applications, low-cost SATA disks for secondary applications (such as backup and archive), or a mix of

    different disk types.

    For more information, see: http://www.netapp.com/us/products/platform-os/data-ontap-8/index.aspx

    NetApp Clustered Data ONTAP

    With clustered Data ONTAP, NetApp provides enterprise-ready, unified scale-out storage. Developed from a

    solid foundation of proven Data ONTAP technology and innovation, clustered Data ONTAP is the basis for

    large virtualized shared storage infrastructures that are architected for nondisruptive operations over the

    system lifetime. Controller nodes are deployed in HA pairs in a single storage domain or cluster.

    Data ONTAP scale-out is a way to respond to growth in a storage environment. As the storage environment

    grows, additional controllers are added seamlessly to the resource pool residing on a shared storage

    infrastructure. Host and client connections as well as datastores can move seamlessly and non-disruptively

    anywhere in the resource pool, so that existing workloads can be easily balanced over the available

    resources, and new workloads can be easily deployed. Technology refreshes (replacing disk shelves, adding

    or completely replacing storage controllers) are accomplished while the environment remains online and

    continues to serve data.

    Data ONTAP is the first product to offer a complete scale-out solution, and it offers an adaptable, always-

    available storage infrastructure for today's highly virtualized environment.

    NetApp Storage Virtual Machines

    A cluster serves data through at least one and possibly multiple storage virtual machines (SVMs; formerly

    called Vservers). An SVM is a logical abstraction that represents the set of physical resources of the cluster.

    Data volumes and network logical interfaces (LIFs) are created and assigned to an SVM and may reside on

    any node in the cluster to which the SVM has been given access. An SVM may own resources on multiple

    nodes concurrently, and those resources can be moved non-disruptively from one node to another. For

    example, a flexible volume can be non-disruptively moved to a new node and aggregate, or a data LIF can

    be transparently reassigned to a different physical network port. In this manner, the SVM abstracts the

    cluster hardware and is not tied to specific physical hardware.

    An SVM is capable of supporting multiple data protocols concurrently. Volumes within the SVM can be

    junctioned together to form a single NAS namespace, which makes all of an SVM's data available through a

    single share or mount point to NFS and CIFS clients. SVMs also support block-based protocols, and LUNs

    can be created and exported using iSCSI, Fibre Channel, or FCoE. Any or all of these data protocols may be

    configured for use within a given SVM.



    Because it is a secure entity, an SVM is only aware of the resources that have been assigned to it and has no

    knowledge of other SVMs and their respective resources. Each SVM operates as a separate and distinct

    entity with its own security domain. Tenants may manage the resources allocated to them through a

    delegated SVM administration account. Each SVM may connect to unique authentication zones such as

    Active Directory, LDAP, or NIS.

    VMware vSphere

    VMware vSphere is a virtualization platform for holistically managing large collections of infrastructure

    resources (CPUs, storage, and networking) as a seamless, versatile, and dynamic operating environment. Unlike

    traditional operating systems that manage an individual machine, VMware vSphere aggregates the

    infrastructure of an entire data center to create a single powerhouse with resources that can be allocated

    quickly and dynamically to any application in need.

    The VMware vSphere environment delivers a robust application environment. For example, with VMware

    vSphere, all applications can be protected from downtime with VMware High Availability (HA) without the

    complexity of conventional clustering. In addition, applications can be scaled dynamically to meet changing

    loads with capabilities such as Hot Add and VMware Distributed Resource Scheduler (DRS).

    For more information, see: http://www.vmware.com/products/datacenter-virtualization/vsphere/overview.html

    Firewall and Load Balancer

    Cisco ACI is a policy driven framework which optimizes application delivery. Applications consist of server

    end points and network services. The relationship between these elements and their requirements forms an

    application-centric network policy. Through Cisco APIC automation, application-centric network policies are

    managed and dynamically provisioned to simplify and accelerate application deployments on the fabric.

    Network services such as load balancers and firewalls can be readily consumed by the application end

    points as the APIC-controlled fabric directs traffic to the appropriate services. This is the data center network agility that application teams have been demanding, reducing deployment times from days or weeks to minutes.

    L4-L7 service integration is achieved by using service-specific Device Packages. These Device Packages are imported into the Cisco APIC and are used to define, configure, and monitor a network service device

    such as a firewall, SSL offload, load balancer, context switch, SSL termination device, or intrusion prevention

    system (IPS). Device packages contain descriptions of the functional capability and settings along with

    interfaces and network connectivity information for each function.

    The Cisco APIC is an open platform enabling a broad ecosystem and opportunity for industry interoperability with Cisco ACI. Numerous Device Packages associated with various vendors are available and can be

    found at http://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/application-centric-

    infrastructure/solution-overview-c22-732445.html

    An L4-L7 network service device is deployed in the fabric by adding it to a service graph which essentially

    identifies the set of network or service functions that are provided by the device to the application. The

    service graph is inserted between source and destination EPGs by a contract. The service device itself can

    be configured through the Cisco APIC or optionally through the device's traditional GUI or CLI. The level of

    APIC control is dependent on the functionality defined in the Device Package device scripts.
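    To make the APIC-driven model above concrete, the following minimal Python sketch authenticates to the APIC REST API and posts a simple tenant object; it is an illustration only, and the APIC address, credentials, and tenant name are placeholder assumptions rather than values from this design.

        # Hedged sketch of APIC REST automation; the APIC address, credentials, and
        # tenant name are placeholders and not part of this validated design.
        import requests

        apic = "https://apic.example.com"
        session = requests.Session()

        # Authenticate; the APIC returns a session token as a cookie on the session.
        login = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}
        session.post(f"{apic}/api/aaaLogin.json", json=login, verify=False)

        # Create (or update) a tenant by posting a managed object under the policy universe.
        tenant = {"fvTenant": {"attributes": {"name": "Exchange-Example"}}}
        resp = session.post(f"{apic}/api/mo/uni.json", json=tenant, verify=False)
        print(resp.status_code, resp.text)

    The same REST interface is what device packages and service graph deployments ultimately drive, which is why the fabric and its L4-L7 services can be provisioned programmatically.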



    It should be noted that firewalls and load balancers are not a core component of the FlexPod solution, but since most application deployments are incomplete without security and load distribution, firewall and load balancer designs are covered as part of the infrastructure deployment.

    Cisco Adaptive Security Appliance (ASA)

    The Cisco ASA 5500-X Series helps organizations to balance security with productivity. It combines the

    industry's most deployed stateful inspection firewall with comprehensive, next-generation network security

    services. All Cisco ASA 5500-X Series Next-Generation Firewalls are powered by Cisco Adaptive Security

    Appliance (ASA) Software, with enterprise-class stateful inspection and next-generation firewall capabilities.

    ASA software also:

    Integrates with other essential network security technologies

    Enables up to eight ASA 5585-X appliances to be clustered, for up to 320 Gbps of firewall and 80

    Gbps of IPS throughput

    Delivers high availability for high-resiliency applications

    Based on customer policies defined for a particular tenant, Cisco ASA configuration is automated using

    device packages installed in the APIC.

    F5 BIG-IP

    F5 BIG-IP Local Traffic Manager (LTM) provides the ability to secure, optimize, and load balance application traffic. This gives you the control to add servers easily, eliminate downtime, improve application performance, and meet security requirements.

    Figure 11 F5 BIG-IP

    F5's Synthesis architecture is a vision for delivering Software Defined Application Services (SDAS). Its high-performance services fabric enables organizations to rapidly provision, manage, and orchestrate a rich catalog of services using simplified business models that dramatically change the economics of Layer 4-7 services.

    The Cisco ACI APIC provides a centralized service automation and policy control point to integrate F5 application services through the F5 Device Package. This integration of two critical data center technologies directly incorporates F5 application solutions into the ACI automation


    framework. Both Cisco ACI and F5 Synthesis are highly extensible through programmatic extensions,

    enabling consistent automation and orchestration of critical services needed to support application

    requirements for performance, security and reliability. Cisco's ACI and F5 SDAS offer a comprehensive,

    application centric set of network and L4-L7 services, enabling data centers to rapidly deploy and deliver

    applications.

    The F5 BIG-IP platform supports SDAS and Cisco ACI. The BIG-IP platform provides an intelligent

    application delivery solution for Exchange Server, which combines dynamic network products for securing,

    optimizing, and accelerating the Exchange environment. The F5 BIG-IP application services are instantiated

    and managed through the Cisco APIC, readily providing Microsoft Exchange services when required. Figure 12

    captures the service graph instantiation of the BIG-IP application service.

    Figure 12 F5 BIG-IP Service Graph Example
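    As a purely illustrative sketch (not part of this validated design), the Python snippet below shows how BIG-IP LTM objects of the kind used to load balance Exchange client traffic could also be created directly through the F5 iControl REST API. The BIG-IP address, credentials, pool members, and object names are placeholder assumptions.

        # Hedged sketch of F5 iControl REST calls; the BIG-IP address, credentials,
        # pool members, and object names are placeholders, not this design's values.
        import requests

        BIGIP = "https://bigip.example.com/mgmt/tm"
        AUTH = ("admin", "password")

        # Create a pool with two placeholder Exchange CAS members listening on 443.
        pool = {
            "name": "exchange_cas_pool",
            "monitor": "/Common/https",
            "members": [{"name": "10.0.0.11:443"}, {"name": "10.0.0.12:443"}],
        }
        requests.post(f"{BIGIP}/ltm/pool", json=pool, auth=AUTH, verify=False)

        # Create a virtual server that fronts the pool on a placeholder client-facing VIP.
        virtual = {
            "name": "exchange_https_vs",
            "destination": "/Common/10.0.1.10:443",
            "ipProtocol": "tcp",
            "pool": "exchange_cas_pool",
        }
        requests.post(f"{BIGIP}/ltm/virtual", json=virtual, auth=AUTH, verify=False)

    In the validated design these objects are instantiated through the APIC service graph rather than directly, but the underlying BIG-IP configuration model is the same.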

    NetApp OnCommand System Manager and Unified Manager

    NetApp OnCommand System Manager allows storage administrators to manage individual NetApp storage

    systems or clusters of NetApp storage systems. Its easy-to-use interface saves time, helps prevent errors,

    and simplifies common storage administration tasks such as creating volumes, LUNs, qtrees, shares, and

    exports. System Manager works across all NetApp storage systems: FAS2000 series, FAS3000 series,

    FAS6000 series, FAS8000 series, and V-Series systems or NetApp FlexArray systems. NetApp

    OnCommand Unified Manager complements the features of System Manager by enabling the monitoring and

    management of storage within the NetApp storage infrastructure.

    This solution uses both OnCommand System Manager and OnCommand Unified Manager to provide storage

    provisioning and monitoring capabilities within the infrastructure.

    NetApp Virtual Storage Console

    The NetApp Virtual Storage Console (VSC) software delivers storage configuration and monitoring, datastore

    provisioning, virtual machine (VM) cloning, and backup and recovery of VMs and datastores. VSC also

    includes an application-programming interface (API) for automated control.

    VSC is a vCenter Server plug-in that provides end-to-end VM lifecycle management for

    VMware environments that use NetApp storage. VSC is available to all VMware vSphere Clients that connect

    to the VMware vCenter Server. This availability is different from a client-side plug-in that must be installed

    on every VMware vSphere Client. The VSC software can be installed either on the VMware vCenter Server or

    on a separate Windows Server instance or VM.

    VMware vCenter Server

    VMware vCenter Server is the simplest and most efficient way to manage VMware vSphere, irrespective of

    the number of VMs you have. It provides unified management of all hosts and VMs from a single console and

    aggregates performance monitoring of clusters, hosts, and VMs. VMware vCenter Server gives

    administrators a deep insight into the status and configuration of compute clusters, hosts, VMs, storage, the

    guest OS, and other critical components of a virtual infrastructure. A single administrator can manage 100 or


    more virtualization environment workloads using VMware vCenter Server, more than doubling typical

    productivity in managing physical infrastructure. VMware vCenter manages the rich set of features available

    in a VMware vSphere environment.

    For more information, see:

    http://www.vmware.com/products/vcenter-server/overview.html
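    For a small, hedged illustration of programmatic access to vCenter Server, the sketch below uses the pyVmomi SDK to connect and list powered-on virtual machines; the vCenter address and credentials are placeholders, not values from this design.

        # Hedged sketch using the VMware pyVmomi SDK; the vCenter address and
        # credentials are placeholders and not values from this design guide.
        import ssl
        from pyVim.connect import SmartConnect, Disconnect
        from pyVmomi import vim

        context = ssl._create_unverified_context()
        si = SmartConnect(host="vcenter.example.com",
                          user="administrator@vsphere.local",
                          pwd="password", sslContext=context)

        # Walk the inventory and print the powered-on virtual machines.
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True)
        for vm in view.view:
            if vm.runtime.powerState == "poweredOn":
                print(vm.name)

        Disconnect(si)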

    NetApp SnapDrive

    NetApp SnapDrive data management software automates storage provisioning tasks. It can back up and

    restore business-critical data in seconds by using integrated NetApp Snapshot technology. With Windows

    Server and VMware ESX server support, SnapDrive software can run on Windows-based hosts either in a

    physical environment or in a virtual environment. Administrators can integrate SnapDrive with Windows

    failover clustering and add storage as needed without having to pre-allocate storage resources.

    For additional information about NetApp SnapDrive, refer to the NetApp SnapDrive Data Management Software datasheet.

    SnapManager for Exchange Server Overview

    SnapManager for Exchange (SME) provides an integrated data management solution for Microsoft Exchange Server

    2013 that enhances the availability, scalability, and reliability of Microsoft Exchange databases. SME

    provides rapid online backup and restoration of databases, along with local or remote backup set mirroring

    for disaster recovery.

    SME uses online Snapshot technologies that are part of Data ONTAP. It integrates with Microsoft Exchange

    backup and restore APIs and the Volume Shadow Copy Service (VSS). SnapManager for Exchange can use

    SnapMirror to support disaster recovery even if native Microsoft Exchange Database Availability Group (DAG)

    replication is leveraged.

    SME provides the following data-management capabilities:

    Migration of Microsoft Exchange data from local or remote storage onto NetApp LUNs.

    Application-consistent backups of Microsoft Exchange databases and transaction logs from NetApp LUNs.

    Verification of Microsoft Exchange databases and transaction logs in backup sets.

    Management of backup sets.

    Archiving of backup sets.

    Restoration of Microsoft Exchange databases and transaction logs from previously created backup sets, providing a lower recovery time objective (RTO) and more frequent recovery point objectives (RPOs).

    Capability to reseed database copies in DAG environments and prevent full reseed of a replica database across the network.

    Some of the new features released in SnapManager for Exchange 7.1 include the following:

    Capability to restore a passive database copy without having to reseed it across the network, by using the database Reseed wizard.

    Native SnapVault integration without using Protection Manager.

    RBAC support for service accounts.

    Single installer for E2010/E2013.

    Retention enhancements.

    SnapManager for Exchange Server Architecture

    SnapManager for Microsoft Exchange versions 7.1 and 7.2 support Microsoft Exchange Server 2013. SME is

    tightly integrated with Microsoft Exchange, which allows consistent online backups of Microsoft Exchange

    environments while leveraging NetApp Snapshot technology. SME is a VSS requestor, meaning that it uses

    the VSS framework supported by Microsoft to initiate backups. SME works with the DAG, providing the

    ability to back up and restore data from both active database copies and passive database copies.

    Figure 13 shows the SnapManager for Exchange Server architecture. For more information about VSS, refer

    to Volume Shadow Copy Service Overview on the Microsoft Developer Network.

    http://msdn.microsoft.com/en-us/library/aa384649(v=VS.85).aspx

    Figure 13 SnapManager for Exchange Server Architecture

    Migrating Microsoft Exchange Data to NetApp Storage

    The process of migrating Microsoft Exchange databases and transaction log files from one location to

    another can be a lengthy process. Many manual steps must be taken so that the

    Microsoft Exchange database files are in the proper state to be moved. In addition, more manual steps must

    be performed to bring those files back online for handling Microsoft Exchange traffic. SME automates the

    entire migration process, eliminating any potential user errors. After the data is migrated, SME automatically

    mounts the Microsoft Exchange data files and allows Microsoft Exchange to continue to serve e-mail.

    High Availability

    In Microsoft Exchange 2013, the DAG feature was implemented to support mailbox database resiliency,

    mailbox server resiliency, and site resiliency. The DAG consists of two or more servers, and each server can

    store up to one copy of each mailbox database.

    The DAG Active Manager manages the database and mailbox failover and switchover processes. A failover

    is an unplanned failure, and a switchover is a planned administrative activity to support maintenance

    activities.

    Database and server failover occurs automatically when a database or Mailbox server incurs a failure. The order in which database copies are activated is set by the administrator.

    For more information on Microsoft Exchange 2013 DAGs, refer to the Microsoft TechNet article Database

    Availability Groups.

    Microsoft Exchange 2013 Database Availability Group Deployment Scenarios

    Single-Site Scenario

    Deploying a two-node DAG with a minimum of two copies of each mailbox database in a single site is best

    suited for companies that want to achieve server-level and application-level redundancy. In this situation,

    deploying a two-node DAG on RAID-DP provides not only server-level and application-level redundancy but also protection against double disk failure. Adding SnapManager for Exchange in a single-site scenario enables point-in-time restores without the added capacity requirements and complexity of an additional DAG copy. The reseed functionality in SME 7.1 keeps database copies in a healthy state and reduces the RTO for failed databases, maintaining resiliency at all times.

    Multisite Scenario

    Extending a DAG across multiple data centers provides high availability of servers and storage components

    and adds site resiliency. When planning a multisite scenario, NetApp recommends having at least three

    mailbox servers as well as three copies of each mailbox database, two in the primary site and one in the

    secondary site. Adding at least two copies in both primary and secondary sites provides not only site

    resiliency but also high availability in each site. The reseed functionality in SME 7.1 keeps database copies in a healthy state and reduces the RTO for failed databases, maintaining resiliency at all times.

    For additional information on DAG layout planning, refer to the Microsoft TechNet article Database

    Availability Groups.

    When designing the storage layout and data protection for a DAG scenario, use the following design

    considerations and best practices.

    Deployment Best Practice

    In a multisite scenario, it is a best practice to deploy at least three mailbox servers as well as three copies of each mailbox database, two in the primary site and one in the secondary site. Adding at least two copies in both primary and secondary sites provides not only site resiliency but also high availability in each site.
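    The copy layout recommended above can be sanity-checked with a short sketch. The following Python snippet is illustrative only; the server names, site names, and database names are hypothetical, and the checks simply encode the multisite guidance (at least three copies per database, copies present in both sites, and no two copies of a database on the same server):

    # Hypothetical multisite DAG layout: database -> list of (server, site) copies.
    dag_layout = {
        "DB01": [("MBX01", "Primary"), ("MBX02", "Primary"), ("MBX03", "Secondary")],
        "DB02": [("MBX02", "Primary"), ("MBX01", "Primary"), ("MBX03", "Secondary")],
    }

    def check_layout(layout: dict) -> None:
        """Flag databases that violate the multisite copy recommendations."""
        for db, copies in layout.items():
            sites = {site for _server, site in copies}
            if len(copies) < 3:
                print(f"{db}: fewer than three copies ({len(copies)})")
            if sites != {"Primary", "Secondary"}:
                print(f"{db}: copies not present in both sites ({sorted(sites)})")
            if len({server for server, _site in copies}) != len(copies):
                print(f"{db}: more than one copy on the same server")

    check_layout(dag_layout)   # prints nothing when the layout meets the guidance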

    Storage Design Best Practices

    Design identical storage for active and passive copies of the mailboxes in terms of capacity and performance.

    Provision the active and passive LUNs identically with regard to path, capacity, and performance.

    Place flexible volumes for active and passive databases onto separate aggregates that are connected to separate SVMs. If a single aggregate is lost, only the database copies on that aggregate are affected.

    http://technet.microsoft.com/en-us/library/dd979799.aspx

    Volume Separation Best Practice

    Place active and passive copies of the database into separate volumes.

    Backup Best Practices

    Perform a SnapManager for Exchange full backup on one copy of the database and a copy-only backup on the rest of the database copies.

    Verification of database backups is not required if Microsoft Exchange 2013 is in a DAG configuration with at least two copies of the databases and with Microsoft Exchange background database maintenance enabled.

    Verification of database backups and transaction log backups is required if Microsoft Exchange 2013 is in a standalone (non-DAG) configuration.

    In Microsoft Exchange 2013 standalone environments that use SnapMirror, configure database backup and transaction log backup verification to occur on the SnapMirror destination storage.

    For more information about the optimal layout, refer to the SnapManager 7.1 for Microsoft Exchange Server documentation (http://mysupport.netapp.com/documentation/docweb/index.html?productID=61819&language=en-US).

    For more information about NetApp clustered Data ONTAP best practices, refer to TR-4221, Microsoft Exchange Server 2013 and SnapManager (http://www.netapp.com/us/media/tr-4221.pdf).

    Exchange 2013 Architecture

    Microsoft Exchange 2013 introduces significant architectural changes. While Exchange 2010 had five server

    roles, Exchange 2013 has two primary server roles: the Mailbox server and the Client Access server. The Mailbox server

    processes the client connections to the active mailbox database. The Client Access Server is a thin and

    stateless component that proxies client connections to the Mailbox server. Both the Mailbox server role and

    the Client Access Role can run on the same server or on separate servers.

    Revisions in the Exchange 2013 architecture have changed some aspects of Exchange client connectivity. RPC/TCP is no longer used as the Outlook client access protocol; it has been replaced by RPC over HTTPS, known as Outlook Anywhere, and, starting with Exchange 2013 SP1, by MAPI over HTTP. This change removes the need for the RPC Client Access protocol on the Client Access Server and

    thus simplifies the Exchange server namespace.

    Client Access Server

    The Client Access Server provides authentication, limited redirection, and proxy services. It supports standard client access protocols such as HTTP, POP3, IMAP4, and SMTP. The thin Client Access Server is stateless and

    does not render data. It does not queue or store any messages or transactions.

    The Client Access Server has the following attributes:

    Stateless Server

    The Client Access Server no longer requires session affinity. This means that it no longer matters which member of the Client Access Server array receives an individual client request. This functionality avoids the

    need for the network load balancer to have session affinity. The hardware load balancer can support a

    greater number of concurrent sessions when session affinity is not required.

    Connection Pooling

    Connection pooling allows the Client Access Server to use fewer connections to the Mailbox server when acting as the client request proxy. This improves processing efficiency and reduces client response latency.

    Mailbox Server

    The Mailbox server role hosts the mailbox and public folder databases, as in Exchange Server 2010. In addition, the Exchange 2013 Mailbox server role includes the Client Access protocols, Transport services, and Unified Messaging services. The Exchange Store has been rewritten in managed

    code to improve performance and scale across a greater number of physical processor cores in a server.

    Each Exchange database now runs under its own process instead of sharing a single process for all

    database instances.

    Client Access Server and Mailbox Server Communication

    The Client Access Server, Mailbox Server, and Exchange clients communicate in the following manner:

    The Client Access Server and Mailbox Server use LDAP to query Active Directory.

    Outlook clients connect to the Client Access Server.

    The Client Access Server proxies client connections to the Mailbox server that hosts the active copy of the mailbox database.

    Figure 14 Logical Connectivity of Client Access and Mailbox Server

    High Availability and Site Resiliency

    The Mailbox server has built-in high availability and site resiliency. As in Exchange 2010, the Database Availability Group (DAG) and Windows Failover Clustering are the base components for high availability and site resiliency in Exchange 2013. Up to 16 Mailbox servers can participate in a single DAG.

    Database Availability Group

    The DAG uses database copies and replication, combined with database mobility and activation, to implement data center high availability and site resiliency. Up to 16 copies of each database can be maintained at any given time. Only one copy of each database can be active at any time, while the remaining copies are passive. These database copies are distributed across multiple DAG member servers. Active Manager manages the activation of these database copies on the DAG member servers.

    Active Manager

    Active Manager manages the health and status of the databases and database copies. It also manages continuous replication and Mailbox server high availability. Mailbox servers that are members of a DAG have a Primary Active Manager (PAM) role and a Standby Active Manager (SAM) role. Only one server in the DAG runs the PAM role at any given time.

    The PAM role determines which database copies are active and which are passive. The PAM role also reacts

    to DAG member server failures and topology changes. The PAM role can move from one server to another

    within a DAG so there will always be a DAG member server running the PAM role.

    The SAM role tracks which DAG member server is running the active database copy and which one is

    running the passive copy. This information is provided to other Exchange roles such as the Client Access

    Server role and the Transport Service role. The SAM also tracks the state of the databases on the local

    server and informs the PAM when database copy failover is required.

    Site Resiliency

    Site resiliency can be implemented by stretching a DAG across two data center sites. This is achieved by placing member servers from the same DAG, along with Client Access Servers, in both sites. Multiple copies of each database are deployed on DAG members in both sites to facilitate mailbox database availability in each site. Database activation controls which site has the active database copy, and DAG replication keeps the database copies synchronized.

    Exchange Clients

    Exchange 2013 mailboxes can be accessed by a variety of clients. These clients run in web browsers, on mobile devices, and on computers. Most clients access their mailboxes through a virtual directory that is presented by Internet Information Services (IIS) running on the Mailbox server.

    Outlook Client

    Outlook 2007, Outlook 2010, and Outlook 2013 run on computers. They use RPC over HTTP to access the

    Exchange 2013 mailboxes.

    Exchange ActiveSync Clients

    Exchange ActiveSync Clients run on mobile devices and use the Exchange ActiveSync protocol to access

    the Exchange Mailboxes.

    Outlook WebApp

    Outlook Web App provides access to Exchange 2013 mailboxes from a web browser.

    POP3 and IMAP4 Clients

    POP3 and IMAP4 clients can run on a mobile device or a computer. They use the POP3 or IMAP4 protocol to access the mailbox. The SMTP protocol is used in combination with these clients to send email.

    Name Space Planning

    Exchange 2013 simplifies the namespace requirements compared to earlier versions of Exchange.

    Namespace Models

    Various namespace models are commonly used with Exchange 2013 to achieve various functional goals.

    Unified Namespace

    The unified namespace is the simplest to implement. It can be used in single data center and multi-data center deployments. The namespace is tied to one or more DAGs. In the case of multiple data centers with DAGs that span the data centers, the namespace also spans the data centers. In this namespace model, all Mailbox servers in the DAGs have active mailboxes, and Exchange clients connect to the Exchange servers irrespective of the location of the Exchange server or the location of the client.

    Dedicated Namespace

    The dedicated namespace is associated with a specific data center or geographical location. This

    namespace usually corresponds to the Mailbox servers in one or more DAGs in that location. Connectivity is controlled by which data center has the Mailbox server with the active database copy. Dedicated namespace deployments typically use two namespaces for each data center: a primary namespace that is used during normal operation, and a failover namespace that is used when service availability is transferred to a partner data center. Switchover to the partner data center is an administrator-managed event in this case.

    Internal and External Namespace

    An internal and external namespace is typically used in combination with a split-DNS scheme to provide different IP address resolution for a given namespace based on the client connection point. This is commonly used to provide different connection endpoints for clients that connect on the external side of the firewall as compared to the internal side of the firewall, as illustrated in the sketch below.
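    The following minimal Python sketch illustrates the split-DNS behavior only; the namespace mail.example.com and the two virtual server addresses are hypothetical values, not addresses from this design:

    # Hypothetical split-DNS views for a single Exchange namespace.
    dns_views = {
        "internal": {"mail.example.com": "10.10.10.100"},   # internal load-balancer virtual server
        "external": {"mail.example.com": "198.51.100.10"},  # external load-balancer virtual server
    }

    def resolve(name: str, client_location: str) -> str:
        """Return the address a client sees for 'name' based on its network location."""
        return dns_views[client_location][name]

    print(resolve("mail.example.com", "internal"))   # 10.10.10.100
    print(resolve("mail.example.com", "external"))   # 198.51.100.10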

    Regional Namespace

    Regional Namespace provides a method to optimize client connectivity based on client proximity to the

    mailbox server hosting their mailbox. Regional namespace can be used with both unified namespace and

    dedicated namespace schemes. For example, a company that has data centers in Europe for Europe-based employees and data centers in America for America-based employees can deploy separate namespaces for Europe and for America.

    Network Load Balancing

    Network load balancing enables scalability and high availability for the Client Access Servers. Scalability and

    redundancy are enabled by deploying multiple Client Access Servers and distributing the Exchange client

    traffic between the deployed Client Access Servers.

    Exchange 2013 has several options for implementing network load balancing. Session affinity is no longer

    required at the network load balancing level, although it can still be implemented to achieve specific health

    probe goals.

    Health probe checking enables the network load balancer to verify which Client Access Server is servicing

    specific Exchange client connectivity protocols. The health probe checking scheme determines the

    granularity of detecting protocol availability on the Client Access Server. Exchange 2013 provides a health-check virtual directory for each client protocol. This directory is Exchange client specific and can be used by load balancers to verify the availability of a particular protocol on a Client Access Server. A sketch of such a protocol-level probe follows.
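    The following is a minimal probe sketch in Python, not the F5 BIG-IP monitor syntax used in this design. It assumes a hypothetical Client Access Server name (cas01.example.com) and checks several per-protocol healthcheck.htm virtual directories over HTTPS; the exact set of protocols monitored is deployment specific. A Layer 7 monitor on the load balancer performs the equivalent per-protocol checks, while a Layer 4 monitor typically checks only one such URL per server.

    import ssl
    import urllib.request

    CAS_HOST = "cas01.example.com"          # hypothetical Client Access Server
    PROBE_PATHS = {
        "OWA":        "/owa/healthcheck.htm",
        "ECP":        "/ecp/healthcheck.htm",
        "EWS":        "/ews/healthcheck.htm",
        "ActiveSync": "/Microsoft-Server-ActiveSync/healthcheck.htm",
        "OAB":        "/oab/healthcheck.htm",
    }

    def probe(host: str, path: str, timeout: float = 5.0) -> bool:
        """Return True if the protocol health-check page answers with HTTP 200."""
        # Lab deployments often use self-signed certificates; a production probe
        # should validate the certificate chain instead of skipping verification.
        context = ssl.create_default_context()
        context.check_hostname = False
        context.verify_mode = ssl.CERT_NONE
        try:
            with urllib.request.urlopen(f"https://{host}{path}",
                                        timeout=timeout, context=context) as resp:
                return resp.status == 200
        except OSError:
            return False

    if __name__ == "__main__":
        for protocol, path in PROBE_PATHS.items():
            state = "UP" if probe(CAS_HOST, path) else "DOWN"
            print(f"{protocol:10s} {state}")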

    Common Name Space and Load Balancing Session Affinity Implementations

    There are four common network load balancing implementations that are used for load balancing client

    connections to Exchange 2013 Client Access Servers. Each implementation has pros and cons for simplicity,

    health probe granularity, and network load balancer resource consumption.

    Layer 4 Single Namespace without Session Affinity

    Layer 7 Single Namespace without Session Affinity

    Layer 7 Single Namespace with Session Affinity

    Multi-Namespace without Session Affinity

    Layer 4 Single Namespace without Session Affinity

    This is the simplest implementation and consumes the fewest load balancer resources. This implementation

    uses a single namespace and layer 4 load balancing. Health probe checks are performed based on IP address, network port, and a single Exchange client health-check virtual directory. Because most Exchange clients use HTTP, and thus the same HTTP port, the health check can be performed on just one Exchange client protocol that is in use. The most frequently used Exchange client protocol is usually selected for health probe checking in this implementation. When the health probe fails, the network load balancer removes the entire Client Access Server from the Client Access Server pool until the health check returns to a healthy state, as illustrated in the sketch following the pros and cons below.

    Pros: Simple to implement. Requires the fewest load balancer resources.

    Cons: Server-level granularity only. The health probe may miss a particular Exchange client protocol being offline and thus fail to remove the Client Access Server from the rotation pool.
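    The contrast between whole-server and per-protocol removal can be expressed as a small Python sketch. It is illustrative only: the server names and probe results are hypothetical, and in a real deployment the load balancer's monitors would supply the probe data.

    # Hypothetical probe results per Client Access Server and protocol.
    probe_results = {
        "cas01": {"OWA": True,  "ActiveSync": True,  "EWS": True},
        "cas02": {"OWA": True,  "ActiveSync": False, "EWS": True},
    }

    # Layer 4, single namespace: one protocol's probe (here OWA) decides whether
    # the whole server stays in the pool, so cas02 keeps receiving ActiveSync
    # traffic even though that protocol is unhealthy.
    l4_pool = [srv for srv, checks in probe_results.items() if checks["OWA"]]

    # Layer 7 (or layer 4 multi-namespace): each protocol has its own pool, so
    # only the unhealthy protocol is taken out of rotation on cas02.
    l7_pools = {
        proto: [srv for srv, checks in probe_results.items() if checks[proto]]
        for proto in ("OWA", "ActiveSync", "EWS")
    }

    print("L4 pool:", l4_pool)      # ['cas01', 'cas02']
    print("L7 pools:", l7_pools)    # the ActiveSync pool excludes cas02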

    Layer 4 Multi-Namespace without Session Affinity

    This implementation is like the previous layer 4 implementation without session affinity with the exception

    that an individual namespace is used for each Exchange Client protocol type. This method provides the

    ability to configure a health check probe for each Exchange client protocol and thus gives the load balancer

    the capability to identify and remove just the unhealthy protocols on a given Client Access Server from

    the Client Access Server pool rotation.

    Pros: Protocol level service availability detection. Session affinity maintained on the Client Access

    Server.

    Cons: Multiple Namespace management. Increased load balancer complexity.

    Layer 7 Single Namespace without Session Affinity

    This implementation uses a single namespace and layer 7 load balancing. The load balancer performs SSL termination, and health probe checks are configured and performed for each Exchange client protocol virtual directory. Since the health

    probe check is Exchange Client protocol specific, the load balancer is capable of identifying and removing

    just the unhealthy protocols on a given Client Access Server from the Client Access Server pool rotation.

    Pros: Protocol level service availability detection. Session affinity maintained on the Client Access

    Server.

    Cons: SSL termination uses more network load balancer resources

    Layer 7 Single Namespace with Session Affinity

    This implementation is like the previous layer 7 single namespace implementation with the exception that

    session affinity is also implemented.

    Pros: Protocol level service availability detection. Session affinity maintained on the Client Access

    Server.

    Cons: SSL termination and session affinity use more network load balancer resources. Increased

    load balancer complexity.

    Cisco Applicat