A look at the BICSI FEAR Model to understand IT specifications for Data Centres.
Data Centre Environments
CCS Commercial Overview Session 8
Using the FEAR Model in Data Centre Design
19th November 2010
Paul Mathews MInstSMM, CCNA, MIEEE
Global Channel Manager
• Presented at the BICSI Winter Conference by Matt Parker RCDD, Stantec Consulting Services Inc
• Discuss how FOOTPRINT-ENERGY-ARRANGEMENT-REDUNDANCY helps the management of data centre projects
• This presentation provides;
– A survey guide to apply basic design calculations, develop a flexible space plan and gain a strategic position on the design team for DC projects
– A review of design and construction processes, linear Vs integrated design
– Block design approaches to the data centre
– Not a single tool for designing a data centre, as every facility is different; rather a knowledgeable background on common design processes
– Best practice design as a sales technique for DC owners / architects
– Ideally useful for Tier I and Tier II data centres
About FEAR Model
• The heartbeat of any business, designed to manage the flow, processing and storage of information
• Must be reliable, secure and flexible to enable growth and reconfiguration
• A data centre can support anything from a small single business through to an e-commerce facility serving thousands of clients
• “A building or portion of a building whose primary function is to house a computer room and its support areas,” according to TIA 942
• User-centric tool
• Combination of;
– Storage Area Network (SAN)
– High Performance Cluster (HPC)
– Enterprise Process Servers
Introduction to Data Centres
ITS Professional Role
Integrated Design Stage Benefits
• ITS designers and consultants can have a larger impact on building design through synergistic approaches with architects, owners and contractors
• Eliminates ‘pass it on’ processes;
– Through sub-contracting; lack of project management and ownership of objectives
– Understand best practices for IT concepts and planning
– Control scope of works before bidding and contracting
• Increase the value of the IT designer / consultant
Integrated Design – what does it mean?
• Allows involvement with the facility owner;
– Addressing IT design during budget allocation
– Owner can make a value decision instead of a budget decision
• Allows input into design processes;
– Developing peer relationships with traditional consultants / architects
– Ensure standards compliance for IT cabling
• Increase revenue for your company
Integrated Design – installer benefits
• Footprint
• Energy
• Arrangement
• Redundancy
About FEAR Model
• Define the facility for equipment quantity
• What type of process equipment is to be installed?
– Independent rack equipment
– Inter-dependent rack equipment
• Process support equipment?
– Test racks (burn-in, troubleshooting)
– IT / Network Connectivity (switches / routers)
• Non-process support equipment?
– UPS / PDUs (electrical distribution)
– HVAC / Mechanical Equipment
FOOTPRINT
FOOTPRINT
• All rack enclosures have relatively the same footprint – a block of 4 ft x 2 ft (1200mm L x 600mm W)
• Agree on the size of racks for equipment
FOOTPRINT
• Group racks together on multiple levels to create modular planning cells (this example uses a block cell of 5 racks);
– Smaller data centres use a 5 rack configuration of modular cells to enhance flexibility
FOOTPRINT
• Pair the cells into modules
FOOTPRINT
• The paired cells become rows – where we consider space allowances
• Aisles;
– Air movement for hot and cold zones
– Operator access and movement
• Non-process equipment (electronics / M&E management)
• These space allowances create the core planning blocks (discussed in more detail in ARRANGEMENT)
• The energy required for a data centre is defined the same way as the footprint, calculating the total kW needed for the process equipment
• What type of process equipment is to be installed?
– Independent rack equipment
– Inter-dependent rack equipment
– Today’s technology allows most equipment to be flexibly collocated
• Process support equipment?
– Test racks (burn-in, troubleshooting)
– IT / Network Connectivity (switches / routers)
• Non-process support equipment?
– UPS / PDUs (electrical distribution)
– HVAC / Mechanical Equipment
ENERGY
• Always calculate Energy PER RACK (Vs Energy per square foot/metre)
• Allows energy distribution (cooling and energy) to move with the FOOTPRINT
• Data Centre remains more adaptable to various processing equipment
• ENERGY should be kept scalable and flexible;
– Not a new concept for data centres
– Moore’s Law becomes evident (DC structures will change in 16-18 months)
ENERGY
• AT DESIGN STAGE; Data Centre owners specify the equipment to be used in the racks, advising a model number and product data sheet
• This can give Electrical Engineers only very basic information to be able to calculate ENERGY requirements
• Data Centre infrastructure designers can use some baseline techniques using general power requirements;
– Traditional rack equipment density uses 2 kW – 8 kW / rack
– High density processing equipment racks use 8 kW – 12 kW / rack
– 2010 and beyond: over 12 kW / rack
• Additionally include the cooling needs;
– Input power requirements do not include cooling power energy
• Total Energy Formula to include distribution method, cooling density and equivalent power needs
ENERGY
• The number 1 question for ENERGY in data centres is;
HOW MUCH POWER IS NEEDED?
• This can be calculated by adding the input power and power loss budget together
• INPUT POWER BUDGET;
– Electricity required to operate the process equipment loads
• POWER LOSS BUDGET;
– Electricity required to remove heat (cooling equipment, air distribution equipment etc)
Input Power + Power Loss = Total Power
ENERGY
ENERGY
• Calculations for capital and operational expenditure costing can only be approximated from the manufacturer’s data sheet information
• Many designers can misinterpret or overstate power and cooling requirements without correct data
• Data sheets need vital information including;
– Input power
– Heat dissipation
– Processing Equipment physical dimensions
ENERGY
• Data can be stated in different formats;
• Electricity and power supplies are the drivers for heat dissipation and cooling
• The Power Loss value must consider inefficiency of using HVAC equipment to remove heat;
– Cooling power generates accumulated losses from 2 levels of heat exchange;
• Chillers to condenser (for cooling equipment)
• Water to air (for air handling equipment)
• 3 useful Rule of Thumb equations for planning;
– 12,000 BTU/h = 1 cooling ton = 350 Cubic Feet per Minute (CFM)
– 1 cooling ton = 1.2 kW electric power (consult ASHRAE guidelines)
ENERGY – Calculating Power Loss budget
ENERGY – further basic formulas for TOTAL POWER
• BTU/h = 1.08 × CFM (air flow) × ΔT (temperature rise, °F)
• 1 BTU/h = 0.293 W; 1 W = 0.001 kW; P (Watts) = I (Current) × V (Voltage)
• An example to demonstrate equivalence from best data;
• Rack enclosure filled with 42 x 1U servers from Manufacturer A;
– Rack capacity = 42 RU, total servers = 42
– Average power data for a 1U server = 400 W = 0.4 kW
– Heat dissipation per server = (400 W / 0.293) = 1,365 BTU/h
• Total electric power for a fully loaded average server rack is;
(0.4 × 42) + (((1,365 × 42) / 12,000) × 1.2) = 16.8 + (4.78 × 1.2) ≈ 22.5 kW per rack
• Equivalent to the power draw of a typical 2,500 sq ft house, packed into 8 sq ft of floor space
ENERGY – 42U Rack Example
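The worked example above can be sketched as a small calculation. This is a minimal sketch using the rules of thumb from the slides (0.293 W per BTU/h, 12,000 BTU/h per cooling ton, 1.2 kW of electric power per cooling ton); the function name is illustrative, not part of the FEAR model.

```python
# Rules of thumb from the slides
BTU_PER_WATT = 1 / 0.293      # 1 W of load dissipates ~3.41 BTU/h of heat
BTU_PER_COOLING_TON = 12_000  # 1 cooling ton removes 12,000 BTU/h
KW_PER_COOLING_TON = 1.2      # electric power to deliver 1 ton of cooling

def total_rack_power_kw(servers: int, watts_per_server: float) -> float:
    """Input power budget + power loss (cooling) budget, in kW per rack."""
    input_kw = servers * watts_per_server / 1000
    heat_btuh = servers * watts_per_server * BTU_PER_WATT
    cooling_kw = heat_btuh / BTU_PER_COOLING_TON * KW_PER_COOLING_TON
    return input_kw + cooling_kw

# 42 x 1U servers at 400 W each
print(round(total_rack_power_kw(42, 400), 1))  # ~22.5 kW per rack
```

The same helper can be re-run per rack type (storage, network, HPC), which is what keeps the ENERGY figure tied to the FOOTPRINT rather than to floor area.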
• Gives more flexibility, so power and cooling distribution can be selected to fit the internal rack equipment
• kW / sq ft thinking leads to bulky, inflexible and incorrectly specified process support equipment
• Modern day server racks typically utilise the following power densities;
– Storage / Application Rack = 20,000 – 35,000 BTU/h, <8 kW / rack
– Network Racks = 5,000 – 10,000 BTU/h, 2 – 4 kW / rack
• Modern day server racks accommodate planned diversity (grouping different systems together – e.g. not 10 HPC racks side-by-side, but adding more cabling)
ENERGY kW per rack Vs kW per sq ft
• Placing the process equipment around the building area with consideration for;
– Architectural and Structural Design;
• New or existing building?
• Adjacency and size of support spaces?
• Raised floor and finished ceiling?
– Non-Process Support Equipment;
• Power and cooling distribution methods?
• Entrance facilities and equipment spaces?
• Equipment size and quantities?
ARRANGEMENT
• The FEAR model is based on a new building concept, but can be adaptable for an existing facility
• Design experience implies a raised floor AND finished ceiling is the most flexible and efficient, creating hot and cold plenums
• Suggested optimised height is 14 ft (floor to deck), with 18 – 24 in for the raised floor and a 9 – 9.5 ft finished ceiling
ARRANGEMENT
• Does not specifically address support spaces such as chiller room, main electrical room (should be consulted on a project by project basis).
• Assumes new building design has adequate mechanical, electrical and site space
• Existing buildings can adapt the FEAR model to determine the max. power to the building and build a model that fits the available power (on a block basis), helping the owner assess whether a facility is suitable for a design
• Provides new building conceptual cost plans, evaluation of sustainability or suitability for existing spaces
• Non-process equipment – considers power and cooling distribution methods. Raised floor plenums are not the most efficient route for electrical distribution or cabling, because cable tray, drop boxes and junction boxes within the plenum distort the HVAC system’s effectiveness
ARRANGEMENT – FEAR MODEL CONCEPT
• Power is from overhead feeds;
– Busway – allowing compact, flexible implementation (or manufacturers’ power distribution panels)
– Conduit-and-box – familiar to electricians, lower first cost but some issues with TCO (relocation)
• Network cabling routed overhead;
– Multi-level cable tray for copper or optical fibre
– Parallel wire baskets
ARRANGEMENT
• Primary Cooling distribution through FLOOR plenum;
– Proven system
– Cool air is delivered to equipment where needed
– Optimise rack efficiency using blanking panels
– FEAR model allows use of new point cooling methods (such as spray directly on equipment)
ARRANGEMENT
• Entrance facilities and Equipment Spaces;
– Depend on other building issues
– Include chillers, generators, centralised UPS
– By using the Integrated Building Process to define equipment size and numbers, the FEAR model provides consultants with plans for ERs when architects are laying out block plans
• Equipment size and quantities;
– Defined by PDUs and CRAC units
– Footprint of processing equipment
– Size of support equipment depends on average power density per module
ARRANGEMENT
ARRANGEMENT (using FOOTPRINT model)
• 4 blocks of 10 racks
• Incorporate standard aisle spacing (2 rows grouped into a block);
– 6 ft between racks, providing high density applications with an air distribution tile on either side of the aisle to provide air, plus a walk tile in the middle – can reduce to 4 ft if needed
• Hatched area provides space reserved for process support equipment (CRACs, PDUs)
• Block dimension is 20 ft x 50 ft;
– Floor dimension is 25 ft x 60 ft, matching a common structural grid spacing of 25 ft x 30 ft (2 bays)
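The block arithmetic above lends itself to a quick sizing sketch. The 40-racks-per-block (4 rows of 10) and 25 ft x 60 ft floor figures come from this slide; the helper function itself is a hypothetical illustration, not part of the FEAR model.

```python
import math

RACKS_PER_BLOCK = 40          # 4 blocks of 10 racks per modular block
BLOCK_FLOOR_SQFT = 25 * 60    # floor dimension per block: two 25 ft x 30 ft bays

def floor_area_sqft(required_racks: int) -> int:
    """Floor area needed when racks are planned in whole modular blocks,
    including aisle space and hatched support-equipment space."""
    blocks = math.ceil(required_racks / RACKS_PER_BLOCK)
    return blocks * BLOCK_FLOOR_SQFT

print(floor_area_sqft(40))   # 1 block  -> 1500 sq ft
print(floor_area_sqft(100))  # 3 blocks -> 4500 sq ft
```

Because blocks are self-contained (see the next slide), rounding up to whole blocks is deliberate: each block carries its own CRAC and PDU provision.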
ARRANGEMENT (using FOOTPRINT model)
• CRACUs sized to rack power density;
– Accommodate 4 CRACUs (for high density applications)
– 2 power distribution units with redundant capacity
• Within the racks;
– The back space houses the power distribution and acts as the hot aisle
– The front space faces the cold aisle, where network cabling is configured and equipment LEDs are checked (green for working) by network technicians
ARRANGEMENT (using FOOTPRINT model)
• Blocks are modular – can be combined in full or half to create any shape
• Blocks can be added until rack quantity meets the facility owner’s stated objectives
• Each block is self-contained and includes support equipment
REDUNDANCY
• Must be specified as a tiering model by the facility owner in advance
• Difficult to make Tier I/II into a III or IV at a later date
• Must consider process AND power AND cooling
• FEAR model covers hybrid solutions
REDUNDANCY
Tier Classifications (Uptime Institute)

|                                                | Tier 1 | Tier 2 | Tier 3                    | Tier 4                  |
| Site availability                              | 99.67% | 99.74% | 99.98%                    | 99.99%                  |
| Downtime (hours per year)                      | 28.8   | 22     | 1.6                       | 0.8                     |
| Operations Centre                              | n/a    | n/a    | required                  | required                |
| Active Capacity Components to support IT Load  | N      | N+1    | N+1                       | N after any failure     |
| Distribution Paths                             | 1      | 1      | 1 Active, 1 Alternative   | 2 Simultaneously Active |
| Concurrently Maintainable                      | No     | No     | Yes                       | Yes                     |
| Fault Tolerant                                 | No     | No     | No                        | Yes                     |
| Compartmentalisation                           | No     | No     | No                        | Yes                     |
| Continuous Cooling                             | Load Density Dependent | Load Density Dependent | Load Density Dependent | Class A |
• Tier 1;
– 99.671% availability (2 nines)
– Single path for power and cooling distribution
– No redundant components
• Tier 2;
– 99.741% availability (2 nines)
– Tier 1, plus redundant capacity components (computer equipment)
• Tier 3;
– 99.982% availability (3 nines)
– Multiple power and cooling distribution paths
– Redundant components, one active path
– Concurrently maintainable
• Tier 4;
– 99.995% availability (4 nines)
– Same as Tier 3 with multiple active paths
– Fault Tolerant
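The downtime figures follow directly from the availability percentages over an 8,760-hour year; a minimal sketch (note the tier table rounds some figures slightly differently, e.g. Tier 4):

```python
HOURS_PER_YEAR = 8760

def downtime_hours(availability_pct: float) -> float:
    """Annual downtime implied by a site-availability percentage."""
    return (1 - availability_pct / 100) * HOURS_PER_YEAR

for tier, avail in [("Tier 1", 99.671), ("Tier 2", 99.741),
                    ("Tier 3", 99.982), ("Tier 4", 99.995)]:
    # Tier 1 -> 28.8 hours/year, matching the table above
    print(tier, round(downtime_hours(avail), 1), "hours/year")
```

The non-linear jump from Tier 2 to Tier 3 (roughly 22 hours down to under 2) is the same non-linearity that drives the cost discussion on the following slides.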
REDUNDANCY
• FEAR model targets Tier I and Tier II facilities;
– A Tier IV data centre requirement is mostly specified outside of the data centre footprint
– Tier III or Tier IV facility costs are non-linear (they get expensive very, very quickly) and require a strong business case
– Tier II classifications are the most common data centres;
• N+1 configuration
• Using the FEAR model, Tier II can be scaled to Tier III with modest effort (with early discussions with the facility owner);
– Providing the power density and scalability of Tier III without non-linear cost increases
– Coordination of power and cooling equipment with the building engineer is critical to achieving Tier III (provide the completed model to mechanical designers)
REDUNDANCY
• FEAR model considers these key items;
– True tier classification is dependent on owner maintenance practices (after handover – teething issues)
– Focused on a hybrid classification – Tier II to Tier III
– Using Integrated Business Development process as an upfront relationship to educate the owner on IT options
– Set the classification objectives early at design conceptual stages, to avoid rising costs, overruns and delays
REDUNDANCY
• FEAR model is to be used to provide a conceptual plan to indicate a COST for the data centre
• FEAR model has used several sources for costing;
– Per unit analysis;
• $ / sq ft for architectural and electrical elements
• $ / kW for generator and UPS
• $ / ton combined with $ / Cubic Feet per Minute for HVAC system
• $ / port for network connectivity
– Rely on vendors for budgetary pricing on support equipment
– Always use a per rack basis
– Make sure the owner understands that processing equipment purchases are not within the price of the Data Centre
FEAR MODEL DELIVERY
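The per-unit cost sources above roll up into a simple conceptual estimate. A minimal sketch, in which every rate is a made-up placeholder (real figures come from vendor budgetary pricing, as the slide says):

```python
# Placeholder per-unit rates - illustrative only, NOT vendor pricing
RATES = {
    "sqft": 250.0,   # $/sq ft, architectural and electrical elements
    "kw":   900.0,   # $/kW, generator and UPS
    "ton": 1500.0,   # $/cooling ton, HVAC system
    "port": 120.0,   # $/port, network connectivity
}

def concept_cost(sqft: float, kw: float, tons: float, ports: int) -> float:
    """Rough conceptual cost plan from per-unit rates; quantities are the
    per-rack figures from the ENERGY/FOOTPRINT steps rolled up per block.
    Excludes the processing equipment itself, per the slide above."""
    return (sqft * RATES["sqft"] + kw * RATES["kw"]
            + tons * RATES["ton"] + ports * RATES["port"])

# Hypothetical 3-block facility (quantities are assumptions for illustration)
print(concept_cost(sqft=4500, kw=900, tons=200, ports=1920))
```

Keeping the model per rack means each quantity scales with the block count, so the cost plan updates automatically as blocks are added.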
• Develop several versions to emphasise different focus items;
– Adjacency / flow of work / ease of maintenance
– Critical components
– First cost / Life cycle cost
– Total cost of ownership
– Energy efficiency / Sustainability – a building incurs cost from the day it is constructed to the day it is knocked down
FEAR MODEL OPTIONS
• Outlined why ITS professionals should care
• Reviewed phases of design and construction process (linear Vs integrated design)
• Understand FOOTPRINT-ENERGY-ARRANGEMENT-REDUNDANCY
• Discovered block design approach to Data Centres
Summary & Conclusion
Solutions from Connectix Cabling Systems
• CCS Starlight™ MTP® Optical Fibre Cabling System;
- High Density, Compact, Smallest Diameter Cabling
- 40 GbE and 100 GbE – long life investment
- Pre-terminated (green packaging / recycling)
• CCS Data Centre Rax
- Bespoke design and build to suit each data centre environment
- Co-location, enterprise or patching frames designs with management, intelligence and efficient airflow and cooling options
• CCS Connectix Express Pre-Terminated Copper and Optical Fibre Solutions;
- Onsite design and measurement, pre-terminated (no cable wastage)
- Supplied on reusable reels/drums
- Cat 5e, Cat 6, Cat 6a (copper), OM1, OM2, OM3, OM4, OS1, OS2 (fibre optic)
Connectix Technical Articles
• Log on to www.connectixcablingsystems.com for full access to our data centre and high speed LAN support articles
• Or join us on LinkedIn at;
http://uk.linkedin.com/in/paulmathews12 http://www.linkedin.com/groups?gid=2618209&goback=%2Egdr_1262986473649_1
Connectix Cabling Systems
Global Head Office;
500 Avenue West, Skyline 120, Braintree
Essex, CM77 7AA. UK
Telephone; +44 1376 346 600
Fax; +44 1376 346 620
www.connectixcablingsystems.com
Thanks for your time
Paul Mathews MInstSMM, CCNA, MIEEE
Global Channel Manager