2013 UPTIME INSTITUTE ANNUAL DATA CENTER INDUSTRY SURVEY REPORT AND FULL RESULTS

Matt Stansberry, Director of Content and Publications, Uptime Institute


The third annual Uptime Institute Data Center Industry Survey is an in-depth study, collecting responses via email (February to April 2013) from 1,000 data center facilities operators, IT managers and senior executives from around the globe.

As in previous surveys, the sample is heavily represented by large data center operators in North America. The vast majority of respondents manage more than one site, and the vertical industry makeup is similar to previous years, with over half of the respondents from traditional enterprise companies and the rest made up of colocation and technology service providers.

For the purpose of this report, the term “third party” describes companies that provide computing capacity as a service in any form: Software as a Service, cloud computing, multi-tenant colocation, or wholesale data center providers. The enterprise data center operators are made up of banking, manufacturing, healthcare, retail, education, government and other industries.

Lopsided data center budget growth

Data center budgets are expanding worldwide. In 2013, 36% of data center organizations are receiving large year-over-year budget increases, versus just 32% in 2012. That number has grown from 27% in 2011.

Nearly one third of data center operations in North America and Europe are receiving 10% or greater budget increases year-over-year. Developing economies in Latin America and Asia are seeing huge growth, twice as much expansion as the developed countries.

The other aspect of this growth is that it’s weighted toward the third party – 63% of third-party data center operators report a large budget increase, versus 48% in 2011.

Only 25% of enterprise data center operators report large budget increases. In fact, since 2011 a growing percentage of enterprise data center operators have reported budget decreases, and that number jumped to 21% in 2013. Isolating the North American enterprise responses, 23% report budget decreases. In total, 57% of North American enterprise operators report no budget growth.

This data suggests third-party data center service providers are growing at the expense of in-house IT operations. This isn’t the death knell for the enterprise-owned data center, but it is reflective of a growing shift we’ve noted in these surveys and anecdotally for the last few years.

Increasingly, organizations that have traditionally owned and maintained their own data centers, companies that even a few years ago would have rejected an outsourcing option out of hand, have gone partially or wholesale into commercial data center services in some form.

According to our survey of very large enterprise data center operators at Uptime Institute Charrette in November 2011, 85% of respondents employed some form of third-party compute capacity as a supplement to their existing internal digital infrastructure portfolio.

Today, the onus is on the enterprise operator to demonstrate that a new in-house data center build is the best choice for the company. The burden of articulating value has shifted from the third-party provider to the internal enterprise staff. It’s up to enterprise data center operators to be educated advocates for the services they are providing their organizations.

That’s not to say that a third-party data center offering isn’t the best option for an enterprise company. But making a decision between an in-house data center build and outsourced computing infrastructure is fraught with complexity and emotional influences due to potential staffing impacts. This is why Uptime Institute began development of the FORCSS Methodology in 2011, a process to document, compare and communicate the value of in-house data center resources against other deployment alternatives in a concise, repeatable format.

Unfortunately, the reality is that many enterprise data center operators are not effectively collecting or presenting cost and performance data to their executives. This lack of data can drive a misinformed outsourcing decision.

The third-party data center service providers are certainly focused on efficiency, cost and performance.

Over 70% of third-party operators report data center cost and performance information to the C-suite on a monthly or more frequent basis, versus 42% of enterprise operators. Nearly 40% of enterprise data center owners have no scheduled reporting on data center issues back to the C-suite, versus only 14% of the third-party operators.


Allocating the cost of IT inefficiency

A big part of focusing on cost and performance in the data center has to do with managing and allocating the power bill. For three years running, less than 20% of companies reported that their IT organizations pay their data center power bill, and the vast majority of companies allocate this cost to the facilities or real estate budgets.

Obviously, organizations are resistant to change on this issue. Uptime Institute has been like a broken record on this topic, recommending changing the allocation to IT since 2006, but the statistic has held steady for years. Many IT practitioners say their departments have visibility into, but no direct responsibility for, the power bill – dotted-line reporting. This is like a kid running around a mall with his or her parent’s credit card.

Data center efficiency gains have hit a plateau

This statistic has been cited in the trade press recently: According to Uptime Institute’s 2013 survey of data center operators, only 50% of respondents in North America said they considered energy efficiency to be very important to their companies. That was down from 52% last year and 58% in 2011.

Is the data center efficiency crisis over? Or is the industry walking away from this with the job half done?

The Green Grid’s Power Usage Effectiveness (PUE) metric was discussed by Christian Belady at the inaugural Uptime Institute Symposium in 2006, and in the years since, PUE has become the industry-preferred metric for measuring infrastructure energy efficiency for data centers.
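For reference (the survey report itself does not restate the formula, so this is the standard Green Grid definition): PUE is the ratio of total facility energy to the energy delivered to the IT equipment, which means a value below 1.0 is impossible by construction:

$$\mathrm{PUE} \;=\; \frac{E_{\text{total facility}}}{E_{\text{IT equipment}}} \;\geq\; 1$$

A PUE of 2.5 therefore implies 1.5 watts of cooling, power distribution and other overhead for every watt consumed by IT gear.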

In 2007, Uptime Institute surveyed its Network Members (a user group of large data center owners and operators), and found an average PUE of 2.5.

The average reported PUE improved from 2.5 in 2007 to 1.89 in Uptime Institute’s inaugural data center industry survey in 2011, and to 1.65 in this year’s survey.

It’s worth mentioning that 6% of respondents in the 2013 survey reported a PUE of less than 1.0 – which is in fact impossible – so take these self-reported PUE numbers with a grain of salt. The important thing to note is that the biggest infrastructure efficiency gains happened five years ago.

So how did the industry make those initial improvements? A lot of these efforts were simple fixes that prevented bypass airflow, like ensuring hot-cold aisle arrangement in data centers, installing blanking panels in racks and sealing openings in the raised floor. Also, the data center infrastructure vendors responded, improving efficiency on UPS and power distribution systems.

But today the industry has reached the point of diminishing returns. Data center facilities teams led “Green IT” efficiency initiatives because the cost of inefficiency was allocated to their department.

Many data center facilities teams have done what they can. The more advanced facilities infrastructure technologies and operational changes come with significant cost, either in capital expense or in-house expertise.

Companies whose IT operations are a huge portion of their cost structure, or that simply have the scale to do it, have radically outpaced the rest of the industry in adopting leading-edge data center infrastructure efficiency best practices.

Outside air economization, airflow containment and higher server inlet air temperatures require an increased level of operational sophistication and acceptance of higher risk. Organizations need to have the expertise and a significant return on the investment in order to use these techniques and technologies effectively, which leaves out the bulk of smaller data center operators.

For example, the majority of data centers are not operating near the upper limit of server inlet air temperature recommended by the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE): 80.6 degrees Fahrenheit.

According to the survey, nearly half of all data centers reported operating at 71 to 75 degrees Fahrenheit, largely unchanged from last year’s survey. The next largest temperature segment, 65 to 70 degrees, accounted for 37% of data centers and is also unchanged from last year.

Nonetheless, the survey points to changes at the margins. The most noticeable one is the percentage of data centers operating at temperatures of more than 75 degrees. That figure increased from 3% to 7% over last year. It is still a small percentage, but it is representative of the leading-edge companies pushing efficiency.

Another sign that data centers are warming slightly: In 2011, the survey reported that 15% of the data centers were at temperatures below 65 degrees. But in the last two surveys, only 6% are running below 65 degrees.

To run near the ASHRAE limits, an organization needs to have the operations expertise on staff to manage a higher-risk environment and to have precise controls over the cooling.


Yet only 15% of the data center managers responding to the survey said they were measuring and controlling air temperatures at the server inlet, which is the most accurate location. Nearly a third of the respondents are managing inlet air temperature at the room level, which is the least accurate method.

Obviously there is room for some further cooling efficiency improvements. But the next round of facilities efficiency gains will require significant staff expertise and investment.

On the other hand, there are many data center efficiency opportunities on the “left side of the decimal” of the PUE equation – the IT load.
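To make that arithmetic concrete (illustrative numbers, not survey data): because the IT load is the denominator of PUE, shrinking it cuts total consumption even when the ratio itself does not move:

$$E_{\text{total}} = \mathrm{PUE} \times E_{\text{IT}}: \qquad 1.65 \times 1{,}000\ \mathrm{kW} = 1{,}650\ \mathrm{kW} \quad\longrightarrow\quad 1.65 \times 800\ \mathrm{kW} = 1{,}320\ \mathrm{kW}$$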

Killing comatose servers, deleting outdated applications

Enterprise IT departments are inherently looking forward, providing IT services for dynamic business demands. They are also tasked with keeping mission-critical legacy systems running, which are typically fragile and resource intensive. There is little incentive to look back at existing assets to make IT operations more efficient. And yet, that is where the next advances in IT efficiency will have to take place.

Companies need to ensure IT departments are held accountable for defining and implementing energy-efficient projects. Without budget incentives and an executive-level directive, IT departments are not going to audit and remove comatose zombie servers, or go in and root out unused, duplicate applications. This is why Uptime Institute launched the Server Roundup contest in October 2011 – to raise awareness about the removal and recycling of comatose and obsolete IT equipment and reduce data center energy use.

Uptime Institute invited companies around the globe to help address and solve this problem by participating in the Uptime Institute Server Roundup, an initiative to promote IT and Facilities integration and improve data center energy efficiency.

According to Uptime Institute’s estimates based on industry experience, around 20% of servers in data centers today are obsolete, outdated or unused. Decommissioning one rack unit (1U) of servers can result in a savings of $500 per year in energy costs, an additional $500 in operating system licenses and $1,500 in hardware maintenance costs.
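A minimal sketch of that arithmetic, using the per-1U figures quoted above (the 1,000-server estate in the example is hypothetical):

```python
# Per-1U annual savings figures quoted in the report above.
ENERGY = 500        # USD/year in energy costs
OS_LICENSES = 500   # USD/year in operating system licenses
MAINTENANCE = 1500  # USD/year in hardware maintenance

def annual_savings(servers_removed: int) -> int:
    """Total yearly savings from decommissioning `servers_removed` 1U servers."""
    return servers_removed * (ENERGY + OS_LICENSES + MAINTENANCE)  # $2,500 each

# Hypothetical example: a 1,000-server estate where 20% of machines are comatose.
print(annual_savings(200))  # -> 500000, i.e., $500,000 per year
```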

The problem is likely more widespread than reported. According to the 2013 survey data, only 14% of respondents believe their server populations include 10% or more comatose machines. Yet nearly half of the respondents have no scheduled auditing for identifying and removing unused machines.


Removing comatose servers isn’t easy – it’s an uphill battle against corporate culture. It’s largely a manual process that requires staff time, while many organizations experienced staff cuts in the recent financial crisis. And server hardware vendors have spent years convincing IT departments that machines are cheaper than people, so buy lots of them and let them fail in place.

But the fact is that disciplined hardware decommissioning can provide a significant financial impact, and it’s a great starting point toward better IT-Facilities team integration and better IT utilization, which is the ultimate goal of Green IT.

Green certifications and carbon reporting

One of the fascinating trends this year is the large increase in companies pursuing “green” certifications like the U.S. Green Building Council (USGBC) LEED program. Over half of the survey respondents reported pursuing a green certification, and of the largest data center operators, 77% are seeking recognition for their green initiatives. These programs have often been criticized for driving ineffective behavior, for example, installing high-efficiency washing machines or bike racks. But many corporations today see this stamp of approval as an important step in a major data center capital project.

While these certifications are increasing in popularity, fewer data center operators are quantifiably tracking their data centers’ environmental impact. Carbon reporting in the data center actually dropped year-over-year. Today, only 21% of data center operators are reporting their site’s carbon emissions, and less than one third track water usage.

Data center capacity trends

Data center construction has been booming over the past several years but may have slowed down. Globally, 70% of companies built a new site or significantly renovated a data center in the past five years, versus 80% reporting in 2012 and 2011.

Of large data center operators managing over 5,000 servers, 81% built new infrastructure. Colocation/multi-tenant data centers are leading growth at 85%, but enterprise growth has still been impressive at 66%.

Europe, Asia and Latin America built out data center capacity in smaller increments – over 40% of data center installations in these locations are under 1 MW, versus only 30% in North America.

So how much does this capacity cost? For its 2013 survey, Uptime Institute asked:


What was the approximate data center facility project cost, in terms of $ million per megawatt of design IT load ($M/MW), including: architecture and engineering fees, land, building core, shell, mechanical and electrical systems and fire protection? (Please do not include costs for IT gear, racks, structured cabling, or IT migration into the new data center. If this is a multiple-phase project, please determine the $M/MW based on projected cost for full buildout.)

The answer was surprising. Nearly half of the respondents reported paying $5 million per MW or less.

This survey is a fairly blunt tool. This was the first time Uptime Institute asked this question: there was no qualifying question about Tier Level, and no accounting for local currency.

Uptime Institute field experience, and feedback from the preliminary survey results, put the actual cost somewhere closer to $10 million per MW.
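The spread matters enormously at scale (an illustrative calculation, not a survey figure): for a hypothetical 10-MW design IT load,

$$10\ \mathrm{MW} \times \$5\mathrm{M/MW} = \$50\mathrm{M} \qquad \text{versus} \qquad 10\ \mathrm{MW} \times \$10\mathrm{M/MW} = \$100\mathrm{M},$$

a $50 million difference on a single project.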

Uptime Institute will hone this question in future surveys, but for now, here are the takeaways: Larger data center operators spend more on redundancy, automation, and efficiency features that increase cost. If these numbers are accurate, the cost of building out data center infrastructure has come down from recent industry estimates.

Cloud computing adoption and hurdles

A big part of this survey is tracking technologies and trends with the potential for rapid adoption. While the three technologies covered in the last part of the survey (cloud computing, DCIM and prefab modular data centers) are not exactly new, they have been poised for major growth and mainstream deployment.

Global adoption of public cloud computing in our 2013 survey is 28%, up slightly from 25% in 2012. The huge jump in cloud adoption (at least in the survey sample) occurred between 2011, at 2% adoption, and 2012, at 25% deployment.

For comparison, 451 Research’s ChangeWave survey data from July 2010 shows 11% public cloud adoption, steadily increasing to 30% public cloud deployment in April 2012.

But remove the third-party data center providers from the sample, and growth in public cloud adoption actually was significant this year. Enterprise public cloud adoption was only 10% in 2012 and jumped to 17% in the 2013 results.

Contrary to conventional wisdom, large companies are twice as likely to deploy public cloud as smaller data center operators: around 40% adoption for companies managing over 5,000 servers versus 22% for companies managing under 1,000.


Private cloud adoption dropped: 44% deployment in 2013, versus 49% in 2012, and the number of companies in the planning or considering stage with private cloud dropped from 37% in 2012 to just 25% in 2013. This seems to suggest that the companies that could make use of a private cloud platform have made the investment, and companies on the fence are either going to public cloud or walking away from the hype cycle.

There are a lot of factors driving public cloud adoption, ranging from speed of deployment and scalability to potential cost savings. But the breakout driver for cloud computing adoption in 2013 is end-user or customer demand. In 2012, only 13% of respondents listed customer demand as a top driver, versus 43% in 2013, making it the leading driver over all other factors driving public cloud deployments.

As to the barriers to adoption, security concerns have been the leading barrier since the term “cloud computing” was coined. Around half of IT and Facilities management staff listed security as the main barrier, versus just one third of senior execs. Increasingly, senior execs point to a lack of cloud computing skills and expertise as a primary barrier to adoption.

DCIM adoption through the roof?

Data center infrastructure management (DCIM) software adoption seems to have skyrocketed. An astounding 38% of respondents reported that they have deployed DCIM software. This DCIM adoption statistic is one of the significant findings in the survey.

Uptime Institute asked this question about DCIM adoption slightly differently in previous surveys, so parallel data to compare year-over-year is not available. But industry watchers and DCIM vendors have suggested that 38% adoption is too high.

So where is the discrepancy?

Uptime Institute’s audience base tends to consist of larger, more advanced data center owners and operators – Uptime Institute Network members, Tier-certified data center owners, etc. – and is therefore more likely to be ahead on leading-edge technology adoption. Also, the term DCIM has been used to include anything from a home-grown system of spreadsheets and monitors to an advanced suite of fully integrated tools spanning multiple data centers.

So this year, Uptime Institute provided respondents with 451 Research’s definition of DCIM before the question, in order to provide clarity:


DCIM is defined as a data center-wide or organization-wide system or suite that collects and manages information about a data center's assets, resource use and operational status.

The question specifically told respondents to exclude spreadsheets, BIM-type drafting programs and basic building management systems (BMS).
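As a purely illustrative sketch of that definition (not any vendor’s schema; all field names here are hypothetical), the per-asset record a DCIM system aggregates might look like:

```python
from dataclasses import dataclass

# Hypothetical, minimal record of the kinds of data a DCIM suite tracks:
# assets, resource use and operational status, per the definition above.
@dataclass
class AssetRecord:
    asset_id: str            # e.g., a server serial number
    location: str            # site / room / rack / rack-unit position
    power_draw_watts: float  # measured resource use
    inlet_temp_c: float      # operational status reading
    last_audit_date: str     # when the asset was last physically verified

record = AssetRecord("SRV-0417", "DC1/Room2/Rack14/U22", 310.0, 23.5, "2013-02-11")
print(record)
```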

And the adoption rate is even higher in Europe, which leads global DCIM adoption at nearly 50%. Also, large organizations and colocation companies have adopted DCIM in the 50% range.

That’s not to say that these companies have fully installed the tools, or are reaping the rewards of the software’s capabilities. For many companies, it’s just the beginning of a long cycle of implementation and maintenance of the systems.

DCIM can be a major investment, both from the financial and human resources perspective. Over 60% of respondents globally said cost was the primary barrier to DCIM adoption.

While small companies are deploying inexpensive DCIM tools (28% report spending less than $100k), the largest companies are spending significantly more for scalability and features (17% report spending over $400k).

Seventy-one percent of respondents listed “improving capacity planning” as a top driver for buying DCIM. All other drivers were distant runners-up. Capacity planning mistakes are expensive and can cripple a business. The industry is hungry for any solution that will make this exercise less of a guessing game.

Prefab modular data centers

This last section of the survey has to do with modular, prefabricated data center technology. While the market seems open-minded about the potential for this kind of data center design, the actual deployment numbers are low.

Only 8% of data center operators have deployed prefabricated modular data centers, and an additional 8% are planning to deploy the prefab modular products. Both the installed and planned rates for modular adoption were the same as in 2012.

Fifty-three percent of survey respondents reported no interest in modular, prefab designs and components, versus 42% last year. Deployment among the largest operators is slightly higher at 15%, but even that represents zero growth over last year.


We’re not saying the market won’t grow again, just that it’s slowed down significantly compared to what might have been expected.

Survey respondents were mixed on what kind of prefab modular designs they would most likely deploy, but the highest percentage preferred prefab power and cooling blocks used with traditionally built computer rooms.

Also, prefab modular customers seem to be divided on how much they’re willing to pay to deploy this technology. Compared to last year, an increasing percentage is willing to pay more for a prefab deployment (based on meeting a 3-5 month project completion deadline), while an increased percentage demands to pay less for a prefab modular data center, and a large percentage would demand to pay substantially less.

As noted in previous surveys, the prefab modular data center seems to resonate with a specific subset of operators whose business requirements match up well with this kind of deployment strategy. The rest of the market remains unconvinced.

Conclusions:

• Data center budgets are growing overall, but the most significant growth is occurring in the third party, possibly reflecting a shift in spending away from enterprise-owned data centers.

• Enterprise operators are unwilling or unable to report data center cost or performance metrics to C-level executives, whereas third-party operators are very adept at tracking and reporting this information.

• Facilities managers have shouldered the data center power bill and have made significant efficiency improvements over the past five years. IT departments remain unaccountable unless faced with an eight-figure capital expense for a new data center build.

• A small percentage of data center owners are reporting carbon and water resource usage for their operations, versus significant adoption of “green” certifications from USGBC and others.

• Increasingly, enterprise companies are adopting public cloud computing, and the biggest driver is end-user demand.

• Reported DCIM adoption is very high among this survey sample, with the major driver being capacity planning.

• Prefab modular data center growth has stalled out, and adoption is still in the single digits.

This concludes the 2013 Uptime Institute Annual Data Center Industry Survey report. We sincerely appreciate the participation of all of the respondents.

Please send questions and feedback to Matt Stansberry, Uptime Institute Director of Content and Publications, at [email protected].

This paper provides analysis and commentary on the Uptime Institute survey responses. Uptime Institute makes reasonable efforts to facilitate a survey that is reliable and relevant. All participant responses are assumed to be in good faith. Uptime Institute does not verify or endorse the responses of the participants; any claims to savings or benefits are entirely the representations of the survey participants.
