Conventional data centre layouts using legacy power and cooling approaches are not meeting today’s business demands.
It is not hard to see why blades make such an attractive option for high-density clusters. Blade servers are just what the name implies: thin and sleek. They are slim enough, in fact, to sit side by side in a single chassis. While each is an independent server, blades can share resources with one another.
The whole chassis operates on one power supply, one set of input/output devices and one set of connections to the network and storage systems. This considerably reduces the servers' footprint and allows for centralisation in one secure physical location. This, in turn, reduces wiring and network traffic and even improves back-up and business continuity.
Although the benefits of blade servers are well known, to make the most of the investment a business must ensure they are managed in the right environment. True, the small footprint is appealing for managing resources. However, failure to recognise the extent of the power and cooling problems associated with blade servers can cause serious deployment challenges.
C-level managers deal with critical business needs every day. Common business drivers include increasing IT productivity by utilising existing resources and assets; becoming an adaptable, real-time enterprise that provides up-to-the-minute information for critical decision-making; and emphasising the business value of IT.
Such goals inevitably affect IT strategy, and vice versa. As these drivers cascade through organisations, they create challenges for IT decision-makers, such as deploying new technology (blade servers, for example) or changing the IT environment through server consolidation. Both of these changes can result in the need for high-density deployment.
Conventional approaches may be limited
While high density is on the rise, studies show that conventional data centre layouts using legacy power and cooling approaches are limited in their support of these deployments.
New technologies such as blade servers are producing IT loads that greatly exceed rated capacity. Data centres are typically unable to provide information about localised density capability, so users do not realise there is a problem until they attempt deployment. High-density applications are driving network critical physical infrastructure (NCPI) challenges beyond the capabilities of today's data centre.
These high-density deployments cause rack power consumption to vary dramatically. Currently, the average rack power consumption is about 1.7 kW, while the maximum power in a fully populated high-density enclosure can exceed 20 kW. Such loads greatly exceed the power and cooling design capabilities of the typical data centre. Conventional data centre layouts using legacy cooling approaches struggle to achieve per-tile airflow above 300 cfm and cannot practically support more than 6 kW per rack. In addition, the higher air velocities in high-density applications cause airflow challenges of their own.
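As a rough sanity check on these figures, the airflow a rack demands can be estimated from the standard sensible-heat relation Q (BTU/hr) ≈ 1.08 × CFM × ΔT (°F). A minimal sketch, assuming a typical 20°F (about 11°C) supply-to-return temperature rise (the ΔT is an assumption, not a figure from this article):

```python
# Estimate the raised-floor airflow a rack load demands, using the
# standard sensible-heat relation: Q [BTU/hr] ~= 1.08 * CFM * dT [deg F].
# The 20 deg F temperature rise is an illustrative assumption.

BTU_PER_KW_HR = 3412  # 1 kW of IT load dissipates roughly 3412 BTU/hr


def required_cfm(rack_kw: float, delta_t_f: float = 20.0) -> float:
    """Airflow (cubic feet per minute) needed to carry away rack_kw of heat."""
    return rack_kw * BTU_PER_KW_HR / (1.08 * delta_t_f)


# Average rack, legacy practical limit, and a full blade enclosure:
for kw in (1.7, 6.0, 20.0):
    print(f"{kw:>5.1f} kW rack -> {required_cfm(kw):,.0f} cfm")
```

Under this assumption, an average 1.7 kW rack fits within a single 300 cfm floor tile, but a fully populated 20 kW blade enclosure needs on the order of 3,000 cfm, roughly ten tiles' worth of airflow, which is why legacy layouts become impractical well before that point.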
There are several approaches to defining high density in an IT environment, including at room, row and rack level. Clearly defining the power density is a critical step in providing the appropriate power and cooling infrastructure, and measuring power density at rack level gives the most accurate reflection of the NCPI infrastructure required to support it.
Assess the data centre
An assessment of a data centre's existing conditions is essential to understanding its capabilities before high-density deployment. This assessment may be superficial if the deployment is of the order of one rack of blades or less. For more complex deployments, however, the depth and detail of the assessment must increase substantially.
For complex deployments, data centre simulations using computer models are desirable to determine the 'as is' conditions and to verify the proposed design. While all data centre operators should have a rudimentary knowledge of data centre assessments, specialists are recommended for complicated, high-cost or high-risk installations.
Make cooling predictable
Addressing NCPI trouble areas can also be difficult. High-density deployment raises cooling challenges such as supplying cool air, removing hot exhaust air, and keeping that exhaust away from the intakes. These challenges are best addressed by making cooling predictable, which is accomplished by closely coupling power and cooling and by neutralising hot air.
To best deploy high density, a rack- and row-based design (versus the traditional legacy room-level approach) can minimise costly oversizing.
APC has studied this issue in detail and has found that data centres and network rooms are routinely oversized to three times their required capacity. Oversizing drives excessive capital and maintenance expenses, which are a substantial fraction of the overall lifecycle cost.
In addition, while there are several approaches to defining 'high density', measuring power density at rack level gives the most accurate reflection of the NCPI infrastructure required to support it. The demanding requirements of high density may initially tempt one to oversize for safety's sake; however, the better solution is to right-size using rack- and row-based designs.
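The oversizing penalty can be made concrete with a purely illustrative calculation. All of the figures below (the load, the cost per kW, and the module size) are hypothetical assumptions for illustration, not APC data; the point is simply that building to three times the required capacity on day one ties up capital that a modular, rack-based design deploys only as the load grows:

```python
# Illustrative comparison of day-one oversizing vs modular right-sizing.
# All figures (load, cost per kW, module size) are hypothetical assumptions.

ACTUAL_LOAD_KW = 100   # the load the room actually reaches
OVERSIZE_FACTOR = 3    # rooms routinely built to 3x required capacity
COST_PER_KW = 2000     # hypothetical installed cost of power/cooling per kW
MODULE_KW = 20         # hypothetical rack/row-based capacity increment

# Legacy approach: build full oversized capacity up front.
oversized_capex = ACTUAL_LOAD_KW * OVERSIZE_FACTOR * COST_PER_KW

# Right-sizing: deploy only as many modules as the load requires.
modules_needed = -(-ACTUAL_LOAD_KW // MODULE_KW)  # ceiling division
rightsized_capex = modules_needed * MODULE_KW * COST_PER_KW

print(f"Oversized build:   {oversized_capex:,} "
      f"(capacity {ACTUAL_LOAD_KW * OVERSIZE_FACTOR} kW)")
print(f"Right-sized build: {rightsized_capex:,} "
      f"(capacity {modules_needed * MODULE_KW} kW)")
```

With these assumed numbers the oversized build costs three times the right-sized one for the same delivered load, before any of the ongoing maintenance expense that oversized capacity also incurs.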
Limited time to deploy?
Overall, standardisation and its close relative, modularity, improve business value by creating wide ranging benefits in NCPI that streamline and simplify every process. These range from initial planning to daily operation, with significant positive effects on all three major components of NCPI business value: availability, agility and total cost of ownership.
Failure to adopt modular standardisation as a design strategy for NCPI is costly on all fronts: unnecessary expense, avoidable downtime and lost business opportunity.
Neill Schreiber, country sales manager: South Africa, at APC