

Data Center Infrastructure: Lifting Performance to the Cloud

Cloud computing is the utility of the future, according to industry pundits. Sooner or later, you will use the cloud, either in your own data center or with a service provider.  If you use Hotmail, you’re accessing the cloud. Your ERP payroll application is most likely in a cloud. YouTube videos are streaming from a hosted cloud site.

Designing a cloud-friendly data center infrastructure is imperative for ensuring your enterprise’s ability to support growing demand from everyone touching IT -- from your own data center pros to employees, customers, and suppliers.

A data center design strategy that employs both internal and external cloud approaches gives you new, flexible capacity and maximum availability. It also ensures that new capital expenditures are directed toward innovation, continuity and competitive advantage.

In the world of the cloud, there is no room for stranded capacity, non-virtualized servers, or taking systems down over the weekend for routine maintenance. Additional IT capacity is deployed quickly, with zero impact on application or data availability. At the same time, demands on speed and performance are higher. What’s more, all of this needs to be monitored in real time, so you can minimize hot spots and balance loads.

Best Practices to Prepare Data Center Infrastructure for Cloud Computing

By optimizing existing facilities to improve the efficiency and elasticity of the data center infrastructure, enterprises can position themselves for performance and agility. They also reduce the risk of compromising the high availability, supported by uninterruptible power and other infrastructure solutions, that mission-critical applications require.

From an IT perspective, support for rapid provisioning and deployment appeals to growing enterprises. Because cloud computing architectures offer on-demand capacity far beyond a fixed in-house build-out, new applications can be deployed immediately without extensive provisioning, speeding time-to-market. Enterprises can take three critical steps now to optimize their data center infrastructures in preparation for cloud computing.

Employ a High-Density System Configuration

First, data center management must take steps to ensure that critical systems are backed with adequate cooling support optimized for virtualized, high-density environments.

Rack- and row-based data center cooling solutions are the ideal choice for cooling high-density architectures thoroughly and efficiently. By placing the cooling element close to heat sources – typically in the rack or rack row – you can expect to gain up to 50 percent energy savings over traditional perimeter cooling architectures.

Cold aisle containment systems can be retrofitted easily into existing facilities, and they optimize cooling by sealing off the cold aisle, whether across the room or within individual rack rows, so that hot and cold air do not mix.

Further, intelligent row-based cooling systems, such as the Liebert CRV, offer the added benefit of increasing data center power efficiency at reduced loads by “flexing” as operating conditions change. This allows enterprises to meet the extreme cooling needs of high-density cloud computing environments during peak demand periods, without sacrificing operating efficiency during non-peak hours.
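
The efficiency gain from flexing follows from the fan affinity laws: airflow scales roughly in proportion to fan speed, while fan power scales roughly with the cube of speed, so slowing the fans at partial load cuts energy use far more than proportionally. The short Python sketch below illustrates the arithmetic with purely hypothetical ratings; it is not a model of the Liebert CRV’s actual control logic.

    # Illustrative sketch: why "flexing" airflow to match load saves energy.
    # Fan affinity laws: airflow scales roughly with fan speed, while fan power
    # scales roughly with the cube of speed. All ratings here are hypothetical.

    RATED_FAN_POWER_KW = 3.0  # assumed rated fan power for one cooling unit

    def fan_power_kw(airflow_fraction):
        """Approximate fan power when delivering a fraction of rated airflow."""
        return RATED_FAN_POWER_KW * airflow_fraction ** 3

    for fraction in (1.0, 0.75, 0.5):
        print(f"{fraction:.0%} airflow -> about {fan_power_kw(fraction):.2f} kW fan power")

At half airflow, fan power drops to roughly one-eighth of its rated value in this simplified model, which is why matching cooling output to the actual heat load matters so much during non-peak hours.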

Optimize the Power Architecture for High Availability

For the cloud to work, you need scalable precision cooling systems supporting your high-density infrastructure. It is equally critical to address potential vulnerabilities within the data center power architecture.

For enterprises seeking to achieve scalability without impacting availability, N+1 redundancy remains the most cost-effective option for high availability data centers and is well-suited for high density cloud computing environments. In a parallel redundant (N+1) system, multiple UPS modules are sized so that there are enough modules to power connected equipment (N), plus one additional module for redundancy (+1).
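
As a rough illustration of the sizing arithmetic, the Python sketch below computes an N+1 configuration for a hypothetical 800 kVA connected load served by 250 kVA modules; the figures are assumptions chosen for the example, not a recommendation.

    import math

    def ups_modules_n_plus_1(load_kva, module_kva):
        """Modules needed to carry the load (N), plus one redundant module (+1)."""
        n = math.ceil(load_kva / module_kva)
        return n, n + 1

    # Hypothetical figures: an 800 kVA connected load and 250 kVA modules.
    n, total = ups_modules_n_plus_1(800.0, 250.0)
    print(f"N = {n} modules carry the load; install {total} so any one module can fail")

In this example, four modules carry the load and a fifth provides redundancy, so a single module can be taken offline for maintenance or fail without interrupting power to connected equipment.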

Next, deploy intelligent UPS systems that feature redundant components, tolerance for input power faults and integrated battery monitoring. These systems can achieve up to 97 percent efficiency through an “always-on” inverter design.
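
To put the efficiency figure in perspective, the brief sketch below compares continuous UPS losses at 97 percent efficiency and at a hypothetical 92 percent comparison point for an assumed 500 kW IT load; all numbers are illustrative only.

    # Illustrative only: what UPS efficiency means in continuous losses.
    # The 500 kW load and the 92 percent comparison point are assumptions.

    IT_LOAD_KW = 500.0
    HOURS_PER_YEAR = 8760

    for efficiency in (0.97, 0.92):
        loss_kw = IT_LOAD_KW / efficiency - IT_LOAD_KW
        print(f"{efficiency:.0%} efficient UPS: about {loss_kw:.1f} kW of loss, "
              f"roughly {loss_kw * HOURS_PER_YEAR:,.0f} kWh per year")

Every kilowatt dissipated in the UPS also has to be removed by the cooling system, so small efficiency differences compound across the power and cooling infrastructure.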

Preventive maintenance is another critical component in maximizing the performance, life and availability of the data center’s end-to-end power infrastructure.

Manage Complexity through Infrastructure Management

To create an optimal cloud computing environment, data centers need to bridge the gap between the physical layer of the data center infrastructure (primarily power, cooling and facility resources) and the IT infrastructure (the actual compute, storage and communications activity).

Data center infrastructure management solutions provide real-time visibility into critical systems across the data center’s physical infrastructure, along with automated management capabilities. This reduces the need for specialized IT expertise while achieving high levels of availability and operating efficiency.

Proactive changes to critical systems can be automated based on real-time data, multiplying the effectiveness of skilled staff while delegating routine processes to automated solutions. These include asset and capacity optimization, predictive analysis, energy forecasting and preventive maintenance scheduling. Specialized sensors and switches also can be integrated to pinpoint root causes and isolate failures quickly and accurately.
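
As a simplified illustration of this kind of rule-based automation, the Python sketch below checks rack inlet temperatures and UPS load against alarm thresholds and emits alerts. The data structures, thresholds and sensor values are hypothetical and do not represent any particular DCIM product’s interface; the 27 °C figure corresponds to the upper end of the ASHRAE-recommended inlet temperature range.

    from dataclasses import dataclass
    from typing import List

    # Hypothetical alarm thresholds; real DCIM tools let operators set these per policy.
    INLET_TEMP_ALARM_C = 27.0   # upper end of the ASHRAE-recommended inlet range
    UPS_LOAD_ALARM_PCT = 80.0

    @dataclass
    class RackReading:
        rack_id: str
        inlet_temp_c: float
        ups_load_pct: float

    def evaluate(readings: List[RackReading]) -> List[str]:
        """Flag hot spots and shrinking power headroom from live readings."""
        alerts = []
        for r in readings:
            if r.inlet_temp_c > INLET_TEMP_ALARM_C:
                alerts.append(f"{r.rack_id}: hot spot, inlet {r.inlet_temp_c:.1f} C")
            if r.ups_load_pct > UPS_LOAD_ALARM_PCT:
                alerts.append(f"{r.rack_id}: UPS load {r.ups_load_pct:.0f}%, headroom policy exceeded")
        return alerts

    # Example poll cycle with made-up sensor values.
    sample = [RackReading("row3-rack07", 28.4, 62.0),
              RackReading("row3-rack08", 24.1, 85.5)]
    for alert in evaluate(sample):
        print(alert)

In practice the readings would come from networked sensors, and the alerts would feed dashboards, ticketing or automated responses rather than print statements.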

For data center management, building a robust data center infrastructure for internal cloud computing will help future-proof your organization. That means tying up loose ends: eliminating single points of failure, employing redundant power and cooling, and deploying a flexible architecture with expandable capacity. Only with a hardened infrastructure will you be ready for your critical apps and services to be lifted into the cloud.

“We’re going after economies of scale. If you buy a 250 kVA UPS system for a price of X, it does not cost three times X to buy a 750 kVA UPS system. The larger system is much more cost-effective. We maintain that cost-effective approach by consistently using larger building blocks with modular scalability and phase-build designs.”

- Joe Kava, chief operating officer, RagingWire
