The Korea Herald

Keep eye on data centers’ electricity bills

By Korea Herald

Published: Feb. 20, 2013 - 20:25

Byun Seong-jun
In 2013, an unfamiliar item may start landing on the desks of many chief information officers: the electricity bill for their corporate data centers.

My bet is that bill will be unwanted, but not unexpected. For years, the managers in charge of these facilities handled those costs while CIOs ― well, most never saw or cared about that side of the business.

But after a dozen years of designing data centers in an ad hoc way, throwing inefficient, commodity servers at the problem of ever-rising demand for data and processing speeds, the CIO is becoming responsible for paying for the energy those systems are gobbling up.

A reckoning is coming for this server sprawl and the rising cost of electricity, which we used to be able to treat as a commodity. Some data centers now use more electricity than a company’s manufacturing or other key business operations. A whopping 70 percent of the average corporate IT budget is spent on basic operations and maintenance, according to a new study from IDG commissioned by IBM.

These higher costs aren’t going away because rising competition for energy is now a global reality. Stuck with an electricity bill that threatens to eat up an even bigger piece of the tech budget, leaving less to invest in new business models and processes, CIOs need to push data center efficiency more urgently than ever.

There’s a lot of learning to plug into. Forward-looking companies such as Google and Facebook are focused on deploying better cooling systems and software to slash power waste, providing a playbook for other companies to follow.

But attacking the operations side of the equation doesn’t get at the real issue dogging the average data center: utilization. Concern about downtime, a traditional lack of planning, and little cost pressure in the past created the situation we’re all too familiar with today, where servers simply sit idle most of the time. Even though servers have become more energy-efficient, underutilization remains doggedly unchanged at the average data center.

It’s commonly understood that the utilization of a commodity x86 server, the widespread deployment of which contributed to today’s problem, is around 5 percent to 12 percent. And virtualization will only get us so far. Even with the most advanced virtualization of x86 servers, the data centers using them still run at under 50 percent of capacity.

Instead, it’s time for CIOs to reconsider some of their long-held precepts and take advantage of strides made in the past few years to tackle underutilization.

First, consider the impulse to overbuild. It’s an understandable one. No one wants to be held responsible when the digital services that are coming to define our work and social lives go down. So, a fear of crowding too many applications onto too few servers, or of not having capacity ready when a spike occurs, ends up driving a massive overinvestment in capacity.

But this is reactive thinking. It hasn’t kept up with the industry innovations being made. Smarter, more integrated end-to-end solutions are available, ones built from the ground up with utilization and power efficiency as design goals.

For instance, IBM’s System Z and Power Systems have utilization rates of nearly 100 percent. In part, that’s due to their virtualized environment, which uses logical partitions for maximum efficiency. The systems also make the most of hypervisor technologies that let multiple images of an operating system run on the same machine at the same time. Just as crucial is the shared-everything design philosophy of System Z, which allows thousands of Linux virtual machines to draw on a common pool of processor, memory and I/O resources instead of resorting to dedicated hardware for each VM.

At IBM, we also put a lot of effort into designing our servers with the analytics they need to adjust power, automatically or manually, as usage goes down, integrating, for instance, multiple sleep states and power-capping capabilities. If the utilization of a server drops to 20 percent, for instance, we can bring its power draw down to about half, which is a big change from a few years ago.

These kinds of analytics and monitoring are the linchpins of higher utilization. Our latest suites of products have tools that monitor power, cooling and utilization in real time, so managers can decide whether to modify how the systems are running or move an application from one server to another with a different utilization rate or power profile.

These are just two examples of the changes in approach available for tackling underutilization. Yet perhaps the biggest argument in favor of pushing for higher utilization is the knowledge that needlessly underutilized assets are a drag on the bottom line and growth.

By deciding not to take the steps needed to improve utilization, CIOs are essentially opting to keep spending money on basic operations, money they could be using for more strategic investments. In fact, data centers that operate at the highest level of efficiency spend 50 percent more of their IT resources on new projects, according to the recent IDG survey of more than 300 IT executives managing data centers.

Habits are hard to change, especially when they’re institutionalized, industry-wide habits. But there are many compelling reasons for CIOs to reconsider their old approach of overbuilding data centers, not the least of which is the electricity bill that’s coming their way.

By Byun Seong-jun (sjbyun@kr.ibm.com)

The writer is a sales executive of the Data Center Service Line, Global Technology Services, at IBM Korea ― Ed.