
The case against data center consolidation

Having one centralized data center can bring IT benefits, but use cases such as IoT, edge computing and cloud don't necessarily work well with consolidated infrastructure.

Data center consolidation has been a key factor in data center growth and management. The ability to use and share as much as 100% of every server's capacity has helped contain the spiraling costs of hardware, power, cooling and physical data center space.

The pendulum can only swing so far toward consolidation, however. There is real value in it, but there are circumstances where a business may want to rethink consolidation and consider the benefits that only additional hardware can provide.

Placing resources and services outside of the principal data center is a legitimate response when network limitations, scale or redundancy are at stake for the organization. It's important to weigh each situation and its alternatives individually and make the choices that are most appropriate for the business.

The appeal and challenges of consolidation

The advent of server virtualization paired hypervisor software with expanded processor instruction sets to abstract applications from the underlying computing hardware.

Translating physical computing resources into logical equivalents made it possible for workloads to use more of the available resources than ever before and to share those resources in ways that were never possible when an application was installed on a bare-metal computer.

Since the introduction of server consolidation, virtualization has evolved to support the consolidation of other resources, including I/O and network elements, allowing even more sharing of finite data center capacity.

As virtualization and consolidation shrink a corporate data center, businesses may realize that consolidation cannot always be pursued as a singular goal.

Consolidation has become an indispensable benefit for some organizations, but it eventually runs up against the following harsh physical realities:

Physical limits exist within servers. Each server has only so much memory and so many CPU cycles to share. Virtualization makes it possible to share those limited resources to some extent, but not without the risk of performance degradation in the VMs.

Physical limits exist across networks. Admins can share available bandwidth, but total network bandwidth is finite. Exchanging data across global distances within acceptable time frames demands ample bandwidth and introduces latency, an unavoidable physical limitation.

Physical limits exist in operational reliability. Servers, storage and networking gear eventually fail, and the consequences can reach across a vital swath of data center infrastructure. In a traditional physical deployment, a server failure affects only the application hosted on it. In a consolidated server running eight or 10 virtual machines, the same failure takes down all of those VMs.
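As a rough illustration of how consolidation concentrates failure impact, the following Python sketch compares the workload-hours lost to host failures in a one-application-per-server deployment versus a consolidated host. The failure rate, recovery time and consolidation ratio are hypothetical figures chosen only to show the arithmetic.

```python
# Hypothetical figures for illustration only -- substitute your own estimates.
host_failures_per_year = 0.2    # assumed average hardware failures per host per year
recovery_hours = 4              # assumed time to restore service after a failure
vms_per_consolidated_host = 10  # consolidation ratio cited above

# Bare metal: one application per server, so one failure interrupts one workload.
bare_metal_hours_lost = host_failures_per_year * recovery_hours * 1

# Consolidated: the same single failure interrupts every VM on the host.
consolidated_hours_lost = host_failures_per_year * recovery_hours * vms_per_consolidated_host

print(f"Bare metal:   {bare_metal_hours_lost:.1f} workload-hours lost per year")
print(f"Consolidated: {consolidated_hours_lost:.1f} workload-hours lost per year")
```

The hardware is no more or less reliable in either case; consolidation simply widens the blast radius of each individual failure.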

Use cases hindered by data center consolidation

Business and IT leaders can make a compelling argument against consolidation in several vital use cases that depend on resilience, distance, scale and isolation of infrastructure.

Resiliency

Redundant workload deployment is a common way to boost throughput: duplicate instances multiply the application's ability to perform vital work, with load balancers funneling traffic across them. Although the number of duplicate instances provides redundancy, it is the choice of deployment locations -- the physical servers that host those instances -- that defines the application's resiliency.

If the business's only goal is to boost workload throughput, duplicate instances can potentially be located on the same consolidated server. However, this is generally considered poor practice because a single underlying hardware fault could stop all of those instances at once.

When the goal includes application resiliency, best practice dictates that organizations never locate duplicate VM instances on the same physical hardware. Instead, each redundant workload instance should be placed on a different server.

A mission-critical application architected for resilience requires at least two lightly loaded servers configured with the hypervisor's affinity/anti-affinity options so that live migrations or restarts do not inadvertently place instances on the same hardware.
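Hypervisors and orchestration platforms expose these placement rules in their own ways, so rather than show any one vendor's API, the following Python sketch only illustrates the underlying anti-affinity logic: a hypothetical scheduler that refuses to put two instances of the same workload on the same host. The function and data structures are illustrative assumptions, not a real platform interface.

```python
# Illustrative anti-affinity placement logic; real hypervisors and orchestrators
# implement this natively as affinity/anti-affinity rules.
from collections import defaultdict

def place_with_anti_affinity(instances, hosts):
    """Assign each workload instance to a host, never co-locating two
    instances of the same workload. Raises if there are not enough hosts."""
    placements = {}                # instance name -> host
    used_hosts = defaultdict(set)  # workload -> hosts already running an instance

    for instance in instances:
        workload = instance["workload"]
        # Only consider hosts that do not already run this workload.
        candidates = [h for h in hosts if h not in used_hosts[workload]]
        if not candidates:
            raise RuntimeError(
                f"Not enough hosts to keep '{workload}' instances on separate hardware"
            )
        host = candidates[0]       # naive choice; a real scheduler weighs load, too
        placements[instance["name"]] = host
        used_hosts[workload].add(host)
    return placements

# Two instances of the same workload end up on different hosts.
instances = [
    {"name": "web-1", "workload": "web"},
    {"name": "web-2", "workload": "web"},
]
print(place_with_anti_affinity(instances, ["host-a", "host-b"]))
# {'web-1': 'host-a', 'web-2': 'host-b'}
```

A production scheduler would also weigh host load, capacity and maintenance state, but the core rule -- never co-locate duplicates -- is the same.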

A similar push against consolidation arises with disaster recovery setups, where a duplicate workload instance may run on a server at a second, remote data center site or even in the public cloud.

Edge computing and IoT

Organizations are collecting a growing torrent of vital data in order to make critical business decisions, perform research and operate mission-critical facilities. But storing and processing that data in a single centralized data center poses serious challenges in terms of network latency, bandwidth and reliability.

Consider a manufacturing facility. It is unlikely that a business would build its data center on the factory floor, so all of the sensor data the facility produces, along with the command and control signals needed to manage and operate the facility, must move across a WAN to the consolidated data center.

The network latency introduced by large geographic distances and sprawling network gear can make real-time control problematic. At the same time, unforeseen disruptions to WAN availability, such as internet congestion, can make centralized data collection and control unreliable.
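A rough back-of-envelope calculation shows why distance alone can undermine real-time control. The Python sketch below estimates round-trip time from propagation delay in fiber (roughly two-thirds the speed of light) plus an assumed per-hop delay for network gear; the distances, hop counts and per-hop figures are hypothetical, and real WAN paths add queuing, routing and congestion on top of this floor.

```python
# Back-of-envelope propagation latency; all figures are illustrative assumptions.
SPEED_OF_LIGHT_KM_S = 300_000
FIBER_FACTOR = 0.67            # light travels roughly 2/3 as fast in fiber

def round_trip_ms(distance_km, per_hop_ms=0.5, hops=10):
    """One round trip: propagation delay both ways plus assumed gear delay per hop."""
    one_way_ms = distance_km / (SPEED_OF_LIGHT_KM_S * FIBER_FACTOR) * 1000
    return 2 * (one_way_ms + per_hop_ms * hops)

# Hypothetical comparison: a plant 2,500 km from the consolidated data center
# versus an edge node 5 km away on the factory campus.
print(f"Remote data center: {round_trip_ms(2500):.1f} ms per round trip")
print(f"Local edge node:    {round_trip_ms(5, hops=2):.1f} ms per round trip")
```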

Deploying some level of computing and storage resources outside of the primary data center, close to the actual facility, location or task at hand, can alleviate these network dependencies; this approach is typically referred to as edge computing.

Big data and scale

A data center can be severely stressed by the demands of heavy data processing workloads. A big data project may require the computing power of hundreds or even thousands of servers to churn through terabytes -- or even petabytes -- of stored data, yet the job itself might take only a few hours to complete.

It is certainly possible to deploy racks of physical servers and field a fleet of VMs within the principal data center to handle such demanding tasks, but the cost and underlying infrastructure needed to support a large influx of servers is prohibitive for all but the largest enterprises.

Rather than make the long-term capital investment in building out a principal data center for big data projects, organizations often turn to alternative computing and storage resources, such as the public cloud, as a short-term operational expense. The cloud provides scale without a large capital investment.
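The economics are easy to sketch. The Python comparison below pits owned, amortized servers against short-lived cloud capacity using entirely hypothetical prices; the point is only that a burst measured in hours rarely justifies hardware that must be paid for around the clock.

```python
# Hypothetical cost comparison for a short-lived big data burst; all prices are
# illustrative assumptions, not quotes from any vendor.
servers_needed = 500
job_hours = 8
jobs_per_year = 12

# Owning the capacity: hardware amortized over three years plus power, cooling and space.
cost_per_server = 8_000             # assumed purchase price
annual_overhead_per_server = 1_200  # assumed power, cooling, space and support
owned_annual_cost = servers_needed * (cost_per_server / 3 + annual_overhead_per_server)

# Renting the capacity: pay only for the hours the job actually runs.
cloud_hourly_rate = 0.50            # assumed per-instance hourly price
cloud_annual_cost = servers_needed * job_hours * jobs_per_year * cloud_hourly_rate

print(f"Owned servers: ~${owned_annual_cost:,.0f} per year")
print(f"Cloud burst:   ~${cloud_annual_cost:,.0f} per year")
```

In this illustrative scenario, the capacity that sits idle between jobs dominates the cost of the owned option.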

Private and hybrid cloud

The emergence of cloud computing is hardly limited to AWS, Azure and Google. Organizations are embracing private clouds as a reflection of changing business needs. New services and self-service capabilities allow employees and business partners to use applications and services as part of the organization's business model, rather than wait for IT to implement them.

The introduction of private and even hybrid clouds poses the same challenges of scale found in big data use cases. Most data centers are live deployments that depend on consistency, regularity and control in daily operations.

It is unlikely that a business will reallocate those production resources to a private cloud infrastructure, and it is even less likely that a business has the excess compute and storage resources available to build a scalable private cloud from scratch.

One option is to make the capital investment in additional private cloud infrastructure, but there are alternatives to this approach. A business can engage various private cloud services outside of the principal data center.

Public cloud providers offer virtual private cloud services; for example, a business might use Amazon Virtual Private Cloud or Google Virtual Private Cloud. Beyond the major public cloud providers, an organization can also adopt private cloud as a service from providers such as VMware, Cisco and IBM.
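As a concrete illustration of the first option, the following sketch uses the AWS SDK for Python (boto3) to carve out a VPC and a subnet. The region, CIDR blocks and name tag are placeholder assumptions, and valid AWS credentials are assumed to be configured.

```python
# Minimal sketch of provisioning an Amazon VPC with boto3; region, CIDR blocks
# and names are placeholder assumptions. Requires configured AWS credentials.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create the VPC itself and tag it with a readable name.
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")
vpc_id = vpc["Vpc"]["VpcId"]
ec2.create_tags(Resources=[vpc_id], Tags=[{"Key": "Name", "Value": "example-private-cloud"}])

# Add a subnet for workloads inside the VPC.
subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")
print("Created VPC", vpc_id, "with subnet", subnet["Subnet"]["SubnetId"])
```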
