Cisco et al. push memory-dense servers, but at what cost?

Today’s x86 servers offer impressive memory densities, but some IT shops want even more.

You can never be too rich or too thin or have too much server memory. At least, that’s the message from x86 server vendors including Cisco Systems and IBM in their push for technologies such as Unified Computing System Extended Memory and MAX 5, respectively.

These memory-extension technologies involve proprietary silicon that increases the number of memory sticks a standard Intel or AMD processor can recognize. They target memory-hungry workloads such as virtualization, business analytics and high-performance computing, which are increasingly finding their way onto x86 systems.

Some IT buyers agree wholeheartedly that they need more memory than traditional systems can support. Others say the memory densities offered by contemporary server models meet their needs, and that memory-dense systems aren’t worth the extra cost or the buy-in to proprietary technology.

Virtualization: Memory-hungry workload du jour
Little Canada, Minn.-based Slumberland Inc. errs on the side of more memory. Last year, the furniture retailer replaced dozens of Dell PowerEdge servers with a Cisco UCS environment, citing the Extended Memory capability as one of the main drivers.

“The extended-memory feature in the B250s was the principal draw,” said Seth Mitchell, the IT director at Slumberland. “The more RAM, the better, and the lower the latency the better.”


The extended memory feature in the Cisco UCS B250 M1 is enabled by a proprietary ASIC that allows the processor to recognize up to 48 dual in-line memory modules (DIMMs), rather than the usual 32. In this way, servers can be equipped with much more memory, or with a given amount of memory using smaller, cheaper DIMMs. In addition, the ASIC lets the processor access that memory at higher speeds; normally, the more memory attached to a processor, the slower the access speed.
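
For a rough sense of what the extra slots buy, here is a back-of-the-envelope sketch. The 32- and 48-DIMM counts and the 8 GB stick size come from the article; the 16 GB stick size is only an illustrative assumption.

```python
# Back-of-the-envelope capacity math for standard vs. Extended Memory slot counts.
# The 32- and 48-slot figures and the 8 GB stick size come from the article;
# the 16 GB stick size is an illustrative assumption.

STANDARD_SLOTS = 32   # DIMMs a standard processor recognizes, per the article
EXTENDED_SLOTS = 48   # DIMMs recognized with the Extended Memory ASIC

for dimm_gb in (8, 16):
    standard_gb = STANDARD_SLOTS * dimm_gb
    extended_gb = EXTENDED_SLOTS * dimm_gb
    print(f"{dimm_gb:>2} GB DIMMs: standard {standard_gb} GB vs. extended {extended_gb} GB "
          f"(+{extended_gb - standard_gb} GB)")
```

With 8 GB sticks, the 48 slots work out to the 384 GB ceiling Slumberland plans to grow into.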

Slumberland replaced about 95 mostly Dell servers with a Cisco UCS consisting of 10 half-width B200 M1s and six two-processor B250s for its Microsoft Hyper-V virtualization cluster. It populates the B250s with 192 GB of RAM using 8 GB memory sticks, with the intention of growing them to 384 GB over time.

Meanwhile, concerns about Cisco UCS being proprietary are exaggerated, Mitchell added. “People talk about it being a proprietary system and lock-in, but we don’t see it that way at all. We run Hyper-V, Compellent, Red Hat; it’s agnostic as anything we’ve ever run before,” Mitchell said. Extended Memory aside, “It looks very much like a vanilla system.”

Memory-dense systems suit VDI scenarios
Memory-dense systems are particularly appealing to shops running virtual desktop infrastructure, or VDI, because VDI workloads don’t tend to have the high I/O demands that limit consolidation ratios.

“Of all the virtualization workloads you could run,” said Jonathan Eunice, principal IT adviser at Illuminata in Nashua, N.H., “VDI is the one with the highest desired consolidation ratio. You don’t just want 10 or 20 VMs per host – ideally you want as many as you can.” With VDI, “the amount of CPU used is not that high,” so adding extra memory translates directly into more VMs per host.
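
To put some rough numbers behind that reasoning, the sketch below sizes a memory-bound VDI host. The 2 GB-per-desktop and 16 GB hypervisor-reserve figures are hypothetical illustration values, not figures from Cisco, Illuminata or Slumberland; only the 192 GB and 384 GB host sizes come from the article.

```python
# Rough VDI sizing sketch: when CPU is not the bottleneck, desktops per host is
# roughly usable host memory divided by memory per desktop. The 2 GB-per-desktop
# and 16 GB hypervisor-reserve figures are hypothetical, not from the article.

def desktops_per_host(host_ram_gb, per_desktop_gb=2, hypervisor_reserve_gb=16):
    usable_gb = max(host_ram_gb - hypervisor_reserve_gb, 0)
    return usable_gb // per_desktop_gb

for host_ram_gb in (192, 384):  # Slumberland's current and planned B250 configurations
    print(f"{host_ram_gb} GB host -> ~{desktops_per_host(host_ram_gb)} desktops")
```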

Cisco estimates that users can achieve 60% more virtual desktop instances on its servers with Extended Memory than on those without it, said Todd Brannan, Cisco product marketing manager for UCS. The company doesn’t break out what percentage of its UCS server shipments include Extended Memory, but “the B250 with Extended Memory is what’s catching people’s attention,” he said.

Snowballing memory densities
Other IT buyers note that it’s possible to configure mainstream x86 servers with oodles of memory today without buying proprietary technology.

Matt Lavallee at Shrewsbury, Mass.-based MLS Property Information Network Inc. doesn’t feel the need for more memory than can be obtained in a generic system.

“We tend to have workloads that correspond to the core utilization and the [HP] DL585 allows up to 512 GB of memory. Our highest provisioning to date has been 160 GB, which is still well below the total capacity and handles 50-plus VMs without constraint,” he told SearchDataCenter.com via email.

“Frankly, it's probably less expensive to buy another server than to use premium-capacity DIMMs,” he added.

Case in point: At press time, a quick trip to Dell’s website showed that a quad-processor PowerEdge R905 with 32 4 GB DIMMs (128 GB of total memory) costs $12,440. The same server configured with higher-density 8 GB DIMMs (256 GB of total memory) costs $30,709, or about two-and-a-half times as much.
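
The gap is easier to see per gigabyte. The short sketch below simply reworks the two Dell configurations cited above; the prices and capacities are the article’s press-time figures, not a current quote.

```python
# Cost-per-gigabyte comparison of the two Dell PowerEdge R905 configurations
# quoted above (prices and capacities are the article's figures).

configs = {
    "32 x 4 GB DIMMs (128 GB)": (12_440, 128),
    "32 x 8 GB DIMMs (256 GB)": (30_709, 256),
}

for name, (price_usd, capacity_gb) in configs.items():
    print(f"{name}: ${price_usd:,} total, ${price_usd / capacity_gb:.2f} per GB")

print(f"Price ratio: {30_709 / 12_440:.2f}x")  # roughly two-and-a-half times as much
```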

Given the decreases in memory prices, it makes sense for some IT managers to configure servers with only some of the DIMM slots filled, adding extra memory as demand arises – and as prices fall. Rick Vanover, an IT infrastructure manager at Alliance Data in Columbus, Ohio, leans toward VMware hosts consisting of HP ProLiant DL380s half-filled with eight 16 GB DIMMs, for 128 GB of memory. “That way I don’t fill up every slot, and I have options in the future,” he said, adding, “I see [the servers] being maxed out in the end.”

Ironically, the same virtualization technology that places heavy demands on memory also lessens the need for up-front system design. That’s because live migration lets a workload be moved easily if it outgrows its underlying server, said Illuminata’s Eunice.

“Hardware specs are a little less fixed as to what a workload will have to deal with because we can move the workload around as it grows,” he said. As a result, “we’re not trying to get supernova or black-hole densities anymore.”

Let us know what you think about the story; email Alex Barrett, News Director, at [email protected], or follow @aebarrett on Twitter.
