Thin clients vs. thick clients for desktop virtualization
Find out when you should recommend thin clients or PCs (aka thick clients) to your desktop virtualization customers.
It's time to get your client on board with desktop virtualization, but what type of desktop hardware should you recommend? Can your customer continue to use its current conventional PCs, also known as thick clients, or is it time to migrate the company to a thin-client -- or even a zero-client -- computing platform?
There are countless options for matching each project's scope and budget with a platform. Reviewing the characteristics of each principal desktop type will help you and your customers weigh the thin client vs. thick client debate as it applies to them.
Desktop virtualization computing requirements
With conventional PCs, each individual computer processes applications, which requires significant local processing power, memory and disk storage. Desktop virtualization -- specifically virtual desktop infrastructure (VDI) and other server-hosted architectures -- instead shifts the bulk of the operating system and application processing to a central server.
Consequently, the desktop hardware no longer needs to be a processing powerhouse sporting the latest CPU and massive quantities of memory.
"People have used desktop virtualization as a way of getting around extensive hardware requirements on the PC end," said Brien Posey, an independent technology consultant in Rock Hill, S.C. "That way they can get away with low-end or even legacy hardware."
Instead, the desktop device is typically relegated to the role of a "dumb terminal." Dumb terminals simply pass keyboard and mouse data to the server, then display the screen renderings of the desktop and applications that the server returns. That means each desktop endpoint can function with a 1 GHz processor and 1 GB of RAM, or sometimes even less. These thin client platforms also forgo local disk storage, replacing hundreds of gigabytes of disk space with 512 MB to 1 GB of flash memory -- less storage than a common thumb drive. Gigabit Ethernet connectivity is usually recommended, especially when the thin client is used for graphics-intensive work that demands more network bandwidth.
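To picture that division of labor, consider the round trip a dumb terminal performs: input events go up to the server, and rendered screen updates come back down. The Python sketch below is a loopback toy, not a real display protocol such as RDP; the port number and message formats are invented purely for illustration.

```python
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 9900  # hypothetical loopback endpoint for this demo

def desktop_server():
    """Stands in for the virtual desktop server: consumes input events and
    answers each one with a (pretend) screen update."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn, conn.makefile("rb") as events:
            for event in events:  # newline-delimited events from the client
                # A real protocol would render the desktop and ship back
                # compressed bitmap regions; we return a placeholder instead.
                conn.sendall(b"FRAME for " + event)

def thin_client():
    """Stands in for the endpoint: forwards input, 'displays' what returns."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))
        with cli.makefile("rb") as frames:
            for event in (b"KEY a\n", b"KEY b\n", b"CLICK 100,200\n"):
                cli.sendall(event)
                print("screen update:", frames.readline().decode().strip())

threading.Thread(target=desktop_server, daemon=True).start()
time.sleep(0.2)  # crude wait for the server socket to come up
thin_client()
```

The point of the toy is that the client-side loop does no application work at all, which is why such modest endpoint hardware suffices.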
Using thick clients for desktop virtualization
Given the low computing requirements, virtually any thick client in service today should be able to handle desktop virtualization. No special hardware modifications are needed, and the PC can keep running its current operating system.
"I'm just using my PC essentially as a browser," said Barb Goldworm, president and chief analyst at Focus Consulting, an industry research and analysis firm in Boulder, Colo.
The wildcards here are usually network connectivity and display performance. Even though display-over-IP protocols, like Microsoft's Remote Desktop Protocol, have improved tremendously, some PCs may need upgrades to support Gigabit Ethernet for streaming media or other graphics applications.
Aside from adequate connectivity, the main requirement is that each thick client supports the display protocol used by the desktop server. Microsoft's RDP client ships with Windows, but other protocols, like Citrix's Independent Computing Architecture (ICA), may require additional client software for Windows or Linux.
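One quick sanity check before redeploying a fleet of PCs is whether each machine can even reach the desktop server on the relevant protocol port: RDP listens on TCP 3389 by default and Citrix ICA on TCP 1494. A minimal probe in Python might look like the following; the broker host name is a placeholder, not a real server.

```python
import socket

DESKTOP_SERVER = "vdi-broker.example.com"  # hypothetical broker host name
PORTS = {"RDP": 3389, "ICA": 1494}         # default TCP ports for each protocol

for proto, port in PORTS.items():
    try:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(3)
            reachable = s.connect_ex((DESKTOP_SERVER, port)) == 0
    except OSError:  # DNS failure, no route to host, etc.
        reachable = False
    print(f"{proto} (tcp/{port}): {'reachable' if reachable else 'NOT reachable'}")
```

A successful connect only proves the port is open, of course; it says nothing about bandwidth or display performance, which still need real-world testing.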
Redeploying existing thick clients delivers real cost savings for desktop virtualization customers because there's no upfront investment in new desktop hardware. But it's important to remember that the computing requirements for desktop virtualization are often subjective. They can vary depending on the solutions provider and the customer's needs and preferences. There is no software utility or other tool that can tell you definitively whether a PC is suited for desktop virtualization.
Solutions providers may base their evaluations on the relative age of the PC. For example, if a PC is new enough to be covered under warranty, it's probably more than suitable for desktop virtualization. Systems old enough to be out of warranty may still be suitable, but you should scrutinize their specifications and support costs more closely before making an upgrade decision.
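As noted above, no script can render that suitability verdict for you, but one can at least collect the raw numbers to weigh against the rough 1 GHz/1 GB endpoint baseline mentioned earlier. Here is a sketch using the third-party psutil library; the thresholds are this article's rough figures, not an official requirement.

```python
import platform
import psutil  # third-party: pip install psutil

MIN_CPU_MHZ = 1000           # rough 1 GHz baseline cited above
MIN_RAM_BYTES = 1024 ** 3    # rough 1 GB baseline cited above

freq = psutil.cpu_freq()     # can be None (or report 0) on some hardware
ram = psutil.virtual_memory().total

print(f"Host: {platform.node()} ({platform.system()} {platform.release()})")
print(f"CPU : {freq.max:.0f} MHz" if freq else "CPU : unknown")
print(f"RAM : {ram / 1024 ** 3:.1f} GB")

meets_baseline = bool(freq) and freq.max >= MIN_CPU_MHZ and ram >= MIN_RAM_BYTES
verdict = "worth a closer look" if meets_baseline else "scrutinize specs and costs first"
print(f"Rough verdict: {verdict}")
```

Treat the output as triage input, not a decision: warranty status, support costs and the customer's workload still drive the final call.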
Using thin clients for desktop virtualization
Solutions providers often redeploy conventional desktop PCs for desktop virtualization, but they are increasingly turning to purpose-built thin-client or zero-client endpoints. Thin clients run little local software -- often just Windows Embedded CE 6.0 or another stripped-down OS -- to manage the initial startup and the connection to the desktop server. Examples of thin clients include Neoware's appliances, Hewlett-Packard's t5500, t5600 and t5700 families and the Sun Ray clients from Sun Microsystems.
By comparison, zero clients are scarcely computers at all. The Pano device from Pano Logic Inc., for example, has no CPU, memory, storage or software. The device is merely an appliance that connects a keyboard, mouse, display, audio and USB peripherals over the LAN to an instance of Windows XP or Windows Vista running on a virtual desktop server. In other respects, thin clients and zero clients deliver the same desktop virtualization benefits.
Before choosing either thin clients or thick clients, solutions providers must also consider the need for expansion devices. Older desktop virtualization technologies had trouble supporting expansion ports, but today's offerings can handle a rich array of them. For example, HP's t5630 sports four USB 2.0 ports, and the t5730 has eight USB 2.0 ports, a VGA port, a DVI-D port, two PS/2 (keyboard/mouse) ports, an RJ-45 network port and a serial port, along with an optional PCI expansion module. Even the Pano device and other zero-client endpoints provide three USB 2.0 ports for expansion.
Migrating to standard virtual desktop platforms
It's important to note that choosing a standard thick-, thin- or zero-client endpoint has no real effect on the desktop virtualization deployment itself. The servers do all the work, so the question of thin clients vs. thick clients is moot from a technology standpoint.
But desktop virtualization is mainly about reducing risk and simplifying the customer's environment, and standardizing on a single desktop platform offers related benefits. Purchasing endpoint devices in volume saves money. When there is only one make and model of endpoint, with only one OS and patch set, desktop support gets simpler. And technicians can then focus on the nuances of one product rather than dealing with possibly dozens of different kinds of endpoints on the network.
The choice to standardize depends on the customer's needs and budget, but the best tactic is often to migrate in phases -- perhaps coinciding with the expiration of expensive service contracts or other end-of-life timing. A phased approach minimizes disruption to the end users and lets the customer distribute the acquisition costs across several fiscal quarters or even years.