Dual booting vs. virtualization: Using Windows and Linux on the same server
Interoperability between Linux and Windows can be achieved in two ways: by installing each operating system separately via dual booting, which allows both operating systems to run on the same hardware, or by running one system inside the other with virtualization. In this tip, Serdar Yegulalp explains when to use each option.
Many Windows administrators get their first taste of Linux by booting a live CD version of a given distribution, or by running it in a virtual machine. Eventually, they may find themselves in a position where they need to have side-by-side installations of Linux and Windows on the same server for the sake of convenience -- or simply because there isn't another server handy!
There are typically two ways to get Windows and Linux to co-exist on the same box: dual-booting and virtualization. The first involves installing each operating system on a separate volume (either a different physical disk or different partitions of the same disk), and using some mechanism for choosing which one to boot each time the computer's powered on. The second involves using a virtual computing solution to run one operating system instance inside another, in effect allowing both to run at the same time.
Dual-booting and virtualization are, in my opinion, the two main approaches to Linux-Windows interoperability, and in this tip we'll explore which one is best for a given situation and why. There are other possibilities, of course, which I'll look at briefly toward the end. As it turns out, each approach is suited to fairly specific scenarios, and what seems like the superior approach at first glance may not hold up under closer examination. With servers, virtualization has a clear edge and may seem like the natural choice -- but in some cases dual-booting as an interim solution might be more appropriate.
Virtualization
In the time since virtualization's inception and its growth from a mere lab-toy curiosity to a genuine productivity tool, it's been regarded as something of a wunderkind technology: it allows you to get two, three or even more PCs for the price of one. It's hard to deny there's something almost magical about it all, but virtualization isn't magic, just a clever re-use of existing computing resources.
To that end, virtualizing Windows and Linux side-by-side works best when you have adequate resources -- most importantly, enough memory to run instances of both operating systems at the same time. On my own system, which sports 2GB of RAM and runs the 32-bit edition of Vista, I can run a 1GB virtual machine without too many problems, although I typically allocate 512MB for a VM of a desktop OS. Most up-to-date server hardware should be 64-bit and running a 64-bit OS to match, since a) 64-bit systems can host both 32- and 64-bit guest OSes with the proper software, and b) they aren't hampered by the memory restrictions of a 32-bit environment.
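If the host is Linux, a quick sanity check before sizing a VM is to confirm how much memory is installed and whether the CPU exposes hardware virtualization extensions. Here's a minimal sketch, assuming the standard /proc interfaces on a Linux host; the 512MB figure simply mirrors the allocation mentioned above and is illustrative, not a hard rule:

    # check_vm_readiness.py -- rough pre-flight check on a Linux host
    # Assumes the standard Linux /proc/meminfo and /proc/cpuinfo interfaces.

    def mem_total_mb():
        # /proc/meminfo reports e.g. "MemTotal:  2075672 kB"
        with open("/proc/meminfo") as f:
            for line in f:
                if line.startswith("MemTotal:"):
                    return int(line.split()[1]) // 1024
        return 0

    def has_hw_virt():
        # "vmx" = Intel VT-x, "svm" = AMD-V in the CPU flags line
        with open("/proc/cpuinfo") as f:
            flag_lines = [line for line in f if line.startswith("flags")]
        return any("vmx" in line or "svm" in line for line in flag_lines)

    if __name__ == "__main__":
        total = mem_total_mb()
        print(f"Installed RAM: {total} MB")
        print("Hardware virtualization:", "yes" if has_hw_virt() else "no")
        if total < 1024 + 512:  # illustrative: 512MB guest plus ~1GB for the host
            print("Warning: this host is tight on memory for a 512MB guest.")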
Which OS you use as the "guest" (the OS that actually runs inside the virtual machine) and which OS you run as the "host" will largely depend on a few things: which OS, if any, you already have installed; which OS you're most familiar and comfortable with; which OS is most immediately available to use as a host (typically the most important factor in a server environment, since you usually can't just scrape and replace that OS); and which OS you can run a virtualization solution on. The last item on this list is typically the most flexible, since virtualization products are available at no cost for Windows (Virtual PC), for Linux, or for both (VirtualBox). The host should be the OS you know best, since that makes it that much easier to get virtualization up and running.
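Since VirtualBox runs on both platforms, its VBoxManage command-line tool is one concrete way to stand up a guest whichever OS ends up as the host. The sketch below drives VBoxManage from Python; the VM name, OS type and 512MB memory figure are illustrative assumptions, and the available VBoxManage options can vary between VirtualBox versions:

    # create_guest.py -- sketch: create and start a VirtualBox guest via VBoxManage
    # Assumes VirtualBox is installed and VBoxManage is on the PATH.
    import subprocess

    VM_NAME = "linux-guest"   # hypothetical name
    OS_TYPE = "Ubuntu"        # pick a real type from: VBoxManage list ostypes

    def run(*args):
        # Echo each VBoxManage invocation, then execute it, raising on failure.
        print("VBoxManage", *args)
        subprocess.run(["VBoxManage", *args], check=True)

    run("createvm", "--name", VM_NAME, "--ostype", OS_TYPE, "--register")
    run("modifyvm", VM_NAME, "--memory", "512")  # 512MB, per the sizing above
    # A real guest would also need a virtual disk and install media attached
    # before this is useful; this just boots the empty VM.
    run("startvm", VM_NAME)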
One thing to be aware of when using a particular virtualization solution with Windows and Linux is guest integration. This is a software package, installed in the guest OS, that allows better cooperation between the guest and the host. For instance, the integration package for Windows running under Virtual PC (the "Virtual Machine Additions") allows things like dragging and dropping files between the host's desktop and the guest's window. Linux integration software exists as well, but be sure that the variety of Linux you're running in virtualization has proper integration support. Most of the time Linux will still work fine without it, but it will simply lack a few features that make it that much easier to use in a VM.
Virtualizing Windows on Linux (or vice versa) also allows a certain degree of liberation from hardware requirements for the guest OS. An instance of a virtual machine is portable, and can be run anywhere there's a copy of the virtualization software. This way, if in the future you have another computer available, you could put the VM on a portable hard drive, or simply copy it to the other computer, and run it there as needed. Windows is, of course, finicky about being moved from computer to computer -- not just because of Product Activation, but because of things like possible changes in CPU and mass-storage controller setups -- but having an instance of Windows in a VM does away with a lot of these issues.
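In practice, moving a VM can be as simple as copying its folder -- the disk image plus the settings file -- onto removable media and registering it with the virtualization software on the destination machine. A minimal sketch, assuming a VirtualBox-style layout where each VM lives in its own directory; the paths here are hypothetical:

    # copy_vm.py -- sketch: copy a VM's folder onto a portable drive
    # Paths are hypothetical; adjust for your VM directory and mount point.
    import shutil
    from pathlib import Path

    SRC = Path.home() / "VirtualBox VMs" / "linux-guest"  # VM folder on this host
    DST = Path("/media/portable/linux-guest")             # portable hard drive

    # copytree copies the disk image and settings file together, so the VM
    # arrives at the destination as one self-contained unit.
    shutil.copytree(SRC, DST)
    print(f"Copied {SRC} -> {DST}; register it on the target machine "
          "(e.g. with 'VBoxManage registervm') before starting it.")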
Dual-booting
Before there was virtualization, there was dual-booting, and since virtualization hasn't completely eliminated dual-booting, it must still be useful. It is, especially when dealing with operating systems as radically unalike as Windows and Linux. It's also the less flexible of the two solutions when dealing with a server, as opposed to a workstation, since the luxury of rebooting on demand is not always available.
One of the biggest advantages that dual-booting has over virtualization is that both operating systems run directly on the same hardware -- just not at the same time. Granted, virtualization doesn't impose the kind of performance hit it used to, but it also adds a sometimes unwanted layer of abstraction between the OS and the hardware. For instance, one common drawback of using Virtual PC is that USB hardware is not emulated at all. There are other virtual machine solutions that do have USB support (VirtualBox, for example), but that's often just the tip of the iceberg. Sometimes the best way to get full and proper hardware support for a given OS is simply to dual-boot, since then the OS can access all the hardware on the system without fetters.
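To illustrate the difference, VirtualBox exposes host USB devices to a guest through USB filters, which you can set up from its VBoxManage CLI. A hedged sketch driving it from Python; the VM name and vendor/product IDs below are made-up examples -- you'd read the real IDs from 'VBoxManage list usbhost':

    # usb_passthrough.py -- sketch: attach a host USB device to a VirtualBox guest
    # VM name and USB IDs are hypothetical; list real devices with:
    #   VBoxManage list usbhost
    import subprocess

    subprocess.run([
        "VBoxManage", "usbfilter", "add", "0",  # 0 = position in the filter list
        "--target", "linux-guest",              # hypothetical VM name
        "--name", "thumb-drive",
        "--vendorid", "0x0951",                 # example vendor/product IDs
        "--productid", "0x1603",
    ], check=True)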
Dual-booting is mostly handy when simultaneous use of both OSes is not required -- which, again, may be problematic on a server. If you're migrating data from one OS instance to another, for instance, having both OSes running at the same time is convenient but not strictly necessary. All that's required is that both OSes can read the file systems in question, which today is typically not an issue. Most distributions of Linux can read FAT and NTFS transparently, although Windows still can't read Linux file systems like ext2 and ext3 without third-party add-ons.
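In practice, a Linux-side migration pass can be as simple as mounting the Windows partition and copying data off it. A minimal sketch, assuming the ntfs-3g driver is installed and that the Windows volume is /dev/sda1 -- both assumptions you'd verify (e.g. with 'fdisk -l') before running it as root:

    # pull_from_windows.py -- sketch: mount an NTFS volume and copy data out
    # Assumes ntfs-3g is installed; /dev/sda1 and all paths are assumptions.
    # Must run as root, since mounting requires it.
    import subprocess, shutil
    from pathlib import Path

    MOUNT_POINT = Path("/mnt/windows")
    MOUNT_POINT.mkdir(parents=True, exist_ok=True)

    # Mount the Windows partition read-only so the migration can't damage it.
    subprocess.run(["mount", "-t", "ntfs-3g", "-o", "ro",
                    "/dev/sda1", str(MOUNT_POINT)], check=True)
    try:
        # Copy a directory tree across; the destination path is illustrative.
        shutil.copytree(MOUNT_POINT / "Users", "/srv/migrated-users")
    finally:
        subprocess.run(["umount", str(MOUNT_POINT)], check=True)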
One thing to be especially careful about with dual-booting is the boot manager. This could easily be expanded into an article by itself, come to think of it, but the short version is that when Windows is installed on a given system, it will typically install its own boot manager without regard to whether one already exists. To get around this issue, I typically use a third-party boot manager in its own mini-partition, such as Acronis OS Selector or Terabyte Unlimited's BootIt Next Generation.
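Whatever boot manager you settle on, it lives in the master boot record -- the first 512 bytes of the disk -- and that's exactly what a Windows install tends to overwrite. One defensive habit is to save a copy of the MBR before installing, so the original loader can be restored afterward. A minimal sketch, assuming the boot disk is /dev/sda (an assumption; run as root):

    # backup_mbr.py -- sketch: save the master boot record before installing Windows
    # Assumes the boot disk is /dev/sda; must run as root.
    MBR_SIZE = 512  # boot code + partition table + signature

    with open("/dev/sda", "rb") as disk:
        mbr = disk.read(MBR_SIZE)

    with open("/root/mbr-backup.bin", "wb") as backup:
        backup.write(mbr)

    print("Saved 512-byte MBR to /root/mbr-backup.bin")
    # To restore later, write back only the first 446 bytes (the boot code),
    # so the current partition table isn't clobbered.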
Other solutions
There are a few other possibilities as well, depending on what's available and what you need. One particularly clever way to get Linux running under Windows (albeit without a GUI, at least at first) is a Linux distribution called coLinux, which runs directly under Windows as a Win32 application. Yet another option is a hypervisor such as Xen or VMware's ESX Server, which runs both operating systems side-by-side in much the same manner as a virtual machine. This is probably the most elegant and all-inclusive solution, although running Windows under Xen requires hardware virtualization support that might not be available to everyone.