How to size a SAN

If your DAS storage infrastructure has reached its limits, or your existing SAN is out of date or just plain end-of-life, your first step will be to know what your storage requirements are.

Your legacy storage infrastructure has reached its limits. Maybe you have a lot of servers with internal or external direct attached storage (DAS), and managing capacity and backups has grown beyond the point of reason. Maybe your existing storage area network (SAN) is out of date or just plain end-of-life. Either way, it's reached the limits of practical use.

Whichever your situation, you think you need a new SAN. But do you go iSCSI? Fibre Channel? Both? How much will it cost? Do you use SAS drives, Fibre drives, SATA drives or some combination of the three?

You're worried about not getting enough capacity or performance or scalability. Since your budget is not unlimited, your requirements will have to reflect what you need for the present, yet address improvements in capacity, performance and scalability. How much of your existing technology do you keep?

The devil is in the details, and many an IT person has found that to be true late in the implementation of a plan when requirements changed midstream. Unless you plan on leveraging a file virtualization or storage virtualization solution, you'll be locked into whatever you choose.

Before you even decide which vendor to talk to, you need to know what your requirements are, unless you want your vendor to make that decision for you. This tip will give you a rule of thumb to get things rolling.

Determine each server's performance requirements

When measuring throughput of data, you need a meaningful baseline unit of measure, then convert everything to that same data rate. You want to be talking megabytes per second (MBps) and gigabytes per second (GBps) when referring to speeds and feeds of the network, the disks, even the limits of system buses.

A quick comparison of computer bus and storage interface speeds can put things in perspective. A 1 Gbit network connection doesn't sound so fast once you realize that's only about 125 MBps. And that's theoretical wire speed. Don't forget the dirty little secret of data: there's the overhead of the protocol, overhead caused by congestion or bandwidth sharing, and overhead caused by latency. And, of course, each application is different. Whether the workload is mostly reads or writes, sequential or random, affects the performance you get over your network and from your storage.
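
If it helps to see the arithmetic spelled out, here's a minimal sketch in Python; the function name and the efficiency factor are illustrative assumptions, not measured values:

```python
def usable_mbps(rated_gbits: float, efficiency: float = 1.0) -> float:
    """Convert a rated link speed in Gbit/s to MBps.

    1 Gbit/s = 1,000 Mbit/s, and 8 bits = 1 byte, so wire speed is
    rated_gbits * 125 MBps. The efficiency factor is an assumption you
    would tune from your own measurements to account for protocol
    overhead, congestion/sharing and latency.
    """
    return rated_gbits * 1000 / 8 * efficiency

print(usable_mbps(1))          # 125.0 -- theoretical 1 GigE wire speed
print(usable_mbps(1, 0.2))     # 25.0  -- closer to a typical GigE LAN
```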

You might only get 15 MBps to 25 MBps on a normal 1 Gigabit Ethernet (GigE) LAN, and as much as 65 MBps from a high-end NAS solution. To get to wire speed on 1 GigE, you might need an optimized TCP offload engine (TOE) NIC or iSCSI host bus adapter (HBA), jumbo frames enabled and a sustained sequential read.

Given all these variables for each server, you might think the only way you can be sure of addressing all your requirements is to get the biggest, baddest brute of a storage solution your money can buy. But there's an easier way. If you don't waste your entire budget on unnecessary hardware, you'll leave yourself room for all those nice options that make your job easier.

Gather basic information on your existing environment, or on what it will be in the near term. After all, if that new server refresh budget gets approved, your requirements are going to change again. Create a table: a basic inventory of all the elements of each server that relate to speeds and feeds (a simple example of such an inventory follows the list below). This information will also help your vendors match the correct solution to your requirements.

For each server, you'll need to know:

  • Existing network ports. Do you have a utility VLAN in place, or just production LAN traffic?
  • Internal or external disks. (How many spindles? SATA, SCSI, SAS or a SAN resource?)
  • RAID type in use.
  • Are there existing iSCSI or Fibre Channel HBAs? What kind of slots do they take up in the server?
  • How many PCI-X and PCI-E slots are free? Are they 133 MHz, 66 MHz, etc., or in the case of PCI-E, x4, x8, x16 and so on? In many motherboards, your expansion buses share bandwidth with each other. A high-end server will have some dedicated buses for I/O purposes.
  • The server manufacturer and model.
  • What operating system and version, and what application(s) are running on the server. You'll want to note specific versions for things like multipath support, HBA driver support and snapshot support. Not all applications or operating systems are supported by everything.
  • You'll definitely want to note any servers running anything special, like an application cluster for the database or email, or with unusual performance characteristics, such as a video streaming server.
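
To make the table concrete, here is a minimal sketch of one inventory row as a Python dictionary, using the example server described later in this tip. The field names and the RAID entry are illustrative assumptions, not a required schema:

```python
# One row of the server inventory. Field names are illustrative only;
# capture whatever level of detail your environment calls for.
exchange_server = {
    "server": "HP DL580",
    "os": "Windows Server 2003 R2",
    "application": "Exchange",
    "nic_ports_1gige": 2,                          # integrated GigE ports
    "disks": {"count": 5, "type": "SCSI 15K rpm", "size_gb": 36},
    "raid": "RAID 5",                              # assumption: record what's actually in use
    "storage_adapter": "Ultra320 SCSI on PCI-X 133 MHz",
    "free_slots": {"pcie_x4": 4, "pcix_133mhz": 2},
    "existing_hbas": None,
    "notes": "No clustering; performance currently acceptable",
}
```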

Once you pull all this information together, you can determine what most of your servers need in the way of performance, capacity and reliability without having to worry about complicated capacity planning analysis tools and without having to run a lot of benchmark tests on your systems.

How? By taking inventory of the theoretical limits of the critical components. By planning the storage infrastructure against the maximum theoretical specs of your existing servers' components, you have a conservative baseline for determining whether you can use SATA, SAS or Fibre Channel disks, and whether you'll need iSCSI or Fibre Channel connectivity.

For example, you have an Exchange server running Windows Server 2003 R2 Enterprise Edition on a Hewlett-Packard DL580, which has two integrated 1 GigE network ports and five 15,000 rpm 36 GB Seagate Cheetah 7 SCSI drives on a PCI-X 133 MHz Ultra320 SCSI adapter. The server also has four x4 PCI-E slots and two PCI-X 133 MHz slots.

It has about 120 GB of usable space and it's full. You're out of space on this and a lot of other similar servers, and you want to consolidate on a SAN for scalability, ROI, TCO and all those other great reasons. You haven't noticed any issues with performance, and you want to keep it that way when you move the server over to your SAN.

Let's consider the theoretical performance limit for each item in the hardware inventory. For each component, work from the maximum published specs; for the drives, that means the rated IOPS and read/write throughput.

  • Each 1 GigE NIC port has a theoretical wire speed limit of about 125 MBps.
  • Ultra320 SCSI has a theoretical limit of 320 MBps.
  • The PCI-X 133 MHz bus that the Ultra320 SCSI adapter is plugged into has a theoretical limit of about 1,066 MBps.
  • The Seagate 36 GB SCSI 15,000 rpm disks are rated at up to roughly 90 MBps of throughput and about 300 IOPS each. Total the number of disks in each RAID group (in this case there is only one RAID group, and the spindle count is itself a limit on performance) and multiply by the per-disk IOPS and MBps ratings.

Let's think about this for a moment. The server's five drives are capable of delivering up to 450 MBps of throughput (and 1,500 IOPS), which is more than the 320 MBps theoretical limit of the SCSI adapter they sit behind. Since the PCI-X bus is more than three times as fast as the card plugged into it, the obvious bottleneck is the two 1 GigE network ports that have to carry the data from the fast disks and controller card.
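
Here's the same arithmetic as a minimal Python sketch; the variable names are just illustrative, and the ratings come straight from the list above:

```python
# Theoretical limits for the example server.
disk_count = 5
disk_mbps, disk_iops = 90, 300       # per-disk ratings
scsi_adapter_mbps = 320              # Ultra320 SCSI
pcix_bus_mbps = 1066                 # PCI-X 133 MHz, 64-bit
nic_mbps = 2 * 125                   # two 1 GigE ports at wire speed

disk_group_mbps = disk_count * disk_mbps   # 450 MBps
disk_group_iops = disk_count * disk_iops   # 1,500 IOPS

# The slowest link in the data path sets the throughput ceiling.
ceiling = min(disk_group_mbps, scsi_adapter_mbps, pcix_bus_mbps, nic_mbps)
print(disk_group_mbps, disk_group_iops, ceiling)   # 450 1500 250 -> the GigE ports
```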

So the performance requirement, as far as throughput is concerned, is a storage resource that can deliver somewhere between the 250 MBps the two 1 GigE ports can carry between them and the 320 MBps of the SCSI adapter in the server, along with at least 1,500 IOPS. Considering that your existing performance was fine with only two 1 GigE ports, your throughput requirement will likely land in the 50 MBps to 150 MBps range, but you'll want to keep your IOPS requirement high to ensure that your new storage solution has an adequate spindle count.

Do this for all of the servers you want on the new storage solution. Add up the capacity you need, the total throughput and the total IOPS. Note: The storage solution will need to meet each server's requirements alongside those of all the other servers and applications, and the only safe way to guarantee adequate performance is to ensure that the solution can handle all of these requirements running at maximum simultaneously. If, after tallying everything up, you find that you need more than one RAID array or controller-equipped device in the modular storage solution you were planning on, or even a bigger enterprise-class storage array, at least you have working numbers to justify your requirements.
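
As a minimal sketch of that roll-up (the server names and numbers below are placeholders, not recommendations), summing the per-server figures gives you the everything-at-maximum target to hand to vendors:

```python
# Per-server requirements gathered from the inventory exercise.
# All names and numbers here are placeholders.
servers = [
    {"name": "exchange01", "capacity_gb": 180, "throughput_mbps": 250, "iops": 1500},
    {"name": "sql01", "capacity_gb": 300, "throughput_mbps": 320, "iops": 2000},
    {"name": "file01", "capacity_gb": 500, "throughput_mbps": 125, "iops": 600},
]

totals = {
    "capacity_gb": sum(s["capacity_gb"] for s in servers),
    "throughput_mbps": sum(s["throughput_mbps"] for s in servers),
    "iops": sum(s["iops"] for s in servers),
}
print(totals)   # worst case: every server at maximum simultaneously
```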

The server in our example has a PCI-E slot free and that's plenty fast enough to support the new 8 Gbit Fibre Channel HBAs, or a dual-port 4 Gbit Fibre Channel HBA, or a quad-port 2 Gbit Fibre Channel HBA, or a 10-GigE NIC card, or an InfiniBand host channel adapter (HCA), or a quad-port 1 GigE NIC, etc. Your choices on how you're going to talk to your new storage solution are pretty exciting today.

About the author: Joel Lovell is senior storage consultant for Storage Engine Inc. His specialty is high-performance storage and storage consolidation. He is EMC-trained in business continuity solutions, enterprise storage infrastructure and enterprise storage management. He previously was a strategic storage specialist for the Americas for Silicon Graphics and a senior systems engineer for EMC.
 
