In recent months, I’ve been assembling a lab to serve as a test bed for various network and infrastructure applications. My current role at Dell often involves multi-vendor networks, so having an easily accessible test bed with Cisco, Dell, Juniper, and HP devices is very useful for troubleshooting interoperability issues such as Spanning Tree Protocol behavior.
I wanted to provide a robust virtual infrastructure, and in my experience that usually means VMware. I’m fortunate enough to have extra ESX Enterprise and Enterprise Plus licenses from VMware partner registration. To use the most valuable VMware features like vMotion and HA, shared storage is required. In addition, I wanted to incorporate as many iSCSI best practices as possible, such as dedicated infrastructure, a dedicated VLAN, and jumbo frames, without breaking the bank.
Without an extra $1–2K on hand to go out and purchase a full-blown iSCSI SAN such as EqualLogic or Compellent (shameless Dell plugs), and already having a home NAS set up, my goal was to assemble a SAN using as much spare or existing hardware as possible while, of course, limiting new expenses.
For my purposes, performance took precedence over storage capacity, and redundancy was not as important as keeping costs down (and streamlining design).
- DISK: Crucial 128 GB m4 2.5-Inch Solid State Drive SATA 6Gb/s CT128M4SSD2 – $125
- NETWORK: Dell PowerConnect 5324 24-port Gigabit switch, jumbo frame support (used, eBay) – $120
- Intel Gigabit NIC – $37
- SERVER: StarWind iSCSI SAN Free Edition
- MISC.: 9 Pin null modem cable (console for Dell 5324) – $10
- Mounting kit for SSD – $3
- TOTAL – $295 (not incl. tax or shipping)
- I was able to repurpose an unused PC as the StarWind iSCSI server: dual-core CPU, 3 GB RAM, and Windows 7 Home. StarWind Free Edition doesn’t require a server OS, which was helpful.
- The Intel GigE NIC was installed into the PC as a dedicated NIC for the iSCSI network, separate from the LOM (LAN-on-motherboard).
- The SSD was installed into the spare PC, and presented as a new iSCSI device.
- I thought I already had a 9-pin F-F cable, but didn’t. They’re not common these days, but I got lucky and found the last one in stock at Fry’s 🙂
- For the SAN server, a Windows or Linux server OS would be ideal; in my case, though, the desktop hardware and OS proved more than adequate.
- StarWind is a good option for Windows users; OpenNAS is an option for Linux folks.
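On the Windows side, the main network tuning was raising the MTU on the dedicated Intel NIC and confirming the jumbo path to the ESX hosts. A minimal sketch of how that might look; the adapter name "iSCSI NIC" and the 192.168.100.x addressing are hypothetical placeholders, not taken from my actual setup:

```
:: Raise the MTU on the dedicated Intel NIC (can also be done in the
:: adapter's Advanced properties); "iSCSI NIC" is a placeholder name.
netsh interface ipv4 set subinterface "iSCSI NIC" mtu=9000 store=persistent

:: Verify the jumbo path end to end: 8972 bytes of ICMP payload plus
:: 28 bytes of ICMP/IP headers = a full 9000-byte packet; -f sets
:: Don't Fragment, so this fails if any hop can't pass jumbo frames.
ping -f -l 8972 192.168.100.11
```

If the ping fails with "Packet needs to be fragmented but DF set," some hop (NIC, switch port, or VMkernel interface) is still at the default 1500-byte MTU.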
- JUMBO FRAMES are a MUST! Jumbo frames must be enabled end to end for optimal performance, which starts with support on the physical switch. In addition, you’ll need to enable jumbo frame support on the VMware components: vSwitch, port group, VMkernel interface, and the guest OS NIC adapter. Here’s a great article on configuration for vSphere 4.
- It’s also good practice to put iSCSI traffic on a separate VLAN.
- LAN cables not included
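For reference, the switch and ESX sides of the jumbo frame and VLAN setup look roughly like this. This is a hedged sketch, not my exact config: VLAN 100, vSwitch1, the "iSCSI" port group name, and the IPs are hypothetical placeholders. Note that on the PowerConnect 5324, jumbo frames are a global setting that takes effect only after a reload.

```
! Dell PowerConnect 5324: enable jumbo frames globally and create the iSCSI VLAN.
console# configure
console(config)# port jumbo-frame
console(config)# vlan database
console(config-vlan)# vlan 100
console(config-vlan)# exit
console(config)# interface range ethernet g(1-4)
console(config-if)# switchport mode access
console(config-if)# switchport access vlan 100
console(config-if)# exit
console(config)# exit
console# reload

# ESX 4 service console: jumbo frames had to be set from the CLI,
# not the vSphere Client, in this release.
esxcfg-vswitch -m 9000 vSwitch1          # raise the vSwitch MTU
esxcfg-vswitch -A iSCSI vSwitch1         # add a port group for iSCSI
esxcfg-vswitch -v 100 -p iSCSI vSwitch1  # tag it with the iSCSI VLAN
esxcfg-vmknic -a -i 192.168.100.11 -n 255.255.255.0 -m 9000 iSCSI
vmkping -s 8972 -d 192.168.100.10        # 8972 + 28 header bytes = 9000
```

The `vmkping` check at the end is the same end-to-end verification as the Windows-side ping: with Don’t Fragment set, it only succeeds if every hop passes full 9000-byte frames.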
- I’m very pleased with my new iSCSI-based shared storage system, supporting vSphere 4 on two Dell SC1425 64-bit 1U servers. Responsiveness is snappy within the VI Client, as well as over RDP to Windows guest VMs.
- vMotions on shared storage take 20–30 seconds, which is not bad compared to the Enterprise-class SANs I’ve observed at 10–20 seconds.