In recent months, I’ve been assembling a home lab to serve as a test bed for various network and infrastructure applications. My current role at Dell often involves multi-vendor networks, so having easily accessible Cisco, Dell, Juniper, and HP devices is very useful for interoperability troubleshooting, such as with Spanning Tree Protocol.
I wanted to provide a robust virtual infrastructure, and in my experience that usually means VMware. I’m fortunate enough to have spare ESX Enterprise and Enterprise Plus licenses from VMware partner registration. To use the most valuable VMware features, like vMotion and HA, shared storage is required. In addition, I wanted to incorporate as many iSCSI best practices as possible, such as a dedicated switch, a dedicated VLAN, and jumbo frames, without breaking the bank.
Without an extra $1-2K on hand to go out and purchase a full-blown iSCSI SAN such as an EqualLogic or Compellent array (shameless Dell plugs), and already having a home NAS set up, my goal was to assemble a SAN using as much spare or existing hardware as possible, and of course limiting new expenses.
For my purposes, performance took precedence over storage capacity, and redundancy was less important than keeping costs down (and streamlining the design).
- DISK: Crucial 128 GB m4 2.5-Inch Solid State Drive SATA 6Gb/s CT128M4SSD2 – $125
- NETWORK: Dell PowerConnect 5324 24-port Gigabit switch with Jumbo Frame support (used, eBay) – $120
- Intel Gigabit NIC – $37
- SERVER: StarWind iSCSI SAN Free Edition – $0
- MISC.: 9 Pin null modem cable (console for Dell 5324) – $10
- Mounting kit for SSD – $3
- TOTAL – $295 (not incl. tax or shipping)
- I was able to repurpose an unused PC as the StarWind iSCSI server: dual-core CPU, 3 GB RAM, and Windows 7 Home. StarWind Free Edition doesn’t require a server OS, which was helpful.
- The Intel GigE NIC was installed in the PC as a dedicated interface for the iSCSI network, separate from the LAN-on-motherboard (LOM) port.
- The SSD was installed in the spare PC and presented as a new iSCSI device.
- I thought I already had a 9-pin F-F cable, but didn’t — they’re not common these days. Anyway, I got lucky and found the last one in stock at Fry’s 🙂
- For the SAN server, a Windows or Linux server OS would be ideal; however, my desktop hardware and OS were more than adequate.
- StarWind is a good option for Windows users; OpenFiler is an option for Linux folks.
- JUMBO FRAMES are a MUST!! Jumbo frames must be enabled end to end for optimal performance, starting with support on the physical switch. In addition, you’ll need to enable jumbo frame support on the VMware components: the vSwitch, port group, VMkernel interface, and the guest OS NIC adapter. Here’s a great article on the configuration for vSphere 4.
- It’s always a good practice to create a separate VLAN for iSCSI as well.
- LAN cables not included
- I’m very pleased with my new iSCSI-based shared storage system, supporting vSphere 4 on (2) Dell SC1425 64-bit 1U servers. Responsiveness is snappy within the VI Client, as well as over RDP to Windows guest VMs.
- vMotions over shared storage take 20-30 seconds — not bad compared to the 10-20 seconds I’ve observed on enterprise-class SANs.
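For reference, here’s roughly what the switch side of the VLAN and jumbo frame setup looks like on the PowerConnect 5324’s CLI. This is a sketch from a generic config session, not my saved config — the VLAN ID (50) and port (g5) are placeholder values.

```text
console# configure
! Enable jumbo frames globally (takes effect after saving and reloading)
console(config)# port jumbo-frame
! Create a dedicated iSCSI VLAN (VLAN 50 is an example ID)
console(config)# vlan database
console(config-vlan)# vlan 50
console(config-vlan)# exit
! Assign an iSCSI-facing port (g5 as an example) to the VLAN
console(config)# interface ethernet g5
console(config-if)# switchport access vlan 50
console(config-if)# exit
console(config)# exit
console# copy running-config startup-config
```

Note that on this switch the jumbo frame setting is global and only takes effect after a reboot, so it’s worth doing before cabling everything up.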
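On the ESX side, the jumbo frame settings described above can be applied from the service console CLI. A minimal sketch, assuming a dedicated iSCSI vSwitch named vSwitch1 and placeholder IP addresses for the VMkernel port and the StarWind target:

```shell
# Set MTU 9000 on the dedicated iSCSI vSwitch (vSwitch1 is an assumed name)
esxcfg-vswitch -m 9000 vSwitch1

# On vSphere 4 a VMkernel NIC's MTU can't be changed in place, so create
# the iSCSI VMkernel port with jumbo frames from the start
esxcfg-vmknic -a -i 192.168.50.11 -n 255.255.255.0 -m 9000 iSCSI

# Verify end to end with the don't-fragment bit set:
# 8972-byte payload + 28 bytes of IP/ICMP headers = 9000
vmkping -d -s 8972 192.168.50.10
```

If the vmkping succeeds, jumbo frames are working across the NIC, switch, and target; if it fails while a normal vmkping works, some hop in the path is still at MTU 1500.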
One of my recent IT consulting clients is a digital media shop that operates a local 60-node rendering farm. Based on their available bandwidth, scale, and, most importantly, established workflows and style, this client is not interested in exploring an external cloud solution at this time. Fast and stable access to rendering farm resources from desktop workstations is absolutely critical for their ongoing projects, and the trade-offs involved with a cloud provider are not favorable. Most likely, comparable new media and effects studios of this size will come to a similar conclusion.
However, larger studios such as DreamWorks are already beginning to assign additional work to specialized compute clouds, in this case one developed by the state of New Mexico. As graphics- and rendering-specialized clouds emerge onto the commercial market, a key question is once again the relative strategic value of IT infrastructure to the business. Certainly, in the case of a digital/3D studio, the primary source of competitive value resides in the creative capabilities of its designers, on whose completed projects the studio builds its reputation. Tangible assets, from graphics software to Mac workstations, network switches, and SANs, are generic and easily replaced. But the utilization of the technical infrastructure — encompassing the workflow from importing raw media to editing and rendering files, along with managing IT resources such as render node availability, storage space, security, and backups — is also a critical component of this business, and therefore not a likely candidate to be fully sourced externally.
As the cloud computing market matures and delivers more customized solutions, the 3D/render space should get very interesting. New and specialized processor offerings from vendors such as AMD and Intel will also make the choice between local and cloud-based render farms more challenging.
The rumors of this IT mega-merger have been swirling and were in full force this week. I’m not sure yet how the long-term balance of strategic benefits will work out for IBM, or what the impact on the industry will be. Who knows, maybe Big Blue wants to take the spotlight away from possible controversy over CEO Sam Palmisano’s monstrous $21 million bonus in 2008, in light of the current AIG drama? Just sayin’…
Pros:

- Sun’s new Open Cloud API, very cool
- Sun’s virtualization technology
- MySQL, and more open-source credentials
- A bigger imprint in the data center (and the cloud)

Cons:

- Accumulating more overhead — HP took years to digest Compaq
- Sun’s cachet has been fading for many years
- Still not a major network player, despite the recent partnership with Juniper
- Still not a major storage player – see EMC, NetApp, Dell, and Hitachi
It’s hard not to interpret this as IBM swiping back at Cisco in the race to dominate the emerging cloud market, given that the rumors emerged barely a day after Cisco’s major Unified Computing initiative. But with this potential acquisition IBM is enhancing its strengths — servers, open source, applications — rather than addressing its weaknesses in networking and storage. HP appears to have a more compelling end-to-end data center offering, with the established StorageWorks EVA line in addition to ProCurve networking. I’m not sold on this one yet…