
Building an iSCSI SAN for VMware for Under $300


In recent months, I've been assembling a lab to provide a test bed for various network and infrastructure applications. My current role at Dell often involves multi-vendor networks, so having an easily accessible test bed with Cisco, Dell, Juniper, and HP devices is very useful for troubleshooting interoperability issues such as Spanning Tree Protocol behavior.

I wanted to provide a robust virtual infrastructure, and in my experience that usually means VMware. I'm fortunate enough to have extra ESX Enterprise and Plus licenses from VMware partner registration. To use the most valuable VMware features like vMotion and HA, a shared storage system is required. In addition, I wanted to incorporate as many iSCSI best practices as possible, such as dedicated infrastructure, a dedicated VLAN, and jumbo frames, without breaking the bank.

Without an extra $1-2K on hand to go out and purchase a full-blown iSCSI SAN such as EqualLogic or Compellent (shameless Dell plugs), and already having a home NAS set up, my goal was to assemble a SAN using as much spare or existing hardware as possible and, of course, limiting new expenses.

For my purposes, performance took precedence over storage capacity, and redundancy was not as important as keeping costs down (and streamlining design).

Ingredients:

Configuration:

  • I was able to repurpose an unused PC for the iSCSI StarWind server, with a dual-core CPU, 3 GB RAM, and Windows 7 Home. StarWind Free Edition doesn't require a server OS, so that was helpful.
  • The Intel GigE NIC was installed in the PC to provide a dedicated connection to the iSCSI network, separate from the LOM.
  • The SSD was installed into the spare PC, and presented as a new iSCSI device.
  • I thought I already had a 9-pin F-F cable, but didn't…not common these days. Anyway, I got lucky and found the last one in stock at Fry's 🙂

Caveats:

  • For the SAN server, ideally you would run a Windows or Linux server OS; however, my hardware was more than adequate for a lab.
  • StarWind is a good option for Windows users; OpenNAS is an option for Linux folks.
  • JUMBO FRAMES are a MUST!! Jumbo frames must be enabled end to end for optimal performance, and must be supported on the physical switch for starters. In addition, you'll need to configure jumbo frame support on the VMware components, including the vSwitch, port group, VMkernel interface, and guest OS NIC adapter (see the quick CLI sketch just after this list). Here's a great article on configuration for vSphere 4.
  • It’s always a good practice to create a separate VLAN for iSCSI as well.
  • LAN cables not included
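
As a rough sketch of the jumbo frame steps on the ESX 4 side (the vSwitch name, port group name, and IP addressing below are placeholders, not my exact lab values), the MTU has to be set on both the vSwitch and the VMkernel interface from the service console, since the vSphere 4 GUI doesn't expose the MTU setting:

# Set a 9000-byte MTU on the dedicated iSCSI vSwitch
esxcfg-vswitch -m 9000 vSwitch1

# Create the iSCSI VMkernel port with a matching 9000-byte MTU
# (the "iSCSI" port group is assumed to already exist on vSwitch1)
esxcfg-vmknic -a -i 192.168.50.11 -n 255.255.255.0 -m 9000 iSCSI

# Verify the MTU on the vSwitch and VMkernel interface
esxcfg-vswitch -l
esxcfg-vmknic -l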

Results:

  • I'm very pleased with my new iSCSI-based shared storage system, supporting vSphere 4 on two Dell SC1425 64-bit 1U servers. Responsiveness is snappy within the VI Client, as well as over RDP to Windows guest VMs.
  • vMotions on shared storage take 20-30 seconds, not bad compared to the enterprise-class SANs I've observed at 10-20 seconds.

Here are my two Dell SC1425 servers, each with two 3 GHz Xeon CPUs and 6 GB RAM, plus dedicated 1 GbE NICs for the iSCSI network.


Cisco Nexus 5596s with Redundant Uplinks to Catalyst 6509 Cores Using vPC


I recently had the opportunity to deploy a Cisco Nexus solution built on 5596UP switches for a healthcare customer. The Nexus line represents Cisco's presence in the converged fabric segment, which has been gaining momentum as more IT shops seek to streamline datacenter infrastructure toward a "private cloud" approach. The Nexus 5000 series is primarily aimed at the access/edge layer, with Layer 3 capabilities available as an add-on.

The Nexus switches support vPC (Virtual Port Channels), which “allows links that are physically connected to two different Cisco Nexus 5000 Series switches or Cisco Nexus 2000 Series Fabric Extenders to appear as a single port channel by a third device.” (Cisco)

Here is a diagram of the planned configuration of vPC uplinks between the Nexus 5596UP and Catalyst 6509 (core) switches:


To set up vPC on the Nexus switches, first you need to create a vPC peer-link between the pair of Nexus switches. The peer-link should consist of at least two interfaces for redundancy.

feature vpc
vpc domain 1
  role priority 4096
  system-priority 2000
  peer-keepalive destination 192.168.100.20
  auto-recovery

interface port-channel20
  switchport mode trunk
  vpc peer-link
  switchport trunk allowed vlan 100,103-104,901
  spanning-tree port type network

interface Ethernet1/23
  description Link 1 to 5596-sw2
  switchport mode trunk
  switchport trunk allowed vlan 100,103-104,901
  channel-group 20 mode active

interface Ethernet1/24
  description Link 2 to 5596-sw2
  switchport mode trunk
  switchport trunk allowed vlan 100,103-104,901
  channel-group 20 mode active

Repeat on the 2nd Nexus switch.
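
For reference, the vpc domain block on the second switch is the mirror image of the first: the peer-keepalive destination points back at the first switch's management address, and a higher role priority lets the first switch win the primary role. The address below is illustrative, and the system-priority must match on both peers; the peer-link port-channel and member interfaces are configured identically to the first switch.

feature vpc
vpc domain 1
  role priority 8192
  system-priority 2000
  peer-keepalive destination 192.168.100.10
  auto-recovery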

Next you need to create the virtual port-channels on the Nexus side. We will create one port-channel per uplink interface.

interface port-channel1
  switchport mode trunk
  vpc 1
  switchport trunk allowed vlan 100,103-104,901

interface port-channel2
  switchport mode trunk
  vpc 2
  switchport trunk allowed vlan 100,103-104,901

interface Ethernet1/1
  description uplink to Core1-7/8
  switchport mode trunk
  switchport trunk allowed vlan 100,103-104,901
  spanning-tree guard loop
  channel-group 1 mode active

interface Ethernet1/2
  description uplink to Core2-6/7
  switchport mode trunk
  switchport trunk allowed vlan 100,103-104,901
  spanning-tree guard loop
  channel-group 2 mode active

Again, repeat for the 2nd Nexus switch.
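
Before moving to the Catalyst side, it's worth sanity-checking what has been built so far on each Nexus. A few standard NX-OS show commands (output omitted here) will confirm that the peer-link is up, the members have bundled into their port-channels, and the configuration parameters are consistent between peers; note that the vPC member port-channels themselves will stay down until the Catalyst side is configured.

5596-sw1# show vpc brief
5596-sw1# show vpc consistency-parameters global
5596-sw1# show port-channel summary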

Notice that spanning-tree Loop Guard has been enabled on the uplinks to prevent STP looping issues. Also, the allowed VLANs should match the VLAN IDs allowed on the peer-link.

Finally, create the port channels on the Catalyst side. Here we will create ONE port channel per Catalyst, consisting of the uplinks from each Nexus switch, so that the Catalyst will see the Nexus pair as a single switch. Until this step is complete, the vPC status will show as down.

interface Port-channel200
 description "Connection to Nexus"
 switchport
 switchport trunk encapsulation dot1q
 switchport mode trunk
 no ip address
 spanning-tree guard root

interface TenGigabitEthernet7/6
 description NEXUS SW2 PORT1
 switchport
 switchport trunk encapsulation dot1q
 switchport mode trunk
 no ip address
 spanning-tree guard root
 channel-group 200 mode active

interface TenGigabitEthernet7/8
 description NEXUS SW1 PORT1
 switchport
 switchport trunk encapsulation dot1q
 switchport mode trunk
 no ip address
 spanning-tree guard root
 channel-group 200 mode active

Repeat this process for the 2nd Catalyst switch.
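
On the Catalyst side, a couple of quick checks (the 6509-core1 hostname is illustrative, and output is omitted) confirm that both 10 Gb uplinks have bundled into Port-channel200 and that the trunk is carrying the expected VLANs:

6509-core1# show etherchannel 200 summary
6509-core1# show interfaces port-channel 200 trunk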

Now the vPC status should show as up:

5596-sw1(config-if)# sh vpc
Legend:
                (*) - local vPC is down, forwarding via vPC peer-link

vPC domain id                   : 1
Peer status                     : peer adjacency formed ok
vPC keep-alive status           : peer is alive
Configuration consistency status: success
Per-vlan consistency status     : success
Type-2 consistency status       : success
vPC role                        : primary
Number of vPCs configured       : 2
Peer Gateway                    : Disabled
Dual-active excluded VLANs      : -
Graceful Consistency Check      : Enabled

vPC Peer-link status
---------------------------------------------------------------------
id   Port   Status Active vlans
--   ----   ------ --------------------------------------------------
1    Po20   up     1,100,103-104

vPC status
----------------------------------------------------------------------------
id     Port        Status Consistency Reason                     Active vlans
------ ----------- ------ ----------- -------------------------- -----------
1      Po1         up     success     success                    1,100,103-104
2      Po2         up     success     success                    1,100,103-104

Here is a useful Cisco reference document on vPC for the Nexus 5000 series.

Notes:

  • This configuration only applies if VSS is NOT established between the Catalyst 6509 cores.
  • Alternatively, you can rely on Spanning Tree to provide redundant links from a Nexus pair to dual cores, with only one uplink left active by STP. This customer insisted on getting the aggregated bandwidth of both 10 Gb uplinks, as they planned to converge additional applications onto the Nexus in the future.
  • If at all possible, you should test this configuration in a lab before production deployment. At a minimum, deploy these changes during a maintenance window, as there is a risk of a network outage, mainly due to looping behavior; spanning-tree Root Guard and Loop Guard are strongly recommended!

Happy vPC’ing!

Reactions to VMware's vSphere 4


I was impressed with the VMware simulcast this morning announcing vSphere, the next iteration of their enterprise virtualization platform, dubbed the first "Cloud OS." Having deployed and administered VMware products for several years, it's exciting to see them continue to push the evolution of virtualization, which has now expanded from a single server up to multiple data centers.

It's also becoming quite apparent that a loose alliance is coalescing among several of the established leaders in the infrastructure space. In particular, VMware continues to align with Cisco, whose recently unveiled "Unified Computing System," combined with vSphere, offers the promise of a private "cloud in a box." Other members of this confederation are Intel, whose recent Xeon 5500 Nehalem chip is tailored for VM loads, and EMC, whose updated Symmetrix SAN is optimized for VMware and Microsoft Hyper-V support. Dell appears to be more closely aligned than HP, and has a better position in the SMB market.

And don't count out Oracle/Sun: one of today's vSphere demos featured Sun Fire servers, and when Cisco CEO John Chambers left the stage to congratulate VMware's lead engineering team, Sun racks were featured quite prominently.

So who’s not joining the party, yet?

Here’s my list –

  • HP – not seeing innovation, very quiet these days
  • IBM – passed on Sun, noticeably low-key at today’s VSphere event
  • Google – how long before they offer a full-blown cloud service?
  • Microsoft – no support for Hyper-V in VMware vSphere
  • Citrix – falling further behind, with no support from EMC Symmetrix or vSphere

The continued limited interoperability between the major virtualization vendors (VMware, Microsoft, Citrix) and the resulting "vendor lock-in" really make me wonder about the feasibility and likelihood of a truly open Cloud platform, given the symbiotic relationship between virtualization and cloud computing.

P.S. I still think Cisco should have picked up Sun…

Dry reflections on a possible IBM purchase of Sun


The rumors of this IT mega-merger have been swirling and were in full force this week. I'm not sure yet how the long-term balance of strategic benefits will work out for IBM, or what the impact on the industry will be. Who knows, maybe Big Blue wants to take the spotlight away from the possible controversy involving CEO Sam Palmisano's monstrous $21 million bonus in 2008, in light of the current AIG drama? Just sayin'…

Benefits

  • Sun’s new Open cloud API, very cool
  • Sun’s virtualization
  • MySQL, and more open-source credentials
  • Bigger footprint in the data center (and the cloud)
  • Java

Drawbacks

  • Accumulating more overhead…HP took years to digest Compaq
  • Sun’s cachet has been fading for many years
  • Still not a major network player, despite recent partnership with Juniper
  • Still not a major storage player – see EMC, NetApp, Dell and Hitachi
  • Java

It's hard not to read this as IBM wanting to swipe back at Cisco in the race to dominate the emerging cloud market, given that the rumors emerged barely a day after Cisco's major Unified Computing initiative. But with this potential acquisition IBM is enhancing its strengths (servers, open source, applications) rather than addressing its weaknesses in networking and storage. HP appears to have a more compelling end-to-end data center offering, with an established EVA StorageWorks line in addition to ProCurve networking. I'm not sold on this one yet…