This has been an unusually busy summer. Aside from various projects with IBM Security, as well as some personal changes, the most interesting professional development is that I’ve been approved to author a new book on the subject of VMware NSX and Micro-Segmentation, to be published by Taylor & Francis. As a first-time author I’m very excited; writing a technical volume has been a goal of mine for some time, and here’s my chance at last.
Just wanted to share a brief update regarding an exciting upcoming step in my professional journey. I’ve recently accepted a role as Senior Managing Consultant in IBM’s Infrastructure and Endpoint Security group, and will be leaving my position at Dell later this month.
The past five and a half years at Dell have been quite a ride, and looking back, I have to acknowledge some mixed feelings, as I’m leaving behind great colleagues and a supportive team and manager. Over these years I’ve learned so much about how IT and network services are actually managed in a large global consulting practice, and have built a track record of planning and delivering complex, multi-vendor network solutions along the way. There have been major changes at Dell too, from the acquisitions of Force10, SonicWall and SecureWorks, to going private in 2013, and now the pending acquisition of EMC along with the divestiture of Dell (formerly Perot) Services.
Having said that, I’m truly excited about the potential at IBM, which has continued to execute a smart strategy of moving away from hardware-based, transactional business towards higher-value offerings, becoming a leading player in security, Big Data, cognitive computing and cloud. IBM’s deep history in technology and leadership in innovation were also very persuasive in making this move.
For me, this pivot towards security feels very natural and timely. In the near term, it gives me the opportunity to fully leverage my skills and training around VMware NSX, especially micro-segmentation. More broadly, I have always been fascinated by the infosec field, particularly as it relates to networking, and my diverse background across networking, virtualization, Linux and Windows, and software should serve me well here.
On a fun side note, for the first time in nearly 20 years, with IBM I was given a choice between a Mac and Windows PC as my daily workstation. I’m now taking the plunge with a snazzy MacBook Pro with Retina display, and have to say I’m loving it so far! 🙂
I’ve been spending so much time (and money…) recently in my basement studio that I thought it would be a good idea to write up a quick post and include a couple of pictures. It’s finally coming together, just in time to record new material and some older songs that have been kicking around in my brain for too long.
Without further ado, here’s the link. Enjoy!
I’ve often thought about finding effective metaphors for conceptualizing IT and data center infrastructure. As my professional experience narrowed its focus to networking and advanced storage about five years ago, I initially pictured infrastructure as a simple three-legged stool, with servers, storage and networking each forming one leg.
The problem with this analogy is that it feels too inwardly directed, and posits a free-floating infrastructure that could practically exist in a void with no express purpose. But in the business world, strategists and decision makers are most interested in applications, and increasingly in (Big) Data, which drive the supporting infrastructure requirements and not the other way around.
(Maybe the seat of the stool = Apps and Data, however this is kind of boring!)
Outside of IT infrastructure, one of my enduring interests has been music. As a teenager, I was fortunate enough to passably learn several instruments, including piano, violin and drums, as well as to participate in several bands and orchestral groups. My first forays into the early, pre-Web information technology field were a sideways step during a phase when I was teaching myself digital audio engineering and hip-hop production… I was so comfortable setting up my mess of home-studio wires (RCA, MIDI, etc.) that it felt only natural to apply myself to coax, Cat5 and Ethernet at my day job.
Listen to this timeless tune by The Beatles, “Drive My Car.” Note how Ringo Starr effortlessly supports the lead vocal, never intrusive yet steady and perfectly matched to the feel and theme of the song.
This reminds me of the ideal function of a well-architected network within a modern data center. The network provides the heartbeat, or pulse: every local device and application requires some level of connectivity to it.
Think of the drums, which provide tempo, time signature, beat and fills…
Maybe this makes sense?
- Drums – Networks
- Guitars – Servers
- Bass – Storage
- Vocals – Applications
- Lyrics – Data
I’ve recently been coming up to speed on an innovative, disruptive technology named Software Defined Networking (SDN). It may be the most significant development the networking industry has seen in years. With the promise of substantially streamlining network provisioning, management and configuration, SDN strictly speaking is about decoupling the control plane (the decision-making “brains”) from the forwarding plane (the hardware that actually moves packets). There is certainly the potential for rippling disruption in the established network industry (Cisco…) as the brains of the network move into software and out of the hardware.
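To make that decoupling concrete, here’s a toy sketch of my own (a simplification, not real OpenFlow code): the switch reduces to a dumb match-action flow table, and a separate controller program decides what rules go into it. The field names and actions are illustrative stand-ins.

```python
# Toy illustration of the SDN split: the "switch" only matches packets
# against a flow table; a separate "controller" installs the rules.
# Field names like dst_mac and actions like "output:port1" are simplified
# stand-ins for real OpenFlow match fields and actions.

flow_table = []  # ordered list of (match_dict, action) entries

def install_flow(match, action):
    """Controller side: push a match-action rule down to the switch."""
    flow_table.append((match, action))

def forward(packet):
    """Switch side: return the action of the first matching rule."""
    for match, action in flow_table:
        if all(packet.get(field) == value for field, value in match.items()):
            return action
    # Table miss: punt the packet up to the controller for a decision
    return "send_to_controller"

# The controller programs two forwarding rules
install_flow({"dst_mac": "aa:bb:cc:dd:ee:01"}, "output:port1")
install_flow({"dst_mac": "aa:bb:cc:dd:ee:02"}, "output:port2")

print(forward({"dst_mac": "aa:bb:cc:dd:ee:02"}))  # output:port2
print(forward({"dst_mac": "ff:ff:ff:ff:ff:ff"}))  # send_to_controller
```

The point of the sketch is the separation of concerns: all forwarding intelligence lives in `install_flow` calls made by software, while the switch loop stays trivially simple.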
My approach to rapidly familiarizing myself with any new technology is to read and absorb as much as possible, while at the same time getting hands-on exposure. Accordingly, here are some suggested resources:
1. Network World offers regular coverage of SDN-related news; here is a recent overview. For a less technical viewpoint, read this article from The Economist. For networking techies, go to the Open Networking Summit and check out their video archive of conference sessions. Nick McKeown’s keynote, “How SDN Will Shape Networking,” is an excellent introduction.
2. Register for the free OpenFlow Tutorial to learn about the primary SDN protocol, OpenFlow. You’ll get to build a real SDN switch, capture OpenFlow packets and maybe get into some Python.
Another useful free online course is the SDN class offered by Coursera, taught by Dr. Nick Feamster of Georgia Tech. Keep in mind this class is highly technical and assumes prior advanced knowledge of network engineering.
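If you want a quick taste of the hands-on side before diving in, here’s a sketch of a typical Mininet/OpenFlow session; the topology, controller setup and port number are illustrative assumptions on my part, not steps copied from the tutorial.

```shell
# Illustrative Mininet session; topology and port are assumptions,
# not steps taken from the OpenFlow Tutorial itself.

# Start a single-switch, three-host virtual topology that waits for
# an external (remote) OpenFlow controller
sudo mn --topo single,3 --mac --controller remote

# In a second terminal, capture the OpenFlow control-channel traffic
# (6633 was the conventional OpenFlow controller port)
sudo tcpdump -i lo port 6633 -w openflow.pcap

# Back at the Mininet CLI, test connectivity once flows are installed:
# mininet> pingall
```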
Enjoy and Happy Fourth!
P.S. On Twitter, here are my favorite sources on SDN – @etherealmind @openflownetwork @sdn_news @openNetSummit @openflow @nicira
In recent months, I’ve been assembling a lab to provide a test bed for various network and infrastructure applications. My current role at Dell often involves multi-vendor networks, so having an easily accessible test bed including Cisco, Dell, Juniper and HP devices can be very useful for troubleshooting interoperability issues such as Spanning Tree Protocol behavior.
I wanted to provide a robust virtual infrastructure, and in my experience that usually means VMware. I’m fortunate enough to have extra ESX Enterprise and Plus licenses from VMware partner registration. To use the most valuable VMware features, like vMotion and HA, a shared storage system is required. In addition, I wanted to incorporate as many iSCSI best practices as possible, such as dedicated infrastructure, a dedicated VLAN and jumbo frames, without breaking the bank.
Without an extra $1-2K on hand to go out and purchase a full-blown iSCSI SAN such as EqualLogic or Compellent (shameless Dell plugs), and already having a home NAS set up, my goal was to assemble a SAN using as much spare or existing hardware as possible, and of course limiting new expenses.
For my purposes, performance took precedence over storage capacity, and redundancy was not as important as keeping costs down (and streamlining design).
- DISK: Crucial 128 GB m4 2.5-Inch Solid State Drive SATA 6Gb/s CT128M4SSD2 – $125
- NETWORK: Dell PowerConnect 5324 24-port Gigabit switch, jumbo frame support (used, eBay) – $120
- Intel Gigabit NIC – $37
- SERVER: StarWind iSCSI SAN Free Edition
- MISC.: 9 Pin null modem cable (console for Dell 5324) – $10
- Mounting kit for SSD – $3
- TOTAL – $295 (not incl. tax or shipping)
- I was able to repurpose an unused PC for the StarWind iSCSI server, with a dual-core CPU, 3 GB of RAM, and Windows 7 Home. StarWind Free Edition doesn’t require a server OS, so that was helpful.
- The Intel GigE NIC was installed into the PC for a dedicated NIC to the iSCSI network, separate from the LOM.
- The SSD was installed into the spare PC, and presented as a new iSCSI device.
- I thought I already had a 9-pin F-F cable but didn’t… they’re not common these days, but I got lucky and found the last one in stock at Fry’s 🙂
- For the SAN server, a Windows or Linux server OS would be ideal; that said, my repurposed hardware and desktop OS proved more than adequate.
- StarWind is a good option for Windows users; OpenNAS is an option for Linux folks.
- JUMBO FRAMES are a must! Jumbo frames have to be enabled end to end for optimal performance, starting with support on the physical switch. In addition, you’ll need to enable jumbo frame support across the VMware components, including the vSwitch, port group, VMkernel port and guest OS NIC adapter. Here’s a great article on the configuration for vSphere 4.
- It’s always a good practice to create a separate VLAN for iSCSI as well.
- LAN cables not included
- I’m very pleased with my new iSCSI-based shared storage system, supporting vSphere 4 on (2) Dell SC1425 64-bit 1U servers. Responsiveness is snappy within the VI Client, as well as within RDP sessions to Windows guest VMs.
- vMotions on shared storage: 20-30 seconds, not bad compared to the 10-20 seconds I’ve observed on Enterprise-class SANs.
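For reference, here’s roughly what the ESX-side jumbo frame plumbing looks like from the vSphere 4 service console. The vSwitch name, port group name and IP addresses are placeholders for illustration, so adjust for your own environment.

```shell
# Sketch of ESX 4.x classic service-console commands; vSwitch1, the
# "iSCSI" port group and the IPs below are placeholders, not my actual lab.

# Set a 9000-byte MTU on the vSwitch carrying iSCSI traffic
esxcfg-vswitch -m 9000 vSwitch1

# Create the iSCSI VMkernel port with a matching 9000-byte MTU
esxcfg-vmknic -a -i 192.168.50.10 -n 255.255.255.0 -m 9000 iSCSI

# Verify MTU settings on the vSwitch and the VMkernel NIC
esxcfg-vswitch -l
esxcfg-vmknic -l

# End-to-end jumbo check from the host to the SAN: 8972 bytes = 9000
# minus 28 bytes of IP/ICMP headers, with -d setting "don't fragment"
vmkping -d -s 8972 192.168.50.20
```

The `vmkping -d -s 8972` check is the quickest way to confirm jumbo frames really are working end to end; if any hop in the path (switch port, VLAN, vSwitch) is still at 1500 MTU, the ping fails.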