I agree with Sun Microsystems CTO Greg Papadopoulos’ assertion that open source has several advantages over proprietary systems when it comes to cloud computing. After all, current market leader Amazon built its EC2 offering on open source Xen virtualization, and startups especially tend to benefit from the freedom of open source licensing compared to Microsoft’s terms. Microsoft faces an uphill battle to establish mindshare in cloud computing, and notwithstanding the recent Azure outage, its complex licensing schemes continue to befuddle developers and IT personnel alike. At last year’s Hosting Summit in Redmond, I remember attending a breakout session on licensing changes in Windows Server 2008 and IIS7 during which several Microsoft staff appeared to openly disagree about the new licensing requirements. Ummm-kay…
And circling back to Amazon, load balancing continues to emerge as an important feature currently lacking in EC2. Users are experimenting with various workarounds, but just as important are the relative cost considerations between EC2 and competing solutions. It’s all about who’s in control: businesses blindly marching down the marketing-induced path to a public cloud without a thorough evaluation of their applications’ relationship to infrastructure (characteristics like disk-write intensity, or the ratio of upstream to downstream network traffic) are headed for a rude surprise.
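To give a flavor of the kind of workaround users are rolling themselves in the absence of a managed balancer, here is a minimal round-robin dispatcher sketch in Python. The hostnames are hypothetical placeholders, not real EC2 endpoints, and this is only an illustration of the technique, not a production recipe:

```python
from itertools import cycle

# Hypothetical EC2 instance hostnames -- placeholders for illustration only.
BACKENDS = [
    "ec2-host-a.example.com",
    "ec2-host-b.example.com",
    "ec2-host-c.example.com",
]

class RoundRobinBalancer:
    """Naive round-robin dispatcher: each call returns the next backend
    in rotation. No health checks, failover, or session stickiness --
    exactly the gaps a proper load-balancing feature would fill."""

    def __init__(self, backends):
        self._pool = cycle(backends)

    def next_backend(self):
        return next(self._pool)

balancer = RoundRobinBalancer(BACKENDS)
targets = [balancer.next_backend() for _ in range(4)]
# the fourth request wraps around to the first host again
```

The sketch makes the gap obvious: with no health checking, a dead instance keeps receiving its share of traffic, which is why people reach for heavier workarounds (DNS rotation, an HAProxy instance, and so on) until something native arrives.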
This is officially my first WordPress blog. I’ve been blogging somewhat infrequently at IT Toolbox, but their security-related outage over the weekend, among other things, has convinced me it’s time to launch a full stand-alone blog. I’ll be exploring and discussing primarily IT infrastructure-related topics, ranging from newer technologies such as virtualization and cloud computing to more general issues around network and system management.
Without further ado: following several recent conversations and articles I’ve read, I’ve come to the realization that the current buzz around “Cloud Computing” is raising as many questions as answers. To wit, there seems to be a widespread assumption that all this messy “infrastructure stuff” (from physical servers to network switches, routers, backup devices, firewalls, and appliances all the way down to cabling) is magically going away so that developers, and by extension IT, can get back to focusing on the soft and chewy application stuff. Hate to be the spoiler, folks…but it just ain’t happening. Not yet, maybe never. Here’s why.

The physical layer will continue to be one of the most support-intensive areas for IT. Desktops give way to laptops, laptops to netbooks and mobile devices, everything smaller and more portable, but it’s still hardware, and still prone to failure. Who will that user call when their wristwatch/semi-neurally embedded PC stops functioning? Likewise, at the broader network level, pushing responsibility for hosting applications and data out into the “cloud,” away from local servers and infrastructure, just makes the upstream connection (the circuit, firewall, caching devices, and LAN switches) that much more critical. There may be more than a passing resemblance between the ASP hype of the dot-com era and today’s Cloud, though the implications for business IT versus the consumer space are bound to differ.
I plan to explore the Cloud more comprehensively in this blog, and will be sharing my experiences with real-world examples such as Amazon’s EC2 and Microsoft Azure. Other major topics to be covered include IT infrastructure, virtualization, Green IT, data centers, and hosting.