I agree with Sun Microsystems CTO Greg Papadopoulos’ assertion that open source has several advantages over proprietary systems when it comes to cloud computing. After all, current market leader Amazon built its EC2 offering on the open source Xen hypervisor, and startups in particular benefit from the freedom of open source licensing compared to Microsoft’s terms. Microsoft faces an uphill battle to establish mindshare in cloud computing, and notwithstanding the recent Azure outage, its complex licensing schemes continue to befuddle developers and IT personnel alike. At last year’s Hosting Summit in Redmond, I attended a breakout session on licensing changes for Windows Server 2008 and IIS7 during which several Microsoft staffers appeared to openly disagree about the new licensing requirements. Ummm-kay…
And circling back to Amazon, load balancing continues to emerge as an important feature currently lacking in EC2. Users are experimenting with various workarounds, but more importantly there are relative cost considerations between EC2 and competing solutions. It’s all about who’s in control: businesses blindly marching down the marketing-induced path to a public cloud without a thorough evaluation of their applications’ relationship to infrastructure (disk-write intensity, or upstream versus downstream network traffic, for instance) are headed for a rude surprise.
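Since EC2 offers no balancing layer of its own today, the typical workaround is to roll your own dispatcher on a front-end instance. A minimal round-robin sketch in Python (the backend hostnames are hypothetical placeholders, not real EC2 addresses):

```python
from itertools import cycle

class RoundRobinBalancer:
    """Naive round-robin dispatcher across a fixed pool of backends.

    With no built-in EC2 load balancing, a front-end instance (or the
    client itself) has to spread requests over the pool by hand.
    """

    def __init__(self, backends):
        # cycle() yields the backends in order, forever.
        self._pool = cycle(backends)

    def next_backend(self):
        return next(self._pool)

# Hypothetical internal hostnames for three application instances.
lb = RoundRobinBalancer(["app-1.internal", "app-2.internal", "app-3.internal"])
picks = [lb.next_backend() for _ in range(4)]
```

A real deployment would also need health checks to skip dead instances, which is exactly the kind of plumbing a managed balancer would otherwise handle.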
With all the recent attention on the Cloud, the line of questioning for determining strategy should begin with “how,” not “if.” The Cloud is already emerging, and while its outline is still blurry, there’s no question that an understanding and a response are in order. Many analysts suggest that Cloud-based services such as Amazon EC2 are a very compelling choice, especially for emerging SMBs seeking an elastic, cost-effective, easily scalable platform in a recessionary climate. But there are certainly cases where it may not make sense to entrust core infrastructure to an external Cloud.
A key determinant in evaluating the appropriate investment in a Cloud-based service is how critical IT infrastructure is to the core business. By IT infrastructure, I mean the collection of hardware, operating systems, networking, data, and back-end applications that comprise modern Data Centers. If your core business is driven by your IT infrastructure, why rush to cede control of strategic technologies to an outside vendor? Several years ago, I managed IT operations for an online gaming company based in Los Angeles and made a conscious decision to decouple the central office from the Data Center, with no WAN or permanent VPN connection between them. One day a technician from our ISP was onsite to upgrade the office Internet connection. Unfortunately, he snipped the wrong fiber cable, and the office was completely disconnected from the Net. Yet the online games hosted from the co-located Data Center continued to operate normally throughout the day and generate revenue, albeit with limited support and monitoring. This is a very concrete illustration of how critical IT infrastructure can be in a particular business environment: having the office offline for a full business day had minimal impact. In this case, a move to external Cloud-based services would be ill-advised, given the central role of infrastructure in the online gaming business.
Canonical, which maintains the popular Ubuntu distribution of Linux (and my personal favorite), recently announced improved support for Cloud computing in the upcoming release 9.10, named Karmic Koala. In particular, Ubuntu will be available as a pre-packaged AMI, or Amazon Machine Image, for convenient deployment on Amazon’s EC2 cloud. During a recent hands-on EC2 workshop at SCaLE, I noticed a potential security risk inherent in the large number of uncertified server images proliferating as “community AMIs” on EC2. It’s not a good idea to build a production server, or even a dev server that may later be thrown into production, on an unknown image, just as you wouldn’t build a new server from a CD-R handed to you by a random stranger. It’s good to see Canonical and Ubuntu moving forward with this initiative; the open-source community needs to take more leadership with emerging cloud technologies.
This is officially my first WordPress blog. I’ve been blogging somewhat infrequently at IT Toolbox, but their security-related outage over the weekend, among other things, has convinced me it’s time to launch a full stand-alone blog. I’ll be exploring and discussing primarily IT infrastructure-related topics, ranging from newer technologies such as virtualization and cloud computing to more general issues around network and system management.
Without further ado: following several recent conversations and articles, I’ve come to the realization that the current buzz around “Cloud Computing” is raising as many questions as answers. To wit, there seems to be a widespread assumption that all this messy “infrastructure stuff” (from physical servers to network switches, routers, backup devices, firewalls, and appliances, all the way down to cabling) is magically going away so that developers, and by extension IT, can get back to focusing on the soft and chewy application stuff. Hate to be the spoiler, folks… but it just ain’t happening. Not yet, maybe never. Here’s why.

The physical layer will continue to be one of the most support-intensive areas for IT. Desktops give way to laptops, laptops to netbooks and mobile devices, everything smaller and more portable, but it’s still hardware, and still prone to failure. Who will that user call when their wristwatch/semi-neurally embedded PC stops functioning? Likewise, on a broader network level, pushing the responsibility for hosting applications and data out into the “cloud,” away from local servers and infrastructure, will only make the upstream connection, including the circuit, firewall, caching devices, and LAN switches, that much more critical. There may be more than a passing resemblance between the ASP hype of the dot-com era and today’s Cloud, though the implications in business IT and the consumer space are bound to differ.
I plan to explore the Cloud more comprehensively in this blog, and will be sharing my experiences with real-world examples such as Amazon’s EC2 and Microsoft Azure. Other major topics to be covered include IT infrastructure, virtualization, Green IT, data centers, and hosting.