
Reactions to VMware’s vSphere 4


I was impressed with the VMware simulcast this morning announcing vSphere, the next iteration of their enterprise virtualization platform, dubbed the first “Cloud OS.” Having deployed and administered VMware products for several years, it’s exciting to see them continue to push the evolution of virtualization, whose scope has now expanded from a single server to multiple data centers.

It’s also becoming quite apparent that a loose alliance is coalescing between several of the established leaders in the infrastructure space. In particular, VMware continues to align with Cisco, whose recently unveiled “Unified Computing System,” combined with vSphere, offers the promise of a private “cloud in a box.” Other members of this confederation are Intel, whose recent Xeon 5500 Nehalem chip is tailored for VM workloads, and EMC, whose updated Symmetrix SAN is optimized for both VMware and Microsoft Hyper-V. Dell appears to be more closely aligned than HP, and has a better position in the SMB market.

And don’t count out Oracle/Sun: one of today’s vSphere demos featured Sun Fire servers, and when Cisco CEO John Chambers left the stage to congratulate VMware’s lead engineering team, Sun racks were featured quite prominently.

So who’s not joining the party yet?

Here’s my list –

  • HP – not seeing innovation, very quiet these days
  • IBM – passed on Sun, noticeably low-key at today’s vSphere event
  • Google – how long before they offer a full-blown Cloud service?
  • Microsoft – no support for Hyper-V in VMware vSphere
  • Citrix – falling further behind, with no support from EMC Symmetrix or vSphere

The continued limited interoperability between the major virtualization vendors – VMware, Microsoft, Citrix – and the resulting “vendor lock-in” really make me wonder about the feasibility and likelihood of a truly Open Cloud platform, given the symbiotic relationship between virtualization and cloud computing.

P.S. I still think Cisco should have picked up Sun…

Dry reflections on a possible IBM purchase of Sun


The rumors of this IT mega-merger have been swirling and were in full force this week. I’m not sure yet how the long-term balance of strategic benefits will work out for IBM, or what the impact on the industry will be. Who knows, maybe Big Blue wants to take the spotlight away from possible controversy over CEO Sam Palmisano’s monstrous $21 million bonus in 2008, in light of the current AIG drama? Just sayin’…

Benefits

  • Sun’s new Open Cloud API, very cool
  • Sun’s virtualization
  • MySQL, and more open-source credentials
  • Bigger footprint in the Data Center (and the cloud)
  • Java

Drawbacks

  • Accumulating more overhead…HP took years to digest Compaq
  • Sun’s cachet has been fading for many years
  • Still not a major network player, despite recent partnership with Juniper
  • Still not a major storage player – see EMC, NetApp, Dell and Hitachi
  • Java

It’s hard not to interpret this as IBM swiping back at Cisco in the race to dominate the emerging cloud market, given that the rumors surfaced barely a day after Cisco’s major Unified Computing initiative. But with this potential acquisition IBM is enhancing its strengths – servers, open source, applications – while not addressing its weaknesses in networking and storage. HP appears to have a more compelling end-to-end Data Center offering, with an established StorageWorks EVA line in addition to ProCurve networking. I’m not sold on this one yet…

Microsoft licensing, and load-balancing in the cloud


I agree with Sun Microsystems CTO Greg Papadopoulos’ assertion that open source has several advantages over proprietary systems when it comes to cloud computing. After all, current market leader Amazon built their EC2 offering on open-source Xen virtualization, and startups especially tend to benefit from the freedom of open-source licensing compared to Microsoft’s terms. Microsoft has an uphill battle to establish mindshare in cloud computing, and the recent Azure outage notwithstanding, their complex licensing schemes continue to befuddle developers and IT personnel alike. At last year’s Hosting Summit in Redmond, I remember attending a breakout session on licensing changes in Windows Server 2008 and IIS 7 during which several Microsoft staff appeared to openly disagree about the new licensing requirements. Ummm-kay…

And circling back to Amazon, load balancing continues to emerge as an important feature currently lacking in EC2. Users are experimenting with various workarounds, but more importantly there are relative cost considerations between EC2 and competing solutions. It’s all about who’s in control: businesses blindly marching down the marketing-induced path to a public cloud without a thorough evaluation of their applications’ relationship to infrastructure – disk-write intensity, say, or upstream versus downstream network traffic – are headed for a rude surprise.
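To make the workaround idea concrete, here’s a minimal sketch of round-robin distribution across a pool of instances – the simplest approach users have been rigging up. The hostnames are hypothetical, and a real deployment would put something like HAProxy or DNS round robin in front of the pool rather than application code:

```python
from itertools import cycle

class RoundRobinBalancer:
    """Cycle requests across a fixed pool of backend instances."""

    def __init__(self, instances):
        self._pool = cycle(instances)

    def next_instance(self):
        # Each call hands back the next backend in rotation.
        return next(self._pool)

# Hypothetical EC2 instance hostnames, for illustration only.
balancer = RoundRobinBalancer([
    "ec2-host-a.example.com",
    "ec2-host-b.example.com",
    "ec2-host-c.example.com",
])
print(balancer.next_instance())  # ec2-host-a.example.com
print(balancer.next_instance())  # ec2-host-b.example.com
```

Note that this sketch ignores health checks and session stickiness entirely – which is exactly why a naive workaround can end up costing more than it appears to.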

Clouds can’t lift you away from Infrastructure


Despite demonstrated examples of migrating entire applications within a cloud, the reality is that most organizations are still unprepared for the infrastructure and network required to support this level of dynamic architecture. More to the point, there are compelling factors to weigh before shifting an application to a cloud-based provider. I’ve met developers whose lack of understanding of infrastructure – and reluctance to spend on it – has led them to see the cloud as a silver bullet, but as Greg Ness has bluntly stated, this may be an “escapist fantasy.”

Size is a critical factor in determining an organization’s ability to utilize the cloud, but it needs to be balanced with business objectives and the depth of IT resources. While tech-based startups and enterprises may be ripe for cloud-based services – especially a hybrid approach for the enterprise – small businesses might not be appropriate candidates. Applications such as accounting and financial systems, HR, development servers, and Exchange/SharePoint are likely better served locally for small and some medium-sized businesses, for reasons of security, auditing and performance.

Don’t close up that Data Center yet.

One size does not fit all


With all the recent attention around the Cloud, it appears that the line of questioning for determining strategy should begin with “how” and not “if.” The Cloud is already emerging, and while its lining is still blurry, there’s no question that an understanding and a response are in order. Many analysts suggest that Cloud-based services such as Amazon EC2 are a very compelling choice, especially for emerging SMBs seeking to maximize the value of an elastic, cost-effective, easily scalable platform in a recessionary climate. But there are certainly cases where it may not make sense to entrust core infrastructure to an external Cloud.

A key determinant in evaluating the appropriate investment in a Cloud-based service is how critical IT infrastructure is to the core business. By IT infrastructure, I am referring to the collection of hardware, operating systems, networking, data, and back-end applications which comprise modern Data Centers. If your core business is driven from your IT infrastructure, why would you rush to cede control of strategic technologies to an outside vendor? Several years ago, I managed IT operations for an online gaming company based in Los Angeles, and made a conscious decision to de-couple the central office and the Data Center, with no WAN or permanent VPN connection between them. One day a technician from our ISP was onsite in the office to upgrade the Internet connection. Unfortunately, he snipped the wrong fiber cable, and the office was completely disconnected from the Net. Yet the online games hosted from the co-located Data Center continued to operate normally throughout the day and generate revenue, albeit with limited support and monitoring.

Here is a very concrete illustration of the heightened criticality of IT infrastructure in a particular business environment, to the degree that having the office offline for a full business day was of minimal impact. In this case, a move to external Cloud-based services would be ill-advised, given the central role of infrastructure in this online gaming business.

Ubuntu gets the Cloud too


Canonical, which maintains the popular Ubuntu distribution of Linux (and my personal favorite), recently announced improved support for Cloud computing in the upcoming release 9.10, named Karmic Koala. In particular, Ubuntu will be available as a pre-packaged AMI, or Amazon Machine Image, for convenient deployment on Amazon’s EC2 cloud. During a recent hands-on EC2 workshop at SCaLE, I noticed a potential security risk inherent in the large number of uncertified server images proliferating as “community AMIs” on EC2. It’s not a good idea to build a production server – or even a dev server which may later be thrown into production – on an unknown image, just as you wouldn’t build your new server from a CD-R handed to you by a random stranger. Good to see Canonical and Ubuntu moving forward with this initiative; the open-source community needs to take more leadership with emerging cloud technologies.
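As a rough illustration of the sanity check this implies, here’s a sketch that filters a list of image records down to publishers you explicitly trust. The image metadata and the owner-account ID are stand-ins for what you would actually pull back from the EC2 DescribeImages API:

```python
# Stand-in AMI metadata; in practice this would come from the EC2 API.
# The owner ID below is a placeholder for Canonical's publishing account.
TRUSTED_OWNERS = {"099720109477"}

images = [
    {"ami_id": "ami-0001", "owner": "099720109477", "name": "ubuntu-9.10-karmic"},
    {"ami_id": "ami-0002", "owner": "123456789012", "name": "community-lamp-stack"},
]

def trusted_images(images, trusted_owners):
    """Keep only images published by accounts we explicitly trust."""
    return [img for img in images if img["owner"] in trusted_owners]

safe = trusted_images(images, TRUSTED_OWNERS)
print([img["ami_id"] for img in safe])  # ['ami-0001']
```

The point isn’t the code, it’s the habit: verify who published an image before you build on it.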

A Cloud of Questions


This is officially my first WordPress blog. I’ve been blogging somewhat infrequently at IT Toolbox, but their security-related outage over the weekend, among other things, has convinced me it’s time to launch a full stand-alone blog. I’ll be exploring and discussing primarily IT infrastructure-related topics, ranging from newer technologies such as virtualization and cloud computing to more general issues around network and system management.

Without further ado: following several recent conversations and articles, I’ve come to the realization that the current buzz around “Cloud Computing” is raising as many questions as answers. There seems to be a widespread assumption that all this messy “infrastructure stuff” – from physical servers to network switches, routers, backup devices, firewalls, and appliances, all the way down to cabling – is magically going away so that developers, and by extension IT, can get back to focusing on the soft and chewy application stuff. Hate to be the spoiler, folks…but it just ain’t happening. Not yet, maybe never. Here’s why.

The physical layer will continue to be one of the most support-intensive areas for IT. Desktops are giving way to laptops, laptops to netbooks and mobile devices, everything becoming smaller and more portable – but it’s still hardware, and still prone to failure. Who will that user call when their wristwatch/semi-neurally embedded PC stops functioning? Likewise, at the broader network level, pushing the responsibility for hosting applications and data out into the “cloud,” away from local servers and infrastructure, will only make the upstream connection – the circuit, firewall, caching devices, LAN switches – that much more critical. There may be more than a passing resemblance between the ASP hype of the dot-com era and today’s Cloud, though the implications are bound to differ between the business IT and consumer spaces.

I plan to explore the Cloud more comprehensively in this blog, and will be sharing my experiences with real-world examples such as Amazon’s EC2 and Microsoft Azure. Other major topics to be covered include IT infrastructure, virtualization, Green IT, data centers, and hosting.