Friday, August 15, 2008

Um, Just Who is Managing Your Public Cloud?

This article summarizes some of the recent bumps in public clouds. While these bumps are inevitable (and unenviable!) in the early stages of a new technology, they do shine a light on how these data centers are managed. And, as may be obvious, the people who lost their data in one case most likely have no recourse with the holders of that data. In the case of outages, "well, gee, so sorry" is a pretty weak excuse at the moment for problems in managing the public cloud.

My guess is that this will start a bit of a turn towards more conservative cloud management (that loose-and-free stuff looks good on paper), and that in turn may start to put a little pressure on prices, or chip away at the license/contractual assurances that current cloud providers make available.

Another thing worth noting here: Google and Amazon, two of the biggest cloud providers, have internal architectures that are designed with high availability in mind. These types of outages typically would not have affected their core operations. However, most applications running in their clouds today were not architected for the same style of high availability.
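To make that distinction concrete, here is a minimal sketch (mine, not Google's or Amazon's actual design) of what application-level high availability can look like: the client tries each redundant replica of a service in turn rather than assuming a single endpoint is always up. The replica names are hypothetical.

```python
def call_with_failover(replicas, request):
    """Try each redundant replica in turn; fail only if all replicas fail.

    `replicas` is a list of callables, each standing in for one redundant
    service endpoint (e.g., the same service in different data centers).
    """
    last_error = None
    for replica in replicas:
        try:
            return replica(request)
        except Exception as err:  # this replica is down; try the next one
            last_error = err
    raise last_error  # every replica failed
```

An application written against a single fixed endpoint gets none of this benefit, no matter how redundant the provider's internal infrastructure is - which is exactly the gap between the providers' core operations and the applications running on top of them.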

Anyway, I'll continue to assert that issues like this will help foster the drive towards, at least initially, private clouds, with a limited subset of workloads moving into the public clouds based on the type of workload.

It is going to be a bumpy take off into these clouds - fasten your seat belt and hope that the people getting sick along the way aren't on your plane...

BTW, here are a couple of other links to recent glitches and failures, such as the evaporating cloud or "oops, sorry, we deleted your cloud". Some are Web 2.0 companies, but a couple are effectively cloud computing providers which have had public failures - in large part because the data centers and applications were not designed for true high availability, or because they had maintenance issues. And the last of those links (thanks, Brian!) was just the typical human-error problem. Even if you don't create your own cloud, you may well want to really know who is managing your cloud and how - at least until we have some higher-end service level agreements available.

Wednesday, August 13, 2008

VMware joins the Linux Foundation

So Cloud Computing is the rage today, and it is based on virtualization. Many claim that Linux and Open Source were the master key that opened the door to Cloud Computing. So it seems very fitting that VMware has joined the Linux Foundation. The recent repricing of VMware ESX (it is now free) definitely helps make core virtualization a commodity, and thus makes it easier to build the more complex software solutions that will ultimately simplify information technology management over the next several years. Linux with VMware ESX, Xen, KVM, etc. now provides a powerful base platform on which to build those more complex solutions - solutions which will ultimately enrich our lives and reduce the amount of time we spend managing our IT infrastructure.

Welcome to the Linux Foundation, VMware!

You want to participate in an open source development community?

Then read this. Kudos to Jonathan Corbet of LWN.net fame. A very good (and relatively short) booklet on how to participate in an open source community. It is geared specifically towards Linux, but many of the observations span communities and apply to any development project where a mailing list is the primary communication medium for developers.

Definitely a good read!

How Secure is your Public Cloud, anyway?

I've been chatting with people lately about the rate of uptake and adoption of these so-called "public" clouds. While I'm a big fan of the potential here, they still aren't the right thing for all workloads. There are problems with availability, security, latency, etc. which have not all been resolved. As an example, VMware was recently hit by this bug, and black hats identified some holes in Xen security. And these are surely not the last holes. Sometime around the time I was born, IBM started working with virtualization and providing very high-end availability, reliability, security, and such. VMware and Xen are much younger cousins which have a lot more growing up to do before they provide the security and isolation of physical machines. Of course, the push for Cloud Computing and ubiquitous virtualization will accelerate the improvements in security and isolation in these more modern hypervisors. But I probably wouldn't be putting my corporate intellectual property on a public cloud just yet. Many other workloads may be just fine, but think carefully about what goes out into the public domain, er, cloud, and what you protect with those corporate firewalls.

On the other hand, those corporate firewalls give you some protection if you want to use private clouds inside your enterprise today. Those security holes mean that your own employees might get access to more information than you might have intended, but there are other things, like employment contracts, that give you some control over those types of misuses. And, unintentional access resulting from bugs at least puts your data in the hands of people you generally consider reliable.

Wednesday, August 06, 2008

How can I get a padded jail?

Jim Zemlin is a featured speaker and panel coordinator today at LinuxWorld Expo. His intro to the panel questions started with the assertion that over the years (have there really been 18 LinuxWorld Expos so far? wow) Linux has become nearly ubiquitous, illustrated with a lot of pictures of mobile devices, servers, desktops, laptops, services, collaboration tools, etc., which are all using Linux. He also talked about initiatives, including green data centers, Cloud Computing, etc., which are more widely enabled as a result of Linux being so prevalent and accessible within the industry. In many ways, Linux is enabling these emerging technologies because it provides a common, easily accessible basis for innovation and eliminates the need to build every new initiative or product from scratch.

Jim also reinforced that the "competitor" from which we in the Linux community need to learn today is no longer Microsoft (well, they might still have a trick or two we can learn) but Apple. Jim took a poll to see who has some sort of Apple device today, and at first glance it appeared that the entire room -- at a Linux conference! -- had an Apple product. A little digging showed that Apple products weren't quite ubiquitous, but by then the point was made. Jim also pointed out how Microsoft and Apple are finding ways to sell products that have vendor lock-in. The products are not open, not easily available, controlled by a single entity, and basically are a jail for consumers. Of course, he then pointed out that the Apple jail looked a lot like a four-star hotel room with video on demand, a great view, clean and neat - a jail that most of us find rather luxurious. The next slide, though, showed the Microsoft jail - emphasizing that the roughness of conditions was exacerbated by the fact that you were often trapped in that jail with no amenities, some very large, rough-looking malware types, and a raft of viruses to make your stay as unpleasant as possible. And the wrap-up was that the equivalent Linux "jail" is more like a visit to Burning Man - free and open; yeah, there may not be a lot of frills and the power might go out, but you are free to come and go as you will, you can improve your surroundings as you choose, and ultimately you can really enjoy yourself. Perhaps Burning Man is not the best analogy here, but it makes the point quite nicely.

Jim's panelists included James Bottomley of kernel community fame, Christie from the Motorola alliance providing Linux enabled cell phones, and David who helped create the (no longer available in stores) Walmart PC.


Picking the right target for the Linux Desktop

I think it has long been recognized that having a Linux desktop look "as good as" a Windows desktop has been a pretty low bar from an easy-to-use point of view. Mark Shuttleworth brought it up again at the Linux Symposium during his keynote speech (something he has clearly been thinking about for a while), and I just saw Bob Sutor bring it up again at the Next Generation Data Center keynote speech that he gave.

I think the Linux desktop has gotten a lot closer to the simplicity that the Mac or Windows offers, but in either case it still has a long way to go. There are still so many areas that I've been fighting with on an Ubuntu laptop (a T61p at the moment, since the display on my tried-and-true T41p decided to blink out for good last week). Because I have worked with Linux for a long time, I'm relatively confident that with time and enough good Google searches I will resolve the problems. But boy do I rue the time spent discovering that NetworkManager is trying to take over my wireless and doing everything it can to make sure I can never connect to a wireless access point. Or that I can use the nv driver, but without compiz, or the nvidia driver, but without suspend/hibernate. Oh, if I dig through various forums, it looks like there are possible fixes/configuration changes that might move me forward, but if those answers are out there, why doesn't an apt-get install just fix all those problems?
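For what it's worth, one workaround floating around the forums at the time - a sketch, not a guaranteed fix, and the interface name, SSID, and passphrase below are made up - was to configure the wireless interface directly in /etc/network/interfaces, which NetworkManager's ifupdown plugin (with managed=false, the Ubuntu default then) would leave alone:

```
# /etc/network/interfaces (Debian/Ubuntu)
# NetworkManager ignores interfaces configured here (managed=false),
# so it stops fighting over wlan0.
auto wlan0
iface wlan0 inet dhcp
    wpa-ssid MyHomeAP          # hypothetical SSID
    wpa-psk  mysecretpassword  # hypothetical WPA passphrase
```

Followed by a `sudo /etc/init.d/networking restart`. Of course, having to hand-edit a config file to get working wireless is precisely the kind of thing that proves the larger point of this post.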

Then there are the annoyances - I put a USB key in this box and for some reason the desktop sometimes hangs hard for a minute or two before Nautilus finally opens. I put the same USB key in the Mac, and Finder just opens. My iMac 24 isn't very mobile, but when I powered it up at home, I had to choose/enter an SSID, fill in a password, and select an authentication mechanism - and through three wireless routers, several reconfigs, and such, it just *does the right thing*. The use cases are a little bit different, but Linux still doesn't seem to do the right thing.

And these are just the basics - what about all of the more complex, cool tasks? Dragging video around, editing it, and copying/pasting subsets of audio or video. Managing my music library or managing TV recordings always seems to be "possible" on Linux but never easy (my MythTV stopped working the last time I lost power and/or had an automated upgrade).

But I think the message remains: pick the right target for comparison, and that right target is clearly not Vista, XP, or any past Windows product, but instead the much more user-friendly environment of the Mac...

Tuesday, August 05, 2008

Cloud Computing paper presented at the Linux Symposium is now available

Well, I've dropped way behind being "as it happens" with news and info, but my Cloud Computing paper is now available from the site - my paper starts at about page 197 of Volume 1. I will also make the slides available to anyone who asks (stripped down a little, but still a bit chunky because I went a little overboard on pictures of clouds, oops).

I'm currently at the Next Generation Data Center conference - interestingly enough, there was a lot of alignment between the Cisco keynote speech and what we are working on in IBM. I'm also sitting in on the Virtualization 2.0 track, where there is a lot of discussion about the pain points and progress in moving from virtualization 1.0 to virtualization 2.0. (BTW, I do not see a crisp definition of the differences from the presenters, mostly just a view that virtualization is evolving - and rather slowly at that.) The most appropriate quote was that the adoption cycles are much slower than the talk-about cycles. But there is clearly progress in the adoption of virtualization, and some of the new problems I've referred to before are also becoming more visible across the industry, such as consolidation exposing more problems in high availability and such.

All in all, the rate and pace of change related to virtualization, grid, and cloud computing (oh, matrix computing came up - I have to look that one up) is constant, although pretty slow in the eyes of the technologists.