Wednesday, August 29, 2007

Microsoft has many tricks...

It is interesting to watch Microsoft try different strategies in dealing with the open source community, open standards, and interoperability. This ballot-box management trick with OOXML is clever (oh, and if it works so well, repeat it, over and over again). Basically, manage the voting populace to ensure that only your voters turn out for the vote.

So, does that imply they know their proposal is inferior (oh, with 6,546 pages, I guess they didn't leave much undefined, compared to ODF's mere 867 pages, right)? Does it leave gaps that allow them to maintain proprietary interfaces and extensions, and therefore lock out open source or third-party players? Does it add obscurity instead of clarity, so that only mega-monolithic companies can play in the open document specification space? I like Google's stance on this, which is well thought out. And in watching the discourse as the Linux Foundation pulled its response together (along with comments from the open source desktop architects), the debate was not framed as Microsoft vs. Linux (which at some level this *is* about, although by no means the primary focus); instead the architects concentrated on the technical merits of the proposals.

I'm not impressed by this style of tactic. Microsoft has enough technical savvy that it should play this game openly and on the merits of any standard. While I'm sure the days of corporate jury-rigging of standards for vendor advantage are nowhere near over, any standard touching true openness and affecting so many end users deserves much better treatment, and input that favors us end users.

Friday, August 24, 2007

A laptop in every (airline) seat

Now this is cool! Singapore Airlines is putting a Linux laptop in every seat. I had the opportunity to use my laptop with wireless on Lufthansa flights via Boeing's Connexion service referenced here, and that *really* made my time in the air useful. Now, if they combined this with a Xen base for that Red Hat version, you could install your own OS and images on a USB key or a USB disk, boot your own environment, and be up and on your way.

Now where does Singapore Airlines fly? I have to figure out my next boondoggle! ;)

The Sun begins to Set, but is Java on the rise?

As my colleague Paul said, you just can't make this stuff up. Jonathan Schwartz blogged that they are changing Sun Microsystems' stock symbol from SUNW to JAVA. Isn't that about as smart as changing IBM's ticker symbol to, oh, LINUX, or maybe AIX? Most successful big companies build a well-rounded portfolio; making something so close to being a commodity the rallying call for an entire company suggests that perhaps there isn't enough sunlight getting into the corporate offices at Sun any more. For a company that once set the standard for workstations, networking, network file systems, etc., I think this is a pretty silly move that sounds more like desperation. It seems more like a child yelling "look at me! look what I did!" than a company with a clear vision of the future and its part in it.

Friday, August 10, 2007

Novell owns the UNIX and UnixWare copyrights!

Groklaw has this cool article up which lays out, basically, the conclusion of one of those big SCO cases. Now, the question I've had for a very long time is: if Novell owns the copyrights, could they consider releasing the source to all of the UNIX and UnixWare code they own under, oh, BSD, GPLv2, GPLv3, something interesting and shareable? I, of course, would vote for GPLv2 in case there's any compatibility code that Linux could borrow, but BSD might be nearly as good. That might be a bit like pouring an ocean of salt into a very large wound (or would that be a lake of salt, hmm?), but it would be a generous gesture on Novell's part to balance out the Microsoft Kum Ba Yah refrain of earlier this year.

Ah, I can wish, I guess. It probably doesn't matter all that much in the end but it would be an interesting symbolic gesture.

Next Generation Data Center

I talked to a few customers at the Next Generation Data Center expo who shared one very interesting observation. While there is a lot of talk today about the design of the next generation data center, how it optimizes space, how it handles cooling, how it has a lower power consumption profile (or adapts power consumption to load), how it minimizes heat output, how water-cooled heat exchangers reduce cooling costs, etc., there are no good templates, recommendations, or guidelines. IBM Global Business Services does provide a contract rate for designing data centers with most of these in mind, based on your lab size, your power constraints, your cooling constraints, etc., but everything is still a one-off design and there are still a lot of tools missing along the way.

It *is* clear, however, that space, power, and cooling are still the primary concerns, and the usual constraints of price and performance are not relaxed very much, if at all. There are small things coming that will help, such as powertop, which helps optimize applications and workloads a bit for power consumption. However, that is only a drop in the bucket compared to optimizing an entire data center for power consumption or cooling. There are also mechanisms in use for workload consolidation, such as IBM's recent announcement with System z, which reduce costs in terms of power, cooling, and systems administration while increasing application availability and throughput monitoring.

However, it is starting to look like the holy grail for large data centers involves a combination of software that measures power consumption and heat generation, software that moves workloads around and reduces power draw while still meeting service level agreements, and the ability to power machines, or portions of machines, on and off on demand. Most of that software does not exist today, and what does exist isn't yet targeted at managing the kind of full data center that many corporations run today. I believe this will be a key area to watch over the next year or three as more and more companies get on the Green bandwagon.
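To make that concrete, here is a minimal sketch of what metric-driven placement might look like: given per-host capacity and idle power draw, pack workloads onto as few hosts as possible and flag the rest as candidates to power off. All of the class names, capacity units, and wattage figures below are invented for illustration; they don't come from any real data center management product.

```python
# Hypothetical sketch of metric-driven workload placement: given per-host
# capacity and idle power draw, pack workloads onto as few hosts as possible
# (greedy first-fit) and report whatever is left idle as a power-off candidate.
# All names and numbers are illustrative, not any real product's API.
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    cpu_capacity: float   # normalized CPU units available
    idle_watts: float     # power drawn even with no work scheduled
    used: float = 0.0

@dataclass
class Workload:
    name: str
    cpu_demand: float     # CPU units needed to meet its service level

def place(workloads, hosts):
    """Greedy first-fit: fill already-busy hosts before touching empty ones."""
    placement = {}
    for wl in sorted(workloads, key=lambda w: w.cpu_demand, reverse=True):
        for host in hosts:
            if host.used + wl.cpu_demand <= host.cpu_capacity:
                host.used += wl.cpu_demand
                placement[wl.name] = host.name
                break
        else:
            raise RuntimeError(f"no capacity left for {wl.name}")
    idle = [h.name for h in hosts if h.used == 0]
    return placement, idle   # idle hosts are candidates to power off

hosts = [Host("rack1-a", 8.0, 150.0), Host("rack1-b", 8.0, 150.0), Host("rack1-c", 8.0, 150.0)]
workloads = [Workload("web", 3.0), Workload("db", 4.0), Workload("batch", 2.0)]
placement, power_off = place(workloads, hosts)
print(placement)             # e.g. {'db': 'rack1-a', 'web': 'rack1-a', 'batch': 'rack1-b'}
print("power off:", power_off)
```

A real implementation would, of course, feed this from live power and thermal telemetry and check service level agreements before migrating or powering anything off.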

Monday, August 06, 2007

rPath and Software Appliances

My chosen after-lunch dessert at NGDC is another virtualization topic, this time presented by rPath. rPath provides a means for building software appliances and distributing them. Software appliances attempt to address one of the growing problems in Linux: application certification. Specifically, a key customer problem today is that applications are often certified on only a subset of the available Linux distributions. What is worse, different ISVs certify their applications on different distros, but customers expect to buy any application off the shelf and run it on their chosen distribution. Alas, that is becoming less and less the case. I've been at a number of customer sites where the number one complaint was that their key applications were not certified against their chosen RHEL or SLES distribution. Or worse, some were running Debian, Ubuntu, or some other distro. At the core, the distros are very similar, and most applications will *probably* run. But if they don't, the customer gets to keep all the broken pieces.

So, how do appliances help with this problem? Simply put, they encapsulate all of the key attributes, potentially including the operating system itself, into a single image which can easily be distributed, loaded, and delivered to a system running a hypervisor such as VMware ESX, Xen, etc.
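To illustrate the idea (and only the idea - this is not rPath's actual format or tooling), here is a toy sketch of bundling an OS reference, a package list, and a disk image into a single distributable archive:

```python
# Illustrative only: a toy "appliance manifest" bundling OS, packages, and a
# root filesystem reference into one archive you could hand to a hypervisor
# host. Not rPath's real format, just a sketch of shipping the full stack.
import json, tarfile, io

manifest = {
    "name": "example-lamp-appliance",
    "base_os": "minimal-linux-2.6",           # the OS travels with the appliance
    "packages": ["apache2", "mysql-server", "php5"],
    "disk_image": "rootfs.img",                # prebuilt root filesystem
    "hypervisor_targets": ["xen", "vmware-esx"],
}

def build_appliance(manifest, out_path="appliance.tar"):
    """Write the manifest plus (placeholder) disk image into a single archive."""
    with tarfile.open(out_path, "w") as tar:
        data = json.dumps(manifest, indent=2).encode()
        info = tarfile.TarInfo("manifest.json")
        info.size = len(data)
        tar.addfile(info, io.BytesIO(data))
        # In a real build the root filesystem image would be added here too:
        # tar.add(manifest["disk_image"])
    return out_path

print(build_appliance(manifest))
```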

Examples of appliances can be found on the rPath site, like this LAMP stack, for instance.

Erik points out a few problems with virtual machines, some of which are noted in the previous blog entry. These include the fact that proliferating virtual machines are just as painful to manage as physical machines, and managing things like security updates takes the same amount of effort as it does for physical machines. Also, hypervisors today have some dependencies on their guest operating systems if they are going to get the benefits of leading-edge paravirtualization techniques and the like. While those compatibility issues should fade away over time, they are at least a short-term consideration. As a result, a virtual image's kernel may be built against the interfaces of a given hypervisor (Xen is probably the worst in this space right now, although kernel APIs for domU guests went into mainline just recently and should start to show up in distros over the next year or so).

On the flip side, Erik pointed out how well software appliances fit with the Software as a Service (SaaS) model that is becoming so popular. In short, they fit very nicely - a software appliance is basically a software service. There are a lot of other advantages, such as the fact that a software appliance is easier to test (it is always the same stack, no matter where you install it), easier to support (the stack is well known and all customers use the same appliance), and easier for teams to configure (again, all components are the same). Erik believes that software appliances are actually better than SaaS for a few simple reasons: there's no need to worry about multiple applications residing on the same OS, since apps are isolated by a virtualization boundary; there is no internet latency, since multiple apps/software appliances can reside on the same physical system; there is no remote/unsecured data; and there is no significant data center infrastructure to build, since software appliances are virtual.

Erik provided a pretty common-sense list of best practices for creating software appliances that I won't reiterate here, but the guidelines are definitely useful - small, simple, easy to administer (no CLI, minimal to no configuration required), etc.

Something Erik pointed out that I have not really looked at is Amazon's Elastic Compute Cloud. It has the ability to auto-create and host software appliances, at a rate of a mere ten US cents an hour. He demonstrated the creation of a MediaWiki site in less than ten minutes - click, create, configure, apply system updates, use - all based on a preconfigured software appliance. Definitely a pretty cool option.
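For a rough sense of what that rate means, here is the back-of-the-envelope arithmetic at the quoted ten cents an hour (instance hours only; any storage or bandwidth charges are ignored here):

```python
# Back-of-the-envelope cost of hosting one appliance at the quoted $0.10/hour.
hourly_rate = 0.10                      # USD, rate quoted in the talk
hours_per_month = 24 * 30
print(f"per month : ${hourly_rate * hours_per_month:.2f}")   # $72.00
print(f"per year  : ${hourly_rate * 24 * 365:.2f}")          # $876.00
```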

In short, I think software appliances will be a major boon not just for the SMB market but even for large corporations who need to deploy anything from sandboxes to entire racks of new machines running their custom workloads. Since most of these solutions require a virtualization layer/hypervisor somewhere in the stack, I believe this will provide a significant push for virtualization in leading-edge data centers as well.

Virtualization tutorial at Next Generation Data Center conference

I'm in San Francisco's Moscone Center on a chilly and wet day (what, did I really expect warmer weather than Oregon's current cold snap?). I'm sitting in Dan Olds' virtualization tutorial at the Next Generation Data Center expo. Yes, this has been an area of interest of mine for quite a while.

Dan started with some rather boring (well, to me, anyway - there is a ton of info like this floating around the net already ;-) comparisons of workload/machine utilization and the benefits of consolidation. What was interesting was a series of surveys his company ran which showed that traditional Unix customers have been more proactive about embracing virtualization than the current x86/x86-64 class of customers. And while comparing Unix and x86-64 was initially a bit confusing, it is clear that he is looking at the traditional strengths of Unix systems on non-x86 platforms, with their built-in OS and hardware relationships supporting virtualization, as compared to the x86-only virtualization solutions enabling multiple OSes to run on x86 (e.g. Windows, Linux, Solaris x86). But the trend for virtualization is on the uptick in the x86 space, with a sizeable number of customers still not convinced of its overall value.

Dan talked a bit about the reasons why some customers don't see the value of virtualization, primarily because single rack-mount servers are often so cheap and so capable that purchasing and deploying a small server is cheap and easy. It isn't until a site starts running into space, power, or cooling constraints that the incremental deployment of small servers becomes problematic. And most people don't monitor the utilization of those small machines because they are not viewed as precious resources. Ergo, lots of wasted CPU bandwidth, power consumption, heat generation, etc. Also, the number of sysadmins quietly increases as the number of servers grows, especially if the servers are actively managed for the latest security patches, application updates, etc.
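That lack of monitoring is easy to fix, at least crudely. As a sketch of the kind of check that exposes idle rack-mount boxes, the snippet below samples Linux's /proc/stat twice and reports overall CPU utilization; the 20% threshold is an arbitrary illustration of what might count as a consolidation candidate.

```python
# Rough sketch: sample Linux /proc/stat twice and report overall CPU
# utilization. The 20% threshold is an arbitrary illustration of what a
# "consolidation candidate" might look like.
import time

def cpu_times():
    with open("/proc/stat") as f:
        fields = f.readline().split()[1:]      # aggregate "cpu" line
    values = list(map(int, fields))
    idle = values[3] + values[4]               # idle + iowait jiffies
    return idle, sum(values)

def utilization(interval=1.0):
    idle1, total1 = cpu_times()
    time.sleep(interval)
    idle2, total2 = cpu_times()
    busy = (total2 - total1) - (idle2 - idle1)
    return 100.0 * busy / (total2 - total1)

if __name__ == "__main__":
    util = utilization()
    print(f"CPU utilization: {util:.1f}%")
    if util < 20.0:
        print("mostly idle -- a consolidation candidate")
```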

Dan talked briefly about consolidation from rack-mount to blade servers and didn't see that as a major savings in anything other than space, and *maybe* over time power and cooling. Most people using blades aren't really doing any virtualization today and are only improving the footprint part of the consolidation story, when there is much more to gain by reducing the number of operating system instances and hardware platforms. Later, Dan talked about making sure to measure *all* of the cost-saving opportunities of virtualization, which I believe is a critical component of any successful deployment, as it helps you understand exactly why virtualization is so important.
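A toy model helps show why measuring *all* of the savings matters - power, cooling, space, and administration each move when you consolidate, not just the server count. Every dollar figure and sizing number below is invented purely for illustration:

```python
# Toy model of "measure *all* of the savings": compare yearly cost of many
# small servers against a consolidated, virtualized footprint. Every figure
# here is invented purely for illustration.
def yearly_cost(servers, watts_each, admin_per_server, rack_units_each,
                power_cost_per_kwh=0.10, cooling_factor=0.5, cost_per_rack_unit=200):
    power_kwh = servers * watts_each * 24 * 365 / 1000.0
    power = power_kwh * power_cost_per_kwh
    cooling = power * cooling_factor          # assume cooling roughly tracks power draw
    admin = servers * admin_per_server
    space = servers * rack_units_each * cost_per_rack_unit
    return power + cooling + admin + space

before = yearly_cost(servers=40, watts_each=300, admin_per_server=1000, rack_units_each=1)
after = yearly_cost(servers=5, watts_each=600, admin_per_server=1500, rack_units_each=2)
print(f"before consolidation: ${before:,.0f}/yr")
print(f"after consolidation : ${after:,.0f}/yr")
print(f"estimated savings   : ${before - after:,.0f}/yr")
```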

Dan next spent some time comparing VMware ESX and container solutions - platform virtualization versus containers. He described how both are valid and often necessary, depending on the kind of comparison and consolidation you are doing. VMware (or Xen) provides full OS isolation; containers do not. Containers have lower overhead and therefore better performance. VMware allows multiple OSes at multiple versions; containers are limited to a single OS at a single version. VMware instances are each managed independently, while with containers there is only a single OS to maintain, which may reduce systems management overhead. Containers are also a lot cheaper than VMware per socket.

A question from the audience: why do we need virtualization when Unix/Linux today scales well and supports multiple applications on a single system? The answers vary across the board, ranging from application isolation, security concerns (full security isolation between virtual OSes), simplicity of application management, and the ability to measure the impact and utilization of a specific workload, to, over time, the ability to migrate applications from one system to another or from one virtual environment to another.

Dan spent a fair bit of time talking about how to build a plan for moving toward a virtualized environment. I'll skip the details, but most of his approach was strategic rather than tactical; he concentrated on justifying the plan and engaging management and executives, and skipped over a lot of the process and efficiency gains that can be brought to bear along the way.

Net summary: virtualization is an ongoing trend, becoming more prevalent in the x86 environment after being primarily reserved for high-end mainframes and higher-end Unix systems over the past few years. There are many benefits to be gained from virtualization, not all discussed here, including consolidation, energy savings, simplification of management, and general simplicity of deployment. This talk left out the benefits of rapid prototyping and rapid deployment of solutions, for example. But for beginners it was fairly useful, I believe.