Thursday, November 03, 2011

Cloud Blog on Wired site: Time To Value

Well, I guess it is time to get back to blogging. I just published a collaborative blog post with Terri Virnig, our Cloud Business Executive in IBM's Systems and Technology Group. In it we try to highlight one of the key reasons that people have been looking to the Cloud: improving their "Time To Value" - in other words, from the time that a team has a glimmer of an idea for a new project, product, or service that they would like to create, how quickly can they get that service to market?

As I pointed out in the Wired blog entry, we still see many people in industry finding that it can take 6-9 months to get a project from concept to the start of development. I was presenting to a group recently where one participant pointed out that it often took them a year to get a new project started, in part because of hardware acquisition cycles, capital funding release, delivery lead times, software selection, integration of software components, infrastructure integration (assigning IP addresses is often a time-consuming step!), access to storage, backup strategy, and security assessment by the IT/security organization - and the list often goes on and on.

Anyway, the focus of the article is our recent announcement of the SmartCloud product family, including a particular project from my team known as SmartCloud Entry - a basic self-service environment for managing pools of servers, network infrastructure, and storage, which integrates an image (or virtual server) catalog and hardware management in a relatively small-footprint server.

I'll try to generate a few additional entries over the next few weeks which highlight some of the capabilities of SmartCloud Entry and its benefits, as well as some idea as to how it fits into the overall SmartCloud family. And, I'll try to share some of the key thinking that we are involved in with IBM Research, our clients, and our various development teams around Cloud uses, Cloud issues, and new Cloud capabilities.


Thursday, May 14, 2009

Public clouds fail more visibly!

With great fame and visibility also comes great notoriety, at least in the case of failure.  Twitter was alive this morning with the tag #gmailfail, with comments like "We are receiving reports of major outages on the West Coast and East coast. Canada + UK seem unaffected so far." and "Receiving reports of total Google Services Fail. Maps, News, Apps, Reader, Gmail, all affected. #gmailfail".  Some speculation placed the blame on Google Analytics ("Google service issues across all apps and other web pages appear to be caused by problems with google analytics #gmailfail") while others speculated that the problem could be with portions of the internet backbone.  Some felt it might be the end of the world ("feels very weird to have google and gmail down. Is this a sign of the end of the world? #gmailfail") - are we *really* that dependent on our email and search now?

There is a summary of activity going on here as well with reports of outages via AT&T.  Google notes that the problem affects a small subset of users here.  Rumor is, though, you can't see that site if you can't access google.  ;)

It will be interesting to see the post-mortem, but one thing is clear:  the level of visibility increases rather dramatically in the case of a failure.  Another observation is that determining the root cause of such a problem is complex - isolating the set of affected users, analyzing the connectivity between the users and their cloud-based resource, and, internally at the cloud provider, assessing the failure all require very good diagnostic skills and access to the data center.

This isn't the first (or last!) cloud outage.  And, perhaps the worst problem is that people were deprived of their email for a small number of hours.  But perhaps it points out that the cloud isn't ready yet for all workloads.

And a late update:  cnet news reports about a connection between YouTube, Google News, and the outage.  (Thanks, Nish!)

Another interesting update here was passed on via Facebook.  The graph there is very interesting as well:  the dramatic traffic drop shows just how much of the traffic on the existing network infrastructure is driven simply by email and search activities.

Friday, April 10, 2009

Near Final session which covers Lightning talk readouts from some of the working groups

Okay, I was slow and missed some of the updates, so this is more sketchy than I usually am, but here are some of the readouts.

gnome mobile

Maintainers of various projects were able to meet with users of those projects across the vendors.  There is a lot of talk of fragmentation across the embedded space; mimo, moblin, and mobile can help demonstrate that the fragmentation that exists isn't as dramatic as it may appear.

open printing

Still a shortage of manpower; it would help to have the LF be more of a mentor at the Google Summer of Code.  We get 10 students for three months from the Google Summer of Code.  One will be working on getting the JDK into the LSB.  One or more will be working on wireless.

There are still some challenges in how openprinting and the distributions integrate the downloading of new drivers for new printers.  The group continued to discuss the common open printing dialog.  There wasn't enough time, so there is a proposal to again start several days before the summit next year to provide more time for discussion.


moblin

Open Linux targeted towards mobile technologies.  Discussed what components are part of the stack, including connman and mojito as two topics.  A couple of sessions on clutter (sp?) covered using actors and timelines to create an animation in your application.  A track on the moblin SDK.  The last track was about porting applications to moblin, and there was a discussion of the changes between moblin v1 and v2.  Talked about creating a moblin compliance profile for LSB with the LSB team.

HPC track

9 of the 10 largest computers in the world run Linux.  Roughly half of the people who run those sites were present to meet with the Linux Foundation community. 

Tracing solution

Mostly non-competing, er, collaborating projects started meeting yesterday.  Christoph Hellwig talked about how they worked to find agreement on some very low-level issues.  Members were assigned action items, and many worked through the night to address them for today.

In the summary, Jim pitched the upcoming Linux events that the Linux Foundation is sponsoring.

There are a few sessions this afternoon but the official event ends with this summary.  Thanks all for reading (or attending!).  And feel free to post pingbacks if you documented any of the sessions that I did not attend.

Christine Hansen on Earning the Next Generation of Linux End-Users

Earning Future End-Users Now
Dr. Christine L.E.V. Hansen, CEO of Le Ciel
Arguably the largest divide in Linux remains the divide between developers and end-users. Developer-oriented forms of communicating, like mailing-lists and wikis, are often the sole sources of information about projects. The future of Linux relies on connecting the developers, and the corporations, with the end-users who may not know what Linux is. What this end-user will know, or will have heard, is: 'somewhere there is free software.' Their next question/search will be: 'where can I get it?' As a point of departure, I propose a web site in plain language which serves as an index of Linux projects. All projects: commercial, nonprofit, ones in process, big, small. Obviously, voluntary participation. All together in one, searchable place. Audience: users, developers, corporations. The technical level is not as important as the pragmatic and visionary level, which ought to be high. High collaboration factor.

What the World sees:
1) Too slow
2) No 'instant ignition'
3) Product/project redundancies
4) ---oops missed it ---

Christine provided a distinction between "pure" and "applied" developers - pure being much of the first wave: ideological, focused on Linux for themselves; applied being more focused on projects, more aware of their end users, more often speaking both tech and a human language, and more often outside of North America.

Next Step:  Step Up
Instant ignition, in their own language(s), solve the ridiculous problem of redundant effort, introduce a multilingual Linux web presence, empower global Linii.  (is that a plural for Linuxes?)

Christine pointed out that Transparent Software often has an organic life that may help it to live much longer than many of the short term trends.

Stormy Peters on Marketing Free and Open Source

Marketing Free and Open Source Software
Stormy Peters, Executive Director, GNOME Foundation
Open source software solutions are now very good technically - the remaining problem is how to get the word out. Marketing in an open source world, with a volunteer marketing team, small budgets and an open source development model, is often very different than the world of traditional marketing. In addition, projects are also advocating for "free and open source software" at the same time they are marketing their solution. Come share your best open source software marketing practices and discuss them with others.

Stormy has been busy talking a lot and is low on voice, so she generated a slide per sentence.

1) most people in open source are developers not marketers

2) marketing open source is different from proprietary products since the product is "not for sale"

Typically "not for sale" means that you are not collecting money and sometimes you are actually *asking* for money.  And, it tends to mean that you don't really know who your users are.

And, we still tend to need to explain what open source software (OSS) actually is.  And then, OSS doesn't really matter to many users.  So, for open source marketing, who is your audience?

Recruiting developers, users, distributors, partners, ... ?

That's why it is hard.

What types of marketing do we do today?  Looking at gnome, we have web pages (some good, some bad) and most of the best are wikis.  Events, such as GUADEC, ability to set up a table at an event.  Facebook, twitter, identica, linkedin, press releases, product materials such as sponsorship materials or application summaries.

So, what else do *you* have in your community?

Are there some best practices and how can we work together?

Question from the audience - is this true for all software communities?  Smaller communities really focus on growing community, but also may have dual licenses which allow proprietary distribution as well.  Response from the audience:  you clearly need to identify your audience and what you want to say to them.  Most open source projects don't have budget to develop materials, but the Linux Foundation video contest enabled guerilla marketing and viral marketing at very low cost.

What is our message?  Often we are marketing our values as opposed to selling our "products".

What are our goals?  Values don't always relate to our end product.  But open source projects often try to "sell" their values.  Some users really *do* buy based on the values.  But what percentage really purchase based on just the values?

It is about the applications for many audiences.  We had some discussion of brand development, association of concepts or values with brands and what actual mental associations are drawn by non-open source users when they hear "Open Source Software" or "Linux" or "Gnome".  What associations do we think people should have?  "Free Software" obviously implies Free - does it imply usability, applications, ability to do your job at home, etc.  Is there a way to collect input from new audiences to find out what their current associations are with those "brands"?  And, is there a way to focus marketing around some of those associations that we would like to see with the higher level brands?  And is there any agreement on what the higher level brands really are?

The United Nations used "Transparent Technology" as opposed to Open Source, and "Free Software" is rarely used in that context - which is important to understand when describing your audience.

Mozilla released a guide recently on how to do community marketing.

Lots of good discussion during the session, definitely some food for thought.

Linux Foundation Collab Summit: the Meld project for Embedded Developers

Benefits to a Social Approach in Development
Joerg Bertholdt, VP Marketing, MontaVista Software, Inc. & Jeffrey Osier-Mixon, Developer Advocate and Meld Community Manager, MontaVista Software, Inc.
Community is one of the defining properties of open-source software in general and Linux specifically.  To foster further innovation at the rate that the market demands, the power of community - the heart of open source - must be embraced. However, the nature of community is open sharing, which seems counter-intuitive to marketers and businesspeople working in competitive markets.  There are few markets as competitive or dynamic as the embedded world. How can companies cooperate at some points in the development cycle while they maintain their differentiation at other points? How can open communities---including mailing lists, blogs, forums, corporate communities, and conferences---promote and enable this cooperation to accelerate development? This presentation draws a detailed picture of the wealth of community involvement surrounding embedded Linux and presents a look at the evolution and future of cooperative development.  It also introduces Meld, a new community that enables embedded Linux developers, hardware manufacturers, and software providers to connect, share, and design commercial-ready embedded devices.

Embedded is a large market, with something like 5 billion embedded devices sold (per year?), dwarfing the server and desktop devices in the market. But the number of embedded developers contributing to the various open source communities is a tiny fraction of the total number of contributors.

Each device in the embedded space is "similar" but never identical to all of the others.  And, for each embedded device provider, being involved in all of the communities that they draw technology from is cost- and time-prohibitive.

This talk's core proposal is that the creation of a new community might be the answer to allowing these embedded developers to participate.  As a result, MontaVista helped create an open community for embedded developers called "meld" - with 1,001 members (in decimal, not binary!  ;-).

Tim Bird referred to Meld as potentially the facebook of embedded developers.   Jeff walked through the interactive site to demonstrate how people can connect, fill out profiles, and provide some recognition for seniority within their communities and among their interests.  In general, the goal is to tap and share knowledge among embedded developers as a means of building a community around a shared general interest, despite the diversity of interests of the individual members.

Meld is intended to grow organically based on member interest and will align with hardware communities such as OMAP and PowerPC, the embedded Linux communities, and other wikis and communities.

Their commitment is:  helping embedded Linux engineers to Connect, Share and Design collaboratively.

It is too early to tell if the community has been successful, but with 1,001 members in a month it is off to a good start.  There are definitely competitors cooperating in the forum, e.g. TimeSys and Wind River, along with the host MontaVista.

The site is a highly modified but originally canned solution, possibly based on or related to a .NET style framework.

Thursday, April 09, 2009

Cloud Computing viewpoint from Red Monk Analyst Michael Coté

The next session is an analyst viewpoint of Cloud Computing

Cloud Computing
Michael Coté, Industry Analyst, RedMonk
Linux is at the center of what's come to be called Cloud Computing. As with open source, SOA, and Web 2.0, the tech industry has quickly fallen for cloud computing. The reasons are compelling: promises of scalability, low cost, flexibility, and lightweight process. The core question about cloud computing is how it affects the industry and cultural position of operating systems and other "raw infrastructure": does Linux "matter" in cloud computing?  How might the Linux community evolve - or be forced to evolve - in a wave of cloud computing enthusiasm? Is cloud computing an opportunity or threat for the Linux community? Or is it just another shiny object of distraction? This talk will discuss these questions and more.

Cloud computing now is like early SOA:  It's Silly-putty!

We'll take a simple definition and go with it.

How does "Linux" fit in?  SWOT.  Strengths, Weakness, Opportunities and Threats...

Primarily focused on Cloud in the key three *aaS's (pronounce that carefully!):  SaaS, PaaS, and IaaS.

Why Cloud Computing:

Users:  Cost, Flexibility, Elasticity/Scalability
Vendors:  new business models, new features, lower cost of ongoing maintenance (?)

Cloud Computing has sort of supplanted discussions around autonomic or some of the *aaS terminology.

Hype-fed, semantic confusion - public vs. private

Retraining - multi-thread development, dynamic operations

Ending up paying more, e.g. $20k on-premises vs. $150k off-premises.

Lock-in - the current public cloud solutions debatably have lock-in today.  There are some activities to try to use standard/de-facto interfaces like Amazon's EC2 and Eucalyptus, RightScale abstracting or layering on top of an infrastructure, etc.

Legacy concrete - moving workloads to the cloud is not as simple as a finger snap.  You often need to replicate a solution in the cloud, maintain internal and external solutions, and pay the additional cost of having two solutions running concurrently while evaluating and moving to public clouds.

Virtualization turns out to be more important - "fog computing".  Maybe Virtualization is more important and will outlive the Cloud Computing hype today.

When looking at a public cloud versus an internal deployment, you still have to look at the total cost of the solution.  This includes capital versus operational expenses, management expenses, ongoing expenses, etc.
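As a toy illustration of that capital-vs-operational comparison, here is a sketch in Python. Every dollar figure is a hypothetical placeholder, not real pricing; the point is only the shape of the comparison.

```python
# Hypothetical three-year total-cost-of-ownership comparison:
# on-premises (capital purchase + ongoing operations) vs. a public
# cloud (pure recurring operational expense). All numbers invented.

def on_prem_tco(capex, annual_opex, years):
    """Capital expense up front plus management/ongoing expenses."""
    return capex + annual_opex * years

def cloud_tco(monthly_fee, years):
    """Recurring service fee only; no capital purchase."""
    return monthly_fee * 12 * years

years = 3
on_prem = on_prem_tco(capex=20_000, annual_opex=8_000, years=years)  # 44,000
cloud = cloud_tco(monthly_fee=1_500, years=years)                    # 54,000

print(f"on-prem: ${on_prem:,}  cloud: ${cloud:,}")
```

With these made-up inputs the internal deployment comes out cheaper over three years, but flipping the assumptions (shorter project life, spiky load) flips the answer, which is exactly why the total cost has to be examined case by case.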

IaaS:  Amazon EC2, S3, etc.

PaaS - Microsoft Azure

Sun Cloud, 3Tera, Rackspace, & reborn hosters

Automation and provisioning tooling, such as Puppet, helps enable all cloud-like solutions.

Linux Strengths
- Appealing for its (potential) cost of zero
- Reliability, known quantity "at scale", breadth
- Malleable & Transparent
- Easier for IT management and tooling
- Virtualization
- Existing, Linux compatible applications

Linux Weaknesses
- Focus on the OS level, not applications and usage
- Weak connections to the development platforms, with exceptions like LAMP
- Virtualization and Automation Fragmentation
- Susceptible to "bad citizens"

Linux Opportunities
- OS for Cloud client - Netbooks, RIAs, "mobile"
- Model driven automation
- Fragmented air-space - everyone wants a cloud, everyone wants Linux
- Easy way to boot-strap into using Linux - deploying Linux has become easier but with the cloud, creating new Linux images is downright trivial.

Linux Threats
- SaaS and PaaS means "the OS doesn't matter" - cars with hoods that don't open (past: VM & .NET CLR)
- Commercial vendors creating new closed source worlds, e.g. Apple
- Cloud Distro Madness - QA & Support matrices
- Cloud consolidation & collapse - eggs in a basket

- Worst case scenario:  Cloud Computing is nothing and Nothing Changes for Linux
- Best case scenario:  Many, many more Linux instances
-- In short, the best of all worlds, can effectively do nothing from the point of view of the Linux community and either reap the benefit for free or let the wave of hype pass by without impact.
- Thinking ahead:  transitioning "brown field" applications.  Possibly more interesting to figure out how to move existing, older applications into the Cloud environment.  How do you get most of that old, boring, Enterprise software out into the Cloud?

So is there any thinking about interoperability with other Clouds?  And how does this interact with the view of the Gartner Hype Cycle - will we cross the chasm here because of inhibitors like cross-cloud interoperability?

Cloud computing is mostly going to be just another "new" option that will co-exist along with all of the other options.

Question on internationalization:  some countries may be very specific to specific language subsets within a particular country.  Then, how do you benefit from others that are localized to a specific language?  How do changes fold back into the open source communities?

Privacy and data security is going to be a huge part of this.  Does this problem still exist when moving the cloud "in-house"?  Again, various departments will be sharing the same infrastructure, in theory - aren't the same privacy constraints a concern?  Yes, to some extent, although employment guidelines help provide some protection.  Amazon recently put out a short white paper on how to comply with HIPAA requirements, with some description of how Amazon does not have access to your data.  But, oh, by the way, engage your own lawyer:  this is not legal advice.  The privacy space is likely to be complex for a while.

Will virtualization and cloud computing reduce the need for network and system administrators in the future?  Sort of like the question that asks if we put robots on the assembly line will we remove people or reassign them?  General answer right now:  the complexity changes and the costs over time may go down but don't expect any immediate reduction in the need for systems administrators.  But of course, learning new skills is always necessary in our fast moving, high tech environment.

KVM update by Chris Wright

At a glance - KVM is a Linux kernel module that turns Linux into a hypervisor.
- Requires hardware virtualization extensions; uses paravirtualization where it makes sense
- Supports x86 32 & 64 bit, s390, PowerPC, ia64
- Competitive performance and feature set
- Advanced memory management
- Tightly integrated into Linux

A hypervisor needs:
- A scheduler and memory management
- An I/O stack
- Device Drivers
- A management stack
- Networking
- Platform Support code

Linux has world-class support for this so why reinvent the wheel?
Reuse Linux code as much as possible.
Focus on virtualization, leave other things to respective developers.
Benefit from semi-related advances in Linux.

KVM features
- buzzword compliant - VT-x/AMD-V, EPT/NPT, VT-d/IOMMU
- CPU and memory overcommit
- Higher performance paravirtual I/O
- Hotplug (cpu, block, nic)
- SMP guests
- Live Migration
- Power Management
- PCI device assignment and SR-IOV
- Page Sharing
- KVM autotest

One of the key points is that many of these capabilities come directly from the underlying Linux kernel, which provides these features.

Libvirt Features
- Hypervisor agnostic:  Xen, KVM, QEMU, LXC, UML, OpenVZ
- Provisioning, lifecycle management
- Storage: IDE/SCSI/LVM/FC/Multipath/NPIV/NFS
- Networking:  bridging, bonding, VLANs, etc.
- Secure remote management: TLS, Kerberos
- Many common language bindings: python, perl, ruby, ocaml, c#, java
- CIM provider
- AMQP agent - High bandwidth, bus based, messaging protocol to enable the ability to manage very large numbers of machines.  Common with Wall Street Linux customers.

oVirt features
- Scalable data center virtualization management for server and desktop
- Small footprint virtualization hosting platform
- Web UI for centralized remote management
- Directory integration
- Hierarchical resource pools
- Statistics gathering
- Provisioning, SLA, load balancing
- Currently built on top of KVM (not a hard requirement)
- Currently directly built on top of Fedora, but again not a hard requirement
- oVirt is about managing the hardware resources as well as the guests, includes ability to include agents on the guests and monitor the guests that way as well.

Question:  can you use oVirt to manage guests on Amazon's EC2?  In principle, yes, but there would be a bit of work to enable that, mostly because oVirt includes the hardware provisioning access.  In some sense, oVirt becomes the "owner" of the physical machine to enable virtual machine deployment.  It would depend on libvirt running on the physical nodes within the "cloud", e.g. Amazon's EC2.

Newbie question (yes, KVM is also a Keyboard/Mouse/Monitor multiplexor, but not in our session today ;-):  how does KVM compare to OpenVZ and Xen?  OpenVZ is focused on containers running on a single Linux instance.  The guests are not as isolated and do not include a complete and unique kernel & OS instance as is done with Xen or KVM; OpenVZ is more like chroot on steroids.  KVM is, again, more like VMware's ESX:  Xen is basically a microkernel approach to a hypervisor, where KVM is a "macro kernel" approach.  Xen allows a modified virtual machine to run as a paravirtualized system for performance reasons.  KVM can run a .vmdk image, but it is probably more useful to convert it to a KVM-friendly format.  Paravirtualizing I/O reduces the enormous number of traps that are otherwise present in a fully virtualized environment where the hardware does not have full I/O virtualization or an I/O MMU.

<at this point we hit the break but Chris will go into his next 40 slides for those that want to stay ;) >

KVM Execution Model
- Three modes for thread execution instead of the traditional two
  . User mode
  . Kernel mode
  . Guest mode
- A virtual CPU is implemented using a Linux thread
- The Linux scheduler is responsible for scheduling a virtual CPU, as it is a normal thread.
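The "a virtual CPU is just a thread" point can be sketched with ordinary threads. This is an analogy, not the real KVM_RUN loop: each "vCPU" below is a plain thread, and the host OS scheduler places it with no virtualization-specific logic, which is exactly the reuse argument made above.

```python
# Analogy for the KVM execution model: each guest vCPU is an ordinary
# host thread, so the existing scheduler handles placement for free.
import threading

def vcpu_loop(vcpu_id, results):
    # Stand-in for "run guest code until the guest exits".
    results[vcpu_id] = sum(range(10_000))

results = {}
vcpus = [threading.Thread(target=vcpu_loop, args=(i, results))
         for i in range(4)]          # a 4-vCPU "guest"
for t in vcpus:
    t.start()
for t in vcpus:
    t.join()                         # all vCPUs have been scheduled and run

print(sorted(results))  # [0, 1, 2, 3]
```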

Guest code executes natively, apart from trap-and-emulate instructions.
- Performance-critical or security-critical operations are handled in the kernel, such as mode transitions or the shadow MMU.
- I/O emulation and management are handled in user space, such as the qemu-derived code base; other users are welcome.

Large page allocations are currently pinned and never swapped out.  This is a current downside to using large pages within KVM.

KVM Memory Model
- Guest physical memory is just a chunk of host virtual memory, so it can be
  - swapped, shared, backed by large pages, backed by a disk file, COW'ed, NUMA Aware
- The rest of host virtual memory is free for use by the VMM, e.g. for low-bandwidth device emulation
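The memory model above can be mimicked in a few lines: an anonymous mmap region stands in for guest RAM, so a guest-physical address is just an offset into host virtual memory. This is an illustration of the idea, not the KVM API.

```python
# "Guest physical memory is just a chunk of host virtual memory":
# an anonymous mapping plays the role of guest RAM, so the host can
# page it, share it, or back it like any other process memory.
import mmap

GUEST_RAM_SIZE = 1 << 20                    # 1 MiB of "guest RAM"
guest_ram = mmap.mmap(-1, GUEST_RAM_SIZE)   # anonymous host virtual memory

# A "guest" write at guest-physical address 0x1000 is, from the host's
# point of view, a plain offset into its own virtual address range.
gpa = 0x1000
guest_ram[gpa:gpa + 5] = b"hello"
readback = guest_ram[gpa:gpa + 5]
guest_ram.close()

print(readback)  # b'hello'
```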

Linux integration
- Preemption (and voluntary sleep) hooks:  preempt notifiers
- Swapping and other virtual memory management:  mmu notifiers
- Also uses the normal Linux development model, including small code fragments, community review, fully open

MMU Notifiers
- Linux doesn't know about the KVM MMU
- So it can't flush shadow page table entries when it swaps out a page (or migrates it, etc.), or query the pte accessed bit when it determines the recency of a page
- Solution:  add a notifier for 1) tlb flushes 2) access/dirty bit checks
- With MMU notifiers, the KVM shadow MMU follows changes to the Linux view of the process memory map.
- Without this, a guest would be able to touch all user pages and the base Linux wouldn't know that those pages could be swapped out.
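A toy version of that notifier flow, with invented class names standing in for the Linux memory manager and the KVM shadow MMU: the MM tells every registered listener before it reclaims a page, so the shadow table never points at memory the host has taken back.

```python
# Toy MMU-notifier pattern: the "Linux MM" calls back into the
# "KVM shadow MMU" whenever it swaps out a page, so stale shadow
# entries are flushed. Names are illustrative, not kernel APIs.

class HostMemory:                       # stands in for the Linux MM
    def __init__(self):
        self.notifiers = []
        self.resident = set()

    def register_notifier(self, fn):
        self.notifiers.append(fn)

    def fault_in(self, page):
        self.resident.add(page)

    def swap_out(self, page):
        self.resident.discard(page)
        for notify in self.notifiers:   # flush shadow entries first
            notify(page)

class ShadowMMU:                        # stands in for the KVM shadow MMU
    def __init__(self, mem):
        self.entries = set()
        mem.register_notifier(self.invalidate)

    def map_page(self, page):
        self.entries.add(page)

    def invalidate(self, page):
        self.entries.discard(page)

mem = HostMemory()
shadow = ShadowMMU(mem)
for page in (0x2000, 0x3000):
    mem.fault_in(page)
    shadow.map_page(page)

mem.swap_out(0x2000)                    # notifier fires, entry is flushed
print(sorted(shadow.entries))           # only 0x3000 remains
```

Without the callback (delete the `notify` loop), the shadow table would keep a mapping to the swapped-out page, which is precisely the hazard the bullet above describes.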

Paravirtualization
- Not nearly as critical for CPU/MMU now with hardware assistance; highly intrusive
- KVM has modular paravirt support: turn it on and off as needed by the hardware
- Supported areas:  1) hypercall-based, batched MMU operations 2) clock 3) I/O path (virtio) [the last is the most critical currently]
Native shadow page table operation is now generally more efficient than paravirtualization, so paravirt MMU operations are rarely used with KVM today.

Virtio is cool
- Most devices emulated in userspace with fairly low performance
- paravirtualized IO is the traditional way to accelerate I/O
- Virtio is a framework and set of drivers:
  - A hypervisor-independent, domain-independent, bus-independent protocol for transferring buffers
  - A binding layer for attaching virtio to a bus (e.g. pci)
  - Domain specific guest drivers (networking, storage, etc.)
  - Hypervisor specific host support
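A minimal sketch of the buffer-passing shape described above. Real virtqueues use descriptor tables with available/used rings, index counters, and notification suppression; this shows only the conceptual protocol (one side posts buffers, the other consumes them), with invented names.

```python
# Conceptual virtqueue: a bounded ring of buffers that the guest
# driver produces into and the host device consumes from, in order.
from collections import deque

class VirtQueue:
    def __init__(self, size):
        self.size = size
        self.ring = deque()

    def add_buf(self, buf):
        """Guest driver side: post a buffer for the device."""
        if len(self.ring) >= self.size:
            raise BufferError("ring full")
        self.ring.append(buf)

    def get_buf(self):
        """Host device side: consume the next posted buffer, if any."""
        return self.ring.popleft() if self.ring else None

q = VirtQueue(size=256)
q.add_buf(b"packet-1")
q.add_buf(b"packet-2")
first, second = q.get_buf(), q.get_buf()
print(first, second)   # consumed in FIFO order
```

Because nothing in the ring knows about the hypervisor or the bus, the same structure can carry network frames, disk requests, or anything else, which is the "framework plus bindings" split the list above describes.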

There is a tradeoff between moving driver support into the kernel vs. user level.  User level provides better security isolation and often negligible performance degradation.  Using dbus for communication typically costs about 60 milliseconds (did he really say ms?? or microseconds?); either way, it introduces some latency.  The plan is to move the virtio drivers back into the kernel to measure whether there is any noticeable difference.

Infiniband and OFED (?) - do they work?  IB just works as a driver function, but would you have RDMA support right to the target?  The answer is:  it depends.  It is certainly possible, but it gets pretty complicated.  Without assigning an adapter to the guest, it becomes difficult to register a set of pages with the driver for direct DMA.

Xenner is a mode you can run QEMU in - an independent application that uses KVM.  It emulates the Xen hypervisor ABI, is much, much smaller than Xen, and is used to run unmodified Xen guests on KVM.  The work has been going on for quite a while and will soon be a part of QEMU directly.

- QEMU improvements and integration:  libmonitor, machine description
- qxl/SPICE integration
- scalability work:  qemu & kvm
- performance work
  - block:  i/o using linux aio
  - Network:  GRO, multiqueue virtio, latency reduction, zero copy
- Enlightenment - the ability to receive calls from Windows guests; Hyper-V requires VT.

Main contributors:  AMD, IBM, Intel, Red Hat
Typical open source project:  mailing lists, IRC
More contributions welcome.

Ksplice: reducing the need for kernel reboots

Ah, good morning...  This morning I'm moderating a session on New Technologies - the first presenter is Jeff Arnold who is covering Ksplice.  From the LF Program:


Jeff Arnold, Lead Developer, Ksplice
Today, when Linux users apply security updates to their systems, they are commonly informed that they must reboot in order to finish applying the updates. Since rebooting is disruptive, many users and system administrators delay performing these updates, despite the increased security risk--more than 90% of attacks exploit known vulnerabilities.  New technology out of MIT, called Ksplice, enables running Linux systems to stay secure without the disruption of rebooting. This talk will describe the core innovation behind Ksplice and how this technology will improve the security and maintainability of Linux systems.
As good as Linux is (transcendent?  ;-) it still has bugs.  Some of these include security problems which typically need updates as quickly as possible.  However, people do not enjoy the scheduled downtimes, especially in this world of round-the-clock access to services.  Reboots also have the problem of losing software state such as network state, application state, etc.  90% of attacks exploit known vulnerabilities, so getting patches installed more quickly would reduce the level of vulnerability.  Also, applying patches sooner avoids the need to wait for a scheduled maintenance window.  Ksplice currently adds negligible performance impact.  If any impact is found, please notify Jeff asap!

Ksplice works on any release since Linux 2.6.8.  The initial release was in April 2008 (GPLv2).  Ksplice has tools today in Debian sid, Ubuntu Jaunty, and Fedora 8-10.  It is currently proposed for mainline, and 5 engineers are working on Ksplice full time.

Question from the audience - doesn't virtualization obviate the need for this type of technology?  Answer:  not really.  Even the underlying OS supporting, say, KVM, still needs to be patched.  The impact of downtime or mandatory migration for all virtual machines running on that physical machine is still extensive.  And, taking down the network state or application state for a virtual machine still impacts the workload.

Jeff provides an example of code that needs to be updated, and shows a way to apply a patch and generate a ksplice patch.  He then shows a binary exploit, followed by a complex patch including asm code and C code touching a couple of files.  He then uses Ksplice to build two kernels, one with and one without the patch.  With a simple sudo ksplice-apply he applies the kernel patch, ksplice-view shows what ksplice updates have been applied, and a re-run of the exploit code fails to give him root privileges.  ksplice-undo allows the patch to be removed.

These changes only update the running kernel.  However there are also tools which update the kernel for the next reboot or allow a set of rc scripts to apply updates at next boot to the same kernel.

Source for the kernel must be available to build the patch.  If you try to build and install against the wrong source version, Ksplice will notice that the kernel differs.  You must also use the same compiler and assembler versions.  Could a vendor provide enough information so that an end user could create these patches without the full source?  Possibly yes.  Could a vendor instead build these kernel modules for distribution to end users who don't have the full kernel source easily available?  Yes.

Jeff showed a Ksplice Uptrack Manager, much like the Ubuntu software update menu.  It looks like a normal package manager, but with patches getting spliced into the kernel.  The kernel stops for about 700 microseconds (0.7 milliseconds) during this update.

Question from the audience on what the likely model is here for distributions: does every vendor need an infrastructure for their patches?  Ksplice is hoping to be able to create these updates for a number of distributions.  A distro vendor representative would like to have both a mechanism for updating any set of pre-built kernels as well as the running kernel.

Next Jeff talks about the mechanism for converting a source code patch into binary updates to the running code.

Ksplice identifies which functions are modified by the source code patch, generates a "replacement function" for every to-be-replaced function, and then starts redirecting execution to the replacement functions.

Pre-post differencing: the "pre" source is the unmodified source tree; the "post" source is the source tree after the patch has been applied.

Ksplice compiles both pre and post, then compares the two binary object files and extracts the replacement functions.

He shows a diagram of foo() and foo()' (foo prime): Ksplice places a jmp instruction at the beginning of foo() that jumps to the beginning of foo()', and a return from foo()' returns to the caller of foo().

There are also some symbolic references that need to be patched.

Ksplice needs to make sure that foo() and foo()' are not running at the same time.  It temporarily grabs all CPUs, makes sure none of them is executing one of the affected functions, and, if necessary, aborts (rare).

The design today only allows changes to code, not to data structures.  For patches that need extra work, there are ksplice_pre_apply() and ksplice_post_apply() hooks.

In some cases, a patch may need to iterate over all CPUs or use the ksplice_apply() call directly to apply the patch.

Hypothesis:  most Linux security patches can be hot-applied with Ksplice.  Using all Linux security patches from 2005-2008, they were able to hot-apply 88% without change, and 100% with only about 17 lines of additional code per patch.

Because it does live patching, Ksplice can do some of what kgdb and SystemTap can do for debugging: code can be inserted almost anywhere, and any symbol value can be discovered.  Supported architectures are x86_32, x86_64, and ARM.

Jonathan asks how we avoid making the problem worse by creating new versions of a kernel that were never really seen by a distro or a kernel developer.  Is the concern that there is unreviewed code?  Or that there is a larger variety of running kernels modified "in place"?  You could use Ksplice for just the 88% of directly applied changes, without any of the extra code (which could itself have bugs), and limit some of the risk.  Any new code for the remaining 12% is intended to be as small as absolutely possible to minimize the chance of introducing new bugs.

Is this a portable rootkit?  Well, sort of: if you have root already and can load modules, yes.  Does it make it a little easier to generate a module which can be loaded to root the machine?  Probably.  But black hats already have these techniques, and it gives the white hats an easier way to keep up.  It is also possible to use cryptographically signed modules with Ksplice to improve security overall.  And there are tools to see what Ksplice updates are installed, set taint flags, etc., to make sure that people can tell what was changed.

This should work well with the stable trees too, although they've had to remove some patches to make a stable tree live-update a running base kernel.

Wednesday, April 08, 2009

So who won the Linux Foundation Video Campaign?

Jim Zemlin: "So, the videos that won this campaign were all great..."  The contest outstripped typical corporate efforts of this type, drawing four to five times the number of submissions of comparable corporate ad campaigns.

Here are the winners:

My personal favorite is actually the first runner up - it is fun.  ;)

Why can't we All Just Get Along? Microsoft, Sun, Linux reps discuss potential for interoperability

Roundtable Discussion: Why Can't We All Just Get Along
Jim Zemlin, Executive Director, Linux Foundation
Ian Murdock, Vice President of Developer and Community, Sun Microsystems
Sam Ramji, Sr. Director, Platform Strategy, Microsoft Corporation

Microsoft:  What would you have done differently?  Sam Ramji: I would have on Day #1 built a relationship with the Legal team.  Engineers adapt quickly but Lawyers tend to take a little longer to be educated on the trends and directions and to understand the ramifications.  The exact quote might have been:  "Engineers iterate and get to the right solution, Lawyers mitigate risk..."

Sun:  Open Solaris work from Ian - he entered a little naively, underestimating the inherent inertia of working within a large company.  Building consensus did finally happen, but it took much longer than expected.  Once the inertia is moving in the right direction, it is much easier to get things done.

Jim to Sam:  It is clear that MS sees the computing landscape changing. Is there anything that you'd like to see from this crowd, beyond just going away and leaving us to have the dominant market share?

Sam:  View Sam as the "unelected rep within Microsoft", helping to advocate for changes in behavior within Microsoft.  Within MS, they have built up Linux on top of Hyper-V as well as Windows on top of Xen.  Hopefully the Linux community can provide positive reinforcement for the things done well and not just kick MS for the things done wrong.

Jim:  Why does MS care about the open source community?  There is often an assumption that MS has a nefarious purpose behind their actions and the open source community is trying to figure it out.

Sam:  Two answers.  First, he'd like to see computing just get better.  Sam sees that greater efficiency in computing drives greater productivity within the economy.

Second: with $60 billion in revenue, what is the next engine of growth?  The better MS works with other products, the greater the growth potential for MS products.  A lack of interoperability will likely be an inhibitor to revenue growth.

Jim to Ian (via Twitter):  Who is the largest contributor to open source in the world?  Ian:  Sun!  The entire company is built around open source, literally, working with other companies to ensure that their products interoperate.  Jim:  What is Sun going to do with MySQL?  In cloud computing or Web 2.0 companies, whatever you want to call them, open source is a major underlying technology, which is a place where MySQL fits in nicely.

One example of agreement between Linux, Sun and MS is on accessibility standards for people with disabilities.  Standardization in this space is a place where we could all work together.

Ian:  Sun did not foresee the rise of Linux and did not get the chance to incorporate it into their strategy until too late.  (Contrast this with Oracle's insight as viewed in an earlier posting with hindsight).

Jim:  is there any possibility of ZFS and dtrace under a GPLv2 license?  Ian (after some hesitation) "There is a possibility, yes."

Unfortunately I didn't catch all of the debate in this session because the dynamic was very interesting.  The fact that this panel exists is a good sign that there may be some collaboration among some more extreme competitors.  Of course, this is a major aspect of this part of the industry in which many competitors seek to find common ground for collaboration, advancing their common interests.

All in all, kudos to Ian, Sam and Jim for setting this up and running with it.

LF Collab Summit: Linux in the Enterprise: The Journey, Milestones and What's Ahead

Edward Screven, Chief Corporate Architect, Oracle

Date: Wednesday, April 8th
Time: 15:15 - 16:15
Location: Imperial
Summary: Linux has come a long way from its modest roots in the early 1990s to now being the operating system of choice for data center deployments. Join Oracle's Chief Corporate Architect, Edward Screven, as he discusses the importance of Linux in the industry, and how Oracle views and supports Linux. Edward will also highlight his thoughts on advancements needed and being made in Linux that will continue to keep it on the top.

Why Does Oracle Care About Linux?

Oracle wanted customers to standardize on shared pools of low-cost servers and storage as a way to save money on hardware, break down the existing silos within the data center, and simplify the growing heterogeneity of data centers.  Windows was not a realistic option; BSD was considered, but Linux appeared to have a better chance - although in hindsight, this relatively pivotal decision was as much luck as it was a good choice.

In 2002, Larry Ellison said "We will run our whole business on Linux".  Today that statement is true.

Chuck Rozwat said "We will run our base development on Linux for all of our products"  Today they do.

Oracle Enterprise Linux is primarily a support business exactly tracking Red Hat Linux.

Growth from $954 million in 2006 to $1.2 billion in 2007 (Gartner) makes Linux servers the fastest-growing OS subsegment, up 25.6% over that period.

Oracle claims that they may be the world's largest user of Linux, including their Traditional IT, Development, Oracle On Demand, and Oracle University.  84,000 servers, 10 PB storage, all running Linux.  1000 dual CPU machines in "The Farm" - their test grid.  75% running Linux, 2000 jobs run simultaneously.

What helps Linux succeed in the data center?

- Cost effective - total cost and marginal cost

- Standards Based, extensible, 3rd party support, open

- Enterprise Ready - scalable, reliable, manageable (at the individual box level)

- Tested configuration ready to deploy

- Enterprise-Class Integrated Support

Oracle strongly believes that virtualization changes the game - it turns pools of servers into a fungible resource.  Oracle VM is a product in this space, based on Xen, supporting Linux & Windows.

Used in Oracle University: 1/6th the hardware, CPU utilization increased from 7% to 73%, the servers-per-administrator ratio increased 10X, and revenue per server increased 5X.

In Oracle On Demand, every customer used to require at least 6 machines: test, development, multiple tiers for database and servers, etc.  Virtualization cuts that number in half or better.

Next step:  make Linux the standard OS for the data center.  He believes Windows will always be the default desktop OS, but the standard Enterprise/Data Center OS should be Linux.

He endorses the btrfs file system:

- General purpose file system

- Handle large storage

- designed for repair and reliability

- copy on write with efficient snapshotting and checksumming

- Manages multiple devices under the file system in RAID striped and mirrored configurations


- In kernel 2.6.29

He comments that SystemTap was not open from the very beginning, which has led us to where we are today.

The Opportunity for Linux in a New Economy

Al Gillen is back for the second year at the Collaboration Summit.


Economic Impact
Trends from past recessions and how that applies today
The role of virtualization software
Outlook for the Linux ecosystem
Essential Guidance

There is a slowdown in IT spend - a capital expense (capex) reduction is mandatory for many customers, worst in North America, second in Western Europe, less in Asia Pacific, at least for now.  Nonpaid solutions are likely to be hot:  Linux, open source DBMS, middleware, tools.  Watch for an update on non-paid virtualization solutions.
The ROI window has compressed dramatically; paybacks not realized this fiscal year are nonstarters.  It is difficult to justify new initiatives today.
New migration initiatives are unlikely.  If a migration was not already underway, it is unlikely to start now.  Existing skills will determine what is and is not done.
Everybody wants to declare victory in a down market.

In a down economy, everyone is losing money.  No business does well, but we look more at what dynamics will change over the next three years as a result of the current downturn.

A chart on x86 server shipments shows a marked drop in Q4 2008, with shipments projected to return to previous highs around the end of 2010.

Roughly a drop from 8+ million servers a year to about 7 million x86 servers a year.

It appears that the software spend trend matches the hardware trend, although the software trend remained positive (ranging from a high of 14% growth per year to a low of about 2% growth per year) with a projected high of about 7% growth per year by 2013.

Legacy of the 2001-2002 recession:  that was the original demarcation line for the acceptance of Linux in the marketplace.  Customers also began to buy major new servers for Linux deployments around that time, which also drove the build-out of software on Linux.  About that time, CIOs claimed that there was no Linux in their data centers, while people in the trenches pointed out that they were using Linux at the fringes; the grass-roots entrenchment had begun in earnest.  That also drove the beginnings of standardization around Linux.  Linux was just then available on a number of diverse hardware platforms, driving a level of commonality in application availability - especially open source applications such as Apache - across the Enterprise.

Virtually all of the paid market for Linux is in the Enterprise space today.  Some geographies are focusing on the non-paid distros.

After the recession, there was a strong growth period for several years but also that recession drove a strong push towards virtualization and consolidation.

"Free" servers became widely available thanks to virtualization software.  Unix is increasingly under siege from Linux... and Windows.

Cloud might get a boost but revenue from cloud Linux might not.

Windows, of course, does not go away.  Nor does Microsoft cease to be a fierce competitor.

Transitions eventually mandate rationalization and assimilation: managing nonpaid OSes, managing hypervisors and guest OSes.

Virtualization as a Solution

Lifecycles for OSes and apps will be extended.  Solutions that foster cost avoidance will be favored.  OSes that are in use will probably stay in operation much longer than they have historically.  For instance, RHEL4 apps and environments may remain in the data center for more than the typical 10-15 years - more like "as long as the application has value", which could be over 20 years in some cases.

Solutions that cost $0 will be tested and adopted.  The initiatives will result in permanent changes to how IT does business.

The Virtualization Effect
We see a decoupling of hardware and software via virtualization.  Virtualization takes two basic forms: independent guests ("stand alone" OSes), associated with nonpaid copies and with less critical workloads; and replica guests ("child" copies), associated with enterprise distros and more critical workloads, with virtualization rates higher under enterprise subscriptions.  The ratio of virtual to physical systems is likely to increase to 2:1 in the datacenter in the next 18 months - where it was 1:1 up until about three (?) years ago.

Where does virtualization go next?

Pricing has been driven to $0
Value add moves to management
Integration with hardware, OS
Uniguest installations
Heterogeneous hypervisor management
Managing offline images critical

Al predicts that it might take 3-5 years for KVM to work its way into datacenters as a primary virtualization platform.  What this means to Xen is unclear but probably does not impact most enterprises much over the next 3 years and they will have time to devise a migration strategy.

Linux and Cloud Computing

Infrastructure clouds:  services such as CPU, networking, and storage; often presented as a virtual machine over the Web; the user installs and manages their own OS and applications; pay by the megabyte, gigabits, MIPS, etc.  Example:  Amazon EC2.

Platform clouds:  an operating system and possibly infrastructure software; hosted in a Web-accessible location; may provide an application developer and runtime environment.  Example:  any Web hosting provider.

Application clouds:  virtualize an entire application (aka SaaS); consumed as a solution or as individual services through APIs.  Examples:, Google.

Linux has already had a big play in all of these areas.

Outlook for the Linux ecosystem

A chart showing Linux distro sales from Red Hat and Novell - both showed positive growth without much impact through the Q4 2008 share numbers.  Interestingly, Red Hat was closer to $140 million in licenses where Novell was closer to $20 million (from memory) - but Novell's growth was less impacted than Red Hat's, and both had seen nearly negligible impact from the economy thus far.

Still forecasting a 15.7% CAGR for Linux, with a shift in which markets, moving up the stack, are growing.  The distro itself is a tiny portion of the overall spend, hardware is third smallest, and application development and application software are the two areas that grow the most, with services being perhaps the second largest area of growth (I can't see the exact numbers but the slides should be published on the LF web site).

Al pointed out in one of the graphs that a smaller market share easily magnifies a CAGR - so comparing Windows' CAGR in a larger market with a larger CAGR for Linux in a smaller market does not mean that Linux has overtaken Windows.  At least not yet.  ;)

Key workloads (collaborative, application development, IT infrastructure, Web infrastructure, and decision support) are called out in some detail with their corresponding growth rates (worth reading the chart).

Drivers for Linux:
- Capex concerns
- Virtualization: guest acquisition costs, ease of deployment
- Non-paid Linux: the larger ecosystem is good, open source layered SW
- Increasing integration
- Software appliances: new form factors, new GTM scenarios

Challenges for Linux
- Capturing new customers - particularly in a down economy, freeing up resources, staff training
- Non-paid Linux and OSS - generates no revenue, dries up survival funding
- Microsoft - Windows solutions, applications
- Generating revenue from Cloud

Essential Guidance
- Look at the current economic downturn as opportunity
- Use virtualization as an integration point for commercial and nonpaid Linux
- Remember that revenue is not the only metric that matters
- However, revenue is important
- Missed a bullet or two, sigh.  My fingers are tired.  ;)

Question from Matt Domsch:  Is there a distinction between Public and Private clouds?  Al:  Yes.  There are places for internal clouds and probably a hybrid between the two at many customers.

What about the guests in virtualization - is there any preference for Linux or Windows?  Al:  Currently there is a larger set of Windows installations, often driven by software applications.  Aging Linux apps are often infrastructure related, and it is often easier to replace infrastructure servers than to replace custom Windows apps in the Enterprise.

The Linux rate may be similar to the Windows rate in the next 6-8 years.

How do you measure paid vs non-paid Linux?  It seems to some that there are more non-paid installations than the 50-50 split on the charts might indicate.  Al: Based on world-wide surveys, returns say somewhere from 35-55% are non-paid.  That number probably reflects Corporate users.  There are a fixed number of servers out there so that number is used as a way to validate the response to some level.

A comment from the audience suggested that there might be a business tradeoff to consider on non-paid OSes - if you are paying for a kernel engineer or administrator to support the non-paid OS have you really saved money by using a non-paid OS?

Jim Zemlin wound up the session with the observation that non-paid Linux is obviously a key question.  But it is worth noting that if you use a non-paid OS you won't be going to jail, unlike using some other OSes without paying.

Linux Foundation: Panel: Measuring Community Contributions

Luckily I found a power outlet, so here comes more blog fodder!

Panel: Measuring Community Contributions
Joe Brockmeier, Community Manager, OpenSUSE

Jono Bacon, Community Manager, Ubuntu
James Bottomley, Linux SCSI Subsystem Maintainer
Dan Frye, VP, Open Systems Development, IBM
Karsten Wade, Fedora Project, Red Hat

Joe asked each of the panelists to introduce themselves - Karsten as a gardener, Jono as a member of a metal band (did I hear that right?), Dan Frye as VP of Linux Development at IBM, and James Bottomley - now at Novell, still SCSI subsystem maintainer, and Chair of the Linux Foundation's Technical Advisory Board.

James:  Contributions to the upstream kernel are a major source of community contributions.  Karsten points out that users bear the brunt of poor contributions.  Jono points out that there are key contributions beyond just code.  As an example, ten years ago the documentation project required that you know LaTeX.  Today, contributions such as documentation, translation, etc. are valid contributions.  Joe asks what about companies that contribute marketing or legal review - general agreement that that is a useful contribution.  Dan Frye points out that it was best said by <insert name here> in the Halloween note:  people scratch their own itch.  A real community allows people to contribute in ways that benefit the contributor, as opposed to only contributing from a laundry list of features needed by the recipient.  This is key to the success of a real community.

If your itch involves contributing in the Peer to Patent project, that adds value to you and thus to the community that you are a part of.  James points out that code contributions are critical but so are testing, debugging, bug fixes, etc.  And, the community accepts money (via the Linux Foundation) to support various Linux related activities, such as the Linux Kernel Summit or requested capabilities.

Dan:  I don't see any reason to "grade" people's contributions.  But we do need to remain inclusive allowing people to contribute in ways that helps to scratch their own itch.  There's no need to guilt people into contributing in ways that they don't happen to find worthwhile.  However, it is key to make sure that the community remains open to people so that they can scratch their own itch, to be a part of the community and to grow within the community.

Jono points out that we are all still learning about the different ways that people can contribute.  For instance, some people are predisposed towards certain strengths, such as Python programmers or documentation writers or graphics artists.  From a distro perspective, knowing how new members could contribute based on their skills is a new and emerging competency.

How much thought goes into a vendor's training and education about how to make community contributions?  Tongue in cheek from Joe:  IBM plans everything, right?  Dan laughs - we try.  IBM evaluates a new community, understands what roles we might be able to take on, learns about styles and governance models, and tries very hard to learn how to be a real and effective member of the community.  James points out that we also learn from mistakes.  Dan points out that there isn't a mistake that IBM hasn't made.

James points out that IBM long ago attempted to make contributions in whole cloth, quite unsuccessfully.  IBM then learned to break a contribution down into small components which benefited the community and, over time, achieved IBM's own ends.  Dan agreed that was a lesson IBM had to learn many times.

Karsten's perspective was that he was impressed with IBM's due diligence.  His feeling is that Red Hat's approach has been more evolutionary in terms of interacting with communities, and probably less formalized.  James pointed out that this is a key difference between an industrialized approach of "I pay you for X feature" and the community approach, where people get together to share goals and possible solutions and work collaboratively to create a solution.

James points out that the panel has collectively a lot of experience in training an organization to engage with open source and is hoping that this knowledge can be transitioned to new folks and hopefully in part to the Linux Foundation.

Andy Wilson from Intel points out that development seems to be US-centric.  Dan points out that IBM has a large number of contributors in India and Brazil, and some in China.  Joe asks for clarification - are these people at large in these countries, or are they IBM employees?  Dan says these are IBM employees.  Andy agrees that IBM has a supportive corporate culture here, but suggests that the problem is more endemic to the Linux community as a whole.  James points out that IBM's involvement has helped generate a lot more interest in Linux among developers in India.  Dan also agreed that we are (as a community) short on contributors from eastern Europe and a number of other major developing countries - not just Asia.

James points out that there is also an infrastructure problem in these countries which has to be addressed as one of the key inhibitors to the growth of Linux developers in those countries.

A member of the audience asked what objective measures are actually being used - based on the title of this panel.  Karsten points out that there are some objective measures, such as mailing list traffic and analysis of the sources of key contributions (similar to Greg KH's analysis of kernel contributors).  The goal is to be able to help track the health of any community.  Another measure might be IRC traffic on a project channel.  James points out the work of Greg KH and Jonathan on the git logs of the kernel to permanently track all contributions.  Karsten points out that community is primarily based on communication.  There are some collaboration techniques, such as bug workflow, that provide indicators of community contribution or community vitality.

A woman from Google asked if there was a measure of the success of mentoring, and how mentoring improved or led to the viability of a community.  James pointed out that they do some of this for the Linux kernel via the Linux kernel summit.  Dan pointed out that companies don't measure code contributions as much as effectiveness.  In IBM's view, what matters is whether participation in a community led to other people contributing code to address IBM's customer needs; that, in fact, is more important than counting patches, which may have no bearing on value.  James pointed out that no real kernel maintainer appears to have a manager who measures at that level of effectiveness.  Jono points out that measuring for the sake of measuring is of course pointless.  James pointed out that many people are motivated simply by getting a patch into the kernel.

And with that, the time expired.  Jim Zemlin wrapped up the session by pointing out that the Linux community is an excellent example of the success of a development community.  And, introduced Al Gillen from IDC who will provide some measures of the success of Linux in the market place.

Kernel Developer Round Table at LF Collab Summit

Panel: The Linux Kernel: What's Next
Moderator:  Jonathan Corbet, Editor at

Greg Kroah-Hartman, USB & PCI Subsystem Maintainer
Andrew Morton, Lead Kernel Developer & -mm tree Maintainer
Keith Packard, Project Lead
Ted Ts'o, Chief Technology Officer, Linux Foundation

The 2.6.30 merge window just closed.  Linus noted that about a
third of the code that went in was "crap".  A lot of code went
into the staging tree.  So, for Greg KH:  what is the staging
tree, really?

Greg KH:  the staging tree came out of the driver project which
provides a collection point for random drivers, including bad
API usage, bad code, rather crappy code.  GregKH is now the
"crap" maintainer, er, ah, staging tree maintainer.  So, about
130 drivers were merged into the staging tree, all experimental
code, mostly from drivers that have been out of kernel since
the 2.0 days.  Slowly that code is getting cleaned up now that
it is consolidated and being evolved to the point where it can
be merged into mainline.

Some distributed filesystem work, aka Ceph, went in through the
staging tree:  Why?  GregKH: because the maintainer asked that
it go in through there.

For Keith:  what are the current graphics things he is working
on?  Graphics drivers used to be done all in user mode, but
developers have been re-educated, or have come to the awareness,
that a number of changes really need to be in the kernel to
support graphics.  A number of new APIs for acceleration, video
mode configuration, and memory management are now in the kernel
and can be used by the X11 graphics drivers.  Or rather, the X11
graphics drivers are just one of many graphics drivers based on
the in-kernel support.  This makes the graphics capabilities more
accessible to graphics driver writers.  There are still some
problems in the 2.6.29 code base, and 2.6.30 is getting better.
But most of this stuff is pretty bleeding edge and probably
should have been run through staging.

Graphics are now at a much better level of support in Linux than
they have ever been.  The number of supported chipsets is finally
increasing from just Intel chipsets to include a number of the
ATI chipsets more fully supported out of the box.  ATI has
probably put fewer developer dollars into improving the drivers
as compared to Intel, but they are getting a fair bit of help
from the community and are communicating well with the
developers.  Fedora 11 has shifted to the nouveau driver for
nVidia hardware, which in some cases exceeds the capabilities of
the native binary drivers provided by nVidia.  nVidia is still
not working at all well with the Linux community.

The graphics community could still use additional developers and
improved vendor support in general.

Jonathan:  Is there anything we can do to make the community more
open and accessible to new developers?  Keith:  the wayland (sp?)
project is a new windowing system (not X11 based) which is using
the new kernel APIs, and which would not have been possible
without acceleration and basic configuration support in the
kernel.  The same is true for the Cairo project.  These new APIs
and kernel support should enable an increase in the velocity of
change.

Jonathan:  where are filesystems going?  Ext4 was just pronounced
"stable".  Ted:  Two community distros (Fedora and Ubuntu) will
be shipping with ext4, possibly even as the default filesystem.
Ted has been using ext4 as his primary filesystem for over 6
months now.  ext4 has also attracted new developers, and that of
course leads to a few new bugs as the new developers are less
familiar with the constraints and caveats of the ext3/ext4 body
of code.

Jonathan:  looking beyond ext4, when is btrfs (pronounced Butter
FS, or sometimes just Butter) available (a common question for
Chris Mason ;-)?  Ted:  btrfs is an exciting alternative, but it
doesn't yet compare to the four decades of experience behind the
Berkeley-style ext3/4 family of filesystems.  It still needs some
work to be ready for production and will probably be the
follow-on filesystem after ext4.

Jonathan:  Are there too many filesystems?  Ted:  Some of the filesystems
are somewhat special-purpose, e.g. for flash support or other unique
hardware configurations.  However, only about 7-8 filesystems account
for about 95% of the filesystems in use by customers today.

Andrew was key in shepherding in a new filesystem - but he offered no
particular insight into what benefits it provides, although the code is
very cleanly done and appears as though it will be very well maintained.

Linux-next:  is that working out well?  akpm:  Yes!  It is doing a lot
of the work that he used to need to do for integration, testing, and
evaluation of new code.  Stephen's work is helping tremendously, although
Andrew feels that the code base is not getting tested by as many
people as it should be.

Where are the biggest problems in Linus' tree coming from?  Andrew:
typically they seem to be code that has skipped over linux-next and
gone straight to Linus' tree.  That seems to be a bad model, and more
people should be planning to land their code in linux-next first.

Should people be developing against linux-next?  Andrew:  probably not.
The code base is really not stable enough for that, and the various git
trees are not really well set up for this.

Are there too many developers?  Has the rate of change decreased?  Andrew:
no, not really.  There seems to be a trend of established developers not
always seeing the changes from new developers that make it into the kernel.
In some cases an established developer will stumble across a new
directory in the source tree and find that the code is filled with newbie
mistakes.  While this has the potential to be a problem in the long term,
it seems like the openness of the tree is helping to maintain quality: as subsystems are used and encounter bugs or problems, they still get fixed by the community.

Linux-next is causing a lot of email about merge conflicts between subsystems.  Is that causing a problem, and is it too hard to develop in the kernel now?  Answer from several:  no, it seems to make things easier and points out the problems sooner.

Question from the audience:  Everyone pushes new developers to get code upstream.  However, many subsystems seem to meet extreme resistance when pushing code upstream, e.g. uprobes, systemtap.  Andrew points out that these subsystems are impacting the very core of the kernel and thus are more heavily scrutinized.  Someone suggested that the code being pushed upstream would meet less resistance if it were cleaner and better designed.  Is this something that could be better documented for existing developers?  utrace probably has a better chance of getting merged now that several core kernel developers are helping shepherd the code.  In many cases, core kernel code being pushed by non-core kernel developers requires a level of responsiveness that those non-core kernel developers typically do not provide.  Questions about locking, API changes, overlapping capabilities, implications for other subsystems, etc. need to be answered, and code review comments need to be aggressively addressed by the developer for the code to have a chance of adoption.  Write, post, and never respond will clearly never get code into the mainline/core kernel.

Some have suggested that your code should be so good that core developers want to pull your code into the kernel rather than having you push your code into the kernel.

Are there too many tracers in the kernel already?  Is anyone actually using any of the tracers?  Ftrace alone has probably a dozen tracers built into it.  However, most of the documentation for tracing is only in the git logs for the code checkins, which is pretty pathetic.
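For readers who haven't poked at ftrace: the interface itself is small. As a hedged aside (the panel didn't walk through this), here is a minimal sketch of driving ftrace through its debugfs files, assuming a kernel built with the function tracer, debugfs mounted in the usual place, and root privileges; it degrades to a message when those aren't available:

```shell
# Sketch of the basic ftrace workflow via debugfs (needs root and a
# kernel with CONFIG_FUNCTION_TRACER; paths per the kernel's ftrace docs).
TRACING=/sys/kernel/debug/tracing

if [ -w "$TRACING/current_tracer" ]; then
    cat "$TRACING/available_tracers"            # tracers this kernel was built with
    echo function > "$TRACING/current_tracer"   # switch on the function tracer
    sleep 1                                     # let it record some activity
    head -n 25 "$TRACING/trace"                 # peek at the captured trace
    echo nop > "$TRACING/current_tracer"        # turn tracing back off
else
    echo "tracing not available (need root, debugfs, and ftrace support)"
fi
```

Each "tracer" listed in available_tracers (function, function_graph, sched_switch, and so on) is selected the same way, which is part of why so many accumulated with so little written documentation.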

Question from the audience:  it seems that there are lots of functions duplicated in the architecture-specific trees.  How does one test all architectures when factoring out common code like that?  Andrew:  there is a linux-arch mailing list which is the contact point for all architecture maintainers to review this type of common factoring.  Or, send the patches to Andrew and he will send them out to the architecture maintainers until they stick.

Question from the audience:  Things change rapidly, including drivers moving to different directories.  Greg KH:  the rate of change is still increasing at a linear rate.  The usb subsystem changed substantially from 2.6.10 to today, for instance.  Is that rate of change going to continue?  Greg KH:  Yes - that was a several year period of time, and things change rapidly.  Git logs and such help track those changes, but things will continue to change.

Question from Christine Hansen:  How are the highly experienced (aggregate 100 years of Linux use?) developers mentoring new developers?  Greg KH:  I think about this a lot, and we document more, we train people how to accept patches, contribute patches, etc.  Christine:  There is still a world-wide perception that most of the development is centered around Portland, OR, USA.  At some point, all of you on the panel are going to Ascend.  To a Mountain.  But who will replace you?  Is there anything done on the mailing list or within corporations?  Andrew:  Oleg Nesterov from Russia is a good example of an up and coming individual, and is an example of how someone can rise from nowhere and become a proficient developer.  But the community seems to evolve - including people with great capabilities who disappear to other jobs, etc.  Keith:  Corporations have an ability to do internal mentoring, driven in part by a corporation's need to build and develop new engineers, which helps with some of this within the ecosystem.  Keith points out that the financial incentive for some programmers also helps them fit into this mentoring environment.  (My observation:  corporations help make the best of any given individual, but Linux really draws extraordinary developers who often come from unexpected backgrounds.  As one of my old mentors and teachers said, "Great engineers are born, not trained.  You can improve a good engineer with training and improve a great engineer with experience, but there is no replacement for a naturally excellent engineer.")

A question on interface stability:  As an example, iptables as a command needs to remain relatively stable, but the kernel interfaces are rarely used by anything other than a small number of applications.  Therefore an interlock between the key applications and the kernel is sufficient.  There are some cases where this is just a plain old hard problem, such as X11 interfaces for mode setting in the kernel - when a user level application does its own mode setting, you wind up with "impossible" conditions which can lead to a kernel crash.  Ideally, the APIs would be stable enough for the applications, but they should not have to be locked forever, preventing new evolutions of a subsystem.  Application interfaces tend to stay very stable but are handled on a case by case basis.  If you can find a way for new applications to use new interfaces and old applications to continue to survive or to be updated to the new interface, we can over time migrate to a new interface and ultimately retire an API.  BTW, this same level of stability is not applied to the in-kernel APIs, since all providers and consumers of an API can be updated simultaneously in the kernel source, obviating the need for long term compatibility interfaces.  And, of course, good interface design enabling extensibility is ideal, although it is also impossible to envision all the ways that an interface may need to evolve over time.

Ted:  I don't know how to get the latest X server onto my distro (Greg:  Get a Better Distro!) but some of these challenges are just inherent in the interlock between applications and the kernel.

Question from the audience:  Where is new code coming from?  Vendors, hobbyists, etc.?  Andrew:  Seeing a lot of involvement from vendors to support their new hardware, e.g. Texas Instruments.  Greg KH:  over 20% of all kernel changes still cannot be traced back to a specific vendor.  Ted:  In the filesystem space, we still see a lot of people "scratching their own itch" - fixing the one thing that really bothers them.  That often comes from non-corporate-sponsored users trying to solve problems that annoy them, including university students, home users, etc.

GregKH collects per-release contributor information and feeds that to Jonathan and the LWN site to allow open tracking of the source of contributions.

A good session - very well received by the audience.  After this, we are all off to lunch for a while.  If my battery lasts, I'll continue tracking sessions.

Final comment from Jim Zemlin (with sustained applause):  the LF is awarding an "unsung hero" award.  The recipient is Andrew Morton.  Andrew happens to be an avid racer, and the Linux Foundation has arranged for a track day for Andrew.  The only condition is that he not get killed at the track.

Andrew tells a story about Marc Merlin and a Ferrari where he forgot to brake on the track and had a near-miss.  Oops - we hope he remembers the brakes on his track day!  ;)

Intel hands Moblin project to Linux Foundation; Imad Sousou speaks about the project at the LF Collaboration Summit

Linux Foundation has taken over the Moblin project from Intel.

Imad Sousou speaks about the Moblin project:  enablement for Intel platforms, enablement for Linux graphics; his group also works on Xen, KVM, etc.  The entire development team is very impressive - primarily working upstream.  Intel really "gets it" when it comes to open source.

Big companies are terrible stewards of large open source projects. IBM's handling of Eclipse is a good example of migrating stewardship outwards to enable a vendor neutral forum.

Rather than creating yet another new open source foundation for Moblin, Intel chose to transition the project to the Linux Foundation.

Linux Foundation has become very well respected and is well funded, especially given the current economic climate. 

Moblin governance:  no disruptions to the project; subsystem maintainers and a steering committee.  The real key is that upstream is the place where anything "real" happens.

Why did Intel create Moblin?  You have to understand a bit about how Intel software development works.  Intel's customers are asking for Linux.  The Moblin effort really started with the advent of the thinking around the Atom processor.  They needed an operating system to support all of the new devices that Jim mentioned in his previous talk.

Moblin aims to contain everything from the OS up through the rest of the environment, so that end developers need only create the end user experience for their particular device.  "Moblin is about enabling the best experience."  "The user experience.  The developer experience."

There are open source projects for nearly anything you can imagine. However, where Moblin (and other projects) tend to find gaps are in the seams between applications & open source projects.  And that level of integration is where Moblin spends a good deal of its time (and that is of course also the goal of any major Linux distribution).

Moblin v1:  A great start, but it lacked integration, it had technology gaps, and the user experience was not as good as was desired.

Moblin v2:  A clean break from v1, includes fastboot, new connection management, a lot of UI framework and enhancement work.

The Fast boot experience:  fast boot is not a patch or a component. It is really about doing the right things right.  Parallelizing bloat is still bloat.  Booting in 60 seconds is really bad.  Asynchronous actions are tools, not goals.

Boot times are now at 5 seconds (moving towards 2 seconds):  kernel 1 second, core systems 1 second, graphics subsystem 1 second, GUI 2 seconds.

Next generation user interfaces:  real world versus computers.  Will future UIs use standard controls and toolkits?  More likely, the UI of the future is animation frameworks.  Clutter is their focal area - Intel bought the company.  Physics and 3D, rich animations, fluid and dynamic user interactions, compelling and innovative modern UIs; it allows developers to build applications using the same techniques that game developers use.

Another area of focus is connection management.  It's not only wired and wifi; it includes Bluetooth, WiMAX, 3G, WiFi, Ultrawideband, and telephone data (and voice).  It needs to be seamless, enable roaming, and enable sharing.

Working on connman:  existing solutions were not designed with their goals in mind; changing them at the time seemed harder than a clean start; it separates UI from functionality.  See Marcel Holtmann for any questions (he's at the collab summit as well).

The Tools:  PowerTOP, LatencyTOP, Project Builder, Moblin Image Creator.

There is a moblin track tomorrow all day.  The agenda should be on the LF Events site.

Linux Foundation Collaboration Summit keynote notes

These may be a little ad hoc, but here are some quick notes from Jim Zemlin's welcoming talk on the state of Linux and the Linux Foundation at the third annual Linux Foundation Collaboration Summit.

Jim Zemlin kicked off the conference with an observation that, even in tight times, over 400 attendees made it to the 3rd annual Collaboration Summit.

Linux is now being used by nearly every person in the world nearly every day (cue the IBM Prodigy video), followed by a slide showing a large number of servers and appliances where Linux is used.

3 trends and 3 opportunities...

Trend One:  It's the Economy

IDC:  50% Yes for servers and clients, 25% evaluating, 25% No on evaluating Linux.

Increasing mergers/consolidation driving IT infrastructure consolidation.

Linux brings lower costs and greater simplicity.  A recession causes enterprises to re-think fixed cost assumptions with Linux on the desktop.

Linux-related software spending is set to increase at 2-3X the rate of Unix and Windows in the overall market.

Linux is the primary beneficiary of the recession, growing 2-3 times faster than any other platform.

Trend Two:  Redefining the desktop

Is this again "The Year of the Desktop"?

Old desktop, Thinkpad T20, over $1000, crappy battery.

Also had a Motorola "Flip Phone".

Apple iPhone, about the same processor (really?) as the Thinkpad T20.

HP Mini 1000, 1.6 GHz Atom, 120 GB storage, wifi, bluetooth, web cam, VOIP, $250.  Note:  cheaper than the iPhone.  Convergence is starting to happen - are phones the clients of the future?  Or sub/mini-notebooks?

Is the PDA or desktop or Kindle the new desktop of the future?  Is the TV the future?  Mobile internet?  Is your car the new desktop?  Picture of a complex chair with a panoramic display.  Or will it be a holographic display interface a la MIT's Human Interface lab activities?

Linux can support any of these models today.  Linux is a key and fundamental component of all of these interfaces.

Trend Three:  The Cloud

It is real.  Linux has a vast lead in the cloud computing space.  List of companies:  mozy, cassatt, flexiscale, simory, elastra, mosso, dell, google, 3tera, morph, 10gen, CohesiveFT, IBM, elastichosts, amazon web services, etc., etc.

Linux has the clear and dominating lead in the cloud computing space.

Opportunity #1:  Standards

Linux standardization.  Cool picture of penguins with different faces.

Major goal is to keep standards open and fair.  Trying to avoid a de facto, lock-in "standard" such as what exists today.

Opportunity #2:  Unified Defense

Still hearing that Linux may not be safe - yet a quadrillion dollars runs through the Chicago Mercantile Exchange today, all on Linux.

The Open Invention Network, defensive publications, the Software Freedom Law Center, Linux Defenders, the Patent Commons, Peer to Patent, OSAPA (Open Source as Prior Art), post-issue Peer to Patent, and the Linux Legal Defense Fund are all out there, coordinating, making sure that Linux is a safe platform for end users.  They work both on defending Linux within the system and on evolving the legal environment.

Opportunity #3:  It's not just about price

A Windows video shows a shopper looking for a laptop supporting his needs for under $1500.  Microsoft is competing on price for the first time.

Now the story of Zemlin's equivalent shopping experience:  a Google G1 @ $179, a 42" plasma television at $699, an HP laptop at $250, a $99 DVR, and an OLPC bought and given to a child in need for $199 - all for under the $1500 price paid in the Microsoft commercial.

Linux is changing entire models from a technical and business sense.

What's going on at the Linux Foundation. 

Events -> Web -> Workgroups -> Training -> Legal Defense -> Standards -> Promotion -> Fellows.

See Wikinomics - the #1 example is Linux.  And now it is time for Linux to up the ante on collaboration.

Television training programs with live telepresence at any time, new social networking tools, fellowship programs to enable people to continue to maintain (…)

Picture of a 7x9 grid - 63 large members.  This is where the greater Linux ecosystem comes together.  New members are welcome to get involved.

Monday, March 30, 2009

Linux Video Contest

You know how life becomes so busy that fun things like blogging fall by the wayside? Well, that's where I've been. I have lots of fodder for some decent blogging, but it takes something a little more on the fun side to pull me into the blogosphere again: The Linux Foundation's Video Contest. While it is now too late for new entries, there appears to be a decent number of new videos to vote on. And many of them are pretty cute. I haven't seen many that will make the Super Bowl commercials, but the diversity of thought and rationale for using Linux makes good food for thought, and there are a few that might help people sort of get what it is all about. There is, unfortunately, less focus on what Linux can really do for the average home user and more on why people choose to participate in the Linux community, or why people use Linux and are annoyed with the reliability or cost of other platforms. But for Linux geeks, there are some good summaries of why Linux is a great option and maybe, just maybe, the Linux Foundation will do a repeat of this next year and we'll see some maturing of the videos to focus on why a mom & pop shop, a small business, or your family could or should use Linux.

But in the meantime, go ahead and view (and rate!) the videos! It is a fun diversion in the middle of a hectic work week!


Wednesday, January 07, 2009

Automotive Bail Outs or Building IT infrastructure for the future?

We live in interesting economic times, and I can't claim that I'm a big fan of bailing out businesses that have failed without finding some way of adapting those businesses to our current economic and social climates.  However, this line of thinking on government investment in building not just highways but IT infrastructure as a means to improve the basis for much of our economy seems quite exciting.  As a few of my previous posts indicate, I'm quite interested in cloud computing as a way to improve the ability of businesses to rapidly adapt to changes and to spend more time focusing on the core of the business as opposed to managing IT.  This seems like an area where government investment in support of cloud computing and IT infrastructure updates could substantially benefit government organizations, educational institutions, health care, and small businesses.  The blog entry has some enticing food for thought and a new outlook on a possible solution for improving our economy, increasing employment, and improving the underlying infrastructure on which so many of our businesses today are built.

Food for thought....

Friday, August 15, 2008

Um, Just Who is Managing Your Public Cloud?

This article summarizes some of the recent bumps in public clouds. While these bumps are inevitable (and unenviable!) in the early stages of a new technology, they do shine a light on the management of the data center. And, as may be obvious, the people who lost their data in one case most likely have no recourse with the holders of that data. In the case of outages, "well, gee, so sorry" is a pretty weak excuse at the moment for problems in managing the public cloud.

My guess is that this will start a bit of a turn towards more conservative cloud management (that loose and free stuff looks good on paper) and that in turn may start to put a little pressure on prices or start to reduce the license/contractual assurances that current cloud providers make available.

Another thing worth noting here: Google and Amazon, two of the biggest cloud providers, have internal architectures that are designed with high availability in mind. These types of outages would typically not have affected their core operations. However, most applications that are running in their clouds today were not architected for the same style of high availability.

Anyway, I'll continue to assert that issues like this will help foster the drive towards, at least initially, private clouds, with a limited subset of workloads moving into the public clouds based on the type of workload.

It is going to be a bumpy take off into these clouds - fasten your seat belt and hope that the people getting sick along the way aren't on your plane...

BTW here are a couple of other links to recent glitches and failures such as the evaporating cloud or "oops, sorry we deleted your cloud". Some are Web 2.0, but a couple are effectively cloud computing providers which have had public failures - in large part because the data centers and applications were not designed for true high availability or had maintenance issues. And, the last of those links (thanks, Brian!) was just the typical human error problem. Even if you don't create your own cloud, you may well want to really know who is managing your cloud and how - at least until we have some higher end service level agreements available.

Wednesday, August 13, 2008

VMware joins the Linux Foundation

So Cloud Computing is all the rage today, and it is based on virtualization. Many claim that Linux and Open Source were the master key that opened the door to Cloud Computing. So, it seems very fitting that VMware has joined the Linux Foundation. The recent re-/free-pricing of VMware ESX definitely helps make core virtualization a commodity and thus makes it easier to build the more complex software solutions that will ultimately simplify information technology management over the next several years. Linux with VMware ESX, Xen, KVM, etc. now provides a powerful base platform on which to build more complex solutions which will ultimately enrich our lives and reduce the amount of time we spend managing our IT infrastructure.

Welcome to the Linux Foundation, VMware!

You want to participate in an open source development community?

Then read this. Kudos to Jonathan Corbet of LWN fame. A very good (and relatively short) booklet on how to participate in an open source community. Specifically, this is geared towards Linux, but many of the observations here will span communities and relate to any development project where a mailing list is the primary communication medium for developers.

Definitely a good read!

How Secure is your Public Cloud, anyway?

I've been chatting with people lately a bit about the rate of uptake and adoption of these so-called "public" clouds. While I'm a big fan of the potential here, they still aren't the right thing for all workloads. There are problems with availability, security, latency, etc. which have not all been resolved. As an example, VMware was recently hit by this bug, and black hats identified some holes in Xen security. And these are surely not the last holes. Sometime around the time I was born, IBM started working with virtualization and providing very high end availability, reliability, security and such. VMware and Xen are much younger cousins which have a lot more growing up to do before they provide the security and isolation of physical machines. Of course, the push for Cloud Computing and ubiquitous virtualization will accelerate the improvements in security and isolation in these more modern hypervisors. But I probably wouldn't be putting my corporate intellectual property on a public cloud just yet. Many other workloads may be just fine, but think carefully about what goes out into the public domain, er, cloud, and what you protect with those corporate firewalls.

On the other hand, those corporate firewalls give you some protection if you want to use private clouds inside your enterprise today. Those security holes mean that your own employees might get access to more information than you might have intended, but there are other things, like employment contracts, that give you some control over those types of misuses. And, unintentional access resulting from bugs at least puts your data in the hands of people you generally consider reliable.

Wednesday, August 06, 2008

How can I get a padded jail?

Jim Zemlin is a featured speaker and panel coordinator today at LinuxWorld Expo. His intro to the panel questions started with the assertion that over the years (have there really been 18 LinuxWorld Expos so far? wow) Linux has become nearly ubiquitous, including a lot of pictures of mobile devices, servers, desktops, laptops, services, collaboration tools, etc., which are all using Linux. He talked also about initiatives, including green data centers, Cloud Computing, etc., which are more widely enabled as a result of Linux being so prevalent and accessible within the industry. In many ways, Linux is enabling many of these emerging technologies because it provides a common basis for innovation which is easily accessible and eliminates the need to build every new initiative or product from scratch.

Jim also reinforced that the "competitor" from which we in the Linux community need to learn today is no longer Microsoft (well, they might have a trick or two that we can still learn) - the real competitor today is Apple. Jim took a poll to see who has some sort of Apple device today, and at first glance it appeared that the entire room -- at a Linux conference! -- had an Apple product. A little digging showed that Apple products weren't quite ubiquitous, but by then the point was made. Jim also pointed out how Microsoft and Apple are finding a way to sell products that have vendor lock-in. The products are not open, not easily available, controlled by a single entity, and basically are a jail for consumers. Of course, he then pointed out that the Apple jail looks a lot like a 4-star hotel room with video on demand, a great view, clean and neat - a jail that most of us find rather luxurious. The next slide, though, showed the Microsoft jail - emphasizing that the roughness of conditions was exacerbated by the fact that you were often trapped in that jail with no amenities, some very large rough-looking malware types, and a raft of viruses to make your stay as unpleasant as possible. And the wrap-up was that the equivalent Linux "jail" is more like a visit to Burning Man - free and open; yeah, there may not be a lot of frills and the power might go out, but you are free to come and go as you will, you can improve your surroundings as you choose, and ultimately you can really enjoy yourself. Perhaps Burning Man is not the best analogy here, but it makes the point quite nicely.

Jim's panelists included James Bottomley of kernel community fame, Christie from the Motorola alliance providing Linux-enabled cell phones, and David, who helped create the (no longer available in stores) Walmart PC.