Tuesday, October 23, 2007

Linux Device Support

The Linux Desktop survey from the Linux Foundation is now in full swing. I also chair the Vendor Advisory Council for the Linux Foundation and recently polled the vendors to identify the top 10 issues inhibiting Linux adoption from the vendors' point of view. A recent meeting of the Linux Foundation's Desktop group with the three primary vendors that now pre-install Linux also identified a number of key issues to address. I have also met with the User Advisory Council of the Linux Foundation a number of times to hear their issues. And there are always surveys online and summaries of installation tests, primarily on desktops.

In all of these sources of end user pain points, I always hear how device drivers and device support rank in the top 5 (sometimes as the #1 or #2 issue) for each of these groups. I also have some first-hand evidence of drivers as a problem, especially when dealing with the very latest hardware. At IBM, we set up a "tiger team" to identify new hardware and work with independent hardware vendors, providing them encouragement and training on how to develop open source drivers. When I was part of OSDL's Data Center Linux effort, I helped develop training material for developers, managers, and project managers on how to get drivers into Linux. And, with help from the Linux community, OSDL's legal team, and the Linux Foundation after that, we set up programs to allow vendors to enter into NDAs with Linux driver writers. The goal all along has been to enable vendors to get drivers written, either on their own or with help from the greater community of Linux driver writers, and to have those drivers live in the Linux kernel, available to all users when they install their Linux distribution. The Linux community itself has made available over 200 driver writers and 10 project managers to help create device drivers.

Yes, there are holdouts like nVidia who have not embraced the open source driver path. There are challenges in the wireless space related to the FCC operational ranges for transceivers. But by and large, the inhibitors to developing and making device drivers available have been removed. So I am somewhat puzzled that what was clearly the #1 inhibitor to Linux adoption 5 years ago is still on the list, and I'm struggling to find out why. As part of the Linux Foundation, I've pledged to dig into the details and provide concrete lists of devices which need drivers, and honestly, I need help with this. I no longer have a list of devices that are not supported by Linux, and if I had such a list, I would work all possible channels to get drivers for those devices written and into mainline. And the list has to be more concrete than some of these comments from the desktop survey:
  • hardware problems (USB, bluetooth devices - webcams, headsets, ...)
  • Difficulty connecting with some devices.
  • Drivers Support for some devices (Printers, WLAN, ..).
There are some better comments in there, such as:
  • The need to manually compile drivers for external devices like a usb tv tuner (for example AverMedia provides the driver but there is no user friendly tool to install it and u need console commands knowledge and internet access in order to complete the install)
  • Quality drivers. Non-official drivers usually support only a subset of the full capabilities of hardware devices, and even official ones (like the Intel graphics driver, for instance) have a lot of annoying bugs.
But even those are a bit vague. There is also some confusion between "devices" such as printers and scanners (many of which have pass-through kernel drivers with user-mode application drivers) and those that need a full kernel driver to enable their capability. Of course, that is something that the desktop group can work through over time.

But my primary point here is that getting detail about *what* devices lack support has been getting much, much harder. People need to provide the vendor, product, model number, PCI ID, USB ID, etc. to help the community identify which drivers are broken or missing. And when a driver lacks capability, as alluded to in the "quality drivers" comment, the missing features need to be identified. Until we have that list of non-functioning or limited-function devices, there isn't much anyone can do. It is also important to keep reporting, release after release, which features or devices are not working: the community moves so quickly that something broken one day can be fixed the very next, so periodic reassessment of each device's functionality is necessary.
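For anyone willing to report a problem device, the identifiers are easy to harvest: `lspci -nn` and `lsusb` print the numeric vendor and device pairs directly. As a small sketch, here is how one might pull the PCI ID out of an lspci line (the device line shown is hypothetical):

```shell
# On a live system, these print the identifiers a driver writer needs
# (from the pciutils and usbutils packages):
#   lspci -nn    # PCI devices with [vendor:device] IDs
#   lsusb        # USB devices with vendor:product IDs

# Extracting the vendor:device pair from a sample lspci -nn line
# (hypothetical device; the bracketed hex pair is the part to report):
line='02:00.0 Network controller [0280]: Example Corp WiFi Adapter [14e4:4311]'
id=$(echo "$line" | grep -oE '\[[0-9a-f]{4}:[0-9a-f]{4}\]' | tr -d '[]')
echo "$id"   # -> 14e4:4311
```

Including that one hex pair in a bug report or survey comment turns a vague "WLAN doesn't work" into something a driver writer can actually act on.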

Oh, and before someone points out the obvious ones: yes, nVidia now has the nouveau driver, ATI has started working with the community to support their graphics devices, Intel is working towards full support of their hardware, most SCSI and SAS devices are currently supported, and there are still some rough edges with MultiPath support that are in progress. But what else is broken? Surely these few devices cannot be the substantial reason that device support is constantly at the top of the list? What other devices lack driver support? Expose them! That is the best way to get those devices supported!


Wednesday, October 10, 2007

Ballmer out fear-mongering again

Ballmer is out telling the Microsoft faithful that Red Hat customers owe Microsoft compensation. Of course, he's not clear on what people owe Microsoft for, other than some vague allegation of "intellectual property." The details are more like a Hitchcock film - cue the ominous music and keep the camera focused where you can't actually see anything. That always brings out the greatest fear in humans - the fear of something truly frightening that you can't actually see but are only led to believe must exist. But, just as in Hollywood, there seems to be nothing behind these allegations. Slide the camera down a few feet and you'll see the fake blood, the poor props, and the actors with no wounds - much like the scene Microsoft has set up. Some props, some large scary numbers, a lot of repeating how scary it is, and of course a lot of press to propagate a lot of nothing.

This big number came out originally around May and sounded oh so frightful. What very few people looked at was the density of patents around a single piece of technology. For instance, 15 patents were supposedly implemented amongst several email programs. It could have been 15 email programs, each of which *might* have implemented something that bore some resemblance to one of the alleged patents. And nothing looks at the scope of those alleged patents - are they for turning a high-priority message's subject bright red? That's easy to work around or fix. Is it for a mailbox flag being raised when new mail arrives? If MS patented that, was there prior art back at the beginnings of the internet, before MS was a player in the email field? And let's assume there *is* some patent that might, just might, be implemented by some mailer - a big IF from what I've seen thus far. Do you know what would happen in the Open Source community? Step 1: Remove the feature. Step 2: Implement it better.

So, what are MS's options with this dubious set of FUD (Fear, Uncertainty and Doubt) they are creating? Maybe they should sue some rich people who are using Linux, as the good Mr. Ballmer suggests. Oh, wait - most of those rich Linux customers are also running Microsoft products. What kind of PR does *that* make, when you sue your own customers? I bet that's a great way to make them run from your company. Others have speculated that MS could simply ignore those patents and IP. Of course, MS could never do something so simple, could they? No, what they are doing, and will continue to do, is play the ominous Hitchcock music, wave a lot of props, jump around in the press exclaiming how scary this should be for everyone, and then extort "insurance" money in the form of cross-licensing to put to rest their competitors' fears - yes, the same fears they helped create, which have no substantiated basis.

I kind of wish someone would just tell MS to put up (the goods) or shut up. Playing on people's fears without any basis is simply immature, and it will continue to cause their reputation to be affiliated with the dark side of the force.


Wednesday, October 03, 2007

Security: Objects vs. Names

Emily writes about the annual kerfuffle known as LSM vs. SELinux, and I've been reading the coverage on slashdot and kerneltrap with some amusement lately. Last year this was again coming to a head around the time of the Linux Kernel Summit, and one of the topics was to have the AppArmor/LSM folks and the SELinux folks get together to talk about resolving their differences. In the end they were unable to, largely because LSM and SELinux take two different approaches to security - both valid, but roughly incompatible.

SELinux was started by the NSA and provides an appropriate level of security for people who are really paranoid about the security of specific "objects" - files, directories, pipes, shared memory, etc. The SELinux people strongly believe that the goal is to protect an object by all means necessary, and all permissions are ultimately associated with that object. This allows for both Discretionary Access Controls (where I can give you access to my directory or file with chmod(1) or some form of Access Control List (ACL)) and Mandatory Access Controls (someone gave me a file which requires certain permissions, and I cannot open that file up to someone who does not have those permissions - think of it like classified documents). With SELinux, all objects have their own security traits, completely unrelated to the name or location of the file.
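To make the discretionary side concrete, here is a quick sketch: the owner of a file hands out access at their own discretion with chmod(1), while on an SELinux system the same file also carries a security label that travels with the object rather than its name (the label shown in the comment is a hypothetical example):

```shell
# Discretionary access control: the owner decides who gets in.
f=$(mktemp)
chmod 640 "$f"            # owner read/write, group read, others nothing
stat -c '%a' "$f"         # -> 640

# Mandatory access control (SELinux): the object also carries its own
# security label, enforced regardless of what the owner would permit, e.g.:
#   ls -Z "$f"   ->   unconfined_u:object_r:user_tmp_t:s0
rm -f "$f"
```

The chmod grant is entirely at my discretion; the SELinux label and the policy behind it are not mine to waive.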

This obviously sounds like good stuff for security; however, the downside is that the implementation of object-based security has been more invasive on the kernel side, very difficult to use and configure on the system administrator side, and started life in early deployments as the feature that was always shut off to get it out of the way. In other words, it often crossed the line between usefulness and security. This is where that age-old comment comes in: "the only secure system is the one that is unplugged, in a locked room, buried under ten feet of concrete." Usability typically drops as security goes up.

LSM provides another mechanism for implementing security. It starts with the premise that every object has a name. That seems simple enough, right? In fact, most of us don't talk about "the file that holds all of the kernel messages"; we refer to /var/log/messages (or your system's equivalent) as the place where logs are stored. Pretty much every interesting object has a name - and probably a name in the filesystem (there are some exceptions, but they aren't important here). LSM works from the premise that system administrators know the names of the files they want to protect or manage. Therefore, why not provide a security mechanism which takes a filesystem path name as its primary "handle" for accessing an object? From that small shift in fundamental focus, SELinux and LSM diverge, while both providing very strong, or at least sufficiently strong, system security mechanisms.
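The path-based premise makes a policy read like a list of filenames. A minimal AppArmor-style profile sketch (the program and paths here are hypothetical, chosen only to show the flavor) looks like:

```
# Hypothetical AppArmor-style profile: confine /usr/bin/logwatcher by pathname
/usr/bin/logwatcher {
  /var/log/messages        r,   # may read the system log
  /var/log/auth.log        r,   # may read the auth log
  /var/run/logwatcher.pid  rw,  # may manage its own pid file
}
```

An administrator can read and audit that policy with nothing more than knowledge of the filesystem layout, which is exactly the usability argument the path-based camp makes.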

A more interesting difference between the two shows up when one wants to audit all actions on a file. Let's say I'm monitoring /var/log/auth.log to see who becomes root on my system. In SELinux, I might monitor all reads and writes to the object currently named /var/log/auth.log. But what if someone moved /var/log/auth.log to /var/log/auth.hide? SELinux would continue to monitor the moved object for changes, even though it is no longer the file receiving the current reads and writes. Now suppose some black hat, at the end, removed the new /var/log/auth.log and put /var/log/auth.hide back as the log? The entries recording the black hat's activity might appear to have simply vanished, and because we monitored a particular object rather than the name, the audit logs may not show the problem. A path-based LSM module, by contrast, would track actions on the path /var/log/auth.log no matter which object it pointed to. Of course, this case is a bit contrived, but it shows the point. In reality, we'd monitor all reads, writes, moves, truncates, creates, etc. of the object in question, and with either SELinux or a path-based LSM module we'd be able to track the black hat.
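The same scenario can be expressed as an audit rule. In auditd's /etc/audit/audit.rules syntax, a watch like the sketch below is path-based: it matches accesses through the name /var/log/auth.log, whatever object that name resolves to at the time (the key name is made up for illustration):

```
# /etc/audit/audit.rules (sketch)
# Watch reads, writes, and attribute changes on the *name* /var/log/auth.log;
# if the file behind the name is swapped out, accesses through this path
# still match the watch.
-w /var/log/auth.log -p rwa -k authlog-watch
```

Searching the audit log by that key (ausearch -k authlog-watch) then shows who touched the name, regardless of inode shuffling.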

One final point regarding the pluggable debate: SELinux provides a single, fixed mechanism with some policy extensions. If it doesn't have the right capabilities or permission controls, you simply can't monitor something. LSM provides a pluggable/semi-stackable interface (I'll stay out of the politics of that one for the moment), which means you can create more complex rules for denying access to files. One part of the debate is whether LSM should have that flexibility - what if you stack a function which doesn't just permit or deny access, but adds extra functionality? Who monitors that functionality to ensure it does not corrupt the kernel, violate some locking hierarchy, or change an existing priority scheme by allowing access to a file inappropriately? The problem with flexibility is that it is like rope: you can use it to hang yourself, or to build a more comfortable hammock. There is more concern about people using hooks to do the former (as in Spider-Man: "with great power comes great responsibility").

But yet again (and probably indefinitely), *both* security approaches are here to stay, while people experiment with both until some new mechanism emerges that is better than either.