I too forgot to mention that my accommodation at GUADEC was sponsored by the GNOME Foundation. Thanks guys!


Today I came across this blog post of your design team. In the context of the recent criticism you had to endure regarding upstream contributions, I am disappointed that you have not bothered to ping anybody from the upstream freedesktop sound theme (for example, yours truly) about this in advance. No, you went and cooked your own soup. What really disappoints me is that we have asked multiple times for help, support and contributions for the sound theme, with only very little success, and I even asked some of the Canonical engineers about this topic, in particular regarding some clarifications of the licensing of the old Ubuntu sound theme. I am sorry, but if you had listened, or looked, or asked, you would have been aware that we were looking for somebody to maintain this actively, upstream -- and because we didn't have the time to maintain this we only did the absolute minimum work necessary, and we only maintain this ourselves because no one else wanted to.
It should be upstream first, downstream second.
I am sorry if I sound like an always-complaining prick to you. But believe me, I am not saying this because I wouldn't like you or anything like that. I am just saying this because I believe you could do things oh so much better.
Please fix this. We want your contributions. Upstream.
I guess it's a bit like beating a dead horse, but I had a good laugh today when I learned that I alone have contributed more to GNOME than the entirety of Canonical, with only 800 additional commits separating me from being more awesome than Nokia.
/me is amused
Here's a podcast interview with yours truly where I speak a little about PulseAudio and systemd. Seek to 64:43 for my lovely impetuous voice. There's also an interview with Owen just before mine.
The Call for Papers for the Linux Plumbers Conference (LPC) in November in Cambridge, Massachusetts is ending soon, on July 19th 2010 (that's the upcoming Monday!). It's a conference about the core infrastructure of Linux systems: the part of the system where userspace and the kernel interface. It's the only conference where the focus is specifically on getting together the kernel people who work on the userspace interfaces and the userspace people who have to deal with kernel interfaces. It's supposed to be a place where all the people doing infrastructure work sit down and talk, so that both parties understand better what the requirements and needs of the other are, and where we can work towards fixing the major problems we currently have with our lower-level infrastructure and APIs.
The two previous LPCs were hugely successful (as reported on LWN on various occasions), and this time we hope to repeat that.
As in previous years, I will be running the Audio conference track of LPC, this time together with Mark Brown. Audio infrastructure on Linux has been steadily improving all over the place in the last few years, but there's still a lot to do. Join us at the LPC to discuss the next steps and help improve Linux audio further! If you are doing audio infrastructure work on Linux, make sure to attend and submit a paper!
Sign up soon! Send in your paper quickly! Only three days left to the end of the CFP!
(I am also planning to do a presentation there about systemd, together with Kay. Make sure to attend if you are interested in that topic.)
See you in Boston!
I forgot to mention another central problem in my blog story about file locking on Linux:
Different machines have access to different features of the same file system. Here's an example: let's say you have two machines in your home LAN. You want them to share their $HOME directory, so that you (or your family) can use either machine and have access to all your (or their) data. So you export /home on one machine via NFS and mount it from the other machine.
So far so good. But what happens to file locking now? Programs on the first machine see a fully-featured ext3 or ext4 file system, where all kinds of locking work (even though the API might suck, as mentioned in the earlier blog story). But what about the other machine? If you set up lockd properly, then POSIX locking will work on both. If you didn't, one machine can use POSIX locking properly and the other cannot. And it gets even worse: as mentioned, recent NFS implementations on Linux transparently convert client-side BSD locking into POSIX locking on the server side. Now, if the same application uses BSD locking on both the client and the server side from two instances, they will end up with two orthogonal locks, and although both sides think they have properly acquired a lock (and they actually did), they will overwrite each other's data, because those two locks are independent. (And one wonders why the NFS developers implemented this brokenness nonetheless...)
This basically means that locking cannot be used unless it is verified that everyone accessing a file system can make use of the same file system feature set. If you use file locking on a file system you should do so only if you are sufficiently sure that nobody using a broken or weird NFS implementation might want to access and lock those files as well. And practically that is impossible. Even if fpathconf() was improved so that it could inform the caller whether it can successfully apply a file lock to a file, this would still not give any hint if the same is true for everybody else accessing the file. But that is essential when speaking of advisory (i.e. cooperative) file locking.
And no, this isn't easy to fix. So again, the recommendation: forget about file locking on Linux, it's nothing more than a useless toy.
Also read Jeremy Allison's (Samba) take on POSIX file locking. It's an interesting read.
It's amazing how far Linux has come without providing proper file locking that works and is usable from userspace. A little overview of why file locking is still in a very sad state:
To begin with, there's a plethora of APIs, and all of them are awful: POSIX locking via fcntl(F_SETLK) (locks belong to the process, not the file descriptor, and are silently dropped as soon as any fd referring to the file is closed), BSD flock() (no byte-range locks, and unreliable over NFS), lockf() (on Linux merely a wrapper around the POSIX locks), and kernel-enforced mandatory locking (racy, disabled by default, best avoided).
File locking on Linux is just broken. The broken semantics of POSIX locking show that the designers of this API apparently never tried to actually use it in real software. It smells a lot like an interface that kernel people thought made sense but that in reality doesn't when you try to use it from userspace.
Here's a list of places where you shouldn't use file locking due to the problems shown above:

- If you want to lock a file in $HOME, forget about it, as $HOME might be on NFS, and locks are generally not reliable there. The same applies to every other file system that might be shared across the network.
- If the file you want to lock is accessible to more than your own user (i.e. has an access mode > 0700), forget about locking; it would allow others to block your application indefinitely.
- If your program is non-trivial, or threaded, or uses a framework such as Gtk+ or Qt, or any of the module-based APIs such as NSS, PAM, ..., forget about POSIX locking.
- If you care about portability, don't use file locking.
Or to turn this around: the only case where it is kind of safe to use file locking is in trivial applications where portability is not key, using BSD locking on a file system that you can rely on being local, on files inaccessible to others. Of course, that doesn't leave much, except private files in /tmp for trivial user applications.
Or in one sentence: in its current state Linux file locking is unusable.
And that is a shame.
When programming software that cooperates with software running on behalf of other users, other sessions or other computers, it is often necessary to work with unique identifiers. These can be bound to various hardware and software objects as well as lifetimes. Often, when people look for such an ID to use, they pick the wrong one because the semantics and lifetime of the IDs are not clear. Here's a little (incomplete) list of IDs accessible on Linux and how you should or should not use them.
There are various other hardware IDs available, many of which you may discover via the ID_SERIAL udev property of various devices, such as hard disks and similar. They all have in common that they are bound to specific (replaceable) hardware, are not universally available, are often filled with bogus data, and are random in virtualized environments. In other words: don't use them, don't rely on them for identification unless you really know what you are doing, and in general they do not guarantee what you might hope they guarantee.
Linux offers a kernel interface to generate UUIDs on demand, by reading from /proc/sys/kernel/random/uuid. This is a very simple interface to generate UUIDs. That said, the logic behind UUIDs is unnecessarily complex and often it is a better choice to simply read 16 bytes or so from /dev/urandom.
And the gist of it all: Use /var/lib/dbus/machine-id! Use /proc/self/sessionid! Use /proc/sys/kernel/random/boot_id! Use getuid()! Use /dev/urandom! And forget about the rest, in particular the host name, or the hardware IDs such as DMI. And keep in mind that you may combine the aforementioned IDs in various ways to get different semantics and validity constraints.
On popular request, here are my (terse) slides from LinuxTag on systemd.
This upcoming week I'll be giving two talks at LinuxTag 2010 at the Berlin Fair Grounds. One of them, about systemd, was only added to the schedule today. systemd has never been presented in a public talk before, so make sure to attend this historic moment... ;-). Read up on what has been written about systemd so far, so that you can ask the sharpest questions during my presentation.
My second talk might cover stuff a little less reported in the press, but still very interesting: Surround Sound in GNOME.
See you at LinuxTag!