Monday, 30 June 2008

I'm confused about hardy heron and default applications

Back in the day you told your Linux system which applications you wanted to use with environment variables, things like:
export EDITOR=/usr/bin/emacs
Then along came the wonderful debianness of the apt-family and the alternatives system.
update-alternatives --config vi
Now this system too is being superseded by various desktop-level settings, leaving me uncertain where to set things. What I'm trying to do is:
  • have Sound Juicer and not Music Player (Rhythmbox) launched when a CD is inserted. There is a "Multimedia" entry under the "Preferred Applications" menu option, but this seems to be about opening files, not responding to newly-mounted media, and Sound Juicer is not listed as an option. There doesn't seem to be anything about CDs under "Removable Drives and Media Preferences" (although this is where the settings are that automatically load F-Spot when I attach my camera, which seems like the same kind of thing).
  • configure which applications I can launch on the .cr2/TIFF/Canon RAW files produced by my digital camera. I want the same applications to appear in both the file browser and F-Spot (which look like they're presenting the same interface but apparently aren't). ufraw seems to be the tool of choice here (either standalone or as a GIMP plugin), but I'd like to pass it some command-line arguments. I can find no entry for this under the "Preferred Applications" menu option.
There are lots of menus with "Help" as an option, but very few of them actually seem to be much help.
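For the second itch, one route that sidesteps the menus entirely is the freedesktop.org MIME machinery that both Nautilus and F-Spot sit on. This is only a sketch: the ufraw-custom.desktop name and the --wb=camera argument are my own examples, not anything blessed by ufraw or GNOME.

```shell
# Register a ufraw wrapper, with extra command-line arguments, as a
# handler for Canon raw files. The entry name and the --wb=camera
# argument are examples only.
mkdir -p "$HOME/.local/share/applications"

cat > "$HOME/.local/share/applications/ufraw-custom.desktop" <<'EOF'
[Desktop Entry]
Type=Application
Name=UFRaw (custom args)
Exec=ufraw --wb=camera %f
MimeType=image/x-canon-cr2;image/tiff;
EOF

# Point the MIME type at the new entry (needs xdg-utils installed):
if command -v xdg-mime >/dev/null 2>&1; then
    xdg-mime default ufraw-custom.desktop image/x-canon-cr2 || true
fi
```

In principle any application reading the shared MIME database should then offer the wrapper in its "Open With" list, which is exactly the both-file-browser-and-F-Spot behaviour I'm after.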

Mike O'Connor at Friday drinks

Mike O'Connor
Originally uploaded by Stuart Yeates
I took some photos at Friday drinks, trying to do the whole wide-aperture-to-isolate-visual-elements thing. I wasn't really aware of just how much the effect depends on the relative positions of the photographer, subject and background.

Some of them turned out better than others.

Sunday, 29 June 2008

What should the ohloh homepage look like?

In a previous post I criticised the ohloh homepage for being completely useless for current users of the site. This was somewhat unfair, since I provided no concrete constructive suggestions as to what should be on the page. This blog post, hopefully, fixes that.

To my mind there are three classes of information that should be on the homepage: (a) things that lots of users are confused about, (b) things that are 'new' (think customised RSS feeds) and (c) combinations of both.

Finding out what people are confused about is easy, just look in the forums, where people are most confused about:

  1. changes in their kudos
  2. why their enlistment hasn't been updated
  3. why their version control system of choice isn't supported

The list of 'new' things is:

  1. new / updated projects
  2. new / updated users
  3. new / updated enlistments
  4. new forum posts
  5. new RSS items in projects RSS feeds

Of these, (1), (3) and (5) can be filtered by the user's connection to the project (contributor/user/none).

So the trick now is to find combinations which help users understand what's going on and encourage users to engage with ohloh and the projects.

Idea X: A feed of updated enlistments of projects a user is a contributor to or user of:

  • Project A's enlistment updated 24th June 2008 at 14:50 GMT. A, B and C are the biggest committers to this project, which is in Java and XML. Last updated 1st Feb 1970.
  • Project B's enlistment updated 24th June 2008 at 14:50 GMT. D, E and F are the biggest committers to this project, which is in C and shell script. Last updated 1st Feb 1970.
  • Project C's enlistment failed at or about revision 12345. Click here for instructions on what to do about this. Last updated 1st Feb 1970.
  • ...

This not only tells the user the status of their projects, but also that enlistments are being processed, the expected time between each processing of enlistments, that some processing fails and that there's a link to find out more information. Such a feed also focuses attention on the processing of enlistments---which is the heart of ohloh and the key differentiating factor that separates ohloh from 15 billion other open source sites.

Idea Y: A mixed feed of upstream bugs that affect ohloh performance and functionality:

  • Ticket "support for .xcu file format" updated in ohcount by user "batman"
  • Post "jabber message length" updated in Help! forum by user "someone else"
  • Ticket "svn branch support" updated in ohloh by user "robin"
  • Ticket "bzr support in ohloh" created in ohloh by user "joker"
  • Post "jabber message length" created in Help! forum by user "someone"

This lets people keep up with the status of ohloh progress on issues such as the implementation of branch support for svn and support for hg.

Monday, 23 June 2008

New ohloh look and feel

ohloh have changed their look and feel, and I've got to say I hate it.

Once you're logged in, almost nothing above the scroll cut on the front page is useful---we already know what ohloh is and don't need bandwidth-hogging ads to tell us. What we need are deep links into new stuff---projects, users and forum posts.

How about logged in users see content rather than ads on the homepage?

Sunday, 15 June 2008

Kernel Hell and what to do about it

I've been in kernel hell with my home system for the past couple of days. What I want to build is a custom kernel that'll do xen, vserver, vmware, selinux, support both my wireless chipsets and support my video chipset. Ideally it should be built the Debian/Ubuntu way, so it just works on my Ubuntu Hardy Heron system.

So far I've had various combinations of four or five out of six working at once.

I'm not a kernel hacker, but I have a PhD in computer science, so I should be able to at least make progress on this, and the fact that I can't is very frustrating. At work I grabbed a kernel off a co-worker, but it wasn't built the Debian/Ubuntu way.

Standing back and looking at the problem, there seem to be two separate contributing factors:

  1. There are a huge number of organically-grown structural layers. I count git, the kernel build scripts, make, Linus's release system, the Debian kernel building system and the Ubuntu kernel building system. I won't deny that each of these serves a purpose, but that's six different points at which each of the six things I'm trying to make work can begin its explanation of how to make it work, and six different places for things to go wrong.
  2. There are many Linux distributions, and each of the things I'm trying to get working caters to a different set of them.
In many ways the distribution kernel packagers are victims of their own success: most Ubuntu, Debian and RedHat kernels just work, because their packagers keep adding more and more features and more and more drivers to the default kernels. With the default kernels working for so many people, fewer and fewer people build their own kernels, and the pool of knowledge shrinks. The depth of that knowledge increases too, with each evolution of the collective build system.

Wouldn't it be great if someone (ideally under the auspices of the OSDL) stepped in and said "This is insane, we need a system to allow users to build their own kernels from a set of <git repository, tag> pairs and a set of flags (a la the current kernel config system). It would download the git repositories and sync to the tags and then compile to the set of flags. Each platform can build their own GUI and their own backend so it works with their widget set and their low level architecture, but here's a prototype."

The system would take the set of repositories and tags in those repositories and download the sources with git, merge the results, use the flags to configure the build and build the kernel. Of course, sometimes the build won't work (in which case the system sends a copy of the config and the last N lines of output to a central server) and sometimes it will (in which case the system sends a copy of the config and an md5 checksum of the kernel to a central server and optionally uploads the kernel to a local repository), but more than anything it'll make it easy and safe for regular users to compile their own kernels. The system would supplant "building kernels the Debian way" or "building kernels the RedHat way" and enable those projects working at the kernel level to provide meaningful support and help to their users on distributions other than slackware.
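The front half of the workflow described above is already scriptable by hand today. Here's roughly what it looks like for a single <git repository, tag> pair, built the Debian/Ubuntu way with kernel-package; the repository URL, tag and revision are examples, and the step that merges in the other trees (the actual hard part) is waved away in a comment:

```shell
#!/bin/sh
# Manual sketch of the proposed <repository, tag> build, done the
# Debian/Ubuntu way. Repository URL, tag and revision are examples.
set -e

git clone git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git
cd linux-2.6
git checkout -b custom v2.6.25        # sync to the chosen tag

# ...this is where the xen/vserver/wireless trees would be merged in...

make oldconfig                        # set the flags; prompts only for new options

# kernel-package turns the tree into an installable .deb:
make-kpkg --initrd --revision=custom.1 kernel_image
sudo dpkg -i ../linux-image-2.6.25_custom.1_*.deb
```

The proposed system would essentially automate this loop across several repositories, with the merge step and the reporting back to a central server being the new parts.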

Potential benefits I can see are:

  1. increasing the number of crash-tolerant users willing to test the latest kernel features (better testing of new kernels and new features, which is something that's frequently asked for on lkml)
  2. easing the path of new device drivers (users get to use their shiny new hardware on linux faster)
  3. increasing the feedback from users to developers, in terms of which features people are using/interested in (better, more responsive, kernel development)
  4. reducing the reliance on Linux packagers to release kernels in which an impossible-to-test number of features work flawlessly (less stressed Debian/Ubuntu/RedHat kernel packagers)
  5. easing the path to advanced kernel use such as virtualisation

You know the great thing about that list? Everyone who would need to cooperate gets some benefit, which means that it might just happen...

Macrons and URLs

Macrons are allowed in the path part of URLs, but not currently in the machine-name part (or at least, not yet), so a macron in a path segment (…āori-papakupu) is fine, but one in a hostname (http://www.taiuru.Mā…) is not.
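The distinction is easy to demonstrate from the command line (a sketch using Python as a calculator; the hostname is an invented example): in the path a macron is just percent-encoded UTF-8, while in the machine name each label has to go through the separate IDN/punycode encoding.

```shell
# Path part: percent-encode the UTF-8 bytes of the macron (RFC 3986)
python3 -c 'import urllib.parse; print(urllib.parse.quote("māori-papakupu"))'
# prints m%C4%81ori-papakupu

# Machine-name part: each label must be punycode-encoded instead (IDNA)
python3 -c 'print("māori.example.nz".encode("idna").decode())'
# prints xn--mori-qsa.example.nz
```

Note that the two encodings are not interchangeable: a percent-escaped hostname is simply invalid, which is why macrons in machine names need the whole IDN apparatus.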

A review of how lots of programs handle macrons is at

Saturday, 14 June 2008

Exporting firefox 3.0 history to selenium

In the new Firefox 3.0 they've completely changed the way history is recorded, using an SQL engine (SQLite) to record it (details here).

I wrote a quick hack to export the history as a series of Selenium tests:

sqlite3 .mozilla/firefox/98we5tz3.default/places.sqlite 'select * from moz_places' | awk -F\| '{print "<tr><td>open</td><td>"$2"</td><td></td></tr>" }'

Firefox keeps the database locked, so you'll need to close Firefox (or take a copy of the file) first. Cut and paste the results into an empty Selenium test.

Obviously, your profile will have a different name.
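Wrapped in a function, the same hack is a little easier to reuse. This is a sketch: the function name is mine, and as above the profile directory name will differ per installation.

```shell
# export_history DB: print one Selenium "open" step per URL in a
# copy of Firefox's places.sqlite.
export_history () {
    sqlite3 "$1" 'select url from moz_places' |
        awk '{print "<tr><td>open</td><td>" $0 "</td><td></td></tr>"}'
}

# Usage: copy the database out from under Firefox's lock first, e.g.
#   cp ~/.mozilla/firefox/98we5tz3.default/places.sqlite /tmp/places.sqlite
#   export_history /tmp/places.sqlite
```

Selecting the url column by name, rather than cutting field 2 out of a `select *` with | as the separator, also keeps the output correct for URLs that happen to contain a | character.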