a graphical web of trust

I recently stumbled upon sig2dot, a gpg/pgp keyring graph generator. It seems to have been around for some time, but I’d never come across it before. It can be used to generate a graph of all of the signature relationships in a GPG/PGP keyring, and, like other visualisation tools, it can give new insight into relationships between objects.

The sig2dot program itself is available in the debian/ubuntu repositories in the package called “signing-party”. But unless you want to install a shed load of other unnecessary cruft along with it (exim? for god’s sake, why?), I recommend you simply pull the perl code direct from the author’s site. Along with the sig2dot program itself, you will need “neato” from the graphviz package and “convert” from the wondrous imagemagick package suite. If you don’t already have those installed then it is pretty safe to pull them from your distro’s package repository.

That done, try the following:

first create an ascii graphviz dot file ready for neato

$ gpg --list-sigs --keyring ~/.gnupg/pubring.gpg | sig2dot.pl > ~/.gnupg/pubring.dot

(that is “minus minus list-sigs” and “minus minus keyring”) now convert to a postscript file

$ neato -Tps ~/.gnupg/pubring.dot > ~/.gnupg/pubring.ps

before using imagemagick to convert to a png graphic

$ convert ~/.gnupg/pubring.ps ~/.gnupg/pubring.png
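
If you find yourself doing this regularly, the three steps chain neatly into a trivial script (a sketch only – it assumes the default keyring location and that sig2dot.pl is on your PATH):

#!/bin/sh
# regenerate the keyring graph
gpg --list-sigs --keyring ~/.gnupg/pubring.gpg | sig2dot.pl > ~/.gnupg/pubring.dot
neato -Tps ~/.gnupg/pubring.dot > ~/.gnupg/pubring.ps
convert ~/.gnupg/pubring.ps ~/.gnupg/pubring.png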

Those of you with gpg keyrings may wish to try it out (and no. I’m not going to show you mine).

Permanent link to this article: https://baldric.net/2010/09/12/a-graphical-web-of-trust/

kseniya simonova

This has absolutely nothing to do with my usual topics but I make no apology for posting this because the artistry is stunningly beautiful. I was sent a link to Kseniya Simonova’s sand art by a correspondent on a mailing list I subscribe to. Apparently the artist is telling the story of a ukrainian family before, during and after the bombing of their town in the second world war.

I understand that Ms Simonova was a contestant on Ukraine’s version of “Britain’s got talent”. This lady has real talent, unlike some of the contestants I have seen on the UK’s version. It looks as if Ukrainian television may be in a better place than ITV.

Permanent link to this article: https://baldric.net/2010/09/04/kseniya-simonova/

it’s not that I’m anti google

I’m just pro privacy. And google just happens to be one of the worst offenders when it comes to breaches of my privacy. El Reg yesterday ran an article pointing to the consumerwatchdog.org ad depicting Eric Schmidt as a “privacy pervert”. Deliciously, that ad is hosted on youtube.

But consumerwatchdog have long campaigned about google’s attempts to trample on users’ privacy. The video below shows how google’s chrome browser fails to protect the user’s privacy even when “incognito mode” is used. Incidentally, the video also shows how google’s javascript based, supposedly helpful, “stem searching” capability effectively adds a keystroke sniffer to your PC. Note that this capability is not specific to chrome; it happens in whatever browser you use whenever you search with google.

Be careful out there.

Permanent link to this article: https://baldric.net/2010/09/04/its-not-that-im-anti-google/

phone home

Google’s chrome browser first appeared back in 2008, since when many commentators have sung its praises. Apparently it is “blindingly fast” (well, let’s face it, firefox can be a tad slow, particularly if loaded down with a swathe of plugins), “clean”, and “simple”. Until recently I had not tried chrome (for some fairly obvious reasons), but I thought it might be interesting to fire up a copy in a VM just to see what all the fuss was about. So I did. And whilst I was doing that I ran tcpdump and etherape to see what was happening under the hood. What I found intrigued me.

First I spun up a completely new clean install of ubuntu in a virtualbox VM. Then I downloaded the latest chrome .deb from the google site and installed it. Before launching chrome for the first time in the guest system I fired up the sniffers in the host system. This is what I found:

image of etherape capture

Note that etherape shows five connections which are instantly recognisable as going to google servers (the 1e100.net domain), three to verisign, and a further three to IP addresses with no associated names (these appear to be either youtube or google image cache machines – also owned by google of course). You can ignore the rlogin.net servers, they are all mine.

A quick look at the tcpdump record shows that the verisign connections all check for SSL certificates and/or revocations – perfectly sensible and understandable. But the google connections are less illuminating until you follow the tcp streams. Two of the connections are SSL encrypted so it is not possible to be certain what is carried in them, but they appear to be certificate exchanges (or updates), a third gets a certificate revocation list whilst two more get simple html or xml schema probably associated with building the welcome screen (I didn’t explore in detail). One connection gets a shockwave flash file and two get and set cookies in the youtube domain. At least one of the google connections also gets and sets cookies in the google domain.
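
If you want to repeat the exercise, a capture along these lines gives you plenty to work from (the interface name below is an assumption – check which interface virtualbox is actually using on your host first):

# capture everything the VM sends while chrome starts up for the first time
sudo tcpdump -i vboxnet0 -s 0 -w chrome-firstrun.pcap
# then review the web connections at leisure
tcpdump -nn -r chrome-firstrun.pcap 'tcp port 80 or tcp port 443'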

Now none of this is inherently suspicious (well, alright, it might be) but the point is that all this happens upon first connection and without reference to the user. And if you don’t want google (or youtube) cookies on your machine you will have to delete them when first you use the browser. I have an instinctive (OK, partly irrational) dislike of software which “phones home” without telling me – and chrome does that on quite an impressive scale. I’m not sure what would happen in prolonged usage of the browser because I wasn’t impressed enough to want to use it in anger.

I’ve trashed the VM of course.

Permanent link to this article: https://baldric.net/2010/08/29/phone-home/

update to autossh – or how ServerAliveInterval makes this unnecessary

I had a couple of comments on my earlier post about autossh which suggested that I should look at alternative mechanisms for keeping my ssh tunnel up. Rob in particular suggested that setting “ServerAliveInterval” should work. Oddly I had tried this in the past whilst trying out various configuration options and I swear it didn’t work for me. But since the autossh mechanism felt inelegant I thought I’d revisit my ssh_config file as Rob suggested. And indeed setting ServerAliveInterval to 300 (i.e. 5 minutes) solved my tunnel drop problem. I’d guess that other intervals of less than 1 hour would work equally well, but I haven’t checked.
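
For anyone wanting to do the same, the relevant stanza in ~/.ssh/config looks something like this (the host name is only illustrative – use whatever you call your tunnel endpoint):

Host tornode
    ServerAliveInterval 300
    ServerAliveCountMax 3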

I have no idea why my earlier experiments failed.

Permanent link to this article: https://baldric.net/2010/08/27/update-to-autossh-or-how-serveraliveinterval-makes-this-unnecessary/

they are taking over the entire net

Some time ago I disabled my wp-recaptcha plugin because it had the unfortunate side effect of marking all comments as spam. I don’t have a particularly high comment rate, but the ones I do get, and which get past akismet, are usually OK. Apparently a flaw in version 2.9.6 surfaced when wp-recaptcha was used in conjunction with wordpress 2.9.2. I obviously got caught with this when I updated my wordpress installation so when I noticed the problem I just disabled wp-recaptcha. Of course I have since updated wordpress again and I noticed that the plugin had also been updated to 2.9.7 so I thought I would upgrade and reactivate. Upon doing so, however, I discovered that my public/private key pair had disappeared as a result of the deactivation and I was invited to apply for a new set. OK, no problem, happy to do so but a bit peeved that the keys seemed to be deleted when the plugin is deactivated. This strikes me as unnecessary.

But, and this is now a big but, this is what I was greeted with when I attempted to get a new set of keys:

“reCAPTCHA is now part of Google. In order to use it, you must create a new Google Account or sign in with an existing Google Account.
If you are a previous user of reCAPTCHA, you can migrate your old account after signing in with a Google Account.”

Yikes! It seems that re-captcha is now part of google. The chocolate factory have bought yet another piece of internet infrastructure which will no doubt feed the maw of the advertising goliath with statistics gained from my site.

Well bollocks to that. It can stay disabled. I’ll look for another captcha mechanism.

Permanent link to this article: https://baldric.net/2010/08/02/they-are-taking-over-the-entire-net/

autossh – or how to use tor through a central ssh proxy

Since I first set up a remote tor node on a VPS about this time last year, I have played about with various configurations (and used different providers) but I have now settled on using two high bandwidth servers on different networks. One (at daily.co.uk) allows 750 Gig of traffic per month, the other (a new player on the block called ThrustVPS) allows 1000 Gig of traffic. These limits are remarkably generous given the low prices I pay (the 1000 Gig server cost me £59.42, inc VAT, for a year. OK, that was a special offer, but they are still good value at full price) and they allow me to provide two reasonably fast exit servers to the tor network. Both suppliers know that I am running tor nodes and are relaxed about that. Some suppliers are less so.

I fund fast exit nodes as a way of paying something back to the community – but as I have pointed out before, they also allow me to have a permanent entry point to the tor network which I can tunnel to over ssh, thus protecting my own tor usage from snoopers. For some time I used a configuration based on tyranix’s notes documented on the tor project wiki. But eventually I found that to be rather limiting because it meant that I had to remember to run an ssh listener on each machine I used around the house (my laptop, netbook, two desktops, my wife’s machine etc) and to configure the browser settings as necessary. Then I hit upon the notion of centralising the ssh listener on one machine (I used the plug) in a sort of ssh proxy configuration. This meant that I only had to configure the local browsers to use the central ssh listener as a proxy and everything else could be left untouched. It also has the distinct advantage that my wife no longer has to worry about anything more complex than switching browser when she wants to use tor.

But I hit a snag when initially setting up ssh on the plug. For some reason (which I have never successfully bottomed out) the ssh process dies after an hour of inactivity. This is not helpful. Enter autossh. Using autossh means that the listener is restarted automagically whenever it dies so I can be confident that my proxy will always be there when I need it.

Here’s the command used on the plug to fire up the proxy:

autossh -M 0 -N -C -g -l user -f -L 192.168.57.200:8000:127.0.0.1:8118 tornode

That says:

-M 0 – turn off monitoring so that autossh will only restart ssh when it dies
-N – do not execute a command at the remote end (i.e. we are simply tunnelling)
-C – compress traffic
-g – allow remote hosts to connect to this listener (I limit this to the local network through iptables on the plug)
-l user – login as this user at the remote end
-f – background the process
-L 192.168.57.200:8000 – listen on port 8000 on the given IP address (rather than the more usual localhost address)
127.0.0.1:8118 tornode – and forward the traffic to localhost port 8118 on the remote machine called tornode

Of course “tornode” must be running ssh on a port reachable by the proxy. Again, I use iptables on tornode to limit ssh connections to my fixed IP address – don’t want random bad guys knocking on the door.
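
The iptables rules on tornode are nothing fancy – something like the sketch below, with the source address being whatever your own fixed IP is (the one here is from the documentation range):

# allow ssh only from my fixed home address, drop everything else
iptables -A INPUT -p tcp --dport 22 -s 203.0.113.10 -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j DROP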

Now on tornode I have polipo listening on port 8118 on localhost. I used to use privoxy for this, but I have found polipo to be much faster, and speed matters when you are using tor. My polipo configuration forwards its traffic to the tor socks listener on localhost 9050. I also disabled the local polipo cache (diskCacheRoot = “”) because leaving it enabled means that the cache (by default /var/cache/polipo) directory will contain a copy of your browsed destinations in an easily identifiable form – not smart if you really want your browsing to be anonymous (besides, my wife deserves as much privacy as do I).
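
For reference, the relevant parts of the polipo configuration look roughly like this (a sketch only – the option names are polipo’s, the values are the ones described above):

# /etc/polipo/config (extract)
proxyAddress = "127.0.0.1"
proxyPort = 8118
# hand everything to the local tor socks listener
socksParentProxy = "localhost:9050"
socksProxyType = socks4a
# no on-disk cache - we don't want a browsable record of destinations
diskCacheRoot = ""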

The final bit of configuration needed is simple. Set your chosen browser to use the proxy on port 8000 on address 192.168.57.200. Since I use firefox for most of my browsing, I simply use opera for tor and I have that browser stripped to its basics and locked down as much as possible. Using tor is then simply a matter of firing up opera in place of firefox. This means that I always know whether or not I am using tor (and just to reassure myself, the opera homepage is the torcheck page).

You can’t be too careful.

Permanent link to this article: https://baldric.net/2010/08/01/autossh-or-how-to-use-tor-through-a-central-ssh-proxy/

the “awesome power” of the apple brand

I have been following the unfolding tale of the faulty antenna on the new iPhone4 with some amusement. Apple’s complete inability to admit to any possibility of a mistake is hugely entertaining. Apple (or is it Jobs?) seem to be unable to contemplate the possibility of the need for a recall. Such hubris is bound to cause a problem somewhere downstream.

I was particularly amused by this post over at fakesteve where Dan Lyons says:

“we’ll rush out iPhone 5 with a new design by Christmas season. It’s basically an iPhone 4 with a rubber wrapper around the outside. We’ve got the kids in China building them already.”

Treating your customers like idiots (even if they are; in fact especially if they are) is not a smart marketing move.

Permanent link to this article: https://baldric.net/2010/07/25/the-awesome-power-of-the-apple-brand/

there are more than 10 kinds of people in the world

A correspondent on a mailing list I subscribe to uses the .sig “There are 10 kinds of people in the world. Those who understand Vigesimal, and 9 others.”

Even after checking what vigesimal was, I had to think about this for a bit because initially I thought he was wrong. If I understand correctly, I think he means “J others”. Anybody out there got any other suggestions?

Permanent link to this article: https://baldric.net/2010/07/04/there-are-more-than-10-kinds-of-people-in-the-world/

scroogle is having a problem

I posted a note about scroogle back in January. Scroogle offered an SSL interface to the google engine, and, moreover, didn’t lumber its users with google cookies and sundry other irritations. Since then, however, google themselves have started to offer an SSL interface and, coincidentally, scroogle seem to have started to have some problems.

If you visit the scroogle SSL interface, you get a redirect to a notice which explains why some changes made at google mean that scroogle can no longer work properly. Scroogle managed to get a workaround in place for a few days, but it seems that another google change has finally killed that too unless google can be convinced to help out – unlikely in my view. The scroogle redirect page (dated 1 July 2010) has the following line from Daniel Brandt:

“Thank you for your support during these past five years. Check back in a week or so; if we don’t hear from Google by next week, I think we can all assume that Google would rather have no Scroogle, and no privacy for searchers.”

That in itself is bad enough, but as a separate new posting explains, scroogle now seems to be the target of a botnet aimed at swamping its servers. As Brandt goes on to say:

“Google has a few hundred thousand servers, while Scroogle has six. They can put up with sites that spread malware, but our bandwidth is limited. Even if Google relents and the output=ie interface returns, this Scroogle malware problem could still be increasing at that point. Eventually it alone might shut down Scroogle.”

Sad. I hate to see the little guy lose out.

Permanent link to this article: https://baldric.net/2010/07/04/scroogle-is-having-a-problem/

this is a politics free zone

Well, I have cast my vote. Let’s hope we get the result we need.

Permanent link to this article: https://baldric.net/2010/05/06/this-is-a-politics-free-zone/

email address images

Adding valid email addresses to web sites is almost always a bad idea these days. Automated ‘bots routinely scan web servers and harvest email addresses for sale to spammers and scammers. And in some cases, email addresses harvested from commercial web sites can be used in targeted social engineering attacks. So, posting your email address to a website in a way which is useful to a human being, but not to a ‘bot, has to be a “good thing” (TM). One way of doing so is to use an image of the address rather than the text itself. Of course this has the disadvantage that the address will not be immediately usable by client email software (unless, of course, you defeat the object of the exercise by adding an html “mailto” tag to the image) but it should be no big deal for someone who wants to contact you to write the address down.

There are a number of web sites which offer a (free) service which allows you to plug in an email address and then download an image generated from that address. However, I can’t get over the suspicion that this would be an ideal way to actually harvest valid email addresses, moreover addresses which you could be pretty certain the users did not want exposed to spammers. Call me paranoid, but I prefer to control my own privacy.

There are also a number of web sites (and blog entries) describing how to use netpbm tools to create an image from text – one of the better ones (despite its idiosyncratic look) is at robsworld. But in fact it is pretty easy to do this in gimp. Take a look at the address below:

This was created as follows:

open gimp and create a new file with a 640×480 template (actually any template will do);
select the text tool and choose a suitable font size, colour etc;
enter the text of the address in the new file;
select image -> autocrop image;
select layer -> Transparency -> Colour to Alpha;
select from white (the background colour) to alpha;
select save-as and use the file extension .png – you will be prompted to export as png.

Now add the image to your web site.
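
If you would rather skip the GUI altogether, imagemagick’s convert can produce much the same thing in one line (the address and font below are purely illustrative – run “convert -list font” to see what is available on your system):

convert -background none -fill black -font DejaVu-Sans -pointsize 20 \
  label:'someone at example dot org' email-address.png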

Permanent link to this article: https://baldric.net/2010/05/03/email-address-images/

ubuntu 10.04 – minor, and some not so minor, irritations

If and when the teething problems in 10.04 are fixed and the distro looks stable enough to supplant my current preferred version, I will be faced with one or two usability issues. In this version, canonical have taken some design decisions which seem to have some of the fanbois frothing at the mouth. The most obvious change in the new “light” theme applied is the move of the window control buttons from the top right to the top left (a la Mac OSX). Personally I don’t find this a problem, but it seems to have started all sorts of religious wars and has apparently even resulted in Mark Shuttleworth being branded as a despot because he had the temerity to suggest that the ubuntu community was not a democracy. Design decisions are taken by the build team, not by polling the views of the great unwashed. In my view that is how it should be. The great beauty of the free software movement is the flexibility and freedom it gives its users to change anything they don’t like. Hell, you can even build your own linux distro if you don’t like any of the (multiple) offerings available. Complaining about a design decision in one distro simply means that the complainant hasn’t understood the design process, and further, probably doesn’t understand that if he or she doesn’t like it, then they are perfectly free to change that decision on their own implementation.

In fact, it is pretty easy to change the button layout. To do so, simply run “gconf-editor” then select apps -> metacity -> general from the left hand menu. Now highlight the button_layout attribute and change the entry as follows:

change
close,minimize,maximize:
to
:minimize,maximize,close

i.e. move the colon from the right hand end of the line to the left and relocate the close button to the outside. Bingo, your buttons are now back where god ordained they should be and all is right in the universe.
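
The same change can be scripted with gconftool-2 if you would rather not click through gconf-editor (one line; the key path is the same one used above):

gconftool-2 --type string --set /apps/metacity/general/button_layout ":minimize,maximize,close"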

Presentation issues aside, there are some more fundamental design issues which are indicative of a worrying trend. As I noted in the post below, it is now pretty easy to install restricted codecs as and when they are needed. Rhythmbox will happily pull in the codecs needed to play MP3 encoded music with only a minor acknowledgement that the codecs have been deliberately omitted from the shipped distribution for a reason – the format is closed and patent encumbered. Most users won’t care, but I think it is only right that they should understand the implications of using a closed format before accepting it. It is also worth bearing in mind that some software (including that necessary to watch commercial DVDs) is deliberately not shipped because the legal implications of doing so are problematic in many countries.

So, whilst from a usability perspective, I may applaud the decisions which have made it easy for the less technically savvy users to get their multimedia installations up and running with minimal difficulty, I find myself more than a little unhappy with the implications.

But it gets worse. Enter ubuntu one.

Ubuntu one attempts to do for ubuntu what iTunes does for Apple (but without the DRM one hopes….). The new service is integrated with rhythmbox and allows users to search for and then pay for music on-line. The big problem here is that the music is all encoded in MP3 format when ubuntu, as a champion of free software, could have chosen the (technically superior) patent free ogg vorbis format. The choice smacks of business “realpolitik” in a way that I find disappointing from a company like Canonical. Compare and contrast this approach with the strictly free and open stance taken by Debian and you have to wonder where Canonical is going.

Watch this space. If they introduce DRM in any form there will be an unholy row.

Permanent link to this article: https://baldric.net/2010/05/02/ubuntu-10-04-minor-and-some-not-so-minor-irritations/

ubuntu 10.04 problems

The latest LTS version of ubuntu (10.04, or lucid lynx according to your naming preferences) was released to an eagerly waiting public on 29 April. Long term support (LTS) versions are supported for three years on the desktop and five years on the server instead of the usual 18 months for the normal releases. My current desktop of choice is 8.04 (the previous LTS version) and I will probably move to 10.04 eventually. But not yet.

A wet and windy bank holiday weekend (as this is) meant that my plans to go fishing were put on hold so I downloaded the 10.04 .isos to play with. I grabbed three versions, the 32 and 64 bit desktops and the netbook-remix version. Given that this was a mere day after the release date, I expected a slow response from the mirrors, but I was pleasantly surprised by the download speeds I obtained. Canonical must have put a lot of effort into getting a good range of fast mirrors. The longest download took just over 22 minutes and the fastest came down in just 14 minutes.

I copied the netbook-remix .iso to a USB stick using unetbootin on my 8.04 desktop (later versions of ubuntu ship with a usb startup disk creator) and installed to my AAO netbook with no hitches whatever. The new theme ditches the bright orange (or worse, brown) colour scheme used in earlier versions of ubuntu and looks attractive and professional.

UNR 10.04 desktop image

UNR 10.04

I spent a short while adding some of my preferred tools and applications and configuring the new installation to handle my multimedia requirements, but all this is now remarkably easy. Even playback of restricted formats (MP3 or AAC audio for example) is eased by the fact that totem (or rhythmbox) will fetch the required codecs for you when first you attempt to play a file which needs them. So, pleasant and easy to use. But I /still/ can’t get sony memory sticks to work.

But the netbook is simply a (mobile) toy. I do not rely upon it as I do my desktop. Any data on the netbook is ephemeral and (usually) a copy of the same data held elsewhere, either on a server in the case of email, or my main desktop. It would not matter if my installation had trashed the netbook, but my desktop is far more important. It has taken me a long time to get that environment working exactly the way I want it, and there is no way I will update it without a lot of testing first.

I am lucky enough to have plenty of spare kit around to play with though and I normally test any distro I like the look of in a virtual machine on an old 3.4 GHz dual core pentium 4 I have. Until this weekend, that box was running a 64 bit installation of ubuntu 9.04 with virtualbox installed for testing purposes. Running a new distro in a virtual machine is normally good enough to give me a feel for whether I would be happy using that distro long term – but it does have some limitations and I really wanted to test 10.04 with full access to the underlying hardware so I decided to wipe the test box and install the 64 bit download. If it worked I could then re-install virtualbox and use the new base system as my test rig in future. If it failed, then all I have lost is some time on a wet weekend. It failed.

To be fair, the installation actually worked pretty well. My problems arose when I started testing my multimedia requirements. I installed all the necessary codecs and libraries (along with libdvdcss, mencoder, vlc, flash plugins etc, etc) to allow me to waste time watching youtube, MP4 videos and DVDs only to discover that neither of the DVD/CD devices in my test box was recognised. I could not mount any optical medium. This is a big problem for me because I encode my DVDs to MP4 format so that I can watch them on my PSP on the train. Thinking that there might be a problem with the automounter, I tried manually mounting the devices – no go, mount failed consistently because it could not find any media. I could not find any useful messages in any of the logs so I checked the ubuntu forums to see if others were having any similar problems. Yep – I’m not alone. This is a common problem. But it seems that I’m pretty lucky not to have seen a lot more problems (black, or purple, screen of death seems to be a major complaint). I think I’ll wait a month or so before trying again.
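
A manual mount attempt of that sort looks something like the sketch below (the device node is an assumption – yours may differ):

# check whether the kernel saw the drive at all, then try mounting it by hand
dmesg | grep -i sr0
sudo mkdir -p /mnt/dvd
sudo mount -t iso9660 /dev/sr0 /mnt/dvd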

Meanwhile, I guess I can always ask for my money back.

Permanent link to this article: https://baldric.net/2010/05/02/ubuntu-10-04-problems/

where are you

I have added a new widget to trivia – a map of the world from clustrmaps which gives a small graphic depicting where in the world the IP addresses associated with readers are supposedly located. Geo location of IP addresses is not a perfect art, but the map given corresponds roughly with what I expect from my logs.

I’ve checked the clustrmaps privacy statement and am reasonably content that neither my, nor (more importantly) your, privacy is compromised by this addition any more than it would be if I linked to any other site. Clustrmaps logs no more information about your visit to trivia than do I.

Besides, the map is pretty.

Permanent link to this article: https://baldric.net/2010/04/18/where-are-you/

there are 10 kinds of people in the world

binary sudoku

With grateful thanks to xkcd.

Permanent link to this article: https://baldric.net/2010/04/02/there-are-10-kinds-of-people-in-the-world/

webDAV in lighttpd on debian

I back up all my critical files to one of my slugs using rsync over ssh (and just because I am really cautious I back that slug up to another NAS). Most of the files I care about are the obvious photos of friends and family. I guess that most people these days will have large collections of jpeg files on their hard disks whereas previous generations (including myself I might add) would have old shoe boxes filled with photographs.
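
The backup itself is unremarkable – a cron driven rsync over ssh along the lines of the sketch below (the paths and hostname are purely illustrative):

# push the photo collection to the slug over ssh
rsync -avz --delete -e ssh /home/me/photos/ backup@slug:/backup/photos/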

The old shoe box approach has much to recommend it. Not least the fact that anyone can open that box and browse the photo collection without having to think about user ids or passwords, or having to search for some way of reading the medium holding the photograph. I sometimes worry about how future generations’ lives will be subtly impoverished by the loss of the serendipity of discovery of said old shoe box in the attic. Somehow the idea of the discovery of a box of old DVDs doesn’t feel as if it will give the discoverer the immediate sense of delight which can follow from opening a long forgotten photo album. Old photographs feel “real” to me in a way that digital images will never do. In fact the same problem affects other media these days. I am old enough (and sentimental enough) to have a stash of letters from friends and family past. These days I communicate almost exclusively via email and SMS. And I feel that I have lost some part of me when I lose some old messages in the transfer from one ‘phone to another or from one computing environment to another.

In order to preserve some of the more important photographs in my collection I print them and store them in old fashioned albums. But that still leaves me with a huge number of (often similar) photographs in digital form on my PC’s disk. As I said, I back those up regularly, but my wife pointed out to me that only I really know where those photos are, and moreover, they are on media which are password protected. What happens if I fall under a bus tomorrow? She can’t open the shoebox.

Now given that all the photos are on a networked device, it is trivially easy to give her access to those photos from her PC. But I don’t like samba, and NFS feels like overkill when all she wants is read-only access to a directory full of jpegs. The slug is already running a web server and her gnome desktop conveniently offers the capability to connect to a remote server over a variety of different protocols, including the rather simple option of WebDAV. So all I had to do was configure lighty on the slug to give her that access.

Here’s how:-

If not already installed, then run aptitude to install “lighttpd-mod-webdav”. If you want to use basic authenticated access then it may also be useful to install “apache2-utils” which will give you the apache htpasswd utility. The htpasswd authentication function uses unix crypt and is pretty weak, but we are really only concerned here with limiting browsing of the directory to local users on the local network, and if someone has access to my htpasswd file then I’ve got bigger problems than just worrying about them browsing my photos. There are other authentication mechanisms we can use in lighty if we really care – although I would argue that if you really want to protect access to a network resource you shouldn’t be providing web access in the first place.

To enable the webdav and auth modules, you can simply run “lighty-enable-mod webdav” and “lighty-enable-mod auth” (which actually just create symlinks from the relevant files in /etc/lighttpd/conf-available to /etc/lighttpd/conf-enabled directories) or you can activate the modules directly in /etc/lighttpd/lighttpd.conf or the appropriate virtual-host configuration file with the directives:

server.modules += ( "mod_auth" )
server.modules += ( "mod_webdav" )

lighty is pretty flexible and doesn’t really care where you activate the modules. The advantage of using the “lighty-enable-mod” approach, however, is that it allows you to quickly reverse the change by running “lighty-disable-mod whatever” at a future date. The symlink will then be removed and, so long as you remember to restart lighty, the module will no longer be active.
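
On debian the whole sequence, run as root, is just:

lighty-enable-mod webdav
lighty-enable-mod auth
# nothing changes until lighty re-reads its configuration
/etc/init.d/lighttpd force-reload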

Now to enable the webdav access to the directory in question we need to configure the virtual host along the following lines:

$HTTP["host"] == "slug" {
  # turn off directory listing (assuming that it is on by default elsewhere)
  server.dir-listing = "disable"

  # turn on webdav on the directory we wish to share
  $HTTP["url"] =~ "^/photos($|/)" {
    webdav.activate = "enable"
    webdav.is-readonly = "enable"
    webdav.sqlite-db-name = "/var/run/lighttpd/lighttpd.webdav_lock.db"
    auth.backend = "htpasswd"
    auth.backend.htpasswd.userfile = "/etc/lighttpd/htpasswd"
    auth.require = ( "" => ( "method" => "basic",
                             "realm" => "photos",
                             "require" => "valid-user" ) )
  }

}

Note that the above configuration will allow any named user listed in the file /etc/lighttpd/htpasswd read only access to the “/photos” directory on the virtual host called “slug” if they can successfully authenticate. Note also, however, that because directory listing is turned off, it will not be possible for that user to access this directory with a web browser (which would be possible if listing were allowed). Happily however, the gnome desktop (“Connect to server” mechanism) will still permit authenticated access and once connected will provide full read only access to all files and subdirectories of “/photos” in the file browser (which is what we want). The “auth.backend” directive tells lighty that we are using the apache htpasswd authentication mechanism and the file can be found at the location specified (make sure this is outside webspace). The “auth.require” directives specify the authentication method (bear in mind that “basic” is clear text, albeit base64 encoded, so authentication credentials can be trivially sniffed off the network); the “realm” is a string which will be displayed in the dialogue box presented to the user (though it has additional functions if digest authentication is used); “require” specifies which authenticated users are allowed access. This can be useful if you have a single htpasswd file for multiple virtual hosts (or directories) and you wish to limit access to certain users.

Passing authentication credentials in clear over a hostile network is not smart, so I would not expose this sort of configuration to the wider internet. However, the webDAV protocol supports ssl encryption so it would be easy to secure the above configuration by changing the setup to use lighty configured for ssl. I choose not to here because of the overhead that would impose on the slug when passing large photographic images – and I trust my home network……

Now given that we have specified a password file called “htpasswd” we need to create that file in the “/etc/lighttpd” configuration directory thusly:

htpasswd -cm /etc/lighttpd/htpasswd user

The -c switch to htpasswd creates the file if it does not already exist. When adding any subsequent users you should omit this switch or the file will be recreated (and thus overwritten). The “user” argument adds the named user to the file. You will be prompted for the password for that user after running the command. As I mentioned above, by default the password will only be encrypted with crypt; the -m switch forces htpasswd to use the (stronger) MD5 hash. Note that htpasswd also gives the option of SHA (-s switch) but this is not supported by lighty in htpasswd authentication. Make sure that the passwd file is owned by root and is not writeable by non root users (“chown root:root htpasswd; chmod 644 htpasswd” if necessary).

The above configuration also assumes that the files you wish to share are actually in the directory called “photos” in the web root of the virtual host called “slug”. In my case this is not true because I have all my backup files outside the webspace on the slug (for fairly obvious reasons). In order to share the files we simply need to provide a symlink to the relevant directory outside webspace.
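
So, assuming (purely for illustration) that the backups live under /backup and the virtual host’s document root is /var/www/slug, something like this does the job:

ln -s /backup/photos /var/www/slug/photos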

Now whenever the backup is updated, my wife has access to the latest copy of all the photos in the shoebox. But let’s hope I don’t fall under a bus just yet.

Permanent link to this article: https://baldric.net/2010/03/31/webdav-in-lighttpd-on-debian/

what a user agent says about you

I get lots of odd connections to my servers – particularly to my tor relay. Mostly my firewalls bin the rubbish but my web server logs still show all sorts of junk. Occasionally I get interested (or possibly bored) enough to do more than just scan the logs and I follow up the connection traces which look really unusual. I may get around to posting an analysis of all my logs one day.

One of the interesting traces in web logs is the user agent string. Mostly this just shows the client’s browser details (or maybe proxy details) but often I find that the UA string is the signature of a known ‘bot (e.g. Yandex/1.01.001). A good site for keeping tabs on ‘bot signatures is www.botsvsbrowsers.com. But I also find user-agent-string.info useful as a quick reference. If you have never checked before, it can be instructive to learn just how much information you leave about yourself on websites you visit. Just click on “Analyze my UA” (apologies for the spelling, they are probably american) for a full breakdown of your client system.

Permanent link to this article: https://baldric.net/2010/03/30/what-a-user-agent-says-about-you/

unplugged

My earlier problems with the sheevaplug all seem to have stemmed from the fact that I had installed Lenny to SDHC cards. As I mentioned in my post of 7 March, I burned through two cards before eventually giving up and trying a new installation to USB disk. This seems to have fixed the problem and my plug is now stable. I had a series of problems with the SD cards I used (class 4 SDHC 8 GB cards) which may have been related to the quality of the cards I used. Firstly the root filesystem would often appear as readonly and the USB drive holding my apt-mirror (mounted as /home2) would similarly appear to be mounted read-only. This seemed to occur about every other day and suggested to me that the plug had seen a problem of some kind and rebooted. But of course since the filesystem was not writeable, there were no logs available to help my investigations.

I persevered for around two weeks during which time I completely rebuilt both the original SD card and another with Martin’s tarball, reflashed uboot with the latest from his site, and reset the uboot environment to the factory defaults before trying again. I also changed /etc/fstab to take out the “errors=remount-ro” entry against the root filesystem, and reduced the number of writes to the card by adding “noatime,commit=180” in the hope that I could a) gain stability, and b) find out what was going wrong. No joy. I still came home to a plug with a /home2 that was either unmounted or completely unreadable or mounted RO. The disk checked out fine on another machine and I could find nothing obvious in the logs to suggest why the damned thing was failing in the first place. Martin’s site says that “USB support in u-boot is quite flaky”. My view is somewhat stronger than that, particularly when the plug boots from another device and then attaches a USB disk.
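
For anyone wanting to try the same tweaks, the root entry in /etc/fstab ended up looking something like this (a sketch – match the device and filesystem type to your own installation):

# root filesystem on the SD card - no remount-ro on errors, fewer writes
/dev/mmcblk0p2  /  ext3  noatime,commit=180  0  1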

But I don’t give up easily. After getting nowhere with the SDHC card installation from Martin’s tarball, I reset the uboot environment on the plug to the factory default (again) and then ran a network installation of squeeze to a 1TB USB disk (following Martin’s howto). It took me two attempts (I hit the bug in the partitioner on the first installation) but I now have a stable plug running squeeze. It is worth noting here that I had to modify the uboot “bootcmd” environment variable to include a reset (as Martin suggests may be necessary) so that the plug will continue to retry after a boot failure until it eventually loads. The relevant line should read:

setenv bootcmd 'setenv bootargs $(bootargs_console); run bootcmd_usb; bootm 0x00800000 0x01100000; reset'

The plug now boots successfully every second or third attempt. So far it has been up just over ten days now without any of the earlier problems recurring.

My experience appears not to be all that unusual. There has been some considerable discussion on the debian-arm list of late about problems with installation to SDHC cards. Most commentators conclude that wear levelling on the cards (particularly cheap ones) may not be very good. SD cards are sold formatted as FAT or FAT32 (depending on the capacity of the card). Modern journalling filesystems such as ext3 on linux result in much higher read/write rates and the quality of the cards becomes a much greater concern. Perhaps my cards just weren’t good enough.

Permanent link to this article: https://baldric.net/2010/03/30/unplugged/

psp video revisited

I last posted about ripping DVDs to PSP format back in November 2007. Since then I have used a variety of different mechanisms to transcode my DVDs to the MP4 format preferred by my PSP. A couple of years ago I experimented with both winff and a command line front end to ffmpeg called handbrake. Neither were really as successful as I would have liked (though winff has improved over the past few years) so I usually fell back to the mencoder script that works for 95% of all the DVDs I buy.

I have continually upgraded the firmware on my PSP since 2007 so that I am now running version 6.20 (the latest as at today’s date). Somewhere between version 3.72 and now, sony decided to stop being so bloody minded about the format of video they were prepared to allow to run on the PSP. We are still effectively limited to mpeg-4/h.264 video with AAC audio in an mp4 container, but the range of encoding bitrates and video resolutions is no longer as strictly limited as it was back in late 2007. So when going about converting all the DVDs I received for christmas and my last birthday, and considering whether I should move my viewing habits to take advantage of the power of my N900, I recently revisited my transcoding options.

Despite the attractiveness of the N900’s media player I concluded that it still makes sense to use the PSP for several reasons:- it works; the battery lasts for around 7 hours between charges; I have a huge investment in videos encoded to run on it; and most importantly, not using the N900 as intensively as I use the PSP means that I know that my ‘phone will be charged enough to use as a ‘phone should I need it.

But whilst revisiting my options I discovered that the latest version of handbrake (0.9.4) now has a rather nice GUI and it will rip and encode to formats usable by both the PSP and a variety of other hand-held devices (notably apple’s iphone and ipod thingies) quite quickly and efficiently. Unfortunately for me, the latest version is only available as a .deb for ubuntu 9.10 and I am still using 8.04 LTS (because it suits me). A quick search for alternative builds led me to the ppa site for handbrake which gives builds up to version 0.9.3 for my version of ubuntu. See below:

image of handbrake gui

This version works so well on my system that I no longer have to use my mencoder script.

Permanent link to this article: https://baldric.net/2010/03/21/psp-video-revisited/

16 is so much safer than 10

I am indebted to a colleague of mine (thank you David) for drawing this to my attention.

The Symantec Norton SystemWorks 2002 Professional Edition manual says:

“About hexadecimal values – Wipe Info uses hexadecimal values to wipe files. This provides more security than wiping with decimal values.”

I suppose octal or (worse) binary, just won’t cut the mustard.

Permanent link to this article: https://baldric.net/2010/03/15/16-is-so-much-safer-than-10/

plug instability

I’m still having a variety of problems with my sheevaplug. Not least of which is the fact that SDHC cards don’t seem to be the best choice of boot medium. I have had failures with two cards now and some searching of the various on-line fora suggests that I am not alone here. In particular, SD cards seem to suffer badly under the read/write load that is routine for an OS writing log files – let alone one running a file or web server. I have also had several failures with my external USB drive. It seems that the plug boots too quickly for the USB subsystem to initialise properly. This means that there is not enough time for the relevant device file (/dev/sda1 in my case) to appear before /etc/fstab is read to mount the drive. A posting on the plugcomputer.org forum suggested a useful workaround (essentially introducing a wait), but even that was only partially successful. Sometimes it worked, sometimes it didn’t. In fact, the USB drive still often fails after a random (and short) time and then remounts read-only. Attempts to then remount the drive manually (after a umount) result in failure with the error message “mount: special device /dev/sda1 does not exist”.

In my attempts to cure both the booting problems and the USB connection failures I have installed the latest uboot (3.4.27 with pingtoo patches linked to from Martin’s site) and updated my lenny kernel to Martin’s 2.6.32-2-kirkwood in the (vain as it turns out) hope that the latest software would help. Here I also discovered another annoying problem – installing the latest kernel does not mean that the plug will boot into it; the plug still boots into the old kernel until you run “flash-kernel”. Fortunately this is reasonably well known and is covered in Martin’s troubleshooting page.

I will persevere for perhaps another week with the current plug configuration. If I can’t get a stable system though I will try installing to USB drive (perverse as that may seem) and changing the uboot to boot from that rather than the flaky SD card. Most on-line advice suggests that USB support in uboot is rather “immature”, but it can’t be any worse than the current setup. My thinking is that if I can introduce a delay in the boot process by uboot so that I can successfully boot from an external HDD, the drive connection might then be stable enough to be usable.

Of course I could be completely wrong.

Permanent link to this article: https://baldric.net/2010/03/07/plug-instability/

from slug to plug

Well this took rather longer than expected. I intended to write about my latest toy much earlier than this, but several things got in the way – more of which later.

About three or four weeks ago I bought myself a new sheevaplug.

image of sheevaplug

The plug has been on sale in the US for some time, but UK shipping costs added significantly to the $99 US retail price. Recently however, a UK supplier (Newit) has started stocking and selling the plugs over here – and at very good prices too. My plug arrived within three days of order and I can thoroughly recommend Newit. The owner, one Jason King no less (fans of 1970s TV will recognise the name), kept me informed of progress from the time I placed the order to the time it was shipped. He even took the trouble to email me after shipping to check that I had received it OK. Nice touch, even if it was automated.

Looking much like a standard “wall wart” power supply typically attached to an external disk, the plug is actually quite chunky, but it will still fit comfortably in the palm of your hand. Inside that little box though there is enough computing power to make a slug owner more than happy. The processor is a 1.2 GHz Marvell Kirkwood ARM-compatible device and it is coupled with 512MB SDRAM and 512MB Flash memory. Compare that to the poor old slug’s 266 MHz processor and 32 MB of RAM and you can see why I’d be interested – particularly since the plug can run debian (and Martin Michlmayr has again provided a tarball and instructions to help you out).

The plugs come in a variety of flavours, but all offer at least one USB 2.0 port, a mini usb serial port, gigabit ethernet and an SDHC slot. This means that debian (or another debian based OS such as Ubuntu) can be installed either to the internal flash or to one of the external storage media available. Newit ship the plugs in various configurations and will happily sell you a device fully prepared with debian (either Lenny or Squeeze according to your taste) on SD card to go with the standard Ubuntu 9.04 in flash. Personally I chose to install debian myself, so I bought the base model. (No, I’m not a cheapskate, I just prefer to play. Where’s the fun in buying stuff that “just works”?)

Given that Martin’s instructions suggest that installing to USB disk can be problematic, and that I have debian lenny on my slugs (and had a spare 4 Gig SDHC card lying around) I chose to use his tarball to install lenny to my SDHC card. Firstly I formatted the card (via a USB mounted card reader) as below:

/dev/sdb1 512 Meg bootable
/dev/sdb2 2.25 Gig
/dev/sdb3 1024 Meg swap

(note that the plug will see these devices as “/dev/mmcblk0pX” when the card is loaded. The “/dev/sdbX” layout simply reflects the fact that I was using a USB mounted card reader on my PC. )
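
The formatting itself is just the usual mkfs/mkswap run against that layout (a sketch – adjust the device names to wherever your card reader appears):

mkfs.ext2 /dev/sdb1    # /boot
mkfs.ext2 /dev/sdb2    # root
mkswap /dev/sdb3       # swap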

I then downloaded and installed Martin’s lenny tarball to the newly formatted card and as instructed edited the /etc/fstab to match my installation. Martin’s fstab file is below:

# /etc/fstab: static file system information.
#
# proc /proc proc defaults 0 0
# Boot from USB:
/dev/sda2 / ext2 errors=remount-ro 0 1
/dev/sda1 /boot ext2 defaults 0 1
/dev/sda3 none swap sw 0 0
# Boot from SD/MMC:
#/dev/mmcblk0p2 / ext2 errors=remount-ro 0 1
#/dev/mmcblk0p1 boot ext2 defaults 0 1
#/dev/mmcblk0p3 none swap sw 0 0

As you can see it defaults to assuming a USB attached device. You need to comment out the USB boot entries and uncomment the SD/MMC entries if, like me, you are intending to boot from SD card. At this stage I also edited “/etc/network/interfaces” to change the eth0 interface from dhcp to static (to suit my network) and I changed “/etc/resolv.conf” because the default includes references to cyrius.com and a local IP address for DNS.

Before we can boot from the SD card, we have to make a few changes to the uboot boot loader configuration to stop it using the default OS on internal flash (where the factory installed Ubuntu resides). Again, Martin’s instructions are helpful here but he points to the openplug.org wiki for instructions in setting up the necessary serial connection to the plug. On my PC (running Ubuntu 8.04 LTS) I got no ttyUSB devices by default and “modprobe usbserial” did not work but “modprobe ftdi_sio vendor=0x9e88 product=0x9e8f” did work for me.

Now open a TTY session using cu thusly “cu -s 115200 -l /dev/ttyUSB1” – don’t use putty on linux, it doesn’t allow cut and paste which can be very useful if you are following on-line instructions (of course it helps if you cut and paste the right instructions). I found that booting is too fast if you have to switch on the plug and then return to a keyboard so I recommend simply leaving the terminal session open and resetting the plug with a pin or paper clip. Hit any key to interrupt the boot session, then follow Martin’s instructions for editing the uboot environment.

My plug was running v 3.4.16 of uboot, so at first I used version 3.4.27 (downloaded from plugcomputer.org) and loaded that via tftp as described by Martin. But this turned out to be a mistake because my plug failed to boot thereafter. I got the following error message via the serial console:

## Booting image at 00400000 …
Image Name: Debian kernel
Created: 2009-11-23 17:25:02 UTC
Image Type: ARM Linux Kernel Image (uncompressed)
Data Size: 1820320 Bytes = 1.7 MB
Load Address: 00008000
Entry Point: 00008000
Verifying Checksum … Bad Data CRC

Some searching suggested that the uboot image was probably the problem and that reverting to v3.4.19 would solve this. So I downloaded 3.4.19 from “vioan’s” post “#6 on: November 16, 2009, 03:21:34 PM” at the plugcomputer.org forum and reflashed the plug with that image. Success – my plug now booted into debian lenny. Tidy up, update the OS and add a normal user as recommended and we’re ready to go.

My plug was intended to replace the slug I was using as my local apt-mirror. That mirror is now fairly large because I have a mix of 32 and 64 bit ubuntus (of varying vintages) and 386 and ARM versions of debian. I therefore recycled an unused 500 gig lacie USB disk and mounted that as /home2 (originally as /home, but I soon changed that when I wanted to unmount it frequently and then lost my home directory….) Copying the apt-mirror (175 Gig) over the network from my old slug was clearly going to take forever – high speed networking is not the slug’s forte, so I mounted both the slug and the plug’s disks locally on my PC and copied the files over USB – much faster. It was here that I discovered why the old lacie disk (a “designed by porsche” aluminium coated beast) was lying idle. I’d forgotten that it sounded like a harrier jump jet on take off when in use. I put up with that for a week – just long enough to get me to a free weekend when I could rebuild the old slug (now used as just an NTP server and the webcam) to boot from a 4 gig USB stick so that I could recycle its disk onto the plug. I’ve just finished doing that.

One other problem I found with the plug which caused me much head scratching (and delayed my writing this as I noted above) was that it consistently failed to boot back into my debian install after a “reboot” or “shutdown -r” – I had to power cycle the device to get it to boot properly. I spent some time this weekend with the serial port connected before I noticed (using “printenv” at the uboot prompt) that I had mixed up the uboot environment variables printed on Martin’s site. I had actually copied part of the instructions for the USB boot variant instead of the correct ones for the SD card boot. Sometimes “cut and paste” can be a mistake.

Permanent link to this article: https://baldric.net/2010/02/28/from-slug-to-plug/

homeopathy

I can’t recall how I got there, but this made me laugh enough to want to share it.

Permanent link to this article: https://baldric.net/2010/02/26/homeopathy/