maybe I should sell

I have been exploring the InTrust Domain Names website I mentioned in the previous post. There are some absolutely astonishing prices quoted for some domains which do not immediately spring to mind as being particularly valuable. For example, the domain falldaron.com is quoted at $10000000.00.

If you actually click on that domain name you are taken to this page:

Don’t you just love the “PayPal” option on the payment method?

[WARNING – I recommend that you do NOT attempt to actually visit the domain falldaron.com. It currently redirects to a site on a domain with a very Anglo-Saxon four-lettered name which I would not call “work safe”.]

Permanent link to this article: https://baldric.net/2010/10/09/maybe-i-should-sell/

domain sales pitch

In the past couple of days I have received some amusing email spam.

I own ten different domain names, mostly in the .net TLD. The spam emails in question all offered to sell me the domain “exnic.com” on the grounds that I already own “exnic.net” (not an unreasonable sales pitch). It turns out that this particular domain has expired and is currently “pending deletion”. This means that the original registrant has failed to renew ownership of the domain and the registrar is about to release it back to internic for resale. This process often happens and usually results in the sort of irritating website I noted a while back when looking at linuxdoc.org.
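
If you are curious, you can check this sort of status for yourself with a plain whois lookup (a sketch – output formats vary from registry to registry, but a .com domain in this state should show a “pendingDelete” status somewhere in the record):

$ whois exnic.com | grep -i status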

But, as I said, the domain name in question is not actually for sale yet, just “pending”, and it is always possible that the original owner will wake up to the fact that his domain has gone AWOL and will claim it back. The interesting thing about the emails I received was that two of them (purportedly from one “Arthur Simmons”) came from two completely different domains (“underforge.com” and “thewingsofhope.com” – both of which appear to be owned by “InTrust Domain Names”) and seemed to be competing with each other to sell me the domain. A third email came from somewhere calling itself “premierdomainbrokers.net” which at least had the decency to look like a brokerage. That email amused me because it said:

You may have received emails from other companies offering to sell you exnic.com.
That is misleading information. The domain cannot be purchased at this time.
It is actually in the pending delete stage and will be available very soon.

Oh yes indeed. Shame I’m not interested.

Permanent link to this article: https://baldric.net/2010/10/09/domain-sales-pitch/

professional ability

I was skimming through a series of security-related sites last week when I came across an article referring to someone described as something like “A Person, M.Inst.ISP, CISM, CISSP, MBCS, CITP, BSc, Director of etc…..” and I found myself wondering what that all actually meant. Yes, I know what the letters stand for, hell I’ve even got a few of them myself, but what do they actually mean in the real world? And because of those letters, would you believe that that person knew anywhere near as much about software security as, say, David Litchfield (Jr), or Charlie Miller, or Thomas Dullien?

Just wondering.

Permanent link to this article: https://baldric.net/2010/09/25/professional-ability/

very, very, slow electrons

I recently received an email from my old chum Chris Samuel. Chris emigrated to Australia several years ago, but we still correspond, if infrequently. In fact he sometimes comments here. But he is not good at email.

This is what I received:

On Thu, 19 Dec 2002 03:50:08 am you wrote:

> Have a very Merry Christmas and an exceptionally good New Year.

You too! ;-)

Yes, trying to catch up on some email, and yes I’m crap at it. ;-)

cheers,
Chris

Way to go Chris – nearly eight years late, but you made it. :-)

Permanent link to this article: https://baldric.net/2010/09/14/very-very-slow-electrons/

a graphical web of trust

I recently stumbled upon sig2dot, a gpg/pgp keyring graph generator. In fact this seems to have been around for some time, but I’d never come across it before. It can be used to generate a graph of all of the signature relationships in a GPG/PGP keyring, and, like other visualisation tools, it can give new insight into the relationships between objects.

The sig2dot program itself is available in the debian/ubuntu repositories in the package called “signing-party”. But unless you want to install a shed load of other unnecessary cruft along with it (exim? for god’s sake, why?), I recommend you simply pull the perl code direct from the author’s site. Along with the sig2dot program itself, you will need “neato” from the graphviz package and “convert” from the wondrous imagemagick package suite. If you don’t already have those installed then it is pretty safe to pull them from your distro’s package repository.

That done, try the following:

first create an ascii graphviz dot file ready for neato

$ gpg --list-sigs --keyring ~/.gnupg/pubring.gpg | sig2dot.pl > ~/.gnupg/pubring.dot

(that is “minus minus list-sigs” and “minus minus keyring”) now convert to a postscript file

$ neato -Tps ~/.gnupg/pubring.dot > ~/.gnupg/pubring.ps

before using imagemagick to convert to a png graphic

$ convert ~/.gnupg/pubring.ps ~/.gnupg/pubring.png

Those of you with gpg keyrings may wish to try it out (and no. I’m not going to show you mine).

Permanent link to this article: https://baldric.net/2010/09/12/a-graphical-web-of-trust/

kseniya simonova

This has absolutely nothing to do with my usual topics but I make no apology for posting this because the artistry is stunningly beautiful. I was sent a link to Kseniya Simonova’s sand art by a correspondent on a mailing list I subscribe to. Apparently the artist is telling the story of a Ukrainian family before, during and after the bombing of their town in the second world war.

I understand that Ms Simonova was a contestant on Ukraine’s version of “Britain’s got talent”. This lady has real talent, unlike some of the contestants I have seen on the UK’s version. It looks as if Ukrainian television may be in a better place than ITV.

Permanent link to this article: https://baldric.net/2010/09/04/kseniya-simonova/

it’s not that I’m anti google

I’m just pro privacy. And google just happens to be one of the worst offenders when it comes to breaches of my privacy. El Reg yesterday ran an article pointing to the consumerwatchdog.org ad depicting Eric Schmidt as a “privacy pervert”. Deliciously, that ad is hosted on youtube.

But consumerwatchdog have long campaigned about google’s attempts to trample on users’ privacy. The video below shows how google’s chrome browser fails to protect the user’s privacy even when “incognito mode” is used. Incidentally, the video also shows how google’s javascript based, supposedly helpful, “stem searching” capability during searches effectively adds a keystroke sniffer to your PC. Note that this capability is not specific to chrome, it happens whatever browser you use when you use google’s search engine.

Be careful out there.

Permanent link to this article: https://baldric.net/2010/09/04/its-not-that-im-anti-google/

phone home

Google’s chrome browser first appeared back in 2008, since when many commentators have sung its praises. Apparently it is “blindingly fast” (well, let’s face it, firefox can be a tad slow, particularly if loaded down with a swathe of plugins), “clean”, and “simple”. Until recently I had not tried chrome (for some fairly obvious reasons) but I thought it might be interesting to fire up a copy in a VM just to see what all the fuss was about. So I did. And whilst I was doing that I ran tcpdump and etherape to see what was happening under the hood. What I found intrigued me.

First I spun up a completely new clean install of ubuntu in a virtualbox VM. Then I downloaded the latest chrome .deb from the google site and installed it. Before launching chrome for the first time in the guest system I fired up the sniffers in the host system. This is what I found:

[image: etherape capture]

Note that etherape shows five connections which are instantly recognisable as going to google servers (the 1e100.net domain), three to verisign, and a further three to IP addresses with no associated names (these appear to be either youtube or google image cache machines – also owned by google of course). You can ignore the rlogin.net servers, they are all mine.

A quick look at the tcpdump record shows that the verisign connections all check for SSL certificates and/or revocations – perfectly sensible and understandable. But the google connections are less illuminating until you follow the tcp streams. Two of the connections are SSL encrypted so it is not possible to be certain what is carried in them, but they appear to be certificate exchanges (or updates), a third gets a certificate revocation list whilst two more get simple html or xml schema probably associated with building the welcome screen (I didn’t explore in detail). One connection gets a shockwave flash file and two get and set cookies in the youtube domain. At least one of the google connections also gets and sets cookies in the google domain.
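
For anyone who wants to repeat the exercise, the capture side needs nothing clever. Something like the following on the host will do (a sketch – substitute whichever interface actually carries the guest’s traffic for eth0, and the filename is just mine):

$ sudo tcpdump -n -i eth0 -w chrome-first-run.pcap
$ tcpdump -n -r chrome-first-run.pcap | less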

Now none of this is inherently suspicious (well, alright, it might be) but the point is that all this happens upon first connection and without reference to the user. And if you don’t want google (or youtube) cookies on your machine you will have to delete them when first you use the browser. I have an instinctive (OK, partly irrational) dislike of software which “phones home” without telling me – and chrome does that on quite an impressive scale. I’m not sure what would happen in prolonged usage of the browser because I wasn’t impressed enough to want to use it in anger.

I’ve trashed the VM of course.

Permanent link to this article: https://baldric.net/2010/08/29/phone-home/

update to autossh – or how ServerAliveInterval makes this unnecessary

I had a couple of comments on my earlier post about autossh which suggested that I should look at alternative mechanisms for keeping my ssh tunnel up. Rob in particular suggested that setting “ServerAliveInterval” should work. Oddly I had tried this in the past whilst trying out various configuration options and I swear it didn’t work for me. But since the autossh mechanism felt inelegant I thought I’d revisit my ssh_config file as Rob suggested. And indeed setting ServerAliveInterval to 300 (i.e. 5 minutes) solved my tunnel drop problem. I’d guess that other intervals of less than 1 hour would equally work but I haven’t checked.
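
For reference, the relevant stanza in my ~/.ssh/config now looks something like this (a sketch – “tornode” is just my host alias, and ServerAliveCountMax is simply left at a sensible value):

Host tornode
    ServerAliveInterval 300
    ServerAliveCountMax 3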

I have no idea why my earlier experiments failed.

Permanent link to this article: https://baldric.net/2010/08/27/update-to-autossh-or-how-serveraliveinterval-makes-this-unnecessary/

they are taking over the entire net

Some time ago I disabled my wp-recaptcha plugin because it had the unfortunate side effect of marking all comments as spam. I don’t have a particularly high comment rate, but the ones I do get, and which get past akismet, are usually OK. Apparently a flaw in version 2.9.6 surfaced when wp-recaptcha was used in conjunction with wordpress 2.9.2. I obviously got caught by this when I updated my wordpress installation so, when I noticed the problem, I just disabled wp-recaptcha. Of course I have since updated wordpress again and I noticed that the plugin had also been updated to 2.9.7 so I thought I would upgrade and reactivate. Upon doing so, however, I discovered that my public/private key pair had disappeared as a result of the deactivation and I was invited to apply for a new set. OK, no problem, happy to do so, but a bit peeved that the keys seem to be deleted when the plugin is deactivated. This strikes me as unnecessary.

But, and this is now a big but, this is what I was greeted with when I attempted to get a new set of keys:

“reCAPTCHA is now part of Google. In order to use it, you must create a new Google Account or sign in with an existing Google Account.
If you are a previous user of reCAPTCHA, you can migrate your old account after signing in with a Google Account.”

Yikes! It seems that re-captcha is now part of google. The chocolate factory have bought yet another piece of internet infrastructure which will no doubt feed the maw of the advertising goliath with statistics gained from my site.

Well bollocks to that. It can stay disabled. I’ll look for another captcha mechanism.

Permanent link to this article: https://baldric.net/2010/08/02/they-are-taking-over-the-entire-net/

autossh – or how to use tor through a central ssh proxy

Since I first set up a remote tor node on a VPS about this time last year, I have played about with various configurations (and used different providers) but I have now settled on using two high bandwidth servers on different networks. One (at daily.co.uk) allows 750 Gig of traffic per month, the other (a new player on the block called ThrustVPS) allows 1000 Gig of traffic. These limits are remarkably generous given the low prices I pay (the 1000 Gig server cost me £59.42, inc VAT, for a year. OK, that was a special offer, but they are still good value at full price) and they allow me to provide two reasonably fast exit servers to the tor network. Both suppliers know that I am running tor nodes and are relaxed about that. Some suppliers are less so.

I fund fast exit nodes as a way of paying something back to the community – but as I have pointed out before, they also allow me to have a permanent entry point to the tor network which I can tunnel to over ssh, thus protecting my own tor usage from snoopers. For some time I used a configuration based on tyranix’s notes documented on the tor project wiki. But eventually I found that to be rather limiting because it meant that I had to remember to run an ssh listener on each machine I used around the house (my laptop, netbook, two desktops, my wife’s machine etc) and to configure the browser settings as necessary. Then I hit upon the notion of centralising the ssh listener on one machine (I used the plug) in a sort of ssh proxy configuration. This meant that I only had to configure the local browsers to use the central ssh listener as a proxy and everything else could be left untouched. It also has the distinct advantage that my wife no longer has to worry about anything more complex than switching browser when she wants to use tor.

But I hit a snag when initially setting up ssh on the plug. For some reason (which I have never successfully bottomed out) the ssh process dies after an hour of inactivity. This is not helpful. Enter autossh. Using autossh means that the listener is restarted automagically whenever it dies so I can be confident that my proxy will always be there when I need it.

Here’s the command used on the plug to fire up the proxy:

autossh -M 0 -N -C -g -l user -f -L 192.168.57.200:8000:127.0.0.1:8118 tornode

That says:

-M 0 – turn off monitoring so that autossh will only restart ssh when it dies.
-N – do not execute a command at the remote end (i.e. we are simply tunneling)
-C – compress traffic
-g – allow remote hosts to connect to this listener (I limit this to the local network through iptables on the plug)
-l user – login as this user at the remote end
-f – background the process
-L 192.168.57.200:8000 – listen on port 8000 on the given IP address (rather than the more usual localhost address)
127.0.0.1:8118 tornode – and forward the traffic to localhost port 8118 on the remote machine called tornode

Of course “tornode” must be running ssh on a port reachable by the proxy. Again, I use iptables on tornode to limit ssh connections to my fixed IP address – don’t want random bad guys knocking on the door.
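
The iptables rules on tornode amount to little more than the following (a sketch – 203.0.113.10 stands in for my fixed IP address, and I assume ssh is on the default port):

# allow ssh from my fixed address only, drop everything else
iptables -A INPUT -p tcp --dport 22 -s 203.0.113.10 -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j DROP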

Now on tornode I have polipo listening on port 8118 on localhost. I used to use privoxy for this, but I have found polipo to be much faster, and speed matters when you are using tor. My polipo configuration forwards its traffic to the tor socks listener on localhost 9050. I also disabled the local polipo cache (diskCacheRoot = “”) because leaving it enabled means that the cache directory (by default /var/cache/polipo) will contain a copy of your browsed destinations in an easily identifiable form – not smart if you really want your browsing to be anonymous (besides, my wife deserves as much privacy as do I).
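
The relevant parts of my polipo configuration on tornode look something like this (a sketch of /etc/polipo/config – the option names are polipo’s own but check them against the version you have installed):

proxyAddress = "127.0.0.1"
proxyPort = 8118
socksParentProxy = "127.0.0.1:9050"
socksProxyType = socks5
diskCacheRoot = ""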

The final bit of configuration needed is simple. Set your chosen browser to use the proxy on port 8000 on address 192.168.57.200. Since I use firefox for most of my browsing, I simply use opera for tor and I have that browser stripped to its basics and locked down as much as possible. Using tor is then simply a matter of firing up opera in place of firefox. This means that I always know when I am using tor or not (and just to reassure myself, the opera homepage is the torcheck page).

You can’t be too careful.

Permanent link to this article: https://baldric.net/2010/08/01/autossh-or-how-to-use-tor-through-a-central-ssh-proxy/

the “awesome power” of the apple brand

I have been following the unfolding tale of the faulty antenna on the new iPhone 4 with some amusement. Apple’s complete inability to admit to any possibility of a mistake is hugely entertaining. Apple (or is it Jobs?) seem to be unable to contemplate the possibility of the need for a recall. Such hubris is bound to cause a problem somewhere downstream.

I was particularly amused by this post over at fakesteve where Dan Lyons says:

“we’ll rush out iPhone 5 with a new design by Christmas season. It’s basically an iPhone 4 with a rubber wrapper around the outside. We’ve got the kids in China building them already.”

Treating your customers like idiots (even if they are; in fact especially if they are) is not a smart marketing move.

Permanent link to this article: https://baldric.net/2010/07/25/the-awesome-power-of-the-apple-brand/

there are more than 10 kinds of people in the world

A correspondent on a mailing list I subscribe to uses the .sig “There are 10 kinds of people in the world. Those who understand Vigesimal, and 9 others.”

Even after checking what vigesimal was, I had to think about this for a bit because initially I thought he was wrong. If I understand correctly, I think he means “J others”: 10 in base 20 is twenty, so the remaining nineteen kinds would be written “J” if you extend the digits with A to J standing for ten to nineteen. Anybody out there got any other suggestions?

Permanent link to this article: https://baldric.net/2010/07/04/there-are-more-than-10-kinds-of-people-in-the-world/

scroogle is having a problem

I posted a note about scroogle back in January. Scroogle offered an SSL interface to the google engine, and, moreover, didn’t lumber its users with google cookies and sundry other irritations. Since then, however, google themselves have started to offer an SSL interface and, coincidentally, scroogle seem to have started to have some problems.

If you visit the scroogle SSL interface, you get a redirect to a notice which explains why some changes made at google mean that scroogle can no longer work properly. Scroogle managed to get a workaround in place for a few days, but it seems that another google change has finally killed that too unless google can be convinced to help out – unlikely in my view. The scroogle redirect page (dated 1 July 2010) has the following line from Daniel Brandt:

“Thank you for your support during these past five years. Check back in a week or so; if we don’t hear from Google by next week, I think we can all assume that Google would rather have no Scroogle, and no privacy for searchers.”

That in itself is bad enough, but as a separate new posting explains, scroogle now seems to be the target of a botnet aimed at swamping its servers. As Brandt goes on to say:

“Google has a few hundred thousand servers, while Scroogle has six. They can put up with sites that spread malware, but our bandwidth is limited. Even if Google relents and the output=ie interface returns, this Scroogle malware problem could still be increasing at that point. Eventually it alone might shut down Scroogle.”

Sad. I hate to see the little guy lose out.

Permanent link to this article: https://baldric.net/2010/07/04/scroogle-is-having-a-problem/

this is a politics free zone

Well, I have cast my vote. Let’s hope we get the result we need.

Permanent link to this article: https://baldric.net/2010/05/06/this-is-a-politics-free-zone/

email address images

Adding valid email addresses to web sites is almost always a bad idea these days. Automated ‘bots routinely scan web servers and harvest email addresses for sale to spammers and scammers. And in some cases, email addresses harvested from commercial web sites can be used in targeted social engineering attacks. So, posting your email address to a website in a way which is useful to a human being, but not to a ‘bot, has to be a “good thing” (TM). One way of doing so is to use an image of the address rather than the text itself. Of course this has the disadvantage that the address will not be immediately usable by client email software (unless, of course, you defeat the object of the exercise by adding an html “mailto” tag to the image) but it should be no big deal for someone who wants to contact you to write the address down.

There are a number of web sites which offer a (free) service which allows you to plug in an email address and then download an image generated from that address. However, I can’t get over the suspicion that this would be an ideal way to actually harvest valid email addresses, moreover addresses which you could be pretty certain the users did not want exposed to spammers. Call me paranoid, but I prefer to control my own privacy.

There are also a number of web sites (and blog entries) describing how to use netpbm tools to create an image from text – one of the better ones (despite its idiosyncratic look) is at robsworld. But in fact it is pretty easy to do this in gimp. Take a look at the address below:

This was created as follows:

open gimp and create a new file with a 640×480 template (actually any template will do);
select the text tool and choose a suitable font size, colour etc;
enter the text of the address in the new file;
select image -> autocrop image;
select layer -> Transparency -> Colour to Alpha;
select from white (the background colour) to alpha;
select save-as and use the file extension .png – you will be prompted to export as png.

Now add the image to your web site.
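
If you prefer the command line, imagemagick can do much the same job in one step (a sketch – the address and output filename are just examples, and you can add a -font option if you want a particular typeface):

$ convert -background none -fill black -pointsize 36 label:'someone@example.net' email-address.png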

Permanent link to this article: https://baldric.net/2010/05/03/email-address-images/

ubuntu 10.04 – minor, and some not so minor, irritations

If and when the teething problems in 10.04 are fixed and the distro looks stable enough to supplant my current preferred version, I will be faced with one or two usability issues. In this version, canonical have taken some design decisions which seem to have some of the fanbois frothing at the mouth. The most obvious change in the newly applied “light” theme is the move of the window control buttons from the top right to the top left (a la Mac OSX). Personally I don’t find this a problem, but it seems to have started all sorts of religious wars and has apparently even resulted in Mark Shuttleworth being branded as a despot because he had the temerity to suggest that the ubuntu community was not a democracy. Design decisions are taken by the build team, not by polling the views of the great unwashed. In my view that is how it should be. The great beauty of the free software movement is the flexibility and freedom it gives its users to change anything they don’t like. Hell, you can even build your own linux distro if you don’t like any of the (multiple) offerings available. Complaining about a design decision in one distro simply means that the complainant hasn’t understood the design process, and further, probably doesn’t understand that if he or she doesn’t like it, then they are perfectly free to change that decision on their own implementation.

In fact, it is pretty easy to change the button layout. To do so, simply run “gconf-editor” then select apps -> metacity -> general from the left hand menu. Now highlight the button_layout attribute and change the entry as follows:

change
close,minimize,maximize:
to
:minimize,maximize,close

i.e. move the colon from the right hand end of the line to the left and relocate the close button to the outside. Bingo, your buttons are now back where god ordained they should be and all is right in the universe.
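
If you prefer the terminal, the same change can be made with gconftool-2 (a one line sketch which simply sets the same metacity key directly):

$ gconftool-2 --type string --set /apps/metacity/general/button_layout ":minimize,maximize,close"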

Presentation issues aside, there are some more fundamental design issues which are indicative of a worrying trend. As I noted in the post below, it is now pretty easy to install restricted codecs as and when they are needed. Rhythmbox will happily pull in the codecs needed to play MP3 encoded music with only a minor acknowledgement that the codecs have been deliberately omitted from the shipped distribution for a reason – the format is closed and patent encumbered. Most users won’t care about the implications here, but I think it is only right that they should know the implications of using a closed format before accepting it. It is also worth bearing in mind that some software (including that necessary to watch commercial DVDs) is deliberately not shipped because the legal implications of doing so are problematic in many countries.

So, whilst from a usability perspective, I may applaud the decisions which have made it easy for the less technically savvy users to get their multimedia installations up and running with minimal difficulty, I find myself more than a little unhappy with the implications.

But it gets worse. Enter ubuntu one.

Ubuntu one attempts to do for ubuntu what iTunes does for Apple (but without the DRM one hopes….). The new service is integrated with rhythmbox and allows users to search for and then pay for music on-line. The big problem here is that the music is all encoded in MP3 format when ubuntu, as a champion of free software, could have chosen the (technically superior) patent-free ogg vorbis format. The choice smacks of business “realpolitik” in a way that I find disappointing from a company like Canonical. Compare and contrast this approach with the strictly free and open stance taken by Debian and you have to wonder where Canonical is going.

Watch this space. If they introduce DRM in any form there will be an unholy row.

Permanent link to this article: https://baldric.net/2010/05/02/ubuntu-10-04-minor-and-some-not-so-minor-irritations/

ubuntu 10.04 problems

The latest LTS version of ubuntu (10.04, or lucid lynx according to your naming preferences) was released to an eagerly waiting public on 29 April. Long term support (LTS) versions are supported for three years on the desktop and five years on the server instead of the usual 18 months for the normal releases. My current desktop of choice is 8.04 (the previous LTS version) and I will probably move to 10.04 eventually. But not yet.

A wet and windy bank holiday weekend (as this is) meant that my plans to go fishing were put on hold so I downloaded the 10.04 .isos to play with. I grabbed three versions, the 32 and 64 bit desktops and the netbook-remix version. Given that this was a mere day after the release date, I expected a slow response from the mirrors, but I was pleasantly surprised by the download speeds I obtained. Canonical must have put a lot of effort into getting a good range of fast mirrors. The longest download took just over 22 minutes and the fastest came down in just 14 minutes.

I copied the netbook-remix .iso to a USB stick using unetbootin on my 8.04 desktop (later versions of ubuntu ship with a usb startup disk creator) and installed to my AAO netbook with no hitches whatever. The new theme ditches the bright orange (or worse, brown) colour scheme used in earlier versions of ubuntu and looks attractive and professional.

[image: UNR 10.04 desktop]

I spent a short while adding some of my preferred tools and applications and configuring the new installation to handle my multimedia requirements, but all this is now remarkably easy. Even playback of restricted formats (MP3 or AAC audio for example) is eased by the fact that totem (or rhythmbox) will fetch the required codecs for you when first you attempt to play a file which needs them. So, pleasant and easy to use. But I /still/ can’t get sony memory sticks to work.

But the netbook is simply a (mobile) toy. I do not rely upon it as I do my desktop. Any data on the netbook is ephemeral and (usually) a copy of the same data held elsewhere, either on a server in the case of email, or my main desktop. It would not matter if my installation had trashed the netbook, but my desktop is far more important. It has taken me a long time to get that environment working exactly the way I want it, and there is no way I will update it without a lot of testing first.

I am lucky enough to have plenty of spare kit around to play with though and I normally test any distro I like the look of in a virtual machine on an old 3.4 GHz dual core pentium 4 I have. Until this weekend, that box was running a 64 bit installation of ubuntu 9.04 with virtualbox installed for testing purposes. Running a new distro in a virtual machine is normally good enough to give me a feel for whether I would be happy using that distro long term – but it does have some limitations and I really wanted to test 10.04 with full access to the underlying hardware so I decided to wipe the test box and install the 64 bit download. If it worked I could then re-install virtualbox and use the new base system as my test rig in future. If it failed, then all I would have lost was some time on a wet weekend. It failed.

To be fair, the installation actually worked pretty well. My problems arose when I started testing my multimedia requirements. I installed all the necessary codecs and libraries (along with libdvdcss, mencoder, vlc, flash plugins etc, etc) to allow me to waste time watching youtube, MP4 videos and DVDs only to discover that neither of the DVD/CD devices in my test box was recognised. I could not mount any optical medium. This is a big problem for me because I encode my DVDs to MP4 format so that I can watch them on my PSP on the train. Thinking that there might be a problem with the automounter, I tried manually mounting the devices – no go, mount failed consistently because it could not find any media. I could not find any useful messages in any of the logs so I checked the ubuntu forums to see if others were having any similar problems. Yep – I’m not alone. This is a common problem. But it seems that I’m pretty lucky not to have seen a lot more problems (black, or purple, screen of death seems to be a major complaint). I think I’ll wait a month or so before trying again.

Meanwhile, I guess I can always ask for my money back.

Permanent link to this article: https://baldric.net/2010/05/02/ubuntu-10-04-problems/

where are you

I have added a new widget to trivia – a map of the world from clustrmaps which gives a small graphic depicting where in the world the IP addresses associated with readers are supposedly located. Geo location of IP addresses is not a perfect art, but the map given corresponds roughly with what I expect from my logs.

I’ve checked the clustrmaps privacy statement and am reasonably content that neither my, nor (more importantly) your, privacy is compromised by this addition any more than it would be if I linked to any other site. Clustrmaps logs no more information about your visit to trivia than do I.

Besides, the map is pretty.

Permanent link to this article: https://baldric.net/2010/04/18/where-are-you/

there are 10 kinds of people in the world

[image: binary sudoku]

With grateful thanks to xkcd.

Permanent link to this article: https://baldric.net/2010/04/02/there-are-10-kinds-of-people-in-the-world/

webDAV in lighttpd on debian

I back up all my critical files to one of my slugs using rsync over ssh (and just because I am really cautious I back that slug up to another NAS). Most of the files I care about are the obvious photos of friends and family. I guess that most people these days will have large collections of jpeg files on their hard disks whereas previous generations (including myself I might add) would have old shoe boxes filled with photographs.

The old shoe box approach has much to recommend it. Not least the fact that anyone can open that box and browse the photo collection without having to think about user ids or passwords, or having to search for some way of reading the medium holding the photograph. I sometimes worry about how future generations’ lives will be subtly impoverished by the loss of the serendipity of discovery of said old shoe box in the attic. Somehow the idea of the discovery of a box of old DVDs doesn’t feel as if it will give the discoverer the immediate sense of delight which can follow from opening a long forgotten photo album. Old photographs feel “real” to me in a way that digital images will never do. In fact the same problem affects other media these days. I am old enough (and sentimental enough) to have a stash of letters from friends and family past. These days I communicate almost exclusively via email and SMS. And I feel that I have lost some part of me when I lose some old messages in the transfer from one ‘phone to another or from one computing environment to another.

In order to preserve some of the more important photographs in my collection I print them and store them in old fashioned albums. But that still leaves me with a huge number of (often similar) photographs in digital form on my PC’s disk. As I said, I back those up regularly, but my wife pointed out to me that only I really know where those photos are, and moreover, they are on media which are password protected. What happens if I fall under a bus tomorrow? She can’t open the shoebox.

Now given that all the photos are on a networked device, it is trivially easy to give her access to those photos from her PC. But I don’t like samba, and NFS feels like overkill when all she wants is read-only access to a directory full of jpegs. The slug is already running a web server and her gnome desktop conveniently offers the capability to connect to a remote server over a variety of different protocols, including the rather simple option of WebDAV. So all I had to do was configure lighty on the slug to give her that access.

Here’s how:-

If not already installed, then run aptitude to install “lighttpd-mod-webdav”. If you want to use basic authenticated access then it may also be useful to install “apache2-utils” which will give you the apache htpasswd utility. The htpasswd authentication function uses unix crypt and is pretty weak, but we are really only concerned here with limiting browsing of the directory to local users on the local network, and if someone has access to my htpasswd file then I’ve got bigger problems than just worrying about them browsing my photos. There are other authentication mechanisms we can use in lighty if we really care – although I would argue that if you really want to protect access to a network resource you shouldn’t be providing web access in the first place.

To enable the webdav and auth modules, you can simply run “lighty-enable-mod webdav” and “lighty-enable-mod auth” (which actually just create symlinks from the relevant files in /etc/lighttpd/conf-available to /etc/lighttpd/conf-enabled directories) or you can activate the modules directly in /etc/lighttpd/lighttpd.conf or the appropriate virtual-host configuration file with the directives:

server.modules += ( "mod_auth" )
server.modules += ( "mod_webdav" )

lighty is pretty flexible and doesn’t really care where you activate the modules. The advantage of using the “lighty-enable-mod” approach however, is that it allows you to quickly change by running “lighty-disable-mod whatever” at a future date. The symlink will then be removed and so long as you remember to restart lighty, the module activation will cease.
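
In other words, on debian (as root, and remembering the reload), something like:

lighty-enable-mod auth
lighty-enable-mod webdav
/etc/init.d/lighttpd force-reload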

Now to enable the webdav access to the directory in question we need to configure the virtual host along the following lines:

$HTTP["host"] == "slug" {
  # turn off directory listing (assuming that it is on by default elsewhere)
  server.dir-listing = "disable"

  # turn on webdav on the directory we wish to share
  $HTTP["url"] =~ "^/photos($|/)" {
    webdav.activate = "enable"
    webdav.is-readonly = "enable"
    webdav.sqlite-db-name = "/var/run/lighttpd/lighttpd.webdav_lock.db"
    auth.backend = "htpasswd"
    auth.backend.htpasswd.userfile = "/etc/lighttpd/htpasswd"
    auth.require = ( "" => ( "method" => "basic",
                             "realm" => "photos",
                             "require" => "valid-user" ) )
  }
}

Note that the above configuration will allow any named user listed in the file /etc/lighttpd/htpasswd read-only access to the “/photos” directory on the virtual host called “slug” if they can successfully authenticate. Note also, however, that because directory listing is turned off, it will not be possible for that user to access this directory with a web browser (which would be possible if listing were allowed). Happily however, the gnome desktop (“Connect to server” mechanism) will still permit authenticated access and once connected will provide full read-only access to all files and subdirectories of “/photos” in the file browser (which is what we want). The “auth.backend” directive tells lighty that we are using the apache htpasswd authentication mechanism and the file can be found at the location specified (make sure this is outside webspace). The “auth.require” directives specify the authentication method (bear in mind that “basic” is clear text, albeit base64 encoded, so authentication credentials can be trivially sniffed off the network); the “realm” is a string which will be displayed in the dialogue box presented to the user (though it has additional functions if digest authentication is used); “require” specifies which authenticated users are allowed access. This can be useful if you have a single htpasswd file for multiple virtual hosts (or directories) and you wish to limit access to certain users.

Passing authentication credentials in clear over a hostile network is not smart, so I would not expose this sort of configuration to the wider internet. However, the webDAV protocol supports ssl encryption so it would be easy to secure the above configuration by changing the setup to use lighty configured for ssl. I choose not to here because of the overhead that would impose on the slug when passing large photographic images – and I trust my home network……

Now given that we have specified a password file called “htpasswd” we need to create that file in the “/etc/lighttpd” configuration directory thusly:

htpasswd -cm /etc/lighttpd/htpasswd user

The -c switch to htpasswd creates the file if it does not already exist. When adding any subsequent users you should omit this switch or the file will be recreated (and thus overwritten). The “user” argument adds the named user to the file. You will be prompted for the password for that user after running the command. As I mentioned above, by default the password will only be encrypted with crypt; the -m switch forces htpasswd to use the (stronger) MD5 hash. Note that htpasswd also gives the option of SHA (-s switch) but this is not supported by lighty in htpasswd authentication. Make sure that the passwd file is owned by root and is not writeable by non-root users (“chown root:root htpasswd; chmod 644 htpasswd” if necessary).

The above configuration also assumes that the files you wish to share are actually in the directory called “photos” in the web root of the virtual host called “slug”. In my case this is not true because I have all my backup files outside the webspace on the slug (for fairly obvious reasons). In order to share the files we simply need to provide a symlink to the relevant directory outside webspace.
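
That is a one line job (a sketch – the paths here are examples rather than my real layout):

ln -s /home/backup/photos /var/www/slug/photos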

Now whenever the backup is updated, my wife has access to the latest copy of all the photos in the shoebox. But let’s hope I don’t fall under a bus just yet.

Permanent link to this article: https://baldric.net/2010/03/31/webdav-in-lighttpd-on-debian/

what a user agent says about you

I get lots of odd connections to my servers – particularly to my tor relay. Mostly my firewalls bin the rubbish but my web server logs still show all sorts of junk. Occasionally I get interested (or possibly bored) enough to do more than just scan the logs and I follow up the connection traces which look really unusual. I may get around to posting an analysis of all my logs one day.

One of the interesting traces in web logs is the user agent string. Mostly this just shows the client’s browser details (or maybe proxy details) but often I find that the UA string is the signature of a known ‘bot (e.g. Yandex/1.01.001). A good site for keeping tabs on ‘bot signatures is www.botsvsbrowsers.com. But I also find user-agent-string.info useful as a quick reference. If you have never checked before, it can be instructive to learn just how much information you leave about yourself on websites you visit. Just click on “Analyze my UA” (apologies for the spelling, they are probably american) for a full breakdown of your client system.
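
If you fancy seeing what is hitting your own server, the user agent is easy enough to pull out of a standard “combined” format access log (a sketch – adjust the log path to suit your setup):

$ awk -F'"' '{print $6}' /var/log/lighttpd/access.log | sort | uniq -c | sort -rn | head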

Permanent link to this article: https://baldric.net/2010/03/30/what-a-user-agent-says-about-you/

unplugged

My earlier problems with the sheevaplug all seem to have stemmed from the fact that I had installed Lenny to SDHC cards. As I mentioned in my post of 7 March, I burned through two cards before eventually giving up and trying a new installation to USB disk. This seems to have fixed the problem and my plug is now stable. I had a series of problems with the SD cards I used (class 4 SDHC 8 GB cards) which may have been related to the quality of the cards. Firstly the root filesystem would often appear as read-only and the USB drive holding my apt-mirror (mounted as /home2) would similarly appear to be mounted read-only. This seemed to occur about every other day and suggested to me that the plug had seen a problem of some kind and rebooted. But of course since the filesystem was not writeable, there were no logs available to help my investigations.

I persevered for around two weeks during which time I completely rebuilt both the original SD card and another with Martin’s tarball, reflashed uboot with the latest from his site, and reset the uboot environment to the factory defaults before trying again. I also changed /etc/fstab to take out the “errors=remount-ro” entry against the root filesystem, and reduced the number of writes to the card by adding “noatime, commit=180” in the hope that I could a) gain stability, and b) find out what was going wrong. No joy. I still came home to a plug with a /home2 that was either unmounted or completely unreadable or mounted RO. The disk checked out fine on another machine and I could find nothing obvious in the logs to suggest why the damned thing was failing in the first place. Martin’s site says that “USB support in u-boot is quite flaky”. My view is somewhat stronger than that, particularly when the plug boots from another device and then attaches a USB disk.
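
For the record, the sort of fstab entry I ended up with for the root filesystem looked something like this (a sketch – /dev/mmcblk0p1 is just an example, check which partition your root filesystem actually lives on):

/dev/mmcblk0p1  /  ext3  defaults,noatime,commit=180  0  1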

But I don’t give up easily. After getting nowhere with the SDHC card installation from Martin’s tarball, I reset the uboot environment on the plug to the factory default (again) and then ran a network installation of squeeze to a 1TB USB disk (following Martin’s howto). It took me two attempts (I hit the bug in the partitioner on the first installation) but I now have a stable plug running squeeze. It is worth noting here that I had to modify the uboot “bootcmd” environment variable to include a reset (as Martin suggests may be necessary) so that the plug will continue to retry after a boot failure until it eventually loads. The relevant line should read:

setenv bootcmd 'setenv bootargs $(bootargs_console); run bootcmd_usb; bootm 0x00800000 0x01100000; reset'

The plug now boots successfully every second or third attempt. So far it has been up just over ten days now without any of the earlier problems recurring.

My experience appears not to be all that unusual. There has been some considerable discussion on the debian-arm list of late about problems with installation to SDHC cards. Most commentators conclude that wear levelling on the cards (particularly cheap ones) may not be very good. SD cards are sold formatted as FAT or FAT32 (depending on the capacity of the card). Modern journalling filesystems such as ext3 on linux result in much higher read/write rates and the quality of the cards becomes a much greater concern. Perhaps my cards just weren’t good enough.

Permanent link to this article: https://baldric.net/2010/03/30/unplugged/

psp video revisited

I last posted about ripping DVDs to PSP format back in November 2007. Since then I have used a variety of different mechanisms to transcode my DVDs to the MP4 format preferred by my PSP. A couple of years ago I experimented with both winff and a command line front end to ffmpeg called handbrake. Neither were really as successful as I would have liked (though winff has improved over the past few years) so I usually fell back to the mencoder script that works for 95% of all the DVDs I buy.

I have continually upgraded the firmware on my PSP since 2007 so that I am now running version 6.20 (the latest as at today’s date). Somewhere between version 3.72 and now, sony decided to stop being so bloody minded about the format of video they were prepared to allow to run on the PSP. We are still effectively limited to mpeg-4/h.264 video with AAC audio in an mp4 container, but the range of encoding bitrates and video resolutions is no longer as strictly limited as it was back in late 2007. So when going about converting all the DVDs I received for christmas and my last birthday, and considering whether I should move my viewing habits to take advantage of the power of my N900, I recently revisited my transcoding options.

Despite the attractiveness of the N900’s media player I concluded that it still makes sense to use the PSP for several reasons:- it works; the battery lasts for around 7 hours between charges; I have a huge investment in videos encoded to run on it; and most importantly, not using the N900 as intensively as I use the PSP means that I know that my ‘phone will be charged enough to use as a ‘phone should I need it.

But whilst revisiting my options I discovered that the latest version of handbrake (0.9.4) now has a rather nice GUI and it will rip and encode to formats usable by both the PSP and a variety of other hand-held devices (notably apple’s iphone and ipod thingies) quite quickly and efficiently. Unfortunately for me, the latest version is only available as a .deb for ubuntu 9.10 and I am still using 8.04 LTS (because it suits me). A quick search for alternative builds led me to the ppa site for handbrake which gives builds up to version 0.9.3 for my version of ubuntu. See below:

[image: handbrake gui]

This version works so well on my system that I no longer have to use my mencoder script.

Permanent link to this article: https://baldric.net/2010/03/21/psp-video-revisited/