gun, foot, shoot

As a chartered member of the British Computer Society (BCS) I recently received through the post my voting forms for the 2008 AGM. The process gives me the option of voting electronically using a website run by Electoral Reform Services. My security codes (two separate numeric IDs, one of six characters, the other of four) were printed on my personalised letter from the Society. So far so dandy.

However, the following day I received an email from Electoral Reform Services giving me exactly the same information, together with the address of the website where I may cast my votes.

Am I happy? Guess.

Permanent link to this article:

webanalytics – just say no

I have just built myself a new intel core 2 duo based machine to replace one of my older machines which was beginning to struggle under the load of video transcoding I was placing upon it. The new machine is based on an E8400 and is nice and shiny and fast. Because it is a new build, I decided to install the OS and all my preferred applications, tools and utilities from scratch. Yes, I could have just copied my old setup, or at the least, my home directory and system configuration from my older machine, but I chose to do a completely new clean build on top of a clean install of ubuntu 8.04. I did this largely because my older system has been upgraded and “tweaked” so often I am no longer sure exactly what is on there or why. I am sure that it contains a lot of unnecessary cruft and I felt it was time for a clear out. A new build should ensure that I only installed what I actually needed. Of course I copied over my mail, bookmarks and other personal data, but the applications themselves I simply installed from new and then configured to my preferred standard.

Like most modern linux distros, Ubuntu is pretty secure straight out of the box. Gone are the (good old, bad old) days when umpteen unnecessary services were fired up by init or run out of inetd by default. But old habits die hard and I still like to check things over and stop/remove stuff I don’t want, or don’t trust. I also like to check outbound connections because a lot of programs these days have a habit of “calling home” – a habit I dislike. I noticed and cleared up one or two oddities I’d forgotten about (Ubuntu uses ntpdate to call a Canonical server if ntpd is not configured, for example. Since I use my own internal ntp server, this was easy to sort). However, after clearing up, or identifying, all other connections I was left with one outbound http connection I didn’t recognise, and worse, it was to a network I know to be untrustworthy: the omniture network. Omniture is notorious for running deeply suspicious webanalytics services and is used by a whole range of (perfectly respectable) companies who pay them for web usage statistics. But omniture have never successfully explained why they choose to use a domain name which looks like, but isn’t, a local RFC 1918 address from the 16 bit block (192.168.0.0/16). I don’t trust them, and I didn’t like the fact that my shiny new machine was connecting to them. So what was responsible? And what to do?

Well, the “what to do” bit is easy – just blackhole the whole network at my firewall. But that feels a bit OTT, even for me. A bit of thought, and a bit of digging, gave me a better solution, and one which incidentally solves a range of related problems. What I actually needed was a way of preventing outbound connections to any hosts I don’t like or don’t trust. So long as the IP addresses of the hosts are not hard coded in the application (as sometimes happens in trojans), the classic way to do this is to simply map the hostname to the local loopback address in your hosts file. But maintaining such a list by hand can become tedious. Fortunately, it turns out that a guy called Dan Pollock maintains a pretty comprehensive hosts file on-line. Result.
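For anyone who hasn’t seen the trick before, it looks like this (the entries below are purely illustrative – a real blocklist like Dan’s names real ad servers):

```
# /etc/hosts – anything mapped to the loopback address goes nowhere   localhost   adserver.example.com   telemetry.example.net
```

Any application which resolves one of the listed names via the normal system resolver gets and the connection dies quietly at home.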

Because I run my own local DNS server (DNSmasq on one of the slugs) it was easy for me to add Dan’s host file to my central hosts file. So now all my machines will routinely bin any attempted outbound connection to adservers, porn sites, or whatever in the list. The downside, of course, is that this is a bit of a blunt instrument and may cause some difficulty with some sites (ebay for example). But I’m prepared to put up with that whilst I fine-tune the list. I can also pull the list regularly and automatically via cron so that I stay up to date (but of course I won’t just blindly update my DNS; I’ll pull the file in for inspection and manual substitution…).
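A minimal sketch of that “pull for inspection” step might look like the following. The URL and file locations are placeholders, not the real ones; the only substantive logic is the clean-up filter, which strips DOS line endings, comments and blank lines so the freshly pulled list can be diffed against the installed one before DNSmasq ever sees it.

```shell
#!/bin/sh
# Sketch: fetch a remote hosts blocklist into a staging area for manual
# review.  URL and paths are hypothetical placeholders.
set -eu

clean_hosts() {
    # Remove carriage returns, comments, trailing whitespace and blank
    # lines so the result diffs cleanly against the installed list.
    tr -d '\r' | sed -e 's/#.*//' -e 's/[[:space:]]*$//' | grep -v '^$'
}

# The cron job would run something like:
#   wget -q -O /var/tmp/hosts.new 'http://example.com/hosts.txt'
#   clean_hosts < /var/tmp/hosts.new > /var/tmp/hosts.clean
#   diff /etc/hosts.blocklist /var/tmp/hosts.clean   # then inspect by hand
```

Only after eyeballing the diff would the cleaned file be dropped into place and DNSmasq restarted.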

So what was making the connection? Well it looks to me as if adobe is the culprit. I had installed the acroreader plugin for firefox.

Silly me. Must remember to avoid proprietary software.

(Oh, and you just have to love omniture’s guidance on how to opt-out of their aggregation and analysis. You have to install an opt-out cookie. Oh yes, indeedy, I’ll do that.)


french slugs?

In an earlier post I speculated that the CherryPal PC might be a possible option for users considering replacements for the slug. But that device has yet to hit the streets and is beginning to look suspiciously like vapourware. However, linuxdevices, the site devoted to linux on embedded devices, wrote about the interesting looking french made linutop some months back. The linutop site looks to me as if it is actually taking orders.


Now if they could just ship one with two ethernet ports, it might make a good base for a firewall.


chrome *can* get rusty

Amidst all the hype and hullabaloo about Google’s chrome, el reg tells it like it is. Yes, “it’s a f***ing web browser”.

You just have to love the reg.


where did my bandwidth go

Have you ever wondered what was eating your network? Would you like to be able to check exactly which application was responsible for that sudden spike in outbound traffic? NetHogs might help. This neat little utility calls itself a “small ‘net top’ tool”, and that is exactly what it is. NetHogs groups bandwidth usage by PID so you can immediately see which application is responsible and take whatever action you deem appropriate.
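There is no magic in the per-PID grouping, incidentally: the kernel already publishes its socket table under /proc, and tools in this space work by joining the socket inodes listed there against the file descriptors in /proc/&lt;pid&gt;/fd to pin a connection on a process. You can peek at the raw material yourself:

```shell
# The kernel's view of TCP sockets: local/remote address (in hex), state,
# and the inode that ties each socket back to a process's fd table.
head -n 3 /proc/net/tcp
```

Matching those inodes up by hand quickly teaches you why a tool like NetHogs is worth having.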


(Oh, and if you want a nice graphical representation of the connections your PC is making whilst you are using it, I recommend you install etherape. It can be a highly educational (not to say scary) experience to leave etherape running whilst you fire up your browser. You will find that your PC is making HTTP connections all over the place. Now try leaving it running whilst you are not doing anything and watch what happens.)


trusting DNS

Dan Kaminsky has (quite rightly) been hitting the press a lot in the weeks since 8 July when he announced the work done to fix a flaw he had discovered in DNS. The vulnerability itself was new, but its impact (cache poisoning) was not. Indeed, we’ve known about the dangers of poisoned DNS caches for some years now. Kaminsky originally took a lot of flak about his announcement, its timing (to coincide with Microsoft’s “patch tuesday”), his reluctance to discuss details (“trust me, it’s dangerous. I’ll tell you all about it later”) and his apparent willingness to “talk up” the issue with the non-specialist press. But all that aside, he deserves immense credit for highlighting the flaw and herding all the cats necessary to get vendors on board to create patches. He has also since been as good as his word and described the problem in detail.

However, I have a big problem with one of his blog entries: “Here comes the cavalry” where he says “Note, if you must forward, it’s most secure to do so to a name server that’s still on your network but happens to be patched — but in a pinch, you’re much better off forwarding to OpenDNS or another free and patched name service provider than going direct (and insecure).”

In my view, this is hugely ironic. Cache poisoning means that you cannot trust the answer your DNS server provides. I do not trust the answer OpenDNS provides. OpenDNS violates principles which in my view are essential to an open, transparent and trustworthy network. They hijack queries and give incorrect answers. For example, they do not reply with NXDOMAIN to a query for a non-existent host or domain. They also hijack queries aimed specifically at Google. See the dig queries below for examples.

I first came across OpenDNS when I installed packetprotector on an Asus wireless router I was playing with. OpenDNS servers were hardwired as the default DNS hosts in that package. I run my own dns internally using DNSmasq, and the hosts file on that system contains the private addresses of my internal servers. Imagine my surprise then when, during testing, I pinged one of my internal hosts from the Asus only to get a response back from a server whose address was not on my network at all. This should not happen. It is a bad thing (TM), regardless of how OpenDNS may attempt to portray this as “helping” the community.

In a discussion about Google’s toolbar, David Ulevitch of OpenDNS said, “The solution to this problem was to route Google requests through a machine we run to check if the request is a typo or one of your shortcuts. If it is a typo or shortcut then we do what we always do, just fix the typo or launch your shortcut and send you off on your way. If it’s not one of those two things, we pass it on to Google for them to give you search results. This solution provides the best of both worlds: OpenDNS users get back the features that they love and Google continues to operate without problems.”

Wrong. I do not want some third party fiddling with my DNS requests on the spurious grounds that I may have mistyped some hostname.

Make up your own mind. There is extensive discussion about OpenDNS on-line. See in particular the commentary at the scream and on wikipedia. Personally, I prefer to use my ISP’s DNS servers. I have a contractual relationship with them and I can therefore expect them to provide me with a service which works, and is trustworthy (for some definition of “trust”). Oh, and they patched their DNS servers very, very, quickly.

Now some sample dig results:

First, using my default setup (DNSmasq forwarding to my ISP):

mick@slug:~$ dig

; <<>> DiG 9.4.2-P1 <<>>
;; global options: printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 47551
;; flags: qr rd ra; QUERY: 1, ANSWER: 5, AUTHORITY: 7, ADDITIONAL: 0

;; QUESTION SECTION:
; IN A

;; ANSWER SECTION:
604798 IN CNAME
299 IN A
299 IN A
299 IN A
299 IN A

;; AUTHORITY SECTION:
86397 IN NS
86397 IN NS
86397 IN NS
86397 IN NS
86397 IN NS
86397 IN NS
86397 IN NS

;; Query time: 30 msec
;; SERVER:
;; WHEN: Sat Jul 26 18:03:08 2008
;; MSG SIZE rcvd: 228

Now use openDNS:

mick@slug:~$ dig @

; <<>> DiG 9.4.2-P1 <<>> @
;; global options: printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 40840
;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
; IN A

;; ANSWER SECTION:
30 IN CNAME
30 IN A
30 IN A

;; Query time: 19 msec
;; SERVER:
;; WHEN: Sat Jul 26 18:04:54 2008
;; MSG SIZE rcvd: 104

(Note the difference: instead of the real answer, complete with authority records, OpenDNS hands back a 30 second TTL CNAME and its own choice of addresses, with no authority section at all.)

Are you happy with that? I'm damned if I am.


replacement for the slug

I noted in an earlier post that Linksys were ceasing production of the NSLU2. There are now a variety of NAS systems coming onto the market which might make good replacements – but most of them look expensive when compared to the slug. However I’ve just seen a review of a box which looks as if it might be just up my street – the oddly named CherryPal PC, based on Freescale’s MPC5121e mobileGT processor.

CherryPal PC

The specs look very interesting – indeed, if the press release is to be believed, the box has “256GB of DDR2 DRAM” to go with Freescale’s 800 MIPS MPC5121e processor.

Methinks this may be a typo.


implementing mailman and postfix with lighttpd on debian

I recently needed to set up a mailing list for a group of friends (my bike club). I had become tired of mail bounces and failures because we were all relying on an out of date list of addresses originally cobbled together by one member. That list of addresses was routinely used in “reply all” messages to others about forthcoming social events. An obvious improvement would be a mail list – ideally one which members could manage themselves. I originally looked at a quick and dirty solution using a mail forwarding mechanism which would simply explode mail sent to one address out to the complete list of aliases (I can be lazy). However I discovered that neither my mail/web provider nor my ISP really offered this facility in quite the way I wanted it. So an obvious way forward would be to do it myself using a slug.

I’ve used mailman in the past and knew it offered everything I wanted (including a web interface for membership management and access to archived messages), but I don’t (or rather didn’t) run a mail server on my home network. So that had to be fixed first. The necessary ingredients for the list management were: mailman itself; an MTA (I chose postfix because I know it, like it and find the default debian exim unnecessarily complicated); and a webserver (I was already running lighttpd on both slugs because it performs better than apache on low memory machines). I also wanted to use SSL encryption on the webserver to preserve password integrity (but not to authenticate the webserver itself).

There were a number of steps required to get this all working to my satisfaction. These were:

Step 1 – upstream SMTP authentication using TLS with postfix;
Step 2 – getting a mailman listserver running with postfix;
Step 3 – configuring lighttpd with SSL for mailman;
Step 4 – putting it all together and letting the world in.

It all worked, but the main drawback turned out to be the performance of the slug when running mailman. The combination of SSL encryption and mailman python scripts is too big a hit for a device with only 32 MB of RAM. It would be perfectly feasible to run mailman on the slug if we limited ourselves to management by email alone (i.e. ignored the web management interface). But doing this would severely limit its functionality, and in that case we might as well look at alternative list managers such as Majordomo or Listproc. In the end, the attractiveness of mailman’s web interface meant that I moved it all off the slug and onto a more powerful platform (also running debian). Nevertheless, the documentation here may be of use to anyone considering a mailman install with postfix and lighttpd on any linux distro. The notes on SSL usage at step 3 can, of course, also be applied (with suitable modification) to apache or any other webserver supporting SSL certificates.
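For reference, the lighttpd side of step 3 boils down to something like the fragment below. This is a sketch only – the alias target and certificate path are assumptions and will vary by distro; the point is simply that mailman’s CGI scripts need mod_cgi plus an alias, and the SSL socket needs a pem file (self-signed is fine here since I only wanted password confidentiality, not server authentication):

```
# /etc/lighttpd/lighttpd.conf (fragment)
server.modules += ( "mod_cgi", "mod_alias" )

# mailman's management interface is a set of CGI binaries
alias.url += ( "/cgi-bin/mailman" => "/usr/lib/cgi-bin/mailman" )
$HTTP["url"] =~ "^/cgi-bin/mailman" {
    cgi.assign = ( "" => "" )
}

# SSL so that list passwords don't cross the wire in clear
$SERVER["socket"] == ":443" {
    ssl.engine  = "enable"
    ssl.pemfile = "/etc/lighttpd/ssl/server.pem"
}
```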

Permanent link to this article:


An apt-get dist-upgrade (to bring the kernel up to date and install some new patches) on the slugs killed the webcam. Of course I should have remembered that the gspca module was built against the old kernel and might fail. One quick “m-a auto-install gspca” later and all is working again.

Of course the kernel update required a reboot so my uptime is now back to zero, but security is more important than a long running time.


slugs are history

Jim Buzbee, of batbox fame and one of the original NSLU2 hackers, apparently gave a presentation about the history of slug hacking at the Boulder Linux Users Group. A PDF copy of his presentation can be found on his site.

Jim also notes that Linksys are ending production of the NSLU2 after four years of life. Better get your hands on a few now before they all disappear – or end up at twice the price on ebay.


mine’s longer than yours

You could regard this as another pointless entry to go alongside the webcam. But hey – so what.

I had cause to check the uptime on my slugs a little while ago now that they are largely stable and providing the services I want. After doing so I thought it would be good to be able to check this from a web page and a short search later came across Matthew Trent’s UD daemon. I’ve now made my webcam slug uptime public. Let’s see how high this will get.


backtrack 3 released

Any half decent sysadmin will routinely test the security of his or her own systems. A good, and sensible, sysadmin will follow up those tests with an independent security audit by a professional company – preferably one which is a member of a recognised industry body (such as CREST). Finding the holes in your security mechanisms (and there will be some – probably more than you will be happy about) before the bad guys do is essential if you want to sleep at night (and keep your job).

There are a huge number of security testing tools available for free if you know where to look. Most sysadmins keep a toolbox of their favourites (nmap, nessus, ettercap, dsniff et al.) to hand ready for testing any new build. But it can sometimes be difficult to know just which tool to use, and where to get it. Enter backtrack. I first came across this collection of tools as recently as february 2006 and found it an excellent resource. Essentially backtrack is a collection of all the security testing tools you are likely to need packaged into one linux distribution. Think of it as a knoppix for security testing. A complete list of all the tools in the collection can be seen here.

Backtrack Version 3 has just hit the streets. Get it here.

(Oh, and don’t think that using a toolset like this makes you a pen-tester. It doesn’t. What it might do is make you more security aware, and a better sysadmin.)


dental dos

On Tuesday 17 June, Craig Wright, supposedly “Manager of Risk Advisory Services” in an Australian Company called “BDO Kendalls”, posted a rather odd note to Bugtraq and a few other security related lists titled “Hacking Coffee Makers”. In that posting he said that the Jura F90 Coffee maker (which can apparently be networked) was vulnerable to remote attack. His post said that the vulnerabilities allowed the attacker to:

“- Change the preset coffee settings (make weak or strong coffee);
– Change the amount of water per cup (say 300ml for a short black) and make a puddle;
– Break it by engineering settings that are not compatible (and making it require a service);”

but worse

“the software allows a remote attacker to gain access to the Windows XP system it is running on at the level of the user”.

Now I’ve been a subscriber to bugtraq for longer than I care to remember and I’ve seen some odd posts in the past – particularly around the beginning of April, but in June? I initially dismissed this as just one more nut trying to raise his profile in the security community, but since tuesday the story has been picked up by a range of commentators. Some have found the story simply amusing (slashdot – “All Your Coffee Are Belong To Us”), others such as CNET seem to have taken it only slightly more seriously. OK, the bits about attacking the coffee maker itself may be amusing, but there is a serious point here if Wright is correct in his statement that attacking the coffee jug gets you access to the windows system its management software runs on. Certainly Thor of Hammerofgod has taken the post seriously enough to question Wright’s professional judgement in posting details of a vulnerability before alerting the manufacturer.

The point to note is that as more and more consumer devices become networkable (and networked) then the attack surface gets larger and larger. And it is a fairly good bet that the manufacturer of (say) a networked microwave oven is not going to take network security as seriously as would the manufacturer of a router, NAS, or mainframe.

Oh and Wright has done it again today. His latest post to bugtraq is titled “Oral B SmartMonitor Information Disclosure Vulnerability and DoS”. It’s about a “remote exploitation of an information disclosure vulnerability in Oral B’s SmartGuide management system [that] allows attackers to obtain sensitive information.”

That’s right, he’s talking about a toothbrush.

Some people have way too much time on their hands.


xkcd on the openssl fiasco

I’ve had my attention drawn to Randall Munroe’s take on the openssl coding change problem.




debian and the openssl flaw

Ben Laurie wrote about the Debian SSL problem a couple of weeks ago. That particular post has attracted a huge response which is well worth reading if you care about free open source software and/or privacy/security issues (or even if you don’t). The key point to take from the discussion is that about two years ago the Debian development team “fixed” a perceived problem in openssl and in so doing actually introduced a fairly serious vulnerability. The net result of this change was that anyone using Debian or a related distribution such as Ubuntu to generate a cryptographic key based on the “fixed” openssl libraries actually left themselves open to compromise. To quote from the Debian advisory “the random number generator in Debian’s openssl package is predictable. This is caused by an incorrect Debian-specific change to the openssl package (CVE-2008-0166). As a result, cryptographic key material may be guessable… affected keys include SSH keys, OpenVPN keys, DNSSEC keys, and key material for use in X.509 certificates and session keys used in SSL/TLS connections.”

Fortunately, it seems that GPG keys are not affected (and in any case, my own key was generated some time ago and not on a Debian based system) but this is pretty serious nonetheless and means that a great many people (myself included) have been relying on keys which it turns out are vulnerable to attack. I have now regenerated all the keys I suspect were vulnerable, but that does not leave me feeling very comfortable about past usage.

I don’t want to denigrate the Debian team in any way, but I can’t help but agree with Ben Laurie’s view that the proper place to fix any perceived flaw in an open source product, particularly one as important as a security critical component, is in the upstream package – not in the distribution.


recursion: see recursion

I have written about how I use one of my slugs to backup my internal files via rsync over ssh. Well it turns out I made a pretty silly mistake in my rsync options. I thought I’d been careful in specifying the files I specifically wanted excluded from the backup (ephemeral stuff, thumbnail images, some caches such as my browser cache etc.) but I missed one crucial directory and it bit me – and sent the slug’s load average through the roof.

GNOME 2.22 introduced GVFS, a new network-transparent virtual filesystem layer. GVFS is a userspace virtual file system with backends for protocols like SFTP and FTP. GVFS creates a (hidden) directory called .gvfs in your home directory and uses this as a mount point when you open a connection via SSH, FTP, SMB, WebDAV etc from the “Places -> Connect to Server” menu option. So if you open an SFTP connection to a server called “slug”, it will mount that connection in .gvfs. Try it yourself.

Now guess what I had mounted on my desktop at the time my rsync cron job ran. The slug spent some frantic time copying itself to itself until I noticed that it seemed to be inordinately busy, diagnosed the problem and managed to kill the rsync and clear up the mess.
