more DNS silliness

I came across an interesting post on the Avert Labs site recently. That post pointed to an earlier SANS posting which, in turn, referenced a Symantec discussion of a new trojan called Trojan.Flush.M. This trojan is an interesting variant of a class of trojans which hijack local DNS settings to force the compromised machine to use a hostile DNS server. The hostile server then redirects the user to fake sites – usually banks – in an attempt to extract identification and authentication credentials. As the Avert post says, various DNS changing tactics have been employed in the past, but the clever twist in this latest trojan is that it subverts the use of DHCP on any network which uses that protocol to manage client system settings.

Once installed on a (windows) PC, the trojan creates a new service which allows that machine to send fake DHCP offer packets to any requesting client on the network. The DHCP offer includes the address of a hostile DNS server outside the network. The neat point here is that any client system on the network, regardless of the operating system in use, can then be subverted – and without some network traffic analysis it will be very difficult to find out how the subverted machine was compromised.
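As an aside, the traffic analysis needn't be heavyweight. A sketch of the sort of thing I mean is below (interface name purely illustrative) – tcpdump run verbosely will decode DHCP offers, including the DNS server each one hands out:

mick@slug:~$ sudo tcpdump -v -n -i eth0 'udp and (port 67 or port 68)'

Any offer advertising a name server you don't recognise, coming from a machine which isn't your DHCP server, points straight at the culprit.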

But, and this is a big but, the whole attack fails when faced with a properly designed and well managed network. Consider: for the attack to be successful, the subverted client must be able to make DNS requests directly to the hostile server. But no corporate network should allow a client system direct access to the net. All DNS requests should be answered by a local DNS server, and that server should be the only machine allowed to forward DNS requests to the outside world. Indeed, that server should probably only forward DNS requests to specific servers on the company’s service provider network. The bad news, of course, is that any home or SOHO network is unlikely to be so well designed and protected.
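For illustration, the egress filtering I have in mind boils down to something like the following iptables rules on the gateway (all addresses invented for the example):

# allow only the local resolver (192.168.1.2) to query the ISP's name server (203.0.113.53)
iptables -A FORWARD -s 192.168.1.2 -d 203.0.113.53 -p udp --dport 53 -j ACCEPT
iptables -A FORWARD -s 192.168.1.2 -d 203.0.113.53 -p tcp --dport 53 -j ACCEPT
# no other DNS traffic leaves the network
iptables -A FORWARD -p udp --dport 53 -j DROP
iptables -A FORWARD -p tcp --dport 53 -j DROP

With rules of that shape in place, a client pointed at a hostile external resolver simply gets no answer.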

One of the respondents to the Avert post seems to have missed the point entirely though. He said: “All the more reason to consider using trusted third party DNS networks, such as OpenDNS.” Oh dear, that is so wrong in so many ways. Just think that through, will you, Jason?

Permanent link to this article: https://baldric.net/2008/12/24/more-dns-silliness/

gun, foot, shoot

As a chartered member of the British Computer Society (BCS) I recently received through the post my voting forms for the 2008 AGM. The process gives me the option of voting electronically on a website run by Electoral Reform Services. My security codes (two separate numeric IDs, one of six digits, the other of four) were printed on my personalised letter from the Society. So far so dandy.

However, the following day I received an email from Electoral Reform Services giving me exactly the same information, together with the address of the website where I may cast my votes.

Am I happy? Guess.

Permanent link to this article: https://baldric.net/2008/09/25/gun-foot-shoot/

webanalytics – just say no

I have just built myself a new intel core 2 duo based machine to replace one of my older machines which was beginning to struggle under the load of video transcoding I was placing upon it. The new machine is based on an E8400 and is nice and shiny and fast. Because it is a new build, I decided to install the OS and all my preferred applications, tools and utilities from scratch. Yes, I could have just copied my old setup, or at the least, my home directory and system configuration from my older machine, but I chose to do a completely new clean build on top of a clean install of ubuntu 8.04. I did this largely because my older system has been upgraded and “tweaked” so often I am no longer sure exactly what is on there or why. I am sure that it contains a lot of unnecessary cruft and I felt it was time for a clear out. A new build should ensure that I only installed what I actually needed. Of course I copied over my mail, bookmarks and other personal data, but the applications themselves I simply installed from new and then configured to my preferred standard.

Like most modern linux distros, Ubuntu is pretty secure straight out of the box. Gone are the (good old, bad old) days when umpteen unnecessary services were fired up by init or run out of inetd by default. But old habits die hard and I still like to check things over and stop/remove stuff I don’t want, or don’t trust. I also like to check outbound connections because a lot of programs these days have a habit of “calling home” – a habit I dislike. I noticed and cleared up one or two oddities I’d forgotten about (Ubuntu uses ntpdate to call a Canonical server if ntpd is not configured, for example. Since I use my own internal ntp server, this was easy to sort).

However, after clearing or identifying all other connections I was left with one outbound http connection I didn’t recognise, and worse, it was to a network I know to be untrustworthy. The connection was to 66.235.133.2. This machine is on the omniture network. Omniture is notorious for running the deeply suspicious 2o7.net. Omniture markets web analytics services and is used by a whole range of (perfectly respectable) companies who pay for web usage statistics. But omniture has never successfully explained why it chooses to use a domain name which looks like, but isn’t, a local RFC 1918 address from the 16 bit block (e.g. 192.168.112.207). I don’t trust them, and I didn’t like the fact that my shiny new machine was connecting to them. So what was responsible? And what to do?

Well, the “what to do” bit is easy – just blackhole the whole 66.235.128.0 – 66.235.159.255 network at my firewall. But that feels a bit OTT, even for me. A bit of thought, and a bit of digging, gave me a better solution, and one which incidentally solves a range of related problems. What I actually needed was a way of preventing outbound connections to any hosts I don’t like or don’t trust. So long as the IP addresses of the hosts are not hard coded in the application (as sometimes happens in trojans), the classic way to do this is simply to map the hostname to the local loopback address in your hosts file. But this can become tedious. Fortunately, it turns out that a guy called Dan Pollock maintains a pretty comprehensive hosts file on-line at someonewhocares.org. Result.
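For anyone unfamiliar with the trick, the entries look like this (names picked purely as examples):

# /etc/hosts – point hosts we don't trust at the loopback address
127.0.0.1    2o7.net
127.0.0.1    112.2o7.net

Any application resolving one of those names now gets 127.0.0.1 back, and the connection goes nowhere.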

Because I run my own local DNS server (DNSmasq on one of the slugs) it was easy for me to add Dan’s file to my central hosts file. So now all my machines will routinely bin any attempted outbound connection to adservers, porn sites, or whatever else is in the list. The downside, of course, is that this is a bit of a blunt instrument and may cause some difficulty with some sites (ebay for example). But I’m prepared to put up with that whilst I fine tune the list. I can also pull the list regularly and automatically via cron so that I stay up to date (but of course I won’t just blindly update my DNS, I’ll pull the file in for inspection and manual substitution...).
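The pull itself is a one-liner. Something like the cron entry below (URL and path from memory, so check them before use) fetches the list to a holding area for me to eyeball:

# fetch Dan's list every monday morning for manual review
0 6 * * 1 wget -q -O /tmp/hosts.candidate http://someonewhocares.org/hosts/

Only once I've read through the candidate file does any of it get merged into the hosts file DNSmasq serves.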

So what was making the connection? Well it looks to me as if adobe is the culprit. I had installed the acroreader plugin for firefox.

Silly me. Must remember to avoid proprietary software.

(Oh, and you just have to love omniture’s guidance on how to opt-out of their aggregation and analysis. You have to install an opt-out cookie. Oh yes, indeedy, I’ll do that.)

Permanent link to this article: https://baldric.net/2008/09/12/webanalytics-just-say-no/

french slugs?

In an earlier post I speculated that the CherryPal PC might be a possible option for users considering replacements for the slug. But that device has yet to hit the streets and is beginning to look suspiciously like vapourware. However, linuxdevices, the site devoted to linux on embedded devices, wrote about the interesting-looking, French-made linutop some months back. The linutop site looks to me as if it is actually taking orders.

linutop

Now if they could just ship one with two ethernet ports, it might make a good base for a firewall.

Permanent link to this article: https://baldric.net/2008/09/12/french-slugs/

chrome *can* get rusty

Amidst all the hype and hullabaloo about Google’s chrome, el reg tells it like it is. Yes, “it’s a f***ing web browser”.

You just have to love the reg.

Permanent link to this article: https://baldric.net/2008/09/08/chrome-can-get-rusty/

where did my bandwidth go

Have you ever wondered what was eating your network? Would you like to be able to check exactly which application was responsible for that sudden spike in outbound traffic? NetHogs might help. This neat little utility calls itself a “small ‘net top’ tool”, and that is exactly what it is. NetHogs groups bandwidth usage by PID so you can immediately see which application is responsible and take whatever action you deem appropriate.
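Usage could hardly be simpler. Point it at the relevant interface and watch (assuming, for the example, that eth0 is the busy one):

mick@slug:~$ sudo nethogs eth0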

Recommended.

(Oh, and if you want a nice graphical representation of the connections your PC is making whilst you are using it, I recommend you install etherape. It can be a highly educational (not to say scary) experience to leave etherape running whilst you fire up your browser. You will find that your PC is making HTTP connections all over the place. Now try leaving it running whilst you are not doing anything and watch what happens.)

Permanent link to this article: https://baldric.net/2008/08/20/where-did-my-bandwidth-go/

trusting DNS

Dan Kaminsky has (quite rightly) been hitting the press a lot in the weeks since 8 July when he announced the work done to fix a flaw he had discovered in DNS. The vulnerability itself was new, but its impact (cache poisoning) was not. Indeed, we’ve known about the dangers of poisoned DNS caches for some years now. Kaminsky originally took a lot of flak about his announcement, its timing (to coincide with Microsoft’s “patch tuesday”), his reluctance to discuss details (“trust me, it’s dangerous. I’ll tell you all about it later”) and his apparent willingness to “talk up” the issue with the non-specialist press. But all that aside, he deserves immense credit for highlighting the flaw and herding all the cats necessary to get vendors on board to create patches. He has also since been as good as his word and described the problem in detail.

However, I have a big problem with one of his blog entries: “Here comes the cavalry” where he says “Note, if you must forward, it’s most secure to do so to a name server that’s still on your network but happens to be patched — but in a pinch, you’re much better off forwarding to OpenDNS or another free and patched name service provider than going direct (and insecure).”

In my view, this is hugely ironic. Cache poisoning means that you cannot trust the answer your DNS server provides. I do not trust the answer OpenDNS provides. OpenDNS violates principles which in my view are essential to an open, transparent and trustworthy network. They hijack queries and give incorrect answers. For example, they do not reply with NXDOMAIN to a query for a non-existent host or domain. They also hijack queries aimed specifically at Google. See the dig queries below for examples.
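The NXDOMAIN behaviour is easy to test for yourself. Fire a query for a deliberately nonsensical name (the one below is invented) at an OpenDNS server and compare the status and answer sections with what your normal resolver returns:

mick@slug:~$ dig wibblewobble.nosuchdomain.invalid @208.67.222.222

Where you should see status: NXDOMAIN and no answer, OpenDNS hands back the address of one of its own “guide” servers.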

I first came across OpenDNS when I installed packetprotector on an Asus wireless router I was playing with. OpenDNS servers were hardwired as the default DNS hosts in that package. I run my own DNS internally using DNSmasq, and the hosts file on that system contains the private addresses of my internal servers. Imagine my surprise then when, during testing, I pinged one of my internal hosts from the Asus only to get a response back from a server with the address “208.69.34.132”. This should not happen. It is a bad thing (TM), regardless of how OpenDNS may attempt to portray it as “helping” the community.

In a discussion about Google’s toolbar, David Ulevitch of OpenDNS said, “The solution to this problem was to route Google requests through a machine we run to check if the request is a typo or one of your shortcuts. If it is a typo or shortcut then we do what we always do, just fix the typo or launch your shortcut and send you off on your way. If it’s not one of those two things, we pass it on to Google for them to give you search results. This solution provides the best of both worlds: OpenDNS users get back the features that they love and Google continues to operate without problems.”

Wrong. I do not want some third party fiddling with my DNS requests on the spurious grounds that I may have mistyped some hostname.

Make up your own mind. There is extensive discussion about OpenDNS on-line. See in particular the commentary at the scream and on wikipedia. Personally, I prefer to use my ISP’s DNS servers. I have a contractual relationship with them and I can therefore expect them to provide me with a service which works, and is trustworthy (for some definition of “trust”). Oh, and they patched their DNS servers very, very, quickly.

Now some sample dig results:

First, using my default (DNSmasq forwarding to my ISP):

mick@slug:~$ dig www.google.com

; <<>> DiG 9.4.2-P1 <<>> www.google.com
;; global options: printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 47551
;; flags: qr rd ra; QUERY: 1, ANSWER: 5, AUTHORITY: 7, ADDITIONAL: 0

;; QUESTION SECTION:
;www.google.com.			IN	A

;; ANSWER SECTION:
www.google.com.		604798	IN	CNAME	www.l.google.com.
www.l.google.com.	299	IN	A	216.239.59.104
www.l.google.com.	299	IN	A	216.239.59.147
www.l.google.com.	299	IN	A	216.239.59.99
www.l.google.com.	299	IN	A	216.239.59.103

;; AUTHORITY SECTION:
l.google.com.		86397	IN	NS	e.l.google.com.
l.google.com.		86397	IN	NS	a.l.google.com.
l.google.com.		86397	IN	NS	c.l.google.com.
l.google.com.		86397	IN	NS	d.l.google.com.
l.google.com.		86397	IN	NS	b.l.google.com.
l.google.com.		86397	IN	NS	g.l.google.com.
l.google.com.		86397	IN	NS	f.l.google.com.

;; Query time: 30 msec
;; SERVER: 192.168.10.10#53(192.168.10.10)
;; WHEN: Sat Jul 26 18:03:08 2008
;; MSG SIZE  rcvd: 228

Now use OpenDNS:

mick@slug:~$ dig www.google.com @208.67.222.222

; <<>> DiG 9.4.2-P1 <<>> www.google.com @208.67.222.222
;; global options: printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 40840
;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;www.google.com.			IN	A

;; ANSWER SECTION:
www.google.com.		30	IN	CNAME	google.navigation.opendns.com.
google.navigation.opendns.com.	30	IN	A	208.69.34.230
google.navigation.opendns.com.	30	IN	A	208.69.34.231

;; Query time: 19 msec
;; SERVER: 208.67.222.222#53(208.67.222.222)
;; WHEN: Sat Jul 26 18:04:54 2008
;; MSG SIZE  rcvd: 104

Are you happy with that? I'm damned if I am.

Permanent link to this article: https://baldric.net/2008/08/10/trusting-dns/

replacement for the slug

I noted in an earlier post that Linksys were ceasing production of the NSLU2. There are now a variety of NAS systems coming onto the market which might make good replacements – but most of them look expensive when compared to the slug. However, I’ve just seen a review of a box which looks as if it might be just up my street – the oddly named CherryPal PC, based on Freescale’s MPC5121e mobileGT processor.

CherryPal PC

The specs look very interesting – indeed, if the press release at Marketwatch.com is to be believed, the box has “256GB of DDR2 DRAM” to go with Freescale’s 800 MIPS MPC5121e processor.

Methinks this may be a typo.

Permanent link to this article: https://baldric.net/2008/07/26/replacement-for-the-slug/

implementing mailman and postfix with lighttpd on debian

I recently needed to set up a mailing list for a group of friends (my bike club). I had become tired of mail bounces and failures because we were all relying on an out of date list of addresses originally cobbled together by one member. That list of addresses was routinely used in “reply all” messages to others about forthcoming social events. An obvious improvement would be a mail list – ideally one which members could manage themselves. I originally looked at a quick and dirty mail forwarding mechanism which would simply explode mail sent to one address out to the complete list of aliases (I can be lazy). However, I discovered that neither my mail/web provider nor my ISP really offered this facility in quite the way I wanted it. So the obvious way forward was to do it myself using a slug.

I’ve used mailman in the past and knew it offered everything I wanted (including a web interface for membership management and access to archived messages), but I don’t (or rather didn’t) run a mail server on my home network. So that had to be fixed first. The necessary ingredients for the list management were: mailman itself; an MTA (I chose postfix because I know it, like it and find the default debian exim unnecessarily complicated); and a webserver (I was already running lighttpd on both slugs because it performs better than apache on low memory machines). I also wanted to use SSL encryption on the webserver to preserve password integrity (but not to authenticate the webserver itself).

There were a number of steps required to get this all working to my satisfaction. These were:

Step 1 – upstream SMTP authentication using TLS with postfix;
Step 2 – getting a mailman listserver running with postfix;
Step 3 – configuring lighttpd with SSL for mailman;
Step 4 – putting it all together and letting the world in.

It all worked, but the main drawback turned out to be the performance of the slug when running mailman. The combination of SSL encryption and mailman’s python scripts is too big a hit for a device with only 32 Mb of RAM. It would be perfectly feasible to run mailman on the slug if we limited ourselves to management by email alone (i.e. ignored the web management interface). But doing so would severely limit its functionality, and in that case we might as well look at alternative list managers such as Majordomo or Listproc. In the end, the attractiveness of mailman’s web interface meant that I moved it all off the slug and onto a more powerful platform (also running debian). Nevertheless, the documentation here may be of use to anyone considering a mailman install with postfix and lighttpd on any linux distro. The notes on SSL usage at step 3 can, of course, also be applied (with suitable modification) to apache or any other webserver supporting SSL certificates.
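For flavour, the heart of steps 2 and 3 comes down to a few lines of configuration. The fragments below are a sketch from memory using the Debian package paths, so check them against the full documentation before relying on them:

# /etc/mailman/mm_cfg.py – have mailman maintain postfix-style aliases
MTA = 'Postfix'

# /etc/postfix/main.cf – pick up the aliases mailman generates
alias_maps = hash:/etc/aliases, hash:/var/lib/mailman/data/aliases

# /etc/lighttpd/lighttpd.conf – SSL on port 443 with a self-signed certificate
$SERVER["socket"] == ":443" {
  ssl.engine = "enable"
  ssl.pemfile = "/etc/lighttpd/server.pem"
}

Remember to run mailman’s genaliases (and rebuild the postfix maps as appropriate) after creating a list so that postfix can see the new addresses.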

Permanent link to this article: https://baldric.net/2008/07/22/implementing-mailman-and-postfix-with-lighttpd-on-debian/

ooops

An apt-get dist-upgrade (to bring the kernel up to date and install some new patches) on the slugs killed the webcam. Of course I should have remembered that the gspca module was built against the old kernel and might fail. One quick “m-a auto-install gspca” later and all is working again.

Of course the kernel update required a reboot so my uptime is now back to zero, but security is more important than a long running time.

Permanent link to this article: https://baldric.net/2008/07/16/ooops/

slugs are history

Jim Buzbee, of batbox fame and one of the original NSLU2 hackers, apparently gave a presentation about the history of slug hacking at the Boulder Linux Users Group. A PDF copy of his presentation can be found on his batbox.org site.

Jim also notes that Linksys are ending production of the NSLU2 after four years of life. Better get your hands on a few now before they all disappear – or end up at twice the price on ebay.

Permanent link to this article: https://baldric.net/2008/07/09/slugs-are-history/

mine’s longer than yours

You could regard this as another pointless entry to go alongside the webcam. But hey – so what.

I had cause to check the uptime on my slugs a little while ago now that they are largely stable and providing the services I want. After doing so I thought it would be good to be able to check this from a web page, and a short search later I came across Matthew Trent’s UD daemon. I’ve now made my webcam slug uptime public. Let’s see how high this will get.

Permanent link to this article: https://baldric.net/2008/07/02/mines-longer-than-yours/

backtrack 3 released

Any half decent sysadmin will routinely test the security of his or her own systems. A good, and sensible, sysadmin will follow up those tests with an independent security audit by a professional company – preferably one which is a member of a recognised industry body (such as CREST). Finding the holes in your security mechanisms (and there will be some – probably more than you will be happy about) before the bad guys do is essential if you want to sleep at night (and keep your job).

There are a huge number of security testing tools available for free if you know where to look. Most sysadmins keep a toolbox of their favourites (nmap, nessus, ettercap, dsniff et al.) to hand ready for testing any new build. But it can sometimes be difficult to know just which tool to use, and where to get it. Enter backtrack. I first came across this collection of tools as recently as february 2006 and found it an excellent resource. Essentially backtrack is a collection of all the security testing tools you are likely to need, packaged into one linux distribution. Think of it as a knoppix for security testing. A complete list of all the tools in the collection can be seen here.

Backtrack Version 3 has just hit the streets. Get it here.

(Oh, and don’t think that using a toolset like this makes you a pen-tester. It doesn’t. What it might do is make you more security aware, and a better sysadmin.)

Permanent link to this article: https://baldric.net/2008/06/20/backtrack-3-released/

dental dos

On Tuesday 17 June, Craig Wright, supposedly “Manager of Risk Advisory Services” in an Australian company called “BDO Kendalls”, posted a rather odd note to Bugtraq and a few other security related lists titled “Hacking Coffee Makers”. In that posting he said that the Jura F90 coffee maker (which can apparently be networked) was vulnerable to remote attack. His post said that the vulnerabilities allowed the attacker to:

“- Change the preset coffee settings (make weak or strong coffee);
– Change the amount of water per cup (say 300ml for a short black) and make a puddle;
– Break it by engineering settings that are not compatible (and making it require a service);”

but worse

“the software allows a remote attacker to gain access to the Windows XP system it is running on at the level of the user”.

Now I’ve been a subscriber to bugtraq for longer than I care to remember and I’ve seen some odd posts in the past – particularly around the beginning of April, but in June? I initially dismissed this as just one more nut trying to raise his profile in the security community, but since tuesday the story has been picked up by a range of commentators. Some have found the story simply amusing (slashdot – “All Your Coffee Are Belong To Us”), others such as CNET seem to have taken it only slightly more seriously. OK, the bits about attacking the coffee maker itself may be amusing, but there is a serious point here if Wright is correct in his statement that attacking the coffee jug gets you access to the windows system its management software runs on. Certainly Thor of Hammerofgod has taken the post seriously enough to question Wright’s professional judgement in posting details of a vulnerability before alerting the manufacturer.

The point to note is that as more and more consumer devices become networkable (and networked) then the attack surface gets larger and larger. And it is a fairly good bet that the manufacturer of (say) a networked microwave oven is not going to take network security as seriously as would the manufacturer of a router, NAS, or mainframe.

Oh and Wright has done it again today. His latest post to bugtraq is titled “Oral B SmartMonitor Information Disclosure Vulnerability and DoS”. It’s about a “remote exploitation of an information disclosure vulnerability in Oral B’s SmartGuide management system [that] allows attackers to obtain sensitive information.”

That’s right, he’s talking about a toothbrush.

Some people have way too much time on their hands.

Permanent link to this article: https://baldric.net/2008/06/19/dental-dos/

xkcd on the openssl fiasco

I’ve had my attention drawn to Randall Munroe’s take on the openssl coding change problem.

openssl

Beautiful.

Permanent link to this article: https://baldric.net/2008/06/05/xkcd-on-the-openssl-fiasco/

debian and the openssl flaw

Ben Laurie wrote about the Debian SSL problem a couple of weeks ago. That particular post has attracted a huge response which is well worth reading if you care about free open source software and/or privacy/security issues (or even if you don’t). The key point to take from the discussion is that about two years ago the Debian development team “fixed” a perceived problem in openssl and in so doing actually introduced a fairly serious vulnerability. The net result of this change was that anyone using Debian or a related distribution such as Ubuntu to generate a cryptographic key based on the “fixed” openssl libraries actually left themselves open to compromise. To quote from the Debian advisory: “the random number generator in Debian’s openssl package is predictable. This is caused by an incorrect Debian-specific change to the openssl package (CVE-2008-0166). As a result, cryptographic key material may be guessable... affected keys include SSH keys, OpenVPN keys, DNSSEC keys, and key material for use in X.509 certificates and session keys used in SSL/TLS connections.”

Fortunately, it seems that GPG keys are not affected (and in any case, my own key was generated some time ago and not on a Debian based system) but this is pretty serious nonetheless and means that a great many people (myself included) have been relying on keys which it turns out are vulnerable to attack. I have now regenerated all the keys I suspect were vulnerable, but that does not leave me feeling very comfortable about past usage.
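(If memory serves, the updated Debian openssh packages shipped with a ssh-vulnkey tool which checks keys against a blacklist of the known weak ones – worth running if you haven’t already:

ssh-vulnkey          # checks your own keys and the host keys
ssh-vulnkey -a       # run as root, checks every user's keys

Any key it reports as compromised needs regenerating, and the old public key needs removing from every authorized_keys file it ever appeared in.)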

I don’t want to denigrate the Debian team in any way, but I can’t help but agree with Ben Laurie’s view that the proper place to fix any perceived flaw in an open source product, particularly one as important as a security critical component, is in the upstream package – not in the distribution.

Permanent link to this article: https://baldric.net/2008/06/02/debian-and-the-openssl-flaw/

recursion: see recursion

I have written about how I use one of my slugs to back up my internal files via rsync over ssh. Well, it turns out I made a pretty silly mistake in my rsync options. I thought I’d been careful in specifying the files I wanted excluded from the backup (ephemeral stuff, thumbnail images, some caches such as my browser cache etc.) but I missed one crucial directory and it bit me – and sent the slug’s load average through the roof.

GNOME 2.22 introduced GVFS, a new network-transparent virtual filesystem layer. GVFS is a userspace virtual file system with backends for protocols like SFTP and FTP. GVFS creates a (hidden) directory called .gvfs in your home directory and uses this as a mount point when you open a connection via SSH, FTP, SMB, WebDAV etc from the “Places -> Connect to Server” menu option. So if you open an SFTP connection to a server called “slug”, it will mount that connection in .gvfs. Try it yourself.

Now guess what I had mounted on my desktop at the time my rsync cron job ran. The slug spent some frantic time copying itself to itself until I noticed that it seemed to be inordinately busy, diagnosed the problem and managed to kill the rsync and clear up the mess.
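The fix was one more exclusion. For anyone tempted to repeat my mistake, the relevant invocation from the desktop now looks something like this (paths illustrative):

# note the .gvfs exclusion – without it rsync happily recurses into any mounted sftp connection
rsync -av --delete --exclude='.gvfs/' --exclude='.thumbnails/' \
    /home/mick/ slug:/backup/mick/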

Permanent link to this article: https://baldric.net/2008/06/02/recursion-see-recursion/

linuxdoc.org hijacked

Sadly it appears that the once useful linuxdoc.org website has been hijacked by one of those awful domain squatters who seem to want to sell mortgages, holidays and houses. I tried today to check out an old “howto” I had bookmarked and was greeted by a completely new site – as below:

linuxdoc.org hijacked

At first I thought that they had simply redesigned the site because most of the links appeared to be in place. Unfortunately, none of the old LDP documents appear to be there. I also noticed that all the links refer to a new site at www.kolmic.com. So none of my old linuxdoc bookmarks are any use now. RIP friend.

Fortunately, however, the original and best TLDP site is still up and running as is the (similarly named) linuxdocs.org site. So, update your bookmarks and stay away from the hijacker. Such a shame that so many printed references in places like the O’Reilly books are no longer valid.

Permanent link to this article: https://baldric.net/2008/05/26/linuxdocorg-hijacked/

what it is to be popular

According to some dubious stats from a web company, this site now ranks at number 4,880,077 (on a scale of usage where Yahoo, Google and YouTube are apparently first, second and third). But I shouldn’t really complain. The same stats say that the position is “up 16,958,547 ranks over the last three months”.

Now that is some rise.

Permanent link to this article: https://baldric.net/2008/05/16/what-it-is-to-be-popular/

slugs aren’t really slow

A recent email exchange with the friend who originally suggested that I take a look at the NSLU2 got me thinking about the machines we currently take for granted. In his email he explained that he had consolidated a set of services previously run on a couple of old desktops (a Dell and a Shuttle) onto his slug – thereby making a big saving in power consumption. His slug now runs ssh, DNS, IMAP and SMTP mail and a couple of other services – a typical slug user’s profile. The phrase that got me thinking, however, was his statement that “I’m quite amazed that it can do all this within 32MB memory”.

Now, not so long ago, 32 Meg of RAM was considered quite a lot. We seem to have become so used to desktop home machines equipped with multi-GHz CPUs, 2 or 3 Gig of RAM and anywhere from 160 Gig to three quarters of a terabyte of disk that we are surprised that an apparently humble 266 MHz, 32 Mb RAM machine can do so much. But why? As recently as 10 years ago I was running a large public facing network on which the main DNS/mail and syslog server was a single processor Sun SPARC5 with only 32Mb of RAM. And I recall only 15 years ago (OK, so I’m old) running a network of ICL DRS 6000s providing full office system functions to over 1200 users. So I dug out the specs of the machines I was running at that time for comparison. It made interesting reading.

The smallest (in capacity terms) machine on my network 15 years ago was a DRS6000 L440 – which had a single 40 MHz CPU, 32 Mb of RAM and 2 x 660 Mb disks. That machine served 30 users. I also had a mixture of DRS6000s with older 25 MHz and 33 MHz CPUs but with more RAM and disk store (typically 96 Mb and 4 x 660 Mb disks); each of those would support around a hundred users (the office application was memory, not CPU, dependent). The really interesting point is the pricing. I found a note with the following on it:

Item — Price (UKP)

DRS6000 L440 40MHz CPU — £15,000
(inc. 1 * 660 Mb disk)

64 Mb memory board — £11,000

32 Mb memory board — £6,550

SCSI daughter board — £800
(to support additional disks)

3 * 660 MB disks — £8,850

16 port asynch controller board — £1,500

ethernet LAN controller board — £2,660

external exabyte tape drive — £4,000

console and keyboard — £500

sundry cables — £200

hardware sub-total — £51,060

to which I had to add:

128 user licence for Unix 6, TCP and OSLAN — £11,000

(Thankfully, we had a site licence for the application software…)

So, for just over £62,000 I had a 40 MHz machine with 96Mb of RAM and 2.6 Gig of disk. Not bad.

Oh, I forgot VAT.

Permanent link to this article: https://baldric.net/2008/05/05/slugs-arent-really-slow/

a problem slug

I bought myself another slug recently so that I could have one dedicated to internal work and the other used for public facing websites. I wasn’t really comfortable with having my network backup and apt-get mirror on the same beast as a public web server. I know from experience that public facing systems are vulnerable and I have to assume that my webcam slug is disposable.

However, it seems that I picked exactly the wrong time to build a new slug because I fell foul of a previously undocumented bug in the new initramfs-tools (version 0.92) in Debian testing. This version generated a ramdisk that made the slug unbootable. This bug was particularly irritating because it only manifested itself at the end of the complete Debian install – i.e. at the point when the installer had flashed the new initramfs and rebooted. Because I had been so successful with the earlier slug only weeks before, I thought at first that either I had made a mistake, or, worse, I had bought a problem slug which I could not return having voided the warranty. So I wasted some more time reflashing, first with unslung and later with the original Linksys image – just to satisfy myself that I had a working beast. Then I checked the debian-arm mailing list. A couple of other users reported similar problems and the culprit – initramfs-tools – was quickly identified and rapidly fixed (see bug #478236).

When researching the problem, I picked up a useful tip from the mail list on a quick way of backing up a working slug image which is not documented in the how-to section of the slug website. This tip enabled me to take a copy of the image from the known good working slug and flash it to the non-working new slug at the end of (yet another) complete Debian install.

On a working system, do “cat /dev/mtdblock* > backup.img”, and copy that backup image off to a safe place. Use that image with upslug2 to flash to a non-working (or corrupted) slug thus: “upslug2 -i backup.img”.

The problem I encountered is now fixed with the release of 0.92a of initramfs-tools which is now in the Lenny tree.

Permanent link to this article: https://baldric.net/2008/05/04/a-problem-slug/

slugs as pets

Following a recommendation from a friend of mine, I have recently been playing with a Linksys NSLU2. This device is no larger than a paperback book yet packs some remarkable capabilities. It was originally designed by Linksys (Cisco) to act as a “Network Storage Link for USB 2.0 Disk Drives” (hence NSLU2).

The Linksys NSLU2

Externally, the rear of the box offers two USB 2.0 ports and a 10/100 ethernet RJ45 port for connectivity, and the front panel sports LEDs for power, disk and ethernet status. Internally it has an XScale-IXP42x CPU (Intel’s implementation of ARM) running at 266 MHz (early versions were apparently underclocked to 133 MHz), 8 Mb of flash memory and 32 Mb of SDRAM. Most interesting, at least from my point of view, is that the OS in flash is a version of Linux. Better yet, that can be changed for a full blown OS such as Debian so long as that OS is installed to external disk and the NSLU2 firmware is reflashed with an image which tells it to look for a bootable kernel on disk. Too good an opportunity to be missed – so I bought one and attached a 500 Gig Lacie USB disk so that I’d have room to play.

There is extensive documentation on-line about reflashing and upgrading the slug (as they are affectionately known by their users). My experience is documented here. My own slug now runs Debian Lenny (kernel 2.6.24-1-ixp4xx) and acts as the local apt-mirror for my home network. That mirror is run out of cron overnight so that I save on my bandwidth allowance. Having a local mirror speeds up software installs and security updates and I know that I can run local downloads to any of my machines at any time without impacting on either my monthly allowance or my external access speed. The slug runs lighttpd (changed from Apache) to give me internal virtual webservers as well as access to the mirror and I also backup my internal files to it via rsync over ssh. For example, my primary desktop machine runs a cron job to rsync to the slug.
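The overnight pull itself is just a cron job. Mine looks much like the sketch below (timings and paths illustrative – the Debian apt-mirror package installs something very similar):

# /etc/cron.d/apt-mirror – refresh the local mirror in the small hours
0 2 * * * apt-mirror /usr/bin/apt-mirror > /var/spool/apt-mirror/var/cron.log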

Oh, and it also runs a webcam – just for fun.

webcam image

A web search for “webcam on slug” led me to the deliciously bizarre “Slug Racing online” site. Quote – “Slug racing is an exciting and cheap alternative to other racing forms. Slugs are available almost everywhere, often in abundance. Seen as a pest by many people, they can be a great pleasure in cultivated slug racing.” Unquote.

Some people have the strangest hobbies.

Permanent link to this article: https://baldric.net/2008/04/07/slugs-as-pets/

google oddness

A google search for “loadlin” produces a sponsored link for “Inflatable lilos”. Strangely no references to insects or food however.

Permanent link to this article: https://baldric.net/2008/04/06/google-oddness/

ssh through http proxy

On a mail list I subscribe to, I have recently been involved in a discussion about the restrictions sometimes placed on users of WiFi hotspots or hotel networks (to say nothing of the restrictions placed on corporate networks). Some of the suggested solutions involve tunnelling ssh connections over http(s). Other solutions assume that the network is simply restricting access with packet filters, so that you may just need to connect to a non-standard port (such as 80 or 443). If this is the case, then you simply have to configure your target ssh daemon to listen on that port. However, some networks force you through a proxy, in which case you need a utility like corkscrew. I had not previously heard of this neat little utility – but it turns out to merit some exploration if you find yourself needing such a tool.

Corkscrew is relatively simple to set up, but if you have problems, take a look at Andrew Savory’s blog entry of 27 February 2008.
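The usual recipe is a ProxyCommand entry in ~/.ssh/config along these lines (proxy name and port invented for the example):

# ~/.ssh/config
Host home
    HostName myserver.example.org
    Port 443
    ProxyCommand corkscrew webproxy.example.com 3128 %h %p

After that, a plain “ssh home” tunnels out through the proxy’s CONNECT method. If the proxy demands authentication, corkscrew takes an extra final argument naming a file containing your username:password pair.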

Permanent link to this article: https://baldric.net/2008/03/01/ssh-through-http-proxy/