isp shenanigans

I have recently been off-line. And I am less than happy about the reasons.

My ISP recently informed me that it was changing its back-end provider from Entanet to Vispa. Like many small ISPs, my provider does not have any real infrastructure of its own; it simply repackages services provided by a wholesaler which does have the necessary infrastructure, in a process commonly called “whitelabelling”. This whitelabel approach is particularly common amongst providers of webspace and it normally works fine. Amongst the smaller ISPs there are many who are simply Entanet resellers, and until recently Entanet had a good name for pretty solid service. Well, not any more.

I had not noticed any particular problems and was slightly surprised to hear from my ISP that they were unhappy with the service they were getting from Entanet. Apparently there had been frequent network outages for many of their customers, so they had chosen a new provider and were notifying their customers of the impending move. Of course this would mean some local configuration changes, so customers were advised in advance of those changes and the dates for action. Apart from preparing to change the ADSL login details on my router, in my case I also had to ensure that my SSH and other login details on various external services I have or use were modified to accept the new fixed IP address assigned to my router (I tend to lock down such services so that they only accept connections from my IP address; not foolproof I know, but it all helps).
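
By way of illustration, the lock-down I mean amounts to little more than this on each external host (a sketch only; 203.0.113.10 stands in for the fixed IP assigned to my router, and the real rules live in a rather longer iptables config):

# accept SSH only from the fixed IP assigned to my router, drop the rest
# (203.0.113.10 is a placeholder address)
iptables -A INPUT -p tcp --dport 22 -s 203.0.113.10 -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j DROP

Change the fixed address in rules like that on half a dozen machines and you can see why the migration needed a little advance planning.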

In the migration advice letter, my ISP advised its customers to set up new direct debit arrangements for Vispa and cancel the existing ones to Entanet. That letter advised that any over or under charge either way during transition would be sorted out between the providers. So I did as I was advised and waited for the big day (approximately 10 days away). Big mistake.

About a week before the date of transition I found my web traffic intercepted and blocked by Entanet with the message “Your account has been blocked. Please contact your internet service provider”. This blockage only occurred on web traffic (my email collection over POP3S and IMAPS continued to work, as did ICMP echo requests and ssh connections out). This action actually pissed me off even more than a complete disconnection would have done. It also, incidentally, betrayed the fact that they were using a transparent web proxy on the connection – not something that makes me very happy. But simply blocking web traffic was obviously designed to annoy me and make me contact my ISP, and strongly suggests to me that Entanet were unsure of their legal right to cut me off completely. Further, in my view, intercepting my web traffic in this way may actually have been illegal.

Interestingly, even http traffic aimed inbound to my ADSL line (where I run a webcam on one of my slugs) was similarly intercepted as is evidenced by this link from changedetection.com. Obviously, the imposition of the message from Entanet was picked up by changedetection as an actual change to that web page.

So I emailed Entanet and my ISP, pointing out that my contract was with them and not Entanet and told them to sort it out between themselves. I, as a customer, did not expect to be penalised simply because my ISP had decided to change its wholesaler. Meanwhile, I decided to bypass Entanet’s pathetic and hugely irritating web block by tunneling out to a proxy of my own. Of course I could have used my existing tor connection, but that is not always as fast as I would like, particularly at peak web usage hours, so I set up a new proxy on another of my VPSs using tinyproxy, listening on localhost 8118 (the same as privoxy on my tor node). I then set up an ssh listener on my local machine and set firefox to use that listener as its proxy – again, much as I had for tor. Bingo. Stuff you Entanet.
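
For the record, the tunnel itself takes one command. Something like the following is all there is to it (a sketch; vps.example.org stands in for my VPS, and tinyproxy is already listening on 127.0.0.1:8118 at the far end):

# forward local port 8118 to tinyproxy on the VPS loopback interface,
# then point firefox at localhost:8118 as its HTTP proxy
ssh -f -N -L 8118:localhost:8118 me@vps.example.org

Because tinyproxy only listens on the VPS loopback address, the ssh tunnel is the only way to reach it, so the proxy itself is not exposed to the wider world.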

Unfortunately, it did not stop there. Entanet’s rather arrogant response to my email was to insist that I re-establish a direct debit with them for the few days remaining before the changeover (despite them having had my payment in advance for the month in question). No way, so I ignored this request only to find that Entanet then throttled my connection to 0.02 Mb/s – see the speedtest result below.

speedtest image

This sort of speed is just about usable for text-only email, but is absolutely useless for much else. Now I had originally been given two separate dates for the changeover by my ISP, so in a fit of overenthusiastic optimism on my part, I tried to convince myself that the earlier (later corrected) date given was the correct one and so I reconfigured my router in the hope it would connect to Vispa. No deal. Worse, when I then tried to fall back to the (pitiful) Entanet connection, I found it blocked completely. I was thus without a connection for some four days (including a very long weekend).

So far my new connection looks good. But apart from my disgust with Entanet, I have not been overly impressed with the support I have received from my ISP during these problems. I’ll keep an eye on things – I may yet move of my own volition.

[Addendum] just by way of comparison, the test result below is what I expect my connection speed to look like. Test run at around 21.45 on Sunday 21 February 2010.

speedtest image

That’s a bit better. Note however that this test was direct from Vispa’s network rather than through my ssh tunnel.

Permanent link to this article: https://baldric.net/2010/02/20/isp-shenanigans/

life is too short to use horde

I own a bunch of different domains and run a mail service on all of them. In the past I have used a variety of different ways of providing mail, from simple pop/imap using dovecot and postfix, through to using the database driven mail service in egroupware.

Recently I have consolidated mail for several of my domains onto one of my VPSs. I don’t have a lot of mail users so at first I stuck with the simple approach available to all dovecot/postfix installations, i.e. using dovecot as the local delivery mechanism and simply telling postfix to hand off incoming mail to dovecot. Dovecot then has to figure out where to deliver mail. I also used a simple password file for dovecot authentication. This worked fine for a small number of users, but it rapidly becomes a pain if you have multiple users across multiple domains and you wish to allow those users to change their passwords remotely. The solution is to move user management to a MySQL backend and change the postfix and dovecot configurations to use that backend database.
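
For the curious, the change is less dramatic than it sounds. The sketch below shows the general shape of it; the table and column names are illustrative (they happen to match the postfixadmin schema mentioned below) and the queries are simplified:

# /etc/dovecot/dovecot-sql.conf (sketch)
driver = mysql
connect = host=localhost dbname=mail user=mailreader password=secret
default_pass_scheme = MD5-CRYPT
password_query = SELECT username AS user, password FROM mailbox WHERE username = '%u'

# /etc/postfix/mysql-virtual-mailboxes.cf (sketch), referenced from main.cf as
# virtual_mailbox_maps = mysql:/etc/postfix/mysql-virtual-mailboxes.cf
hosts = localhost
user = mailreader
password = secret
dbname = mail
query = SELECT maildir FROM mailbox WHERE username = '%s'

Once both postfix and dovecot read from the same database, anything which can update a row in that database (a web front end, say) can manage users and passwords.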

Now to allow (virtual) users to change their mail passwords, most on-line documentation points to the sork password module for horde. But have you /seen/ horde? Sheesh, what a dog’s breakfast of overengineered complexity. I flatter myself that I can find my way around most sysadmin problems, but after most of a day one weekend trying to install and configure the entire horde suite just so that I could use the remote password changing facility, I gave up in disgust and went searching for an easier mechanism. Sure enough I found just what I wanted in the shape of postfixadmin. This is a php application which provides a web based interface for managing mailboxes, virtual domains and aliases on a postfix mail server.

Postfixadmin is easy to install and has few dependencies (beyond the obvious php/postfix/mysql). There are even ubuntu/debian packages available for users of those distributions. I also found an excellent installation howto at rimuhosting which I can recommend.

I can now manage all my virtual domains, user mailboxes and aliases from one single point – and the users can manage their passwords and vacation messages from a simple web interface.

image of postfixadmin page

postfixadmin domain creation

Whilst I currently only provide pop3s/imaps mail access through dovecot, postfixadmin offers a squirrelmail plugin to integrate webmail should I wish to do that in future.

Simple, elegant and above all, usable. And it didn’t take all day to install either.

Permanent link to this article: https://baldric.net/2010/01/23/life-is-too-short-to-use-horde/

tor server compromise

According to this post by Roger Dingledine, two tor directory servers were compromised recently. In that post Dingledine said:

In early January we discovered that two of the seven directory authorities were compromised (moria1 and gabelmoo), along with metrics.torproject.org, a new server we’d recently set up to serve metrics data and graphs. The three servers have since been reinstalled with service migrated to other servers.

Whilst the directory servers apparently also hosted the tor project’s svn and git source code repositories, Dingledine is confident that the source code has not been tampered with, nor has there been any compromise of user anonymity. Nevertheless, the project recommends that tor users and operators upgrade to the latest version. Good advice I’d say – I’ve just upgraded mine.

Permanent link to this article: https://baldric.net/2010/01/22/tor-server-compromise/

are you /really/ sure you want that mobile phone

The launch of the google nexus one “iPhone killer” reminds me just how prescient Dr Fun’s cartoon of 16 January 2006 (see third cartoon down from the top on the right) really was.

I just love the way the google employee in the video says at the end that Verizon and Vodafone have “agreed to join our program”.

Oh yes indeed.

Permanent link to this article: https://baldric.net/2010/01/10/are-you-really-sure-you-want-that-mobile-phone/

using scroogle

For completeness, my post below should have pointed to the scroogle search engine which purportedly allows you to search google without google being able to profile you. Neat idea if you must use google (why?) but it still fails the Hal Roberts test of what to do if the intermediate search engine is prepared to sell your data. I actually quite like the scroogle proxy though, particularly in its ssl version because anything that upsets google profiling has to be a good thing. Besides, the really paranoid can simply connect to scroogle via tor.

(Odd that google seem not to have tried to grab the scroogle domain name. If they do, let’s just hope that they get the groovle answer.)

Permanent link to this article: https://baldric.net/2010/01/02/using-scroogle/

scroogled

One of the more annoying aspects of the web follows directly from one of its strengths. The web is actually designed to make it easy for authors to cross refer to the work of others – hyperlinking is intended to make linking between documents anywhere in web space seamless and transparent. Unfortunately, this cross linking ability leads to many posts (this one included) quoting directly from the source when referring to material elsewhere. In the academic world, quoting from source material is encouraged. When the work is properly attributed to the original author, then this is known as research. Without such attribution it is known as plagiarism.

So whenever I post or write here, I try hard to refer to original source material if I am quoting from elsewhere or I am referring to a particular tool or technique I have found useful. If I am writing about something commented on elsewhere (as for example, Hal Roberts’ discussion of GIFC selling user data in my posting about anonymous surfing), then I will try to link directly to the original material rather than to another article discussing that original. There are fairly good (and obvious) reasons for doing this, not least of which is that the original author deserves to be read directly and not through the (possibly distorting) lens of someone else’s words.

Writing for the web is a very different art to writing for print publication. Any web posting can easily become lazy as the author cross refers to other web posts. Many of those posts may be inaccurate or not primary source material. This can lead to the sort of problem commonly seen in web forums where umpteen people quote someone who said something about someone else’s commentary on topic X or Y. In such circumstances, finding the original, definitive, authoritative, source can be difficult.

Like most people, when faced with this sort of problem I resort to using one or more of the main search engines. But what to search for? Plugging in a simple quote from the original article can often bring up references to unrelated material which happens to include that same (or very similar) phrase. Worse, for reasons outlined above, the search can simply return multiple instances of postings in web fora about the article rather than the article itself. Most irritatingly these days I find that a search will lead to a wikipedia posting – and I just don’t trust the “wisdom of the crowds” enough to trust wikipedia. I’m old fashioned, I like my “facts” to be peer reviewed, authoritative, and preferably written in a form not subject to arbitrary post publication edits. Actually I still prefer dead trees as a trusted source of both factual material and fiction – which is one reason I have lost count of the number of books I have. I also like the reassuring way I can go to my bookshelf and know that my copy of 1984 will be where I left it and in a form in which I remember it.

So when I was researching older articles about Google recently and I wanted to find a copy of Cory Doctorow’s original short fiction piece about Google called “Scroogled” I expected to find umpteen thousand quotes as well as pointers to the original. I was wrong. I originally searched for the phrase “Want to tell me about June 1998?” on the grounds that that would be likely to give me a tighter set of results than simply looking for “scroogled”. This actually gave me fewer than sixty hits on clusty (the search engine I used at the time). I was initially reassured that most of the results were simple extracts of the full story with pointers to the original article on radaronline. Even Doctorow’s own blog points to radaronline without giving a local copy of the story. But then I discovered that radaronline no longer lists that article at that URL. Worse, a search of the site gives no results for “scroogled”. So Cory Doctorow’s creative commons licensed short has vanished from the original location and all I can find are copies. This worries me. Perhaps I’m wrong to rely on pointing to original material. What if the original is ephemeral? Or gets pulled for some reason? And if I point to copies, how can I be sure those copies are faithful to the original?

I actually fell foul of this same problem myself a couple of years ago when I was discussing my experiences with BT’s awful home hub router. I wrote in that post a reference to a contribution I made on another forum about my experiments with the FTP daemon on the hub whilst I was figuring out how to get a root shell. That article no longer exists, because the site no longer exists, and I have no copy.

So the web is both vast and surprisingly small and fragile in places.

Oh, just to be on the safe side, I have posted here a local (PDF) copy of scroogled obtained from feedbooks. You never know.

Permanent link to this article: https://baldric.net/2010/01/02/scroogled/

shiny!

Well I finally cracked and ordered an N900 on-line just before Christmas. Nokia had been promising since about August of this year that the device “might” ship in the UK around October. Since then, the release date has slipped, and slipped, and slipped (much to the amusement of an iPhone using friend of mine who predicted exactly that back in August). Every time I read about a new impending release date I checked with the major independent retailers only to be told “no, not yet, maybe next month”.

Many review sites are now saying that Vodafone and T-Mobile will both be shipping the N900 on contract in January. Well, not according to the local retail outlets for those networks they won’t. And besides, I had no intention of locking myself in to a two year contract at around £35-£40 pcm, particularly if the network provider chose to mess about with the device in order to “customise” it. So, as I say, I cracked and ordered one on-line, unlocked and SIM free on 21 December. It arrived yesterday, which is pretty good considering the Christmas holiday period intervened.

nokia n900

So what is it like?

Well, there is a pretty good (if somewhat biased) technical description on the Nokia Maemo site itself, and that site also has a pretty good gallery of images of the beast so I recommend interested readers start there. There are also a number of (sometimes breathless) reviews scattered around the net, use your search engine of choice to find some. I won’t attempt to add much to that canon here. Suffice to say that I am a gadget freak and a fan of all things linux and open source. This device is a powerful, hand held ARM computer with telephony capability – and it runs a Debian derivative of linux. What more could you ask for?

Tap the screen to open the x-terminal and you drop into a busybox shell.

busybox shell on the N900

Oh the joy!

So – first things first. Add the “Maemo Extras” catalogue to the application manager menu, then install OpenSSH, add a root password and also install “sudo gainroot”. Stuff you Apple, I’ve got a proper smartphone (and, moreover, one which is unlikely to be hit by an SSH bot because a) I have added my own root password, and b) I have moved the SSH daemon to a non-standard port – just because I can). Now I can connect to my N900 from my desktop, but more importantly from my N900 to my other systems. Next on the agenda is the addition of OpenVPN so that I can connect back to my home network from outside. Having the power and portability of the N900 means that even my netbook is looking redundant as a mobile remote access device.
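
Moving the daemon is no different from doing so on any other debian-ish box. Roughly speaking (port 2222 is just an example, and I am assuming the Maemo OpenSSH package behaves like its Debian parent and ships the usual init script):

# in /etc/ssh/sshd_config on the N900, change the listening port
Port 2222

# then, as root, restart the daemon and test from the desktop
/etc/init.d/ssh restart
ssh -p 2222 root@n900.example.lan

Not real security of course, but it keeps the casual scanning noise out of the logs.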

(Oh, and it’s a pretty good ‘phone too, if a little bulky).

[ update posted 16 March 2010 – This review at engadget.com is in my view well balanced and accurate. I have now had around three months usage from my N900 and I love its power and internet connectivity, but I have found myself carrying my old 6500 slide for use as a phone. I agree with engadget that the N900 is a work in progress. If I were designing a successor (N910?) personally I’d drop the keyboard (which I hardly ever use in practice) and save weight and thickness. ]

Permanent link to this article: https://baldric.net/2009/12/30/shiny/

comment spam

I block comment spam aimed at this blog, and I insist that commenters leave some form of identification before I will allow a comment to be posted. Further, I use a captcha mechanism to keep the volume of spam down. Nevertheless, like most blogs, trivia attracts its fair share of attempted viagra ads, porn links and related rubbish. Most appears to come from Russia for some reason.

Periodically I review my spam log and clear it out – it can make for interesting, if ultimately depressing reading (when I can actually understand it). But one post today plucked at my heart strings. The poster, again from a Russian domain, said “Dear Author baldric.net ! I am final, I am sorry, but it does not approach me. There are other variants?”

I guess it lost something in the translation.

Permanent link to this article: https://baldric.net/2009/12/12/comment-spam/

colossally boneheaded

David Adams over at OS News has posted an interesting commentary on Eric Schmidt’s recent outburst. Referring to Schmidt’s statement which I commented on below, Adams says:

I think the portion of that statement that’s sparked the most outrage is the “If you have something that you don’t want anyone to know, maybe you shouldn’t be doing it in the first place” part. That’s a colossally boneheaded thing to say, and I’ll bet Schmidt lives to regret being so glib, if he didn’t regret it within minutes of it leaving his mouth. As many people have pointed out, there are a lot of things you could be doing or thinking about that you don’t want other people to be watching or to know about, and that are not the least bit inappropriate for you to be doing, such as using the toilet, trying to figure out how to cure your hemorrhoids, or singing Miley Cyrus songs in the shower.

The post is worth reading in its entirety.

Permanent link to this article: https://baldric.net/2009/12/12/colossally-boneheaded/

privacy is just for criminals

I’ve mentioned before that I value my privacy. I use tor, coupled with a range of other necessary but tedious approaches (such as refusing cookies, blocking ad servers, scrubbing my browser) to provide me with the degree of anonymity I consider my right in an increasingly public world. It is nobody’s business but mine if I choose to research the symptoms of bowel cancer or investigate the available statistics on crime clear up rates in Alabama. But according to Google’s CEO Eric Schmidt, my choosing to do so anonymously makes me at best suspect, and at worst possibly criminal. In an interview with CNBC, Schmidt reportedly said “If you have something that you don’t want anyone to know, maybe you shouldn’t be doing it in the first place.”

I have been getting increasingly worried about Google’s activities for a while now, but the breathtaking chutzpah of Schmidt’s statement is beyond belief. Lots of perfectly ordinary, law abiding, private citizens from a wide range of backgrounds and interests will use Google’s search capabilities in the mistaken belief that in so doing they are relatively anonymous. This has not been so for some long time now, but the vast majority of people just don’t know that. For the CEO of the company providing those services to suggest that a desire for privacy implies criminality is frankly completely unacceptable.

Just don’t use Google. For anything. Ever.

Permanent link to this article: https://baldric.net/2009/12/07/privacy-is-just-for-criminals/

apple antipathy may be misplaced

Apparently the latest release of the iPhone OS (v 3.1) has caused a few minor problems with WiFi and battery life. This has led El Reg to moan about the fact that you can’t downgrade the iPhone OS to an earlier version. I’m no great fan of Apple, but to be fair, this situation is not unique to them. Each time I update my PSP to the latest software release, I receive a warning that I cannot revert to the earlier version after upgrade. Not being an iPhone user, I don’t know whether you get a similar warning from Apple before the upgrade or not. But that aside, it does not strike me as unreasonable that Apple should prefer you to keep your OS as current as possible. Software upgrades are generally designed to fix bugs and/or introduce new features. If a particular upgrade has problems, then I would expect the supplier to fix those problems with a new release or a service pack. I would not expect them to recommend that you downgrade.

Permanent link to this article: https://baldric.net/2009/11/29/apple-antipathy-may-be-misplaced/

system monitoring with munin

A while back a friend and colleague of mine introduced me to the server monitoring tool called munin which he had installed on one of the servers he maintains. It looked interesting enough for me to stick it on my “to do” list for my VPSs. Having a bunch of relevant stats presented in graphical form all in one place would be useful. So this weekend I decided to install it on both my mail and web VPS and my tor node.

Munin can be installed in a master/slave configuration where one server acts as the main monitoring station and periodically polls the others for updated stats. This is the setup I chose, and now this server (my web and mail host) acts as the master and my tor node is a slave. Each server in the cluster must be set to run the munin-node monitor (which listens by default on port 4949) to allow munin itself to connect and gather stats for display. The configuration file allows you to restrict connections to specific IP addresses. On the main node I limit this to local loopback whilst on the tor node I allow the master to connect in addition to local loopback. And just to be on the safe side, I reinforced this policy in my iptables rules.
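
The configuration involved is pleasingly small. A sketch of what I mean is below (192.0.2.10 and 192.0.2.20 stand in for the master and the tor node respectively; the real files contain rather more than this):

# /etc/munin/munin-node.conf on the tor node
port 4949
# accept connections from loopback and from the master only
allow ^127\.0\.0\.1$
allow ^192\.0\.2\.10$

# /etc/munin/munin.conf on the master, one stanza per monitored host
[tor-node.example.org]
    address 192.0.2.20

# and the belt-and-braces iptables rule on the tor node
iptables -A INPUT -p tcp --dport 4949 -s 192.0.2.10 -j ACCEPT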

The graphs are drawn using RRDtool, which can be a little heavy on CPU usage, certainly too heavy for the slugs, which ruled out installing the master locally rather than on one of the VPSs. But the impact on my bytemark host looks perfectly acceptable so far.

One of the neatest things about munin is its open architecture. Statistics are all collected via a series of plugins, and these plugins can be written in practically any scripting language you care to name. The plugins which come by default with the standard debian install of munin are mostly shell scripts, with the occasional perl script; a couple of the additional scripts I installed were written in php and python. The standard set of plugins covers most of what you would expect to monitor on a linux server (cpu, memory i/o, process stats, mail traffic etc.), but there were two omissions which were quite important to me. One was for lighttpd, the other for tor. I found suitable candidates on-line pretty quickly though. The tor monitor plugin can be found on the munin exchange site (a repository of third party plugins). I couldn’t find a lighttpd plugin there but eventually picked one up from here (thomas is clearly not a perl fan).
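
If you have never looked inside one, a munin plugin is just a script which answers two questions: called with the argument “config” it describes the graph, and called with no arguments it prints the current values. The toy example below (purely illustrative, not one of the stock plugins) graphs the number of established TCP connections:

#!/bin/sh
# toy munin plugin: graph the number of established TCP connections

case "$1" in
config)
    echo 'graph_title Established TCP connections'
    echo 'graph_vlabel connections'
    echo 'graph_category network'
    echo 'established.label established'
    exit 0
    ;;
esac

echo "established.value $(netstat -tn | grep -c ESTABLISHED)"

Drop something like that into /etc/munin/plugins/, make it executable and restart munin-node, and a new graph appears on the next polling run.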

Most plugins (at least those supplied by default in the debian package) “just work”, but some do need a little extra customisation. For example the “ip_” plugin (which monitors network traffic on specified IP addresses) gets its stats from iptables and assumes that you have an entry of the form:

-A INPUT -d 192.168.1.1
-A OUTPUT -s 192.168.1.1

at the top of your iptables config file. You also need to ensure that the “ip_” plugin is correctly named with the suffix formed of the IP address to be monitored (e.g. “ip_” becomes “ip_192.168.1.1”). The simplest way to do this (and certainly the best way if you wish to monitor multiple addresses) is to ensure that the symlink from “/etc/munin/plugins/ip_” to “/usr/share/munin/plugins/ip_” is named correctly. Thus (in directory /etc/munin/plugins):

ln -s /usr/share/munin/plugins/ip_ ip_192.168.1.1

The lighttpd plugin I found also needs a little bit of work before you can see any useful stats. The plugin connects to lighty’s “server status” URL to gather its information. So you need to ensure that you have loaded the mod_status module in your lighty config file and that you have specified the URL correctly (any name will do, it just has to be consistent in both the lighty config and the plugin). It is also worth restricting access to the URL to local loopback if you are not going to access the stats directly from a browser from elsewhere. This sort of entry in your config file should do:

server.modules += ( "mod_status" )

$HTTP["remoteip"] == "127.0.0.1" {
status.status-url = "/server-status"
}

The tor plugin connects to the tor control port (9051 by default), but this port is normally not configured because it poses a security risk if configured incorrectly. Unless you also specify one of “HashedControlPassword” or “CookieAuthentication” in the tor config file, setting this option will cause tor to allow any process on the local host to control it. This is a “bad thing” (TM). If you choose to use the tor plugin, then you should ensure that access to the control port is locked down. The tor plugin assumes that you will use “CookieAuthentication”, but the path to the cookie is set incorrectly for the standard debian install (which sets the tor data directory to /var/lib/tor rather than the standard /etc/tor).
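
For reference, the torrc end of this is only a couple of lines (a sketch of the relevant options, not my full config). On debian the cookie then appears as /var/lib/tor/control_auth_cookie, which is the path the plugin needs pointing at:

# in /etc/tor/torrc
ControlPort 9051
CookieAuthentication 1

With cookie authentication turned on, only a process which can read that cookie file (root, or the debian-tor user) can authenticate to the control port, which is the lock-down the plugin relies on.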

So far it all looks good, but I may add further plugins (or remove less useful ones) as I experiment with munin over the next few weeks.

Permanent link to this article: https://baldric.net/2009/11/15/system-monitoring-with-munin/

OSS shouldn’t frighten the horses

Since I first read that Nokia were adding much needed telephony capability to their N8x0 range of internet tablets I have been watching the development of the new Nokia N900 with much interest. It looks to be potentially the sort of device I would buy. Despite all the hype around the iPhone, I really dislike Apple’s proprietary approach to locking in its customers and I hate even more its use of DRM. So the emergence of a device which uses Linux based software such as Maemo and which is obviously targeted at the iPhone’s market looks to me to be very interesting. But some of the advertising is starting to look scary….

(I still want one though.)

Permanent link to this article: https://baldric.net/2009/11/11/oss-shouldnt-frighten-the-horses/

a free (google) service is worth exactly what you pay for it

I note from a recent register posting that some gmail users are objecting to the fact that google’s mail service has failed yet again. El Reg even quotes one disgruntled user as saying:

“More than 30 hours without email…totally unacceptable. I’ll definitely have to reconsider my selection of gmail for my primary email account. It may be I have to pay for an account but hell will freeze over before I pay one penny to Google after this debacle.”

Umm, maybe it’s me, but I fail to understand how anyone can complain when a free service stops working. There is a good reason why people pay for services. Paying gives you the option of a contract with an SLA. If the service you are paying for includes storage of your data (as in the corporate data centre model) then your contract should include all the necessary clauses which will ensure that your data is stored securely, is reachable via resilient routes in case of telco failure, is backed up and/or mirrored to a separate site (to which service should fail over automatically in case of loss of the primary) etc. The contract should also ensure that your data remains yours if the hosting company fails, goes out of business or is taken over.

All of that costs money – lots of money in some cases.

Anyone who entrusts their email to a third party provider without ensuring that they have a decent contractual relationship with that provider (through a paid contract) is, in my view, asking for trouble. Most email users nowadays are heavily dependent upon that medium for communication. I know I would have real difficulty coping without it. Outside of my work environment, I pay for my personal email service. And I am happy to do so. In fact, on some domains I own, I even run my own mail servers (with backups). That costs time and money, but it ensures that my email is available when I expect it to be.

So, google users, stop whining and think again. A proper email service will only cost you a few pounds – and there are plenty of other reasons for not using google’s email service (not least the fact that your email is scanned by google to enable them to target you with their adverts).

Permanent link to this article: https://baldric.net/2009/11/01/a-free-service-is-worth-exactly-what-you-pay-for-it/

call me by name, or call me by value

The old saw about “real” programmers versus the rest (known as “quiche eaters”) was originally summarised beautifully in the classic letter to the editor of Datamation in July 1983 entitled “real programmers don’t use pascal”.

Similar religious (i.e. irrational, but deeply held) positions are taken around various other “lifestyle” choices, such as the equally classic emacs vs vi argument. (For the record I’m in the vi camp – you know it makes sense). So I was delighted to stumble across this from xkcd.

real_programmers_cartoon

Again, my thanks to xkcd.

Permanent link to this article: https://baldric.net/2009/10/29/call-me-by-name-or-call-me-by-value/

handbags

It would appear that I may have been unnecessarily concerned about the accuracy of the profiling data held on me by the commercial sites I use. In my inbox today I found the following email from Amazon:

“As a valued Amazon.co.uk customer, we thought you might be interested in visiting our website dedicated to shoes and handbags, Javari.co.uk.

Javari.co.uk offers Free One-Day Delivery, Free Returns, and a 100% Price Match Guarantee.

Welcome to Javari.co.uk”

I don’t know whether to feel reassured at Amazon’s failure to understand me or disappointed that the considerable resource they have at their disposal can get me so wrong.

Permanent link to this article: https://baldric.net/2009/10/28/handbags/

logrotate weirdness in debian etch

I have two VPSs, both running debian. One runs lenny, the other runs etch. The older etch install runs fine, and is much as the supplier delivered it. Until now I have not had cause to consider the need to upgrade the etch install to lenny because it “just worked”. But today I noticed for the first time a very odd difference between the two machines. A difference which had me scratching my head, and reading too many man entries, for some long time before I found the answer.

For reasons I don’t need to go into, I log all policy drops in my iptables config to a file called “/var/log/firewall”. This file is (supposedly) rotated weekly. The logrotate and cron entries on both machines are identical. The entry in “/etc/logrotate.d/firewall” looks like this:

/var/log/firewall {
rotate 6
weekly
mail logger@baldric.net
compress
missingok
notifempty
}

The (standard) file “/etc/logrotate.conf” simply calls the firewall logrotate file out of the included directory “/etc/logrotate.d”. The “/etc/cron.daily/logrotate” file (which calls the logrotate script) is also standard and simply says:

#!/bin/sh

test -x /usr/sbin/logrotate || exit 0
/usr/sbin/logrotate /etc/logrotate.conf

and the (again standard) crontab file says:

# /etc/crontab: system-wide crontab
# Unlike any other crontab you don't have to run the `crontab'
# command to install the new version when you edit this file
# and files in /etc/cron.d. These files also have username fields,
# that none of the other crontabs do.

SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

# m h dom mon dow user command
54 * * * * root cd / && run-parts --report /etc/cron.hourly
55 4 * * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
36 5 * * 7 root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
3 3 5 * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )
#
#

So far so simple, and you would expect the file “/var/log/firewall” to be rotated once a week at 04.55 on a sunday morning. Wouldn’t you.

Well, on lenny, you’d be right. But on the etch machine the file was rotated daily, at a time completely unrelated to the crontab entry. It turns out that there is a bug in the way etch handles log rotation: syslogd does its own log rotation rather than using logrotate, and that rotation overrides the logrotate entries run out of cron. I found this after much searching (and swearing).

See the bug report at bugs.debian.org and this entry which pointed me there.
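
If you want to confirm which mechanism is actually touching a log on a box like this, a couple of quick checks are usually enough (a sketch; paths are as found on a stock etch install):

# is syslog writing (and therefore potentially rotating) the file?
grep firewall /etc/syslog.conf

# which cron-driven scripts mention the file, and which rotate everything syslog knows about?
grep -l firewall /etc/logrotate.d/*
grep -rl syslogd-listfiles /etc/cron.daily /etc/cron.weekly

On etch, as I understand the bug report, it is the sysklogd cron scripts which cause the surprise: they rotate every file mentioned in /etc/syslog.conf regardless of what logrotate has been told.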

I love standards.

Permanent link to this article: https://baldric.net/2009/10/18/logrotate-weirdness-in-debian-etch/

where has my money gone

Like most ‘net users I know these days, I conduct most of my financial transactions on-line. But on-line banking is a high risk activity, particularly if you use the “default” OS and browser combination to be found on most PCs. I don’t, but that doesn’t make me invulnerable, just a slightly harder target. So attempts by the banks to make it harder for the bad guys to filch my money are welcome. Many banks seem to be taking the two factor authentication route by supplying their customers with a hardware token of some kind to be used in conjunction with the traditional UID/password.

I have just logged on to my bank to be greeted with a message that they will shortly be introducing a one-time password system. Apparently this system requires me to register my mobile phone number with the bank. Thereafter, for certain “high risk” transactions (such as setting up a new payment to an external account) in addition to requiring my normal UID/password, the bank will send a one-time password to my mobile which I will have to play back to the bank via the web site before the transaction will be authorised. Sounds reasonable? Maybe. Maybe not. I can see some flaws – not least the obvious one that I have to have a mobile phone (probably not an unreasonable assumption) and that I have to be prepared to register that with the bank (slightly less reasonable). But my biggest concern is that this approach fails to take account of the fact that “people do dumb things” (TM).

The bank’s FAQ about the new system says:

“We have decided to use your mobile phone for extra security so there is no need to carry around a card reader to use e-banking. This also provides extra security as it is unlikely a fraudster will be able to capture both your internet log on credentials as well as your mobile phone.”

I have a problem with that assumption. I know a lot of otherwise very smart people who use their “smart” phones as a central repository of a huge amount of difficult to remember personal information. These days it is very rare for anyone to actually even remember a friend’s ‘phone number. Why bother – just scroll down to “john at work” and press call. These same people store names, addresses, birthday reminders, and yes, passwords for the umpteen web services they use, on the same device. That ‘phone may even be used to log on to the website that requires the password. Indeed, it is entirely plausible that many people will use their ‘phone to log on to their bank when out and about simply to make exactly the kind of transaction my bank deems “high risk”, i.e. to transfer funds from one account to another so that they can make a cash withdrawal from an ATM without incurring charges.

How many mobiles are lost or stolen every day?

lost-pda

Permanent link to this article: https://baldric.net/2009/10/15/on-line-banking-security/

debian on a DNS-313

I bought another new toy last week – a D-Link DNS 313 NAS.

D-Link DNS-313

Actually, this was a mistake because what I really wanted was the DNS-323. I just wasn’t careful enough at the time. Quite apart from having space for two 3.5″ SATA hard drives instead of just one, the 323 is a very different beast to its smaller (and much cheaper) sibling.

Martin Michlmayr has a nice guide to installing debian on a 323. Given that the 323 has a faster processor and more RAM than a slug and it can take two internal SATA disks rather than just the external USBs it looks like an attractive option for a new debian based server. Pity then that I bought the wrong one. My excuse is that I thought the only difference was in the disk capacity and I was prepared to settle for just 1TB of store. The (normally reliable) owner of the shop where I bought the beast was also adamant that the disk capacity was the only difference between the two devices. I should have known better than to succumb to what was essentially an impulse buy when I wasn’t really intending to buy a new NAS at the time (I was in the shop for something else and picked up the 313 because I recalled reading Martin’s pages recently).

Once I got the 313 home of course and checked the specs it looked as if I would be stuck with the D-Link supplied OS. However, a bit of searching turned up the DSM-G600, DNS-323 and TS-I300 Hack Forum which has a series of articles on installing debian on a 313 (despite the forum title). A forum contributor called “CharminBaer” has put together a nice tarball of debian lenny which allows the user to replace the D-Link OS without actually reflashing the device. This means that the original bootloader is retained but the device boots into the replacement system on disk. The nicest part of this installation method is that there is almost no risk of bricking the device because the installation simply entails copying the tarball onto the disk over a USB connection, extracting the files and then booting into a shiny new Lenny install. Result.

The tarball can be found here, and the installation instructions are here.
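
In outline, the mechanics look roughly like this (a sketch only: the device node, mount point and tarball name are illustrative, and the forum instructions have the real partition layout):

# with the 313's disk connected over USB to a linux box and its root
# partition showing up as (say) /dev/sdb2
mount /dev/sdb2 /mnt
tar -C /mnt -xzpf debian-lenny-dns313.tar.gz
umount /mnt

Put the disk back in the 313 and, because the original bootloader is untouched, it simply boots into the new system on the disk.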

Many thanks to “CharminBaer”.

Permanent link to this article: https://baldric.net/2009/10/03/debian-on-a-dns-313/

debonaras demise?

Sadly it seems that the debonaras wiki is no more. Rod Whitby had done some excellent work in pulling together a site which consolidated useful information about low cost network storage devices (alternatives to the slug) which could be made to run Debian. Unfortunately the site was a continual target for wiki spam bots and malcontents and it obviously became high maintenance. About a year ago I corresponded with Rod about the spam activity after he introduced password controlled access to some of the main target pages. I even assisted in moderation and maintenance of the site for a while when it became clear that Rod was getting fed up of the work load for no obvious benefit. It now looks as if he has decided to let the domain lapse after all.

Permanent link to this article: https://baldric.net/2009/10/03/292/

abigail’s party

In today’s Guardian, Charlie Brooker ranted rather eloquently about how much he hated smug Apple fans. Or did he?

Actually he made full on broadside swipes at both Apple and Microsoft’s approach to product marketing. One side is too slick and irritating, the other is way too uncool and irritating. But most of his ranting was aimed straight at the Microsoft Windows 7 Launch Party ads on Youtube.

These ads are mind numbingly painful to watch. Take a look at this:

Now take a look at the Cabel remix and read the comments posted. Apparently there is a large contingent of people out there who seriously believe that Microsoft deliberately made the videos such that people would blog about them saying how bad they were.

No way. Couldn’t possibly happen.

Permanent link to this article: https://baldric.net/2009/09/28/abigails-party/

wordpress security

At about the time I decided to move trivia to my own VPS, there was a lot of fuss about a new worm which was reportedly exploiting a vulnerability in all versions <= 2.8.3. Even the Grauniad carried some (rather inaccurate) breathless reporting about how the wordpress world was about to end and maybe we should all move to a rival product. Kevin Anderson said on the technology page of 9 September:

“.. the anxiety that this attack – one of a number in the past year against WordPress – has engendered may create enough concern for someone to spot the chance to create a rival product.”

Rubbish. Besides the fact that there are already several rivals to wordpress (blogger, typepad and livejournal in the hosted services domain alone, plus others such as textpattern if you wish to host your own) what Anderson apparently fails to realise is that all software contains bugs, and any software which is exposed to as hostile an environment as the internet is going to have problems. Live with it. Sure it would be good if we could find and fix all vulnerabilities before they are exploited, but as far as I am aware, that hasn't happened for any other piece of code more complex than printf("hello world\n"); (and even that could have problems). Why expect wordpress to be any different?

Amongst all the brouhaha I did find one site which offered some commentary and advice I could agree with, take a look at David Coveney’s “common sense” post of 6 September.

Permanent link to this article: https://baldric.net/2009/09/20/wordpress-security/

wordpress on lighttpd

I have commented in the past how I prefer lighttpd to apache, particularly on low powered machines such as the slug. I used to be a big apache fan, in fact I think I first used it at version 1.3.0 or maybe 1.3.1, having migrated from NCSA 1.5.1 (and before that Cern 3.0) back in the day when I ran web servers for a living. However, those days are long gone and my web server requirements are now limited to my home network and VPSs so I don’t need, nor do I want, the power of an industrial strength apache installation. In fact, my primary home web server platform (the slugs) struggles with a standard apache install. Lighttpd works very well on machines which are low on memory.

Having got used to lighttpd, it seemed a natural platform to use on my VPSs. And it performs very well on those machines for the kind of traffic I see. Moving trivia to my bytemark VPS meant that I had to take care of some minor configuration issues myself – most notably the form of permalinks I use. Most of the documentation about running your own wordpress blog assumes that you will be using apache (since that is the most popular web server software provided by shell account providers). For those of you who, like me, want to use lighttpd instead, the configuration details from my vhosts config file are below. Lighttpd is remarkably easy to configure for both virtual hosting in general, and for wordpress in particular. Note that I also choose to restrict access to wp-admin to my home IP address; this helps to keep the bad guys out.

Extract of “conf-enabled/10-simple-vhost.conf” file:

# redirect www. to domain (assumes that "mod_redirect" is set in server.modules in lighttpd.conf)

$HTTP["host"] =~ "www.baldric.net" {
url.redirect = ( ".*" => "..")
}

#
# config for the blog
#
$HTTP["host"] == "baldric.net" {
# turn off dir listing (you can do this globally of course, but I choose not to.)
server.dir-listing = "disable"
#
# do the rewrite for permalinks (it really is that simple)
#
server.error-handler-404 = "/index.php"
#
# restrict access to the wp-admin directory and wp-login to our ip address
#
$HTTP["remoteip"] !~ "123.123.123.123" {
$HTTP["url"] =~ "^/wp-admin/" {
url.access-deny =("")
}
$HTTP["url"] =~ "^/wp-login.php" {
url.access-deny =("")
}
}

}

# end

Enjoy.

Permanent link to this article: https://baldric.net/2009/09/12/wordpress-on-lighttpd/

are you human enough

In the course of moving trivia to its new home, I necessarily reviewed and edited a bunch of links. This meant I revisited some old friends – including Chris Samuel’s blog where I discovered this gem.

quantum-random-bit-generator-service

As Chris says, you’ve got to love the captcha.

Permanent link to this article: https://baldric.net/2009/09/09/are-you-human-enough/