ubuntu is free and it always will be

But we may ask you for a contribution.

Canonical have made another move in what is beginning to look ever more like outright commercialisation of ubuntu. On 9 October 2012, they added a new page to the “download” section titled “Tell us what we should do more……and put your money where your mouth is ;)” The page looks like this:

The sliders allow you to “target” your contribution to those areas of ubuntu which you feel deserve most reward (or conversely, you believe need most effort in improvement). The default is $2.00 for each of the eight radio button options (for a total of $16.00).

Now $16.00 is not a huge amount to pay for a linux distro of the maturity of ubuntu, but I’m not sure I like the way this is being done. Most distros offer a “donate” button somewhere on their website, but no other has placed it as prominently in the download process as canonical has chosen to do. I’m also a little bothered by the size and placement of the “Not now, take me to the download” option and I have a sneaking feeling that will become even less prominent over time.

Not surprisingly, some of the commentariat have taken great umbrage at this move (witness the comment over at El Reg of the form “Where is the option for “fix the known damn bugs and quit pissing around with GUI”?”) and I expect more hostility as and when users start fetching the new 12.10 release.

But an earlier move to monetise the ubuntu desktop worries me even more. Canonical’s link with Amazon through the ubuntu desktop search was, according to Mark Shuttleworth, perfectly sensible, because “the Home Lens of the Dash should let you find *anything* anywhere. Over time, we’ll make the Dash smarter and smarter, so you can just ask for whatever you want, and it will Just Work.” (So that’s alright then.) But the problem, which Shuttleworth clearly doesn’t understand, is that people don’t generally like having advertising targeted at them based on their search criteria. (cf. Google…..). What was worse, the search criteria were passed to Amazon in the clear. Think about that.

I share the views of Paul Venezia over at Infoworld where he says:

“But the biggest problem I have with the Amazon debacle is another comment by Shuttleworth: “Don’t trust us? Erm, we have root. You do trust us with your data already.” That level of hubris from the founder of Ubuntu, in the face of what is clearly a bad idea badly implemented, should leave everyone with a bad taste in their mouth. If this idea can make it to the next Ubuntu release, then what other bad ideas are floating around? What’s next? Why should we maintain that trust?

So fine, Mr. Shuttleworth. You have root. But not on my box. Not anymore.”

Ubuntu is already in decline following the way unity was foisted on the userbase. And Canonical has been likened to Apple in the past. Things can only get worse for ubuntu from here on. Way past time to move on.

Permanent link to this article: https://baldric.net/2012/10/14/ubuntu-is-free-and-it-always-will-be/

password lunacy

One of my fixed term savings accounts matured at the end of last week. This means that the paltry “bonus” interest rate which made the account ever so slightly more attractive than the pathetic rates generally available 12 months ago now disappears, and I am left facing a rate so far below inflation that I have contemplated just stuffing the money under my mattress. Rates generally on offer at the moment are pretty terrible all round, but I was certainly not going to leave the money where it was, so I decided to move it to a (possibly temporary) new home.

After checking around, I found a rate just about more attractive than my mattress and so set about opening the new account on-line. Bearing in mind that this account will be operated solely on-line and may hold a significant sum of money (well, not in my case, but it could) one would expect strong authentication mechanisms. I was therefore not reassured to be greeted by a sign up mechanism that asked for the following:

a password which must:

  • be between 8 and 10 characters in length;
  • contain at least one letter and one number;
  • not use common words or names such as “password”;
  • contain no special characters, e.g. £ or %.

(oh, and it is not case sensitive. That’s good then.)

Further, I am asked to provide:

  • a memorable date;
  • a memorable name; and
  • a memorable place.

I should note here that I initially failed the last hurdle because the place name I chose had fewer than the required 8 characters, and when I tried a replacement I found that I wasn’t allowed to use a place name with spaces in it (so place names like “Reading” or “Ross on Wye” are unacceptable to this idiot system).

I haven’t tried yet (the account is in the process of being set up and I will receive details in the post) but from experience with other similar accounts, I guess that the log-on process will ask for my password, then challenge me to enter three characters drawn from one of my memorable date/name/place. Oh, and the whole process is secured by a 128-bit SSL certificate.

My friend David wrote a blog piece a while ago about stupid password rules. The ones here are just unbelievable. Why must the password be limited to 8-10 characters? Why can’t I choose a long passphrase which fits my chosen algorithm (like David, I compute passwords according to a mechanism I have chosen which suits me)? Why must it be alphanumeric only? And why, for pity’s sake, should it be case insensitive? Are they deliberately trying to make it easy to crack?
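To put rough numbers on that last question, compare the largest keyspace these rules permit with that of a modest passphrase. This is a back-of-an-envelope sketch; the 16 character passphrase is simply my illustrative assumption:

# the bank's best case: 10 characters, case insensitive, letters and digits only (36 symbols)
echo '36^10' | bc
# 3656158440062976, i.e. about 3.7 x 10^15
# a 16 character case sensitive alphanumeric passphrase (62 symbols)
echo '62^16' | bc
# about 4.8 x 10^28, some thirteen orders of magnitude more work for an attacker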

As for the last three requirements, what proportion of the population do you think are likely to choose their birthdate, mother’s maiden name and place of birth (unless of course they were born in Reading, or London, or York, or Glasgow, or Burton on Trent or…)?

Answers on a postcard please.

Permanent link to this article: https://baldric.net/2012/10/13/password-lunacy/

a positive response

Whenever my logs show evidence of unwanted behaviour I check what has happened and, if I decide there is obviously hostile activity coming from a particular address, I will usually bang off an email to the abuse contact for the netblock in question. Most times I never hear a thing back, though I occasionally get an automated response.

Today, after finding over 23,000 automated attempts to access the admin page of trivia I sent off my usual notification to the netblock owner (“Hey, spotted this coming from you, a bit annoying”). Within a couple of hours I got an automated acknowledgement asking me to authenticate myself by response. A couple of hours after that, I got a human response saying “We’ve dealt with it. Your address is now blocked”. I’ve never had that helpful a response before.

The ISP was Russian.

Permanent link to this article: https://baldric.net/2012/10/05/a-positive-response/

iptables firewall for servers

I paid for a new VPS to run tor this week. It is cheaper, and offers a higher bandwidth allowance than my existing tor server so I may yet close that one down – particularly as I recently had trouble with the exit policy on my existing server.

In setting up the new server, the first thing I did after the base installation of debian and the first apt-get update/upgrade was to install my default minimum iptables firewall ruleset. This ruleset simply locks down the server to accept inbound connections only to my SSH port, and only from my trusted home network. All other connections are denied. I have a variety of different iptables rules depending upon the system (rules for headless servers are clearly different to those needed on desktops running X, for example). In reviewing my policy stance for this new server, I started comparing the rules I was using on other servers, both externally on the ’net and internally on my LAN. I found I was inconsistent. Worse, I was running multiple rulesets with no clear documentation, no obvious commonality where the rules should have been consistent, and no explanation of the differences. In short I was being lazy, but in doing so I was actually making things more difficult for myself because a) I was reinventing rulesets each time I built a server, and b) the lack of documentation and consistency meant that checking the logic of the rules was unnecessarily time consuming.
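For illustration, that default stance looks something like this. It is a minimal sketch rather than my actual script, and the network range and port number are examples, not my real values:

#!/bin/sh
# default deny on all chains
iptables -F
iptables -P INPUT DROP
iptables -P FORWARD DROP
iptables -P OUTPUT DROP
# allow loopback traffic
iptables -A INPUT -i lo -j ACCEPT
iptables -A OUTPUT -o lo -j ACCEPT
# allow SSH, but only from the trusted home network
TRUSTED="192.0.2.0/24"   # example range
SSHPORT="2222"           # example non-standard port
iptables -A INPUT -p tcp -s $TRUSTED --dport $SSHPORT -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -p tcp -d $TRUSTED --sport $SSHPORT -m state --state ESTABLISHED -j ACCEPT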

To add to my woes, I noted that in one or two cases I was not even filtering outbound traffic properly. This is a bad thing (TM), but not untypical of the approach I have often seen used elsewhere. Indeed, a quick check around the web will show that most sites offering advice about iptables rulesets concentrate only on the input chain of the filter table and ignore forwarding and output. To be fair, many sites discussing iptables seem to assume that IP forwarding is turned off in the kernel (or at least recommend that it should be) but very few that I could find even consider output filtering.

In my view, output filtering is almost as important as input filtering, if not equally so. Consider for example how most system compromises occur these days. Gone are the days when systems were compromised by remote attacks on vulnerable services listening on ports open to the outside world. Today, systems are compromised by malicious software running locally which calls out to internet based command and control or staging servers. That malicious software initially reaches the desktop through email or web browsing activity. This “first stage” malware is often small, aimed at exploiting a very specific (and usually completely unpatched) vulnerability, and goes unnoticed by the unsuspecting desktop user. The first stage malware will then call out to a server (usually over http or https) to both register its presence and obtain the next stage malware. That next stage will give the attacker greater functionality and persistence on the compromised system. It is the almost ubiquitous ability of corporate desktops to connect to any webserver in the world that has led to the scale of compromise we now routinely see.

But does output filtering matter on a server? And does it really matter when that server is running linux and not some other proprietary operating system? Actually, yes, it matters. And it matters regardless of the operating system. There is often a disconcerting smugness among FOSS users that “our software is more secure than that other stuff – we don’t need to worry”. We do need to worry. And as good net citizens we should do whatever we can to ensure that any failures on our part do not impact badly on others.

I’m afraid I was not being a good net citizen. I was being too lax in places.

If your linux server is compromised and your filtering is inadequate, or non-existent, then you make the attacker’s job of obtaining additional tools easy. Additionally, you run the risk of your server being used to attack others, because you have failed to prevent outbound malicious activity: port scanning, DoS, email spamming, or the running of IRC or any other service the attacker wants on your server (for which you pay the bills). Of course if the attacker has root on your box, no amount of iptables filtering is going to protect you. He will simply change the rules. But if he (or she) has not yet gained root, and his privilege escalation depends upon access to the outside world, then your filters may delay him enough to give you time to take appropriate recovery action. Not guaranteed of course, but at least you will have tried.

So how can your server be compromised? Well, if you get your input filtering wrong and you run a vulnerable service, you could be taken over by a ‘bot. There are innumerable ‘bots out there routinely scanning for services with known vulnerabilities. If you don’t believe that, try leaving your SSH port open to the world on the default port number and watch your logs. Fortunately for us, most distros these days ship with the minimum of services enabled by default, often not even SSH. But how often have you turned on a service simply to try something new? And how often did you review your iptables rules at the same time? And have you ever used wget to pull down some software from a server outside your distro’s repository? And did you then bother to check the MD5 sum on that software? Are you even sure you know fully what that software does? Do you routinely su to root to run software simply because the permissions require that? Do you have X forwarding turned on? Have you ever run X software on your server (full disclosure – I have)? Ever run a browser on that? In the corporate world I have even seen sysadmins logged in to servers which were running a full desktop suite. That way lies madness.

Believe me, there are innumerable ways your server could become compromised. What you need to do is minimise the chances of that happening in the first place, and mitigate the impact if it does happen. Which brings me back to iptables and my configuration.

The VM running trivia is also my mailserver. So this server has the following services running:

  • a mail server listening on port 25;
  • an http/https server listening on ports 80 and 443;
  • my SSH server listening on a non-standard port;
  • an IMAPS/POP3S server listening on ports 993 and 995.

My tails mirror only has port 80 and my non-standard SSH port open, my tor server has ports 80, 9001 and my non-standard SSH port open, and of course some of my internal LAN servers listen on ports such as 53, 80, 443, 2049, (and even occasionally on 139 and 445 when I decide I need to play with samba, horrible though that is). I guess this is not an unusual mix.

My point here, though, is that not all of those ports need to be accessible to all network addresses. On my LAN, none of them needs to be reachable from anywhere other than my selected internal RFC1918 addresses. My public servers only need to be reachable over SSH from my LAN (if I need to reach one of them when I am out, I can do so via a VPN back into my LAN) and, given that my public servers are on different networks, they in turn do not need to reach the same DNS servers or distro repositories (one of my ISPs runs their own distro mirror. I trust that. Should I?). Whilst the iptables rules for each of these servers inevitably need to be different, the basic rule configuration should really be the same (for example, all should have a default drop policy, none need allow inbound connections to any non-existent service, none need allow broadcasts, and none need access to anything other than named DNS servers, NTP servers etc.) so that I can be sure each ruleset does what I think it should do. My rules didn’t conform to that sort of approach. They do now.

Having spent some time considering my policy stance, I decided that what I needed was a single iptables script that could be modified quite simply, and clearly, in a header stating the name of the server, the ports it needed open or needed access to, and the addresses of any servers it trusted or needed to reach. This turned out to be harder to implement than I at first thought it should be.
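The shape I was aiming for is something like the sketch below. The names and values here are illustrative only, not those from my actual script:

# ---- per-server header: the only part that changes between machines ----
SSHPORT="2222"                            # example non-standard SSH port
TRUSTED="192.0.2.0/24"                    # network allowed to reach SSH
SERVICES="25 80 443 993 995"              # public services offered by this host
NAMESERVERS="198.51.100.1 198.51.100.2"   # the only DNS servers we may query
# ---- generic rules below here: identical on every server ----
for port in $SERVICES; do
    iptables -A INPUT -p tcp --dport $port -m state --state NEW,ESTABLISHED -j ACCEPT
    iptables -A OUTPUT -p tcp --sport $port -m state --state ESTABLISHED -j ACCEPT
done
for ns in $NAMESERVERS; do
    iptables -A OUTPUT -p udp -d $ns --dport 53 -j ACCEPT
    iptables -A INPUT -p udp -s $ns --sport 53 -j ACCEPT
done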

Consider again this server. It should be possible to nail it down so that it only allows new or established inbound connections to the ports listed, and only allows outbound established traffic in reply to those inbound connections. Further, it should not call out to any servers other than my DNS/NTP servers and distro repositories. Easy. But not so. Mail is awkward for example because we have to cater for inbound to port 25 from anywhere as well as outbound to port 25 anywhere. That feels a bit lax to me, but it is necessary unless we connect only to our ISP’s mailserver as a relay. Worse, when I first applied my new tight policy I found that my wordpress installation slowed to a crawl in certain circumstances. Here it transpired that I had forgotten that I run the akismet plugin, which needs access to four akismet servers. (Question. Do I need to continue to run akismet? What are the costs/benefits?) It is conceivable that other plugins will have similar requirements. I also noticed that I had over thirty entries for rpc servers in my wordpress “Update Services” settings (this lists rpc servers you wish to automatically notify about posts/updates on your blog). Of course WP was attempting to reach those servers and failing. So I found myself adding exceptions to an initially simple rulebase. I don’t like that. And what if the IP addresses of those servers change?

So I actually ended up with two possible policy stances, which I called “tight” and “loose”. The first attempts to limit all access to known services and servers (with the obvious exception of allowing inbound connections to public services). The second takes a more permissive stance: it recognises that it may not be possible to list all the servers we must allow connection to, but limits those connections to particular services (so, for example, whilst it will allow outbound connections only to DNS on one or two named servers, it will allow new outbound connections to any server on, say, port 80). I actually don’t like this, for fairly obvious reasons, but it is at least more restrictive than the usual “allow anything to anywhere”.
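The difference between the two stances comes down to rules of the following shape. Again this is a sketch only; $AKISMET is a hypothetical variable holding the addresses of the handful of hosts I had decided to trust:

# "tight": new outbound web connections only to named, trusted hosts
for host in $AKISMET; do
    iptables -A OUTPUT -p tcp -d $host --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
done
# "loose": new outbound web connections to anywhere, but only on port 80
iptables -A OUTPUT -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
# both stances still need the return path for established traffic
iptables -A INPUT -p tcp --sport 80 -m state --state ESTABLISHED -j ACCEPT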

Others may find these scripts useful so I have posted them here: iptables.tight.sh and iptables.loose.sh. Since the scripts must be run at boot time they should be run out of one of your boot run control scripts (such as /etc/init.d/rc.local) or at network initialisation as a script in /etc/network/if-up.d. Before doing so, however, I strongly advise you to test them on a VM locally, or at least on a machine to which you have console access. Locking yourself out of a remote VM can be embarrassing.

By way of explanation of the policy stances taken, I have posted separate descriptions of each at tight and loose.

Comments, feedback, suggestions for improvement or criticism all welcome.

Permanent link to this article: https://baldric.net/2012/09/09/iptables-firewall-for-servers/

Neil Armstrong

I suppose it is inevitable that your heroes die as you get older. I was just finishing my O levels when Armstrong took his “one small step”. I can remember clearly looking up at the moon on that day in 1969, in awe that there were two human beings on that satellite at the very moment I was watching it.

To a boy weaned on a diet of pulp SF where lunar landings were commonplace and the Grey Lensman fought the Boskonian criminals across the Galaxy, Armstrong, Aldrin and Collins were the real deal. True heroes, breaking boundaries and setting new frontiers. Now, as a middle aged man, I know that all too often your heroes can turn out to have clay feet. Not so with Neil Armstrong. A self professed “nerdy engineer”, he remains to me, as to millions of others of my generation, an inspiration.

He died on Saturday 25 August 2012 at the age of 82.

Permanent link to this article: https://baldric.net/2012/08/28/neil-armstrong/

my russian fanbase

My readership in Russia appears to be growing. For some reason I seem to be getting a lot of hits from Russian domains on my posts and pages about egroupware. And my referer logs show a lot of inbound connections from domains in the .ru TLD. Those websites I have checked appear to be technical, or partly technical, bulletin board type sites equivalent to the scream over here. Intriguingly, counterize, which I have recently updated, shows the top three countries hitting trivia as USA, China and the Russian Federation, in that order. The UK is fourth after the Netherlands.

Today I received my first comment (at least I think it is a legitimate comment rather than spam) from a russian speaker. That comment, on my old post “from russia with love“, translates as “Accidentally stumbled on your blog. Now I will always see. I hope not disappoint and further / Thanks, good article. Subscribed.”

Thank you притчи онлайн. Enjoy.

Permanent link to this article: https://baldric.net/2012/08/28/my-russian-fanbase/

tails has not been hacked

I run a tails mirror on one of my VMs. Earlier this week there was a flurry of anxious comment on the tails forum suggesting that the service had been “hacked”. Evidence pleaded in support of that theory included the facts that file timestamps on some of the tails files varied across mirrors, one of the mirrors resolved to a Pirate Bay mirror, and the tails signing key had apparently changed.

Well, none of that is necessarily proof of hostile behaviour. In fact, good old cock-up wins out over conspiracy again. As can be seen from the tails admin comment over at the forum, human error (followed by panic reaction) is to blame.

I hold my hand up to a mistake which contributed to the problem. My rsync to the tails repository omitted the “-t” switch which would have preserved file modification times. In mitigation, I plead stupidity (and the fact that the tails mirror documentation also omitted that switch…..).
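For the record, the fix was simply to add the missing switch (or to use “-a”, which implies it). A sketch of the corrected invocation; the source module shown is a placeholder, not the real tails rsync address:

# -a implies -rlptgoD, so file modification times (-t) are preserved
rsync -av --delete rsync.example.org::tails /var/www/tails/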

Now fixed.

Permanent link to this article: https://baldric.net/2012/08/23/tails-has-not-been-hacked/

you are at 2001:db8::ff00:42:8329.

Verity Stob is having trouble getting a new IP address. What with the IPV4 address exhaustion problem, it would seem that the only alternative is IPV6. This is causing Verity some grief.

Stress brings out my unoriginal streak. I said: ‘Where am I?’

‘You are at 2001:db8::ff00:42:8329.’

‘What?’

‘Your new IP address at 2001:db8::ff00:42:8329.’ He had the rare gift of speaking hex and punctuation. ‘You wanted a new static IP address. Your government has arranged that you should get one.’

‘That… That’s not an IP address. That’s a malformed MAC address with extra rivets. You can’t… Ow! Stop! What are you injecting into me?’

‘Don’t worry about that Ms Stob. It’s a little something the Chinese have come up with. It suppresses the body’s natural resistance to incompletely established international standards. It’s quite safe – approved by NICE for treatment of both acute and chronic Luddism. But once more enough of the chitchat. We have some reeducation to do. Oddjob: the Powerpoint, please. Now. The 128 bits of the IP address are divided into a subnet prefix and a unique device ID…’

Absolutely delightful. A “malformed MAC address with extra rivets”. Sheer poetry.

Permanent link to this article: https://baldric.net/2012/08/21/you-are-at-2001db8ff00428329/

debian on a DNS-320

Back in 2009 I bought, on impulse, a D-Link DNS-313 thinking it was sufficiently similar to the 323 to enable me to install debian with some ease. As I noted at the time, however, I’d made a slight mistake and then had to settle for a compromise installation from a tarball rather than a full native install.

Recently I bought a slightly bigger brother to the 313 in the shape of a DNS-320 ShareCenter. Again, this box is not quite the same spec as the 323 (and hence is slightly less easy to flash with a debian installation) but at under £55.00 (albeit with no disks) it was too good a bargain to miss, particularly since I already had one spare terabyte SATA disk. What I hadn’t banked on, of course, was the terrible price I would have to pay for a second disk, but hey, I wanted to be able to set up RAID because I planned on making this new toy my main backup NAS.

Before parting with my money, I checked carefully that I would indeed be able to install my preferred OS. The 320 has an 800 MHz Marvell 88F6281 CPU from the Kirkwood family of SoCs (so is closely related to the sheevaplug) and 128 MB of RAM. Unfortunately, Martin Michlmayr’s site (which would normally be my first port of call) has nothing on the 320, but there are plenty of other sites offering advice on debian installation on this particular NAS. Martin does provide detailed instructions for the 323 of course, but that is based on the older Orion SoC.

D-link actually provides a complete build environment (available on its German ftp server) that lets you build your own firmware image. They also provide a rather useful build of debian squeeze on their Polish ftp site (strangely, nothing so useful on the UK ftp site though).

The first and most comprehensive set of information I found was on the 320 wiki at kood.org. Apart from proving a valuable technical resource itself, the site points to useful “howtos” on other sites such as Jamie Lentin’s excellent site which gives detailed instructions for building and installing debian images for both the DNS-320 and the 325, and the NAS Tweaks site which introduced me to the very useful “fonz fun_plug” concept.

The idea behind the fonz is to allow installation of non-standard software on a range of NAS devices. To quote from the NAS tweaks site tutorial page:

The Firmwares of various NAS-Devices includes a very interesting bonus: the user can execute a script (file) named “fun_plug” when the OS is booted. Unlike all the other Linux software which is loaded when the NAS boots, this file is located on Volume_1 of the hard disk rather than within the flash memory. This means the user can easily and safely modify the file because the contents of the flash memory is not changed. If you delete the fun_plug file (see here for instructions), or replace your hard disk, the modification is gone.

Fun_plug allows the user to start additional programs and tools on the NAS. A Berlin-based developer named “Fonz” created a package called “ffp” (Fonz fun_plug), which includes the script and some extra software which can be invoked by fun_plug.

Installation of fun_plug is easy and takes only a few steps. These steps should be performed carefully, as they depend on typed commands and running with “root” privileges.

What this means in practice is that the user can effectively use fun_plug to install a complete OS image (such as debian) into a chrooted environment on the NAS. This has the advantage of being easily reversible: you don’t have to dump the (sometimes useful) original firmware, and you don’t run much risk of bricking your device. So whilst the Jamie Lentin tutorial appealed to the techy in me, the pragmatist said that fun_plug looked a more interesting first approach, and the Fonz’s script in particular looked very useful. And, indeed, so it turned out.
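To give a flavour of the idea, a fun_plug script has roughly the following shape. This is a sketch of the general mechanism only, not Fonz’s actual script, and the paths are hypothetical (they differ between devices):

#!/bin/sh
# fun_plug: executed from Volume_1 by the stock firmware at boot
DEBIAN=/mnt/HD/HD_a2/debian    # hypothetical location of the debian image
# make the host's device and process trees visible inside the chroot
mount -o bind /dev $DEBIAN/dev
mount -o bind /proc $DEBIAN/proc
# start whatever services you want from within the chroot
chroot $DEBIAN /etc/init.d/ssh start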

I installed fun_plug 0.7 for ARM EABI and now have a cheap (if rather noisy to be honest) 1TB RAID1 NAS which both retains the D-Link firmware and gives me the additional functionality offered by a debian 6 installation. Muting that fan is now high on my ToDo list.

Permanent link to this article: https://baldric.net/2012/08/21/debian-on-a-dns-320/

what every iphone needs

I stumbled across this site today when following a link from an email on the topic of reflashing an android tablet with CyanogenMod. The guy who sent the email (to the ALUG list) had bought a generic 7″ android tablet. He was considering reflashing with CM9 and was asking for advice/guidance/gotchas or whatever before doing so.

The tablet actually looks quite reasonable, if a little pricey, though Mark did say that he hadn’t paid anywhere near the advertised price. But a casual wander around the rest of the Dee-Sign site led me to this page under the “cool stuff” category.

Forgive me if I weep.

Permanent link to this article: https://baldric.net/2012/08/20/what-every-iphone-needs/

the stainless steel rat bows out

I read today that Harry Harrison has died at the age of 87. Harrison was one of the greats of the SF glory years, alongside Heinlein (whose work he spoofed mercilessly), Philip Dick, Asimov, van Vogt, Ray Bradbury, Bob Sheckley, Doc Smith and a host of others I grew up with in the 60s and early 70s. His Slippery Jim diGriz was a wonderful companion.

Goodbye old friend.

Permanent link to this article: https://baldric.net/2012/08/15/the-stainless-steel-rat-bows-out/

oops

An attempted quick search this morning using ixquick over tor drew a blank. In fact I hit a brick wall as the screenshot below will show.

The commentary provided by ixquick is self-explanatory (click the image if you have difficulty reading the snapshot), but I can’t help feeling that this problem should have been foreseen and dealt with in advance. After all, google has long reacted badly to tor based searches so it is not as if the volume issue could not be predicted. And tor users tend to react badly to any “unexpected” results from tor usage. We are paranoid enough as it is……

Fortunately, a refresh using another exit node cured the temporary glitch and I got the results I wanted.

Permanent link to this article: https://baldric.net/2012/08/08/oops/

outlook goes public – linus approves

Microsoft has (re-)launched its free public email service (previously branded “hotmail” and “windows live”) under the brand “outlook”. Outlook has been Microsoft’s email client on the corporate desktop for many years now, so they may be hoping that the new look email product will benefit from the existing brand’s goodwill.

However, I noticed from a posting on El Reg today that whilst Microsoft were breathlessly boasting “One million people have signed up for a new, modern email experience at Outlook.com. Thanks!” they were apparently not being careful about differentiating between people and accounts. The Reg article notes that some new users have signed up to multiple accounts and/or accounts with unlikely names (such as steveballmer@outlook.com and satan@outlook.com). So I moseyed on over to the new service to take a look.

I am now the proud owner of the email account “linus-torvalds@outlook.com”.

I’m pretty sure that Linus himself would not want that particular address, but if he does, and Microsoft don’t delete it in a clean-up operation, then he is welcome to it. Personally I shan’t be using the service.

Permanent link to this article: https://baldric.net/2012/08/02/outlook-goes-public-linus-approves/

avoiding accidental google

Even though I set my default search engine to anything but google (usually ixquick, but sometimes its sister engine at startpage) I have occasionally been caught out by firefox’s helpful attempts to intervene if I mistakenly enter a search term in the URL navigation field (or just hit return too early). Firefox’s default action in such cases is to direct a search to google. This is not helpful to someone who actively wishes to avoid that.

The way to prevent this is to edit the firefox configuration thus:

- go to “about:config” in the navigation bar
- now search for the string “keyword.URL”
- right click on the returned option and select “modify”
- now enter “https://ixquick.com/do/search/?q=” and accept.

Now all mistakes will be sent to ixquick as searches and not to google.
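If you would rather set this from a file than through about:config, the same preference can be dropped into user.js in your firefox profile directory. A sketch, assuming a typical profile path (the directory name varies per installation):

# append the pref to your profile's user.js (adjust the profile directory name)
echo 'user_pref("keyword.URL", "https://ixquick.com/do/search/?q=");' >> ~/.mozilla/firefox/xxxxxxxx.default/user.js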

Permanent link to this article: https://baldric.net/2012/07/31/avoiding-accidental-google/

too much bling

Avid readers will note that I have reverted to a simpler, two column layout. Posts and pages are to the left, and additional navigation links are to the right. Some of my friends commented that the three column layout, with several images in the outer columns, was distracting (actually, one said “bleah!”). I left it for a while, but then noticed something odd in my stats. My hit rate has gone down, way down, since I changed the layout back in March. In fact, my current hit rate is less than half my original peak.

So, I conclude that I had indeed added “too much bling” and I have now cleaned up the layout. Let’s see if my readership approves (and improves).

Permanent link to this article: https://baldric.net/2012/07/28/too-much-bling/

coercion

David commented on my gpg upgrade post saying: “How does one ensure that they are not coerced into signing a transition statement with a new (but compromised) key?”.

Well, you can never be sure of that, and this is why even I can’t be sure I cannot be coerced:

My thanks as always to xkcd

Permanent link to this article: https://baldric.net/2012/07/24/coercion/

the accidental stupidity of good intentions

For some years now I have used what used to be the freecycle system to dispose of unwanted, but otherwise useful, items from my home. In return I have sometimes used the same mechanism to get hold of things like books which someone else wishes to get rid of. A couple of years or so ago, the UK freecycle organisation split from the US parent and was renamed “freegle”. Naming and politics aside (and there was some nasty politicking going on) the purpose and intentions of the UK freegle organisation continued to be honourable and useful. A lot of goods and material that would otherwise have ended up in landfill has been successfully recycled.

To date, UK freegle has been based on yahoo groups. Members subscribe to a freegle group covering their area and then send/receive email alerts about items offered or wanted. But yahoo groups is not an ideal mechanism for an organisation such as freegle, so alternatives are popping up – usually web based and often bespoke. My local group recently received lottery funding to help it establish just such a website.

Following an email from the group’s moderators about the intended change (and imminent closure of the old yahoo group) I signed up to the new system. Having done so I then set about editing my preferences and settings. One of the settings required is “postcode”, which the system uses with (an editable) radial distance in miles to determine which offers/requests to email to you. Alongside the required postcode is the option to include your full address. Entering your full address will obviate the need to add it later when giving details to other freeglers about how to get to your location to pick up offers. Address details are apparently necessary because all interaction between freeglers now takes place (and is moderated) via the website. Unlike the old system, whereby freeglers’ email addresses were exposed, users of the new system only see the freegle group address. (This, incidentally, is a “good thing” (TM)). However, one of the other settings on the preferences page is a checkbox marked “On holiday”.

Forgive me for thinking that this might not be a good idea.

Permanent link to this article: https://baldric.net/2012/07/22/the-accidental-stupidity-of-good-intentions/

gpg key upgrade

Following a recent discussion about gpg key signing on my local linux user group email list, one of the members pointed out that several of us (myself included) were using rather old 1024-bit DSA GPG keys with SHA-1 hashes. He recommended that such users should upgrade to keys with a minimum size of 2048 bits and a hash from the SHA-2 family (say SHA256).

I believe he is right. That is good advice and it is long past time that I upgraded. So, I have now created a new default GPG key of 4096 bits – that should last for a while. However, that leaves the problem of how to migrate from the old key to the new key when the old key has been in circulation since at least 2004.

Fortunately, Daniel Kahn Gillmor (dkg) has published a rather nice and useful how-to on his debian-administration blog. I used that guide, supplemented with some further guidance on the apache site to come up with a transition plan. If you wish to contact me securely in future, then please use my new GPG key. My signed transition statement is here. A copy is given below. That transition statement is signed with both my old and new keys so that people who have my old key may be sure (or as sure as they can be if they presume that my old key has not been compromised) that the new key is valid and a true means of secure communication with me.
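For anyone making the same move, the key steps look roughly as follows. This is a sketch only; the key IDs are the old and new IDs quoted in the statement below, and gpg prompts interactively for the key details:

# generate a new key (choose "RSA and RSA" and a length of 4096 bits when prompted)
gpg --gen-key
# sign the new key with the old one, so holders of the old key can verify the transition
gpg --default-key 10927423 --sign-key 5BADD312
# publish the updated keys
gpg --keyserver keys.gnupg.net --send-key 5BADD312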

(BTW, the way to sign a document with two keys is as follows:

gpg --clearsign --local-user $KEYID-1 --local-user $KEYID-2 filename

where $KEYID-1 and $KEYID-2 are the eight digit IDs of the old and new keys. This is not well documented in the GPG manual.)

Copy of transition statement

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1,SHA256

GPG transition statement – Friday 20 July 2012

I am moving my preferred GPG key from an old 1024-bit DSA key to a
new 4096-bit RSA key.

The old key will continue to be valid for some time, but I prefer all
new secure correspondence to be encrypted with the new key. I will be
making all new signatures with the new key from today.

This transition was aided by the excellent on-line how-to at:

https://www.debian-administration.org/users/dkg/weblog/48

This message is signed by both keys to certify the transition.

The old key was:

pub 1024D/10927423 2004-07-15

Key fingerprint = E8D2 8882 F7AE DEB7 B2AA 9407 B9EA 82CC 1092 7423

And the new key is:

pub 4096R/5BADD312 2012-07-20

Key fingerprint = FC23 3338 F664 5E66 876B 72C0 0A1F E60B 5BAD D312

I have signed my new key with the old key. You may get a copy of my
new key from my server at rlogin.net/keys/micks-new-public-key.asc

To fetch the new key, you can get it with:

wget -q -O- https://rlogin.net/keys/micks-new-public-key.asc

Or, to fetch my new key from a public key server, you can simply do:

gpg --keyserver keys.gnupg.net --recv-key 5BADD312

If you already have my old key, you can now verify that the new key is
signed by the old one:

gpg --check-sigs 5BADD312

If you don’t already know my old key, or you just want to be double
extra paranoid, you can check the fingerprint against the one above:

gpg --fingerprint 5BADD312

Please let me know if you have any trouble with this transition. If you
are also still using old 1024-bit DSA keys, you too may wish to consider
migrating your old key to a stronger version.

Best

Mick
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.11 (GNU/Linux)

iEYEARECAAYFAlAJms8ACgkQueqCzBCSdCM90wCeMo25AmWEdEEztjY635LPFtxB
qcMAn17NUSPZLPZAmknloWacWUsXFGndiQIcBAEBCAAGBQJQCZrPAAoJEAof5gtb
rdMSDyUQALIMRKcIZgaOqMPBA2o5juJTu0W9mICEaNMl4Cnf7kw5zZvwc+vu/9N0
pTYwgc0UmrG3Uy0rzBU53jf6coAqu3g2rqfWQ4+Ns03gGfdFCTYlKP0IlpntlVba
TxAaw52ScCLKOKCJK7atXxF0PvQCKK9wATbT1HgHZ6dHBzejEn4X308UzkgEzQ/l
Z1FQtgwkZrVd2QQlYBn6PxYrFqaH0rEBSKJWskAY05IalIwRXD6Pj7oq8BdwhU4t
cFK41afY74ZgAhvNzYs7Ge8Dk3Izj1RtN7nRnESD0ZZAFUG9M8smolE9f677xeOR
TRgVKEBhDN3JvKo+wuxgxsCpB1DD/W2yIs+b7Y140GvPvwGZjcF50tEP0KhZMiIK
/W1UxwNmQhDmTrhilL4o6efVaI1EZgyn6sdyycimrQ+0zm1o+TSntiF2o5Ulj+uY
JnME9WfnNWcfS9ezp6ZQ03YkhW5PVGaWg9KxPGN2hWOKtCBHJf1E5xOS5zy4kgc8
C9HovVp0N46MZleTBYE/v5JdJ5/yrktcSfYGA6jeOaBDHFb6qNkGcMQVlpNszKid
7a5Q4/rJ/Z75BMoVBaicNwUZpqHvCxDHMXjuw44RzG/QWca6ljxmSF/8ZFPggqJP
uTv3wKd0Y8i3DjWmAX/ps4viQEeDal7w5lqoJBA6YulQGnwqFAl5
=DzN2
-----END PGP SIGNATURE-----

Permanent link to this article: https://baldric.net/2012/07/20/gpg-key-upgrade/

RBS meltdown

I’ve been away on holiday during one of the most public, and potentially most expensive, IT screwups for some time. By now everyone will be aware of the meltdown in RBS/NatWest/Ulster Bank systems. Since my return, I’ve been catching up on some of the on-line commentary and analysis. I particularly liked this comment on El Reg in an article discussing how management might address the root causes of the fault:

“the decision to put the cheapest person they could find anywhere in the world in such a responsible position was a bad idea.”

That reminded me of Alan Shepard’s famous quote about the early US space programme:

“It’s a very sobering feeling to be up in space and realize that one’s safety factor was determined by the lowest bidder on a government contract.”

I guess some people are currently looking for new jobs.

Permanent link to this article: https://baldric.net/2012/07/07/rbs-meltdown/

fail

My new bank (which is actually one of the few remaining mutuals in the UK) sent me my voting forms for the AGM today (by postal mail). The information pack included details of how to vote on-line should I choose to do so, together with two unique “voting codes”: one of eight digits, the other of four alphanumeric characters. Sure enough, entering the requisite codes on the website allowed me access to the voting arena.

Once there, besides actually voting, I was presented with a checkbox saying: “Opt to receive future AGM information by email and you could win 1 of 10 prizes of £1000 (prize draw rules apply).” (Clearly they wish to encourage email as a cost saving measure). However, upon clicking the checkbox I was presented with a new box saying “By ticking this box, I agree that in future the Society may email my voting codes to my email address below and that I can access the Notice of AGM and Summary Financial Statement and give my voting instructions on the Society’s secure online voting site via a link sent to my email address. To access your AGM documents by email in future years please provide your email address below:”

Oh dear, a repetition of the BCS stupidity of a few years ago. No way do I want supposedly secure, unique, voting codes sent to me via email, unless of course the bank can encrypt with my public key.

Guess what?

Permanent link to this article: https://baldric.net/2012/06/19/fail/

microsoft goes all canonical

It would seem that Microsoft has been taking a look at the world of linux and decided that the best way to take on the emerging desktop threat is to emulate the competition. Unfortunately for them, they seem to have decided to emulate the stupidest of decisions recently taken by Canonical and have completely redesigned the windows desktop interface in windows 8 to make it look like a mobile phone. [Note, that link goes to techradar. I would link to the official windows 8 pages, but they insist on using silverlight….].

When I first saw the early reviews of the new metro interface I couldn’t quite believe that Microsoft (who are not usually suicidal) could have made a mistake which so completely dwarfs the vista experience. So I downloaded a copy of the release preview and fired it up in a virtualbox VM. I chose the iso installation option because MS do not have the equivalent of a linux live distro and, since I do not run Microsoft software, the option of an in-place upgrade install was, fortunately, impossible for me. I say fortunately, because I cannot believe that anyone who selected that option and overwrote a working version of windows 7 will be at all happy.

Not that the iso install option was straightforward either. Microsoft, being Microsoft, cannot allow you to simply try out a pre-release version of their software. Instead, they make you jump through all sorts of unnecessary hoops to register and set up an account which will give you access to their on-line software repository. Why is completely beyond me. I had to give MS a name, postcode, date of birth, gender, the name of my first pet and two, yes two, separate email accounts before they would let me in. Even then, I couldn’t successfully complete registration (OK, I may not have been entirely truthful in my registration details, but two separate trashmail accounts should have been enough). The last hurdle I faced was the “click this link to confirm your account” email sent to the first of the two addresses I gave. Clicking on that link took me to a windows live sign in page – and there, of course, I got stuck. MS wouldn’t let me sign in using FF on linux, saying, unhelpfully:

“The Windows Live Network is unavailable from this site for one of the following reasons:

  • This site may be experiencing a problem
  • The site may not be a member of the Windows Live Network.

You can:

You can sign in or sign up at other sites on the Windows Live Network, or try again later at this site.”

Now if the windows 8 installation in my VM had allowed cut and paste from my desktop (as is possible with a linux VM) I could have just pasted the login URL into IE and continued from there. But no, not possible, and I really could not be bothered to transcribe a 177 character long random URL by hand. I wasn’t that interested in continuing.

Here’s why:

And this is what you see if you click on the “desktop” icon:

What happened to the task bar and the infamous “Start button”?

That interface is not going to go down at all well even with domestic users, let alone businesses which may have invested significant time, effort and money in previous versions of windows.

Ironically, one of the arguments used against moving from windows to a linux desktop in the enterprise is the “cost of change” notably in the cost of staff training. It looks to me as if Microsoft may just have handed the open source community its best ever opportunity to move into space previously sewn up by Redmond.

(Oh, and BTW, windows 8 is painfully slow in a VM).

Permanent link to this article: https://baldric.net/2012/06/17/microsoft-goes-all-canonical/

software is /not/ biological

I know language evolves, and I know that jargon from one domain is sometimes reused in another, completely unrelated domain, but I really, really do not like the increasing usage of the word “ecosystem” to refer to software/hardware or information systems. I think I first heard the word used in this way by a Microsoft guy in a presentation a few years ago. As I recall, what he was /actually/ referring to was Microsoft applications running on the windows OS. Only he called it the “windows ecosystem”, possibly in the belief that that somehow sounded cooler or more important. Since then, I have seen (and heard) the word used to describe everything from simple application software (the “MySQL ecosystem”) to a collection of security systems.

I’m sorry, but an ecosystem is a biological system in which a collection of living things interacts with its environment. So a coral reef or a rain forest is an ecosystem, but microsoft word used in the accounts department of a business most assuredly is not. I’m with the FSF here, but sadly a search for the phrase “software ecosystem” will get you no end of business techno-babble references.

Permanent link to this article: https://baldric.net/2012/05/27/software-is-not-biological/

stupid hunt

According to reporting today, Jeremy Hunt, the Secretary of State for Culture, Media and Sport, lobbied the Prime Minister in support of Rupert Murdoch’s bid for BSkyB. The report says:

“The inquiry heard that the culture secretary drafted the email on his private Gmail account on 19 November 2010 despite being warned by his officials that he should not intervene because the decision was being taken exclusively by Cable. In the memo he voiced concern that Cable, the business secretary, had referred the takeover to media regulator Ofcom, warning him that James Murdoch was “pretty furious” and that the government “could end up in the wrong place in terms of media policy as a result”.”

His Gmail account? Regardless of all Hunt’s other manifest deficiencies, that act alone means that the man is not fit to hold Ministerial Office.

Permanent link to this article: https://baldric.net/2012/05/25/stupid-hunt/

tor abuse

I have been running at least one tor exit node for about three years now. Over that period I have occasionally had to move provider following one or more abuse reports. Most ISPs like the quiet life, and you can’t really blame them for not wanting the hassle of dealing with complaints from other ISPs about apparent hostile activity originating from their networks. I have been with one provider for a couple of years and, until now, they have been understanding when they have received complaints and I have pointed them to my exit policy and my notice on the tor node itself. However, this week that changed. They have received two more reports of hostile activity, aimed apparently at Brazilian Government servers, in rapid succession. Following discussions with my provider I have now reluctantly agreed to shut down the exit policy completely. In future my tor node will relay only.
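For anyone needing to do the same, the change is a one line edit to torrc; the node then carries traffic for the network but allows nothing to exit from it:

# /etc/tor/torrc - relay only, refuse all exit traffic
ExitPolicy reject *:*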

This is a shame, but the only real alternative I was faced with was to shut it down completely and/or move yet again. I’m no longer prepared to do that.

For the record, the activity logged by the victims showed that some bozo was using tor (and popped out of my node) to scan servers with sqlmap. It is extremely disappointing to me that the tor network should be adversely affected by that sort of script kiddie activity.

Update on 24/05/12

I emailed the tor-relays list about my experience and rapidly received a half dozen or so “me too” replies. It would seem that someone has been heavily targeting Brazilian government web servers through tor.

Permanent link to this article: https://baldric.net/2012/05/22/tor-abuse/