why mysql crashed

Update to previous post

According to Bytemark, a bug in their BigV system caused disk corruption for a small number of customers. It seems that I was just one of the unlucky ones. I am assured that all is now well.

Permanent link to this article: https://baldric.net/2012/11/22/why-mysql-crashed/

forcing innodb recovery in mysql

Today I had a nasty looking problem with my mysql installation. At first I thought I might have to drop one or more databases and re-install. Fortunately, I didn’t actually have to do that in the end.

I first noticed a problem at around 15.45 today when I couldn’t collect my mail. My mail system on this VM uses postfix as the smtp component and dovecot for IMAPS/POP3S delivery. A quick check showed that the server was up and postfix and dovecot were both running. Nothing obviously wrong at first glance. However, a check on the mail log showed multiple entries of the form:

dovecot: auth-worker(3078): Error: mysql(localhost): Connect failed to database (mail): Can’t connect to local MySQL server through socket ‘/var/run/mysqld/mysqld.sock’

An attempt to connect to trivia also resulted in a “database error” message. So on checking again I noticed that of course mysql wasn’t running. Unfortunately, all my attempts to restart it failed. Further investigation of my logs (in particular, an ominously large /var/log/error) showed that I had a corrupt InnoDB page. As soon as mysql reached this it barfed and fell over. Not good. I don’t like database errors because I’m no database expert. But at least that explained the mail error. Both components of my mail system rely on a running mysql server because I store users, passwords, aliases, domain details etc. in mysql databases.

The largest database on this VM is, of course, the blog. I keep regular backups of that so in extremis I could dump the old database, reload from a backup and lose only my daily stats. But I was reluctant to do that without first checking to see if a repair was possible. I then spent a long time reading man pages, my O’Reilly MySQL book, and the on-line MySQL reference pages. The best clue I received was the message in the error log:

Nov 16 15:57:05 pipe mysqld: InnoDB: If the corrupt page is an index page
Nov 16 15:57:05 pipe mysqld: InnoDB: you can also try to fix the corruption
Nov 16 15:57:05 pipe mysqld: InnoDB: by dumping, dropping, and reimporting
Nov 16 15:57:05 pipe mysqld: InnoDB: the corrupt table. You can use CHECK
Nov 16 15:57:05 pipe mysqld: InnoDB: TABLE to scan your table for corruption.
Nov 16 15:57:05 pipe mysqld: InnoDB: See also https://dev.mysql.com/doc/refman/5.5/en/forcing-innodb-recovery.html
Nov 16 15:57:05 pipe mysqld: InnoDB: about forcing recovery.
Nov 16 15:57:05 pipe mysqld: InnoDB: Ending processing because of a corrupt database page.

That message appeared just before a load of stacktrace information which looked horribly meaningless to me but spoke of deep, unfathomable database wrongness. A read of the referenced mysql manual page didn’t initially reassure me overmuch either. It starts:

If there is database page corruption, you may want to dump your tables from the database with SELECT … INTO OUTFILE. Usually, most of the data obtained in this way is intact.

“Usually” eh? Hmmm.
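
Had it come to dumping, something along these lines would have been the route. This is just a sketch: the database name “blog” and table “wp_posts” are placeholders, and the output path needs to be somewhere the mysql user can write.

# dump a single table with SELECT ... INTO OUTFILE, as the manual suggests
mysql -u root -p -e "SELECT * FROM blog.wp_posts INTO OUTFILE '/tmp/wp_posts.txt';"
# or take a full logical dump of the database for a later reload
mysqldump -u root -p blog > /tmp/blog.sql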

As suggested in the documentation I added the line “innodb_force_recovery = 1” to the [mysqld] section of my /etc/mysql/my.cnf config file and restarted the server. I started with the lowest non-zero option in the hope that this would cause the least corruption if I had to dump the tables. This time, mysql came up when I started it, but the error log contained the following:

Nov 16 18:09:26 pipe mysqld: InnoDB: A new raw disk partition was initialized or
Nov 16 18:09:26 pipe mysqld: InnoDB: innodb_force_recovery is on: we do not allow
Nov 16 18:09:26 pipe mysqld: InnoDB: database modifications by the user. Shut down
Nov 16 18:09:26 pipe mysqld: InnoDB: mysqld and edit my.cnf so that newraw is replaced
Nov 16 18:09:26 pipe mysqld: InnoDB: with raw, and innodb_force_… is removed.

Now there is nothing in my my.cnf about “raw” or “newraw”, so I simply removed the “innodb_force_recovery” line and shut down and restarted mysql. Mysql started up without error, and without my having to dump any database. And no, I have no idea why. I can only assume that the force_recovery option forced some database repair as well as the documented forcing of the InnoDB storage engine to start.

And I don’t yet know what caused the problem in the first place, but since my logs show that the VM went down and restarted at around 14.36 I conclude that a failure somewhere in the VM system occurred during a database write and that screwed things up. I’m waiting for a response to my support call.

So. If you too ever face a database corruption similar to mine, do not panic. And do not attempt unnecessarily drastic database drops and reloads until you have at least tried the “innodb_force_recovery” option in your my.cnf configuration.
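
For the record, the whole sequence I followed boils down to something like the sketch below. Paths and the service command are as on my debian system and may well differ on yours; the log location in particular is an assumption.

# 1. add "innodb_force_recovery = 1" to the [mysqld] section of /etc/mysql/my.cnf
# 2. restart mysql and watch the error log
/etc/init.d/mysql restart
tail -f /var/log/mysql/error.log
# 3. while the server is up in recovery mode, dump anything you care about
mysqldump -u root -p --all-databases > /root/all-databases.sql
# 4. remove the innodb_force_recovery line from my.cnf, then restart again
/etc/init.d/mysql restart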

It may of course just be magic. Most database incantations are as far as I am concerned.

Permanent link to this article: https://baldric.net/2012/11/16/forcing-innodb-recovery-in-mysql/

using openvpn to bypass NAT firewalls

OpenVPN is a free, open source, general purpose VPN tool which allows users to build secure tunnels through insecure networks such as the internet. It is the ideal solution to a wide range of secure tunnelling requirements, but it is not always immediately obvious how it should be deployed in some circumstances.

Recently, a correspondent on the Anglia Linux User Group (ALUG) mailing list posed a question which at first sight seemed easy to answer. He wanted to connect from one internet connected system to another which was behind a NAT firewall (actually, it turned out to be behind two NAT firewalls, one of which he didn’t control, and therein lay the difficulty).

The scenario was something like this:

He wanted to connect from the system called “Client” in the network on the left to the secured system called “Host A” on the network on the right of the diagram. We can assume that both networks use RFC 1918 reserved addresses and that both are behind NAT firewalls (so routers A and C at the least are doing NAT).

Ordinarily, this would be pretty straightforward. All we have to do is run an SSH daemon (or indeed openVPN) on Host A and set up port forwarding rules on routers A and B to forward the connection to the host. So long as we have appropriate firewall rules on both the host and the routers, and the SSH/OpenVPN daemons are well configured, we can be reasonably confident that the inbound connection is secure (for some definition of “secure”). This is exactly the setup I use on my home network. I have an openVPN server running on one of my debian systems. When out and about I can connect to my home network securely over that VPN from my netbook.

However, as I noted above, the problem in this scenario is that the owner of Host A did not control the outermost NAT device (Router A) so could not set up the necessary rule. Here is where openVPN’s flexibility comes in. Given that both networks are internet connected, and can make outbound connections with little difficulty, all we need to do is set up an intermediary openVPN host somewhere on the net. A cheap VPS is the obvious solution here (and is the one I used). Here’s how to accomplish what we want:

  • install openVPN server on the VPS;
  • install openVPN client on Host A;
  • set up the openVPN tunnel from Host A to the VPS;
  • connect over SSH from Client to VPS;
  • connect (using SSH again) over the openVPN tunnel from the VPS to Host A.

Using SSH over an already encrypted tunnel may seem like overkill, but it has the advantage that we can leverage the existing security mechanisms on Host A (and we really don’t want a telnet daemon listening there).

Installing openVPN

There is already a huge range of openVPN “HowTos” out there so I won’t add to that list here. The very comprehensive official HowTo on the openVPN website covers all that you need to know to install and configure openVPN to meet a wide variety of needs, but it can be a bit daunting to newcomers. OpenVPN has a multitude of configuration options so it is probably best to follow one of the smaller, distro-specific howtos instead. Two I have found most useful are for debian and arch. And of course, Martin Brooks, one of my fellow LUGgers, has written quite a nice guide too. Note, however, that Martin’s configuration does not use client side certificates as I do here.

By way of example, the server and client configuration files I built when I was testing this setup are given below. Note that I used the PKI architectural model, not the simpler static keys approach. As the main howto points out, the static key approach doesn’t scale well and is not as secure as we’d like (keys must be stored in plain text on the server), but most importantly, it doesn’t give us perfect forward secrecy, so any key compromise would result in complete disclosure of all previous encrypted sessions.

Note also that I chose to change the keysize ($KEY_SIZE) in the file “vars” to 2048 from the default 1024. If you do this, the build of the CA certificate and server and client keys warns you that the build “may take a long time”. In fact, on a system with even quite limited resources, this only takes a minute or two.

Of course, it should go without saying that the build process should be done on a system which is as secure as you can make it and which gives you a secure channel for passing keys around afterwards. There is little point in using a VPN for which the keys have been compromised. It is also worth ensuring that the root CA key (“ca.key”, used for signing certificates) is stored securely away from the server. So if you build the CA and server/client certificates on the server itself, make sure that you copy the CA key securely to another location and delete it from the server. It doesn’t need to be there.

I chose /not/ to add passwords to the client certificates because the client I was testing from (emulating Host A) is already well secured (!). In reality, however, it is likely that you would wish to strengthen the security of the client by insisting on passwords.
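
For reference, the key build on a debian system using the easy-rsa 2.x scripts shipped with openVPN looks roughly like this. It is a sketch only: the working directory and the name “client1” are illustrative, and your distro may keep the easy-rsa examples elsewhere.

cd /etc/openvpn/easy-rsa       # copied from /usr/share/doc/openvpn/examples/easy-rsa/2.0
vi vars                        # set KEY_SIZE=2048 and the KEY_* defaults
. ./vars
./clean-all
./build-ca                     # creates keys/ca.crt and keys/ca.key - move ca.key somewhere safe
./build-key-server vps-server-name
./build-key client1            # passwordless; use ./build-key-pass to require a password
./build-dh                     # builds keys/dh2048.pem given KEY_SIZE=2048
openvpn --genkey --secret keys/ta.key    # the tls-auth HMAC key shared by both ends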

server.conf on the VPS

# openvpn server conf file for VPS intermediary

# Which local IP address should OpenVPN listen on (this should be the public IP address of the VPS)
local XXX.XXX.XXX.XXX

# use the default port. Note that we firewall this on the VPS so that only the public IP address of
# the network hosting "Host A" (i.e. the public address of "Router A") is allowed to connect. Yes this exposes
# the VPN server to the rest of the network behind Router A, but it is a smaller set than the whole internet.
port 1194

# and use UDP as the transport because it is slightly more efficient than tcp, particularly when routing.
# I don't see how tcp over tcp can be efficient.
proto udp

# and a tun device because we are routing not bridging (we could use a tap, but it is not necessary when routing)
dev tun

# key details. Note that here we built 2048 bit Diffie-Hellman keys, not the default 1024 (edit $KEY_SIZE in the
# file "vars" before build)
ca /etc/openvpn/keys/ca.crt
dh /etc/openvpn/keys/dh2048.pem
cert /etc/openvpn/keys/vps-server-name.crt
key /etc/openvpn/keys/vps-server-name.key

# now use tls-auth HMAC signature to give us an additional level of security.
# Note that the parameter "0" here must be matched by "1" at the client end
tls-auth /etc/openvpn/keys/ta.key 0

# Configure server mode and supply a VPN subnet for OpenVPN to draw client addresses from.
# Our server will be on 172.16.10.1. Be careful here to choose an RFC1918 network which will not clash with
# that in use at the client end (or Host A in our scenario)
server 172.16.10.0 255.255.255.0

# Maintain a record of client virtual IP address associations in this file. If OpenVPN goes down or is restarted,
# reconnecting clients can be assigned the same virtual IP address from the pool that was previously assigned.
ifconfig-pool-persist ipp.txt

# We do /not/ push routes to the client because we are on a public network, not a reserved internal net.
# The default configuration file allows for this in the (example) stanzas below (commented out here)
# push "route 192.168.1.0 255.255.255.0"
# push "route 192.168.2.0 255.255.255.0"

# Nor do we tell the client (Host A) to redirect all its traffic through the VPN (as could be done). The purpose of this
# server is to allow us to reach a firewalled "client" on a protected network. These directives /would/ be useful if we
# wanted to use the VPS as a proxy to the outside world.
# push "redirect-gateway def1"
# push "dhcp-option DNS XXX.XXX.XXX.XXX"

# Nor do we want different clients to be able to see each other. So this remains commented out.
# client-to-client

# Check that both ends are up by "pinging" every 10 seconds. Assume that remote peer is down if no ping
# received during a 120 second time period.
keepalive 10 120

# The cryptographic cipher we are using. Blowfish is the default. We must, of course, use the same cipher at each end.
cipher BF-CBC

# Use compression over the link. Again, the client (Host A) must do the same.
comp-lzo

# We can usefully limit the number of allowed connections to 1 here.
max-clients 1

# Drop root privileges immediately and run as unprivileged user/group
user nobody
group nogroup

# Try to preserve some state across restarts.
persist-key
persist-tun

# keep a log of the status of connections. This can be particularly helpful during the testing stage. It can also
# be used to check the IP address of the far end client (Host A in our case). Look for lines like this:
#
# Virtual Address,Common Name,Real Address,Last Ref
# 172.16.10.6,client-name,XXX.XXX.XXX.XXX:nnnnn,Thu Oct 25 18:24:18 2012
#
# where "client-name" is the name of the client configuration, XXX.XXX.XXX.XXX:nnnnn is the public IP address of the
# client (or Router A) and port number of the connection. This means that the actual client address we have a connection
# to is 172.16.10.6. We need this address when connecting from the VPS to Host A.
status /var/log/openvpn-status.log

# log our activity to this file in append mode
log-append /var/log/openvpn.log

# Set the appropriate level of log file verbosity
verb 3

# Silence repeating messages. At most 20 sequential messages of the same message category will be output to the log.
mute 20

# end of server configuration

Now the client end.

client.conf on Host A

# client side openvpn conf file

# we are a client
client

# and we are using a tun interface to match the server
dev tun

# and similarly tunneling over udp
proto udp

# the ip address (and port used) of the VPS server
remote XXX.XXX.XXX.XXX 1194

# if we specify the remote server by name, rather than by IP address as we have done here, then this
# directive can be useful since it tells the client to keep on trying to resolve the address.
# Not really necessary in our case, but harmless to leave it in.
resolv-retry infinite

# pick a local port number at random rather than bind to a specific port.
nobind

# drop all privileges
user nobody
group nogroup

# preserve state
persist-key
persist-tun

# stop warnings about duplicate packets
mute-replay-warnings

# now the SSL/TLS parameters. First the server then our client details.
ca /home/client/.openvpn/keys/server.ca.crt
cert /home/client/.openvpn/keys/client.crt
key /home/client/.openvpn/keys/client.key

# now add tls auth HMAC signature to give us additional security.
# Note that the parameter to ta.key is "1" to match the "0" at the server end.
tls-auth /home/client/.openvpn/keys/ta.key 1

# Verify that the certificate presented by the server is marked as a server certificate.
remote-cert-tls server

# use the same crypto cipher as the server
cipher BF-CBC

# and also use compression over the link to match the server.
comp-lzo

# and keep local logs
status /var/log/openvpn-status.log
log-append /var/log/openvpn.log
verb 3
mute 20

# end

Tunneling from Host A to the VPS.

Now that we have completed configuration files for both server and client, we can try setting up the tunnel from “Host A”. In order to do so, we must, of course, have openVPN installed on Host A and we must have copied the required keys and certificates to the directory specified in the client configuration file.

At the server end, we must ensure that we can forward packets over the tun interface to/from the eth0 interface. Check that “/proc/sys/net/ipv4/ip_forward” is set to “1”; if it is not, then (as root) do “echo 1 > /proc/sys/net/ipv4/ip_forward” and then ensure this is made permanent by uncommenting the line “net.ipv4.ip_forward=1” in /etc/sysctl.conf. We also need to ensure that neither the server nor the client blocks traffic over the VPN. According to the openVPN website, over 90% of all connectivity problems with openVPN are caused not by configuration problems in the tool itself, but by firewall rules.
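
In other words, something like this on the VPS (as root):

cat /proc/sys/net/ipv4/ip_forward        # should print 1
echo 1 > /proc/sys/net/ipv4/ip_forward   # enable it for the running kernel
# then uncomment "net.ipv4.ip_forward=1" in /etc/sysctl.conf and reload:
sysctl -p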

If you have an iptables script on the server such as I use, then ensure that:

  • the VPN port is open to connections from the public address of Host A (actually Router A in the diagram);
  • you have modified the anti-spoof rules which would otherwise block traffic from RFC 1918 networks;
  • you allow forwarding over the tun interface.

The last point can be covered if you add rules like:

$IPTABLES -A INPUT -i tun0 -j ACCEPT
$IPTABLES -A OUTPUT -o tun0 -j ACCEPT
$IPTABLES -A FORWARD -o tun0 -j ACCEPT

This would allow all traffic over the tunnel. Once we know it works, we can modify the rules to restrict traffic to only those addresses we trust. If you have a tight ruleset which only permits ICMP to/from the eth0 IP address on the VPS, then you may wish to modify that to allow it to/from the tun0 address as well or testing may be difficult.
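
Once the tunnel is known to work, the rules above can be tightened to something like the following. This is only a sketch, assuming (as in the status log example in the server configuration above) that 172.16.10.6 is the address the VPN hands to Host A:

$IPTABLES -A INPUT -i tun0 -s 172.16.10.6 -j ACCEPT
$IPTABLES -A OUTPUT -o tun0 -d 172.16.10.6 -j ACCEPT
$IPTABLES -A FORWARD -o tun0 -d 172.16.10.6 -j ACCEPT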

Many “howtos” out there also suggest that you should add a NAT masquerade rule of the form:

“$IPTABLES -t nat -A POSTROUTING -s 172.16.10.0/24 -o eth0 -j MASQUERADE”

to allow traffic from the tunnel out to the wider network (or the internet if the server is so connected). We do not need to do that here because we are simply setting up a mechanism to allow connection through the VPS to Host A and we can use the VPN assigned addresses to do that.

Having modified the server end, we must make similarly appropriate modifications to any firewall rules at the client end before testing. Once that is complete, we can start openVPN at the server. I like to do this using the init.d run control script whilst I run a tail -f on the log file to watch progress. Watch for lines like “TUN/TAP device tun0 opened” and “Initialization Sequence Completed” for signs of success. We can then check that the tun interface is up with ifconfig, or ip addr. Given the configuration used in the server file above, we should see that the tun0 interface is up and has been assigned address 172.16.10.1.
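
On a debian server that amounts to something like this (log path and address as configured above):

/etc/init.d/openvpn start
tail -f /var/log/openvpn.log      # look for "Initialization Sequence Completed"
ip addr show tun0                 # should show 172.16.10.1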

Now to set up the tunnel from the client end, we can run (as root) “openvpn client.conf” in one terminal window and we can then check in another window that the tun0 interface is up and has been assigned an appropriate IP address. In our case that turns out to be 172.16.10.6. It should now be possible to ping 172.16.10.1 from the client and 172.16.10.6 from the server.
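
That is, at the client end, something like:

openvpn client.conf               # as root, in one terminal
ip addr show tun0                 # in another terminal; 172.16.10.6 in this example
ping -c 3 172.16.10.1             # check we can reach the server end of the tunnel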

Connecting over SSH from Client to VPS.

We should already be doing this! I am assuming that all configuration on the VPS has been done over an SSH connection from “Client” in the diagram. But the SSH daemon on the VPS may be configured in such a way that it will not permit connections over the VPN from “Host A”. Check (with netstat) that the daemon is listening on all network addresses rather than just the public IP address assigned to eth0. If it is not, then you will need to modify the /etc/ssh/sshd_config file to ensure that “ListenAddress” is set to “0.0.0.0”. And if you limit connections to the SSH daemon with something like tcpwrappers, then check that the /etc/hosts.allow and /etc/hosts.deny files will actually permit connections to the daemon listening on 172.16.10.1. Once we are convinced that the server SSH configuration is correct, we can try a connection from the Host A to 172.16.10.1. If it all works, we can move on to configuring Host A.
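
The checks on the VPS therefore look something like the sketch below. The hosts.allow entry is purely illustrative; adjust it to your own policy.

netstat -plnt | grep ssh                   # confirm sshd listens on 0.0.0.0, not just the eth0 address
grep ListenAddress /etc/ssh/sshd_config    # should be 0.0.0.0, or commented out entirely
# if you use tcpwrappers, /etc/hosts.allow needs something like:
#   sshd: 172.16.10.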

Connect (using SSH again) over the openVPN tunnel from the VPS to Host A.

This is the crucial connection and is why we have gone to all this trouble in the first place. We must ensure that Host A is running an SSH daemon, configured to allow connections in over the tunnel from the server with address 172.16.10.1. So we need to make the same checks as we have undertaken on the VPS. Once we have this correctly configured we can connect over SSH from the server at 172.16.10.1 to Host A listening on 172.16.10.6.
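
So the end-to-end connection, with hypothetical user names and the usual placeholder for the VPS public address, is simply:

# from "Client", reach the VPS as usual
ssh user@XXX.XXX.XXX.XXX
# then, from the VPS, hop over the tunnel to Host A
ssh user@172.16.10.6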

Job done.

Addendum: 26 March 2013. For an alternative mechanism to achieve the same ends, see my later post describing ssh tunnelling.

Permanent link to this article: https://baldric.net/2012/10/27/using-openvpn-to-bypass-nat-firewalls/

book theft

Back in August 2011, I wrote about my preference for real books over the emerging electronic version. In that post I noted that Amazon had famously deleted copies of Orwell’s 1984 and Animal Farm purchased by some customers. It now appears that Amazon has gone even further and deleted the entire contents of a Norwegian customer’s kindle. In Cory Doctorow’s post he points out that:

Reading without surveillance, publishing without after-the-fact censorship, owning books without having to account for your ongoing use of them: these are rights that are older than copyright. They predate publishing. They are fundamentals that every bookseller, every publisher, every distributor, every reader, should desire. They are foundational to a free press and to a free society.

Now I’d be pretty angry if someone sold me a book, but later stole that book back on the grounds that I had somehow infringed some sales condition buried in a contract I had implicitly (and forcedly) entered into by the act of purchase. But I would be absolutely livid if, in the act of stealing back “their” book, they also removed the rest of my library. Amazon, however, seems to find this acceptable.

Doctorow went on to say that encrypting storage on mobile devices was much preferable to the option of remote deletion in case of loss. I agree. Unfortunately I also agree with his view that users will have difficulty with password protected encrypted filesystems, and I am completely with him when he says:

If it’s a choice between paving the way for tyranny and risking the loss of your digital life at the press of a button by some deceived customer service rep, and having to remember a password, I think the password is the way to go. The former works better, but the latter fails better.

My own kindle only has the DRM free content I originally uploaded (over a USB connection) after my wife bought it for me. And the wifi is resolutely turned off. But I don’t know why I bothered, because I still haven’t used it, despite taking it on holiday. And now, like Adrian Short, I never will.

Permanent link to this article: https://baldric.net/2012/10/23/book-theft/

grep -R doesn’t search amazon

Towards the end of last month, following the release of the unity lens in ubuntu which searches amazon, “akeane” posted a bug report on launchpad complaining that “grep -R doesn’t automatically search amazon”. In his first posting he said:

Dear “root owning” overlords,

When using grep recursively I only get local results:

grep -R fish_t /home/noob/fish_game/*

/home/noob/fish_game/fish.h: struct fish_t {
/home/noob/fish_game/fish.c: struct fish_t eric_the_fish;

or worse:

grep -R shark_t /home/noob/fish_game/*

/home/noob/fish_game/fish.h: struct shark_t {
/home/noob/fish_game/fish.c: struct shark_t_t mark_sw;

I declare this a bug for two reasons:

1. The output is boring.

2. The terminal has more than 2 lines!!! It’s an unefficient use of my screenspace.

I believe the reason for this is that the grep command only searches locally for things I am actually looking for, I kind of expect the results I get from my codebase and as such it removes any sense of mystery or something new and exciting to spice up my dull geek existence. That’s boring, grep -R should also search amazon, so I get more exciting results such as:

Shark Season 1 Starring Steven Eckholdt, Nora Dunn, Patrick Fabian, et al.

Amazon Instant Video to buy episodes: $1.99 to buy season: $34.99 ($1.59 per episode)

This bug report has been added to over the past few weeks, particularly after a reference was posted to reddit. As at today’s date, the bug is reported to “affect 169 people”.

The main thrust of akeane’s complaint is that the command line is lacking functionality available in the GUI. He finds that annoying and others agree. On 21 October, “rupa” wrote:

I’ve been following this bug with some interest. A lot of good ideas here, but there are some technical issues that haven’t been addressed.

Until all the common unix utilities can be upgraded to be affiliate-aware, common pipe operations might suffer from related amazon results. if you pipe grep into awk for example, without being able to be sure of the results, things can get messy.

I feel this issue can be solved by introducing a new ‘default’ file descriptor. In addition to ‘stdout’ and ‘stderr’, I propose a ‘stdaffiliate’ file descriptor, and amazon results can go there. This would allow us to see relevant results in the terminal, but keep them out of pipes unless purposely redirected.

Here I must disagree. In my view, all amazon (or related adware) results should be piped directly to /dev/null. Of course, stdaffiliate could be linked directly to /dev/null and all would be well.

Permanent link to this article: https://baldric.net/2012/10/20/grep-r-doesnt-search-amazon/

ubuntu is free and it always will be

But we may ask you for a contribution.

Canonical have made another move in what is beginning to look ever more like a monetary commercialisation of ubuntu. On 9 October 2012, they added a new page to the “download” section titled “Tell us what we should do more……and put your money where your mouth is ;)”. The page looks like this:

The sliders allow you to “target” your contribution to those areas of ubuntu which you feel deserve most reward (or conversely, you believe need most effort in improvement). The default is $2.00 for each of the eight radio button options (for a total of $16.00).

Now $16.00 is not a huge amount to pay for a linux distro of the maturity of ubuntu, but I’m not sure I like the way this is being done. Most distros offer a “donate” button somewhere on their website, but no other has placed it as prominently in the download process as canonical has chosen to do. I’m also a little bothered by the size and placement of the “Not now, take me to the download” option and I have a sneaking feeling that will become even less prominent over time.

Not surprisingly, some of the commentariat have taken great umbrage at this move (witness the comment over at El Reg of the form “Where is the option for ‘fix the known damn bugs and quit pissing around with GUI’?”) and I expect more hostility as and when users start fetching the new 12.10 release.

But an earlier move to monetise the ubuntu desktop worries me even more. Canonical’s link with Amazon through the ubuntu desktop search was, according to Mark Shuttleworth, perfectly sensible, because “the Home Lens of the Dash should let you find *anything* anywhere. Over time, we’ll make the Dash smarter and smarter, so you can just ask for whatever you want, and it will Just Work.” (So that’s alright then.) But the problem, which Shuttleworth clearly doesn’t understand, is that people don’t generally like having advertising targeted at them based on their search criteria. (cf. Google…..). What was worse, the search criteria were passed to Amazon in clear. Think about that.

I share the views of Paul Venezia over at Infoworld where he says:

“But the biggest problem I have with the Amazon debacle is another comment by Shuttleworth: “Don’t trust us? Erm, we have root. You do trust us with your data already.” That level of hubris from the founder of Ubuntu, in the face of what is clearly a bad idea badly implemented, should leave everyone with a bad taste in their mouth. If this idea can make it to the next Ubuntu release, then what other bad ideas are floating around? What’s next? Why should we maintain that trust?

So fine, Mr. Shuttleworth. You have root. But not on my box. Not anymore.”

Ubuntu is already in decline following the way unity was foisted on the userbase. And Canonical has been likened to Apple in the past. Things can only get worse for ubuntu from here on. Way past time to move on.

Permanent link to this article: https://baldric.net/2012/10/14/ubuntu-is-free-and-it-always-will-be/

password lunacy

One of my fixed term savings accounts matured at the end of last week. This means that the paltry “bonus” interest rate which made the account ever so slightly more attractive than the pathetic rates generally available 12 months ago now disappears and I am left facing a rate so far below inflation that I have contemplated just stuffing the money under my mattress. Rates on offer at the moment are pretty terrible all round, but I was certainly not going to leave the money where it was, so I decided to move it to a (possibly temporary) new home.

After checking around, I found a rate just about more attractive than my mattress and so set about opening the new account on-line. Bearing in mind that this account will be operated solely on-line and may hold a significant sum of money (well, not in my case, but it could) one would expect strong authentication mechanisms. I was therefore not reassured to be greeted by a sign up mechanism that asked for the following:

a password which must:

  • be between 8 and 10 characters in length;
  • contain at least one letter and one number;
  • not use common words or names such as “password”;
  • contain no special characters (e.g. £ or %).

(oh, and it is not case sensitive. That’s good then.)

Further, I am asked to provide:

  • a memorable date;
  • a memorable name; and
  • a memorable place.

I should note here that I initially failed the last hurdle because the place name I chose had fewer than the required 8 characters, and when I tried a replacement I found that I wasn’t allowed to use a place name with spaces in it (so something like “Reading” or “Ross on Wye” is unacceptable to this idiot system).

I haven’t tried yet (the account is in the process of being set up and I will receive details in the post) but from experience with other similar accounts, I guess that the log-on process will ask for my password, then challenge me to enter three characters drawn from one of my memorable date/name/place. Oh, and the whole process is secured by a 128bit SSL certificate.

My friend David wrote a blog piece a while ago about stupid password rules. The ones here are just unbelievable. Why must the password be limited to 8-10 characters? Why can’t I choose a long passphrase which fits my chosen algorithm (like David, I compute passwords according to a mechanism I have chosen which suits me). Why must it only be alphanumeric? And why for pity’s sake should it be case insensitive? Are they deliberately trying to make it easy to crack?
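
A rough back-of-the-envelope calculation shows why this matters (upper bounds only, assuming passwords are drawn randomly from the full permitted set):

# 10 characters from a 36-symbol set (case-folded letters plus digits)
awk 'BEGIN { printf "%.1f bits\n", 10 * log(36) / log(2) }'    # ~51.7 bits
# 10 characters from ~80 printable symbols (mixed case, digits and specials)
awk 'BEGIN { printf "%.1f bits\n", 10 * log(80) / log(2) }'    # ~63.2 bits

And of course real users won’t pick randomly, so the practical figures are far lower still.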

As for the last three requirements, what proportion of the population do you think are likely to choose their birthdate, mother’s maiden name and place of birth (unless of course they were born in Reading, or London, or York, or Glasgow, or Burton on Trent or…)?

Answers on a postcard please.

Permanent link to this article: https://baldric.net/2012/10/13/password-lunacy/

a positive response

Whenever my logs show evidence of unwanted behaviour I check what has happened and, if I decide there is obviously hostile activity coming from a particular address I will usually bang off an email to the abuse contact for the netblock in question. Most times I never hear a thing back though I occasionally get an automated response.

Today, after finding over 23,000 automated attempts to access the admin page of trivia I sent off my usual notification to the netblock owner (“Hey, spotted this coming from you, a bit annoying”). Within a couple of hours I got an automated acknowledgement asking me to authenticate myself by response. A couple of hours after that, I got a human response saying “We’ve dealt with it. Your address is now blocked”. I’ve never had that helpful a response before.

The ISP was Russian.

Permanent link to this article: https://baldric.net/2012/10/05/a-positive-response/

iptables firewall for servers

I paid for a new VPS to run tor this week. It is cheaper, and offers a higher bandwidth allowance than my existing tor server so I may yet close that one down – particularly as I recently had trouble with the exit policy on my existing server.

In setting up the new server, the first thing I did after the base installation of debian and the first apt-get update/upgrade was to install my default minimum iptables firewall ruleset. This ruleset simply locks down the server to accept inbound connections only to my SSH port and only from my home trusted network. All other connections are denied. I have a variety of different iptables rules depending upon the system (rules for headless servers are clearly different to those needed on desktops running X, for example). In reviewing my policy stance for this new server, I started comparing the rules I was using on other servers, both externally on the ‘net and internally on my LAN. I found I was inconsistent. Worse, I was running multiple rulesets with no clear documentation and no obvious commonality where the rules should have been consistent, or any explanation of the differences. In short I was being lazy, but in doing so I was actually making things more difficult for myself because a) I was reinventing rulesets each time I built a server, and b) the lack of documentation and consistency meant that checking the logic of the rules was unnecessarily time consuming.

To add to my woes, I noted that in one or two cases I was not even filtering outbound traffic properly. This is a bad thing (TM), but not untypical of the approach I have often seen used elsewhere. Indeed, a quick check around the web will show that most sites offering advice about iptables rulesets concentrate only on the input chain of the filter table and ignore forwarding and output. To be fair, many sites discussing iptables seem to assume that IP forwarding is turned off in the kernel (or at least recommend that it should be) but very few that I could find even consider output filtering.

In my view, output filtering is almost as important as input filtering, if not equally so. Consider for example how most system compromises occur these days. Gone are the days when systems were compromised by remote attacks on vulnerable services listening on ports open to the outside world. Today, systems are compromised by malicious software running locally which calls out to internet based command and control or staging servers. That malicious software initially reaches the desktop through email or web browsing activity. This “first stage” malware is often small, aimed at exploiting a very specific (and usually completely unpatched) vulnerability and is unnoticed by the unsuspecting desktop user. The first stage malware will then call out to a server (over http or https usually) to both register its presence and obtain the next stage malware. That next stage will give the attacker greater functionality and persistence on the compromised system. It is the almost ubiquitous ability of corporate desktops to connect to any webserver in the world that has led to the scale of compromise we now routinely see.

But does output filtering matter on a server? And does it really matter when that server is running linux and not some other proprietary operating system? Actually, yes, it matters. And it matters regardless of the operating system. There is often a disconcerting smugness from FOSS users that “our software is more secure than that other stuff – we don’t need to worry”. We do need to worry. And as good net citizens we should do whatever we can to ensure that any failures on our part do not impact badly on others.

I’m afraid I was not being a good net citizen. I was being too lax in places.

If your linux server is compromised and your filtering is inadequate, or non-existent, then you make the attacker’s job of obtaining additional tools easy. Additionally, you run the risk of your server being used to attack others because you have failed to prevent outbound malicious activity, from port scanning to DoS, to email spamming, to running IRC or other services he wants on your server (for which you pay the bills). Of course if the attacker has root on your box, no amount of iptables filtering is going to protect you. He will simply change the rules. But if he (or she) has not yet gained root, and his privilege escalation depends upon access to the outside world, then your filters may delay him enough to give you time to take appropriate recovery action. Not guaranteed of course, but at least you will have tried.

So how can your server be compromised? Well, if you get your input filtering wrong and you run a vulnerable service, you could be taken over by a ‘bot. There are innumerable ‘bots out there routinely scanning for services with known vulnerabilities. If you don’t believe that, try leaving your SSH port open to the world on the default port number and watch your logs. Fortunately for us, most distros these days ship with the minimum of services enabled by default, often not even SSH. But how often have you turned on a service simply to try something new? And how often did you review your iptables rules at the same time? And have you ever used wget to pull down some software from a server outside your distro’s repository? And did you then bother to check the MD5 sum on that software? Are you even sure you know fully what that software does? Do you routinely su to root to run software simply because the permissions require that? Do you have X forwarding turned on? Have you ever run X software on your server (full disclosure – I have)? Ever run a browser on that? In the corporate world I have even seen sysadmins logged in to servers which were running a full desktop suite. That way lies madness.

Believe me, there are innumerable ways your server could become compromised. What you need to do is minimise the chances of that happening in the first place, and mitigate the impact if it does happen. Which brings me back to iptables and my configuration.

The VM running trivia is also my mailserver. So this server has the following services running:

  • a mail server listening on port 25;
  • an http/https server listening on ports 80 and 443;
  • my SSH server listening on a non standard port;
  • an IMAPS/POP3S server listening on ports 993 and 995.

My tails mirror only has port 80 and my non-standard SSH port open, my tor server has ports 80, 9001 and my non-standard SSH port open, and of course some of my internal LAN servers listen on ports such as 53, 80, 443, 2049 (and even occasionally on 139 and 445 when I decide I need to play with samba, horrible though that is). I guess this is not an unusual mix.

My point here though, is that not all of those ports need to be accessible to all network addresses. On my LAN, none of them need to be reachable from anywhere other than my internal selected RFC1918 addresses. My public servers only need to be reachable over SSH from my LAN (if I need to reach one of them when I am out, I can do so from a VPN back into my LAN) and given that my public servers are on different networks, they in turn do not need to reach the same DNS servers or distro repositories (one of my ISPs runs their own distro mirror. I trust that. Should I?). Whilst inevitably the iptables rules for each of these servers need to be different, the basic rule configuration should really be the same (for example, all should have a default drop policy, none need allow inbound connections to any non-existent service, none need allow broadcasts, none need access to anything other than named DNS servers, or NTP servers etc.) so that I am sure it does what I think it should do. My rules didn’t conform to that sort of approach. They do now.

Having spent some time considering my policy stance, I decided that what I needed was a single iptables script that could be modified quite simply, and clearly, in a header which stated the name of the server, the ports it needed open or which it needed access to and the addresses of any servers which it trusted or it needed access to. This turned out to be harder to implement than I at first thought it should be.
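
What I ended up with is a script driven entirely by a block of variables at the top, along the lines of the simplified sketch below (the names and addresses are placeholders, not the values I actually use):

#!/bin/sh
# ---- per-server header --------------------------------------------
IPTABLES=/sbin/iptables
OPEN_TCP="25 80 443 993 995"            # services offered to the world
SSH_PORT="2222"                         # non-standard SSH port (placeholder)
TRUSTED_NET="192.0.2.0/24"              # network allowed to reach SSH
DNS_SERVERS="192.0.2.53 192.0.2.54"     # the only resolvers we talk to
# ---- default policy: flush, then drop everything -------------------
$IPTABLES -F
$IPTABLES -P INPUT DROP
$IPTABLES -P FORWARD DROP
$IPTABLES -P OUTPUT DROP
# ---- inbound: established traffic, public services, SSH from home --
$IPTABLES -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
for port in $OPEN_TCP; do
    $IPTABLES -A INPUT -p tcp --dport $port -m state --state NEW -j ACCEPT
done
$IPTABLES -A INPUT -p tcp -s $TRUSTED_NET --dport $SSH_PORT -m state --state NEW -j ACCEPT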

Consider again this server. It should be possible to nail it down so that it only allows inbound new or established connections to the ports listed and only allows outbound established connections to those inbound. Further, it should not call out to any servers other than my DNS/NTP and distro repositories. Easy. But not so. Mail is awkward for example because we have to cater for inbound to port 25 from anywhere as well as outbound to port 25 anywhere. That feels a bit lax to me, but it is necessary unless we connect only to our ISP’s mailserver as a relay. Worse, as I discovered when I first applied my new tight policy, my wordpress installation slowed to a crawl in certain circumstances. Here it transpired that I had forgotten that I run the akismet plugin which needs access to four akismet servers (Question. Do I need to continue to run akismet? What are the costs/benefits?) It is conceivable that other plugins will have similar requirements. I also noticed that I had over thirty entries for rpc servers in my wordpress “Update Services” settings (this lists rpc servers you wish to automatically notify about posts/updates on your blog). Of course WP was attempting to reach those servers and failing. So I found myself adding exceptions to an initially simple rulebase. I don’t like that. And what if the IP addresses of those servers change?

So I actually ended up with two possible policy stances, which I called “tight” and “loose”. The first attempts to limit all access to known services and servers (with the obvious exception of allowing inbound to public services). The second takes a more permissive stance in that it recognises that it may not be possible to list all the servers we must allow connection to, but limits those connections to particular services (so, for example, whilst it will allow outbound connections only to DNS on one or two servers, it will allow outbound new connections to any server on, say, port 80). I actually don’t like this, for fairly obvious reasons, but it is at least more restrictive than the usual “allow anything to anywhere”.
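
The difference between the two stances is easiest to see in the outbound rules. Roughly, and reusing the placeholder variables from the sketch above:

# "tight": outbound DNS only to the named resolvers
for ns in $DNS_SERVERS; do
    $IPTABLES -A OUTPUT -p udp -d $ns --dport 53 -m state --state NEW -j ACCEPT
done
# "loose": outbound http to anywhere, for the plugins whose addresses I cannot pin down
$IPTABLES -A OUTPUT -p tcp --dport 80 -m state --state NEW -j ACCEPT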

Others may find these scripts useful so I have posted them here: iptables.tight.sh and iptables.loose.sh. Since the scripts must be run at boot time they should be run out of one of your boot run control scripts (such as /etc/init.d/rc.local) or at network initialisation as a script in /etc/network/if-up.d (see the stub sketched below). Before doing so, however, I strongly advise you to test them on a VM locally, or at least on a machine to which you have console access. Locking yourself out of a remote VM can be embarrassing.
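
The if-up.d route needs little more than a stub like this (the path to the ruleset script is, again, just an example):

#!/bin/sh
# saved as /etc/network/if-up.d/iptables and made executable
[ "$IFACE" = "eth0" ] || exit 0    # only fire when the primary interface comes up
exec /etc/iptables.tight.sh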

By way of explanation of the policy stances taken, I have posted separate descriptions of each at tight and loose.

Comments, feedback, suggestions for improvement or criticism all welcome.

Permanent link to this article: https://baldric.net/2012/09/09/iptables-firewall-for-servers/

Neil Armstrong

I suppose it is inevitable that your heroes die as you get older. I was just finishing my O levels when Armstrong took his “one small step”. I can remember clearly looking up at the moon on that day in 1969 in awe that there were two human beings on that satellite at the very moment I was watching it.

To a boy weaned on a diet of pulp SF where lunar landings were commonplace and the Grey Lensman fought the Boskonian criminals across the Galaxy, Armstrong, Aldrin and Collins were the real deal. True heroes, breaking boundaries and setting new frontiers. Now, as a middle aged man, I know that all too often your heroes can turn out to have clay feet. Not so with Neil Armstrong. A self professed “nerdy engineer”, he remains to me, as to millions of others of my generation, an inspiration.

He died on saturday 25 August 2012 at the age of 82.

Permanent link to this article: https://baldric.net/2012/08/28/neil-armstrong/

my russian fanbase

My readership in Russia appears to be growing. For some reason I seem to be getting a lot of hits from Russian domains on my posts and pages about egroupware. And my referer logs show a lot of inbound connections from domains in the .ru TLD. Those websites I have checked appear to be technical, or partly technical, bulletin board type sites equivalent to the scream over here. Intriguingly, counterize, which I have recently updated, shows the top three countries hitting trivia as USA, China and the Russian Federation, in that order. The UK is fourth after the Netherlands.

Today I received my first comment (at least I think it is a legitimate comment rather than spam) from a russian speaker. That comment, on my old post “from russia with love“, translates as “Accidentally stumbled on your blog. Now I will always see. I hope not disappoint and further / Thanks, good article. Subscribed.”

Thank you притчи онлайн. Enjoy.

Permanent link to this article: https://baldric.net/2012/08/28/my-russian-fanbase/

tails has not been hacked

I run a tails mirror on one of my VMs. Earlier this week there was a flurry of anxious comment on the tails forum suggesting that the service had been “hacked”. Evidence pleaded in support of that theory included the facts that file timestamps on some of the tails files varied across mirrors, one of the mirrors resolved to a Pirate Bay mirror, and the tails signing key had apparently changed.

Well, none of that is necessarily proof of hostile behaviour. In fact, good old cock-up wins out over conspiracy again. As can be seen from the tails admin comment over at the forum, human error (followed by panic reaction) is to blame.

I hold my hand up to a mistake which contributed to the problem. My rsync to the tails repository omitted the “-t” switch which would have preserved file modification times. In mitigation, I plead stupidity (and the fact that the tails mirror documentation also omitted that switch…..).
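
The fix, for anyone else running a mirror from cron, is simply to make sure the timestamps are preserved. Something like the line below, where the rsync source and local path are illustrative rather than the exact ones I use:

rsync -rlptv --delete rsync://rsync.example.org/tails/ /var/www/tails/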

Now fixed.

Permanent link to this article: https://baldric.net/2012/08/23/tails-has-not-been-hacked/

you are at 2001:db8::ff00:42:8329.

Verity Stob is having trouble getting a new IP address. What with the IPv4 address exhaustion problem, it would seem that the only alternative is IPv6. This is causing Verity some grief.

Stress brings out my unoriginal streak. I said: ‘Where am I?’

‘You are at 2001:db8::ff00:42:8329.’

‘What?’

‘Your new IP address at 2001:db8::ff00:42:8329.’ He had the rare gift of speaking hex and punctuation. ‘You wanted a new static IP address. Your government has arranged that you should get one.’

‘That… That’s not an IP address. That’s a malformed MAC address with extra rivets. You can’t… Ow! Stop! What are you injecting into me?’

‘Don’t worry about that Ms Stob. It’s a little something the Chinese have come up with. It suppresses the body’s natural resistance to incompletely established international standards. It’s quite safe – approved by NICE for treatment of both acute and chronic Luddism. But once more enough of the chitchat. We have some reeducation to do. Oddjob: the Powerpoint, please. Now. The 128 bits of the IP address are divided into a subnet prefix and a unique device ID…’

Absolutely delightful. A “malformed MAC address with extra rivets”. Sheer poetry.

Permanent link to this article: https://baldric.net/2012/08/21/you-are-at-2001db8ff00428329/

debian on a DNS-320

Back in 2009 I bought, on impulse, a D-Link DNS-313 thinking it was sufficiently similar to the 323 to enable me to install debian with some ease. As I noted at the time, however, I’d made a slight mistake and then had to settle for a compromise installation from a tarball rather than a full native install.

Recently I bought a slightly bigger brother to the 313 in the shape of a DNS-320 ShareCenter. Again, this box is not quite the same spec as the 323 (and hence is slightly less easy to flash with a debian installation) but at under £55.00 (albeit with no disks) it was too good a bargain to miss, particularly since I already had one spare terabyte SATA disk. What I hadn’t banked on, of course, was the terrible price I would have to pay for a second disk, but hey, I wanted to be able to set up RAID because I planned on making this new toy my main backup NAS.

Before parting with my money, I checked carefully that I would indeed be able to install my preferred OS. The 320 has an 800 MHz Marvell 88F6281 CPU on the Kirkwood family of SoC (so is closely related to the sheevaplug) and 128 MB RAM. Unfortunately, Martin Michlmayr’s site (which would normally be my first port of call) has nothing on the 320, but there are plenty of other sites offering advice on debian installation on this particular NAS. Martin does provide detailed instructions for the 323 of course, but that is based on the older Orion SoC.

D-Link actually provides a complete build environment (available on its German ftp server) that lets you build your own firmware image. They also provide a rather useful build of debian squeeze on their Polish ftp site (strangely, nothing so useful on the UK ftp site though).

The first and most comprehensive set of information I found was on the 320 wiki at kood.org. Apart from being a valuable technical resource itself, the site points to useful “howtos” on other sites such as Jamie Lentin’s excellent site which gives detailed instructions for building and installing debian images for both the DNS-320 and the 325, and the NAS Tweaks site which introduced me to the very useful “fonz fun_plug” concept.

The idea behind the fonz is to allow installation of non-standard software on a range of NAS devices. To quote from the NAS tweaks site tutorial page:

The Firmwares of various NAS-Devices includes a very interesting bonus: the user can execute a script (file) named “fun_plug” when the OS is booted. Unlike all the other Linux software which is loaded when the NAS boots, this file is located on Volume_1 of the hard disk rather than within the flash memory. This means the user can easily and safely modify the file because the contents of the flash memory is not changed. If you delete the fun_plug file (see here for instructions), or replace your hard disk, the modification is gone.

Fun_plug allows the user to start additional programs and tools on the NAS. A Berlin-based developer named “Fonz” created a package called “ffp” (Fonz fun_plug), which includes the script and some extra software which can be invoked by fun_plug.

Installation of fun_plug is easy and takes only a few steps. These steps should be performed carefully, as they depend on typed commands and running with “root” privileges.

What this means in practice is that the user can effectively use fun_plug to install a complete OS image (such as debian) into a chrooted environment on the NAS. This has the advantage of being easily reversible: you don’t have to dump the (sometimes useful) original firmware, and you don’t run much risk of bricking your device. So whilst the Jamie Lentin tutorial appealed to the techy in me, the pragmatist said that fun_plug looked a more interesting first approach, and the Fonz’s script in particular looked very useful. And, indeed, so it turned out.
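
The rough shape of that approach, once ffp gives you a usable shell and debootstrap on the NAS, is sketched below. Mount point, mirror and release are illustrative only; follow the kood.org wiki or Jamie Lentin’s instructions for the real detail.

# from an ffp shell on the NAS, as root
debootstrap --arch=armel squeeze /mnt/HD/HD_a2/debian http://ftp.debian.org/debian
chroot /mnt/HD/HD_a2/debian /bin/bash
# then set up apt sources, locales and any daemons you want inside the chroot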

I installed fun_plug 0.7 for ARM EABI and now have a cheap (if rather noisy to be honest) 1TB RAID1 NAS which both retains the D-Link firmware and gives me the additional functionality offered by a debian 6 installation. Muting that fan is now high on my ToDo list.

Permanent link to this article: https://baldric.net/2012/08/21/debian-on-a-dns-320/

what every iphone needs

I stumbled across this site today when following a link from an email on the topic of reflashing an android tablet with CyanogenMod. The guy who sent the email (to the ALUG list) had bought a generic 7″ android tablet. He was considering reflashing with CM9 and was asking for advice/guidance/gotchas or whatever before doing so.

The tablet actually looks quite reasonable, if a little pricey, though Mark did say that he hadn’t paid anywhere near the advertised price. But a casual wander around the rest of the Dee-Sign site led me to this page under the “cool stuff” category.

Forgive me if I weep.

Permanent link to this article: https://baldric.net/2012/08/20/what-every-iphone-needs/

the stainless steel rat bows out

I read today that Harry Harrison has died at the age of 87. Harrison was one of the greats of the SF glory years. Alongside Heinlein (whose work he spoofed mercilessly), Philip Dick, Asimov, van Vogt, Ray Bradbury, Bob Sheckley, Doc Smith and a host of others I grew up with in the 60s and early 70s, Slippery Jim DeGriz was a wonderful companion.

Goodbye old friend.

Permanent link to this article: https://baldric.net/2012/08/15/the-stainless-steel-rat-bows-out/

oops

An attempted quick search this morning using ixquick over tor drew a blank. In fact I hit a brick wall as the screenshot below will show.

The commentary provided by ixquick is self-explanatory (click the image if you have difficulty reading the snapshot), but I can’t help feeling that this problem should have been foreseen and dealt with in advance. After all, google has long reacted badly to tor based searches so it is not as if the volume issue could not be predicted. And tor users tend to react badly to any “unexpected” results from tor usage. We are paranoid enough as it is……

Fortunately, a refresh, using another exit node cured the temporary glitch and I got the results I wanted.

Permanent link to this article: https://baldric.net/2012/08/08/oops/

outlook goes public – linus approves

Microsoft has (re-)launched its free public email service (previously branded “hotmail” and “windows live”) under the brand “outlook”. Outlook has been Microsoft’s email client on the corporate desktop for many years now, so they may be hoping that the new look email product will benefit from the existing brand’s goodwill.

However, I noticed from a posting on El Reg today that whilst Microsoft were breathlessly boasting “One million people have signed up for a new, modern email experience at Outlook.com. Thanks!” they were not apparently being careful about differentiating between people, and accounts. The Reg article notes that some new users have signed up to multiple accounts and/or accounts with unlikely names (such as steveballmer@outlook.com and satan@outlook.com). So I moseyed on over to the new service to take a look.

I am now the proud owner of the email account “linus-torvalds@outlook.com”.

I’m pretty sure that Linus himself would not want that particular address, but if he does, and Microsoft don’t delete it in a clean-up operation, then he is welcome to it. Personally I shan’t be using the service.

Permanent link to this article: https://baldric.net/2012/08/02/outlook-goes-public-linus-approves/

avoiding accidental google

Even though I set my default search engine to anything but google (usually ixquick, but sometimes its sister engine at startpage) I have occasionally been caught out by firefox’s helpful attempts to intervene if I mistakenly enter a search term in the URL navigation field (or just hit return too early). Firefox’s default action in such cases is to direct a search to google. This is not helpful to someone who actively wishes to avoid that.

The way to prevent this is to edit the firefox configuration thus:

– go to “about:config” in the navigation bar
– now search for the string “keyword.URL”
– right click on the returned option and select “modify”
– now enter “https://ixquick.com/do/search/?q=” and accept.

Now all mistakes will be sent to ixquick as searches and not to google.
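
The same change can also be made by hand in the profile’s user.js if you prefer to keep it somewhere editable. A sketch, with the profile directory name being whatever yours happens to be:

echo 'user_pref("keyword.URL", "https://ixquick.com/do/search/?q=");' >> ~/.mozilla/firefox/xxxxxxxx.default/user.js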

Permanent link to this article: https://baldric.net/2012/07/31/avoiding-accidental-google/

too much bling

Avid readers will note that I have reverted to a simpler, two column, layout. Posts and pages are to the left, and additional navigation links are to the right. Some of my friends commented that the three column layout, with several images in the outer columns was distracting (actually, one said “bleah!”). I left it for a while, but then noticed something odd in my stats. My hit rate has gone down, way down, since I changed the layout back in March. In fact, my current hit rate is lower than half my original peak.

So, I conclude that I had indeed added “too much bling” and I have now cleaned up the layout. Let’s see if my readership approves (and improves).

Permanent link to this article: https://baldric.net/2012/07/28/too-much-bling/

coercion

David commented on my gpg upgrade post saying: “How does one ensure that they are not coerced into signing a transition statement with a new (but compromised) key?”.

Well, you can never be sure that I can’t be coerced; and, for that matter, neither can I:

My thanks as always to xkcd

Permanent link to this article: https://baldric.net/2012/07/24/coercion/

the accidental stupidity of good intentions

For some years now I have used what used to be the freecycle system to dispose of unwanted, but otherwise useful, items from my home. In return I have sometimes used the same mechanism to get hold of things like books which someone else wishes to get rid of. A couple of years or so ago, the UK freecycle organisation split from the US parent and was renamed “freegle”. Naming and politics aside (and there was some nasty politicking going on), the purpose and intentions of the UK freegle organisation continued to be honourable and useful. A lot of goods and material that would otherwise have ended up in landfill have been successfully recycled.

To date, UK freegle has been based on yahoo groups. Members subscribe to a freegle group covering their area and then send/receive email alerts about items offered or wanted. But yahoo groups is not an ideal mechanism for an organisation such as freegle, so alternatives are popping up – usually web-based and often bespoke. My local group recently received lottery funding to help it establish just such a website.

Following an email from the group’s moderators about the intended change (and the imminent closure of the old yahoo group) I signed up to the new system. Having done so, I set about editing my preferences and settings. One of the required settings is “postcode”, which the system uses, together with an (editable) radial distance in miles, to determine which offers/requests to email to you. Alongside the required postcode is the option to include your full address. Entering your full address obviates the need to add it later when giving other freeglers details of how to get to your location to pick up offers. Address details are apparently necessary because all interaction between freeglers now takes place (and is moderated) via the website. Unlike the old system, in which freeglers’ email addresses were exposed, users of the new system only ever see the freegle group address. (This, incidentally, is a “good thing” (TM).)

However, one of the other settings on the preferences page is a checkbox marked “On holiday”.

Forgive me for thinking that this might not be a good idea.

Permanent link to this article: https://baldric.net/2012/07/22/the-accidental-stupidity-of-good-intentions/

gpg key upgrade

Following a recent discussion about gpg key signing on my local linux user group email list, one of the members pointed out that several of us (myself included) were using rather old 1024-bit DSA GPG keys with SHA-1 hashes. He recommended that such users should upgrade to keys with a minimum size of 2048 bits and a hash from the SHA-2 family (say SHA256).

I believe he is right. That is good advice and it is long past time that I upgraded. So, I have now created a new default GPG key of 4096 bits – that should last for a while. However, that leaves the problem of how to migrate from the old key to the new key when the old key has been in circulation since at least 2004.
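(As an aside, creating the new key itself is the easy part. With GnuPG 1.4 (the version I run) it is just the interactive generator, roughly as follows:

gpg --gen-key
# select the RSA and RSA option when asked for the key type,
# enter a keysize of 4096 bits, then set an expiry date and
# your name/email address at the remaining prompts

The trickier part is the transition.)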

Fortunately, Daniel Kahn Gillmor (dkg) has published a rather nice and useful how-to on his debian-administration blog. I used that guide, supplemented with some further guidance on the apache site, to come up with a transition plan. If you wish to contact me securely in future, then please use my new GPG key. My signed transition statement is here. A copy is given below. That transition statement is signed with both my old and new keys so that people who have my old key may be sure (or as sure as they can be, if they presume that my old key has not been compromised) that the new key is valid and a true means of secure communication with me.

(BTW, the way to sign a document with two keys is as follows:

gpg --clearsign --local-user $KEYID-1 --local-user $KEYID-2 filename

where $KEYID-1 and $KEYID-2 are the eight-digit IDs of the old and new keys. This is not well documented in the GPG manual.)
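By way of example, the command I ran for the transition statement below looked something like this (the filename is illustrative):

# clearsign with both the old DSA key and the new RSA key
gpg --clearsign --local-user 10927423 --local-user 5BADD312 transition-statement.txt

The clearsigned output lands in transition-statement.txt.asc and, because the two signatures ended up using different hash algorithms, the armoured header lists both (hence the “Hash: SHA1,SHA256” line in the statement).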

Copy of transition statement

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1,SHA256

GPG transition statement – Friday 20 July 2012

I am moving my preferred GPG key from an old 1024-bit DSA key to a
new 4096-bit RSA key.

The old key will continue to be valid for some time, but I prefer all
new secure correspondence to be encrypted with the new key. I will be
making all new signatures with the new key from today.

This transition was aided by the excellent on-line how-to at:

https://www.debian-administration.org/users/dkg/weblog/48

This message is signed by both keys to certify the transition.

The old key was:

pub 1024D/10927423 2004-07-15

Key fingerprint = E8D2 8882 F7AE DEB7 B2AA 9407 B9EA 82CC 1092 7423

And the new key is:

pub 4096R/5BADD312 2012-07-20

Key fingerprint = FC23 3338 F664 5E66 876B 72C0 0A1F E60B 5BAD D312

I have signed my new key with the old key. You may get a copy of my
new key from my server at rlogin.net/keys/micks-new-public-key.asc

To fetch the new key, you can get it with:

wget -q -O- https://rlogin.net/keys/micks-new-public-key.asc

Or, to fetch my new key from a public key server, you can simply do:

gpg --keyserver keys.gnupg.net --recv-key 5BADD312

If you already have my old key, you can now verify that the new key is
signed by the old one:

gpg --check-sigs 5BADD312

If you don’t already know my old key, or you just want to be double
extra paranoid, you can check the fingerprint against the one above:

gpg --fingerprint 5BADD312

Please let me know if you have any trouble with this transition. If you
are also still using old 1024-bit DSA keys, you too may wish to consider
migrating your old key to a stronger version.

Best

Mick
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.11 (GNU/Linux)

iEYEARECAAYFAlAJms8ACgkQueqCzBCSdCM90wCeMo25AmWEdEEztjY635LPFtxB
qcMAn17NUSPZLPZAmknloWacWUsXFGndiQIcBAEBCAAGBQJQCZrPAAoJEAof5gtb
rdMSDyUQALIMRKcIZgaOqMPBA2o5juJTu0W9mICEaNMl4Cnf7kw5zZvwc+vu/9N0
pTYwgc0UmrG3Uy0rzBU53jf6coAqu3g2rqfWQ4+Ns03gGfdFCTYlKP0IlpntlVba
TxAaw52ScCLKOKCJK7atXxF0PvQCKK9wATbT1HgHZ6dHBzejEn4X308UzkgEzQ/l
Z1FQtgwkZrVd2QQlYBn6PxYrFqaH0rEBSKJWskAY05IalIwRXD6Pj7oq8BdwhU4t
cFK41afY74ZgAhvNzYs7Ge8Dk3Izj1RtN7nRnESD0ZZAFUG9M8smolE9f677xeOR
TRgVKEBhDN3JvKo+wuxgxsCpB1DD/W2yIs+b7Y140GvPvwGZjcF50tEP0KhZMiIK
/W1UxwNmQhDmTrhilL4o6efVaI1EZgyn6sdyycimrQ+0zm1o+TSntiF2o5Ulj+uY
JnME9WfnNWcfS9ezp6ZQ03YkhW5PVGaWg9KxPGN2hWOKtCBHJf1E5xOS5zy4kgc8
C9HovVp0N46MZleTBYE/v5JdJ5/yrktcSfYGA6jeOaBDHFb6qNkGcMQVlpNszKid
7a5Q4/rJ/Z75BMoVBaicNwUZpqHvCxDHMXjuw44RzG/QWca6ljxmSF/8ZFPggqJP
uTv3wKd0Y8i3DjWmAX/ps4viQEeDal7w5lqoJBA6YulQGnwqFAl5
=DzN2
-----END PGP SIGNATURE-----

Permanent link to this article: https://baldric.net/2012/07/20/gpg-key-upgrade/

RBS meltdown

I’ve been away on holiday during one of the most public, and potentially most expensive, IT screwups in some time. By now everyone will be aware of the meltdown in RBS/NatWest/Ulster Bank systems. Since my return, I’ve been catching up on some of the on-line commentary and analysis. I particularly liked this comment on El Reg in an article discussing how management might address the root causes of the fault:

“the decision to put the cheapest person they could find anywhere in the world in such a responsible position was a bad idea.”

That reminded me of Alan Shepard’s famous quote about the early US space programme:

“It’s a very sobering feeling to be up in space and realize that one’s safety factor was determined by the lowest bidder on a government contract.”

I guess some people are currently looking for new jobs.

Permanent link to this article: https://baldric.net/2012/07/07/rbs-meltdown/