beware the zombie apocalypse

Tom Scott is a young educational entertainer who publishes fairly regularly on YouTube. Back in mid-2004, whilst still a linguistics student at York, he managed to upset both the Home Office and the Cabinet Office by publishing a Department of Vague Paranoia website spoofing the rather po-faced official “Preparing for Emergencies” site. Tom’s website is still in operation – unlike the official one. I guess Tom never aspired to a career in the Civil Service.

I mention Tom here because I have just discovered his YouTube channel called “The Basics” in which he addresses some of the complexities of computer science in ways which are accessible to a wide audience. In particular, he has a very good exposition of why encryption backdoors are not a terribly good idea. Take a look at the clip below:

I commend that clip to anyone who still adheres to the kind of “magic thinking” that leads them to believe that the laws of mathematics can be ignored, or that only the “good” guys (whoever they are) would ever take advantage of crippled encryption.

Permanent link to this article:

have I been pwned?

Well, I don’t think so. But for a while I was not entirely sure.

Following the move last November of trivia from a VM in UK2’s datacentre in London to our new home on a faster VM on ITLDC’s network, I have been making a variety of minor changes and doing some essential housework. One of the biggest changes of course (fortunately for me as it turns out) was the complete separation of my two main services (mail and web) onto different VMs in different countries. My mailserver is now housed in Nuremberg where I have made some additional changes (for example I now run opendkim on it). This VM in Prague now houses just my webserver and of course is home to this blog.

Following the configuration changes which I noted in my last post, I spent a short while checking my web server logs – particularly the error log. That log shows a variety of messages such as:

SSL: 1 error:14209102:SSL routines:tls_early_post_process_client_hello:unsupported protocol
SSL: 1 error:1420918C:SSL routines:tls_early_post_process_client_hello:version too low
SSL: 1 error:1417A0C1:SSL routines:tls_post_process_client_hello:no shared cipher

which indicate browsers or other clients attempting to connect using either protocols I no longer support (such as SSLv3) or TLS versions lower than those I support server side. This is expected behaviour, and the frequency of such log entries should decline over time as clients out there catch up with current acceptable security standards. I know from long experience that there are still a huge number of old, outdated browsers in use – possibly on equally old and outdated platforms such as Android 4, or Windows XP (yes, it still exists). As a cross check I started looking through my access logs and sure enough I found user agent strings like:

“Mozilla/5.0 (Linux; U; Android 4.4.2; en-gb; SM-T310 Build/KOT49H)”

(almost certainly the default browser on an old Android tablet) and

“Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1) Opera 7.54 [en]”

(probably Opera 7 on Windows XP).

So, no real surprise then that clients like that should have problems negotiating a secure connection with my server. However, this is where things started to look a little weird.

As I was scanning through my access logs I noticed entries like the following (client side IP addresses deliberately obfuscated with RFC1918 entries):

– [23/Dec/2019:19:27:52 +0100] “GET / HTTP/1.1” 200 169284 “-” “Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 1.1.4322; .NET CLR 2.0.50728)”
– [23/Dec/2019:21:20:52 +0100] “GET / HTTP/1.1” 200 155411 “-” “Mozilla/5.0 (Windows; U; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 2.0.50727)”
– [23/Dec/2019:21:32:17 +0100] “GET / HTTP/1.1” 200 169272 “-” “Mozilla/5.0 (Linux; U; Android 4.1.2; ja-jp; SC-06D Build/JZO54K) AppleWebKit/534.30 (KHTML, like Gecko) Version/4.0 Mobile Safari/534.30”
– [24/Dec/2019:15:32:47 +0100] “GET / HTTP/1.1” 200 169257 “-” “Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) HeadlessChrome/67.0.3396.99 Safari/537.36”

Now that says that the client connecting from the address in the first field is asking for the root of a webserver called “”. I don’t own that domain, I don’t host that domain, and the only way that sort of entry could possibly appear in my logs is if someone, somewhere, who /does/ own that domain has made a DNS A record pointing to my IP address. Well, actually there is another scenario. It is entirely possible that someone has made a local DNS entry (such as in a local hosts file, or on a local DNS server using, say, DNSMasq or Unbound) pointing to my IP address. I do exactly that sort of thing myself when I move webs between servers so that I can test the entry on the new server before switching my DNS. However, given the sheer number of the log entries (numbering in the thousands!) from multiple different source addresses it seemed to me unlikely that this latter scenario was correct. So, someone had a DNS entry pointing to me that I didn’t know about.

Having found one odd host name I did a quick scan for others (awk '{ print $2 }' accesslogfiles | grep -v mydomains) and found around fifteen more. Fortunately, with the exception of just one other domain name, none of the others (for which there were mercifully few connections) pointed to my address at the time I checked the DNS (late January). I assumed that those domains were also potentially hostile and had now moved elsewhere, but some of course could just have been accidents – it can happen (be careful of ascribing to malice that which could be simple stupidity).

I decided to concentrate on the main domain appearing in my logs and did a bit more research.

Firstly, the DNS:

mick@shed ~ $ dig +ttlunits -t a


So there is an A record pointing to me – and it has a suspiciously low TTL value (meaning the owner can change it quickly).

What about the nameserver(s)?

mick@shed ~ $ dig +ttlunits -t ns


The standard TTL value for an A record is about 1 day (or longer) so that nameservers can cache the answer to the question: where is “”? for a reasonable length of time before having to ask again. Most people would only use a very short TTL (here it is 10 minutes) if they wanted to be able to move the domain name queried to a new host very quickly. There are legitimate reasons for this: for example, if you manage a server which you know is shortly to move to a new address and you wish to minimise the lag in the DNS. However, “Bad Guys” (TM) are known to do this sort of thing when they point to compromised hosts on the net. Said “Bad Guys” will also often use an obviously spoofed domain (this one is meant to look like a genuine Microsoft domain for the MS Hub) in phishing attacks.
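To make the TTL inspection concrete: the live query is shown as a comment (with a hypothetical domain, since the real one is redacted above), and the small awk converter below it runs offline:

```shell
# Live query (hypothetical domain name; needs network access):
#   dig +noall +answer -t a suspicious-domain.example
# The second field of each answer line is the TTL in seconds.
# A ten-minute TTL, like the one seen above, converts as:
echo 600 | awk '{ printf "%dm%ds\n", $1 / 60, $1 % 60 }'
# prints "10m0s"
```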

Conclusion? This looks bad.

What about the whois record?

That shows the domain to have been registered on the 6th of November last year. About three weeks before I was given the IP address.

mick@shed ~ $ whois

Domain name:
Registry Domain ID:
Registrar WHOIS Server:
Registrar URL:
Updated Date: 2019-11-06T00:00:00+08:00
Creation Date: 2019-11-06T22:52:03.0000Z
Registrar Registration Expiration Date: 2020-11-06T00:00:00+08:00
Registrar IANA ID: 1868

Interestingly though, a current whois lookup gives:

mick@shed ~ $ whois

Registry Domain ID: 2452061049_DOMAIN_COM-VRSN
Registrar WHOIS Server:
Registrar URL:
Updated Date: 2020-02-24T02:19:45Z
Creation Date: 2019-11-06T14:52:03Z
Registry Expiry Date: 2020-11-06T14:52:03Z
Registrar: Eranet International Limited
Registrar IANA ID: 1868

So the record was last updated on the 24th of this February. And sure enough, that domain name disappeared from the DNS records at around 1.30 GMT on 24 February. Note – it disappeared – it did not get pointed to a holding address (such as But I’m getting ahead of myself here so let’s take a step back again.

Have I been pwned? And do I host malware?

Next step is some research on the domain name. A search for “” and “malware” turns up:

Firstly, joesandbox, which, sure enough, shows that the domain name on my IP address was dropping malware. Ouch. Not good. Not good at all. But wait. The submission time for that analysis (top right of the full page) is shown as 08.11.2019 14:08:03 – only a couple of days /after/ the domain was registered and again some three weeks /before/ I was given the IP address.

Secondly, also at joesandbox, there is an analysis showing that my IP address was hosting another set of malware. Also not good, but again at a date before I had the IP address (Submission Time: 08.11.2019 17:56:13).

Thirdly, again at joesandbox, there is a very detailed, and scary, analysis of the behaviour of a downloaded file “contract1.doc” taken from the spoofed domain on 8 November last year. That analysis is here and a copy is shown below:

The behaviour graph in that analysis, shown in the next image, shows how the dropper works and sure enough, my IP address is implicated. But again that analysis dates to before I inherited the IP address.

In the HTTPS packet section of the Network Behaviour analysis (shown in the next image below) it says that the domain originally had its own Let’s Encrypt certificate, valid from Friday November 8 2019 to Thursday February 6 2020. That in itself is interesting because it means that from the date I moved trivia (5 December last year) to that IP address with my own Let’s Encrypt certificate covering my domains (and ONLY my domains) all future requests hitting my server with an invalid host name would get a big scary “This Connection is Untrusted” browser warning. But of course I know from my logs that almost all of those warnings were ignored.

Finally, in the “Domains” section of the analysis there is a link to Virustotal so that is the next port of call.

The image below, taken from VirusTotal, gives us an overview of a scan from three months ago which shows that 7 engines out of a total of 76 used recorded the domain as malicious. I’m not sure whether that is good, or bad. If, as I now believe to be the case, that domain is/was a source of Windows malware then I would have expected a much higher percentage of positives. But no matter at this stage.

The “relations” section of the analysis (in the image below) shows the results of the scans for various URLs on the domain and gives a worrying set of positives for dates when I /did/ have the IP address in question. Fortunately however, a click on the links (for example “” at 9/12/2019) gives the result that 2 months ago the URL gave a 404 not found. (As it should.)

So, whilst the document originally at that URL registered as malware on a variety of tests, when the link was last tested by VirusTotal (9 December 2019) it was no longer found. I should add here that I have run a recursive find on my webserver for all the documents listed by a variety of analysts out there, and additionally for any “.doc” or “.exe” or “.xls” files and come up blank. So I am reasonably confident (he said!) that the site is clean.
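The recursive search I ran was along these lines – the web root and file layout below are hypothetical stand-ins (the analysts’ reports mention dropped files such as “contract1.doc”):

```shell
# Build a hypothetical web root to search; in reality this would be
# /var/www or wherever your pages actually live.
mkdir -p /tmp/webroot/wp-content/uploads
touch /tmp/webroot/index.html /tmp/webroot/wp-content/uploads/contract1.doc

# Recursively hunt for Office documents and executables that should not be there:
find /tmp/webroot -type f \( -name '*.doc' -o -name '*.exe' -o -name '*.xls' \) -print
```

An empty result from the real web root is what lets me say, with reasonable confidence, that the site is clean.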

The “details” section of the VirusTotal analysis (below) gives us the DNS records for the domain, together with the the HTTPS certificate seen when the domain was last checked (which is now mine, and not the original spoof microsoft certificate).

That same page gives us the results of a google search for the domain name as below:

About 5 results (0.20 seconds)

Sort by: Relevance

Kyle Ehmke on Twitter: “Most likely TA505 domain box-en-au[.]com …
19 Nov 2019 … This one is calling out to an older site: microsoft-hub-us[.]com. I have to imagine a wave of new docs will pop up soon. 1 reply 0 retweets 2 likes.

AS204957 – LAYER6, UA –, 2 days ago, 3 MB, 47, 9, 6., 2 days ago, 5 MB, 91, 3, 3., 3 days ago, 54 KB, 30, 4, 2. …

The Blacklist from UT1 bad Recipe # # This recipe demonstrates …
File Format: text/plain
17 Nov 2016 … … deny deny deny deny …

Ransomware Clop : une communication officielle trop tardive ?
25 nov. 2019 … … et évoqué publiquement sharefile-cnd[.]com, ms-home-live[.]com, box-en-au[.] com, box-en[.]com, microsoft-hub-us[.]com, microsoft-live-us[.] …

“In June 2019 the man, conspiring with others, [accessed] Japan Post Bank’s internet banking …” [Japanese]
17 Dec 2019 … 07 Nov 2019 22:15:59 RT @kyleehmke: Possible TA505 domain microsoft-hub-us[.]com was registered on 11/6. Less confidence in …

So, onwards to Kyle Ehmke who is a researcher for Threat Connect. His tweets of 7 and 8 November last year say:

Kyle Ehmke
7 Nov 2019
Possible TA505 domain microsoft-hub-us[.]com was registered on 11/6. Less confidence in that association though as the domain is not currently hosted.


Kyle Ehmke
8 Nov 2019
The microsoft-hub-us[.]com domain is now hosted at 195.123.246[.]12.

That hosting at that address cannot have lasted long, because I was allocated the IP address along with my new Debian VM on 27 November last year. But I know from later analysis that the A record for that domain name continued to point to my address right up until 24 February this year. Kyle refers to the threat actor as “TA505”, known as an active and prolific attacker operating in the financial sphere – i.e. a criminal group motivated by money (rather than politics). On 19 November last year, Kyle posted again on Twitter that:

“Another most likely TA505 domain registered at essentially the same time as box-en-au[.]com: microsoft-store-en[.]com. Currently hosted at 103.199.16[.]197.”

to which Kyle Eaton responded:

“Nice find! Seems to me, when a new site is spun up they’ll send out an older static doc for a while before we start getting new files. This one is calling out to an older site: microsoft-hub-us[.]com. I have to imagine a wave of new docs will pop up soon.”

Searches for TA505 on Mitre give us the information that:

“TA505 is a financially motivated threat group that has been active since at least 2014. The group is known for frequently changing malware and driving global trends in criminal malware distribution.”

The Mitre page about the group lists some 15 different attack techniques used and 5 different pieces of malware. Mitre also reference Proofpoint analyses of the group going back several years. A quick search on the Proofpoint site gives us a list of 27 separate postings about the group. Their profile of TA505, dating from September 2017, describes the group as:

“One of the more prolific actors that we track – referred to as TA505 – is responsible for the largest malicious spam campaigns we have ever observed, distributing instances of the Dridex banking Trojan, Locky ransomware, Jaff ransomware, The Trick banking Trojan, and several others in very high volumes.”

That profile gives an interesting timeline of activity attributed to TA505 going back to June 2014. So these guys have been around for some time; they are well established, well organised and (apparently) quite successful. If I /have/ been pwned, at least it was done by a professional group……

Finally, is listed as having scanned the domain on December 23 2019. That analysis, given in the image below, shows the front page of my blog as it looked at the time with my “Welcome to Prague” post at the top.

Reassuringly, the ioscan analysis shows the website as “clean”. The historic list of scans given by ioscan (and shown below) details two failed connects three months ago, four successful connects to my server, and a final failed connection attempt one hour ago (at the time of finishing writing this).

It is worth noting at this stage that the reason connections to the spoofed Microsoft domain resulted in delivery of my blog is that (I confess) I had been dumb in my web server configuration. The server software I use (lighttpd) has a very simple virtual hosts configuration mechanism. That mechanism allows you to serve whatever content you wish depending upon the host name requested. So if you have a virtual host called “” and another called “” you merely need to tell the webserver to deliver the appropriate pages from the directories called “/var/www/pages/” or “/var/www/pages/” (or wherever you configure your web roots to be). But what if someone connects to just the IP address and not a virtual host name? Here is where I was dumb. Lighttpd allows you to set a “default” virtual host to serve in such a case, and I had set “” as my default. Thus anyone coming in and asking for a domain that I don’t host would get my blog. Stupid. Very stupid on reflection. As soon as I realised that was what was happening (and why my logs were full of crud) I changed the default to point to an empty directory with a blank index file. I have since improved on that by changing the default to give a “403 Forbidden” response. Better, and more logical methinks.
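For anyone with the same problem, the relevant lighttpd configuration looks something like the sketch below. The domain names and paths are hypothetical, and the catch-all assumes mod_access is loaded (an empty string passed to url.access-deny denies every URL, yielding a 403):

```
# Named virtual hosts (hypothetical domains and paths)
$HTTP["host"] == "domain1.example" {
    server.document-root = "/var/www/pages/domain1.example"
}
$HTTP["host"] == "domain2.example" {
    server.document-root = "/var/www/pages/domain2.example"
}
# Catch-all for bare-IP connections or spoofed host names: the empty
# regex matches anything, and url.access-deny = ( "" ) denies all URLs,
# so unknown hosts get "403 Forbidden" rather than the default site.
else $HTTP["host"] =~ "" {
    url.access-deny = ( "" )
}
```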

Now, as I was documenting all this (for just this sort of blog post) I received an email from my hosting provider (ITLDC) saying:

Ticket no: “blah” is listed on the Spamhaus Block List – SBL is listed on the Spamhaus Botnet Controller List – BCL
2020-02-23 12:30:36 GMT |
TA505 botnet controller @

and telling me that accordingly they had shut my VM down. (For which I cannot blame them. In their position I would do exactly the same.)

Bugger – and this is where I am exceptionally grateful that I had separated my blog from my mail. I cannot afford to have my email server blacklisted by spamhaus, correct or not. It takes a long time to gain a clean record for a mail server. One bad listing can see you blocked by multiple other mail providers and things then start to cascade out of control. I can afford to lose my blog for a while, but not my mail reputation.

I responded to the ticket explaining what I had found myself and asking that they investigate further. I also offered copies of all my log extracts showing what I had found so far, together with whatever further assistance they might need. Unfortunately, this happened at a weekend and my ticket had to be escalated to second line advanced support, who only work Monday to Friday.

It was a long weekend.

On the Monday after the weekend we corresponded further on the ticket, and by about lunchtime I got the good news: whilst Spamhaus is a trusted source of abuse reports, and a responsible ISP will quite properly take appropriate steps to prevent damage to its own or others’ networks following a Spamhaus alert, in this case the report turned out to be a false alarm and I could have my VM back.

Even better news was that they offered to allocate me a new IP address – which I happily accepted. As you can see (because you are reading this) we are back up on that new address and all looks good.

Conclusion then? I probably have /not/ been pwned at trivia. The most likely scenario seems to be that a previous user of that IP address had been compromised, or, given that the TA505 mob seem to have gone to the trouble (and had been able) to get their own, valid, Lets-encrypt certificate for the spoof domain, that group itself rented a VM on that address. My money is on a root compromise of a previous owner.

My immense gratitude to the support team at ITLDC and my particular thanks to Dmitry in that support team for taking my problem seriously, investigating it appropriately and coming up with a satisfactory outcome. That kind of service would be exceptional even if I were paying them ten times what I actually pay for my VMs. Given how little I do spend with them it is nothing short of amazing. I can think of several hosting providers who would happily throw you under a bus following a spamhaus report rather than spend time supporting you.

So, my thanks again to Dmitry and his team.

Go buy some VMs from them. They are excellent.

Permanent link to this article:

TLS certificate checks

My move of trivia to a new VM last December prompted me to look again at my server configuration. In particular I wanted to ensure that I was properly redirecting all HTTP requests to HTTPS and that the ciphers and protocols I support are as up to date and strong as possible. Mozilla offers a very good security reference site which should be your first port of call if you care about server side security. The “cheat sheet” on that site gives pointers to existing good practice guidelines for most of the configuration options you should care about on a modern website. I have implemented as many of these as is possible on trivia – but I am hampered slightly by the fact that I still use WordPress as my blogging platform. WordPress (and its myriad plugins) still does lots of things I don’t actually like (such as setting cookies I can’t control, loading google fonts etc.) but I’m stuck with that unless I change platform (which I might).

I have tried to ensure that all session cookies sent are as secure as possible by setting the “HttpOnly” and “secure” attributes in my wp-config file (as below)

@ini_set('session.cookie_httponly', true);
@ini_set('session.cookie_secure', true);
@ini_set('session.use_only_cookies', true);

but that seems to be bypassed by some plugins – which I have thus disabled (behave or begone!). Apart from that change, and some minor tweaks to my TLS configuration to ensure that I only use recommended protocols and ciphers, nothing much needed changing.
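One way to check what a plugin has actually done is to look at the headers on the wire. The live curl check is shown as a comment (hypothetical URL); the runnable part below applies the same test to a sample header:

```shell
# Live check (hypothetical URL; needs network access):
#   curl -sI https://blog.mydomain.example/ | grep -i '^set-cookie'
# Any session cookie should carry both attributes. A sample header:
hdr='Set-Cookie: wordpress_sec=abc123; path=/; secure; HttpOnly'
case "$hdr" in
  *secure*HttpOnly*) echo "secure and HttpOnly present" ;;
  *)                 echo "WARNING: cookie flags missing" ;;
esac
```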

My first port of call for remote checking of my security was then the Mozilla Observatory site. I thought the results were disappointing – I only scored a “B”.

mozilla result

However, a careful reading of the full results showed that trivia had actually passed 10 of the 11 tests and achieved a score of 75/100. The 25 missing points all came from the failure of the “Content-Security-Policy” test (I don’t implement a CSP – because it is largely impossible on WordPress sites, and particularly on a blog like trivia which points to multiple external resources).

mozilla details

Mozilla themselves say that:

Content Security Policy (CSP) is an HTTP header that allows site operators fine-grained control over where resources on their site can be loaded from. The use of this header is the best method to prevent cross-site scripting (XSS) vulnerabilities. Due to the difficulty in retrofitting CSP into existing websites, CSP is mandatory for all new websites and is strongly recommended for all existing high-risk sites.

I conclude that on general security recommendations I am doing reasonably well apart from the CSP issue.
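For completeness, a CSP is just another response header; in lighttpd one could be added with mod_setenv along the lines below. The policy shown is hypothetical and deliberately strict – far stricter than a plugin-laden WordPress site could tolerate, which is rather the point:

```
# Requires mod_setenv; the policy string here is a hypothetical example.
setenv.add-response-header = (
    "Content-Security-Policy" => "default-src 'self'; img-src 'self' data:; style-src 'self' 'unsafe-inline'"
)
```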

Next, and most importantly, is the TLS check.

tls observatory

Mozilla’s own check gives me an “I”, meaning “Intermediate”. This is not surprising since I have implemented their “intermediate” level recommendations. I considered using the “modern” set only, but that excludes TLSv1.2, would exclude users of many browsers and, oddly, result in a lower score at SSL labs. Besides, I really don’t see why my blog should set the bar higher than seems to be used much more widely elsewhere.

Lastly, the observatory links to third party test sites, including ssllabs, immuniweb, securityheaders and hstspreload. I’ve used some of these (notably ssllabs) independently in the past and found them to be robust, reliable and helpful in getting my site properly configured. None of the results there surprised me or bothered me overmuch. I still get a nice satisfying big green A+ at ssllabs.

A+ at SSLLabs

However, the immuniweb result intrigued me.

immuniweb result

Apparently, my blog is PCI-DSS compliant. I do hope not. It runs on a debian VM in a datacentre in Prague owned by a small European ISP – and it cost next to nothing. If that is all it takes to gain PCI-DSS compliance then I’m a little worried. (In reality, I expect all it means is that my /TLS/ configuration is PCI-DSS compliant).

So, having checked my own configuration, and found that I still get a nice green A+ at ssllabs, I thought I might check some other sites – particularly those which ought to take extra care about the strength of their TLS implementations. Given my apparent PCI-DSS compliance, what better sites to check than those of the banks? I picked fourteen bank sites, including four of which I am a customer (either as a saver or a borrower). Here, in no particular order, is what I found.


Nationwide Bank

Certificate expires in 1 year 8 months.



Certificate expires in 5 months.

Co-op Bank


Certificate expires in 9 months.

Halifax Bank


Certificate expires in 7 months.



Certificate expires in 7 months.

Lloyds Bank


Certificate expires in 7 months.

Natwest Bank


Certificate expires in 1 year and 1 month.



Certificate expires in 1 month.

Sainsburys Bank


Certificate expires in 3 months.



Certificate expires in 10 months.

Smile Bank


Certificate expires in 9 months.

Tesco Bank


Certificate expires in 1 year 5 months.

Virgin Money


Certificate expires in 9 months.

So: of the fourteen banks I checked, only 3 get an A+, 5 get an A, 3 get a B, 2 get a C and poor old Santander gets an F. In Santander’s case this is because their server apparently remains unpatched for the Zombie POODLE vulnerabilities. Qualys published information about this vulnerability in April 2019 and warned then that they would start giving an “F” grade to any server affected by these vulnerabilities from the end of May 2019.

For the remainder, the majority of problems seem to stem from the failure to remove TLSv1 and TLSv1.1 protocols. It is generally accepted that only TLSv1.2 and above are to be considered “secure” these days. None of the sites I checked support TLSv1.3, and even those sites supporting TLSv1.2 offer weak ciphers or also offer TLS versions lower than 1.2. Certainly PCI-DSS compliance implies a minimum of TLSv1.2 (See the rationale for the “Intermediate” configuration at Mozilla”s site.)

I notice also that practically all the Banks use certificates which last for one or two years. This strikes me as rather a long time, but of course there is always the difficulty in a live IT environment of balancing the need for frequent certificate changes against the need for some stability. Nevertheless, certificate changes can be automated and it seems to me that a much shorter certificate lifetime (say 3 to 6 months) would be more appropriate.
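Checking remaining certificate lifetime is trivially scriptable with openssl, which is why short lifetimes and automated renewal go well together. The live query is shown as a comment (hypothetical bank host name); the offline demonstration generates a throwaway 90-day self-signed certificate and runs the same checks against it:

```shell
# Live check (hypothetical host name; needs network access):
#   echo | openssl s_client -servername www.somebank.example \
#     -connect www.somebank.example:443 2>/dev/null | openssl x509 -noout -enddate
# Offline demonstration: a throwaway self-signed certificate valid for 90 days.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-key.pem \
    -out /tmp/demo-cert.pem -days 90 -subj "/CN=demo.example" 2>/dev/null
openssl x509 -noout -enddate -in /tmp/demo-cert.pem
# -checkend exits 0 if the certificate will still be valid after the given
# number of seconds (5184000 s = 60 days) - exactly what a renewal cron job needs.
openssl x509 -checkend 5184000 -in /tmp/demo-cert.pem >/dev/null && echo "renewal not yet due"
```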

Does this mean those Banks’ sites are insecure? Well, no, and the Banks themselves would almost certainly argue strongly, and correctly, that their TLS implementations meet industry best practice whilst catering for the (very wide) range of browsers in use by their clients. They may also argue that the sites I checked are not the actual portals to their on-line banking systems, merely the shop front door (so for example uses the subdomain

But I know what I think. They should do better. Much better.

And of course I’m not alone in my view. About 18 months ago Wired reported that “Top UK banks [weren’t] using the latest tech to secure transactions”. In that article, Wired pointed to research by Swansea University computer science student Edward Wall and also quoted Pen Test Partners’ researcher David Lodge as noting “There are some significant issues in need of improvement. Encryption is possibly the most important, in particular the section marked TLS. There have been a selection of cryptographical flaws found in the implementation and algorithms with older forms of SSL/TLS, meaning that only TLS 1.2 and 1.3 are recommended nowadays”.

That article goes on to note that the PCI DSS requires that the latest encryption standards are used. Sadly little seems to have changed in the past 18 months.

Permanent link to this article:

do not ask me for guest posts or links

For the past four years or so I have been receiving increasingly frequent requests for either guest posts, or links to external sites (or sometimes both). The requests have increased in number ever since I started posting about my use of OpenVPN. Many of these requests want me to point to their commercial VPN site. The requests all look something like this:


My name is Foo. I represent Bar. I found your blog on google and read your article on “X”. I think your readers will like our discussion about “X” on our site. Would you be willing to host a guest post by us, or one of our affiliates, promoting the use of “Y”? It would also be really good if you could link to our site from your article.

We are really flexible, so we could totally negotiate about special deals.

Now, the least irritating of these requests tend to come to the correct email address (which shows they have read the “about” page) rather than “” or some other speculative email address, and they are also directly relevant to the article in question (which shows they have actually read that too). But unfortunately, a depressingly large number of requests point to article “X” which has nothing whatsoever to do with their site (which may be a commercial site of tangential, at best, relevance to anything I write about). The worst type of request merely asks for me to point to some external resource from some random post on trivia.

I very, very, very rarely respond to any such requests. And I never, ever respond to persistent, repeated requests from the same source.

One particularly laughable request came in about three years ago. It asked me to point to an on-line password generator/checker (not a smart thing to do). I tried it with an XKCD style password like “soldieravailablecrossmagnet” and got the stupid response:

“Weak Password

It would take a computer about 507 quintillion years to crack your password”

Weak password eh?
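The silliness is easy to quantify with a back-of-envelope comparison of the two search spaces, assuming the worst case for the passphrase (the attacker knows it is made up entirely of lowercase letters):

```shell
# Search-space comparison: a 27-letter lowercase passphrase versus a
# 16-character random string drawn from 62 symbols (upper, lower, digits).
awk 'BEGIN {
    printf "27 lowercase letters: 26^27 = %.3e combinations\n", 26 ^ 27
    printf "16 mixed characters:  62^16 = %.3e combinations\n", 62 ^ 16
}'
```

The passphrase search space comes out roughly a billion times larger than that of sixteen characters of gibberish, so “weak” it most certainly is not.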

It should be obvious, but in case it isn’t I’ll spell it out here (and in an addition to my “about” page).

This is a personal blog. It is avowedly and intentionally non-commercial in nature. I pay for this blog from my own resources simply because I want to. I do not seek, nor will I accept, any sponsored content or linkages of any kind. Any external resources I point to are there simply because I have personally found those resources interesting or useful. So please do not ask me to point to your site. Please do not ask me for sponsored content. Please do not ask me for guest posts. If you do, it simply proves that you have not done your research properly – so you will be ignored.



Permanent link to this article:

retiring the slugs

I first started using Linksys NSLU2s (aka “slugs”) in early 2008. Back then I considered them quite useful and I even ran webservers and local apt-caches on them. But realistically they are (and even then, were) a tad underpowered. Worse, since Debian on the XScale-IXP42x hasn’t been updated for several years, the slugs are probably vulnerable to several exploits. The latest version of Debian available for the slugs is probably that which I have running (“uname -a” shows “Linux slug 3.2.0-6-ixp4xx #1 Debian 3.2.102-1 armv5tel”).

The advent of the Raspberry Pi (astonishingly eight years ago now) brought a much more powerful and flexible device into the hands of the masses – and it didn’t need complex re-flashing procedures to get a general purpose Linux installation running on it. Over the Christmas period last year I added two more Pis (Pi 4s this time) to my network and finally got around to retiring my slugs (well, actually I still have one running, but I will get around to replacing that soon too).

On replacing the slugs I noticed that the 1TB disk I bought as additional storage for my main slug had been running almost non-stop (apart from the occasional reboot) since March 2009. I think that is a remarkably good lifetime for a consumer grade hard disk. Certainly I have had internal disks fail at much lower usage timescales. I have even had supposedly more robust, and certainly way more expensive, disks fail on high end Sun workstations and servers in my professional life.

So if you are in the market for new consumer grade disks, I think I can safely recommend Toshiba.

Oh, and Happy New Year by the way.

Permanent link to this article:

welcome to prague

As of today we are now fully functional in our new home in a datacentre in Prague. We also have a new Let’s Encrypt certificate. If you see any problems, let me know at the usual email address.


Permanent link to this article:

a bargain VPS

I have been using services from ITLDC for about three years now. I initially picked one of their cheap VMs based in the Netherlands whilst I was expanding my VPN usage, and frankly, I was not expecting much in the way of customer service or assistance for the very low price I paid. After all I thought, you can’t expect much for under 3 euros a month. But I was pleasantly surprised to find that not only was the actual service pretty rock solid, but so was the help I received on the one or two occasions I had a problem. In fact I have never had to wait more than a few minutes for a response to a ticket. That is exceptional in my experience. For the last year or more, I have been using one of their VMs as an unbound DNS server and VPN endpoint.

So when I was considering a new VM I was very pleasantly surprised to note that ITLDC were offering a huge discount on new servers as part of a “black friday” promotion. I have now paid for a new debian server, based in Prague. That VM is one of their 2 Gig SSD offerings (2 GB RAM, dual core, 15 GB disk and unlimited traffic). Even at their normal undiscounted rate that would only have cost me 65.99 euros for a year. I paid the princely sum of 26.39 euros – a 60% discount.

Absolutely astounding value for money. Go get one before the offer runs out.

Permanent link to this article:


God help us all.

Permanent link to this article:

more password stupidity

A recent exchange of email with an old friend gave me cause to revisit on-line password/passphrase generators. I cannot for the life of me imagine why anyone would actually use such a thing, but there are a surprisingly large number out there. On the upside, most of these now seem to use TLS encrypted connections so at least the passwords aren’t actually passed back to the requester in clear, but the downside is that most generators are still woefully stupid.

I particularly liked this bonkers example:

password generator

The generator allows the user to select the length of the password together with other attributes such as character set and whether or not to include symbols. For fun I asked it to give me a sixteen character password and it duly generated the truly awful gibberish string “bJQhxyAe2R9NkcLN”. But the best bit was that it attempted to give me a way to remember this nonsense, by generating a further set of garbage:

“bestbuy JACK QUEEN hulu xbox yelp APPLE egg 2 ROPE 9 NUT korean coffee LAPTOP NUT”.

Forgive me, but that seems rather more difficult to remember than “soldier available cross magnet”.
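For contrast, a diceware-style passphrase generator is only a few lines of code. The sketch below is my own illustration, not the generator discussed above, and the toy wordlist is mine; a real diceware list has 7,776 words, giving roughly 12.9 bits of entropy per word (about 51.7 bits for a four word phrase):

```python
import secrets

# A toy wordlist for illustration only -- a real diceware list
# has 7,776 words; with that list each word adds ~12.9 bits of entropy.
WORDLIST = ["soldier", "available", "cross", "magnet", "ladder",
            "pencil", "orbit", "velvet", "canyon", "ripple"]

def passphrase(n_words=4, wordlist=WORDLIST):
    """Draw n_words uniformly at random with a CSPRNG and join them."""
    return " ".join(secrets.choice(wordlist) for _ in range(n_words))

print(passphrase())
```

Note the use of the `secrets` module rather than `random`: passphrase generation needs a cryptographically strong source of randomness.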

Permanent link to this article:

add my name to the list

At the tail end of last year, Crispin Robinson and Ian Levy of GCHQ published a co-authored essay on “suggested” ways around the “going dark” problem that strong encryption in messaging poses for agencies such as GCHQ and its foreign national equivalents. In that essay, the authors were at pains to state that they were not in favour of weakening strong encryption. Indeed they said:

The U.K. government strongly supports commodity encryption. The Director of GCHQ has publicly stated that we have no intention of undermining the security of the commodity services that billions of people depend upon and, in August, the U.K. signed up to the Five Country statement on access to evidence and encryption, committing us to support strong encryption while seeking access to data. That statement urged signatories to pursue the best implementations within their jurisdictions. This is where details matter, so with colleagues from across government, we have created some core principles that will be used to set expectations of our engagements with industry and constrain any exceptional access solution. We believe these U.K. principles will enable solutions that provide for responsible law enforcement access with service provider assistance without undermining user privacy or security.

They went on to outline what they called six “principles” to inform the debate on “exceptional access” (to encrypted data).

These principles are:

  • Privacy and security protections are critical to public confidence. Therefore, we will only seek exceptional access to data where there’s a legitimate need, that access is the least intrusive way of proceeding and there is appropriate legal authorisation.
  • Investigative tradecraft has to evolve with technology.
  • Even when we have a legitimate need, we can’t expect 100 percent access 100 percent of the time.
  • Targeted exceptional access capabilities should not give governments unfettered access to user data.
  • Any exceptional access solution should not fundamentally change the trust relationship between a service provider and its users.
  • Transparency is essential.

(I particularly like that last one.)

On first reading, the paper seems reasonable and unexceptional (which is probably what it was designed to do). It argues against direct attacks on end-to-end encryption itself and instead advocates insertion of an additional “end” to the encrypted conversation. So when Bob talks to Alice over his “secure” device, he would actually be talking to Alice and Charlie, where Charlie had been added to the conversation by the device manufacturer or service provider and the notification to Bob (or Alice) of that addition would be suppressed so they would not know of the eavesdropping.

This is what they said:

So, to some detail. For over 100 years, the basic concept of voice intercept hasn’t changed much: crocodile clips on telephone lines. Sure, it’s evolved from real crocodile clips in early systems through to virtual crocodile clips in today’s digital exchanges that copy the call data. But the basic concept has remained the same. Many of the early digital exchanges enacted lawful intercept through the use of conference calling functionality.

In a world of encrypted services, a potential solution could be to go back a few decades. It’s relatively easy for a service provider to silently add a law enforcement participant to a group chat or call. The service provider usually controls the identity system and so really decides who’s who and which devices are involved – they’re usually involved in introducing the parties to a chat or call. You end up with everything still being end-to-end encrypted, but there’s an extra ‘end’ on this particular communication. This sort of solution seems to be no more intrusive than the virtual crocodile clips that our democratically elected representatives and judiciary authorise today in traditional voice intercept solutions and certainly doesn’t give any government power they shouldn’t have.

We’re not talking about weakening encryption or defeating the end-to-end nature of the service. In a solution like this, we’re normally talking about suppressing a notification on a target’s device, and only on the device of the target and possibly those they communicate with. That’s a very different proposition to discuss and you don’t even have to touch the encryption.

Neat huh? No need to go to all the bother of crypto attack, key escrow or any of the “magic thinking” around weakened encryption. Who could possibly object to that?

Well, lots of people could, and many did just that.

The Open Technology Institute worked to coordinate a response from an international coalition of 47 signatories, including 23 civil society organizations that work to protect civil liberties, human rights and innovation online; seven tech companies and trade associations, including providers that offer leading encrypted messaging services; and 17 individual experts in digital security and policy. Those signatories included: Big Brother Watch, the Center for Democracy & Technology, the Electronic Frontier Foundation, the Freedom of the Press Foundation, Human Rights Watch, Liberty, the Open Rights Group, Privacy International, Apple, Google, Microsoft, WhatsApp, Steven M. Bellovin, Peter G. Neumann of SRI International, Bruce Schneier, Richard Stallman and Phil Zimmermann, amongst others.

On May 30th 2019, they published an open letter to GCHQ setting out their concerns about the proposals. In that letter they outlined:

how the “ghost proposal” would work in practice, the ways in which tech companies that offer encrypted messaging services would need to change their systems, and the dangers that this would present. In particular, the letter outlines how the ghost proposal, if implemented, would “undermine the authentication process that enables users to verify that they are communicating with the right people, introduce potential unintentional vulnerabilities, and increase risks that communications systems could be abused or misused.” If users cannot trust that they know who is on the other end of their communications, it will not matter that their conversations are protected by strong encryption while in transit. These communications will not be secure, threatening users’ rights to privacy and free expression. (my emphasis)

They went on to say:

  • The Proposal Creates Serious Risks to Cybersecurity and Human Rights.
  • The Proposal Would Violate the Principle That User Trust Must be Protected.
  • The Ghost Proposal Would Violate the Principle That Transparency is Essential.

They concluded that GCHQ should:

abide by the six principles they have announced, abandon the ghost proposal, and avoid any alternate approaches that would similarly threaten digital security and human rights.

Additionally, Jon Callas at the ACLU has published a series of four essays which break down the fatal flaws in the proposal. Those essays in themselves are well worth reading, but so are all the additional papers (by people such as Steven Bellovin, Matt Blaze, Susan Landau, Whitfield Diffie, Seth Schoen, Nate Cardozo and many others) pointed to in those essays.

So: back in your box Levy, no-one wants your shitty little stick.

Permanent link to this article:

openvpn clients on pfsense

In my 2017 article on using OpenVPN on a SOHO router I said: “In testing, I’ve found that using a standard OpenVPN setup (using UDP as the transport) has only a negligible impact on my network usage – certainly much less than using Tor.”

That was true back then but is unfortunately not so true now.

In 2017 my connection to the outside world was over a standard ADSL line. At its best, I saw around 11 – 12 Mbit/s. Using OpenVPN on my new Asus router I saw this drop to about 10 Mbit/s. I found that acceptable and assumed that it was largely caused by the overhead of encapsulation of TCP within UDP over the tunnel.

Not so.

My small corner of the rural English landscape has recently been provided with fast FTTC connectivity by BT Openreach. This meant that I could get a new fast fibre connection should I so wish. I did so wish, and at the end of my contract with my last ISP I switched to a new provider. I now have a VDSL connection giving me a 30 Mbit/s IP connection to the outside world. Plenty fast enough for our use case (though I can apparently get 60 Mbit/s should I so wish). However, my OpenVPN connection stayed stubbornly at the 10 Mbit/s mark. No way was that acceptable.

In testing I switched the client connection endpoint away from my router and back to my i7 desktop. The tunnel speed went up to a shade under 30 Mbit/s. Conclusion? The overhead was /not/ caused by protocol encapsulation, but rather by the encryption load, and my SOHO router was simply not powerful enough to give me a decent fast tunnel.

So I needed a new, beefier, router. I considered re-purposing an old Intel i5 box I had lying around unused, but on careful reflection I decided that that would be way too much of a power hog (and a bit on the large side) when all I really needed was something about the size and power consumption of my existing routers. But before selecting a hardware platform I looked for a likely OS. There are plenty of options around, varying from the fairly router specific OpenWRT/LEDE or DD-WRT firmware binaries, through to firewall platforms such as Endian, Smoothwall, IPFire, IPCop, pfSense or OPNsense.

At varying times in the past I have used OpenWRT, IPCop and IPFire with, at best, mixed success. I decided fairly early on to discount the router firmware approach because that would mean simply re-flashing a SOHO router which would probably end up just as underpowered as my existing setup. Besides, I really wanted to try a new firewall with application layer capabilities to supplement my existing NAT based devices. Smoothwall, IPCop, IPFire and Endian are all based on hardened Linux distributions and whilst Endian looks particularly interesting (and I may well play with it later) I fancied a change to a BSD based product. I’m a big Linux fan, but I recognise the dangers of a monoculture in any environment. In a security setup a monoculture can be fatal. So I downloaded copies of both pfSense and OPNsense to play with.

As an aside, I should note that there appears to be a rather sad history of “bad blood” between the developers of pfSense and OPNsense. This can sometimes happen when software forks, but the animosity between these two camps seems to have been particularly nasty. I won’t point to the links here, but any search for “pfsense v opnsense” will lead you to some pretty awful places, including a spoof OPNsense website which ridiculed the new product.

OPNsense is a fork of pfSense, which is itself originally a fork of the m0n0wall embedded firewall. The original fork of pfSense took place in 2004 with the first public version appearing in 2006. The fork of OPNsense from pfSense took place in January 2015, and when the original m0n0wall project closed in February 2015 its creator and developer recommended all users move to OPNsense. So pfSense has been in existence, and under steady development, for over 13 years, whilst OPNsense is a relative newcomer.

Politics of open source project forks aside, I was really only interested in the software itself. In my case, so long as the software meets my needs (in this case a solid ability to handle multiple OpenVPN client configurations) what I care most about is usability, documentation, stability, longevity, active development and support (so no orphaned projects) and, preferably, an active community. Both products seem to meet most of these criteria, though I confess that I prefer the stability of pfSense over the (rather too) frequent updates to OPNsense. In my view, there is little to choose between the two products in terms of core functionality. The GUIs are different, but preference there is largely a matter of personal taste. But crucially, for me, I found the pfSense documentation much better than that for OPNsense. I also found a much wider set of supplementary documentation on-line created by users of pfSense than exists for OPNsense. Indeed, when researching “OpenVPN on OPNsense” for example, I found many apparently confused users (even on OPNsense’s own forums) bemoaning the lack of decent documentation on how to set up OpenVPN clients. Documentation for both products leans heavily towards the creation of OpenVPN servers rather than clients, and neither is particularly good at explaining how to use pre-existing CAs, certificates and keys for either the server or client end. But eventually I found it fairly straightforward to set up on pfSense, and having now had it running successfully for a while I am happy to stick with that product.

Having chosen my preferred product I had to purchase appropriate hardware on which to run it. I eventually settled on a Braswell Celeron Dual Core Mini PC.

As you can see from the pictures, this device has dual (Gigabit) ethernet ports, twin HDMI ports, WiFi (which I don’t actually use in my configuration) and six USB ports (USB 2.0 and USB 3.0), also unused. Internally it has a dual core Intel Celeron N3050 CPU (which crucially supports AES-NI for hardware crypto acceleration), 4 GB of DDR3 RAM and a 64 GB SSD, all housed in a fanless aluminium case not much larger than a typical external hard disk drive. Very neat, and in testing it rarely runs hotter than around 32 degrees centigrade.
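As an aside, if you want to check whether a box you already own has the AES-NI flag before buying new hardware, a quick sketch follows. This is my own illustration (nothing to do with pfSense itself) and it is Linux-specific, since it reads /proc/cpuinfo:

```python
import os

# Linux-only check for the "aes" CPU flag, which indicates AES-NI support.
# My own illustration -- on other platforms this simply reports False.
def has_aes_ni(path="/proc/cpuinfo"):
    """Return True if the first 'flags' line in cpuinfo lists 'aes'."""
    if not os.path.exists(path):  # not Linux (or no procfs mounted)
        return False
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                return "aes" in line.split()
    return False

print("AES-NI available" if has_aes_ni() else "no AES-NI flag found")
```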

So: what does my configuration look like?

Initial configuration is fairly straightforward and takes place during the installation and consists of assigning the WAN and LAN interfaces and setting the IP addresses. When this is concluded, additional general configuration is handled through the “setup wizard” available from the web based GUI which appears on the LAN port at the address you have assigned. This early configuration includes: naming the firewall and local domain; setting the DNS and time servers; and some configuration of the GUI itself. In my case I have local DNS forwarders on both my inner and outer local nets so I pointed pfSense to my outer local forwarder (which, in turn, forwards queries to my external unbound resolvers). Most users will probably configure the DNS address to point to their ISP’s server(s). At this point it is a good idea to change the default admin password and then reboot before further configuration.

One point worth noting here is whether to set the pfSense box as a DNS forwarder, or resolver. In most configurations you will wish to simply forward requests to an external forwarder or resolver (as do I). Internally pfSense uses DNSmasq as a forwarder and unbound as a caching resolver so you could use the new firewall itself to resolve addresses. Forwarding is simpler.

I did all the initial configuration off-line so as not to interrupt my existing network setup. But once I was happy with the new pfSense box I then had to simply amend the configuration of my existing internal router so that its RFC1918 WAN address matched the LAN address set on the new firewall (.1 at one end and .254 at the other). I had configured the WAN address of the pfSense box to match my existing external router setup so that insertion of the new box between the two routers caused minimum disruption. The new network looks something like this: (click the image for a larger view).

At this stage, the pfSense box is simply acting as a new NAT firewall and router. Testing from various points on the internal net showed that traffic flowed as I expected.

Now for the OpenVPN client configuration.

This assumes that we are using TLS/SSL with our own pre-configured CA, certificates and keys. pfSense allows you to set up your own OpenVPN server and certificates if you wish. I chose not to do that because I am re-using an existing setup. You could also use the simpler pre-shared key setup (if this makes you feel safe).

These are the steps I followed:

1. Go to System -> Cert Manager -> CAs

Add the new CA.
Give it a descriptive name (such as “My Certificate Authority”).
Import an existing Authority.
Paste in your X509 Certificate and (optional but recommended) paste in your private key for that certificate.


2. Go to System -> Cert Manager -> Certificates

(Note that there will already be a self-signed cert for the pfSense web configuration GUI.)

Add a new certificate.

Again give it a descriptive name (such as “My Openvpn Certificate”).
Import an existing certificate.
Paste in your X509 Certificate and private key.


3. Go to VPN -> OpenVPN -> Clients

Add a new client.

In the General Information section:

Ensure the server mode is correct for your setup (we are using Peer to Peer TLS/SSL).
Check that the protocol and device mode are correct for your setup and that the interface is set to WAN.
Add the host server name or IP address for the remote end of the tunnel.
Give the connection a meaningful name (e.g. “hostname” in Paris).

If you use authentication, add the details.

In the Cryptographic settings section:

Ensure “use a TLS key” is checked.
But uncheck “automatically generate a TLS key” (because we have our own already).
Now paste in the TLS key and ensure that “TLS key usage mode” matches your use case (TLS Authentication or TLS Encryption and Authentication).
Select your previously created CA certificate from the “Peer Certificate Authority” drop down box together with any relevant revocation list.
Select your client Certificate (created at step 2 above) from the drop down box.
Select the encryption algorithm you use.
If you allow encryption algorithm negotiation at the server, then check the “Negotiable Cryptographic Parameter” box and select the algorithm(s) you want to use.
Select the “Auth digest algorithm” in use (I recommend a minimum of SHA256 – personally I use SHA512, but this must match the server end).
If your hardware supports it (AES-NI for example) then select “Hardware Crypto”.

In the Tunnel Settings section:

Leave everything at the default (because our servers set the Tunnel addresses) but ensure that the compression settings here match the remote server. Personally I disable compression (see OpenVPN documentation for some reasons) so I set this to “comp-lzo no” at both ends of the tunnel.

Finally, in the Advanced Configuration section:

Paste in any additional configuration commands that you have at the server end which have not been covered above.
I use:

remote-cert-tls server;
key-direction 1;

and select IPv4 only for the gateway connection (unless you actually use IPv6) and also select an optional log verbosity level. You may choose a high level whilst you are testing and change it later when all is working satisfactorily.
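For comparison, the settings in steps 1 to 3 correspond roughly to a plain OpenVPN client configuration file like the sketch below. The endpoint name, port and file names are placeholders of my own, and the cipher and digest are merely examples – yours must match your server:

```
client
dev tun
proto udp
remote vpn.example.net 1194   # placeholder endpoint, not a real server
tls-client
remote-cert-tls server        # from the Advanced Configuration box
ca ca.crt                     # the CA imported at step 1
cert client.crt               # the certificate imported at step 2
key client.key
tls-auth ta.key 1             # "use a TLS key", with key-direction 1
cipher AES-256-GCM            # must match the server end
auth SHA512                   # my preferred auth digest
comp-lzo no                   # compression disabled at both ends
verb 3                        # raise while testing, lower once stable
```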


4. Repeat 3 above to create clients for all other servers (or VPN services) you may have.

Note that if you have multiple client configurations (as I do) then you should ensure that only one client at a time is enabled. You can selectively enable and disable clients by editing the configuration at VPN -> OpenVPN -> Clients.

5. Go to Interfaces -> Assignments -> Interface Assignment

Select an interface to assign to one of the clients created at 3 or 4 above from the drop down boxes.
Enable the interface by checking the box and give the interface a meaningful name (such as “tunnel to Paris”). (“We’ll always have Paris….”).
Leave everything else as the default and save.

Now allow access to the tunnel(s) through the interface(s):

6. Go to Firewall -> NAT -> Outbound

Check the radio button marked “Manual Outbound NAT rule generation”. All of the outbound NAT rules which were created automatically as a result of your initial general setup will be shown. The source addresses for these rules will be the local loopback and the LAN IP address you set.

Add a new rule to the bottom of the list.

In the “Advanced Outbound NAT entry” section:

Change the address family to IPv4 only (if appropriate).
Give the source as the LAN network address of the pfSense F/W.
Leave the other entries as the default.


7. Go to Firewall -> Rules -> LAN

Disable the IPv6 rule (if appropriate to your use case).

8. Go to Firewall -> Rules -> OpenVPN

Add a new rule to pass IPv4 traffic through the interface called OpenVPN. Give the rule a meaningful description (such as “allow traffic through the tunnel”).

9. Now finally go to Status -> OpenVPN

The (single) OpenVPN client you have enabled from 3 above should be shown as running. You can stop or restart the service from this page.

10. Now check that traffic is actually going over the tunnel by checking your public IP address in a web browser (I use “” amongst others).

If all is working as you expect and you have multiple VPN endpoints, try disabling the tunnel you are using (from “VPN -> OpenVPN -> Clients, Edit Client”) and selectively enabling others. Check the status of each selected tunnel in “Status -> OpenVPN” and reload as necessary.

In my case, with the hardware I have chosen, and the configuration given above, I now get near native speed over any of my VPN tunnels. It will be interesting to see what I get should I move to even faster broadband in future.


Permanent link to this article:

one unbound and you are free

I have written about my use of OpenVPN in several posts in the past, most latterly in May 2017 in my note about the Investigatory Powers (IP) Bill. In that post I noted that all the major ISPs would be expected to log all their customers’ internet connectivity and to retain such logs for so long as is deemed necessary under the Act. In order to mitigate this unwarranted (and unwanted) surveillance as much as possible, I wrap my connectivity (and that of my family and any others using my networks) in an OpenVPN tunnel to one of several endpoints I have dotted about the ‘net. This tunnel shields my network activity from prying eyes at my ISP, but of course does not stop further prying eyes at the network end point(s). Here I am relying on the fact that my use of VMs in various European datacentres, and thus outside the scope of the IP Act, will give me some protection. But of course I could be wrong – and as I pointed out in my comparison of paid for versus roll your own VPNs, “there is no point in having a “secure” tunnel if the end server leaks like a sieve or is subject to surveillance by the server provider – you have just shifted surveillance from the UK ISP to someone else.”

That aside, I feel more comfortable in using my own VPN, to an end point I have chosen, in a location I have chosen, with a provider I have chosen, than I do in simply exiting my domestic ISP’s network with all that I /know/ they will be doing to log my activity. Call me picky.

Now one glaring omission in my protective stance has always been my reliance on third party DNS servers. Again, as I noted in my 2017 post, many commercial VPN providers rely on DNS servers of questionable reliability. By that I mean not that the DNS servers would necessarily fail, but that they could not be fully trusted. Google DNS servers (on and for example are very popular with ISPs precisely because the infrastructure they provide /is/ robust and reliable. But Google log your requests. In fact they are in a very powerful position. I can’t find statistics on the total proportion of DNS requests answered by Google (and I have looked, trust me) but back in late 2014, Google themselves stated “Google Public DNS resolvers serve 400 billion responses per day and more than 50% of them are location-sensitive.” That worries me – and it should worry Tor users (a naturally shy bunch of internet users) even more. Back in 2016, “Freedom to Tinker” published a blog post by the researchers Philipp Winter, Benjamin Greschbach, Tobias Pulls, Laura M. Roberts, and Nick Feamster (later published in a paper at (PDF). That research found “that a significant fraction of exit relays send DNS requests to Google’s public resolvers. Google sees about one–third of DNS requests that exit from the Tor network—an alarmingly high fraction for a single company, particularly because Tor’s very design avoids centralized points of control and observation.” Discussion on the Tor relays email list suggests that, even today, DNS lookups remain a threat to Tor users’ privacy and anonymity.

But worse than just logging, some DNS providers (notably Quad9 on and 149.112.112.112, and OpenDNS on 208.67.222.222 and 208.67.220.220 for example) actively hijack and interfere with DNS requests. OpenDNS actually make a marketing point of this interference by saying that they will block access to “adult” sites (in the name of parental protection of course). Others, such as cleanbrowsing (on 185.228.168.168 & 185.228.169.168) make a similar virtue of blocking access to “malware” or “adult” sites. All this may appear eminently laudable, and for some people hoping to manage the sort of sites their kids access from home it may seem an attractive option. But I’m a purist. A DNS server should do one thing and only one thing and it should do it well. It should answer DNS requests according to RFCs 1034 and 1035 (which obsoleted RFCs 882 and 883). It most certainly should not, for example, intercept requests and provide pointers to websites owned by the provider when that provider deems it appropriate to do so. If I ask for the A record for the DNS name “” for example, I should get the answer “NXDOMAIN” (as recommended by RFC 2308) telling me that that name does not exist in the DNS system. I most categorically should /not/ get a record pointing to another site.

Indeed, back in the early part of this century, Verisign, who were the registry responsible for the .net and .com domains, introduced what they called the “Site Finder service”. (See also the wikipedia article for further discussion.) That “service” (or in reality nothing less than a naked power grab by Verisign) returned the address of a Verisign owned and managed web server whenever a request was received for an unregistered .com or .net domain name. Fortunately, in this case ICANN stepped in in 2003 and forced Verisign to desist. But this example merely serves to illustrate how easy it is to interfere with legitimate DNS requests. UK ISPs do this all the time these days. They have to by law, not least in order to apply the (somewhat controversial) blocklists provided by The Internet Watch Foundation.
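You can see the correct NXDOMAIN behaviour from code. The sketch below is my own illustration: the domain shown is a deliberately unregistered name under the .invalid TLD (reserved by RFC 2606), not one from this post, and an honest resolver must refuse to resolve it:

```python
import socket

# .invalid is reserved by RFC 2606, so an honest resolver must answer
# NXDOMAIN; Python surfaces that as socket.gaierror.
try:
    socket.getaddrinfo("does-not-exist.invalid", None)
    print("got an answer -- the resolver is interfering")
except socket.gaierror:
    print("NXDOMAIN: the name does not exist, as RFC 2308 intends")
```

A hijacking resolver of the Site Finder variety would take the first branch and hand back an address it controls.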

On my own network internally, I run dnsmasq as a local caching resolver – well actually, I run two such resolvers, one on my inner net, the other on my outside net which has a slightly different security policy stance. The advantage of running such local caches is that I can interfere with my /own/ DNS requests. I do this deliberately in order to block requests to sites I don’t want to see, and which attempt to infringe my privacy. dnsmasq gives me a very simple mechanism to do this through its configuration directive “addn-hosts=” which forces dnsmasq to consult a file similar to the local database of known hosts typically listed at “/etc/hosts”. In my case I set this to “addn-hosts=/etc/hosts.block” which is a locally modified copy of Dan Pollock’s hosts file. So any website which tries to direct my browser to facebook, or google-analytics, or any other of the myriad irritating sites which try to shove cookies at me or track me or collect data about me (and these days, unfortunately, that is most of them) won’t succeed. I hate advertising sites and I loathe facebook in particular. So they get pointed to
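As a sketch of what that looks like in practice (the two hostnames shown are illustrative examples of the kind of entry such a block file contains, not an excerpt from Dan Pollock’s list):

```
# /etc/dnsmasq.conf -- consult an extra hosts-format file of blocked names
addn-hosts=/etc/hosts.block

# /etc/hosts.block -- anything listed here resolves to the unroutable  www.facebook.com  www.google-analytics.com
```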

But as I said above at the start of this post, one glaring omission in my attitude to DNS resolution was my reliance on external third party DNS servers for addresses not covered by my local resolver. As I said in my 2017 post, my local dnsmasq resolver files pointed to the OpenVPN endpoint(s) for resolution and both those servers and my local DNS resolvers pointed only to opennic DNS servers. I trust those servers a lot more than I would any of the larger public DNS servers, but they have flaws. The biggest problem is that many of those servers seem to be run by people like me – essentially hobbyists or activists who dislike internet censorship. There is nothing wrong with that, in fact I applaud it, but it often means that the DNS servers themselves are underspecced or underpowered and/or run on VMs in low bandwidth datacentres (because they are cheap). This means in turn that the servers themselves will often be overloaded, or periodically offline, or will even disappear altogether. This makes maintenance of my list of preferred servers too much of an overhead. I like simplicity. (As an aside, because I am naturally a suspicious sort of chap, there is also the possibility that one or more of the opennic servers may actually be run by persons I ought NOT to trust. It is well known amongst the Tor fraternity for example, that a proportion of the exit nodes at any one time may well be run by Government agencies, or others, keen to de-anonymise Tor users. If you are shown to care about your privacy, by using Tor for example, then of course you “must have something to hide”. Similar reasoning may lead “bad guys” (TM) to wish to run opennic servers. After all, they are all run by volunteers…..)

So, what to do to enhance my (fragile) privacy? Enter unbound, a validating, recursive, caching DNS resolver, designed to be fast and secure. Better yet, unbound supports the emerging standard for encrypted DNS (DNS over TLS) and does DNSSEC validation by default in most configurations. Unbound is distributed under a BSD license and can be found in most Linux repositories or BSD ports collections. It is also freely available in source form from NLnet Labs. The extensive documentation is also excellent.

The configuration file options allow for extensive control over how unbound operates, but a simple configuration can use as few as 8 or 9 lines of text. My own configuration hides both the identity and the version of unbound in use, limits unbound to IPv4 only and disables all logging (for obvious reasons). And, of course I only allow queries from my own servers or networks – I don’t want to be used by all and sundry on the internet.
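By way of illustration, a minimal configuration along the lines I describe might look like the sketch below. The interface and network addresses are placeholders – substitute your own:

```conf
# /etc/unbound/unbound.conf – minimal sketch
server:
    # don't advertise who or what we are
    hide-identity: yes
    hide-version: yes
    # IPv4 only
    do-ip6: no
    # no logging (for obvious reasons)
    verbosity: 0
    # listen only where we need to
    interface: 127.0.0.1
    interface: 192.0.2.1                 # placeholder for the server's own address
    # refuse everyone except our own servers and networks
    access-control: 0.0.0.0/0 refuse
    access-control: 127.0.0.0/8 allow
    access-control: 192.0.2.0/24 allow   # placeholder for the local network
```

With DNSSEC validation on by default in most packaged builds, that really is about all that is needed for a private, local recursive resolver.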

It is worth noting that several authors have published suggestions aimed at mitigating the threat to Tor users posed by relying on third party DNS servers. One nice example by Antonios A Charlton proposes the use of the PowerDNS recursor, a recursing-only server which holds no authoritative data of its own – it always queries the authoritative servers. PowerDNS has many fans; I simply prefer unbound in my environment. YMMV.

Permanent link to this article:

back to the gym

Having just returned from a family holiday which included too much food and drink and nowhere near enough exercise (well, that’s what holidays are for) I needed to get back to the gym in order to work off some of the excess. My local gym has recently undergone a major refurbishment and equipment upgrade and some of the workstations (notably the treadmills) now have integrated touch screens providing access to a variety of services. As you can see from the picture below, these services range from the obviously relevant, such as details of your workout, your heart rate or linkages to fitness trackers, through TV, YouTube or Netflix access, to the less obviously necessary social media services such as Facebook, Instagram and Twitter. God knows how you can tweet and run at the same time, and it is beyond me why anyone would even consider giving their social media account details to a gym company. But hey, the technology is there and people do use it.

image of gym workstation screen

treadmill screen

Before the refurbishment all we had was wall mounted TV screens in front of the treadmills and static bicycles so the ability to pick my own TV programme during a workout rather than having to watch yet another bloody episode of “Homes under the Hammer” or “This Morning” was welcome. What I confess was also attractive was the option to watch Netflix. I pondered for a while the wisdom of plugging in my Netflix account details to my workstation login, but eventually concluded that the ability to watch more of what I wanted than was available on TV at the times I use the gym was worth the (fairly low) risk of loss of my Netflix credentials. After all, breach of my Netflix credentials would not expose anywhere near as much about me as would be the case if I was daft enough to use Facebook, Instagram and Twitter and then give /those/ credentials to a third party.

My treadmill workouts usually take around 45-50 minutes before I move on to my other exercises. Not enough time for a film, but ample time for TV re-runs or box set episodes, so I have been doing just that. On my return to the gym after the holiday I found myself watching early episodes of Black Mirror. Now there is something faintly surreal about watching Charlie Brooker and Konnie Huq’s “Fifteen Million Merits” (which is about people riding exercise bikes whilst watching interactive video screens in order to gain “social media” points) on a touch screen attached to a gym treadmill.

Especially when that system is made by a company called Matrix.

Permanent link to this article:

more in the “you couldn’t make it up” dept

The UK Parliamentary petitions site is currently hosting what appears to be one of the most popular petitions it has ever listed. The petition seeks to gain support for revocation of article 50 so that the UK can remain in the EU. Personal politics aside (though in the interests of transparency I should say that I am a passionate supporter of remain) I believe that this petition, or one very like it, was inevitable given our dear PM’s completely shambolic handling of the whole brexit fiasco. Her latest “appeal” to the “tired” public to get behind her version of brexit, in which she lays the blame for the delay in getting her deal over the line in the lap of MPs, was probably the last straw for many. It is certainly a risky strategy because she needs the support of those very MPs to get the agreement she wants.

Telling the public that she is “on [y]our side” and that she understands we have “had enough” is just asking for a kicking. So when the twitter hashtag #RevokeArticle50 pointed to the Parliamentary petition seeking the revocation of the whole sorry business it became almost inevitable that the public would respond appropriately. At one stage the petition signing rate was the highest ever seen.

Inevitably, however, the site could not cope with this demonstration of the will of the people and it slowed, and eventually crashed – repeatedly. When I went to sign the petition at around 16.00 today, it took me several attempts to get past the “nginx 502 Bad Gateway” page and get a “thank you for signing” message.

Of course, unless I actually get the email message referred to, and I respond, then my signature won’t count. Right now though, the entire site is off line – but don’t worry, they are working on it.

As of 17:25 today, there were some 1,114,038 recorded signatures, and the total was still growing. But don’t get too excited: Andrea Leadsom has reportedly dismissed the petition, saying that HMG will only take any notice if the total rises above 17.4 million – the number who voted in favour of leaving the EU.

Don’t you just love our political system?

Permanent link to this article:

postfix sender restrictions – job NOT done

OK, I admit to being dumb. I got another scam email yesterday of the same formulation as the earlier ones (mail From: me@mydomain, To: me@mydomain) attempting to extort bitcoin from me.

How? What had I missed this time?

Well, this was slightly different. Checking the mail headers (and my logs) showed that the email had a valid “Sender” address (some bozo calling themselves “”) so my earlier “check_sender_access” test would obviously have allowed the email to pass. But what I hadn’t considered was that the sender might then spoof the From: address in the data portion of the email (which is trivially easy to do).

Dumb, so dumb. So what to do to stop this?

Postfix allows for quite a lot of further directives to manage senders through the smtpd_sender_restrictions and mine were still not tight enough to stop this form of abuse. One additional check is offered by the reject_sender_login_mismatch directive which will:

“Reject the request when $smtpd_sender_login_maps specifies an owner for the MAIL FROM address, but the client is not (SASL) logged in as that MAIL FROM address owner; or when the client is (SASL) logged in, but the client login name doesn’t own the MAIL FROM address according to $smtpd_sender_login_maps.”

Now since I store all my user details in a MySQL database referenced by “virtual_mailbox_maps”, it is simple enough to tell postfix to use that database as the “smtpd_sender_login_maps” and check the “From” address against it. That way only locally authenticated valid users can specify a local “From:” address. Why I missed that check is just beyond me.

My postfix configuration now includes the following:

smtpd_sender_login_maps = $virtual_mailbox_maps

smtpd_sender_restrictions = permit_sasl_authenticated, permit_mynetworks, reject_non_fqdn_sender, reject_unauthenticated_sender_login_mismatch, check_sender_access hash:/etc/postfix/localdomains

(Note that I chose to use the “reject_unauthenticated_sender_login_mismatch” rather than the wider “reject_sender_login_mismatch” because I only care about outside unauthenticated senders abusing my system. I can deal with authenticated users differently…)
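For anyone with a similar virtual mailbox setup, the map behind $virtual_mailbox_maps is simply a lookup from a mail address to its owning login. A sketch of what such a MySQL map file might look like is below – note that the database, table and column names here are illustrative assumptions based on my own schema, and yours will almost certainly differ:

```conf
# /etc/postfix/mysql-virtual-mailbox-maps.cf – sketch only
# (database, table and column names are illustrative)
user = postfix
password = secret
hosts = 127.0.0.1
dbname = mailserver
# return the owning login name for a given MAIL FROM address
query = SELECT username FROM mailbox WHERE username = '%s'
```

Postfix substitutes the address being checked for the ‘%s’ in the query, and the returned username is then compared against the SASL login name.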

Now let’s see what happens.

Permanent link to this article:

postfix sender restrictions

I mentioned in my previous post that I had recently received one of those scam emails designed to make the recipient think that their account has been compromised in some way and, furthermore, that the compromise has led to malware being installed which has spied on the user’s supposed porn habits. The email then attempts a classic extortion along the lines of, “send us money or we let all your friends and contacts see what you have been up to.”

In the scam as described by El Reg, the sender tries to lend credence to the email by including the recipient’s password. As the Reg points out, this password is likely to have been harvested from a web site used in the past by the poor unsuspecting recipient. In my case, the sender didn’t include any password, but they did send the email to me from the email address targeted (so they sent email to “mick@domain” with sender “mick@domain”). Needless to say, I thought that this should not have been possible (except in the unlikely scenario that the extortionist actually had compromised my mail server). After all, my mail server refuses to relay from addresses other than my own networks, and all mail sent from my server must come from an authenticated user (using SASL authentication). My postfix sender restrictions looked like this:

# sender relaying restrictions – authenticated users can send to anywhere

smtpd_sender_restrictions = permit_sasl_authenticated, permit_mynetworks, reject_non_fqdn_sender, permit

That says that locally authenticated users can send mail anywhere, but we should reject the sending request when the MAIL FROM address specifies a domain that is not in fully-qualified domain form as is required by the RFC. This stops outsiders trying to send mail to us from non-existent or badly forged From: addresses. The final permit allows checking to proceed to the next steps (the relay and recipient restrictions).

So what was going on?

Well, there was nothing in my restrictions to say that an outsider could not send to a local user (i.e. an email recipient on one of my domains). After all, that is part of the function of my mail system – it must accept (valid) email from the outside world aimed at my local users. But therein lay the problem. My mail connection checks (along with the “smtpd_helo”, “smtpd_relay” and “smtpd_recipient” restrictions) enforced outbound checks and limited mail sending to outside domains to locally authenticated users, but my inbound checks assumed (incorrectly as it turns out) that the sender domain was external to me (i.e. FROM someone@external.domain TO someone@internal.domain). Crucially, I had omitted to enforce any rule stopping someone sending FROM someone@internal.domain TO someone@internal.domain. On reflection that was dumb – and the “extortionist” had taken advantage of that mistake to try to fool me.

Fixing this is actually quite easy. Postfix allows the smtpd_sender_restrictions to include a variety of checks, one of which is “check_sender_access”. This enforces checks against a database of MAIL FROM addresses, domains, parent domains, or localpart@ entries, specifying the action to take in each case. Each line of the table contains three fields – domain-to-check, action-to-take, optional-message.

So I created a database of local domains called /etc/postfix/localdomains thus:

first.local.domain REJECT Oh no you don’t. You’re not local!
second.local.domain REJECT Oh no you don’t. You’re not local!
third.local.domain REJECT Oh no you don’t. You’re not local!

(I was tempted to add a rude message, but thought better of it…..)

Postfix supports a variety of different table types. You can find out which your system supports with the command “postconf -m”. I chose “hash” for my table. The local database file is created from the text table with the command “postmap /etc/postfix/localdomains”. Having done that I added the check to my sender_restrictions thus:

smtpd_sender_restrictions = permit_sasl_authenticated, permit_mynetworks, reject_non_fqdn_sender, check_sender_access hash:/etc/postfix/localdomains, permit

and reloaded postfix. Job done.
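Incidentally, if you want to check the map before reloading, postmap can also query it directly. A quick sketch (the domain name is of course a placeholder for one of your own):

```conf
# build the hash database from the text table
postmap /etc/postfix/localdomains

# query the result - this should echo the REJECT action for a listed domain
postmap -q first.local.domain hash:/etc/postfix/localdomains
```

A quick test from an outside host claiming a local sender address should now be met with the rejection message.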

Permanent link to this article: