encrypting DNS on android

My previous two posts discussed the need for encrypted DNS and then how to do it on a linux desktop. I do not have any Microsoft systems so I have no idea how to approach the problem if you use any form of Windows OS, nor do I have any Apple devices so I can’t provide advice for iOS either, but I do use Android devices. All my mobile phones for some time have been re-flashed to run an alternative build of Android. My current mobile (and two previous ones) run lineageOS; earlier phones used the (now discontinued) cyanogenmod ROM. I dislike google’s cavalier approach to user privacy, but I do like the flexibility and usability provided by modern smartphones. Apple devices strike me as hugely overpriced and I have never been convinced by the “oooh shiny” adoration shown by some of Apple’s more ardent admirers. But each to their own.

I tend to buy mid-range smartphones, SIM free and unlocked, costing around the £100-£200 mark, or higher specification models second hand. When choosing a new phone I always check whether it will support a current version of lineageos (preferably the latest, which is currently 17.1, based on Android 10). In fact the only reason I have to upgrade my phone is if it becomes unsupported by the OS of my choice. Of course, this happens with mainstream Android on many devices simply because the Telco providing the service stops providing OTA upgrades to the OS (they want you to buy a new “shiny, shiny” and sign a new contract). Fortunately for people like me this often means that we can pick up fairly well specced phones second hand and re-flash them with our preferred OS.

All that aside, prior to Android 9, it was quite difficult to reliably change the DNS settings on such a device. You could, of course, have used a VPN app, but several of those actually leak DNS and there was no single central setting which enabled the user to change DNS settings for both WiFi and mobile data. Of course, changing the DNS settings on your home DHCP server (often your WiFi router) would help, but that still left you exposed to the default settings offered by your mobile provider or the WiFi hotspot in your local cafe when out and about. Several apps are (still) available to allow users to change DNS settings but some of those need the phone to be rooted, and this sort of change is often beyond the average user. A search for “DNS changer apps for android” will give you several lists of such apps if you are stuck on a device with an android version lower than 9. However, bear in mind that whilst these apps may allow you to change the DNS away from the defaults provided by your ISP or Telco, they will not offer any form of encryption, and many of the apps provide default settings pointing to services which are less than privacy conscious – google on 8.8.8.8 is a common default.

To their credit, in Android 9, Google introduced a “private DNS” option to the central settings. And of course, lineageos 16 and later offers the same functionality. Here’s how:

1. Open “settings”

2. Select “Network & Internet”

3. Select “Advanced” then “Private DNS”

4. On the pop-up screen change “Off” (or “Automatic”) to “Private DNS Provider hostname” and add your preferred provider.

5. Save and exit. (If you want to be sure the change actually stuck, see the quick check below.)
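If you have adb to hand and USB debugging enabled on the phone, you can also confirm from a connected computer that the setting really took. The global setting names below are my understanding of how Android 9 and later store the private DNS configuration, so treat this as a sketch rather than gospel:

# should return "hostname" once a specific provider is configured
adb shell settings get global private_dns_mode
# and this should return the provider you typed in, e.g. anycast.censurfridns.dk
adb shell settings get global private_dns_specifier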

The DNS provider I chose is “anycast.censurfridns.dk” – a service provided by UncensoredDNS. The owner of that system has this to say about his service.

UncensoredDNS was started in November 2009 shortly after I began working at an ISP where one of my responsibilities was to administer the censored DNS servers that all Danish ISPs run for their customers.

I had never been a fan of the Danish DNS censorship system, and working with it first hand didn’t exactly help. At the time OpenDNS was kind of new and Google DNS didn’t exist yet. Friends and family were coming to me asking for a recommendation for an alternative to the censored ISP DNS servers.

Back then OpenDNS did NXDOMAIN redirection, an advertising trick where misspelled nonexistant domains are redirected to a search page with ads instead of returning an NXDOMAIN error. To a DNS purist like myself this made OpenDNS as appealing as a turd on a stick. They also didn’t have ipv6 or DNSSEC, both of which were mandatory on a modern DNS server, even back in 2009.

Even if Google DNS had existed back then I wouldn’t have felt comfortable recommending them either. While they do run a stable, fast and uncensored service, I am not convinced that it is a good idea to hand over all your DNS lookups to Google. They already have all your searches, you might have one of their phones in your pocket, and so on.

All this prompted me to do something about the situation so I started UncensoredDNS with help from friends. After a while I also started giving talks about the service at various conferences, since people naturally have an easier time trusting a service if they know who is behind it. You can see some of my old talks online, for example from RIPE65.

You could of course choose any provider offering DoT services and you can find suggestions on both the dnsprivacy and privacytools sites. Whichever provider you choose, I recommend you stay away from any which log your queries, or which have less than satisfactory privacy policies (I’m looking at you, google).
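Incidentally, before committing a provider host name to the phone it is worth checking from a linux box that the service really does answer DoT queries on port 853 and presents a certificate matching that name. kdig (from the knot-dnsutils package) can do this; the example below simply reuses the UncensoredDNS anycast name mentioned above, and is a sketch to adapt rather than a recipe:

# resolve a test name over TLS, validating the server certificate against the given name
kdig @anycast.censurfridns.dk +tls-ca +tls-host=anycast.censurfridns.dk example.com A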

One final point: for users of Apple’s iPhone, the privacytools site recommends an encrypted DNS client for iOS called “DNSCloak”. I have no personal experience of that client, but I am impressed overall by the guys at privacytools.

Permanent link to this article: https://baldric.net/2020/06/06/encrypting-dns-on-android/

encrypting DNS with dnsmasq and stubby

In my last post I explained that in order to better protect my privacy I wanted to move all my DNS requests from the existing system of clear text requests to one of encrypted requests. My existing system forwarded DNS requests from my internal dnsmasq caching servers to one of my (four) unbound resolvers and thence onward from them to the root or other authoritative servers. Whilst most of my requests would be shielded from prying eyes by my use of openVPN tunnels, this unfortunately still left the requests from my unbound resolvers to upstream servers subject to snooping. I don’t like that, and the opportunity to encrypt my requests using the new standard DNS over TLS (DoT) looked attractive.

This post describes how I made that change.

In an ideal world, all the root servers and other authoritative servers would accept encrypted DNS requests. Because unbound can be configured both to accept DoT requests and in turn to forward requests using DoT to upstream resolvers, this would mean that I could retain my unbound resolvers, albeit with some suitable configuration changes. All I would then need to do would be to find some way of encrypting DNS outbound from my internal networks to my unbound resolvers. Unfortunately, however, DoT is not sufficiently widely deployed to make this possible. So my unbound resolvers would necessarily have to forward to one or more public DoT resolvers in order for this to work. I see no point in doing that. There would be no advantage over simply forwarding upstream requests directly from my internal caching resolvers. Again, unfortunately, dnsmasq cannot encrypt using DoT and, according to Simon Kelley’s response to a request on the dnsmasq mailing list, is not likely to do so any time soon.

So – enter stubby, a non-caching DNS proxy which encrypts outbound requests (to one or more upstream resolvers) using DoT. It is fairly easy to chain dnsmasq to stubby and this is how I did it in my networks.

Firstly, install stubby, which should be available through your package manager. If it is not, then the source can be obtained from github. Be aware that the stubby configuration file, stubby.yml, uses a YAML-like format which is particularly picky about its layout. The documentation mentions that the file is sensitive to indentation, and I found this to my cost when editing the default file to include my preferred upstreams. A single errant space in my configuration caused me some considerable difficulty because stubby appeared to start and run as a daemon, but /not/ with my configuration. I finally tracked this down after finding the messages below in my syslog:

stubby[18078]: Scanner error: mapping values are not allowed in this context at line 174, column 19
stubby[18078]: Could not parse config file

and

stubby[18078]: Error parsing config file "/etc/stubby/stubby.yml": Generic error
stubby[18078]: WARNING: No Stubby config file found... using minimal default config (Opportunistic Usage)

After removing the offending space I restarted stubby and watched the log with

systemctl restart stubby ; tail -f /var/log/syslog

to see the reassuring messages:

systemd[1]: Stopped DNS Privacy Stub Resolver.
systemd[1]: Started DNS Privacy Stub Resolver.
stubby[18097]: [16:00:13.923290] STUBBY: Read config from file /etc/stubby/stubby.yml

and a quick check showed that my configuration was now working.
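Incidentally, for anyone else fighting the YAML layout, it may save some head scratching to have stubby parse the file and report what it actually understood before restarting the daemon. My understanding of stubby’s command line options is that the following does exactly that (check stubby -h on your version before relying on it):

# validate and print the parsed configuration without actually serving
stubby -C /etc/stubby/stubby.yml -i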

My configuration changes to the default are given below:

# listen locally on 5353 and disable IPV6

listen_addresses:
  - 127.0.0.1@5353
#  - 0::1

and add our preferred resolvers (samples only given here). Note that I only use IPv4 and not IPv6. IPv6 is disabled in my networks and at my boundaries.

#############################
# Specify the list of upstream recursive name servers.
# In Strict mode upstreams need either a tls_auth_name
# or a tls_pubkey_pinset so the upstream can be authenticated.
#
####### IPv4 addresses ######

upstream_recursive_servers:

## The getdnsapi.net server
  - address_data: 185.49.141.37
    tls_auth_name: "getdnsapi.net"
    tls_pubkey_pinset:
      - digest: "sha256"
        value: foxZRnIh9gZpWnl+zEiKa0EJ2rdCGroMWm02gaxSc9Q=
## The Uncensored DNS servers
  - address_data: 89.233.43.71
    tls_auth_name: "unicast.censurfridns.dk"
    tls_pubkey_pinset:
      - digest: "sha256"
        value: wikE3jYAA6jQmXYTr/rbHeEPmC78dQwZbQp6WdrseEs=

########

I moved stubby from the default port 53 to 5353 because dnsmasq listens on port 53 in my configuration. Accordingly, my dnsmasq configuration in /etc/dnsmasq.local.conf had to be amended thus:

# resolv-file=/etc/resolv.conf
# We now use stubby to forward to a list of upstreams
# using DNS over TLS so we do not consult another resolver
#
no-resolv
server=127.0.0.1#5353

This instructs dnsmasq not to consult any external file for upstream resolvers but instead forward to port 5353 on localhost (where stubby listens). Simple really.
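A couple of quick queries from the box running the pair should confirm that each hop is answering. Assuming dig is installed (drill or host would do just as well), something like the following works:

# straight at stubby on its new port
dig @127.0.0.1 -p 5353 www.example.com +short

# and through dnsmasq on port 53, which should forward to stubby
dig @127.0.0.1 www.example.com +short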

Before finalising my stubby configuration I ran some tests on a number of external public resolvers, because I really don’t want to pass my DNS requests to a large resolver which then logs them (as does Google) or both logs and interferes with them (as do cleanbrowsing, OpenDNS or Quad9). For my testing I used a small tool called dnsperftest.

I modified that script to add some additional resolvers taken from the defaults provided in stubby’s configuration plus some others taken from privacytools.io and dnsprivacy. I also added some other domains to the test set so that I had 20 domains to test across some 18 public resolvers. Some of those resolvers (such as google at 8.8.8.8 and Quad9 at 9.9.9.9) I would not, and do not, actually use in real life, but they are big anycast systems and I wanted a benchmark for my likely slower, but more privacy conscious, choices.

I then ran the test script from my desktop over a straight connection through my ISP, followed by connections through each of my VPN endpoints. As expected, I got faster lookups without the VPNs than with. I then ran the same tests from the machines I use as VPN endpoints. And again, as expected, and because those VMs are in large datacentres dotted around Europe, I got much faster lookups from there. This allowed me to choose my final list of six external resolvers to add to the stubby.yml file.

Finally, with six upstream resolvers configured in round robin form in stubby, I ran tcpdump on my local DNS servers to check that my requests were actually encrypted. They were, and I am now satisfied that my DNS traffic is as private as I can reasonably make it.
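For anyone wanting to repeat that check, a capture along the lines below (run as root, with the interface name adjusted to whatever your outbound interface actually is) should show lookups leaving on TCP port 853 rather than in clear on port 53:

# upstream DoT traffic - this should show activity as soon as something does a lookup
tcpdump -ni eth0 'tcp port 853'

# whereas nothing should now be leaving in clear for the configured upstreams on port 53
tcpdump -ni eth0 'port 53 and dst host 185.49.141.37'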

Permanent link to this article: https://baldric.net/2020/05/25/encrypting-dns-with-dnsmasq-and-stubby/

encrypting DNS

Any casual reader of trivia will be aware that I care about my privacy and that I go to some lengths to maintain that privacy in the face of concerted attempts by ISPs, corporations, government agencies and others to subvert it. In particular I use personally managed OpenVPN servers at various locations to tunnel my network activity and thus mask it from my ISP’s surveillance. I also use Tor (over those same VPNs), I use XMPP (and latterly Signal) for my messaging and my mobile ‘phone is resolutely non-google because I use lineageos‘ version of android (though that still has holes – it is difficult to be completely free of google if you use an OS developed by them). Unfortunately my email is still largely unprotected, but I treat that medium as if it were correspondence by postcard and simply accept the risks inherent in its use. I like encryption, and I particularly like strong encryption which offers forward secrecy (as is provided by TLS, and unlike that offered by PGP) and will therefore use encryption wherever possible to protect my own and my family members’ usage of the ‘net.

Back in June of last year, I wrote about the problems caused by relying on third party DNS resolvers and how I decided to use my own unbound servers instead. I now run unbound on several of my VMs (in particular at my OpenVPN endpoints) and point my internal caching dnsmasq resolvers to those external recursive resolvers. This minimises my exposure to external DNS surveillance, but of course since my DNS requests themselves are in clear, any observer on the network path(s) between my internal networks and my external unbound servers (or between those servers and the root servers or other authoritative domain servers) would still be able to snoop. My DNS requests from my internal servers /should/ be protected by the OpenVPN tunnels to my unbound servers, but only if I can guarantee that those requests actually go to the servers I expect (and not to one of the others I have specified in my dnsmasq resolver lists). I attempt to mitigate this possibility on my internal network by using OpenVPN’s server configuration options

push "redirect-gateway def1"

and

push "dhcp-option DNS 123.123.123.123"

but my network architecture (see below) and my iptables rules on my external servers started to make this complicated and (potentially) unreliable. Simplicity is easier to maintain, and usually safer. Certainly I am less likely to make a configuration mistake if I keep things simple.

Part of my problem stems from the fact that I have two separate internal networks (well, three if you count my guest WiFi net) each with their own security policy stance and their own dnsmasq resolvers. Worse, the external network (deliberately and consciously) does not use my OpenVPN tunnels and my mobile devices could (and often do) connect to either of the two networks. Since both my dnsmasq resolvers point to the same list of external unbound servers, this necessarily complicates the iptables rules on my servers, which have to allow inbound connections from both my external ISP-provided IP address and all of my OpenVPN endpoints. When thinking through my connectivity I found that I could not always guarantee the route my DNS requests would take, or which unbound server would respond. All this is made worse by my tendency to swap VPN endpoints or add new ones as the whim (and price of VMs) takes me.

Yes, a boy needs a hobby, but not an unnecessarily complicated one……

On reflection I concluded that my maintenance task would be eased if I could rely on just one or two external resolvers and find some other way to protect my DNS requests from snooping. The obvious solution here would be encryption of all DNS requests leaving my local networks (I have to trust those or I am completely lost). But how?

At the time of writing, there are three separate mechanisms for encrypting DNS: DNSCrypt, DNS over HTTPS (DoH) and DNS over TLS (DoT).

The first of these has never been an internet standard; indeed it has never even been offered as a standard, but it was implemented, and publicly offered, back in 2011 by OpenDNS. Personally I would shy away from using any non-standard protocol, particularly one which was not widely adopted, and very particularly one which was offered by OpenDNS. That company (alongside others such as Quad9 and Cleanbrowsing) markets itself as offering “filtered” DNS. I don’t like that.

The other two both have RFCs describing them as internet standards, DoH in RFC 8484 and DoT in RFCs 7858 and 8310, but there is some (fairly widespread) disagreement over which protocol is “best”. This ZDNET article from October last year gives a good exposition of the arguments. That article comes down heavily against DoH and in favour of DoT. In particular it says that:

  • DoH doesn’t actually prevent ISPs user tracking
  • DoH creates havoc in the enterprise sector
  • DoH weakens cyber-security
  • DoH helps criminals
  • DoH shouldn’t be recommended to dissidents
  • DoH centralizes DNS traffic at a few DoH resolvers

And concludes:

The TL;DR is that most experts think DoH is not good, and people should be focusing their efforts on implementing better ways to encrypt DNS traffic — such as DNS-over-TLS — rather than DoH.

When people like Paul Vixie describe DoH in terms such as:

Rfc 8484 is a cluster duck for internet security. Sorry to rain on your parade. The inmates have taken over the asylum.

and

DoH is an over the top bypass of enterprise and other private networks. But DNS is part of the control plane, and network operators must be able to monitor and filter it. Use DoT, never DoH.

and Richard Bejtlich of TaoSecurity says:

DoH is an unfortunate answer to a complicated problem. I personally prefer DoT (DNS over TLS). Putting an OS-level function like name resolution in the hands of an application via DoH is a bad idea. See what @paulvixie has been writing for the most informed commentary.

I tend to take the view that perhaps DoH may not be the best way forward and I should look at DoT solutions.

Interestingly though, Theodore Ts’o replied to the second of Paul Vixie’s comments quoted above:

Unfortunately, these days more often than not I consider network operators to be a malicious man-in-the-middle actor instead of a service provider. These days I’m more often going to use IPSEC to insulate myself from the network operator, but diversity of defenses is good. :-)

and I have a lot of sympathy with that view. For example, that is exactly why I wrap my own network activity in as much protective encryption as I can. I don’t trust my local ISP.

In another discussion in a reddit thread, Bill Woodcock (Executive Director of Packet Clearing House and Chair of the Board of Quad9) said:

[DNSCrypt] is not an IETF standard, so it’s not terribly widely implemented.

and

DNS-over-HTTPS is an ugly hack, to try to camouflage DNS queries as web queries.

and went on

DNS-over-TLS is an actual IETF standard, with a lot of interoperability work behind it. As a consequence of that, it’s the most widely supported in software, of the three options. DNS-over-TLS is the primary encryption method that Quad9 supports.

Whilst I might not like the way Quad9 handles its public DNS resolvers (and personally I wouldn’t use them), I can’t disagree with Woodcock’s conclusion.

My own view is that DoH looks very much like a bodge, and a possibly dangerous one at that. I’ve been a sysadmin in a corporate environment and I know that I would be very unhappy knowing that my users could bypass my local DNS resolvers at application level and mask their outgoing DNS requests as HTTPS web traffic. Indeed, when Mozilla announced its decision last year to include DoH within Firefox it caused some concern within both UK Central Government and the UK’s Internet Service Providers Association. Here I am slightly conflicted, however, because I can see exactly why that masking is attractive to the end user. For example, I sometimes run OpenVPN tunnels over TCP on port 443, rather than the default UDP to port 1194, for exactly the same reasons – camouflage and firewall bypass. And of course I use Tor. Mozilla reportedly bowed to UK pressure and did not (and still has not) activate DoH by default in UK versions of Firefox. But it is not terribly difficult to activate should you so wish.
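For completeness, my understanding is that the relevant knobs live in about:config under the network.trr.* preferences, so a user.js fragment along the following lines would turn DoH on (or firmly off) whatever the shipped default happens to be. Treat the values as a sketch based on my reading of Mozilla’s documentation, and the URI as a placeholder for whichever DoH provider you prefer:

// 3 = DoH only, 2 = DoH with fallback to normal DNS, 5 = explicitly disabled
user_pref("network.trr.mode", 3);
// the DoH endpoint to use - substitute your preferred provider here
user_pref("network.trr.uri", "https://doh.example.net/dns-query");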

One of my main concerns over the use of a protocol which operates at the application layer, and ignores the network directives, is that those applications could, and probably would, come with a set of hard-coded DoH servers. Those servers could be hostile, or even if not directly hostile, they could be subverted by hostile entities for malicious purposes, or they could just fail. Mozilla itself hard-codes Cloudflare’s DoH servers into Firefox, but you could of course change that to any one or more of the servers on this list. The hard-coding of Cloudflare could cause Firefox users problems in future if that service were to fail. An article in the Register pointed to a failure in the F root server in February of this year caused by a faulty BGP advertisement connected with Cloudflare. As the Reg pointed out:

If a software bug in closed Cloudflare software can cause a root server to vanish an entire, significant piece of the internet then it is all too possible – in fact, likely – that at some point a similar issue will cause Firefox users to lose their secure DNS connections. And that could cause them to lose the internet altogether (it would still be there, but most users would have no idea what the cause was or how to get around it.)

Of course Mozilla isn’t the only browser provider to offer DoH. As the Register pointed out in November last year, both Google (with Chrome) and Microsoft (with Edge) are rolling out their own implementations. I’m with the Reg in being concerned about the centralisation of knowledge about DNS lookups this necessarily entails. If browsers (which after all are the critical application most used to access the web) all end up doing their DNS lookups by default to central servers controlled by a very few companies – and moreover companies which may have an inherent interest in monetising that information – then we will have lost a lot of the freedom, and privacy, that the proponents of the DoH protocol purport to support.

So DoT is the way to go for me. And I’ll cover how I did that in my next post.

Permanent link to this article: https://baldric.net/2020/05/06/encrypting-dns/

zooming in on china

Since my previous post below, I have been reading up on Zoom as a company, its staffing and its worrying track record on security (or rather, the lack of it). When I wrote the initial post I said that “Zoom is a US company funded almost entirely by venture capital. Its servers are US based.” It appears that is not entirely accurate. Indeed, it would appear that some of its servers are actually based in China; furthermore, a large number of its development staff are Chinese nationals, also based in China. And the CEO, Eric S Yuan, grew up in China. In a Zoom blog post dated 26 February of this year Yuan wrote:

“I grew up in the eastern Shandong Province of China and studied at the Shandong University of Science & Technology. It’s a place I still hold dear. I am continuously inspired by the courageous efforts of those treating patients in China and around the world — working hard to try to further prevent the spread of the virus.”

According to his Wikipedia entry, Yuan was “born and raised in Tai’an, Shandong Province” and “moved to the US in the mid-1990s, after obtaining a visa on the ninth try.”

Now there is nothing inherently wrong with any of that, but when coupled with other findings by security researchers such as Citizen Lab and Checkpoint, amongst others, a troubling picture starts to emerge. On 3 April, ElReg reported on the CitizenLab findings. The CitizenLab post is well worth reading. It found that:

  • Zoom documentation claims that the app uses “AES-256” encryption for meetings where possible. However, we find that in each Zoom meeting, a single AES-128 key is used in ECB mode by all participants to encrypt and decrypt audio and video. The use of ECB mode is not recommended because patterns present in the plaintext are preserved during encryption.
  • The AES-128 keys, which we verified are sufficient to decrypt Zoom packets intercepted in Internet traffic, appear to be generated by Zoom servers, and in some cases, are delivered to participants in a Zoom meeting through servers in China, even when all meeting participants, and the Zoom subscriber’s company, are outside of China.
  • Zoom, a Silicon Valley-based company, appears to own three companies in China through which at least 700 employees are paid to develop Zoom’s software. This arrangement is ostensibly an effort at labor arbitrage: Zoom can avoid paying US wages while selling to US customers, thus increasing their profit margin. However, this arrangement may make Zoom responsive to pressure from Chinese authorities.

The report concludes:

As a result of these troubling security issues, we discourage the use of Zoom at this time for use cases that require strong privacy and confidentiality, including:

  • Governments worried about espionage
  • Businesses concerned about cybercrime and industrial espionage
  • Healthcare providers handling sensitive patient information
  • Activists, lawyers, and journalists working on sensitive topics

For those using Zoom to keep in touch with friends, hold social events, or organize courses or lectures that they might otherwise hold in a public or semi-public venue, our findings should not necessarily be concerning.

In January of this year, Checkpoint published the details of a vulnerability (since mitigated) which could allow a threat actor to eavesdrop on a Zoom conference. And on 3 April, TwelveSecurity published a blog article looking at links between Zoom and the Chinese military, intelligence services, and their Army officers.

Whilst this latter article is a little breathless and (I think) overwrought in places, it does raise some questions about Zoom as a company and the security of its product in particular. The Register article cited above also points to a listing posted by Dan Ehrlick of TwelveSecurity of more than 130,000 subdomains of the main zoom.us domain. Whilst there is no analysis of the usage of those domains (beyond a rather sideways dig at some of the names used) I find it astonishing that Zoom should actually /need/ that many subdomains. At best it raises some interesting questions.

Of course, a prior question still remains as to why on earth someone in the Cabinet Office or No 10 thought it would be a smart idea to use Zoom for a Cabinet meeting.

Permanent link to this article: https://baldric.net/2020/04/10/zooming-in-on-china/

zooming in on cabinet

On Tuesday of this week, Boris Johnson tweeted a picture of what he called the UK’s “first ever digital Cabinet”. That picture (copy below) shows that the Cabinet meeting was held using Zoom – the sort of video conferencing software which is currently popular with business users forced to work at home during the Covid19 pandemic.

As can be seen, the conference was run on a Microsoft platform (unsurprisingly), and the Zoom meeting ID is clearly visible in the top left of the picture.

Now Zoom is a US company funded almost entirely by venture capital. Its servers are US based. And whilst the company claims that its conferences are protected by end-to-end encryption, what it actually means is that the conference streams are protected by TLS between the end clients and the US based servers. Furthermore, what is not actually clear from the picture posted by our dear PM is where all the end clients used by the 35 participants were located. I’d hazard a guess that not all of them were in what HMG would call “secure” locations.

So here we have a Cabinet meeting run over a completely unapproved video conferencing platform between 35 Ministers and Senior Officials using various clients in a number of locations. Well, at least they didn’t use Skype.

On the twitter feed, Stefan Simanowitz queried “You’ve just published the Cabinet’s Zoom ID number. Isn’t this a security risk?”. With all due respect to Mr Simanowitz, the bigger problem is the use of this platform at all. Someone, somewhere in No 10 or the Cabinet Office should be having an uncomfortable conversation with the Security Service.

Permanent link to this article: https://baldric.net/2020/04/03/zooming-in-on-cabinet/

beware the zombie apocalypse

Tom Scott is a young educational entertainer who publishes fairly regularly on youtube. Back in mid-2004, whilst still a linguistics student at York, he managed to upset both the Home Office and the Cabinet Office by publishing a Department of Vague Paranoia website spoofing the rather po-faced official “Preparing for Emergencies” site. Tom’s website is still in operation – unlike the official one. I guess Tom never aspired to a career in the Civil Service.

I mention Tom here because I have just discovered his youtube channel called “The Basics” in which he addresses some of the complexities of computer science in ways which are accessible to a wide audience. In particular, he has a very good exposition of why encryption backdoors are not a terribly good idea. Take a look at the clip below:

I commend that clip to anyone who still adheres to the kind of “magic thinking” that leads them to believe that the laws of mathematics can be ignored, or that only the “good” guys (whoever they are) would ever take advantage of crippled encryption.

Permanent link to this article: https://baldric.net/2020/03/11/beware-the-zombie-apocalypse/

have I been pwned?

Well, I don’t think so. But for a while I was not entirely sure.

Following the move last November of trivia from a VM in UK2’s datacentre in London to our new home on a faster VM on ITLDC’s network, I have been making a variety of minor changes and doing some essential housework. One of the biggest changes of course (fortunately for me as it turns out) was the complete separation of my two main services (mail and web) onto different VMs in different countries. My mailserver is now housed in Nuremberg, where I have made some additional changes (for example I now run opendkim on it). This VM in Prague now houses just my webserver and of course is home to this blog.

Following the configuration changes which I noted in my last post, I spent a short while checking my web server logs – particularly the error log. That log shows a variety of messages such as:

SSL: 1 error:14209102:SSL routines:tls_early_post_process_client_hello:unsupported protocol
SSL: 1 error:1420918C:SSL routines:tls_early_post_process_client_hello:version too low
SSL: 1 error:1417A0C1:SSL routines:tls_post_process_client_hello:no shared cipher

which indicate browsers or other clients attempting to connect using either protocols I no longer support (such as ssl3) or TLS versions lower than those I support server side. This is expected behaviour and the frequency of such log entries should decline over time as clients out there catch up with current acceptable security standards. I know from long experience that there are still a huge number of old, outdated browsers in use – possibly on equally old and outdated platforms such as Android 4, or Windows XP (yes, it still exists). As a cross check I started looking through my access logs and, sure enough, I found user agent strings like:

“Mozilla/5.0 (Linux; U; Android 4.4.2; en-gb; SM-T310 Build/KOT49H)”

(almost certainly the default browser on an old Android tablet) and

“Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1) Opera 7.54 [en]”

(probably Opera 7 on Windows XP).

So, no real surprise then that clients like that should have problems negotiating a secure connection with my server. However, this is where things started to look a little weird.

As I was scanning through my access logs I noticed entries like the following (client side IP addresses deliberately obfuscated with RFC1918 entries):

192.168.1.1 microsoft-hub-us.com – [23/Dec/2019:19:27:52 +0100] “GET / HTTP/1.1” 200 169284 “-” “Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 1.1.4322; .NET CLR 2.0.50728)”
192.168.10.10 microsoft-hub-us.com – [23/Dec/2019:21:20:52 +0100] “GET / HTTP/1.1” 200 155411 “-” “Mozilla/5.0 (Windows; U; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 2.0.50727)”
192.168.1.15 microsoft-hub-us.com – [23/Dec/2019:21:32:17 +0100] “GET / HTTP/1.1” 200 169272 “-” “Mozilla/5.0 (Linux; U; Android 4.1.2; ja-jp; SC-06D Build/JZO54K) AppleWebKit/534.30 (KHTML, like Gecko) Version/4.0 Mobile Safari/534.30”
192.168.2.20 microsoft-hub-us.com – [24/Dec/2019:15:32:47 +0100] “GET / HTTP/1.1” 200 169257 “-” “Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) HeadlessChrome/67.0.3396.99 Safari/537.36”

Now that says that the client connecting from the address in the first field is asking for the root of a webserver called “microsoft-hub-us.com”. I don’t own that domain, I don’t host that domain, and the only way that sort of entry could possibly appear in my logs is if someone, somewhere who /does/ own that domain has made a DNS A record pointing to my IP address. Well, actually there is another scenario. It is entirely possible that someone has made a local DNS entry (such as in a local hosts file or a local DNS server using, say, DNSMasq or Unbound) pointing to my IP address. I do exactly that sort of thing myself when I move webs between servers so that I can test the entry on the new server before switching my DNS. However, given the sheer number of the log entries (in the tens of hundreds!) from multiple different source addresses, it seemed to me unlikely that this latter scenario is accurate. So, someone has a DNS entry pointing to me that I don’t know about.

Having found one odd host name I did a quick scan for others (awk '{ print $2 }' accesslogfiles | grep -v mydomains) and found around fifteen more. Fortunately, with the exception of just one other domain name, none of the others (for which there were mercifully few connections) pointed to my address at the time I checked the DNS (late January). I assumed that those domains were also potentially hostile and had now moved elsewhere, but some of course could just have been accidents – it can happen (be careful of ascribing to malice that which could be simple stupidity).
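For anyone wanting to do something similar, a slightly fuller version of that one-liner (assuming, as in my access log format above, that the requested host name is the second field, and with the log path adjusted to suit your own setup) would be:

# count every host name requested of the server, busiest first, ignoring our own domains
awk '{ print $2 }' /var/log/lighttpd/access.log* | grep -v -E 'baldric\.net|rlogin\.net' | sort | uniq -c | sort -rn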

I decided to concentrate on the main domain appearing in my logs and did a bit more research.

Firstly, the DNS:

mick@shed ~ $ dig +ttlunits -t a microsoft-hub-us.com

;; ANSWER SECTION:
microsoft-hub-us.com. 10m IN A 195.123.246.12

So there is an A record pointing to me – and it has a suspiciously low TTL value (meaning the owner can change it quickly).

What about the nameserver(s)?

mick@shed ~ $ dig +ttlunits -t ns microsoft-hub-us.com

;; ANSWER SECTION:
microsoft-hub-us.com. 10m IN NS a.dnspod.com.
microsoft-hub-us.com. 10m IN NS b.dnspod.com.
microsoft-hub-us.com. 10m IN NS c.dnspod.com.

The standard TTL value for an A record is about 1 day (or longer) so that nameservers can cache the answer to the question: where is “baldric.net”? for a reasonable length of time before having to ask again. Most people would only use a very short TTL (here it is 10 minutes) if they wanted to be able to move the domain name queried to a new host very quickly. There are legitimate reasons for this. For example if you manage a server which you know you are going to move to a new address shortly and you wish to minimise the lag in the DNS. However, “Bad Guys” (TM) are known to do this sort of thing when they point to compromised hosts on the net. Said “Bad Guys” will also often use an obviously spoofed domain (this one is meant to look like a genuine Microsoft domain for the MS Hub) in phishing attacks.

Conclusion? This looks bad.

What about the whois record?

That shows the domain to have been registered on the 6th of November last year – about three weeks before I was given the IP address.

mick@shed ~ $ whois microsoft-hub-us.com

Domain name: microsoft-hub-us.com
Registry Domain ID:
Registrar WHOIS Server: whois.eranet.com
Registrar URL: http://www.eranet.com
Updated Date: 2019-11-06T00:00:00+08:00
Creation Date: 2019-11-06T22:52:03.0000Z
Registrar Registration Expiration Date: 2020-11-06T00:00:00+08:00
Registrar: ERANET INTERNATIONAL LIMITED
Registrar IANA ID: 1868

Interestingly though, a current whois lookup gives:

mick@shed ~ $ whois microsoft-hub-us.com

Domain Name: MICROSOFT-HUB-US.COM
Registry Domain ID: 2452061049_DOMAIN_COM-VRSN
Registrar WHOIS Server: whois.eranet.com
Registrar URL: http://www.eranet.com
Updated Date: 2020-02-24T02:19:45Z
Creation Date: 2019-11-06T14:52:03Z
Registry Expiry Date: 2020-11-06T14:52:03Z
Registrar: Eranet International Limited
Registrar IANA ID: 1868

So the record was last updated on the 24th of February this year. And sure enough, that domain name disappeared from the DNS records at around 1.30 GMT on 24 February. Note – it disappeared – it did not get pointed to a holding address (such as 127.0.0.1). But I’m getting ahead of myself here so let’s take a step back again.

Have I been pwned? And do I host malware?

Next step is some research on the domain name. A search for “microsoft-hub-us.com” and “malware” turns up:

Firstly, joesandbox, which, sure enough, shows that domain name on my IP address dropping malware. Ouch. Not good. Not good at all. But wait. The submission time for that analysis (top right of the full page) is shown as 08.11.2019 14:08:03 – only a couple of days /after/ the domain was registered and again some three weeks /before/ I was given the IP address.

Secondly, another analysis at joesandbox shows that my IP address was hosting a further set of malware. Also not good, but again at a date before I had the IP address (Submission Time: 08.11.2019 17:56:13).

Thirdly, again at joesandbox, there is a very detailed, and scary, analysis of the behaviour of a downloaded file “contract1.doc” taken from the spoofed domain on 8 November last year. That analysis is here and a copy is shown below:

The behaviour graph in that analysis, shown in the next image, shows how the dropper works and, sure enough, my IP address is implicated. But again that analysis dates to before I inherited the IP address.

In the HTTPS packet section of the Network Behaviour analysis (shown in the next image below) it says that the domain originally had its own Let’s Encrypt certificate, valid from Friday November 8 2019 to Thursday February 6 2020. That in itself is interesting because it means that from the date I moved trivia (5 December last year) to that IP address with my own Let’s Encrypt certificate covering my domains (and ONLY my domains), all future requests hitting my server with an invalid host name would get a big scary “This Connection is Untrusted” browser warning. But of course I know from my logs that almost all of those warnings were ignored.

Finally, in the “Domains” section of the analysis there is a link to Virustotal so that is the next port of call.

The image below, taken from VirusTotal, gives us an overview of a scan from three months ago which shows that 7 engines out of a total of 76 used recorded the domain as malicious. I’m not sure whether that is good or bad. If, as I now believe to be the case, that domain is/was a source of windows malware then I would have expected a much higher percentage of positives. But no matter at this stage.

The “relations” section of the analysis (in the image below) shows the results of the scans for various URLs on the domain and gives a worrying result of positives for dates when I /did/ have the IP address in question. Fortunately, however, a click on the links (for example “https://microsoft-hub-us.com/download.php” at 9/12/2019) gives the result that 2 months ago the URL gave a 404 not found. (As it should.)

So, whilst the document originally at that URL registered as malware on a variety of tests, when the link was last tested by VirusTotal (9 December 2019) it was no longer found. I should add here that I have run a recursive find on my webserver for all the documents listed by a variety of analysts out there, and additionally for any “.doc” or “.exe” or “.xls” files and come up blank. So I am reasonably confident (he said!) that the site is clean.
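That check is trivial to repeat; something along the lines below (with the path adjusted to wherever your web root actually lives) should come back empty on a clean server:

# look for any Office documents or windows executables lurking under the web root
find /var/www -type f \( -iname '*.doc' -o -iname '*.docx' -o -iname '*.xls' -o -iname '*.exe' \) -print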

The “details” section of the VirusTotal analysis (below) gives us the DNS records for the domain, together with the HTTPS certificate seen when the domain was last checked (which is now mine, and not the original spoof microsoft certificate).

That same page gives us the results of a google search for the domain name as below:

About 5 results (0.20 seconds)

Kyle Ehmke on Twitter: “Most likely TA505 domain box-en-au[.]com …
twitter.com
19 Nov 2019 … This one is calling out to an older site: microsoft-hub-us[.]com. I have to imagine a wave of new docs will pop up soon. 1 reply 0 retweets 2 likes.

AS204957 – LAYER6, UA – urlscan.io
urlscan.io
www.ram6.ac.th, 2 days ago, 3 MB, 47, 9, 6. microsoft-hub-us.com, 2 days ago, 5 MB, 91, 3, 3. xurl.es/12l4x, 3 days ago, 54 KB, 30, 4, 2. microsoft-hub-us.com …

The Blacklist from UT1 bad Recipe # # This recipe demonstrates …
www.pluckeye.net
File Format: text/plain
17 Nov 2016 … … microsoft-cnd-en.com deny microsoft-home-en.com deny microsoft-hub-us.com deny microsoft-live-us.com deny microsoftoffice365box.com …

Ransomware Clop : une communication officielle trop tardive ?
www.lemagit.fr
25 Nov 2019 … … and publicly mentioned sharefile-cnd[.]com, ms-home-live[.]com, box-en-au[.]com, box-en[.]com, microsoft-hub-us[.]com, microsoft-live-us[.] …

“In June 2019 the man, in conspiracy with others, … Japan Post Bank’s internet banking …”
benzaiten.dyndns.org
17 Dec 2019 … 07 Nov 2019 22:15:59 RT @kyleehmke: Possible TA505 domain microsoft-hub-us[.]com was registered on 11/6. Less confidence in …

So, onwards to Kyle Ehmke, a researcher at ThreatConnect. His tweets of 7 and 8 November last year say:

Kyle Ehmke (@kyleehmke) · 7 Nov 2019

Possible TA505 domain microsoft-hub-us[.]com was registered on 11/6. Less confidence in that association though as the domain is not currently hosted.

and

Kyle Ehmke (@kyleehmke) · 8 Nov 2019

The microsoft-hub-us[.]com domain is now hosted at 195.123.246[.]12.

The hosting at that address cannot have lasted long, because I was allocated the IP address along with my new Debian VM on 27 November last year. But I know from later analysis that the A record for that domain name continued to point to my address right up until 24 February this year. Kyle refers to the threat actor as “TA505“, known as an active and prolific attacker operating in the financial sphere – i.e. a criminal group motivated by money (rather than politics). On 19 November last year, Kyle posted again on Twitter that:

“Another most likely TA505 domain registered at essentially the same time as box-en-au[.]com: microsoft-store-en[.]com. Currently hosted at 103.199.16[.]197.”

to which Kyle Eaton responded:

“Nice find! Seems to me, when a new site is spun up they’ll send out an older static doc for a while before we start getting new files. This one is calling out to an older site: microsoft-hub-us[.]com. I have to imagine a wave of new docs will pop up soon.”

Searches for TA505 on Mitre give us the information that:

“TA505 is a financially motivated threat group that has been active since at least 2014. The group is known for frequently changing malware and driving global trends in criminal malware distribution.”

The Mitre page about the group lists some 15 different attack techniques used and 5 different pieces of malware. Mitre also references Proofpoint analyses of the group going back several years. A quick search on the Proofpoint site gives us a list of 27 separate postings about the group. Their profile of TA505, dating from September 2017, describes the group as:

“One of the more prolific actors that we track – referred to as TA505 – is responsible for the largest malicious spam campaigns we have ever observed, distributing instances of the Dridex banking Trojan, Locky ransomware, Jaff ransomware, The Trick banking Trojan, and several others in very high volumes.”

That profile gives an interesting timeline of activity attributed to TA505 going back to June 2014. So these guys have been around for some time; they are well established, well organised and (apparently) quite successful. If I /have/ been pwned, at least it was done by a professional group……

Finally, urlscan.io is listed as having scanned the domain on December 23 2019. That analysis, given in the image below, shows the front page of my blog as it looked at the time with my “Welcome to Prague” post at the top.

Reassuringly, moreover, the urlscan.io analysis shows the website as “clean”. The historic list of scans given by urlscan.io (and shown below) details two failed connects three months ago, four successful connects to my server pipe.rlogin.net, and a final failed connect attempt one hour ago (from the time of finishing writing this).

It is worth noting at this stage that the reason connections to the spoofed microsoft domain resulted in delivery of my blog is because (I confess) I had been dumb in my web server configuration. The server software I use (lighttpd) has a very simple virtual hosts configuration system. That mechanism allows you to serve whatever site you wish depending upon the host name requested. So if you have a virtual host called “something.com” and another called “somethingelse.net” you merely need to tell the webserver to deliver the appropriate pages from the directories called “/var/www/pages/something.com” or “/var/www/pages/somethingelse.net” (or wherever you configure your web root to be). But what if someone connects just to the IP address and not a virtual host name? Here is where I was dumb. Lighty allows you to set a “default” virtual host and serve that in such a case. I had set “baldric.net” as my default. Thus anyone coming in and asking for a domain that I don’t host would get my blog. Stupid. Very stupid on reflection. And as soon as I realised that was what was happening (and why my logs were full of crud) I changed it so that the default went to an empty directory with a blank index file. I have since improved that by changing the default to give a “403 Forbidden” response. Better, and more logical, methinks.
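For anyone running lighttpd and wanting to do the same, the change amounts to making the fall-through case deny everything rather than serve a default site. A minimal sketch (mod_access needs to be enabled, and the host name and document root here are obviously illustrative rather than a copy of my live configuration) looks something like this:

# serve the blog only when it is asked for by name
$HTTP["host"] =~ "^(www\.)?baldric\.net$" {
    server.document-root = "/var/www/pages/baldric.net"
}

# anything else - bare IP requests, spoofed domains pointed at us and so on - gets a 403
$HTTP["host"] !~ "^(www\.)?baldric\.net$" {
    url.access-deny = ( "" )
}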

Now, as I was documenting all this (for just this sort of blog post) I received an email from my hosting provider (ITLDC) saying:

Ticket no: “blah”

195.123.246.12/32 is listed on the Spamhaus Block List – SBL
195.123.246.12/32 is listed on the Spamhaus Botnet Controller List – BCL
2020-02-23 12:30:36 GMT | itldc.com
TA505 botnet controller @195.123.246.12

and telling me that accordingly they had shut my VM down. (For which I cannot blame them. In their position I would do exactly the same.)

Bugger – and this is where I am exceptionally grateful that I had separated my blog from my mail. I cannot afford to have my email server blacklisted by spamhaus, correct or not. It takes a long time to gain a clean record for a mail server. One bad listing can see you blocked by multiple other mail providers and things then start to cascade out of control. I can afford to lose my blog for a while, but not my mail reputation.

I responded to the ticket explaining what I had found myself and asking that they investigate further. I also offered copies of all my log extracts showing what I had found so far, together with whatever further assistance they might need. Unfortunately, this happened at a weekend and my ticket had to be escalated to second line advanced support, who only worked Monday to Friday.

It was a long weekend.

On the Monday after the weekend we corresponded further on the ticket, and by about lunchtime I got the good news. Whilst Spamhaus is a trusted source of abuse reports, and a responsible ISP will quite properly take appropriate steps to prevent damage to their own or others’ networks following a Spamhaus alert, in this case the report turned out to be a false alarm and I could have my VM back.

Even better news was that they offered to allocate me a new IP address – which I happily accepted. As you can see (because you are reading this) we are back up on that new address and all looks good.

Conclusion then? I probably have /not/ been pwned at trivia. The most likely scenario seems to be that a previous user of that IP address had been compromised, or, given that the TA505 mob seem to have gone to the trouble (and had been able) to get their own, valid, Let’s Encrypt certificate for the spoof domain, that the group itself rented a VM on that address. My money is on a root compromise of a previous owner.

My immense gratitude to the support team at ITLDC and my particular thanks to Dmitry in that support team for taking my problem seriously, investigating it appropriately and coming up with a satisfactory outcome. That kind of service would be exceptional even if I were paying them ten times what I actually pay for my VMs. Given how little I do spend with them it is nothing short of amazing. I can think of several hosting providers who would happily throw you under a bus following a spamhaus report rather than spend time supporting you.

So, my thanks again to Dmitry and his team.

Go buy some VMs from them. They are excellent.

Permanent link to this article: https://baldric.net/2020/02/27/have-i-been-pwned/

TLS certificate checks

My move of trivia to a new VM last December prompted me to look again at my server configuration. In particular I wanted to ensure that I was properly redirecting all HTTP requests to HTTPS and that the ciphers and protocols I support are as up to date and strong as possible. Mozilla offers a very good security reference site which should be your first port of call if you care about server side security. The “cheat sheet” on that site gives pointers to existing good practice guidelines for most of the configuration options you should care about on a modern website. I have implemented as many of these as is possible on trivia – but I am hampered slightly by the fact that I still use WordPress as my blogging platform. WordPress (and its myriad plugins) still does lots of things I don’t actually like (such as setting cookies I can’t control, loading google fonts etc.) but I’m stuck with that unless I change platform (which I might).

I have tried to ensure that all session cookies sent are as secure as possible by setting the “HttpOnly” and “secure” attributes in my wp-config file (as below)

@ini_set('session.cookie_httponly', true);
@ini_set('session.cookie_secure', true);
@ini_set('session.use_only_cookies', true);

but that seems to be bypassed by some plugins – which I have thus disabled (behave or begone!). Apart from that change, and some minor tweaks to my TLS configuration to ensure that I only use recommended protocols and ciphers, nothing much seemed to need changing.

My first port of call for remote checking of my security was then the Mozilla Observatory site. I thought the results were disappointing – I only scored a “B”.

mozilla result

However, a careful reading of the full results showed that trivia had actually passed 10 of the 11 tests and achieved a score of 75/100. The 25 missing points all came from the failure of the “Content-Security-Policy” test (I don’t implement a CSP – because it is largely impossible on WordPress sites, and particularly on a blog like trivia which points to multiple external resources).

mozilla details

Mozilla themselves say that:

Content Security Policy (CSP) is an HTTP header that allows site operators fine-grained control over where resources on their site can be loaded from. The use of this header is the best method to prevent cross-site scripting (XSS) vulnerabilities. Due to the difficulty in retrofitting CSP into existing websites, CSP is mandatory for all new websites and is strongly recommended for all existing high-risk sites.

I conclude that on general security recommendations I am doing reasonably well apart from the CSP issue.

Next, and most importantly, is the TLS check.

tls observatory

Mozilla’s own check gives me an “I”, meaning “Intermediate”. This is not surprising since I have implemented their “intermediate” level recommendations. I considered using the “modern” set only, but that excludes TLSv1.2, would exclude users of many browsers and would, oddly, result in a lower score at SSL Labs. Besides, I really don’t see why my blog should set the bar higher than seems to be used much more widely elsewhere.

Lastly, the observatory links to third party test sites, including ssllabs, immuniweb, securityheaders and hstspreload. I’ve used some of these (notably ssllabs) independently in the past and found them to be robust, reliable and helpful in getting my site properly configured. None of the results there surprised me or bothered me over much. I still get a nice satisfying big green A+ at ssllabs.

A+ at SSLLabs

However, the immuniweb result intrigued me.

immuniweb result

Apparently, my blog is PCI-DSS compliant. I do hope not. It runs on a debian VM in a datacentre in Prague owned by a small European ISP – and it cost next to nothing. If that is all it takes to gain PCI-DSS compliance then I’m a little worried. (In reality, I expect all it means is that my /TLS/ configuration is PCI-DSS compliant).

So, having checked my own configuration, and found that I still get a nice green A+ at ssllabs, I thought I might check some other sites – particularly those which ought to take extra care about the strength of their TLS implementations. Given my apparent PCI-DSS compliance, what better sites to check than those of the banks? I picked fourteen bank sites, including four of which I am a customer (either as a saver or a borrower). Here, in no particular order, is what I found.

Nationwide

Nationwide Bank

Certificate expires in 1 year 8 months.

TSB

TSB

Certificate expires in 5 months.

Co-op Bank

Co-op Bank

Certificate expires in 9 months.

Halifax Bank

Halifax Bank

Certificate expires in 7 months.

HSBC

HSBC

Certificate expires in 7 months.

Lloyds Bank

Lloyds Bank

Certificate expires in 7 months.

Natwest Bank

Natwest bank

Certificate expires in 1 year and 1 month.

RBS

RBS

Certificate expires in 1 month.

Sainsburys Bank

Sainsburys Bank

Certificate expires in 3 months.

Santander

Santander

Certificate expires in 10 months.

Smile Bank

Smile bank

Certificate expires in 9 months.

Tesco Bank

Tesco Bank

Certificate expires in 1 year 5 months.

Virgin Money

Virgin Money

Certificate expires in 9 months.

So: of the fourteen banks I checked, only 3 get an A+, 5 get an A, 3 get a B, 2 get a C and poor old Santander gets an F. In Santander’s case this is because their server apparently remains unpatched for the Zombie Poodle vulnerabilities. Qualys published information about this vulnerability in April 2019 and warned then that they would start giving an “F” grade to any server affected by these vulnerabilities from the end of May 2019.

For the remainder, the majority of problems seem to stem from the failure to remove the TLSv1 and TLSv1.1 protocols. It is generally accepted that only TLSv1.2 and above are to be considered “secure” these days. None of the sites I checked support TLSv1.3, and even those sites supporting TLSv1.2 offer weak ciphers or also offer TLS versions lower than 1.2. Certainly PCI-DSS compliance implies a minimum of TLSv1.2 (see the rationale for the “Intermediate” configuration at Mozilla’s site).
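If you want to check a particular site yourself without going through a web based scanner, openssl s_client will do it from the command line. The host below is only an example, and note that some openssl builds are themselves compiled (or configured) without the older protocols, so treat this as a rough check: a completed handshake means the protocol is accepted, an error or empty result means it is not.

# does the site still accept TLSv1.1 ? (ideally it should not)
echo | openssl s_client -connect www.example.com:443 -servername www.example.com -tls1_1 2>&1 | grep -E 'Protocol|Cipher'

# and does it offer TLSv1.3 ?
echo | openssl s_client -connect www.example.com:443 -servername www.example.com -tls1_3 2>&1 | grep -E 'Protocol|Cipher'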

I notice also that practically all the Banks use certificates which last for one or two years. This strikes me as rather a long time, but of course there is always the difficulty in a live IT environment of balancing the need for frequent certificate changes against the need for some stability. Nevertheless, certificate changes can be automated and it seems to me that a much shorter certificate lifetime (say 3 to 6 months) would be more appropriate.
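Checking how long a certificate has left to run is just as easy, and could happily live in a cron job (again, the host name is only an example):

# print the expiry date of the certificate currently being served
echo | openssl s_client -connect www.example.com:443 -servername www.example.com 2>/dev/null | openssl x509 -noout -enddate

# or ask a yes/no question: will it expire within the next 30 days (2592000 seconds)?
echo | openssl s_client -connect www.example.com:443 -servername www.example.com 2>/dev/null | openssl x509 -noout -checkend 2592000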

Does this mean those banks’ sites are insecure? Well, no, and the banks themselves would almost certainly argue strongly, and correctly, that their TLS implementations meet industry best practice whilst catering for the (very wide) range of browsers in use by their clients. They may also argue that the sites I checked are not the actual portals to their on-line banking systems, merely the shop front door (so for example nationwide.co.uk uses the subdomain onlinebanking.nationwide.co.uk).

But I know what I think. They should do better. Much better.

And of course I’m not alone in my view. About 18 months ago Wired reported that “Top UK banks [weren’t] using the latest tech to secure transactions”. In that article, Wired pointed to research by Swansea University computer science student Edward Wall and also quoted Pen Test Partners’ researcher David Lodge as noting “There are some significant issues in need of improvement. Encryption is possibly the most important, in particular the section marked TLS. There have been a selection of cryptographical flaws found in the implementation and algorithms with older forms of SSL/TLS, meaning that only TLS 1.2 and 1.3 are recommended nowadays”.

That article goes on to note that the PCI DSS requires that the latest encryption standards are used. Sadly little seems to have changed in the past 18 months.

Permanent link to this article: https://baldric.net/2020/01/22/tls-certificate-checks/

do not ask me for guest posts or links

For the past four years or so I have been receiving increasingly frequent requests for either guest posts, or links to external sites (or sometimes both). The requests have increased in number ever since I started posting about my use of OpenVPN. Many of these requests want me to point to their commercial VPN site. The requests all look something like this:

Hi.

My name is Foo. I represent Bar. I found your blog on google and read your article on “X”. I think your readers will like our discussion about “X” on our site. Would you be willing to host a guest post by us, or one of our affiliates, promoting the use of “Y”? It would also be really good if you could link to our site from your article.

We are really flexible, so we could totally negotiate about special deals.

Now, the least irritating of these requests tend to come to the correct email address (which shows they have read the “about” page) rather than “postmaster@baldric.net” or some other speculative email address, and they are also directly relevant to the article in question (which shows they have actually read that too). But unfortunately, a depressingly large number of requests point to article “X” which has nothing whatsoever to do with their site (which may be a commercial site of tangential, at best, relevance to anything I write about). The worst type of request merely asks for me to point to some external resource from some random post on trivia.

I very, very, very rarely respond to any such requests. And I never, ever respond to persistent, repeated requests from the same source.

One particularly laughable request came in about three years ago. It asked me to point to an on-line password generator/checker (not a smart thing to do). I tried it with an XKCD style password like “soldieravailablecrossmagnet” and got the stupid response:

“Weak Password

It would take a computer about 507 quintillion years to crack your password.”

Weak password eh?

It should be obvious, but in case it isn’t I’ll spell it out here (and in an addition to my “about” page).

This is a personal blog. It is avowedly and intentionally non-commercial in nature. I pay for this blog from my own resources simply because I want to. I do not seek, nor will I accept, any sponsored content or linkages of any kind. Any external resources I point to are there simply because I have personally found those resources interesting or useful. So please do not ask me to point to your site. Please do not ask me for sponsored content. Please do not ask me for guest posts. If you do, it simply proves that you have not done your research properly – so you will be ignored.

Regards

Mick

Permanent link to this article: https://baldric.net/2020/01/14/do-not-ask-me-for-guest-posts-or-links/

retiring the slugs

I first started using Linksys NSLU2s (aka “slugs”) in early 2008. Back then I considered them quite useful and I even ran webservers and local apt-caches on them. But realistically they are (and even then, were) a tad underpowered. Worse, since Debian on the XScale-IXP42x hasn’t been updated for several years, the slugs are probably vulnerable to several exploits. The latest version of Debian available for the slugs is probably that which I have running (“uname -a” shows “Linux slug 3.2.0-6-ixp4xx #1 Debian 3.2.102-1 armv5tel”).

The advent of the Raspberry Pi (astonishingly eight years ago now) brought a much more powerful and flexible device into the hands of the masses – and it didn’t need complex re-flashing procedures to get a general purpose linux installation running on it. Over the christmas period last year I added two more Pis (Pi 4s this time) to my network and finally got around to retiring my slugs (well, actually I still have one running, but I will get around to replacing that too soon).

On replacing the slugs I noticed that the 1TB disk I bought as additional storage for my main slug had been running almost non-stop (apart from the occasional reboot) since March 2009. I think that is a remarkably good lifetime for a consumer grade hard disk. Certainly I have had internal disks fail after far less use. I have even had supposedly more robust, and certainly way more expensive, disks fail on high end Sun workstations and servers in my professional life.

So if you are in the market for new consumer grade disks, I think I can safely recommend Toshiba.

Oh, and Happy New Year by the way.

Permanent link to this article: https://baldric.net/2020/01/14/retiring-the-slugs/

welcome to prague

As of today we are now fully functional in our new home in a datacentre in Prague. We also have a new letsencrypt certificate. If you see any problems, let me know at the usual email address.

Enjoy

Permanent link to this article: https://baldric.net/2019/12/05/welcome-to-prague/

a bargain VPS

I have been using services from ITLDC for about three years now. I initially picked one of their cheap VMs based in the Netherlands whilst I was expanding my VPN usage, and frankly, I was not expecting much in the way of customer service or assistance for the very low price I paid. After all I thought, you can’t expect much for under 3 euros a month. But I was pleasantly surprised to find that not only was the actual service pretty rock solid, but so was the help I received on the one or two occasions I had a problem. In fact I have never had to wait more than a few minutes for a response to a ticket. That is exceptional in my experience. For the last year or more, I have been using one of their VMs as an unbound DNS server and VPN endpoint.

So when I was considering a new VM I was very pleasantly surprised to note that ITLDC were offering a huge discount on new servers as part of a “black friday” promotion. I have now paid for a new debian server, based in Prague. That VM is one of their 2 Gig SSD offerings (2 GB RAM, dual core, 15 GB disk and unlimited traffic). Even at their normal undiscounted rate that would only have cost me 65.99 euros for a year. I paid the princely sum of 26.39 euros – a 60% discount.

Absolutely astounding value for money. Go get one before the offer runs out.

Permanent link to this article: https://baldric.net/2019/11/28/a-bargain-vps/

fsckd

God help us all.

Permanent link to this article: https://baldric.net/2019/07/23/fsckd/

more password stupidity

A recent exchange of email with an old friend gave me cause to revisit on-line password/passphrase generators. I cannot for the life of me imagine why anyone would actually use such a thing, but there are a surprisingly large number out there. On the upside, most of these now seem to use TLS encrypted connections so at least the passwords aren’t actually passed back to the requester in clear, but the downside is that most generators are still woefully stupid.

I particularly liked this bonkers example:

password generator

The generator allows the user to select the length of the password together with other attributes such as character set and whether or not to include symbols. For fun I asked it to give me a sixteen character password and it duly generated the truly awful gibberish string “bJQhxyAe2R9NkcLN”. But the best bit was that it attempted to give me a way to remember this nonsense, by generating a further set of garbage:

“bestbuy JACK QUEEN hulu xbox yelp APPLE egg 2 ROPE 9 NUT korean coffee LAPTOP NUT”.

Forgive me, but that seems rather more difficult to remember than “soldier available cross magnet“.
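For what it is worth, you do not need a website to generate an XKCD style passphrase at all. A one-liner along the following lines will do the job, assuming a system with a wordlist at /usr/share/dict/words (if you care about doing this properly, use a real diceware list and method – shuf is not designed as a cryptographic source of randomness):

# pick four random words and join them with spaces
shuf -n 4 /usr/share/dict/words | tr -d "'" | paste -s -d ' '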

Permanent link to this article: https://baldric.net/2019/07/15/more-password-stupidity/

add my name to the list

At the tail end of last year, Crispin Robinson and Ian Levy of GCHQ published a co-authored essay on “suggested” ways around the “going dark” problem that strong encryption in messaging poses for agencies such as GCHQ and its (foreign) national equivalents. In that essay, the authors were at pains to state that they were not in favour of weakening strong encryption, indeed they said:

The U.K. government strongly supports commodity encryption. The Director of GCHQ has publicly stated that we have no intention of undermining the security of the commodity services that billions of people depend upon and, in August, the U.K. signed up to the Five Country statement on access to evidence and encryption, committing us to support strong encryption while seeking access to data. That statement urged signatories to pursue the best implementations within their jurisdictions. This is where details matter, so with colleagues from across government, we have created some core principles that will be used to set expectations of our engagements with industry and constrain any exceptional access solution. We believe these U.K. principles will enable solutions that provide for responsible law enforcement access with service provider assistance without undermining user privacy or security.

They went on to outline what they called six “principles” to inform the debate on “exceptional access” (to encrypted data).

These principles are:

  • Privacy and security protections are critical to public confidence. Therefore, we will only seek exceptional access to data where there’s a legitimate need, that access is the least intrusive way of proceeding and there is appropriate legal authorisation.
  • Investigative tradecraft has to evolve with technology.
  • Even when we have a legitimate need, we can’t expect 100 percent access 100 percent of the time.
  • Targeted exceptional access capabilities should not give governments unfettered access to user data.
  • Any exceptional access solution should not fundamentally change the trust relationship between a service provider and its users.
  • Transparency is essential.

(I particularly like that last one.)

On first reading, the paper seems reasonable and unexceptional (which is probably what it was designed to do). It argues against direct attacks on end-to-end encryption itself and instead advocates insertion of an additional “end” to the encrypted conversation. So when Bob talks to Alice over his “secure” device, he would actually be talking to Alice and Charlie, where Charlie had been added to the conversation by the device manufacturer or service provider and the notification to Bob (or Alice) of that addition would be suppressed so they would not know of the eavesdropping.

This is what they said:

So, to some detail. For over 100 years, the basic concept of voice intercept hasn’t changed much: crocodile clips on telephone lines. Sure, it’s evolved from real crocodile clips in early systems through to virtual crocodile clips in today’s digital exchanges that copy the call data. But the basic concept has remained the same. Many of the early digital exchanges enacted lawful intercept through the use of conference calling functionality.

In a world of encrypted services, a potential solution could be to go back a few decades. It’s relatively easy for a service provider to silently add a law enforcement participant to a group chat or call. The service provider usually controls the identity system and so really decides who’s who and which devices are involved – they’re usually involved in introducing the parties to a chat or call. You end up with everything still being end-to-end encrypted, but there’s an extra ‘end’ on this particular communication. This sort of solution seems to be no more intrusive than the virtual crocodile clips that our democratically elected representatives and judiciary authorise today in traditional voice intercept solutions and certainly doesn’t give any government power they shouldn’t have.

We’re not talking about weakening encryption or defeating the end-to-end nature of the service. In a solution like this, we’re normally talking about suppressing a notification on a target’s device, and only on the device of the target and possibly those they communicate with. That’s a very different proposition to discuss and you don’t even have to touch the encryption.

Neat huh? No need to go to all the bother of crypto attack, key escrow or any of the “magic thinking” around weakened encryption. Who could possibly object to that?

Well, lots of people could, and many did just that.

The Open Technology Institute worked to coordinate a response from an international coalition of 47 signatories, including 23 civil society organizations that work to protect civil liberties, human rights and innovation online; seven tech companies and trade associations, including providers that offer leading encrypted messaging services; and 17 individual experts in digital security and policy. Those signatories included: Big Brother Watch, the Center for Democracy & Technology, the Electronic Frontier Foundation, the Freedom of the Press Foundation, Human Rights Watch, Liberty, the Open Rights Group, Privacy International, Apple, Google, Microsoft, WhatsApp, Steven M. Bellovin, Peter G. Neumann of SRI International, Bruce Schneier, Richard Stallman and Phil Zimmermann, amongst others.

On May 30th 2019, they published an open letter to GCHQ setting out their concerns about the proposals. In that letter they outlined:

how the “ghost proposal” would work in practice, the ways in which tech companies that offer encrypted messaging services would need to change their systems, and the dangers that this would present. In particular, the letter outlines how the ghost proposal, if implemented, would “undermine the authentication process that enables users to verify that they are communicating with the right people, introduce potential unintentional vulnerabilities, and increase risks that communications systems could be abused or misused.” If users cannot trust that they know who is on the other end of their communications, it will not matter that their conversations are protected by strong encryption while in transit. These communications will not be secure, threatening users’ rights to privacy and free expression. (my emphasis)

They went on to say:

  • The Proposal Creates Serious Risks to Cybersecurity and Human Rights.
  • The Proposal Would Violate the Principle That User Trust Must be Protected.
  • The Ghost Proposal Would Violate the Principle That Transparency is Essential.

They concluded that GCHQ should:

abide by the six principles they have announced, abandon the ghost proposal, and avoid any alternate approaches that would similarly threaten digital security and human rights.

Additionally, Jon Callas at ACLU has published a series of four essays which breaks down the fatal flaws in the proposal. Those essays in themselves are well worth reading, but so are all the additional papers (by people such as Steven Bellovin, Matt Blaze, Susan Landau, Whitfield Diffie, Seth Schoen, Nate Cardozo and many others) pointed to in those essays.

So: back in your box Levy, no-one wants your shitty little stick.

Permanent link to this article: https://baldric.net/2019/07/10/add-my-name-to-the-list/

openvpn clients on pfsense

In my 2017 article on using OpenVPN on a SOHO router I said: “In testing, I’ve found that using a standard OpenVPN setup (using UDP as the transport) has only a negligible impact on my network usage – certainly much less than using Tor.”

That was true back then but is unfortunately not so true now.

In 2017 my connection to the outside world was over a standard ADSL line. At its best, I saw around 11 – 12 Mbit/s. Using OpenVPN on my new Asus router I saw this drop to about 10 Mbit/s. I found that acceptable and assumed that it was largely caused by the overhead of encapsulation of TCP within UDP over the tunnel.

Not so.

My small corner of the rural English landscape has recently been provided with fast FTTC connectivity by BT Openreach. This meant that I could get a new fast fibre connection should I so wish. I did so wish, and at the end of my contract with my last ISP I switched to a new provider. I now have a VDSL connection giving me a 30 Mbit/s IP connection to the outside world. Plenty fast enough for our use case (though I can apparently get 60 Mbit/s should I so wish). However, my OpenVPN connection stayed stubbornly at the 10 Mbit/s mark. No way was that acceptable.

In testing I switched the client connection endpoint away from my router and back to my i7 desktop. The tunnel speed went up to a shade under 30 Mbit/s. Conclusion? The overhead was /not/ caused by protocol encapsulation, but rather by the encryption load, and my SOHO router was simply not powerful enough to give me a decent fast tunnel.

So I needed a new, beefier, router. I considered re-purposing an old Intel i5 box I had lying around unused, but on careful reflection I decided that that would be way too much of a power hog (and a bit on the large side) when all I really needed was something about the size and power consumption of my existing routers. But before selecting a hardware platform I looked for a likely OS. There are plenty of options around, varying from the fairly router-specific OpenWRT/LEDE or DD-WRT firmware binaries, through to firewall platforms such as Endian, Smoothwall, IPFire, IPCop, pfSense or OPNsense.

At varying times in the past I have used OpenWRT, IPCop and IPFire with, at best, mixed success. I decided fairly early on to discount the router firmware approach because that would mean simply re-flashing a SOHO router which would probably end up just as underpowered as my existing setup. Besides, I really wanted to try a new firewall with application layer capabilities to supplement my existing NAT based devices. Smoothwall, IPCop, IPFire and Endian are all based on hardened Linux distributions and whilst Endian looks particularly interesting (and I may well play with it later) I fancied a change to a BSD based product. I’m a big Linux fan, but I recognise the dangers of a monoculture in any environment. In a security setup a monoculture can be fatal. So I downloaded copies of both pfSense and OPNsense to play with.

As an aside, I should note that there appears to be a rather sad history of “bad blood” between the developers of pfSense and OPNsense. This can sometimes happen when software forks, but the animosity between these two camps seems to have been particularly nasty. I won’t point to the links here, but any search for “pfsense v opnsense” will lead you to some pretty awful places, including a spoof OPNsense website which ridiculed the new product.

OPNsense is a fork of pfSense, which is itself originally a fork of the m0n0wall embedded firewall. The original fork of pfSense took place in 2004 with the first public version appearing in 2006. The fork of OPNsense from pfSense took place in January 2015 and when the original m0n0wall project closed in February 2015 its creator and developer recommended that all users move to OPNsense. So pfSense has been in existence, and steady development, for over 13 years, whilst OPNsense is a relative newcomer.

Politics of open source project forks aside, I was really only interested in the software itself. In my case, so long as the software meets my needs (in this case a solid ability to handle multiple OpenVPN client configurations) what I care most about is usability, documentation, stability, longevity, active development and support (so no orphaned projects) and, preferably, an active community. Both products seem to meet most of these criteria, though I confess that I prefer the stability of pfSense over the (rather too) frequent updates to OPNsense. In my view, there is little to choose between the two products in terms of core functionality. The GUIs are different, but preference there is largely a matter of personal taste.

Crucially for me, though, I found the pfSense documentation much better than that for OPNsense. I also found a much wider set of supplementary documentation on-line created by users of pfSense than exists for OPNsense. Indeed, when researching “openVPN on OPNsense” for example, I found many apparently confused users (even on OPNsense’s own forums) bemoaning the lack of decent documentation on how to set up OpenVPN clients. Documentation for both products leans heavily towards the creation of OpenVPN servers rather than clients, and neither is particularly good at explaining how to use pre-existing CAs, certificates and keys at either the server or the client end. Eventually, though, I found it fairly straightforward to set up on pfSense, and having now had it running successfully for a while I am happy to stick with that product.

Having chosen my preferred product I had to purchase appropriate hardware on which to run it. I eventually settled on a Braswell Celeron Dual Core Mini PC.

As you can see from the pictures, this device has dual (Gigabit) ethernet ports, twin HDMI ports, WiFi (which I don’t actually use in my configuration) and six USB ports (USB 2.0 and USB 3.0), also unused. Internally it has a dual core Intel Celeron N3050 CPU (which crucially supports AES-NI for hardware crypto acceleration), 4 GB of DDR3 RAM and a 64 Gig SSD, all housed in a fanless aluminium case measuring not much larger than a typical external hard disk drive. Very neat, and in testing it rarely runs hotter than around 32 degrees centigrade.
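If you are choosing hardware for a similar job, it is worth confirming that the CPU really does offer AES-NI before you commit to it, and getting a rough feel for how much symmetric crypto it can push. On a Linux box something like the following will do; the numbers from openssl’s benchmark on a box like this should be a large multiple of what a typical SOHO router can manage:

# does the CPU advertise AES-NI?
grep -m1 -o aes /proc/cpuinfo && echo "AES-NI present" || echo "no AES-NI"

# rough measure of symmetric crypto throughput
openssl speed -evp aes-256-gcm

(Once installed, pfSense also reports AES-NI support on its dashboard.)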

So: what does my configuration look like?

Initial configuration is fairly straightforward: it takes place during the installation and consists of assigning the WAN and LAN interfaces and setting the IP addresses. When this is concluded, additional general configuration is handled through the “setup wizard” available from the web based GUI which appears on the LAN port at the address you have assigned. This early configuration includes: naming the firewall and local domain; setting the DNS and time servers; and some configuration of the GUI itself. In my case I have local DNS forwarders on both my inner and outer local nets so I pointed pfSense to my outer local forwarder (which, in turn, forwards queries to my external unbound resolvers). Most users will probably configure the DNS address to point to their ISP’s server(s). At this point it is a good idea to change the default admin password and then reboot before further configuration.

One point worth noting here is whether to set the pfSense box as a DNS forwarder, or resolver. In most configurations you will wish to simply forward requests to an external forwarder or resolver (as do I). Internally pfSense uses DNSmasq as a forwarder and unbound as a caching resolver so you could use the new firewall itself to resolve addresses. Forwarding is simpler.
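Whichever you choose, it is worth checking from a client on the LAN that the new box actually answers and forwards queries as you expect (assuming you have left the forwarder or resolver service enabled). For example, with dig installed and substituting whatever LAN address you gave the firewall for the placeholder below:

# query the pfSense box directly for an external name
dig @192.168.1.254 debian.org +short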

I did all the initial configuration off-line so as not to interrupt my existing network setup. But once I was happy with the new pfSense box I then simply had to amend the configuration of my existing internal router so that its RFC1918 WAN address matched the LAN address set on the new firewall (.1 at one end and .254 at the other). I had configured the WAN address of the pfSense box to match my existing external router setup so that insertion of the new box between the two routers caused minimum disruption. The new network looks something like this (click the image for a larger view):

At this stage, the pfSense box is simply acting as a new NAT firewall and router. Testing from various points on the internal net showed that traffic flowed as I expected.

Now for the OpenVPN client configuration.

This assumes that we are using TLS/SSL with our own pre-configured CA, certificates and keys. pfSense allows you to set up your own OpenVPN server and certificates if you wish. I chose not to do that because I am re-using an existing setup. You could also use the simpler pre-shared key setup (if this makes you feel safe).
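(As a side note: if your existing server setup does not already include a static TLS auth key, you can generate one with OpenVPN itself and install the same file at both ends. The filename below is arbitrary, and newer OpenVPN releases prefer the syntax “openvpn --genkey secret ta.key”.)

openvpn --genkey --secret ta.key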

These are the steps I followed (for reference, a sketch of a roughly equivalent standalone client configuration appears after step 10 below):

1. Go to System -> Cert Manager -> CA

Add the new CA.
Give it a descriptive name (such as “My Certificate Authority”).
Import an existing Authority.
Paste in your X509 Certificate and (optional but recommended) your private key for that certificate.

Save.

2. Go to System -> Cert Manager -> Certificates

(Note that there will already be a self-signed cert for the pfSense web configuration GUI).

Add a new certificate.

Again give it a descriptive name (such as “My Openvpn Certificate”).
Import an existing certificate.
Paste in your X509 Certificate and private key.

Save.

3. Go to VPN -> OpenVPN -> Clients

Add a new client.

In the General Information section:

Ensure the server mode is correct for your setup (we are using Peer to Peer TLS/SSL).
Check that the protocol and device mode are correct for your setup and that the interface is set to WAN.
Add the host server name or IP address for the remote end of the tunnel.
Give the connection a meaningful name (e.g. “hostname” in Paris).

If you use authentication, add the details.

In the Cryptographic settings section:

Ensure “use a TLS key” is checked.
But uncheck “automatically generate a TLS key” (because we have our own already).
Now paste in the TLS key and ensure that “TLS key usage mode” matches your use case (TLS Authentication or TLS Encryption and Authentication).
Select your previously created CA certificate from the “Peer Certificate Authority” drop down box together with any relevant revocation list.
Select your client Certificate (created at step 2 above) from the drop down box.
Select the encryption algorithm you use.
If you allow encryption algorithm negotiation at the server, then check the “Negotiable Cryptographic Parameter” box and select the algorithm(s) you want to use.
Select the “Auth digest algorithm” in use (I recommend a minimum of SHA256 – personally I use SHA512, but this must match the server end).
If your hardware supports it (AES-NI for example) then select “Hardware Crypto”.

In the Tunnel Settings section:

Leave everything at the default (because our servers set the Tunnel addresses) but ensure that the compression settings here match the remote server. Personally I disable compression (see OpenVPN documentation for some reasons) so I set this to “comp-lzo no” at both ends of the tunnel.

Finally, in the Advanced Configuration section:

Paste in any additional configuration commands that you have at the server end which have not been covered above.
I use:

remote-cert-tls server;
key-direction 1;
persist-key;
persist-tun

and select IPV4 only for the gateway connection (unless you actually use IPV6) and also select an optional log verbosity level. You may choose a high level whilst you are testing and change it later when all is working satisfactorily.

Save.

4. Repeat 3 above to create clients for all other servers (or VPN services) you may have.

Note that if you have multiple client configurations (as I do) then you should ensure that only one client at a time is enabled. You can selectively enable and disable clients later by editing the configuration at VPN -> OpenVPN -> Clients.

5. Go to Interfaces -> Assignments -> Interface Assignment

Select an interface to assign to one of the clients created at 3 or 4 above from the drop down boxes.
Enable the interface by checking the box and give the interface a meaningful name (such as “tunnel to Paris”). (“We’ll always have Paris….”).
Leave everything else as the default and save.

Now allow access to the tunnel(s) through the interface(s):

6. Go to Firewall -> NAT -> Outbound

Check the radio button marked “select Manual Outbound NAT rule”. All the Firewall rules on the WAN interface which were created automatically as a result of your initial general setup will be shown. The source addresses for these rules will be the local loopback and the LAN IP address you set.

Add a new rule to the bottom of the list.

In the “Advanced Outbound NAT entry” section:

Change the address family to IPV4 only (if appropriate).
Give the source as the LAN network address of the pfSense F/W.
Leave the other entries as the default.

Save.

7. Go to Firewall -> Rules -> LAN

Disable the IPV6 rule (if appropriate to your use case)

8. Go to Firewall -> Rules -> OpenVPN

Add a new rule to Pass IPV4 through the interface called OpenVPN. Give the rule a meaningful description (such as “allow traffic through the tunnel”).

9. Now finally go to Status -> OpenVPN

The (single) OpenVPN client you have enabled from 3 above should be shown as running. You can stop or restart the service from this page.

10. Now check that traffic is actually going over the tunnel by checking your public IP address in a web browser (I use “check2ip.com” amongst others).

If all is working as you expect and you have multiple VPN endpoints, try disabling the tunnel you are using (from “VPN -> OpenVPN -> Clients, Edit Client”) and selectively enabling others. Check the status of each selected tunnel in “Status -> OpenVPN” and reload as necessary.
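For reference, and because it helped me sanity check the GUI entries, this is roughly what the equivalent standalone OpenVPN client configuration would look like on an ordinary Linux box. It is only a sketch: the remote hostname, port, cipher and digest are placeholders and must match whatever your own server is configured to use, and the certificate and key files are the same material pasted into the Cert Manager earlier.

client
dev tun
proto udp
remote vpn.example.net 1194
nobind
ca ca.crt
cert client.crt
key client.key
tls-auth ta.key 1
remote-cert-tls server
cipher AES-256-GCM
auth SHA512
comp-lzo no
persist-key
persist-tun
verb 3

Directives such as remote-cert-tls and persist-tun are exactly the sort of thing that ends up in the pfSense “Advanced Configuration” box, separated by semicolons rather than newlines.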

In my case, with the hardware I have chosen, and the configuration given above, I now get near native speed over any of my VPN tunnels. It will be interesting to see what I get should I move to even faster broadband in future.

Enjoy.

Permanent link to this article: https://baldric.net/2019/07/07/openvpn-clients-on-pfsense/