welcome to prague

As of today we are fully functional in our new home in a datacentre in Prague. We also have a new letsencrypt certificate. If you see any problems, let me know at the usual email address.

Enjoy

Permanent link to this article: https://baldric.net/2019/12/05/welcome-to-prague/

a bargain VPS

I have been using services from ITLDC for about three years now. I initially picked one of their cheap VMs based in the Netherlands whilst I was expanding my VPN usage, and frankly, I was not expecting much in the way of customer service or assistance for the very low price I paid. After all, I thought, you can’t expect much for under 3 euros a month. But I was pleasantly surprised to find that not only was the actual service pretty rock solid, but so was the help I received on the one or two occasions I had a problem. In fact I have never had to wait more than a few minutes for a response to a ticket. That is exceptional in my experience. For the last year or more, I have been using one of their VMs as an unbound DNS server and VPN endpoint.

So when I was considering a new VM I was very pleasantly surprised to note that ITLDC were offering a huge discount on new servers as part of a “black friday” promotion. I have now paid for a new debian server, based in Prague. That VM is one of their 2 Gig SSD offerings (2 GB RAM, dual core, 15 GB disk and unlimited traffic). Even at their normal undiscounted rate that would only have cost me 65.99 euros for a year. I paid the princely sum of 26.39 euros – a 60% discount.

Absolutely astounding value for money. Go get one before the offer runs out.

Permanent link to this article: https://baldric.net/2019/11/28/a-bargain-vps/

fsckd

God help us all.

Permanent link to this article: https://baldric.net/2019/07/23/fsckd/

more password stupidity

A recent exchange of email with an old friend gave me cause to revisit on-line password/passphrase generators. I cannot for the life of me imagine why anyone would actually use such a thing, but there are a surprisingly large number out there. On the upside, most of these now seem to use TLS encrypted connections so at least the passwords aren’t actually passed back to the requester in clear, but the downside is that most generators are still woefully stupid.

I particularly liked this bonkers example:

password generator

The generator allows the user to select the length of the password together with other attributes such as character set and whether or not to include symbols. For fun I asked it to give me a sixteen-character password and it duly generated the truly awful gibberish string “bJQhxyAe2R9NkcLN”. But the best bit was that it attempted to give me a way to remember this nonsense, by generating a further set of garbage:

“bestbuy JACK QUEEN hulu xbox yelp APPLE egg 2 ROPE 9 NUT korean coffee LAPTOP NUT”.

Forgive me, but that seems rather more difficult to remember than “soldier available cross magnet”.

Permanent link to this article: https://baldric.net/2019/07/15/more-password-stupidity/

add my name to the list

At the tail end of last year, Crispin Robinson and Ian Levy of GCHQ jointly published an essay on “suggested” ways around the “going dark” problem that strong encryption in messaging poses for agencies such as GCHQ and its (foreign) national equivalents. In that essay, the authors were at pains to state that they were not in favour of weakening strong encryption; indeed they said:

The U.K. government strongly supports commodity encryption. The Director of GCHQ has publicly stated that we have no intention of undermining the security of the commodity services that billions of people depend upon and, in August, the U.K. signed up to the Five Country statement on access to evidence and encryption, committing us to support strong encryption while seeking access to data. That statement urged signatories to pursue the best implementations within their jurisdictions. This is where details matter, so with colleagues from across government, we have created some core principles that will be used to set expectations of our engagements with industry and constrain any exceptional access solution. We believe these U.K. principles will enable solutions that provide for responsible law enforcement access with service provider assistance without undermining user privacy or security.

They went on to outline what they called six “principles” to inform the debate on “exceptional access” (to encrypted data).

These principles are:

  • Privacy and security protections are critical to public confidence. Therefore, we will only seek exceptional access to data where there’s a legitimate need, that access is the least intrusive way of proceeding and there is appropriate legal authorisation.
  • Investigative tradecraft has to evolve with technology.
  • Even when we have a legitimate need, we can’t expect 100 percent access 100 percent of the time.
  • Targeted exceptional access capabilities should not give governments unfettered access to user data.
  • Any exceptional access solution should not fundamentally change the trust relationship between a service provider and its users.
  • Transparency is essential.

(I particularly like that last one.)

On first reading, the paper seems reasonable and unexceptional (which is probably what it was designed to do). It argues against direct attacks on end-to-end encryption itself and instead advocates insertion of an additional “end” to the encrypted conversation. So when Bob talks to Alice over his “secure” device, he would actually be talking to Alice and Charlie, where Charlie had been added to the conversation by the device manufacturer or service provider and the notification to Bob (or Alice) of that addition would be suppressed so they would not know of the eavesdropping.

This is what they said:

So, to some detail. For over 100 years, the basic concept of voice intercept hasn’t changed much: crocodile clips on telephone lines. Sure, it’s evolved from real crocodile clips in early systems through to virtual crocodile clips in today’s digital exchanges that copy the call data. But the basic concept has remained the same. Many of the early digital exchanges enacted lawful intercept through the use of conference calling functionality.

In a world of encrypted services, a potential solution could be to go back a few decades. It’s relatively easy for a service provider to silently add a law enforcement participant to a group chat or call. The service provider usually controls the identity system and so really decides who’s who and which devices are involved – they’re usually involved in introducing the parties to a chat or call. You end up with everything still being end-to-end encrypted, but there’s an extra ‘end’ on this particular communication. This sort of solution seems to be no more intrusive than the virtual crocodile clips that our democratically elected representatives and judiciary authorise today in traditional voice intercept solutions and certainly doesn’t give any government power they shouldn’t have.

We’re not talking about weakening encryption or defeating the end-to-end nature of the service. In a solution like this, we’re normally talking about suppressing a notification on a target’s device, and only on the device of the target and possibly those they communicate with. That’s a very different proposition to discuss and you don’t even have to touch the encryption.

Neat huh? No need to go to all the bother of crypto attack, key escrow or any of the “magic thinking” around weakened encryption. Who could possibly object to that?

Well, lots of people could, and many did just that.

The Open Technology Institute worked to coordinate a response from an international coalition of 47 signatories, including 23 civil society organizations that work to protect civil liberties, human rights and innovation online; seven tech companies and trade associations, including providers that offer leading encrypted messaging services; and 17 individual experts in digital security and policy. Those signatories included: Big Brother Watch, the Center for Democracy & Technology, the Electronic Frontier Foundation, the Freedom of the Press Foundation, Human Rights Watch, Liberty, the Open Rights Group, Privacy International, Apple, Google, Microsoft, WhatsApp, Steven M. Bellovin, Peter G. Neumann of SRI International, Bruce Schneier, Richard Stallman and Phil Zimmermann amongst others.

On May 30th 2019, they published an open letter to GCHQ setting out their concerns about the proposals. In that letter they outlined:

how the “ghost proposal” would work in practice, the ways in which tech companies that offer encrypted messaging services would need to change their systems, and the dangers that this would present. In particular, the letter outlines how the ghost proposal, if implemented, would “undermine the authentication process that enables users to verify that they are communicating with the right people, introduce potential unintentional vulnerabilities, and increase risks that communications systems could be abused or misused.” If users cannot trust that they know who is on the other end of their communications, it will not matter that their conversations are protected by strong encryption while in transit. These communications will not be secure, threatening users’ rights to privacy and free expression. (my emphasis)

They went on to say:

  • The Proposal Creates Serious Risks to Cybersecurity and Human Rights.
  • The Proposal Would Violate the Principle That User Trust Must be Protected.
  • The Ghost Proposal Would Violate the Principle That Transparency is Essential.

They concluded that GCHQ should:

abide by the six principles they have announced, abandon the ghost proposal, and avoid any alternate approaches that would similarly threaten digital security and human rights.

Additionally, Jon Callas at the ACLU has published a series of four essays which break down the fatal flaws in the proposal. Those essays in themselves are well worth reading, but so are all the additional papers (by people such as Steven Bellovin, Matt Blaze, Susan Landau, Whitfield Diffie, Seth Schoen, Nate Cardozo and many others) pointed to in those essays.

So:  back in your box Levy, no-one wants your shitty little stick.

Permanent link to this article: https://baldric.net/2019/07/10/add-my-name-to-the-list/

openvpn clients on pfsense

In my 2017 article on using OpenVPN on a SOHO router I said: “In testing, I’ve found that using a standard OpenVPN setup (using UDP as the transport) has only a negligible impact on my network usage – certainly much less than using Tor.”

That was true back then but is unfortunately not so true now.

In 2017 my connection to the outside world was over a standard ADSL line. At its best, I saw around 11 – 12 Mbit/s. Using OpenVPN on my new Asus router I saw this drop to about 10 Mbit/s. I found that acceptable and assumed that it was largely caused by the overhead of encapsulation of TCP within UDP over the tunnel.

Not so.

My small corner of the rural English landscape has recently been provided with fast FTTC connectivity by BT Openreach. This meant that I could get a new fast fibre connection should I so wish. I did so wish, and at the end of my contract with my last ISP I switched to a new provider. I now have a VDSL connection giving me a 30 Mbit/s IP connection to the outside world. Plenty fast enough for our use case (though I can apparently get 60 Mbit/s should I so wish). However, my OpenVPN connection stayed stubbornly at the 10 Mbit/s mark. No way was that acceptable. In testing I switched the client connection endpoint away from my router and back to my i7 desktop. The tunnel speed went up to a shade under 30 Mbit/s. Conclusion? The overhead was /not/ caused by protocol encapsulation, but rather by the encryption load, and my SOHO router was simply not powerful enough to give me a decent fast tunnel.

So I needed a new, beefier, router. I considered re-purposing an old Intel i5 box I had lying around unused, but on careful reflection I decided that that would be way too much of a power hog (and a bit on the large side) when all I really needed was something about the size and power consumption of my existing routers. But before selecting a hardware platform I looked for a likely OS. There are plenty of options around, varying from the fairly router-specific OpenWRT/LEDE or DD-WRT firmware binaries, through to firewall platforms such as Endian, Smoothwall, IPFire, IPCop, pfSense or OPNsense.

At varying times in the past I have used OpenWRT, IPCop and IPFire with, at best, mixed success. I decided fairly early on to discount the router firmware approach because that would mean simply re-flashing a SOHO router which would probably end up just as underpowered as my existing setup. Besides, I really wanted to try a new firewall with application layer capabilities to supplement my existing NAT-based devices. Smoothwall, IPCop, IPFire and Endian are all based on hardened Linux distributions and whilst Endian looks particularly interesting (and I may well play with it later) I fancied a change to a BSD based product. I’m a big Linux fan, but I recognise the dangers of a monoculture in any environment. In a security setup a monoculture can be fatal. So I downloaded copies of both pfSense and OPNsense to play with.

As an aside, I should note that there appears to be a rather sad history of “bad blood” between the developers of pfSense and OPNsense. This can sometimes happen when software forks, but the animosity between these two camps seems to have been particularly nasty. I won’t point to the links here, but any search for “pfsense v opnsense” will lead you to some pretty awful places, including a spoof OPNsense website which ridiculed the new product.

OPNsense is a fork of pfSense, which is itself originally a fork of the m0n0wall embedded firmware firewall. The original fork of pfSense took place in 2004 with the first public version appearing in 2006. The fork of OPNsense from pfSense took place in January 2015, and when the original m0n0wall project closed in February 2015 its creator and developer recommended all users move to OPNsense. So pfSense has been in existence, and under steady development, for over 13 years, whilst OPNsense is a relative newcomer.

Politics of open source project forks aside, I was really only interested in the software itself. In my case, so long as the software meets my needs (in this case a solid ability to handle multiple OpenVPN client configurations) what I care most about is usability, documentation, stability, longevity, active development and support (so no orphaned projects) and, preferably, an active community. Both products seem to meet most of these criteria, though I confess that I prefer the stability of pfSense over the (rather too) frequent updates to OPNsense. In my view, there is little to choose between the two products in terms of core functionality. The GUIs are different, but preference there is largely a matter of personal taste. But crucially, for me, I found the pfSense documentation much better than that for OPNsense. I also found a much wider set of supplementary documentation on-line created by users of pfSense than exists for OPNsense. Indeed, when researching “OpenVPN on OPNsense” for example, I found many apparently confused users (even on OPNsense’s own forums) bemoaning the lack of decent documentation on how to set up OpenVPN clients. Documentation for both products leans heavily towards the creation of OpenVPN servers rather than clients, and neither is particularly good at explaining how to use pre-existing CAs, certificates and keys for either the server or client end, but eventually I found it fairly straightforward to set up on pfSense, and having now had it running successfully for a while I am happy to stick with that product.

Having chosen my preferred product I had to purchase appropriate hardware on which to run it. I eventually settled on a Braswell Celeron Dual Core Mini PC.

As you can see from the pictures, this device has dual (Gigabit) ethernet ports, twin HDMI ports, WiFi (which I don’t actually use in my configuration) and six USB ports (USB 2.0 and USB 3.0), also unused. Internally it has a dual core Intel Celeron N3050 CPU (which crucially supports AES-NI for hardware crypto acceleration), 4 GB of DDR3 RAM and a 64 GB SSD, all housed in a fanless aluminium case not much larger than a typical external hard disk drive. Very neat, and in testing it rarely runs hotter than around 32 degrees centigrade.
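If you are choosing hardware for the same job it is worth confirming that the CPU really does offer AES-NI before you buy. A quick sketch of how to check from a shell (the Linux command reads the CPU flags; the FreeBSD one – FreeBSD being what pfSense is built on – greps the boot messages):

# on Linux, the flag "aes" in the CPU flags indicates AES-NI support
grep -o -m1 -w aes /proc/cpuinfo

# on FreeBSD, the CPU feature list in the boot messages includes AESNI
dmesg | grep -i aesni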

So: what does my configuration look like?

Initial configuration is fairly straightforward: it takes place during the installation and consists of assigning the WAN and LAN interfaces and setting the IP addresses. When this is concluded, additional general configuration is handled through the “setup wizard” available from the web based GUI which appears on the LAN port at the address you have assigned. This early configuration includes: naming the firewall and local domain; setting the DNS and time servers; and some configuration of the GUI itself. In my case I have local DNS forwarders on both my inner and outer local nets so I pointed pfSense to my outer local forwarder (which, in turn, forwards queries to my external unbound resolvers). Most users will probably configure the DNS address to point to their ISP’s server(s). At this point it is a good idea to change the default admin password and then reboot before further configuration.

One point worth noting here is whether to set the pfSense box as a DNS forwarder, or resolver. In most configurations you will wish to simply forward requests to an external forwarder or resolver (as do I). Internally pfSense uses DNSmasq as a forwarder and unbound as a caching resolver so you could use the new firewall itself to resolve addresses. Forwarding is simpler.

I did all the initial configuration off-line so as not to interrupt my existing network setup. But once I was happy with the new pfSense box I then had to simply amend the configuration of my existing internal router so that its RFC1918 WAN address matched the LAN address set on the new firewall (.1 at one end and .254 at the other). I had configured the WAN address of the pfSense box to match my existing external router setup so that insertion of the new box between the two routers caused minimum disruption. The new network looks something like this: (click the image for a larger view).

At this stage, the pfSense box is simply acting as a new NAT firewall and router. Testing from various points on the internal net showed that traffic flowed as I expected.

Now for the OpenVPN client configuration.

This assumes that we are using TLS/SSL with our own pre-configured CA, certificates and keys. pfSense allows you to set up your own OpenVPN server and certificates if you wish. I chose not to do that because I am re-using an existing setup. You could also use the simpler pre-shared key setup (if this makes you feel safe).

These are the steps I followed:

1. Go to System -> Cert Manager -> CA

Add the new CA.
Give it a descriptive name (such as “My Certificate Authority”).
Import an existing Authority.
Paste in your X509 certificate and (optional but recommended) the private key for that certificate.

Save.

2. Go to System -> Cert Manager -> Certificates

(Note that there will already be a self-signed cert for the pfSense web configuration GUI.)

Add a new certificate.

Again give it a descriptive name (such as “My Openvpn Certificate”).
Import an existing certificate.
Paste in your X509 Certificate and private key.

Save.

3. Go to VPN -> OpenVPN -> Clients

Add a new client.

In the General Information section:

Ensure the server mode is correct for your setup (we are using Peer to Peer TLS/SSL).
Check that the protocol and device mode are correct for your setup and that the interface is set to WAN.
Add the host server name or IP address for the remote end of the tunnel.
Give the connection a meaningful name (e.g. “hostname” in Paris).

If you use authentication, add the details.

In the Cryptographic settings section:

Ensure “use a TLS key” is checked.
But uncheck “automatically generate a TLS key” (because we have our own already).
Now paste in the TLS key and ensure that “TLS key usage mode” matches your use case (TLS Authentication or TLS Encryption and Authentication).
Select your previously created CA certificate from the “Peer Certificate Authority” drop down box together with any relevant revocation list.
Select your client Certificate (created at step 2 above) from the drop down box.
Select the encryption algorithm you use.
If you allow encryption algorithm negotiation at the server, then check the “Negotiable Cryptographic Parameter” box and select the algorithm(s) you want to use.
Select the “Auth digest algorithm” in use (I recommend a minimum of SHA256 – personally I use SHA512, but this must match the server end).
If your hardware supports it (AES-NI for example) then select “Hardware Crypto”.

In the Tunnel Settings section:

Leave everything at the default (because our servers set the Tunnel addresses) but ensure that the compression settings here match the remote server. Personally I disable compression (see OpenVPN documentation for some reasons) so I set this to “comp-lzo no” at both ends of the tunnel.

Finally, in the Advanced Configuration section:

Paste in any additional configuration commands that you have at the server end which have not been covered above.
I use:

remote-cert-tls server;
key-direction 1;
persist-key;
persist-tun

and select IPV4 only for the gateway connection (unless you actually use IPV6) and also select an optional log verbosity level. You may choose a high level whilst you are testing and change it later when all is working satisfactorily.

Save.

4. Repeat 3 above to create clients for all other servers (or VPN services) you may have.

Note that if you have multiple client configurations (as I do) then you should ensure that only one client at a time is enabled. You can selectively enable and disable clients later by editing the configuration at VPN -> OpenVPN -> Clients.

5. Go to Interfaces -> Assignments -> Interface Assignment

Select an interface to assign to one of the clients created at 3 or 4 above from the drop down boxes.
Enable the interface by checking the box and give the interface a meaningful name (such as “tunnel to Paris”). (“We’ll always have Paris….”).
Leave everything else as the default and save.

Now allow access to the tunnel(s) through the interface(s):

6. Go to Firewall -> NAT -> Outbound

Check the radio button marked “select Manual Outbound NAT rule”. All the Firewall rules on the WAN interface which were created automatically as a result of your initial general setup will be shown. The source addresses for these rules will be the local loopback and the LAN IP address you set.

Add a new rule to the bottom of the list.

In the “Advanced Outbound NAT entry” section:

Change the address family to IPV4 only (if appropriate).
Give the source as the LAN network address of the pfSense F/W.
Leave other entries at the default.

Save.

7. Go to Firewall -> Rules -> LAN

Disable the IPV6 rule (if appropriate to your use case).

8. Go to Firewall -> Rules -> OpenVPN

Add a new rule to Pass IPV4 through the interface called OpenVPN. Give the rule a meaningful description (such as “allow traffic through the tunnel”).

9. Now finally go to Status -> OpenVPN

The (single) OpenVPN client you have enabled from 3 above should be shown as running. You can stop or restart the service from this page.

10. Now check that traffic is actually going over the tunnel by checking your public IP address in a web browser (I use “check2ip.com” amongst others).
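If you prefer a command line check from a machine behind the firewall, any of the “what is my IP” services will do the same job. A sketch (ifconfig.me is simply one example of such a service, not a recommendation):

# the address returned should be that of the VPN endpoint,
# not the address your ISP has assigned to you
curl -4 https://ifconfig.me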

If all is working as you expect and you have multiple VPN endpoints, try disabling the tunnel you are using (from “VPN -> OpenVPN -> Clients, Edit Client”) and selectively enabling others. Check the status of each selected tunnel in “Status -> OpenVPN” and reload as necessary.

In my case, with the hardware I have chosen, and the configuration given above, I now get near native speed over any of my VPN tunnels. It will be interesting to see what I get should I move to even faster broadband in future.
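For reference, the client settings entered at step 3 above map roughly onto the stanza below as it would appear in a conventional OpenVPN client configuration file. This is a sketch only – the host name, port, cipher and digest are placeholders and must match whatever your own server uses – but it may help anyone translating an existing client config into the pfSense GUI (or back again):

client
dev tun
proto udp
remote vpn.example.org 1194
remote-cert-tls server
ca ca.crt
cert client.crt
key client.key
# "use a TLS key" with TLS Authentication in the GUI equates to:
tls-auth ta.key 1
# these must match the server end
cipher AES-256-GCM
auth SHA512
comp-lzo no
persist-key
persist-tun
verb 3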

Enjoy.

Permanent link to this article: https://baldric.net/2019/07/07/openvpn-clients-on-pfsense/

one unbound and you are free

I have written about my use of OpenVPN in several posts in the past, most latterly in May 2017 in my note about the Investigatory Powers (IP) Bill. In that post I noted that all the major ISPs would be expected to log all their customers’ internet connectivity and to retain such logs for so long as is deemed necessary under the Act. In order to mitigate this unwarranted (and unwanted) surveillance as much as possible, I wrap my connectivity (and that of my family and any others using my networks) in an OpenVPN tunnel to one of several endpoints I have dotted about the ‘net. This tunnel shields my network activity from prying eyes at my ISP, but of course does not stop further prying eyes at the network end point(s). Here I am relying on the fact that my use of VMs in various European datacentres, and thus outside the scope of the IP Act, will give me some protection. But of course I could be wrong – and as I pointed out in my comparison of paid for versus roll your own VPNs, “there is no point in having a “secure” tunnel if the end server leaks like a sieve or is subject to surveillance by the server provider – you have just shifted surveillance from the UK ISP to someone else.”

That aside, I feel more comfortable in using my own VPN, to an end point I have chosen, in a location I have chosen, with a provider I have chosen, than I do in simply exiting my domestic ISP’s network with all that I /know/ they will be doing to log my activity. Call me picky.

Now one glaring omission in my protective stance has always been my reliance on third party DNS servers. Again, as I noted in my 2017 post, many commercial VPN providers rely on DNS servers of questionable reliability. By that I mean not that the DNS servers would necessarily fail, but that they could not be fully trusted. Google DNS servers (on 8.8.8.8 and 8.8.4.4) for example are very popular with ISPs precisely because the infrastructure they provide /is/ robust and reliable. But Google log your requests. In fact they are in a very powerful position. I can’t find statistics on the total proportion of DNS requests answered by Google (and I have looked, trust me) but back in late 2014, Google themselves stated “Google Public DNS resolvers serve 400 billion responses per day and more than 50% of them are location-sensitive.” That worries me – and it should worry Tor users (a naturally shy bunch of internet users) even more. Back in 2016, “Freedom to Tinker” published a blog post by the researchers Philipp Winter, Benjamin Greschbach, Tobias Pulls, Laura M. Roberts, and Nick Feamster (later published in a paper at nymity.ch (PDF)). That research found “that a significant fraction of exit relays send DNS requests to Google’s public resolvers. Google sees about one–third of DNS requests that exit from the Tor network—an alarmingly high fraction for a single company, particularly because Tor’s very design avoids centralized points of control and observation.” Discussion on the Tor relays email list suggests that, even today, DNS lookups remain a threat to Tor users’ privacy and anonymity.

But worse than just logging, some DNS providers (notably Quad9 on 9.9.9.9 and 149.112.112.112 and OpenDNS on 208.67.222.222 and 208.67.220.220) actively hijack and interfere with DNS requests. OpenDNS actually make a marketing point of this interference by saying that they will block access to “adult” sites (in the name of parental protection of course). Others, such as cleanbrowsing (on 185.228.168.9 & 185.228.169.9) make a similar virtue of blocking access to “malware” or “adult” sites. All this may appear eminently laudable, and for some people hoping to manage the sort of sites their kids access from home it may seem an attractive option. But I’m a purist. A DNS server should do one thing and only one thing and it should do it well. It should answer DNS requests according to RFCs 1034 and 1035 (which obsoleted RFCs 882 and 883). It most certainly should not, for example, intercept requests and provide pointers to websites owned by the provider when that provider deems it appropriate to do so. If I ask for the A record for the DNS name “nosuchserver.org”, for example, I should get the answer “NXDOMAIN” (as recommended by RFC 2308) telling me that that name does not exist in the DNS system. I most categorically should /not/ get a record pointing to another site. Indeed, back in the early part of this century, Verisign, who were the registry responsible for the .net and .com domains, introduced what they called the “Site Finder service”. (See also the wikipedia article for further discussion.) That “service” (or in reality nothing less than a naked power grab by Verisign) returned the address of a Verisign owned and managed web server whenever a request was received for an unregistered .com or .net domain name. Fortunately, in this case ICANN stepped in in 2003 and forced Verisign to desist. But this example merely serves to illustrate how easy it is to interfere with legitimate DNS requests. UK ISPs do this all the time these days. They have to by law, not least in order to apply the (somewhat controversial) blocklists provided by The Internet Watch Foundation.

On my own network internally, I run dnsmasq as a local caching resolver – well actually, I run two such resolvers, one on my inner net, the other on my outside net which has a slightly different security policy stance. The advantage of running such local caches is that I can interfere with my /own/ DNS requests. I do this deliberately in order to block requests to sites I don’t want to see, and which attempt to infringe my privacy. dnsmasq gives me a very simple mechanism to do this through its configuration directive “addn-hosts=” which forces dnsmasq to consult a file similar to the local database of known hosts typically listed at “/etc/hosts”. In my case I set this to “addn-hosts=/etc/hosts.block”, where hosts.block is a locally modified copy of Dan Pollock’s hosts file. So any website which tries to direct my browser to facebook, or google-analytics, or any other of the myriad irritating sites which try to shove cookies at me or track me or collect data about me (and these days, unfortunately, that is most of them) won’t succeed. I hate advertising sites and I loathe facebook in particular. So they get pointed to 127.0.0.1.
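For anyone wanting to do the same, the relevant fragments amount to no more than the sketch below (the two blocked host names are purely illustrative – the real file runs to several thousand entries):

# /etc/dnsmasq.conf (fragment)
addn-hosts=/etc/hosts.block

# /etc/hosts.block (fragment) – same format as /etc/hosts
127.0.0.1 www.google-analytics.com
127.0.0.1 www.facebook.com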

But as I said above at the start of this post, one glaring omission in my attitude to DNS resolution was my reliance on external third party DNS servers for addresses not covered by my local resolver. As I said in my 2017 post, my local dnsmasq resolver files pointed to the OpenVPN endpoint(s) for resolution and both those servers and my local DNS resolvers pointed only to opennic DNS servers. I trust those servers a lot more than I would any of the larger public DNS servers, but they have flaws. The biggest problem is that many of those servers seem to be run by people like me – essentially hobbyists or activists who dislike internet censorship. There is nothing wrong with that, in fact I applaud it, but it often means that the DNS servers themselves are underspecced or underpowered and/or run on VMs in low bandwidth datacentres (because they are cheap). This means in turn that the servers themselves will often be overloaded, or periodically offline, or will even disappear altogether. This makes maintenance of my list of preferred servers too much of an overhead. I like simplicity. (As an aside, because I am naturally a suspicious sort of chap, there is also the possibility that one or more of the opennic servers may actually be run by persons I ought NOT to trust. It is well known amongst the Tor fraternity, for example, that a proportion of the exit nodes at any one time may well be run by Government agencies, or others, keen to de-anonymise Tor users. If you are shown to care about your privacy, by using Tor for example, then of course you “must have something to hide”. Similar reasoning may lead “bad guys” (TM) to wish to run opennic servers. After all, they are all run by volunteers…..)

So, what to do to enhance my (fragile) privacy? Enter unbound, a validating, recursive, caching DNS resolver, designed to be fast and secure. Better yet, unbound supports the emerging standard for encrypted DNS and does DNSSEC validation by default in most configurations. Unbound is distributed under a BSD license and can be found in most linux repositories or bsd ports collections. It is also freely available in source form from NLnet labs. The extensive documentation is also excellent.

The configuration file options allow for extensive control over how unbound operates, but a simple configuration can use as few as 8 or 9 lines of text. My own configuration hides both the identity and the version of unbound in use, limits unbound to IPv4 only and disables all logging (for obvious reasons). And, of course I only allow queries from my own servers or networks – I don’t want to be used by all and sundry on the internet.
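By way of illustration, a minimal unbound.conf along those lines might look something like the sketch below. The addresses are placeholders (192.0.2.0/24 is a documentation range) and you should check the unbound.conf(5) man page rather than lift anything from here verbatim:

server:
    # don't advertise who or what we are
    hide-identity: yes
    hide-version: yes
    # IPv4 only
    do-ip4: yes
    do-ip6: no
    # keep logging to the bare minimum
    verbosity: 0
    # listen on all interfaces but answer only my own networks
    interface: 0.0.0.0
    access-control: 0.0.0.0/0 refuse
    access-control: 127.0.0.0/8 allow
    access-control: 192.0.2.0/24 allow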

It is worth noting that several authors have published suggestions aimed at mitigating the threat to Tor users posed by relying on third party DNS servers. One nice example by Antonios A Charlton at daknob.net proposes the use of the PowerDNS recursor, a recursing-only server which holds no authoritative data of its own – it always queries authoritative servers. PowerDNS has many fans; I simply prefer unbound in my environment. YMMV.

Permanent link to this article: https://baldric.net/2019/06/26/one-unbound-and-you-are-free/

back to the gym

Having just returned from a family holiday which included too much food and drink and nowhere near enough exercise (well, that’s what holidays are for) I needed to get back to the gym in order to work off some of the excess. My local gym has recently undergone a major refurbishment and equipment upgrade and some of the workstations (notably the treadmills) now have integrated touch screens providing access to a variety of services. As you can see from the picture below, these services range from the obviously relevant such as details of your workout, your heartrate or linkages to fitness trackers, through TV, Youtube or Netflix access, to the less obviously necessary social media services such as Facebook, Instagram and Twitter. God knows how you can tweet and run at the same time and it is beyond me why anyone would even consider giving their social media account details to a gym company. But hey, the technology is there and people do use it.

treadmill screen

Before the refurbishment all we had was wall mounted TV screens in front of the treadmills and static bicycles so the ability to pick my own TV programme during a workout rather than having to watch yet another bloody episode of “Homes under the Hammer” or “This Morning” was welcome. What I confess was also attractive was the option to watch Netflix. I pondered for a while the wisdom of plugging in my Netflix account details to my workstation login, but eventually concluded that the ability to watch more of what I wanted than was available on TV at the times I use the gym was worth the (fairly low) risk of loss of my Netflix credentials. After all, breach of my Netflix credentials would not expose anywhere near as much about me as would be the case if I was daft enough to use Facebook, Instagram and Twitter and then give /those/ credentials to a third party.

My treadmill workouts usually take around 45-50 minutes before I move on to my other exercises. Not enough time for a film, but ample time for TV re-runs or box set episodes so I have been doing just that. On my return to the gym after the holiday I found myself watching early episodes of Black Mirror. Now there is something faintly surreal about watching Charlie Brooker and Konnie Huq’s “15 Million Merits” (which is about people riding exercise bikes whilst watching interactive video screens in order to gain “social media” points) on a touch screen attached to a gym treadmill.

Especially when that system is made by a company called Matrix.

Permanent link to this article: https://baldric.net/2019/06/11/back-to-the-gym/

more in the “you couldn’t make it up” dept

The UK Parliamentary petitions site is currently hosting what appears to be one of the most popular petitions it has ever listed. The petition seeks to gain support for revocation of article 50 so that the UK can remain in the EU. Personal politics aside (though in the interests of transparency I should say that I am a passionate supporter of remain) I believe that this petition, or one very like it, was inevitable given our dear PM’s completely shambolic handling of the whole brexit fiasco. Her latest “appeal” to the “tired” public to get behind her version of brexit, in which she lays the blame for the delay in getting her deal over the line in the lap of MPs, was probably the last straw for many. It is certainly a risky strategy because she needs the support of those very MPs to get the agreement she wants.

Telling the public that she is “on [y]our side” and that she understands we have “had enough” is just asking for a kicking. So when the twitter hashtag #RevokeArticle50 pointed to the Parliamentary petition seeking the revocation of the whole sorry business it became almost inevitable that the public would respond appropriately. At one stage the petition signing rate was the highest ever seen.

Inevitably, however, the site could not cope with this demonstration of the will of the people and it slowed, and eventually crashed – repeatedly. When I went to sign the petition at around 16.00 today, it took me several attempts to get past the “nginx 502 Bad Gateway” page and get a “thank you for signing” message.

Of course, unless I actually get the email message referred to, and I respond, then my signature won’t count. Right now though, the entire site is off line – but don’t worry, they are working on it.

As of 17:25 today, there were some 1,114,038 recorded signatures, and it is still growing. But don’t get too excited, Andrea Leadsom has reportedly dismissed the petition, saying that HMG will only take any notice if the total rises above 17.4 million – the number who voted in favour of leaving the EU.

Don’t you just love our political system?

Permanent link to this article: https://baldric.net/2019/03/21/more-in-the-you-couldnt-make-it-up-dept/

postfix sender restrictions – job NOT done

OK, I admit to being dumb. I got another scam email yesterday of the same formulation as the earlier ones (mail From: me@mydomain, To: me@mydomain) attempting to extort bitcoin from me.

How? What had I missed this time?

Well, this was slightly different. Checking the mail headers (and my logs) showed that the email had a valid “Sender” address (some bozo calling themselves “susanne@mangomango.de”) so my earlier “check_sender_access” test would obviously have allowed the email to pass. But what I hadn’t considered was that the sender might then spoof the From: address in the data portion of the email (which is trivially easy to do).

Dumb, so dumb. So what to do to stop this?

Postfix allows for quite a lot of further directives to manage senders through smtpd_sender_restrictions, and mine were still not tight enough to stop this form of abuse. One additional check is offered by the reject_sender_login_mismatch directive which will:

“Reject the request when $smtpd_sender_login_maps specifies an owner for the MAIL FROM address, but the client is not (SASL) logged in as that MAIL FROM address owner; or when the client is (SASL) logged in, but the client login name doesn’t own the MAIL FROM address according to $smtpd_sender_login_maps.”

Now since I store all my user details in a mysql database called “virtual_mailbox_maps” it is simple enough to tell postfix to use that database as the “smtpd_sender_login_maps” and check the “From” address against that. That way only locally authenticated valid users can specify a local “From:” address. Why I missed that check is just beyond me.

My postfix configuration now includes the following:

smtpd_sender_login_maps = $virtual_mailbox_maps

smtpd_sender_restrictions = permit_sasl_authenticated, permit_mynetworks, reject_non_fqdn_sender, reject_unauthenticated_sender_login_mismatch, check_sender_access hash:/etc/postfix/localdomains

(Note that I chose to use the “reject_unauthenticated_sender_login_mismatch” rather than the wider “reject_sender_login_mismatch” because I only care about outside unauthenticated senders abusing my system. I can deal with authenticated users differently…)
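For completeness, and because the postfix documentation is a little terse here, below is a heavily hedged sketch of the sort of mysql map file that might sit behind $virtual_mailbox_maps. The table and column names are invented for illustration and will not match anyone else’s schema; for the unauthenticated check all that matters is that the lookup returns something for addresses I own and nothing for anything else:

# /etc/postfix/mysql-virtual-mailbox-maps.cf (illustrative only)
user = postfix
password = secret
hosts = 127.0.0.1
dbname = mailserver
query = SELECT email FROM virtual_users WHERE email='%s'

# verify a lookup from the command line
# postmap -q me@mydomain mysql:/etc/postfix/mysql-virtual-mailbox-maps.cf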

Now let’s see what happens.

Permanent link to this article: https://baldric.net/2019/02/16/postfix-sender-restrictions-job-not-done/

postfix sender restrictions

I mentioned in my previous post that I had recently received one of those scam emails designed to make the recipient think that their account has been compromised in some way and that, furthermore, that compromise has led to malware being installed which has spied on the user’s supposed porn habits. The email then attempts a classic extortion along the lines, “send us money or we let all your friends and contacts see what you have been up to.”

In the scam as described by El Reg, the sender tries to lend credence to the email by including the recipient’s password. As the Reg points out, this password is likely to have been harvested from a web site used in the past by the poor unsuspecting recipient. In my case, the sender didn’t include any password, but they did send the email to me from the email address targeted (so they sent email to “mick@domain” with sender “mick@domain”). Needless to say, I thought that this should not have been possible (except in the unlikely scenario that the extortionist actually had compromised my mail server). After all, my mail server refuses to relay from addresses other than my own networks, and all mail sent from my server must come from an authenticated user (using SASL authentication). My postfix sender restrictions looked like this:

# sender relaying restrictions – authenticated users can send to anywhere

smtpd_sender_restrictions = permit_sasl_authenticated, permit_mynetworks, reject_non_fqdn_sender, permit

That says that locally authenticated users can send mail anywhere, but we should reject the sending request when the MAIL FROM address specifies a domain that is not in fully-qualified domain form as is required by the RFC. This stops outsiders trying to send mail to us from non-existent or badly forged from addresses. The final permit allows checking to proceed to the next steps (the relay and recipient restrictions).

So what was going on?

Well, there was nothing in my restrictions to say that an outsider could not send to a local user (i.e. an email recipient on one of my domains). After all, that is part of the function of my mail system – it must accept (valid) email from the outside world aimed at my local users. But therein lay the problem. My mail connection checks (along with the “smtpd_helo”, “smtpd_relay” and “smtpd_recipient” restrictions) enforced outbound checks and limited mail sending to outside domains to locally authenticated users, but inbound checks assumed (incorrectly as it turns out) that the sender domain was external to me (i.e. FROM someone@external.domain TO someone@internal.domain). Crucially I had omitted to enforce any rule stopping someone sending FROM someone@internal.domain TO someone@internal.domain. On reflection that was dumb – and the “extortionist” had taken advantage of that mistake to try to fool me.

Fixing this is actually quite easy. Postfix allows the smtpd_sender_restrictions to include a variety of checks, one of which is “check_sender_access”. This enforces checks against a database of MAIL FROM addresses, domains, parent domains, or localpart@ entries, specifying actions to take in each case. The database table contains three fields – domain-to-check, action-to-take, optional-message.

So I created a database of local domains called /postfix/localdomains thus:

first.local.domain REJECT Oh no you don’t. You’re not local!
second.local.domain REJECT Oh no you don’t. You’re not local!
third.local.domain REJECT Oh no you don’t. You’re not local!
etc

(I was tempted to add a rude message, but thought better of it…..)

Postfix supports a variety of different table types. You can find out which your system supports with the command “postconf -m”. I chose “hash” for my table. The local database file is created from the text table with the command “postmap /etc/postfix/localdomains”. Having done that I added the check to my sender_restrictions thus:

smtpd_sender_restrictions = permit_sasl_authenticated, permit_mynetworks, reject_non_fqdn_sender, check_sender_access hash:/etc/postfix/localdomains, permit

and reloaded postfix. Job done.
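As a quick sanity check that the compiled table does what you expect, you can query it directly with postmap – a sketch:

# should print the REJECT action and message for a local domain...
postmap -q first.local.domain hash:/etc/postfix/localdomains
# ...and print nothing (exit status 1) for anything else
postmap -q example.com hash:/etc/postfix/localdomains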

Permanent link to this article: https://baldric.net/2019/01/24/postfix-sender-restrictions/

congratulations to BT

I have been running my own mail server now for well over a decade. Whilst the actual physical hardware (or actually VPS system) may have changed once or twice during that time, the underlying software (postfix and dovecot on debian) has not really changed all that much. However, what has changed over the last decade or so, is the expectation that mail systems will be much more robust, better managed, less insecure (no more “open relays”) and harder on spam than had been the case in the early days of wide takeup of email by the public. Ignoring the “free” offerings from the likes of google, microsoft and others, it would arguably be cheaper, and certainly easier, for me to simply pay for an external mail service from one of the many providers out there. It is pretty easy to find companies offering to host personal email for about a tenner or at most twenty pounds a year. Those “solutions” (as providers seem to love to call their products) usually give you decent anti-spam, A/V scanning, POP3S/IMAPS connectivity (or if you really must, a webmail interface) and can usually alias mail to your preferred domain – particularly if you buy a domain name with your email service. But they always have limitations that I don’t like. The most obvious ones are: restrictions on the number of actual email addresses (as opposed to aliases), limited storage (though that is becoming less of a problem these days), and artificial restrictions on attachment sizes.

And I’m bloody-minded. I like to control my own email. I run my own email service for the same reason I manage my own DNS, run my own webservers, manage my own wordpress installation, run my own XMPP server and VPNs and manage my own domestic local network with assorted servers hanging off it. I like control and I dislike the opportunity outsourced services have for providing third parties access to my data. My personal data.

Besides, a boy needs a hobby.

However, I do occasionally get one or two problems in mail delivery – though usually /to/ my system rather than /from/ my system. For example I still get the occasional spam or cruddy email which gets past my protection mechanisms. Indeed I recently received one of those ridiculous extortion scam emails purporting to come from my own email address – more of which later – but this post is about an outbound mail failure from me to a friend of mine with a btinternet.com account.

I routinely correspond by email with a bunch of long-standing friends who once lived relatively close together but are now more widely geographically dispersed. The group (or sub-groups in some cases) get together on occasion for holidays, outings and meals. For some odd reason, many of those friends of mine have AOL accounts (I know, I know, but try telling them that). In a list of about two dozen regular correspondents, about a quarter of those people use AOL. The majority of the rest use BT, hotmail and gmail with one or two minor providers or work-based accounts. On occasion in the past I have had mail to those AOL-based accounts refused by AOL on the spurious grounds that my mail looked like spam because it was aimed at about half a dozen separate AOL accounts all at once. Well, that’s what happens when you “reply-all” to a mail list. Sadly AOL never could figure this out. After a while I gave up emailing their postmaster explaining the problem (and it was /their/ problem, identical email to the individual accounts always got through) because I never, ever, received a reply.

But this is about BT, not AOL.

Members of the mail list are shortly to meet for the group’s annual Christmas meal (it is always late, but hey) and one member “volunteered” to arrange the gathering, find a venue, sort menus etc. Said member has a btinternet email account (@btinternet.com) and he circulated a menu seeking choices for the meal. My reply was refused by BT with a “hard” 554 message which was reported to me by my mail system as below:

The mail system

forename.surname@btinternet.com: host mx.bt.lon5.cpcloud.co.uk[65.20.0.49] said:
554 Message rejected on 2019/01/15 15:00:24 GMT, policy (3.2.2.1) – Your
message looks like SPAM or has been reported as SPAM please read
www.bt.com/bulksender (in reply to end of DATA command)

Now this was decidedly odd, because not 10 days beforehand I had happily sent earlier mails to the same address when our volunteer was initially talking about venue and proposed dates for the gathering. Just to be certain I wasn’t at fault, I checked the advice given by BT on their mail site referred to by the bounce message. Now the only thing I do not have set up for my mail server is DKIM signing. Everything else is hunky-dory – Proper “From” address? check. SPF? check. Proper MX records? check. Fixed IP address? check. PTR record? check. Good reputation? check. Not blacklisted? check (mxtoolbox says I’m fine). Furthermore, I never send HTML email (which I abhor as an abominable bastardisation of proper email standards) so did not have any embedded images or other bloody silly links in my mail. So after trying once or twice more later in the day (and failing) I emailed the BT postmaster saying I was having a problem and pointing out that whilst I might not use DomainKeys, there seemed to me to be little else wrong with my email. I didn’t expect an answer, but you have to try.
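(For anyone wanting to run the same checks against their own domain, the usual dig incantations below will cover most of it – the domain and address are placeholders, and blacklist status is easiest checked at a service such as mxtoolbox.)

# all placeholders – substitute your own mail domain and sending address
dig +short MX example.org      # proper MX records?
dig +short TXT example.org     # SPF record published?
dig +short -x 203.0.113.25     # PTR record for the sending IP?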

BT responded – and they responded quickly. I sent my notification, with the failure message, to the BT postmaster address timed at 17.16. At 17.23 I received a reply saying:

“Hi,
Can you please send an example of the failing email to [investigation-address]@btinternet.com.
Please do not forward the email as an attachment but resend it.
Please let “postmaster” know when this has been sent so we can check the email’s content and possible reason for thinking it is spam.

Thank You,”

Slightly stunned, I did as requested and a short time later (at half past midnight when I was asleep) I received another email from BT saying:

“Hi,
That email is scoring high as spam so I have reported it to our spam engine provider, I will email you again when I have some news.
Thank you,”

Sure enough, that same morning at 02.50, I received the following good news:

“Hello,

We have made a change that should stop the emails being scored as spam, this change is being rolled-out now so please try again later.

Thanks”

On reading this when I got up that day I resent my email and, sure enough, it got through. Way to go BT! I have never, ever received that kind of rapid response from any ISP anywhere in the world – and I quite often email “abuse@” network addresses when some toerag or particularly persistent ‘bot shows up in my logs trying to do things I don’t like.

However, as much as I would like to believe that BT fixed a problem simply to accommodate my mail system, I actually think that unlikely. Given that mail from my system to @btinternet.com addresses had been working fine up until a few days ago, I think it much more likely that BT mail administrators had made some recent change, perhaps in one of their spam filters, which caused significant volumes of inbound mail to be rejected. My email had then simply been caught up in that wider problem and they were receiving queries or complaints from other mail administrators and not just me. Be that as it may, they still responded correctly and efficiently as they moved to rectify whatever was causing the problem. So, my congratulations and heartfelt thanks to the BT postmaster team for actually doing the sort of job that postmasters are supposed to, but rarely do properly.

Permanent link to this article: https://baldric.net/2019/01/23/congratulations-to-bt/

always keep the address

I normally post a “happy birthday trivia” message at this time of year. Indeed I have been doing this for 12 years now. Of late my posting has become less frequent, which is somewhat odd since I now have much more free time than I had back when I started trivia. But no matter – some things are much more important than blogging.

This year I was struck by a BBC article by the poet Ian McMillan which I read yesterday. The article recalls how McMillan briefly met a chap called “Brian” at Jersey airport on a breezy night in autumn many years ago. McMillan was apparently very worried about the impending flight but was reassured by Brian that all would be well. After chatting for a short while and just before boarding the flight, Brian and McMillan swapped addresses and said that they would stay in touch. Unfortunately McMillan then lost Brian’s address. But Brian obviously did not lose McMillan’s address because each Christmas thereafter he sent a card, despite receiving nothing back.

The article ends with McMillan saying:

“Always keep the address. Always remember where people are, and then you can translate those moments of the kindness of strangers into a winter scene and a first class stamp.”

I’d say that was good advice.

Merry Christmas all.

Permanent link to this article: https://baldric.net/2018/12/24/always-keep-the-address/

wordpress 5.0 editor error

When I posted yesterday I noticed that there was a new version (5.0) of wordpress available for installation. So I decided to spend a short while today upgrading as I always do when a new software version is released. But I hit a snag – a big one.

The new version of wordpress includes a completely re-written editor called “gutenberg”. That editor fails quite spectacularly for many users. In my case I could not edit any existing posts or pages and wordpress threw up the error message shown below:

No “attempts at recovery” were successful. So I was left with a broken upgrade and no way to edit any of my existing posts. Not good.

Now I always make backups before any upgrade so I thought I’d just roll back to the earlier version and reinstall the database and then wait until wordpress fixed whatever was wrong (probably in a 5.1 release). However, since I’d already gone to the trouble of completing the upgrade I thought I’d first check to see how many others had hit the same snag and see if there was a workaround. It seems the error is widespread. There is some differing advice online as to whether the error is caused by a conflict with some plugin or other, but since I don’t use many plugins, and certainly not the ones which seemed to get most of the blame, that didn’t seem to be the case for me. Certainly I couldn’t remove a plugin I don’t have.

There is, however, a fix released by wordpress in the shape of a plugin called “classic editor”. This plugin replaces the new (broken) editor with the old, (working) one. Once I’d installed that I was good to go again.

But, and this is a big but, the fact that the plugin has had over 900,000 downloads to date suggests very strongly that a) the new editor is seriously borked, and b) many users, like me, are happy with the classic editor.

Does this remind anyone of Microsoft?

Permanent link to this article: https://baldric.net/2018/12/12/wordpress-5-0-editor-error/

well I never

It’s not often that I find myself agreeing with GCHQ, but ex-GCHQ Director Robert Hannigan’s recent comments in an interview with the BBC Today programme struck a chord.

Hannigan headed GCHQ from April 2014 until his resignation for family reasons last year. Whilst in post he pushed for greater transparency at the SIGINT agency. He was responsible for setting up the National Cyber Security Centre in 2017. And in 2016 he argued publicly in favour of strong encryption and against the idea of “back doors” in crypto software. So, arguably, Hannigan is more liberal and open than is common in GCHQ. Certainly his approach was very different to that of his predecessors Iain Lobban or David Pepper.

In his Today interview, Hannigan said of Facebook:

“This isn’t a kind of fluffy charity providing free services. It is a very hard-headed international business and these big tech companies are essentially the world’s biggest global advertisers, that’s where they make their billions.

“So in return for the service that you find useful they take your data… and squeeze every drop of profit out of it.”

Asked if Facebook was a threat to democracy, Hannigan said:

“Potentially yes. I think it is if it isn’t controlled and regulated.

“But these big companies, particularly where there are monopolies, can’t frankly reform themselves. It will have to come from outside.”

So he is arguing for greater democratic control of the behemoth which is Facebook (and by extrapolation, other similar companies such as Google). That may put him at odds with many in the US.

More interestingly though, Hannigan also went on to comment on the Chinese Telecoms giant Huawei.

Huawei has been in the news a lot recently. Last week (7 December) Meng Wanzhou, Huawei’s chief financial officer and the daughter of its founder, was detained at Vancouver airport on a US extradition request. In November, New Zealand reported that it had decided to follow the lead of the US and Australia in barring Huawei from involvement in its 5G networks. Canada is reportedly carrying out a security review of Huawei telecoms equipment, and in the UK, BT has said that it will be removing Huawei kit from the core of its 5G network. All these decisions are said to flow from fears that China may be using Huawei as a proxy so it can spy on rival nations.

Hannigan had this to say about Huawei:

“My worry is there is a sort of hysteria growing at the moment about Chinese technology in general, and Huawei in particular, which is driven by all sorts of things but not by understanding the technology or the possible threat. And we do need a calmer and more dispassionate approach here.”

He went on to say “no malicious backdoors” had been found in Huawei’s systems, although there were concerns about the firm’s approach to cyber security and engineering.

He added:

“The idea… that we can cut ourselves off from all Chinese technology in the future, which is not just going to be the cheapest – which it has been in the past – but in many areas the best, is frankly crazy.”

Indeed. It is worth remembering that in 2005 BT selected Huawei as a preferred supplier of equipment for its 21CN network – much to the chagrin of the obvious competitors. Marconi never recovered from the loss of sales to BT, which took the decision on the entirely hard-headed basis of best value for money (i.e. cost).

At the time of the decision by BT to go with Huawei there were lots of rumblings about “security concerns”. Those rumblings have never gone away and the UK is still under pressure from the US to ditch Huawei. But it could be argued that the biggest reason for this is actually a protectionist desire by the US to see its main communications infrastructure companies (Cisco, Juniper et al) getting business rather than the newcomers from China.

And who is to say that equipment from those US companies poses any less of a security threat than that from Huawei? I’d guess that the NSA would much prefer to see US equipment deployed across the world’s telecoms companies – for fairly obvious reasons – the very same reasons which are adduced against Huawei.

Permanent link to this article: https://baldric.net/2018/12/11/well-i-never/

re-encrypting trivia

Back in June 2015 I decided to force all connections to trivia over TLS rather than allow plain unencrypted connections. I decided to do this for the obvious reason that it was (and still is) a “good thing” (TM). In my view, all transactions over the ‘net should be encrypted, preferably using strong cyphers offering perfect forward secrecy – just to stop all forms of “bad guys” snooping on what you are doing. Of course, even in such cases there are still myriad ways said “bad guys” can get some idea what you are doing (unencrypted DNS tells them where you are going for example) but hey, at least we can make the buggers work a bit harder.

Unfortunately, as I soon discovered, my self-signed X509 certificates were not well received by RSS aggregators or by some spiders. And as Brett Parker at ALUG pointed out to me, the algorithms used by some (if not all) of the main web spiders (such as Google) would down-rank my site on the (in my view laughably specious) grounds that the site could not be trusted.

As I have said before, I’m with Michael Orlitzky, both in his defence of self-signed certificates and his distaste for the CA “terrorists”. I think the CA model is fundamentally broken and I dislike it intensely. It is also, in my view, completely wrong to confuse encryption with identification and authentication. Admittedly, you might care about the (claimed) identity of an email correspondent using encryption (which is why PGP’s “web of trust” exists – even though that too is flawed) or whether the bank you are connecting to is actually who it says it is. But why trust the CA to verify that? Seriously, why? How did the CA verify that the entity buying the certificate is actually entitled to identify itself in that way? Why do you trust that CA as a third party verifier of that identity? How do you know that the certificate offered to your browser is a trustworthy indicator of the identity of the site you are visiting? How do you know that the certificate exchange has not been subject to a MITM attack? How do you know that your browser has not been compromised?

You don’t know. You can’t be sure. You simply trust the nice big green padlock.

Interestingly, banks (and I am sure other heavily regulated large organisations) are now beginning to add features which give the end user more feedback on the bank’s identity during transactions. I recently applied for a new zero interest credit card (I like the idea of free money). In addition to the usual UID, password and security number requested of me (to identify me to them), the bank providing that card asked me to pick a “personal image” together with a personally chosen secure phrase known only to me, so that it could present those back to me to identify itself to me. I am instructed not to proceed with any transaction unless that identification is satisfactory.

So even the banks recognise that the CA model is inadequate as a means of trusted identification. But we still use it to provide encryption.

For some time now browsers have thrown up all sorts of overblown warnings about “untrusted” sites which offer self-signed certificates such as the ones I have happily used for years (and which I note that Mike Orlitzky still uses). As I have said in the past, that is simply daft when the same browser will happily connect to the same site over an unencrypted plain HTTP channel with no warning whatsoever.

Now, however, there is a concerted effort (started by Google – yes, them again) to move to warning end users that plain HTTP sites are “insecure”. Beginning in July 2018 (that’s now) with the release of Chrome 68, Chrome will mark all HTTP sites as “not secure” (sigh). And where Google goes with Chrome, Mozilla, Microsoft and Apple will surely follow with Firefox, Edge and Safari.

As much as I may applaud the move to a more fully encrypted web, I deplore the misuse of the word “secure” in this context. Many small sites will now face balkanisation as their viewers fall away in the face of daft warnings from their browsers. Worse, the continued use of warnings which may be ignored by end users (who, let’s face it, often just carry on clicking until they get what they want to see) will surely desensitise those same users to /real/ security warnings that they should pay attention to. Better, I feel, to simply warn the user that “access to this site is not encrypted”. But what do I know?

I write articles on trivia in the expectation that someone, somewhere, will read them. Granted, blogging is the ultimate form of vanity publishing, but I flatter myself that some people genuinely may find some of my “how-to” style articles of some use. Indeed, I know from my logs and from email correspondence that my articles on VPN usage, for example, are used and found to be useful. It would be a shame (and largely pointless) to continue to write here if no-one except the hardiest of souls, persistent enough to ignore their browsers, ever read it. Worse, of course, is the fact that for many people Google /is/ the internet. They turn to Google before all else when searching for something. If that search engine doesn’t even index trivia, then again I am wasting my time.

So, reluctantly, I have decided now is the time to bite the bullet and apply a CA provided TLS certificate to trivia. Some of my more perceptive readers may have already noticed that trivia now defaults to HTTPS rather than plain HTTP. Fortunately, letsencrypt offers free (as in beer) certificates and the EFF provides an automated tool (certbot) which handles both installation and renewal of the necessary certificates. So I have deployed and installed a letsencrypt certificate here.
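Incidentally, if you want to check which certificate a site is actually serving (and when it expires, which matters given that letsencrypt certificates only last 90 days), a few lines of python using nothing but the standard library will tell you. This is just a quick sketch and the hostname is merely an example.

# Sketch: connect to a site over TLS and print who issued its certificate
# and when that certificate expires. Standard library only.
import socket
import ssl

host = "baldric.net"  # example hostname

context = ssl.create_default_context()  # validates against the system CA bundle
with socket.create_connection((host, 443), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        cert = tls.getpeercert()

# getpeercert() returns the subject and issuer as nested tuples of (key, value) pairs
subject = dict(item[0] for item in cert["subject"])
issuer = dict(item[0] for item in cert["issuer"])

print("subject :", subject.get("commonName"))
print("issuer  :", issuer.get("organizationName"))
print("expires :", cert["notAfter"])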

I still don’t like the CA model but, like Cnut the Great (and unlike his courtiers), I recognise my inability to influence the tides around me.

[Postscript]

Note that, in order to ensure that I do not get a browser warning about “mixed content”, in addition to the necessary blog and lighttpd configuration changes, I have run a global search and replace of all “http://” by “https://” on trivia. Whilst this now gives me a satisfyingly clear green A+ on the SSL Labs site, it means that all off-site references which may previously have pointed to “http://somewhere.other” will now necessarily point to “https://somewhere.other”. This may break some links where the site in question has not yet moved to TLS support. If that happens, you may simply remove the trailing “s” from the link to get to the original site. Of course, if that still doesn’t work, then the link (or indeed entire site) may have moved or disappeared. It happens.
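If you want to satisfy yourself that no stray plain HTTP references remain on a page after that sort of search and replace, something like the following rough python sketch (again, standard library only, and the URL is just an example) will list any it finds. Bear in mind that it is embedded resources (scripts, images, stylesheets) fetched over plain HTTP which trigger the “mixed content” warning; ordinary hyperlinks merely send the reader elsewhere.

# Sketch: fetch a page over HTTPS and list any http:// URLs still embedded in it.
import re
import urllib.request

url = "https://baldric.net/"  # example page to check

with urllib.request.urlopen(url, timeout=10) as response:
    html = response.read().decode("utf-8", errors="replace")

# Crude pattern: catch http:// URLs appearing in src= or href= attributes.
plain_http = re.findall(r'(?:src|href)="(http://[^"]+)"', html)

for link in sorted(set(plain_http)):
    print(link)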

Permanent link to this article: https://baldric.net/2018/07/07/re-encrypting-trivia/