encrypting DNS with dnsmasq and stubby

In my last post I explained that, in order to better protect my privacy, I wanted to move all my DNS requests from the existing system of clear-text requests to one of encrypted requests. My existing system forwarded DNS requests from my internal dnsmasq caching servers to one of my (four) unbound resolvers and thence onward from them to the root or other authoritative servers. Whilst most of my requests would be shielded from prying eyes by my use of OpenVPN tunnels, this unfortunately still left the requests from my unbound resolvers to upstream servers subject to snooping. I don’t like that, and the opportunity to encrypt my requests using the new standard DNS over TLS (DoT) looked attractive.

This post describes how I made that change.

In an ideal world, all the root servers and other authoritative servers would accept encrypted DNS requests. Because unbound can be configured both to accept DoT requests and in turn to forward requests using DoT to upstream resolvers, this would mean that I could retain my unbound resolvers, albeit with some suitable configuration changes. All I would then need to do would be to find some way of encrypting DNS outbound from my internal networks to my unbound resolvers. Unfortunately, however, DoT is not yet sufficiently widely deployed to make this possible, so my unbound resolvers would have to forward to one or more public DoT resolvers in order for this to work. I see no point in doing that: there would be no advantage over simply forwarding upstream requests directly from my internal caching resolvers. Again, unfortunately, dnsmasq cannot encrypt using DoT and, according to Simon Kelley’s response to a request on the dnsmasq mailing list, is not likely to do so any time soon.

So – enter stubby, a non-caching DNS proxy which encrypts outbound requests (to one or more upstream resolvers) using DoT. It is fairly easy to chain dnsmasq to stubby and this is how I did it in my networks.

Firstly, install stubby, which should be available through your package manager. If it is not, the source can be obtained from github. Be aware that the stubby configuration file, stubby.yml, uses a YAML-like format which is particularly picky about its layout. The documentation mentions that the file is sensitive to indentation, and I found this to my cost when editing the default file to include my preferred upstreams. A single errant space in my configuration caused me considerable difficulty because stubby appeared to start and run as a daemon, but /not/ with my configuration. I finally tracked this down after finding the messages below in my syslog:

stubby[18078]: Scanner error: mapping values are not allowed in this context at line 174, column 19
stubby[18078]: Could not parse config file

and

stubby[18078]: Error parsing config file "/etc/stubby/stubby.yml": Generic error
stubby[18078]: WARNING: No Stubby config file found... using minimal default config (Opportunistic Usage)

After removing the offending space I restarted stubby

systemctl restart stubby ; tail -f /var/log/syslog

to see the reassuring messages:

systemd[1]: Stopped DNS Privacy Stub Resolver.
systemd[1]: Started DNS Privacy Stub Resolver.
stubby[18097]: [16:00:13.923290] STUBBY: Read config from file /etc/stubby/stubby.yml

and a quick check showed that my configuration was now working.
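For anyone repeating this, stubby can also be asked to validate the configuration without (re)starting the daemon, which catches this sort of indentation slip much earlier. The commands below are a sketch: the config path matches my setup, and the test domain is just an example.

```shell
# Parse and print the configuration, then exit; a broken file fails
# here with the same scanner error that appeared in syslog.
stubby -C /etc/stubby/stubby.yml -i

# With the daemon running, confirm it answers on its listen address
# (127.0.0.1 port 5353 in my configuration).
dig @127.0.0.1 -p 5353 baldric.net +short
```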

My configuration changes to the default are given below:

# listen locally on 5353 and disable IPv6

listen_addresses:
  - 127.0.0.1@5353
#  - 0::1

and add our preferred resolvers (samples only given here). Note that I use only IPv4 and not IPv6. IPv6 is disabled in my networks and at my boundaries.

#############################
# Specify the list of upstream recursive name servers.
# In Strict mode upstreams need either a tls_auth_name
# or a tls_pubkey_pinset so the upstream can be authenticated.
#
####### IPv4 addresses ######

upstream_recursive_servers:
## The getdnsapi.net server
  - address_data: 185.49.141.37
    tls_auth_name: "getdnsapi.net"
    tls_pubkey_pinset:
      - digest: "sha256"
        value: foxZRnIh9gZpWnl+zEiKa0EJ2rdCGroMWm02gaxSc9Q=
## The Uncensored DNS servers
  - address_data: 89.233.43.71
    tls_auth_name: "unicast.censurfridns.dk"
    tls_pubkey_pinset:
      - digest: "sha256"
        value: wikE3jYAA6jQmXYTr/rbHeEPmC78dQwZbQp6WdrseEs=

########

I moved stubby from the default port 53 to 5353 because dnsmasq listens on port 53 in my configuration. Accordingly, my dnsmasq configuration in /etc/dnsmasq.local.conf had to be amended thus:

# resolv-file=/etc/resolv.conf
# We now use stubby to forward to a list of upstreams
# using DNS over TLS so we do not consult another resolver
#
no-resolv
server=127.0.0.1#5353

This instructs dnsmasq not to consult any external file for upstream resolvers but instead forward to port 5353 on localhost (where stubby listens). Simple really.
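A quick way to confirm the whole chain is to query stubby and dnsmasq separately and compare. The test domain here is just an example:

```shell
# Ask stubby directly, then via dnsmasq; both should return answers.
dig @127.0.0.1 -p 5353 baldric.net +short
dig @127.0.0.1 baldric.net +short

# Repeat the dnsmasq query: the second response should show a near-zero
# query time because it is now served from dnsmasq's cache.
dig @127.0.0.1 baldric.net | grep 'Query time'
```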

Before finalising my stubby configuration I ran some tests on a number of external public resolvers, because I really don’t want to pass my DNS requests to a large resolver which then logs them (as does Google) or both logs and interferes with them (as do CleanBrowsing, OpenDNS and Quad9). For my testing I used a small tool called dnsperftest.

I modified that script to add some additional resolvers taken from the defaults provided in stubby’s configuration plus some others taken from privacytools.io and dnsprivacy. I also added some further domains to the test set so that I had 20 domains to test across some 18 public resolvers. Some of those resolvers (such as Google at 8.8.8.8 and Quad9 at 9.9.9.9) I would not, and do not, actually use in real life, but they are big anycast systems and I wanted a benchmark for my likely slower, privacy-conscious choices.
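The modifications amount to extending two shell variables at the top of the dnsperftest script. A shortened sketch of the form my edits took (the entries below are illustrative, not my full lists):

```shell
# Resolvers are listed as "address#label"; these mix the big anycast
# benchmarks with the DoT-capable upstreams I was considering.
PROVIDERS="
8.8.8.8#google
9.9.9.9#quad9
185.49.141.37#getdnsapi
89.233.43.71#uncensoreddns
"

# Domains to look up against each resolver (my real list had 20).
DOMAINS2TEST="www.google.com amazon.com wikipedia.org baldric.net bbc.co.uk"
```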

I then ran the test script from my desktop over a straight connection through my ISP, followed by connections through each of my VPN endpoints. As expected, I got faster lookups without the VPNs than with them. I then ran the same tests from the machines I use as VPN endpoints. Again as expected, because those VMs are in large datacentres dotted around Europe, I got much faster lookups from there. This allowed me to choose my final list of six external resolvers to add to the stubby.yml file.

Finally, with six upstream resolvers configured in round-robin form in stubby, I ran tcpdump on my local DNS servers to check that my requests were actually encrypted. They were, and I am now satisfied that my DNS traffic is as private as I can reasonably make it.
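For the record, the tcpdump check is straightforward: DoT travels over TCP port 853 rather than port 53. The interface name below is an assumption; substitute your own:

```shell
# Outbound DoT from stubby: this should show TLS traffic to the
# configured upstreams on TCP port 853.
tcpdump -ni eth0 'tcp port 853'

# Conversely, no clear-text DNS should now leave the box upstream;
# only local clients talking to dnsmasq should appear on port 53.
tcpdump -ni eth0 'port 53'
```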

Permanent link to this article: https://baldric.net/2020/05/25/encrypting-dns-with-dnsmasq-and-stubby/