iptables firewall for servers

I paid for a new VPS to run tor this week. It is cheaper and offers a higher bandwidth allowance than my existing tor server, so I may yet close that one down – particularly as I recently had trouble with the exit policy on the existing server.

In setting up the new server, the first thing I did after the base installation of debian and the first apt-get update/upgrade was to install my default minimum iptables firewall ruleset. This ruleset simply locks down the server to accept inbound connections only to my SSH port, and only from my trusted home network. All other connections are denied. I have a variety of different iptables rules depending upon the system (rules for headless servers are clearly different to those needed on desktops running X, for example).

In reviewing my policy stance for this new server, I started comparing the rules I was using on other servers, both externally on the ‘net and internally on my LAN. I found I was inconsistent. Worse, I was running multiple rulesets with no clear documentation, no obvious commonality where the rules should have been consistent, and no explanation of the differences. In short I was being lazy, but in doing so I was actually making things more difficult for myself because a) I was reinventing rulesets each time I built a server, and b) the lack of documentation and consistency meant that checking the logic of the rules was unnecessarily time consuming.
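
To give a flavour of what I mean by a default minimum ruleset, here is a minimal sketch. The SSH port and the trusted network address below are placeholders you would substitute with your own values:

    #!/bin/sh
    # Minimal default-deny ruleset: accept inbound SSH from one trusted
    # network only; everything else, in or out, is dropped.
    IPT=/sbin/iptables
    SSH_PORT=2222             # placeholder: your non-standard SSH port
    TRUSTED_NET=192.0.2.0/24  # placeholder: your trusted home network

    # Flush any existing rules and set default-deny policies.
    $IPT -F
    $IPT -X
    $IPT -P INPUT DROP
    $IPT -P FORWARD DROP
    $IPT -P OUTPUT DROP

    # Allow loopback traffic and packets belonging to existing sessions.
    $IPT -A INPUT -i lo -j ACCEPT
    $IPT -A OUTPUT -o lo -j ACCEPT
    $IPT -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
    $IPT -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

    # Accept new SSH connections only from the trusted network.
    $IPT -A INPUT -p tcp -s $TRUSTED_NET --dport $SSH_PORT -m state --state NEW -j ACCEPT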

To add to my woes, I noted that in one or two cases I was not even filtering outbound traffic properly. This is a bad thing (TM), but not untypical of the approach I have often seen used elsewhere. Indeed, a quick check around the web will show that most sites offering advice about iptables rulesets concentrate only on the input chain of the filter table and ignore forwarding and output. To be fair, many sites discussing iptables seem to assume that IP forwarding is turned off in the kernel (or at least recommend that it should be) but very few that I could find even consider output filtering.

In my view, output filtering is almost as important as input filtering – if not equally so. Consider for example how most system compromises occur these days. Gone are the days when systems were compromised by remote attacks on vulnerable services listening on ports open to the outside world. Today, systems are compromised by malicious software running locally which calls out to internet-based command and control or staging servers. That malicious software initially reaches the desktop through email or web browsing activity. This “first stage” malware is often small, aimed at exploiting a very specific (and usually completely unpatched) vulnerability, and goes unnoticed by the unsuspecting desktop user. The first stage malware will then call out to a server (usually over http or https) both to register its presence and to obtain the next stage malware. That next stage gives the attacker greater functionality and persistence on the compromised system. It is the almost ubiquitous ability of corporate desktops to connect to any webserver in the world that has led to the scale of compromise we now routinely see.

But does output filtering matter on a server? And does it really matter when that server is running linux and not some other proprietary operating system? Actually, yes, it matters. And it matters regardless of the operating system. There is often a disconcerting smugness among FOSS users that “our software is more secure than that other stuff – we don’t need to worry”. We do need to worry. And as good net citizens we should do whatever we can to ensure that any failures on our part do not impact badly on others.

I’m afraid I was not being a good net citizen. I was being too lax in places.

If your linux server is compromised and your filtering is inadequate, or non-existent, then you make the attacker’s job of obtaining additional tools easy. Additionally, you run the risk of your server being used to attack others because you have failed to prevent outbound malicious activity – anything from port scanning to DoS to email spamming, or simply running IRC or whatever other services the attacker wants on your server (for which you pay the bills). Of course if the attacker has root on your box, no amount of iptables filtering is going to protect you. He will simply change the rules. But if he (or she) has not yet gained root, and his privilege escalation depends upon access to the outside world, then your filters may delay him enough to give you time to take appropriate recovery action. Not guaranteed of course, but at least you will have tried.
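
That delay is only useful if you notice it. One way (a sketch, continuing the variables used above) is to log anything that falls through to the default-deny OUTPUT policy, rate limited so an attacker cannot flood the logs. Watching syslog for the prefix then gives early warning of unexpected call-outs:

    # Log (rate-limited) anything about to be dropped by the OUTPUT
    # policy; with a DROP policy this should be the last OUTPUT rule.
    $IPT -A OUTPUT -m limit --limit 5/min -j LOG --log-prefix "OUTPUT denied: " --log-level 4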

So how can your server be compromised? Well, if you get your input filtering wrong and you run a vulnerable service, you could be taken over by a ‘bot. There are innumerable ‘bots out there routinely scanning for services with known vulnerabilities. If you don’t believe that, try leaving your SSH port open to the world on the default port number and watch your logs. Fortunately for us, most distros these days ship with the minimum of services enabled by default, often not even SSH. But how often have you turned on a service simply to try something new? And how often did you review your iptables rules at the same time? And have you ever used wget to pull down some software from a server outside your distro’s repository? And did you then bother to check the MD5 sum on that software? Are you even sure you know fully what that software does? Do you routinely su to root to run software simply because the permissions require that? Do you have X forwarding turned on? Have you ever run X software on your server (full disclosure – I have)? Ever run a browser on that? In the corporate world I have even seen sysadmins logged in to servers which were running a full desktop suite. That way lies madness.
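
On the checksum point: verifying a download takes seconds. A sketch, with example.com standing in for whatever server you actually fetched from:

    # Fetch a tarball and its published checksum, then verify before use.
    # (URLs are placeholders; a mismatch means you did not get the file
    # the author published.)
    wget https://example.com/tool-1.0.tar.gz
    wget https://example.com/tool-1.0.tar.gz.md5
    md5sum -c tool-1.0.tar.gz.md5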

Believe me, there are innumerable ways your server could become compromised. What you need to do is minimise the chances of that happening in the first place, and mitigate the impact if it does happen. Which brings me back to iptables and my configuration.

The VM running trivia is also my mailserver. So this server has the following services running (a sketch of the matching input rules follows the list):

  • a mail server listening on port 25;
  • an http/https server listening on ports 80 and 443;
  • my SSH server listening on a non-standard port;
  • an IMAPS/POP3S server listening on ports 993 and 995.
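
The inbound half of a ruleset for that mix might look like the following sketch. It assumes the default-deny baseline shown earlier, with $IPT, $SSH_PORT and $TRUSTED_NET already defined; whether the IMAPS/POP3S ports should really be world-reachable depends on where you read your mail:

    # Public services: anyone may open connections to mail and web.
    for PORT in 25 80 443 993 995; do
        $IPT -A INPUT -p tcp --dport $PORT -m state --state NEW -j ACCEPT
    done

    # SSH remains restricted to the trusted network only.
    $IPT -A INPUT -p tcp -s $TRUSTED_NET --dport $SSH_PORT -m state --state NEW -j ACCEPT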

My tails mirror only has port 80 and my non-standard SSH port open, my tor server has ports 80, 9001 and my non-standard SSH port open, and of course some of my internal LAN servers listen on ports such as 53, 80, 443 and 2049 (and even occasionally on 139 and 445 when I decide I need to play with samba, horrible though that is). I guess this is not an unusual mix.

My point here though, is that not all of those ports need to be accessible to all network addresses. On my LAN, none of them need to be reachable from anywhere other than my selected internal RFC1918 addresses. My public servers only need to be reachable over SSH from my LAN (if I need to reach one of them when I am out, I can do so over a VPN back into my LAN) and, given that my public servers are on different networks, they in turn do not need to reach the same DNS servers or distro repositories (one of my ISPs runs their own distro mirror. I trust that. Should I?).

Whilst inevitably the iptables rules for each of these servers need to be different, the basic rule configuration should really be the same (for example, all should have a default drop policy, none need allow inbound connections to any non-existent service, none need allow broadcasts, none need access to anything other than named DNS or NTP servers etc.) so that I can be sure each ruleset does what I think it should do. My rules didn’t conform to that sort of approach. They do now.
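
Expressed as code, that common baseline reduces to a fragment like this sketch, where only the per-host variables change from server to server (the resolver and time server addresses below are placeholders, and $IPT is as defined earlier):

    # Per-host variables: the only part that should differ between servers.
    DNS_SERVERS="192.0.2.53 192.0.2.54"   # placeholders: your named resolvers
    NTP_SERVERS="192.0.2.123"             # placeholder: your named time source

    # Silently drop broadcast and multicast noise.
    $IPT -A INPUT -m pkttype --pkt-type broadcast -j DROP
    $IPT -A INPUT -m pkttype --pkt-type multicast -j DROP

    # Allow DNS and NTP out only to the named servers.
    for DNS in $DNS_SERVERS; do
        $IPT -A OUTPUT -p udp -d $DNS --dport 53 -j ACCEPT
        $IPT -A OUTPUT -p tcp -d $DNS --dport 53 -m state --state NEW -j ACCEPT
    done
    for NTP in $NTP_SERVERS; do
        $IPT -A OUTPUT -p udp -d $NTP --dport 123 -j ACCEPT
    done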

Having spent some time considering my policy stance, I decided that what I needed was a single iptables script that could be modified simply, and clearly, in a header stating the name of the server, the ports it needed open or needed access to, and the addresses of any servers it trusted or needed to reach. This turned out to be harder to implement than I at first thought it should be.

Consider again this server. It should be possible to nail it down so that it only allows inbound new or established connections to the ports listed, and only allows outbound traffic belonging to those established connections. Further, it should not call out to any servers other than my DNS/NTP servers and distro repositories. Easy. But not so. Mail is awkward, for example, because we have to cater for inbound to port 25 from anywhere as well as outbound to port 25 anywhere. That feels a bit lax to me, but it is necessary unless we connect only to our ISP’s mailserver as a relay. Worse, as I discovered when I first applied my new tight policy, my wordpress installation slowed to a crawl in certain circumstances. Here it transpired that I had forgotten that I run the akismet plugin, which needs access to four akismet servers. (Question. Do I need to continue to run akismet? What are the costs/benefits?) It is conceivable that other plugins will have similar requirements. I also noticed that I had over thirty entries for rpc servers in my wordpress “Update Services” settings (this lists rpc servers you wish to automatically notify about posts/updates on your blog). Of course WP was attempting to reach those servers and failing. So I found myself adding exceptions to an initially simple rulebase. I don’t like that. And what if the IP addresses of those servers change?
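
Those exceptions end up looking something like the sketch below (the hostname is illustrative). The last question is a real weakness: iptables resolves a hostname once, when the rule is loaded, and the rule silently stops matching if the site later moves to a new address:

    # Hosts the blog software must be allowed to reach.
    BLOG_DEPS="rest.akismet.com"   # illustrative; list your plugins' servers
    for HOST in $BLOG_DEPS; do
        # Note: $HOST is resolved now, at rule-load time, not per packet.
        $IPT -A OUTPUT -p tcp -d $HOST --dport 80  -m state --state NEW -j ACCEPT
        $IPT -A OUTPUT -p tcp -d $HOST --dport 443 -m state --state NEW -j ACCEPT
    done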

So I actually ended up with two possible policy stances, which I called “tight” and “loose”. The first attempts to limit all access to known services and servers (with the obvious exception of allowing inbound connections to public services). The second takes a more permissive stance: it recognises that it may not be possible to list all the servers we must allow connection to, but limits those connections to particular services. So, for example, whilst it will allow outbound connections to DNS on only one or two servers, it will allow outbound new connections to any server on, say, port 80. I actually don’t like this, for fairly obvious reasons, but it is at least more restrictive than the usual “allow anything to anywhere”.
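
The difference between the two stances comes down to fragments like these (again a sketch, using the variables from earlier):

    # "Tight": outbound web only to explicitly listed hosts.
    for HOST in $BLOG_DEPS; do
        $IPT -A OUTPUT -p tcp -d $HOST --dport 80 -m state --state NEW -j ACCEPT
    done

    # "Loose": outbound web to anywhere, but still only on known ports.
    $IPT -A OUTPUT -p tcp --dport 80  -m state --state NEW -j ACCEPT
    $IPT -A OUTPUT -p tcp --dport 443 -m state --state NEW -j ACCEPT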

Others may find these scripts useful so I have posted them here: iptables.tight.sh and iptables.loose.sh. Since the scripts must be run at boot time they should be run out of one of your boot run control scripts (such as /etc/init.d/rc.local) or at network initialisation as a script in /etc/network/if-up.d. Before doing so, however, I strongly advise you to test them on a VM locally, or at least on a machine to which you have console access. Locking yourself out of a remote VM can be embarrassing.
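
For the if-up.d route, a wrapper along these lines should work on debian (a sketch; the path to the ruleset script is a placeholder, and the wrapper must be executable):

    #!/bin/sh
    # Install as /etc/network/if-up.d/firewall. Debian's ifupdown runs
    # these scripts once per interface with $IFACE set, so the ruleset
    # must be safe to apply repeatedly (ours is: it flushes first).
    [ "$IFACE" = "lo" ] && exit 0   # skip the loopback event
    exec /usr/local/sbin/iptables.tight.sh

And if you must experiment on a remote machine, scheduling a reset before you load the new rules gives you a way back in; cancel it with atrm once you have confirmed you still have access:

    echo '/sbin/iptables -P INPUT ACCEPT; /sbin/iptables -P OUTPUT ACCEPT; /sbin/iptables -F' | at now + 10 minutes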

By way of explanation of the policy stances taken, I have posted separate descriptions of each at tight and loose.

Comments, feedback, suggestions for improvement or criticism all welcome.

Permanent link to this article: https://baldric.net/2012/09/09/iptables-firewall-for-servers/

2 comments

    • redrs on 2012/10/10 at 2:57 pm

    You could lock down the tight iptables rules further by using iptables owner matching. I do this for DNS traffic by adding something like “--match owner --uid-owner httpd”

    • Mick on 2012/10/10 at 8:20 pm
      Author

    redrs

    Interesting idea – and one which I had not considered before. Thank you. I’ll take a look at the possibilities (though I must say on first look, I’m not sure I’d necessarily include root as one of the allowed users by default).

    From a quick look at your blog, it would appear that we have similar interests (tor, debian, openwrt).

    Mick

Comments have been disabled.