please sign here

This post has nothing whatsoever to do with the usual topics I cover here, but this is my blog so hey I can write what I like.

My family has a proud tradition of working in the UK public sector. Despite multiple machinery of government changes by administrations of both major political colours throughout my career, I somehow managed to avoid being pushed into a career path not of my choosing. My wife still, just, retains her public sector status. One of my younger brothers worked for two major Departments of State before voluntarily leaving for what he saw as greener pastures (I think he meant more money – but that was at least his choice). Another of my younger brothers has worked for some years in the Probation Service – and he is, rightly, very proud of that. However, the current administration is determined to end his public service career by privatising (lovely word) the Probation Service. God forbid it should be bought by G4S.

He would like anyone and everyone concerned at this prospect to sign this E-government petition. Not for him, but for the continued high quality of rehabilitation of offenders based on need, and not on expenditure reduction targets.

Please do so.

Permanent link to this article: https://baldric.net/2013/06/17/please-sign-here/

trivial traffic bump

I normally get around 1000 to 1300 hits a day (or 32,000 to 40,000 per month) on trivia. Not a huge hit rate, but consistent and on a slight upward trend over the past year. Today I have seen over double that – most of it this morning. Between 05.30 and 07.00 local time my logs show nearly 1800 hits from one IP address alone consistently knocking on the door looking for login capability. That address comes from a netblock allocated to CHINANET’s fujian province network.

Please stop it. Whoever you are. (I may have to install fail2ban, much as I dislike that idea.)

Permanent link to this article: https://baldric.net/2013/06/17/trivial-traffic-bump/

prism opt-out

In all the noise on the ‘net about the alleged NSA PRISM program, this new site offers an amusing, but nonetheless useful, list of free alternatives to proprietary software. In part the site sort of misses the point about PRISM, but it is still good to see someone taking the time to point out that you don’t have to use closed, proprietary software when there are excellent free alternatives. Be aware, however, that some of the listings point to proprietary /services/ (e.g. startpage) which are flagged as such.

Permanent link to this article: https://baldric.net/2013/06/16/prism-opt-out/

Edward Snowden

The revelations of the past week or so have been interesting to me more for what they haven’t said than for what they have. There are a few points arising from Snowden’s story which puzzle me and which don’t seem to have been addressed by the mainstream media – at least not the ones I read.

And there is more than a whiff of Captain Renault in the reaction in some quarters to the story. “I’m shocked, shocked, to find that the NSA is spying on the internet.” No shit Sherlock. What do you think they do? That’s their job. Go and read their publicly avowed Mission Statement. Their vision is “Global Cryptologic Dominance through Responsive Presence and Network Advantage.” Key words – “Global” “Network Advantage”. Nor should anyone be surprised to learn that the NSA and GCHQ share intelligence. The two Agencies are extremely close and have been ever since the UKUSA agreements initially forged in 1943, 1946 and 1948. Richard J Aldrich devotes a whole chapter of his book on GCHQ to the UKUSA agreements. That chapter quotes Admiral Andrew Cunningham (Chief of the Naval Staff in 1945) as saying “Much discussion about 100% cooperation with the USA about SIGINT. Decided that less than 100% cooperation was not worth having.” David Omand said in his Guardian article of 11 June that he was “delighted at this evidence that our transatlantic co-operation extends in this hi-tech way into the 21st century, when so much communication is carried on the internet.” (Though he couldn’t resist the opportunity to plug a greater UK intercept capability as proposed by the draft Communications Data Bill, saying “It would be good, nevertheless, if the UK security authorities were able to identify directly themselves more of the traffic of terrorists and serious criminals that threaten us, and I hope amended interception legislation will be presented to parliament soon.”)

And again, nor should anyone be surprised that the NSA has close links with major internet focused companies such as Google, Microsoft and Facebook. Think about it. Facebook alone is a spook’s wet dream. Google has vast amounts of information about its users. If you use an android smartphone, that information also includes real-time geo-location information.

nsa-tracking

Of course NSA are going to seek access to that resource – the only question is how they will do that.

All the companies named in the initial Guardian articles denied that they allowed access in the way claimed by Snowden. But here again, don’t be surprised by such denial. It seems highly probable to me that any US company asked such questions would be likely to deny that they had done anything illegal. And bear in mind that any company served a “National Security Letter” (as is used primarily by the FBI seeking information on individuals from banks, internet and telecommunication companies etc.) is prohibited by law from telling anyone about it. Inevitably then those companies are going to be a little reticent when quizzed by UK journalists about their relationships with any intelligence agency. Furthermore, as Duncan Campbell says in his Register article, it is hardly surprising that all nine companies deny they have ever heard of PRISM. Why should they have heard of it if that is simply the internal classified name for the program?

Snowden strikes me as intelligent, rational, thoughtful, resourceful, and, within the framework of his apparent view of the world, probably well meaning. Certainly he has managed to achieve one of his proclaimed objectives in that he has provoked discussion of how Intelligence Agencies should act in today’s networked world. Whether you think he is traitor or hero will depend on your own personal take on what a mature democracy should be prepared to do in the name of protecting its citizens. Personally I welcome the debate but regret the way it has been initiated.

As I said above, some aspects of this story puzzle me. Snowden is, or rather was, reportedly a sysadmin working for Booz Allen Hamilton on contract to NSA. (He calls himself an “Infrastructure Analyst” but lists a series of roles which seem to be focused on network and systems management). Moreover, he had only been working for Booz Allen for a short time, having previously been employed by Dell (though still working at the Agency). He obviously had TS clearance, and as a sysadmin would have had extensive access to NSA infrastructure and assets. Now I’ve been a sysadmin and the view such an administrator has of IT systems is very different to the view provided to users of those systems. The sysadmin’s view will be dominated by access to logs, audit systems, monitoring and control systems, backup and recovery systems, file and device management systems, security controls and so on. The sysadmin’s tools will not normally provide a view of the data held on the systems in quite the same way as that provided to the system’s users. A system user on the other hand will have a desktop providing access to the applications he or she needs to do his or her job. Those applications will give a view of the data which makes sense in the context of that person’s job.

In an organisation like the NSA, I would expect strict role based access control systems to be in place. I would also expect there to be extensive auditing and monitoring systems enforcing those controls. At a trivial level, any user without authority to use a particular system (or see the data it holds) would not even be provided with the desktop tools that would let them know the system exists. Where they do have access to a particular system, that access should be granular with a strict need to know ruleset applied. Furthermore, any attempted access to systems by persons without the privilege granted by the need to know should set off all sorts of alarms and should result in that person having an interesting discussion with either their line management or internal security. The same principle should apply to those staff with the most privileged access – the system administrators. Whilst a sysadmin on a TS system may need to have fairly extensive low level access to the operating system and file structure, there is no need for that admin to have the same kind of view of the data on that system as is provided to say an intelligence analyst. So whilst the admin may be able to copy, move, delete, backup, recover files etc, he or she has no need to see the contents of those files in the way the system user does. Snowden says in his interview that as a sysadmin he saw so much, and so much more than a normal user would over the course of his or her career, that he felt compelled to expose what the NSA was doing. I find that odd.

To give an overly simplistic example, an analyst may use a system which allows him or her to compare high resolution photographic images over time. The sysadmin managing that system may not need routine access to the application which renders those images on screen in quite the same way. So any attempt by the admin to look at the data in the same way as the analyst does should be an auditable event which should be logged. And any and all attempts to copy such data should similarly be an auditable event. Moreover, the staff monitoring the audit controls should be completely separate from the administration team. On a highly secured system it should not be possible for a sysadmin to scan through files and copy them without someone somewhere being alerted to that fact and then asking questions. Furthermore, the systems available to the sysadmin should not even provide the capability for off-line copies to be made. Snowden reportedly copied the files he has released to a USB memory stick. I assume that USB memory sticks (or any other portable removable media) are not permitted on NSA premises, but since Bradley Manning seemed to use a similar device to remove files from US Defence systems I think we can assume that the policy is a little lax. I confess to being surprised that the systems in use even provide USB access. But since it seems that they do, then I would expect that any and all USB device insertions would be auditable events with a high alerting level. It seems they weren’t – or at least no-one followed them up.

This is made even more puzzling in my view by the fact that Snowden professed in his interview to have complained openly in the past about his view that what he was seeing constituted “abuses” of power and “wrongdoing”. He even said “the more you talk about this, the more you are ignored”. So we have here an individual who cares deeply about democratic accountability, who is openly critical of what he calls abuses of power and who has TS clearance and system level access to highly classified systems which do not seem to prevent off-line copying and which furthermore do not seem to have any meaningful auditing in place. That says something interesting about the NSA’s security policies and vetting procedures. Given that he had only recently been recruited by Booz Allen Hamilton and that in the process he seems to have retained his TS clearance, that also says something about the NSA’s attitude towards its outside partners’ processes.

What I also find puzzling is that Snowden apparently told his management that he was taking some leave (for treatment for recently diagnosed epilepsy) and then simply boarded a plane for Hong Kong where he met the journalist he had previously contacted over a “secure route” and gave his interview.

That series of events – poor vetting, lax security policies, poor audit and control, and failure to spot a nascent whistleblower’s contact with journalists – sits oddly with the assertion that the NSA is all-seeing and all-powerful, and must make some people feel very uncomfortable. On the other hand, it may make some others feel very much more relaxed.

I look forward to the story developing further.

Permanent link to this article: https://baldric.net/2013/06/15/edward-snowden/

microsoft windows is conspicuous by its absence

At DigitalOcean – or so says Netcraft in its latest write up on their astonishingly fast rise over the last six months. Apparently, in December 2012, DigitalOcean had just over 100 web-facing computers whilst in June 2013, Netcraft found more than 7,000. That is some growth.

But I’m not surprised. I make no apology for mentioning DigitalOcean here again. They have provided me with solid, astonishingly fast servers at an equally astonishingly low cost since I first trialled a Tor node with them back in January. Better still, they have continued to allow me to burn through around 10 TiB of traffic every month on that node ever since without so much as blinking.

That sort of customer dedication explains why they are growing so fast. That and the focus on linux VMs of course.

Permanent link to this article: https://baldric.net/2013/06/13/microsoft-windows-is-conspicuous-by-its-absence/

blimey that was quick

The cable tester I ordered at around 17.00 yesterday arrived in this morning’s post. And jolly good it is too for such a ridiculously cheap item. As expected, the instructions are amusing but pretty clear for all that. It is easy to use and feels fairly robust, despite the price.

Now the results. I am an idiot. On testing it turns out that there is nothing wrong with the RJ45 coupler I blamed earlier. All the pins are correctly wired straight through. What /was/ at fault was one of the two cables I had joined with that coupler. One was fine, the other was only wired at pins 1, 2, 3 and 6 (so two pairs, not four as required by 1000baseT – and as pointed out by David). When I removed the coupler I had also removed the offending cable and ended up blaming the wrong item.

In my defence I plead poor eyesight exacerbated by poor ambient lighting underneath my desk. That is why I did not spot the incorrect wiring at the time.

Ahem.

Permanent link to this article: https://baldric.net/2013/06/12/blimey-that-was-quick/

Emails from PayPal will always address you by your first and last name

Except when they don’t.

I have just received a wonderful email from paypal headed “Tim Harrison, your monthly activity is now ready to view online.”

Way to go guys. That really inspires confidence.

Permanent link to this article: https://baldric.net/2013/06/11/emails-from-paypal-will-always-address-you-by-your-first-and-last-name/

blimey that is cheap

cat-5-cable-tester
David’s comment to my post about my gigabit ethernet upgrade prompted me to look for a cheap LAN tester so that I could check continuity through the RJ45 coupler that had caused me difficulty. It would also be handy to be able to check the box full of old patch cables that I seem to have accumulated so that I don’t get caught out again.

I found this thing on ebay in minutes, and for the price (£2.49) it was just irresistible. I’m willing to bet the manual will be interesting reading though.

Permanent link to this article: https://baldric.net/2013/06/11/blimey-that-is-cheap/

PRISM – we had it first

I can exclusively reveal that the UK government had a PRISM database long before those upstarts in the USA.

In the late 1970s I worked in the Statistics Division of what was then the UK Civil Service Department. We used a database of Civil Service personnel called PRISM (Personnel Record Information System for Management). I used to interrogate the database (housed on an ICL 1904S at Chessington Computer Centre) using an SQL-like language called PERL – PRISM Extraction and Retrieval Language (or perhaps PIRL – PRISM Information Retrieval Language; my memory isn’t what it used to be).

To those worried about the new US version I can only say, don’t be. The database itself was crap and the interrogation language worse. I cannot believe that the NSA will have improved it that much. (Well, they may have boxes slightly faster than a 1904, but hey….)

Permanent link to this article: https://baldric.net/2013/06/10/prism-we-had-it-first/

slow gigabit ethernet

I have been making some changes to my domestic network of late which I will write about later. However, one of the main changes has been an upgrade from 10/100 switches to gigabit – mainly to improve throughput between my central filestore and desktop machines. For cosmetic reasons (and to keep my wife happy) I try to avoid having cat 5 cables strung around the place and most of them are hidden behind bookcases, furniture or soft furnishings. Lengthier cabling between my study (which houses the routers) and other rooms is catered for either by wifi or by ethernet over powerline. Both of these, of course, run at much lower speeds than even my existing 10/100 cabling so they remained unchanged.

However, my main machines are all housed in the one room (my study) and it was here that I wanted to improve file transfer speeds. I was therefore more than a little disappointed (and puzzled) to find that replacing my old 10/100 Mbps Netgear 8 port switch with a TP-Link Gigabit switch made no difference whatsoever. At first I was inclined to blame myself for buying a TP-Link device when I had previously had a poor experience with their products, but realistically I could not believe that performance should be that bad. The best connection speed I saw in a straight file transfer over FTP between my new file server and my desktop was 9 MB/s. Now a gigabit switch has a theoretical throughput of 125 MB/sec – call it 95-100 MB/sec after allowing for overheads and transmission inefficiencies. But 9 MB/s? Hell that is way too low and looks more like the rate I would expect to see on a 100 Mbps connection.
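(The arithmetic, as a trivial shell sanity check:)

# gigabit line rate in MB/s, ignoring protocol overheads
echo $((1000 / 8))   # prints 125
# the observed 9 MB/s expressed in Mbit/s - firmly 100baseT territory
echo $((9 * 8))      # prints 72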

After some head scratching I decided to pull out all the cabling between the two machines in question and the new switch. I found something I had forgotten was there – a CAT 5 RJ45 connector joining two separate lengths of cable into one. I had used that quite some time ago when the cable run between my desktop and the old switch wasn’t long enough to stretch the full distance. By re-siting the switch (which because of my architectural changes no longer needed to be sited so far away) I could remove the junction and simply use two clean cable runs between the switch and the desktop and server. Bingo: 89-92 MB/s throughput.

The offending object (below) has been consigned to the bin. But I confess to being puzzled as to why a single dumb connector should have had such an adverse impact over such a comparatively short cable run.

rj45-coupler

Permanent link to this article: https://baldric.net/2013/06/10/slow-gigabit-ethernet/

Iain Banks

I first met Frank (the protagonist and narrator in “The Wasp Factory”) in about May or June 1990. I had taken my bike (then an FJ1200) in to the dealer for a routine service and tyre change and had wandered in to a local newsagent to pick up a magazine or two to read whilst I was waiting. On impulse I bought a copy of Iain Banks’ first novel after reading the intriguing cover notes. I read it in one sitting and finished it before the bike was ready.

That same copy was passed around a bunch of friends with whom I shared a holiday in southern France later that summer. No-one, but no-one could put it down once they had started it, and equally no-one could really believe what they had just read.

Banks went on to become one of the greatest writers of his generation, despite “The Wasp Factory” being rejected by multiple publishers before finally seeing the light of day in 1984. Tragically he died on Sunday 9 June merely a couple of months after he had announced publicly that he had inoperable cancer of the gall bladder. He was only 59 years old.

Permanent link to this article: https://baldric.net/2013/06/10/iain-banks/

another good reason not to buy one

Back in November 2011 I wrote about the TP-Link TL-SC3130G IP camera. I had some trouble getting that device to work properly over wifi so I returned it and got my money back.

Today, Core Security released an advisory about this device (and several others from TP-Link) describing a remotely exploitable vulnerability arising from “hard-coded credentials” (i.e. a manufacturer-installed back-door). The advisory says, inter alia:

7.1. *Hard-Coded Credentials in Administrative Web Interface*

[CVE-2013-2572] TP-Link IP cameras use the Boa web server [1], a popular tiny server for embedded Linux devices.

‘boa.conf’ is the Boa configuration file, and the following account can be found inside:

/-----
# MFT: Specify manufacture commands user name and password
MFT manufacture erutcafunam
-----/

This account is not visible from the user web interface; users are not aware of the existence and cannot eliminate it. Through this account it is possible to access two CGI files located in ‘/cgi-bin/mft/’:

1. ‘manufacture.cgi’

2. ‘wireless_mft.cgi’

The last file contains the OS command injection showed in the following section.

7.2. *OS Command Injection in wireless_mft.cgi*

[CVE-2013-2573] The file ‘/cgi-bin/mft/wireless_mft.cgi’ has an OS command injection in the parameter ‘ap’ that can be exploited using the hard-coded credentials showed in the previous section:

/-----
username: manufacture
password: erutcafunam
-----/

Nothing suspicious about that at all.

Permanent link to this article: https://baldric.net/2013/05/29/another-good-reason-not-to-buy-one/

lighttpd graceful shutdown

I run two tails mirrors. One in NYC, the other in San Francisco. They each serve around 2-3 TiB of data per month. In common with my other servers, occasionally I need to interrupt those VMs in order to effect a system upgrade. I had to do this very recently with my upgrade of all my debian servers to wheezy.

Most software upgrades do not need a system restart. But once I had switched the kernels on the servers I had no other choice but to reboot. However, given the popularity of my mirrors and the fact that some clients are apparently on the end of slow lines whilst downloading large ISO images (tcptrack showed some connections running at 2-4 KB/s), I was reluctant to simply pull the plug for fear of interrupting some poor user’s long download before completion. I could, of course, just be brutal. After all, they are my servers, I pay for them, and the client gets the software for nothing. But brutality just doesn’t feel right.

Waiting for existing connections to finish whilst watching for new ones and then shutting down seemed like a really good way to go nuts slowly. I needed a simple graceful way of blocking incoming connections whilst continuing to serve existing established connections.

It turns out that lighttpd will do just what I want if sent a SIGINT, i.e. send the process a SIGINT signal and lighty will stop accepting new connections but continue to serve existing connections until they are all complete. The server will then shut down entirely. This is not well documented. Here is a one-line script to do just that.

#!/bin/bash
#
# shut down lighty in a friendly manner. Send a SIGINT to lighttpd process so that it stops
# accepting new connections, but continues to service existing connections. Downloads will
# continue uninterrupted until all connections are closed, then lighty will close.
#
/bin/kill -INT `cat /var/run/lighttpd.pid`
#
exit
#

(Yes, I know that is more than one line.)
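If, like me, you are worried about cutting off a slow download, you can also watch the remaining connections drain before rebooting. A sketch (assuming lighty is serving on port 80 and the net-tools netstat is available):

#!/bin/bash
#
# wait until lighttpd has no established connections left on port 80,
# then report that it is safe to reboot.
#
while /bin/netstat -tn | grep ':80 ' | grep -q ESTABLISHED; do
    sleep 60
done
echo "no connections left - safe to reboot"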

Permanent link to this article: https://baldric.net/2013/05/27/lighttpd-graceful-shutdown/

LMGTFY is the new RTFM

Back in the day, questions aimed at technical mailing lists or usenet news sometimes attracted the response “RTFM”. Personally, I always found that sort of reply both rude and somewhat arrogant. Often the questioner was obviously new to the topic under discussion and really wanted to know how to solve his or her particular problem. The fact that the same (or very similar) question had sometimes been answered many times in the past, or had really been well covered in the documentation, usually led to the publication of FAQ lists. Unfortunately, this could then lead to responses along the lines of RTFFAQ – only marginally more helpful than RTFM, and just as rude in my view. This sort of response was once distressingly common on technical mailing lists where the respondent seemed more interested in showing off than in actually helping a fellow user.

Of late the smart-alecs seem to prefer LMGTFY in their responses. Admittedly, some of the questions I have seen which elicited this sort of response could really have been answered quite quickly if the questioner had bothered to type the obvious query into the search engine of their choice. And it is not as if anyone using the web can claim not to understand how to use a search engine. But I still think a LMGTFY response smacks of arrogance. Along with answering the question, the respondent is effectively saying “Look, you are obviously too stupid to use google, so I’ll do it for you”. Instead of actually using google and passing that search URL back, they deliberately use a secondary site which aims to insult by including in its name a condescending reference to the fact that the respondent does know how to use google and has done it for the questioner. Bad manners.

If you really want to be ill mannered and arrogant, you should choose something like jfgi.com. That way the questioner will be left in no doubt about your intentions. Of course, it will also prove to others that you are an idiot.

Permanent link to this article: https://baldric.net/2013/05/27/lmgtfy-is-the-new-rtfm/

digitalocean do it again

I can’t believe these guys. Not only do I get unlimited traffic on a 1 Gig network (now at three locations – I have a VPS in each of Amsterdam, New York and San Francisco) for peanuts, but they have just given me a $5.00 credit (i.e. one month free for one of the servers). Why? Simply because I pointed out that their email alert system was failing to add the required “Date: ” header.

Most ISPs would see that as a complaint and might, just might, get around to scheduling a fix. Not only did digitalocean fix the problem in short order, but they gave me a service credit saying:

Hi there —

Thanks for pointing this out to us! We thought we had this in place, but in reality our ticketing system was crafting email differently than expected. We’ve added the header, and now it appears to be working. We’ve given you a credit in thanks.

Let us know if you need anything else.

Astonishing. Go buy their service.

Permanent link to this article: https://baldric.net/2013/05/09/digitalocean-do-it-again/

too close to the logo

I came across this entry on the blog of a company called Conformal today. The company purports to specialise in open source security products and the blog entry was about the logo for their secure online backup product called “cyphertite”. Apparently the marketing discussions concentrated on “artfully applied make-up, designer clothes, and a narrative to match. There was a lot of back and forth about layering, defense-in-depth and secure storage.” i.e. the usual type of meaningless fluff that is common to marketing meetings. But apparently nobody noticed the way the logo looked to an outsider.

I was reminded of the OGC Cockup in designing its logo back in 2008. No-one in the marketing team noticed the problem until they printed it on mouse mats and distributed them to staff.

Permanent link to this article: https://baldric.net/2013/04/29/too-close-to-the-logo/

cool

I have just been notified that I am eligible for a Tor T shirt. How cool is that?

This is a Tor Weather Report.

Congratulations! The node 0xbaddad (id: C332 113D F99E 367E 4190 424C E825 057D 9133 7ADD) you’ve been observing has been running for 61 days with an average bandwidth of 2278 KB/s, which makes the operator eligible to receive an official Tor T-shirt! If you’re interested in claiming your shirt, please visit the following link for more information.

https://www.torproject.org/getinvolved/tshirt.html

My digital-ocean tor node has been running uninterrupted since early January and has been shovelling traffic at a consistent 30+ Mbit/s for the last couple of months.

bin-vnstat-april-2013

Not bad for $5.00 per month.

Permanent link to this article: https://baldric.net/2013/04/27/cool/

this is not a political blog

but I cannot let her passing go unremarked.

Yesterday even the Guardian devoted 15 pages, plus the centre spread and a separate 16-page supplement to Margaret Thatcher. The usual suspects on the right-wing tabloids were, predictably, particularly revisionist and swivel-eyed. We are in danger of letting the death of one ex-prime minister push real news analysis and comment onto the back burner. Worse, the revisionist theses may start to take root. A state funeral (which Thatcher herself apparently argued against) would only add to the lunacy.

I’m with Ken Loach who was quoted today as saying “How should we honour her? Let’s privatise her funeral. Put it out to competitive tender and accept the cheapest bid. It’s what she would have wanted.”

And let’s give management of the exercise to Capita and security control to G4S.

Everything will be fine.

Permanent link to this article: https://baldric.net/2013/04/10/this-is-not-a-political-blog/

Debian iz free operatin sistem (OS) 4 ur computr

For a moment today I had the awful feeling that the debian website had been compromised by an illiterate 14-year-old. The front page had been changed and contained such pearls as: “Debian providez moar than pure OS: it comez wif ovar 29000 packagez” and “Peeps hoo use sistems othr than Intel x86 shud check teh ports secshun.”

debian-01-04-13

And then I realised the date.

(The Grauniad has a list of others here.)

Permanent link to this article: https://baldric.net/2013/04/01/debian-iz-free-operatin-sistem-os-4-ur-computr/

gchq recruitment site stores plaintext passwords

I can’t resist this. El Reg today points to a blog post by a guy called Dan Farrall who has commented on his experience of receiving a plain text reminder of his GCHQ recruitment site password by email after filling out its forgotten password form.

Farrall’s blog post is worth reading. Whilst he acknowledges that the recruitment site is likely to be run by a third party, he rightly points out that their security practices should still have been audited by GCHQ.

At the minimum, this is embarrassing for the guys in the doughnut. You’d expect GCHQ to have higher standards than the BCS.

Permanent link to this article: https://baldric.net/2013/03/27/gchq-recruitment-site-stores-plaintext-passwords/

using an ssh reverse tunnel to bypass NAT firewalls

There is usually more than one way to solve a problem.

Back in October last year I wrote about using OpenVPN to bypass NAT firewalls when access to the firewall configuration was not available. I have also written about using ssh to tunnel out to a tor proxy. What I haven’t previously commented on is the ability to use ssh to set up reverse tunnels. By this I mean the capability to set up an outbound connection from one machine to a remote machine which can then be used as a tunnel back from the remote machine to the original.

You might ask why on earth anyone would want to do that. In fact this is a very useful trick. If we think back to the problem outlined in my October post, we were faced with the requirement to connect from a client machine on one network to a remote host on another network which was behind two NAT firewalls. The scenario was as below:

openvpn-scenario

As I said in my previous post, ordinarily all we would have to do is run an SSH daemon (or indeed openVPN) on Host A and set up port forwarding rules on routers A and B to forward the connection to that host. But in the case we were considering, the owner of Host A was not able to set up the requisite port forwarding rules on the outermost router (because he didn’t own it). Indeed, this scenario is quite common in cases such as mobile data connections where the client device (say your smart phone or tablet) is given an RFC 1918 reserved address by the telco and is firewalled (often transparently) from the real internet by their NAT proxy.

We previously tackled this problem by setting up an intermediary openVPN server on a cheap VPS and using that as a relay between the two devices. However, ssh allows us to set up a reverse tunnel directly to the machine marked as “client” from the machine marked as “Host A” in the diagram above. We could, of course, also set up the tunnel to the VPS as an intermediate if we so desired. This might be necessary in cases where we don’t have access to router C for example and cannot configure inbound port forwarding there either.

Again, here’s how to accomplish what we want, this time using ssh.

On “client” (or the intermediary VPS) ensure that we have an unprivileged user such as “nobody” who has no login shell but does have a valid password. I set the shell on all unused accounts to /bin/false anyway, but you may choose otherwise. The sshd daemon itself uses /usr/sbin/nologin. The nobody account does not need a shell because we are only using it to set up the tunnel, not to actually login to the remote machine. Of course the “client” machine must have an ssh daemon listening on a port reachable by “Host A” or the initial tunnel connection will fail.
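As a sketch (assuming a Debian-style system; the account name “tunnel” is purely illustrative, and adduser will prompt for the password), creating such an account might look like this:

# on "client": create a tunnel-only account with no usable login shell
adduser --shell /bin/false --gecos "" tunnel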

Host A must also have an ssh daemon listening in order to accept the back connection. Note that this latter daemon must be listening on “localhost” as well as the machine’s external IP address. This can most easily be accomplished by setting “ListenAddress 0.0.0.0” (rather than say, “ListenAddress 192.168.3.12”) in Host A’s /etc/ssh/sshd_config file. Localhost is necessary because that is the address on which Host A will see the incoming connection over the tunnel when it is established.
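In configuration terms that means something like the following in Host A’s /etc/ssh/sshd_config (a sketch; remember to reload sshd after editing):

# accept connections on all addresses, including 127.0.0.1, so that
# the connection arriving back over the tunnel is accepted
ListenAddress 0.0.0.0
# the port Host A's daemon listens on in this example
Port 222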

Now on “Host A” open the tunnel to “client” thus:

ssh -N -f -R 2020:localhost:222 nobody@client

Here -N means no command at the remote end, i.e. we are simply tunnelling; -f means background the process; -R means remote (or reverse); then listen on port 2020 on localhost on the client and connect back to port 222 on Host A. Note that nowhere in this command do we specify the ssh port used by the daemon on “client”. That is taken care of by default by ssh itself (ssh will assume port 22 unless we have changed that behaviour by modifying the port for “client” in /etc/ssh/ssh_config or a local .ssh configuration). Alternatively we could add the -p switch, followed by the requisite port number, to the above command line. Port 222 is the port on which the ssh daemon on Host A listens. Port 2020 can actually be any port which is currently unused on the “client” machine. Pick a port above 1024 to avoid the need for root privileges.
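So, purely for illustration, if the daemon on “client” listened on port 2222 rather than the default 22, the tunnel would be opened with:

ssh -N -f -R 2020:localhost:222 -p 2222 nobody@client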

On running the command above we should be prompted for “nobody’s” password on the machine “client”. After supplying the password we will be returned to the command line on Host A. Now we can check that we have actually opened a connection by running netstat where we should see an established connection to ssh on the remote machine.

On the remote machine called “client”, we should similarly see the established ssh connection from Host A, but we should also be able to see a new ssh process listening on port 2020 on localhost. It is this process listening on port 2020 that allows us to connect back to Host A.
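For example (a sketch, assuming the net-tools netstat; the -p switch needs root to show process names):

# on Host A: the outbound connection to "client" should be ESTABLISHED
netstat -tn | grep ESTABLISHED
# on "client": the listener ssh has opened on localhost port 2020
netstat -tlnp | grep 2020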

Now that we have an open tunnel, we can connect from “client” to “Host A” thus:

ssh -p 2020 localhost

This will default to the user we are currently logged in as on “client” and we will be prompted for the password for that userid “@localhost” (actually, the password for that user on Host A). Of course, if we wish to specify a different user we could add the “-l userid” switch to the ssh command line. If the password is accepted we will now be logged in to “Host A” from “client” and we have completely bypassed the NAT firewalls.
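For instance, to log in to Host A over the tunnel as a (hypothetical) user “mick”:

ssh -p 2020 -l mick localhost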

I strongly recommend that you do not attempt to use this mechanism (or openVPN) to circumvent a corporate security policy unless you wish to jeopardise your career.

Permanent link to this article: https://baldric.net/2013/03/26/using-an-ssh-reverse-tunnel-to-bypass-nat-firewalls/

impolite spam

Most blogs get hit by spammers aiming to get their URLs posted in the comments section. Like most wordpress based blogs, I use the default Akismet antispam plugin. I don’t like it, I don’t like the fact that it is shipped by default, I don’t like the fact that it is increasingly becoming non-free (as in beer), I don’t like the way my blog has to call home to Akismet servers. I particularly don’t like the way that all comments (including the genuine ones) are thus passed to Akismet servers, giving them the IP and email addresses of my commenters (which is another good reason for you to lie to me and use an anonymising service to mask your IP address). But Akismet has the one huge redeeming factor that it works. It stops a vast amount of crud from reaching my blog.

However, some still gets through and hits my moderation queue (I will never allow unfettered commentary on trivia). Most of the stuff that does get through has fooled the Akismet filters by using vaguely plausible (if usually ungrammatical) phrasing. Typical comments are bland, generic statements along the lines of “Nice blog, interesting discussion I have bookmarked this for later and will tell all my friends about you”. That comment will then have a URL somewhere which points to a site flogging knock-off chinese imitations of western luxury goods (or porn, though this is becoming less frequent).

I was therefore delighted to find the following in my queue a couple of days ago:

“The next time I read a weblog, I hope that it doesnt disappoint me as much as this one. I mean, I know it was my option to read, but I in fact thought youd have some thing fascinating to say. All I hear is really a bunch of whining about some thing that you could fix when you werent too busy seeking for attention.”

Way to go. Having managed to get through a spam filter and hit my moderation queue, the poster then shoots him or herself in the foot by insulting me.

No way will I post that……

Permanent link to this article: https://baldric.net/2013/03/13/impolite-spam/

touching update

I have recently upgraded the internal disk on my main desktop from 1TB to 2TB. I find it vaguely astonishing that I should have needed to do that, but I do have a rather large store of MP4 videos, jpeg photos and audio files held locally. And disk prices are again coming down so the upgrade didn’t cost too much. One noticeable improvement following the upgrade is the reduction in noise. The disk I chose is one of Western Digital’s “Green Desktop” range which is remarkably quiet. Thoroughly recommended. But the point of this post is the consequence of the upgrade.

In order to minimise disruption after installation of the new disk (and of course a fresh install of Mint) I simply slotted the new disk into a spare slot in the PC chassis and hooked it up to the SATA port used by the old disk. I then hooked the old disk up to a separate spare SATA port. (I could, of course, have changed the boot order in BIOS to achieve the same effect.) Having installed Mint, I then rebooted the machine from the old disk for one final check of my old configuration before copying my data from old-disk/home/mick to new-disk/home/mick. Despite the fact that my data occupied over 900GB, the copy went reasonably quickly and painlessly – one of the advantages of a disk to disk copy over SATA, even if it is only SATA 2.0 (3Gb/s) – (Note to self. Next build should include SATA 3.0).

However, what happened next certainly wasn’t quick. In my haste to copy my old data and get back to using my PC, I stupidly forgot to preserve the file attributes (-p or -a switch) in my recursive cp. This meant of course that all my files on the new disk now had a current date attached to them. Worse, I didn’t immediately notice until I came to backup my desktop to my NAS. I do this routinely on a daily basis using rsync in a script like so:

/usr/bin/rsync -rLptgoDvz --stats --exclude-from=rsync-excludes /home/mick nas:/home/mick-backup

Guess what? Since all my desktop files now had a current modification time, rsync seemed to want to recopy them all to the NAS. This was going to take a /lot/ longer than a local cp. So I killed it so that I could figure out what had gone wrong (that didn’t take long when I spotted the file timestamps) and could find a simple fix (that took longer).

Now I had thought that rsync was smart enough to realise that the source and destination files were actually the same, regardless of the file timestamp change. Realising that I didn’t actually know what I /thought/ I knew about rsync I explained to colleagues on the ALUG mailing list what I had done and sought advice. They didn’t laugh (well not publicly anyway) and a couple of them offered very helpful suggestions to sort my problem. Wayne Stallwood first pointed out that “the default behaviour for rsync is to only compute file differences via the checksums if the modification dates between source and destination indicate the file has changed (source is newer than the destination) So it’s not actually recopying everything (though if things like permissions have changed on the source files they will now overwrite those on the target and naturally the timestamps will be updated). Previously when you ran your backup it would have just skipped anything that had the same or older timestamp than the target.”

So, what I saw as a possibly very long re-copy exercise was actually rsync comparing files and computing checksums. Needless to say, that computation on a fairly low powered NAS was going to take a long time anyway. And besides, I didn’t /want/ the timestamps of my backups all changed to match the (now incorrect) timestamps on my desktop. I wanted the original timestamps restored.

Then Mike Dorrington suggested that I could simply reset the timestamps with a “find -exec touch” approach. As Mike pointed out, touch could be made to use the timestamp of one file (in this case my old original file on the 1TB disk) to update the timestamp of the target file without any need for copying whatsoever. But I confess I couldn’t at first see how I could get touch to recurse down one filetree whilst find recursed down another. Mark Rogers put me out of my misery by suggesting the following:

cd /old-disk/home/mick

find . -exec touch /new-disk/home/mick/\{\} --reference=\{\} --no-dereference --no-create \;

I think this is rather elegant and it certainly would not have occurred to me without prompting.
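For completeness, the mistake that started all this would have been avoided by an archive copy, which preserves timestamps, ownership and permissions:

# what I should have typed in the first place
cp -a /old-disk/home/mick/. /new-disk/home/mick/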

Permanent link to this article: https://baldric.net/2013/02/28/touching-update/

now that is what an isp should be like

In my post about the astonishing speed of the DigitalOcean network compared to the appalling service I was getting at ThrustVPS, I mentioned that the free bandwidth model didn’t look sustainable in the long run. Indeed, DigitalOcean told me themselves that they would move to a more normal commercial model when they had a better idea of their customer profile. So I was fully prepared to be told eventually that the good times were over and I should now expect to pay a lot more for my traffic. Regardless of that though, I intended to stay in Amsterdam because the network was so refreshingly fast.

Today I was delighted to receive an email from DigitalOcean telling me that as an early customer they had “grandfathered” me in to their new commercial arrangements on the same terms as I originally signed up to – i.e. free bandwidth forever. Not quite believing my luck I asked for clarification – particularly whether that meant my account as a whole, or just the current VM. My reason for asking was that I am about to create two new VMs with them following their upgrade of the basic $5.00 pcm offering to 512MB RAM and 20 Gig of SSD disk space. I will use the first new VM to move my new tor node and the second to replace my tails mirror (which eats about 900 GiB pcm on the slow, unreliable Thrust network).

They replied:

It’s true, since you have an older account you have unlimited bandwidth for your full account, meaning every droplet you create is set to unlimited bandwidth too!

Now /that/ is what I call service. And a truly fantastic commitment from DigitalOcean.

So, if you want a fast VM on a /really/ fast network, take a look at DigitalOcean. Even without the bandwidth allowance I have, they are still astonishing value for money.

Permanent link to this article: https://baldric.net/2013/01/25/now-that-is-what-an-isp-should-be-like/