HMG goes cloudy

The UK Cabinet Office has announced the winning bidders to supply IT goods and services to UK Government under its new framework contract called “G-Cloud”. The winners are listed on a new website called the CloudStore which, supposedly, allows HMG procurement specialists to search for the goods and services they want to purchase. The new framework is supposed to break the old cosy relationships in HMG procurement circles between the big suppliers and HMG Departments. Politics and personal prejudice aside, I think Francis Maude’s intentions in setting up the new services framework are actually quite honourable. But, frankly, the results baffle me.

I picked “Infrastructure as a Service” as my first choice and the list I was presented with included several suppliers whose description said “The supplier did not provide a description of this service, please click on the link to find out about this service.” Of course, clicking the link merely confirms what the description says – no info. So I tried a search for “open source software” on the IaaS page and got no results. I also got no results when similarly searching under “Software as a Service”. Excuse me? Am I expected to believe that not one of the 255 successful companies even mentions open source software in its IaaS or SaaS offering? Has no-one heard of the LAMP stack?

I then widened the search to include any and all services by any and all providers and got just one result – for some company called “Cloud Cache and Archive Limited”. The description says:

“Cloud Cache And Archive Limited is a privately funded software company with development based in London. Cloud Cache and Archive provides a game changing solution to allow the rapid integration of legacy applications and databases; and the deployment of new enterprise services and Web 2.0 applications on the Cloud. The solution leverages “big data” technologies; a 100% open source software; and cloud native platforms to provide Agile Information Integration and Agile Information Management all based on a cloud native platform. The proven solution is designed for Governments and large commercial organizations.”

I’m sorry, but that is just marketing drivel. WTF does that actually mean? What solution? To what problem? What is a “cloud native platform”? And how will this help a government procurement specialist (who, trust me, will not be an ICT specialist) choose a supplier?

Answers on a post card please.

Permanent link to this article: https://baldric.net/2012/02/20/hmg-goes-cloudy/

Досвидания камрад?

From a peak of around 25,000 mail drops per month, the backscatter I was getting from the .ru domain to the non-existent address “info@baldric.net” has dwindled to virtually none. My logs show a distinct drop-off from mid to late December last year to about 10-15 emails per day (where I had previously been seeing anywhere between 600 and 900 per day). Since then the trickle has slowed still further. I now receive only a handful a week, with most days being completely clear.

I wonder where they have gone.

Permanent link to this article: https://baldric.net/2012/02/14/%d0%b4%d0%be%d1%81%d0%b2%d0%b8%d0%b4%d0%b0%d0%bd%d0%b8%d1%8f-%d0%ba%d0%b0%d0%bc%d1%80%d0%b0%d0%b4/

perl programmer goes rabid

As a Guardian reader I find the Daily Mail distasteful and I would not normally refer to it in trivia. However, a friend of mine has just sent me a link to a random Daily Mail page generator which manages to lampoon the rag quite successfully.

image of spoof daily mail random page

Further investigation of the author’s blog reveals another random page generator which suggests he may have too much time on his hands.

Permanent link to this article: https://baldric.net/2012/02/07/perl-programmer-goes-rabid/

tomorrow the world

A slightly breathless new post over at omgubuntu proudly boasts that the market share of Linux on the desktop jumped “from 0.96% in January 2011 to 1.41% by the year’s end.” (That could equally be written as a rise of close to 50% in Linux’s popularity.) No doubt this will scare the pants off Steve Ballmer.

I can’t help being amused by the comments below this post which run like this:

1. Thanks to Unity!

2. Despite unity.

3. Despite unity & gnome shell.

4. Thanks to gnome & despite unity.

5. Thanks to Ubuntu.

6. Thanks to Linux Mint.

This sort of united front in opposition to proprietary software is exactly what will drive free software to, oh, say around 2% of the desktop.

Permanent link to this article: https://baldric.net/2012/01/30/tomorrow-the-world/

moxie’s proxy

Moxie Marlinspike, a security researcher probably best known for his SSL proxy tool, likes google even less than I do. His googlesharing website says:

“Google thrives where privacy does not. If you’re like most internet users, Google knows more about you than you might be comfortable with. Whether you were logged in to a Google account or not, they know everything you’ve ever searched for, what search results you clicked on, what news you read, and every place you’ve ever gotten directions to. Most of the time, thanks to things like Google Analytics, they even know which websites you visited that you didn’t reach through Google. If you use Gmail, they know the content of every email you’ve ever sent or received, whether you’ve deleted it or not.

They know who your friends are, where you live, where you work, and where you spend your free time. They know about your health, your love life, and your political leanings. These days they are even branching out into collecting your realtime GPS location and your DNS lookups. In short, not only do they know a lot about what you’re doing, they also have significant insight into what you’re thinking.”

His solution to this problem was interesting. He came up with the idea of a proxy system which would intercept all google queries, strip off identifying material (such as cookies, User-Agent strings and other HTTP headers), substitute new identifiers, and mix the requests up with those from other users before forwarding them to google. Implementation depended upon a Firefox addon (nothing for other browsers) which identified google queries and forwarded them to the proxy. All other traffic was untouched.
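The core of that idea is simple enough to sketch in a few lines of Python (my own illustration of the principle, not Moxie’s actual code – the header names and the pooled identity are just examples):

```python
# Sketch of the GoogleSharing principle: drop headers that identify the
# user, then substitute a shared identity from the proxy's pool before
# the request is forwarded on to google.

IDENTIFYING_HEADERS = {"cookie", "user-agent", "referer", "x-forwarded-for"}

def anonymise(headers, pooled_identity):
    """Return a copy of headers with identifying fields replaced."""
    cleaned = {k: v for k, v in headers.items()
               if k.lower() not in IDENTIFYING_HEADERS}
    cleaned.update(pooled_identity)  # e.g. a rotating shared User-Agent
    return cleaned

original = {"Host": "www.google.com",
            "Cookie": "PREF=ID=deadbeef",
            "User-Agent": "Mozilla/5.0 (X11; Linux x86_64)"}
shared = anonymise(original, {"User-Agent": "SharedAgent/1.0"})
# shared now carries no Cookie, only the pooled User-Agent
```

The real proxy also had to mix requests across users, which is what makes the substitution meaningful; stripping headers alone would still leave your source IP exposed.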

image of googlesharing proxy

I stopped using google (except via scroogle) some time ago, and when Moxie’s new proxy first surfaced I thought it interesting but susceptible to the same problem I discussed in mid 2009 when writing about Hal Roberts’ experience of GIFC – all you are doing is shifting knowledge of your searches from google to a new intermediary. However, Moxie later addressed this problem with the release of version 0.20 of his addon, so I thought I’d take another look at it. Unfortunately the addon won’t work with FF 9 (which I am using). Moxie’s proxy is not the only one out there, however. Because he released the code under an open source licence, others have picked it up. I found one at gs.netsend.nl. They also provide an updated FF addon which will work with versions up to 15 (i.e. probably around next wednesday, given the speed with which Mozilla is currently shipping new FF releases).

Once the addon is installed, it gives you two proxy options in the preferences settings – one is the original proxy.googlesharing.net, the other is gs.netsend.nl itself. In testing I found that the original googlesharing proxy seemed to be off-line, but when using the netsend.nl proxy I was reassured to see the message “Search results anonymized by GoogleSharing” added to the google homepage. I was even more reassured that my sniffer showed a connection to vps1101.pcextreme.nl on 31.21.98.201 and not to any known google network.

So, will I use it? Maybe. But the proxy mechanism seems to be unreliable. In many tests, the proxy connection seemed to be bypassed and the connection was obviously made direct to google (as evidenced by my sniffer). I think this failure is doubly unfortunate because it does not fail safe (i.e. the connection does not simply fail with an error message, it passes you direct through to google). This could lead the unwary to think that they are protected when in fact they are not.

I prefer not to use google at all. And in those cases where I do want to compare results with another search engine I prefer to do so via tor. But it is one more option in my toolkit if used carefully. And if using it pisses off google, then it is worth it occasionally.

Permanent link to this article: https://baldric.net/2012/01/22/moxies-proxy/

and darkness shall be upon the face of the net

Today, 18 January 2012, parts of the ‘net went deliberately dark in combined opposition to the SOPA and PIPA bills currently being considered by the US legislative machinery. (SOPA is “A Bill to promote prosperity, creativity, entrepreneurship, and innovation by combating the theft of U.S. property, and for other purposes.” I love the “other purposes” bit.) These two bills are classic examples of badly thought through legislation developed in response to lobby group pressure to protect an existing business model which is failing. I don’t normally make political comment, but I find myself entirely in agreement with the sentiments expressed on the torproject site this morning.

When first attempting to view the tor site, readers are faced with this:

image of blacked out tor website

Clicking on the blacked-out section takes you to a copy of the 18 January blog posting, which says:

“The Tor Project doesn’t usually get involved with U.S. copyright debates. But SOPA and PIPA (the House’s “Stop Online Piracy Act” and the Senate’s “Protect-IP Act”) go beyond enforcement of copyright. These copyright bills would strain the infrastructure of the Internet, on which many free communications — anonymous or identified — depend. Originally, the bills proposed that so-called “rogue sites” should be blocked through the Internet’s Domain Name System (DNS). That would have broken DNSSEC security and shared U.S. censorship tactics with those of China’s “great firewall.”

Now, while we hear that DNS-blocking is off the table, the bills remain threatening to the network of intermediaries who carry online speech. Most critically to Tor, SOPA contained a provision forbidding “circumvention” of court-ordered blocking that was written broadly enough that it could apply to Tor — which helps its users to “circumvent” local-network censorship. Further, both bills broaden the reach of intermediary liability, to hold conduits and search engines liable for user-supplied infringement. The private rights of action and “safe harbors” could force or encourage providers to censor well beyond the current DMCA’s “notice and takedown” provision (of which Chilling Effects documents numerous burdens and abuses).”

Jimmy Wales, the founder of wikipedia, has been a particularly vocal critic of the impending legislation. Today, english-speaking users of wikipedia were greeted with the following page:

image of the wikipedia blackout page

There is plenty of discussion about the effects of SOPA and PIPA on-line in the usual technical fora (see wired, for example) but as El Reg said about a week ago, the mainstream media in the US have been largely quiet about the implications of the Bills should they ever become law.

I wonder why.

Permanent link to this article: https://baldric.net/2012/01/18/and-darkness-shall-be-upon-the-face-of-the-net/

t-mobile resets its policy?

As I have mentioned in other posts here, I run my own mail server on one of my VMs. I do this for a variety of reasons, but the main one is that I like to control my own network destiny. Back in October last year I noticed an interesting change in my mail experience with my HTC mobile (actually my wife first noticed it and blamed me, assuming that I had “twiddled with something” as she put it). Heaven forfend.

My mail setup is postfix/dovecot with SASL authentication and TLS protecting the mail authentication exchange. My X509 certs are self-generated (and so not signed by any CA). I pick up mail over IMAPS (when mobile) and POP3S (at home – for perverse reasons of history I like to actually download mail to my main desktop over POP3 and archive it to two separate NAS backups). I send via the standard SMTP port 25 but require authentication and protect the exchange with TLS.
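For anyone curious, the relevant corner of a postfix main.cf for that sort of setup looks roughly like this (a sketch rather than my exact config; the cert paths are examples):

```
# main.cf (sketch) -- offer TLS on port 25 and only permit SASL
# authentication inside a TLS-protected session
smtpd_tls_security_level = may
smtpd_tls_cert_file = /etc/ssl/certs/mail.pem        # self-generated cert
smtpd_tls_key_file  = /etc/ssl/private/mail.key
smtpd_sasl_auth_enable = yes
smtpd_sasl_type = dovecot
smtpd_sasl_path = private/auth
smtpd_tls_auth_only = yes      # never offer AUTH before STARTTLS
```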

My mail had been working fine ever since I set it up some years ago, but, as I said, back in October my wife complained that she could no longer send email from her HTC mobile (we both use t-mobile as the network provider). She was at work at the time, so away from my home network. Both our phones are set up to use wifi for connectivity where it is available (as it is at home, of course). When my wife complained I checked my phone and it could send and receive without problem. But when I switched wifi off, thus forcing the data connection through the mobile network, I got the same problem as my wife reported. On checking my mail server logs I read this:

postfix/smtpd[28089]: connect from unknown[149.254.186.120]
postfix/smtpd[28089]: warning: network_biopair_interop: error reading 11 bytes from the network: Connection reset by peer
postfix/smtpd[28089]: SSL_accept error from unknown[149.254.186.120]:-1
postfix/smtpd[28089]: lost connection after STARTTLS from unknown[149.254.186.120]
postfix/smtpd[28089]: disconnect from unknown[149.254.186.120]

(the ip address is one of t-mobile’s servers on their “TMUK-WBR-N2” network)

Everything I could find about that sort of message suggested that the client was tearing down the connection because there was something wrong with the TLS handshake and it was not trusted. Checking earlier logs, I found that t-mobile’s address had recently changed (to the address above). So I assumed that some network change following the Orange/T-mobile merger had been badly managed and all would be well again as soon as the problem was spotted. Wrong. It persisted. So I had to investigate further. As part of my investigation of the error, I tried moving mail from port 25 to 587 (submission) because that sometimes gets around the problem of ISPs blocking, or otherwise interfering with, outbound connections from their networks to port 25. No deal. In fact it looked as if t-mobile were blocking all connections to port 587 (I assumed a whitelisting policy block or, again, a cockup).
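For anyone wanting to try the same move, enabling the submission service is a couple of lines in postfix’s master.cf (a sketch; the -o overrides shown are typical settings rather than my exact ones):

```
# master.cf (sketch) -- enable the submission service on port 587,
# requiring TLS and authentication
submission inet n       -       -       -       -       smtpd
  -o smtpd_tls_security_level=encrypt
  -o smtpd_sasl_auth_enable=yes
```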

So, the scenario was: mail works when connecting over wifi and using my domestic ISP’s network, but doesn’t when using t-mobile’s 3G network. Symptoms point to a lack of trust in the TLS handshake. Tentative conclusion? There is an SSL/TLS proxy somewhere in the mobile operator’s chain. That proxy successfully negotiates with our phones, but when it gets my self-certified X509 cert from the server, it can’t authenticate it and decides that the connection is untrusted, so tears it down. My server sees this as the client (my phone) tearing down the connection. [As it turns out, this conclusion was completely wrong, but hey].

I said in an email at the time to a friend whose advice I was seeking, “I suspect cockup rather than outright conspiracy, but if my telco is dumb enough to stick a MITM ssl proxy in my mail chain, they really ought to have thought about handling self signed certs a little better. Otherwise it sort of gives the game away.”

In response, he very sensibly suggested that I should run a sniffer on the server and check what was going on. At that time, I was busy doing something else so I didn’t. And because the problem was intermittent (and my wife stopped complaining) I never got around to properly investigating further. (I should explain that I rarely send mail from my mobile nowadays. I just read mail there and wait until I get home to a decent keyboard and can reply to whatever needs handling from there. My wife just gave up bothering to try).

I should have persisted because of course I wasn’t the only one to experience this problem.

Back in November, a member of the t-mobile discussion forum called “dpg” posted a message complaining that he could not connect to port 587 over t-mobile’s 3G network. In response, a member of the t-mobile forum team suggested that dpg might reconfigure his email so that it was relayed via t-mobile’s own SMTP server. Not unreasonably, dpg didn’t think this was an acceptable response – not least because he would then have to send his email in clear. He then posted again saying that “the TLS handshake fails when the mail client receives a TCP packet with the reset (RST) flag set.” (This is a bad thing (TM).) In a further post he said that he had set up his own mail server and repeated the earlier tests so that he could see both ends of the connection. At the client side he posted mail from his laptop tethered to his phone, which was connected to the t-mobile 3G network. By running sniffers at both ends of the connection he was able to prove to his own satisfaction that something in the t-mobile network was sending a RST and tearing down any connection as soon as a STARTTLS was seen. Later, in response to another poster who apparently manages several mail servers and had been looking at the same issue for a client, dpg said:

“I must say I’m not too pleased to discover that T-Mobile may be snooping all traffic to check for SMTP messages. I have demonstrated that they may be doing this by running a SMTP server on a non-standard port and finding that they still sent TCP reset packets during TLS negotiation – so they must be examining all packets and not just those destined for TCP ports 25 and 587.

I’m also not that keen on T-Mobile spoofing/forging TCP resets. This is the sort of tactic resorted to by the Great Firewall of China (https://www.lightbluetouchpaper.org/2006/06/27/ignoring-the-great-firewall-of-china/) and also by Comcast back in 2007 (https://www.eff.org/wp/packet-forgery-isps-report-comcast-affair) until the US FCC told them to stop (https://hraunfoss.fcc.gov/edocs_public/attachmatch/FCC-08-183A1.pdf).”

Then 9 days ago, dpg posted this message:

“I finally got to the bottom of this. I was contacted by T-Mobile technical support today and was told that they are now actively looking for and blocking any TLS-secured SMTP sessions. So, it is a deliberate policy after all, despite what the support staff have been saying on here, twitter and on 150. They told me it is something they have been rolling out over the last three months – which explains why it was intermittent and dependent on IP address and APN to begin with.

So, the only options for sending email over T-Mobile’s network are:
– unencrypted but authenticated SMTP (usually on port 25)
– SSL-encrypted SMTP (usually on port 465)
– unauthenticated and unencrypted email to smtp.t-email.co.uk

TLS-encrypted SMTP sessions are always blocked whether or not they are on the default port of 587.”

(As an aside, there is, of course, another alternative. You can ditch t-mobile as your provider and pick one which doesn’t use DPI to screw your connections. You pays your money….)

Following this, a new poster called “mickeyc” said this:

“I’ve been experiencing this exact same problem. I run my own mail server which has SSL on port 465 and also uses TLS on port 587. I used wireshark to confirm that the RST packets are being spoofed. This is the exact same technology used by “The Great Firewall of China”. I have two t-mobile sims. One is about a year old and doesn’t experience this problem (yet), one is a few weeks old and does.”

He went on to say that he had also experienced problems with his OpenVPN connections and would be blogging about the problem (damned bloggers get everywhere) and, sure enough, Mike Cardwell did so at grepular.com. That blog post is worth reading because it has an interesting set of comments and responses from Mike appended.

Mike’s post seems to have been picked up by a few others (El Reg has one, and, as Mike himself has pointed out, boingboing.net has a particularly OTT post which seems to say that he is accusing t-mobile of something he clearly isn’t.)

Finally, two days ago, dpg posted this:

“I’m pleased to report that T-Mobile is no longer blocking TLS-secured email on port 587. As a follow-up to an email exchange over the Christmas period I was contacted today to say that, contrary to what I had been told previously, it was never a deliberate policy to block TLS-secured outgoing email. There was a problem with some equipment after all, which was resolved yesterday.”

I tried again myself today. Initially, I got the same old symptoms (“lost connection after STARTTLS”) then I rebooted my ‘phone and lo and behold I could send email.

Like Mike, I tend to the cockup over conspiracy theory; it’s more likely for one thing. IANAL, but it seems to me that intercepting my SMTP traffic in the way it appears to have been doing would put the telco in breach of RIPA Part I (unlawful interception). That is not likely to be a deliberate act by a major UK mobile network provider.

But I’ll still keep an eye on things.

Permanent link to this article: https://baldric.net/2012/01/12/t-mobile-resets-its-policy/

tails in a spin

When I first tested running a tails mirror on one of my VMs, the traffic level reported by vnstat ran at around 20-30 GiB per day. I figured I could live with that because it meant that my total monthly traffic would be unlikely to exceed my monthly 1TB allowance. However, when I checked the stats on that server last week (around the 9th of Jan) I found that I was shipping out around 150 GiB per day and vnstat was predicting a monthly total of close to 3 TB. As the tails admins said when I told them that I would have to shut off the mirror on that VM while I sorted something, “Ooops”. Ooops indeed. I couldn’t chance a massive bill for exceeding my bandwidth allowance by quite that much. The actual stats for 4, 5, 6, 7, 8 and 9 January, before I pulled the plug, were: 34.23 GiB, 69.14 GiB, 178.31 GiB, 131.68 GiB, 99.05 GiB and 133.27 GiB. It turns out that tails 0.10 was released on 4 January and I hadn’t been prepared. A lesson learned.
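The projection vnstat was making is easy enough to reproduce (my own back-of-envelope version, assuming a simple average over the six days logged):

```python
# Daily totals (GiB) for 4-9 January, as logged before the plug was pulled
daily_gib = [34.23, 69.14, 178.31, 131.68, 99.05, 133.27]

average_per_day = sum(daily_gib) / len(daily_gib)   # ~107.6 GiB/day
projected_month = average_per_day * 31              # ~3336 GiB, i.e. ~3.3 TiB

print(round(average_per_day, 1), round(projected_month))
```

Comfortably past a 1 TB allowance, hence the shutdown.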

Having shut down and had the DNS round robin amended, I turned to finding some way of throttling my traffic so that I could live within my allowance whilst still providing a useful mirror. I scratched my head for a while before stumbling on the obvious: I should be throttling at application level. (Sometimes I miss simple answers because I am looking for complicated ones.)

I started out by assuming that I should be using tc and iptables mangling, or something like the userspace tool trickle, all of which looked horribly more complicated than the approach taken by tor (which allows you simply to set an acceptable bandwidth rate, plus an accounting maximum of some total transfer per day or week). And of course it turns out that my webserver (lighttpd) allows something similar: just set the server limit to some chosen maximum transfer rate and, if necessary, also impose a per-IP maximum rate. The magic configuration file options are:

# limit server throughput to 3000 kbytes/sec (~24000 kbits/sec)
server.kbytes-per-second = 3000
#
# and limit individual connections to 50 kbytes/sec (~400 kbits/sec) – NB. I don’t actually use this
# connection.kbytes-per-second = 50

I tested this by pulling a copy of the tails iso from one of my other VMs which has a high bandwidth connection and got acceptable (and expected) results. So now I can go back on-line later this month safe in the knowledge that I’m not going to blow all my bandwidth in one week.
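As a rough sanity check on those numbers (my arithmetic, in decimal units – lighttpd’s “kbytes” may well mean KiB, which shifts things by a few percent): a 3000 kbytes/sec cap works out to a ceiling of about 259 GB per day if fully saturated, while a 1 TB monthly allowance averages out to roughly 373 kbytes/sec sustained, so the cap mainly smooths out release-day bursts:

```python
cap_kb_per_s = 3000                                   # server.kbytes-per-second
daily_ceiling_gb = cap_kb_per_s * 1000 * 86400 / 1e9  # ~259.2 GB/day flat out

allowance_gb = 1000                                   # 1 TB/month, decimal
sustained_kb_per_s = allowance_gb * 1e9 / (31 * 86400) / 1000  # ~373 kB/s

print(round(daily_ceiling_gb, 1), round(sustained_kb_per_s))
```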

Permanent link to this article: https://baldric.net/2012/01/12/tails-in-a-spin/

well it’s not me

xkcd cartoon number 386

With grateful thanks as always to xkcd.

Permanent link to this article: https://baldric.net/2012/01/05/well-its-not-me/

happy birthday trivia

Astonishingly, today is the fifth anniversary of my first post to trivia. So, five years ago on christmas eve, I was writing a blog post. Five years later, it is again christmas eve and what am I doing?

Hmmm.

Permanent link to this article: https://baldric.net/2011/12/24/happy-birthday-trivia/

bah, humbug

At this time of year it is traditional to receive christmas cards from people with whom you may have only infrequent, if any, contact on a normal daily basis. If you are in a relationship, these cards will often be addressed to you as a couple or family, and be signed on behalf of other couples or families. In my case, on opening such cards I often then end up shouting out something like, “Darling, who the hell are Sarah and Jimmy?” and “Did we send them a card?” (as if it mattered.)

In my view, this problem has been exacerbated by the rise of the e-card (an email substitute for those too idle, or too penny-pinching, to go to the trouble of sending actual cards through the real postal system). Maybe I’m becoming more reactionary in my old age (it happens) but e-cards are even worse than e-books.

Strange as it may sound, most people I know use their christmas cards as decorative features by hanging them on string around doorways, or placing them on the mantle over the fireplace alongside the christmas tree. What am I supposed to do with a bloody flash animation of a kitten playing with a bauble?

Worse, these e-cards do not usually even come direct from the sender’s (known) email address but via the commercial creator’s website. This means that the email runs the risk of being treated as spam and thus not reaching the intended destination. Or, again, in my case, if they do actually reach their destination and I see an email from some unknown sender with the message “Sarah and Jimmy have sent you the attached e-card in support of save the vegetarian whales. Click here to see it”, it goes straight into the deleted pile unopened.

Hah! Take that! You aren’t going to engineer me into installing your damned trojan.

Merry Christmas.

Permanent link to this article: https://baldric.net/2011/12/24/bah-humbug/

the amnesic incognito live system

Or “tails”, if you prefer, is a live CD/USB distribution based on debian which aims to help you preserve your privacy and anonymity when out and about. As the home website says, tails helps you to:

  • use the Internet anonymously almost anywhere you go and on any computer:
    all connections to the Internet are forced to go through the Tor network;
  • leave no trace on the computer you’re using unless you ask it explicitly;
  • use state-of-the-art cryptographic tools to encrypt your files, email and instant messaging.

This is a good thing (TM).

I already have a system at home which allows me to use the tor network whenever I want to be anonymous, but tails allows me to do the same thing when I’m away from that setup. I like the idea so much that I now provide a mirror for the tails distribution to complement my tor exit node. Every little helps.

Permanent link to this article: https://baldric.net/2011/12/20/the-amnesic-incognito-live-system/

tunnelling X over ssh

OK, yes, I know there are probably already a gazillion web pages on the ‘net explaining exactly how to do this, but I got caught out by a silly gotcha when I tried to do this a couple of days ago, so I thought I’d post a note.

Firstly, X is not exactly a secure protocol, nor is it easy to filter at NAT firewalls, so the ability to tunnel it over ssh is hugely welcome. In fact, ssh can be used to tunnel practically any other protocol you care to name, so it should be your first port of call should you wish to connect to a remote system using an insecure protocol. (I use it to wrap rsync for example).

I don’t run X on my VMs (there is no need, they don’t run desktop software) and I had not previously seen the need to run X-based graphical programs on those servers. However, a couple of days ago I thought it would be really useful to run etherape on one particular remote server so that I could watch the traffic patterns. Normally I use iptraf (which is ncurses-based) when I want to monitor network traffic in real time, but etherape is pretty cool and gives a nice graphical view of your network connections. But it runs on an X-based GUI.

So. I changed the remote server’s sshd_config to enable X forwarding (“X11Forwarding no” becomes “X11Forwarding yes”) and restarted sshd. On my desktop I similarly changed my local ssh_config file to allow X forwarding (“ForwardX11 no” becomes “ForwardX11 yes”) to obviate the need to use the -X switch on the command line. I then installed etherape on the remote server and fired it up only to get the message “Error: no display specified”. Sure enough “echo $DISPLAY” showed nothing. But I had thought (and everything I had read confirmed) that ssh should take care of setting the appropriate display when X11 forwarding was set.

So I then tried setting a display manually (export DISPLAY=localhost:10.0 on the remote server) and then got the response “Error: cannot open display: localhost:10.0”. So, still no deal. I spent some time scratching my head (and reading man pages) and sent off a query to my local Linux User group in parallel asking for advice. They were gentle with me.

The first, and rapid, response, said:

On the server:

sudo apt-get install xauth

Then disconnect and reconnect the client.

Jobs a good un.

Thank you Brett.

So the moral is, make sure that you have X authorisation working properly on the remote system (check for the existence of $HOME/.Xauthority) if you experience the same symptoms I did.
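Distilled into a crib sheet (Debian-style paths and commands; adjust for your own distro):

```
# on the remote server: /etc/ssh/sshd_config
X11Forwarding yes
#   then: apt-get install xauth && /etc/init.d/ssh restart

# on the local desktop: /etc/ssh/ssh_config (or ~/.ssh/config)
ForwardX11 yes
#   or leave it at "no" and use "ssh -X user@server" per session
```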

Permanent link to this article: https://baldric.net/2011/12/19/tunnelling-x-over-ssh/

tp-link respond

A couple of weeks ago, I wrote about the problems I had with a TP-Link IP camera. Today I received a comment on that post from a guy called Luke in the TP-Link support team. In that response he apologises for the difficulties I had and promises to investigate further.

His response deserves as wide an audience as my original post, so I am drawing attention to it here.

Thank you Luke for taking the time to comment.

Permanent link to this article: https://baldric.net/2011/11/30/tp-link-respond/

no you can’t have my mobile number

I guess, like me, many parents will have facebook accounts simply as a means of communicating with their kids. In the past I have used my account as a way of finding out what my kids actually do, or like in the way of music for example. This can be more fruitful than attempting a conversation with a grumpy teenager. My kids are no longer teenagers so I don’t use it much these days. However I tried today to check my son’s page in the hope that it might give me some inspiration for a christmas present. Facebook won’t let me log on unless I give it a mobile phone number.

image of facebook login page

No Zuckerberg, you cannot have my mobile number. And I am seriously pissed off that I cannot now even get to my account to delete it.

Permanent link to this article: https://baldric.net/2011/11/23/no-you-cant-have-my-mobile-number/

the most influential people in UK IT?

This would be funny if it weren’t quite so tragic. A friend of mine has just pointed me to the Computer Weekly “second annual UKtech50” poll of “the definitive list of the real movers and shakers in UK IT – the CIOs, industry executives, public servants and business leaders driving the creation of a high-tech economy.”

The flummery goes on, “Voting has begun to find out who is the most influential person in the UK IT community. Our panel of judges has chosen the shortlist of 50 names, and we want your opinion on who should win.”

So who are these 50 top “movers and shakers” in UK IT? A depressing list of the (maybe) worthy but dull. The sort of list that the President of a local chapter of the BCS might dream up. It even includes the Cabinet Office Minister Francis Maude. I don’t think his CV contains much in the way of technical capability. With one or two exceptions (pick your own), few if any of those listed could be deemed UK IT leaders – influential maybe, but IT leaders? I doubt it.

So let’s take a look at the list of judges. This is where the tragedy is most manifest. Take a look at the bottom of that page – the section headed “Read More”. It says:

People who read this also read…

What is 3G (third generation of mobile telephony)? – Definition from Whatis.com
What is TCP/IP (Transmission Control Protocol/Internet Protocol)? – Definition from Whatis.com
What is cloud computing? – Definition from Whatis.com
What is supply chain management (SCM)? – Definition from Whatis.com

Oh deary, deary, deary me.

Permanent link to this article: https://baldric.net/2011/11/23/the-most-influential-people-in-uk-it/

google buys advertising

In an interesting reverse of the norm, google paid for three full page adverts in the guardian a couple of days ago. Today there is yet another full page ad in the same paper. I assume they have run similar campaigns in other UK newspapers over the past few days. The ads are quite intriguing in that they seem to be addressing potential concerns about the use of well established web technologies. Today’s ad, for example, was about cookies. Each ad points to a google site giving further detail.

These adverts cannot have been cheap. What are they worried about?

Permanent link to this article: https://baldric.net/2011/11/23/google-buys-advertising/

do not buy one of these

Standalone IP cameras have come down in price quite remarkably over the past few years. It is now perfectly possible to get a camera for between £50.00 and £75.00, and this makes them attractive for anyone wanting to set up simple “home surveillance” systems. I bought one recently just to see what I could realistically do with such a beast. I chose the TP-Link TL-SC3130G,

image of TP-Link IP camera

which goes for around £60.00. I bought mine from amazon. I chose this particular camera because, on paper, it looked to have a good specification at a keen price point. According to the TP Link website, the camera’s highlights include:

  • 54Mbps wireless connectivity brings flexible placement
  • Bi-directional audio allows users to listen and talk remotely
  • Excellent low light sensitivity ensures good video quality even in the dawn
  • MPEG-4/MJPEG dual streams for simultaneous remote recording and local surveillance

plus an impressive list of protocol capabilities all in a reasonably compact and attractive hardware package.

When the camera arrived I was pleased to find that the hardware was indeed quite solid and attractive. Such a shame I can’t say anything good about the software though.

As you would expect, I first had to configure the camera over a wired link. By default the camera comes up on 192.168.1.10. The login credentials are the usual “admin/admin” – which is the first thing you should change, but sadly I’ll bet that few people bother. The web interface presents the user with a set of configuration menus on the left of the screen and an image taken from the camera towards the centre of the screen. The software assumes that the user has IE and ActiveX running, so for those of us with more sensible setups, some of the configuration and control options on the camera (such as snapshot, zoom and audio volume control) are unavailable.

No matter, the important thing from my point of view, and the reason I bought this camera rather than its slightly cheaper brother, the SC3130, is the supposed wireless capability. At first sight, the camera and network configuration options look surprisingly comprehensive. In fact, I’d go so far as to say that the list of options available might confuse a user with little networking experience. For example, besides the obvious options to set a new static IP address or change to DHCP, you can change HTTP, RTP and RTSP ports, set up multicast streaming, change the multicast address, change the ports used for video and audio streaming, set viewer authentication, set the camera to use PPPoE and dynamic DNS, and even send users an alert via email containing the new network settings (such as IP address) should these change. Of course, in order to do so the user must first configure email on the camera. Altogether an impressive looking range of capabilities. Again, such a shame they don’t all work.
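(As an aside, that default password point is worth scripting. I don’t actually know how this camera authenticates its web interface – what follows assumes a bog-standard HTTP Basic auth setup, which is an assumption on my part, not something I verified on the TL-SC3130G. But a sketch of a “are the factory defaults still live?” check, using only the Python standard library, would look something like this:)

```python
# Sketch: probe an IP camera's web interface for the factory-default
# admin/admin credentials over HTTP Basic auth. The address and the
# auth scheme are illustrative assumptions, not camera-specific facts.
import base64
import urllib.request


def basic_auth_header(user: str, password: str) -> str:
    """Build the value of an HTTP Basic 'Authorization' header."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return f"Basic {token}"


def defaults_accepted(host: str = "192.168.1.10") -> bool:
    """Return True if the device at 'host' still answers to admin/admin."""
    req = urllib.request.Request(f"http://{host}/")
    req.add_header("Authorization", basic_auth_header("admin", "admin"))
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            return resp.status == 200
    except Exception:
        return False
```

(If `defaults_accepted()` comes back True against your own kit, go and change the password.)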

Annoyingly, the web interface sometimes simply refused to accept changes, or the system reset the changes after reboot. I first noticed this when changing the camera’s clock setting to sync with the time on my PC. It simply refused. NTP worked eventually, but it tended to stop working for no apparent reason.

But by far the worst fault was in the WiFi stack. WiFi configuration options were all accepted and it was soon possible to connect wirelessly both to configure the camera and to view either a video stream or a still image. However, as soon as the wired connection was removed, both interfaces went down. Nor was it possible to connect wirelessly if the camera was booted without a cable inserted. Now it is pretty pointless to have a WiFi camera that insists on having a wired connection present as well, and I couldn’t believe that no-one had tested this, so I assumed that there was some way to get the thing working. Besides, I hate being beaten. So I spent what was, on reflection, a disproportionately silly amount of time playing with various configuration options (DHCP vs static addressing, various combinations of UPnP and no UPnP – which involved me changing my router configs as well – and changing various network port numbers), all to no avail. I searched the manufacturer’s website in case there was a new firmware image I could try, but that was a waste of time because the image on the website (1.6.17, dated 29 October 2010) was older than the firmware already on the camera (1.6.18, dated 17 March 2011).

After trying umpteen variations of settings, at one point the camera froze completely and refused to boot. I had to resort to a hardware reset to get the thing back up again. Here it got weirder still. The camera came back up on 192.168.1.97 and not the default 192.168.1.10 (I found it with a sniffer). God help the average punter trying to get this thing to work.
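(For what it’s worth, the average punter doesn’t even need a packet sniffer to find a lost camera. A crude sweep of the subnet for anything answering on port 80 will usually do, and needs nothing beyond the Python standard library. The network range below is just illustrative:)

```python
# Sketch: find a device with an unknown address on the local /24 by
# trying a quick TCP connect to port 80 on every host. Crude compared
# to a real sniffer or arp-scan, but stdlib-only.
import ipaddress
import socket


def candidate_hosts(network: str = "192.168.1.0/24"):
    """All usable host addresses in the given network, as strings."""
    return [str(ip) for ip in ipaddress.ip_network(network).hosts()]


def answers_on_port(host: str, port: int = 80, timeout: float = 0.2) -> bool:
    """True if a TCP connection to host:port succeeds within 'timeout'."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def find_web_devices(network: str = "192.168.1.0/24"):
    """Return every host in the network with something listening on :80."""
    return [h for h in candidate_hosts(network) if answers_on_port(h)]
```

(A camera hiding on 192.168.1.97 would turn up in `find_web_devices()` in well under a minute.)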

I sent it back, and amazon refunded my money. Do yourself a favour. Don’t even think about buying one.

Permanent link to this article: https://baldric.net/2011/11/16/do-not-buy-one-of-these/

ubuntu de-throned

For the first time since early 2005, Ubuntu has fallen off the top spot on distrowatch. The new number one, by page hit ranking, is Linux Mint.

I’m not at all surprised.

Permanent link to this article: https://baldric.net/2011/11/09/ubuntu-de-throned/

do I trust this site?

Following a visit to EFF to read an article on e-book privacy, I met this:

image of SSL certificate view

So. EFF uses a wildcard SSL cert issued by a company which was breached earlier this year.
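Checking what certificate a site serves is easy enough to do for yourself. Here is a sketch using Python’s standard library ssl module – the connection details are illustrative, and the helpers simply pull the subject CN out of the certificate and flag a wildcard:

```python
# Sketch: fetch a server's TLS certificate and spot a wildcard subject.
# fetch_cert() needs live network access; the two helpers are pure.
import socket
import ssl


def subject_cn(cert: dict) -> str:
    """Extract the commonName from a cert dict as returned by getpeercert()."""
    for rdn in cert.get("subject", ()):
        for key, value in rdn:
            if key == "commonName":
                return value
    return ""


def is_wildcard(cn: str) -> bool:
    """True for a wildcard common name such as '*.eff.org'."""
    return cn.startswith("*.")


def fetch_cert(host: str, port: int = 443) -> dict:
    """Fetch and parse a server's certificate over a verified TLS handshake."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()
```

(Something like `is_wildcard(subject_cn(fetch_cert("www.eff.org")))` would tell you whether they are still serving a wildcard cert. The issuer is in the same dict, under “issuer”.)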

Permanent link to this article: https://baldric.net/2011/11/09/do-i-trust-this-site/

dis-unity

The reaction to Ubuntu’s move to Unity seems to be getting wider coverage. Over at LWN, Bruce Byfield blogged recently about the rift between the Ubuntu developers and its users. In particular he highlights Tal Liron’s entry to the Ubuntu launchpad bug wiki under bug number 882274. In that entry, entitled “Community engagement is broken”, Liron gently rebukes the developers for their apparent lack of engagement with the community, saying:

“The bug is easy to reproduce: open a Launchpad bug about how Unity breaks a common usage pattern, and you get a “won’t fix” status and then radio silence. The results of this bug are what seems to be a sizable community of disgruntled, dismayed and disappointed users, who go on to spread their discontent and ill will.”

Both Liron’s bug entry (and the subsequent commentary) and Byfield’s analysis of that discussion bear reading. I found myself frustrated by the obvious lack of understanding of (and impatience with) Liron’s position apparent in Mark Shuttleworth’s responses. Byfield concludes that:

“[Shuttleworth] sounds impatient, resorting to personal attacks and invoking his personal authority or the necessities of design or standard practice instead of offering explanations. At times, he seems to address issues that at best approximate what others in the discussion are saying. Exactly why this change has happened is uncertain, but it adds a sting to Shuttleworth’s once-humorous title of Benevolent Dictator for Life.”

Meanwhile, over at El Reg, Liam Proven offers his analysis of the Ubuntu upheaval. In that article, Proven describes the differences between GNOME 3, GNOME 2 and Unity and explains how these changes (or more properly, the management of these changes) have led to the difficulties now facing a wide range of users. Proven concludes:

“Ubuntu is gambling that Unity will attract floods of new Linux users in such numbers as to outweigh those abandoning it for its spin-offs and rivals. If it’s correct, then Ubuntu will continue its rise to near-total dominance of the Linux desktop. But if it’s wrong, it will leave the Linux world more fragmented than ever.”

In my view Ubuntu (or more precisely Canonical and Shuttleworth himself) is wrong and will regret this decision not to properly engage with its user base. I don’t blame them for changing the desktop, after all, the GNOME developers have forced that change upon them. But I do agree strongly with Liron’s position. Ubuntu could do well to listen more.

And in a nice summary of Xfce, Scott Gilbertson today explains why previous GNOME users are moving to that desktop in the wake of the GNOME 3 and Unity changes. It seems I’m in the company of a growing number of other users.

Permanent link to this article: https://baldric.net/2011/11/09/dis-unity/

I prefer the chip wrapper version

My newspaper of choice is the Guardian. Recently they were forced to increase the cover price and ever since have been running a series of advertisements for various forms of subscription which will lower the cost from some £35.00 pcm (if you include its sister paper the Observer on Sundays) to as little as £9.99 if you go for the kindle option.

The economic case for change is unarguable. A saving of over £20 a month, plus you get the “paper” delivered to your breakfast table in seconds over the airwaves. No need to go out in the rain down to the shop to pick up a copy (I live in the sticks and the local shop won’t deliver to us). No disappointment when they have sold out (it happens). No waste paper as I immediately bin the sports section. No waste paper when I eventually discard the bits I do read. And it would mean that I actually use the kindle as something other than a rather expensive paper weight. The Grauniad even kindly offered a two week free trial if you signed up.

So I tried it. I really did. But it just didn’t work for me.

To be fair, practically all the editorial is there. And the layout is pretty good. Down the left-hand side of the screen you see the headings for the main sections – Top Stories, UK News, International, Financial etc. – whilst on the right-hand side you are given the headlines for each of the main stories in each section. The layout also makes good use of the kindle’s navigation features so it is easy to skip from one article to another or even from one section to another. But it lacks that essential aesthetic which makes a good newspaper. I’m sorry, but the medium is the message – at least it is over the breakfast table.

I don’t read a newspaper in serial, article after article, front to back form. I skip about. First I throw away that useless sport section. Then I start at the back of G2 with Steve Bell, and flick back two pages to Doonesbury. What? No Steve Bell? No Doonesbury? Oh dear. I then flick to the front of G2 while my tea is brewing and I munch my cornflakes whilst reading whatever takes my fancy. (Note to non Guardian readers. The G2 is a tabloid sized insert to the main paper. It contains little in the way of editorial and much in the way of entertainment. Ideal breakfast fodder.) G2 section finished (normally about the time I have finished my breakfast) I can retire to my armchair with my second cup of tea and the main paper.

And I don’t read that in serial fashion either. I skip about. I scan the pages for something I want to read first, then read that before scanning for something else. Doing so gives me a good feel for the main issues of the day. I’ll see the obvious front page article – get a paragraph or two under my belt, then flick through for further details in other articles before going back to read the main news in detail. On the way I will inevitably be exposed to advertising (none in the kindle version) and will see a wide range of pictorial editorial content (very little in the kindle). And I can fold the paper to match what I am looking at. And it doesn’t weigh much. And it has two crosswords, plus the sudoku and more Steve Bell!

I’m sorry, but a newspaper is more than just the sum of its content. I think I’ll carry on wasting 20 quid a month. And throwing away the sport section unread. And my local newsagent will continue to benefit from both the sale of the paper itself and any incidental purchase I may make whilst I am there. They wouldn’t get that if I carried on with the kindle.

Permanent link to this article: https://baldric.net/2011/11/08/i-prefer-the-chip-wrapper-version/

fully minted

After exploring the alternatives to Ubuntu, I finally settled on Linux Mint Debian Edition (LMDE) running Xfce as the desktop. I am now Ubuntu free and have a desktop that looks the way /I/ want it to look rather than the way some design nut wants it to look. I am also hopeful that the desktop will stay that way in future.

My main desktop now looks like this:

image of linux desktop

and my netbook looks like this:

image of linux desktop on my netbook

I chose LMDE rather than Xubuntu partly out of pique with the way Canonical is taking Ubuntu, and partly out of a genuine desire to move to a distro which is closer to the ideals of the FOSS community which Ubuntu used to espouse and which Debian always has done. For me, LMDE now offers the best compromise between a truly useable modern desktop (with all that implies for proprietary codecs) and the purity and stability of Debian. I know where things are in Debian and I much prefer the Debian package manager to RPM (which immediately rules out Fedora or SUSE). Having now spent some time playing with Xfce I find myself surprised that I didn’t move to it much earlier. It is clean, relatively lightweight, fast and eminently configurable.

On my main desktop machine (which is running the 64 bit version to take full advantage of the 8 Gig of RAM I have installed) everything works as it should – even the dreaded flash (yes, I occasionally watch youtube). On the netbook (32 bit version) everything except the RHS card reader works. Hot plugging works on the left, and the right /will/ work if there is an SD card in place on boot. (But no, I /still/ can’t read Sony memory sticks. I have sort of given up on that now anyway since I no longer use the PSP to watch videos.)

Now to convert my wife.

Permanent link to this article: https://baldric.net/2011/11/06/fully-minted/

there is no version 7

This week’s BOFH in El Reg rings horribly true:

“I JUST WANT MY MENU BACK!”

“You mean you don’t like the ribbon? It’s new!”

“I don’t care if it’s new – I can’t find anything!”

Back when I was a sysadmin we used to call users a “test load”.

Permanent link to this article: https://baldric.net/2011/11/04/there-is-no-version-7/