and darkness shall be upon the face of the net

Today, 18 January 2012, parts of the ‘net went deliberately dark in combined opposition to the SOPA (a Bill to “promote prosperity, creativity, entrepreneurship, and innovation by combating the theft of U.S. property, and for other purposes.” I love the “other purposes” bit.) and PIPA bills currently being considered by the US legislative machinery. These two bills are classic examples of badly thought through legislation, developed in response to lobby group pressure to protect an existing business model which is failing. I don’t normally make political comment, but I find myself entirely in agreement with the sentiments expressed on the torproject site this morning.

When first attempting to view the tor site, readers are faced with this:

image of blacked out tor website

Clicking on the blacked out section you are taken to a copy of the 18 January blog posting which says:

“The Tor Project doesn’t usually get involved with U.S. copyright debates. But SOPA and PIPA (the House’s “Stop Online Piracy Act” and the Senate’s “Protect-IP Act”) go beyond enforcement of copyright. These copyright bills would strain the infrastructure of the Internet, on which many free communications — anonymous or identified — depend. Originally, the bills proposed that so-called “rogue sites” should be blocked through the Internet’s Domain Name System (DNS). That would have broken DNSSEC security and shared U.S. censorship tactics with those of China’s “great firewall.”

Now, while we hear that DNS-blocking is off the table, the bills remain threatening to the network of intermediaries who carry online speech. Most critically to Tor, SOPA contained a provision forbidding “circumvention” of court-ordered blocking that was written broadly enough that it could apply to Tor — which helps its users to “circumvent” local-network censorship. Further, both bills broaden the reach of intermediary liability, to hold conduits and search engines liable for user-supplied infringement. The private rights of action and “safe harbors” could force or encourage providers to censor well beyond the current DMCA’s “notice and takedown” provision (of which Chilling Effects documents numerous burdens and abuses).”

Jimmy Wales, the founder of wikipedia, has been a particularly vocal critic of the impending legislation. Today, english-speaking users of wikipedia were greeted with the following page:

image of the wikipedia blackout page

There is plenty of discussion about the effects of SOPA and PIPA on-line in the usual technical fora (see wired, for example) but as El Reg said about a week ago, the mainstream media in the US have been largely quiet about the implications of the Bills should they ever become law.

I wonder why.

Permanent link to this article: https://baldric.net/2012/01/18/and-darkness-shall-be-upon-the-face-of-the-net/

t-mobile resets its policy?

As I have mentioned in other posts here, I run my own mail server on one of my VMs. I do this for a variety of reasons, but the main one is that I like to control my own network destiny. Back in October last year I noticed an interesting change in my mail experience with my HTC mobile (actually my wife first noticed it and blamed me, assuming that I had “twiddled with something” as she put it). Heaven forfend.

My mail setup is postfix/dovecot with SASL authentication and TLS protecting the mail authentication exchange. My X509 certs are self generated (and so not signed by any CA). I pick up mail over IMAPS (when mobile) and POP3S (at home – for perverse reasons of history I like to actually download mail to my main desktop over POP3 and archive it to two separate NAS backups). I send via the standard SMTP port 25 but require authentication and protect the exchange with TLS.
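
For anyone wanting to replicate something similar, the server end amounts to only a handful of postfix settings. A minimal sketch follows (the hostnames and file paths here are illustrative, not my actual config):

# /etc/postfix/main.cf (extract)
smtpd_tls_cert_file = /etc/ssl/certs/mail.example.org.pem
smtpd_tls_key_file = /etc/ssl/private/mail.example.org.key
smtpd_tls_security_level = may      # offer STARTTLS to connecting clients
smtpd_sasl_auth_enable = yes        # advertise SASL AUTH to clients
smtpd_sasl_type = dovecot           # let dovecot do the SASL work
smtpd_sasl_path = private/auth
smtpd_tls_auth_only = yes           # never accept AUTH over an unencrypted session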

My mail had been working fine ever since I set it up some years ago, but as I said, back in October my wife complained that she could no longer send email from her HTC mobile (we both use t-mobile as the network provider). She was at work at the time so away from my home network. Both our phones are set up to use wifi for connectivity where it is available (as it is at home of course). When my wife complained I checked my phone and it could send and receive without problem. But when I switched wifi off, thus forcing the data connection through the mobile network, I got the same problem as my wife reported. On checking my mail server logs I read this:

postfix/smtpd[28089]: connect from unknown[149.254.186.120]
postfix/smtpd[28089]: warning: network_biopair_interop: error reading 11 bytes from the network: Connection reset by peer
postfix/smtpd[28089]: SSL_accept error from unknown[149.254.186.120]:-1
postfix/smtpd[28089]: lost connection after STARTTLS from unknown[149.254.186.120]
postfix/smtpd[28089]: disconnect from unknown[149.254.186.120]

(the ip address is one of t-mobile’s servers on their “TMUK-WBR-N2” network)

Everything I could find about that sort of message suggested that the client was tearing down the connection because there was something wrong with the TLS handshake and it was not trusted. Checking earlier logs, I found that t-mobile’s address had apparently changed (to the address above) recently. So I assumed that some recent network change following the Orange/T-mobile merger had been badly managed and all would be well again as soon as the problem was spotted. Wrong. It persisted. So I had to investigate further. As part of my investigation of the error, I tried moving mail from port 25 to 587 (submission) because that sometimes gets around the problem of ISPs blocking, or otherwise interfering with, outbound connections from their networks to port 25. No deal. In fact it looked as if t-mobile were blocking all connections to port 587 (I assumed a whitelisting policy block, or again, a cockup).
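
For reference, moving postfix onto the submission port is a small change to master.cf. Something like this (a sketch, not my exact config) turns the service on:

# /etc/postfix/master.cf – enable the submission service on port 587
submission inet n       -       -       -       -       smtpd
  -o smtpd_tls_security_level=encrypt
  -o smtpd_sasl_auth_enable=yes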

So, the scenario was: mail works when connecting over wifi and using my domestic ISP’s network, but doesn’t when using t-mobile’s 3G network. Symptoms point to a lack of trust in the TLS handshake. Tentative conclusion? There is an SSL/TLS proxy somewhere in the mobile operator’s chain. That proxy successfully negotiates with our phones, but when it gets my self-signed X509 cert from the server, it can’t authenticate it and decides that the connection is untrusted so tears it down. My server sees this as the client (my phone) tearing down the connection. [As it turns out, this conclusion was completely wrong, but hey].

I said in an email at the time to a friend whose advice I was seeking, “I suspect cockup rather than outright conspiracy, but if my telco is dumb enough to stick a MITM ssl proxy in my mail chain, they really ought to have thought about handling self signed certs a little better. Otherwise it sort of gives the game away.”

In response, he very sensibly suggested that I should run a sniffer on the server and check what was going on. At that time, I was busy doing something else so I didn’t. And because the problem was intermittent (and my wife stopped complaining) I never got around to properly investigating further. (I should explain that I rarely send mail from my mobile nowadays. I just read mail there and wait until I get home to a decent keyboard and can reply to whatever needs handling from there. My wife just gave up bothering to try).

I should have persisted because of course I wasn’t the only one to experience this problem.

Back in November, a member of the t-mobile discussion forum called “dpg” posted a message complaining that he could not connect to port 587 over t-mobile’s 3G network. In response, a member of the t-mobile forum team suggested that dpg might reconfigure his email so that it was relayed via t-mobile’s own SMTP server. Not unreasonably, dpg didn’t think this was an acceptable response – not least because he would then have to send his email in clear. He posted again saying that “the TLS handshake fails when the mail client receives a TCP packet with the reset (RST) flag set.” (This is a bad thing (TM).) In a further post he said that he had set up his own mail server and repeated his earlier tests so that he could see both ends of the connection. At the client side he sent mail from his laptop tethered to his phone, which was connected to the t-mobile 3G network. By running sniffers at both ends of the connection he was able to prove to his own satisfaction that something in the t-mobile network was sending a RST and tearing down any connection when a STARTTLS was seen. Later, in response to a post from another forum member who apparently manages several mail servers and had been looking at the same issue for a client, dpg said:

“I must say I’m not too pleased to discover that T-Mobile may be snooping all traffic to check for SMTP messages. I have demonstrated that they may be doing this by running a SMTP server on a non-standard port and finding that they still sent TCP reset packets during TLS negotiation – so they must be examining all packets and not just those destined for TCP ports 25 and 587.

I’m also not that keen on T-Mobile spoofing/forging TCP resets. This is the sort of tactic resorted to by the Great Firewall of China (https://www.lightbluetouchpaper.org/2006/06/27/ignoring-the-great-firewall-of-china/) and also by Comcast back in 2007 (https://www.eff.org/wp/packet-forgery-isps-report-comcast-affair) until the US FCC told them to stop (https://hraunfoss.fcc.gov/edocs_public/attachmatch/FCC-08-183A1.pdf).”
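
dpg’s test is easy enough to reproduce if you control the server end. Something like this (the interface name is illustrative) will show any resets arriving mid-session, and a spoofed RST will often betray itself with a TTL that differs from the rest of the packets in the stream:

# watch the submission port for TCP resets
tcpdump -ni eth0 'tcp port 587 and tcp[tcpflags] & tcp-rst != 0'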

Then 9 days ago, dpg posted this message:

“I finally got to the bottom of this. I was contacted by T-Mobile technical support today and was told that they are now actively looking for and blocking any TLS-secured SMTP sessions. So, it is a deliberate policy after all, despite what the support staff have been saying on here, twitter and on 150. They told me it is something they have been rolling out over the last three months – which explains why it was intermittent and dependent on IP address and APN to begin with.

So, the only options for sending email over T-Mobile’s network are:
– unencrypted but authenticated SMTP (usually on port 25)
– SSL-encrypted SMTP (usually on port 465)
– unauthenticated and unencrypted email to smtp.t-email.co.uk

TLS-encrypted SMTP sessions are always blocked whether or not they are on the default port of 587.”

(As an aside, there is, of course, another alternative. You can ditch t-mobile as your provider and pick one which doesn’t use DPI to screw your connections. You pays your money….)

Following this, a new poster called “mickeyc” said this:

“I’ve been experiencing this exact same problem. I run my own mail server which has SSL on port 465 and also uses TLS on port 587. I used wireshark to confirm that the RST packets are being spoofed. This is the exact same technology used by “The Great Firewall of China”. I have two t-mobile sims. One is about a year old and doesn’t experience this problem (yet), one is a few weeks old and does.”

He went on to say that he had also experienced problems with his OpenVPN connections and would be blogging about the problem (damned bloggers get everywhere) and sure enough, Mike Cardwell did so at grepular.com. That blog post is worth reading because it has an interesting set of comments and responses from Mike appended.

Mike’s post seems to have been picked up by a few others (El Reg has one, and, as Mike himself has pointed out, boingboing.net has a particularly OTT post which seems to say that he is accusing t-mobile of something he clearly isn’t).

Finally, two days ago, dpg posted this:

“I’m pleased to report that T-Mobile is no longer blocking TLS-secured email on port 587. As a follow-up to an email exchange over the Christmas period I was contacted today to say that, contrary to what I had been told previously, it was never a deliberate policy to block TLS-secured outgoing email. There was a problem with some equipment after all, which was resolved yesterday.”

I tried again myself today. Initially, I got the same old symptoms (“lost connection after STARTTLS”) then I rebooted my ‘phone and lo and behold I could send email.

Like Mike, I tend to the cockup over conspiracy theory; it’s more likely for one thing. IANAL, but it seems to me that it would be in breach of RIPA part I, Unlawful Interception, for the telco to intercept my SMTP traffic in the way it seems to have been doing. That is not likely to be a deliberate act by a major UK mobile network provider.

But I’ll still keep an eye on things.

Permanent link to this article: https://baldric.net/2012/01/12/t-mobile-resets-its-policy/

tails in a spin

When I first tested running a tails mirror on one of my VMs, the traffic level reported by vnstat ran at around 20-30 GiB per day. I figured I could live with that because it meant that my total monthly traffic would be unlikely to exceed my monthly 1TB allowance. However, when I checked the stats on that server last week (around the 9th of Jan) I found that I was shipping out around 150 GiB per day and vnstat was predicting a monthly total of close to 3 TB. As the tails admins said when I told them that I would have to shut off the mirror on that VM while I sorted something, “Ooops”. Ooops indeed. I couldn’t chance a massive bill for exceeding my bandwidth allowance by quite that much. The actual stats for 4, 5, 6, 7, 8 and 9 January before I pulled the plug were: 34.23 GiB, 69.14 GiB, 178.31 GiB, 131.68 GiB, 99.05 GiB and 133.27 GiB. It turns out that tails 0.10 was released on 4 January and I hadn’t been prepared. A lesson learned.
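
For anyone wondering, the daily figures come straight from vnstat. Assuming the daemon is already logging the relevant interface, this is all it takes:

# daily traffic totals for the interface the mirror uses
vnstat -i eth0 -d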

Having shut down and had the DNS round robin amended, I turned to finding some way of throttling my traffic so that I could live within my allowance whilst still providing a useful mirror. I scratched my head for a while before stumbling on the obvious: I should be throttling at application level. (Sometimes I find that I miss simple answers because I am looking for complicated ones).

I started out by assuming that I should be using tc and iptables mangling, or something like the userspace tool trickle, all of which looked horribly more complicated than the approach taken by tor (which allows you simply to set an acceptable bandwidth rate, plus an accounting period maximum of some total transfer limit per day, week or whatever). And of course it turns out that my webserver (lighttpd) allows something similar. Just set the server limit to some chosen maximum transfer rate and, if necessary, also impose a per-IP maximum rate. The magic configuration file options are:

# limit server throughput to 3000 kbytes/sec (~24,000 kbits/sec)
server.kbytes-per-second = 3000
#
# and limit individual connections to 50 kbytes/sec (~400 kbits/sec) – NB. I don’t actually use this
# connection.kbytes-per-second = 50
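
For comparison, the tor approach I mentioned looks roughly like this in torrc (the numbers are plucked out of the air for illustration):

# cap the sustained rate, and stop serving once the monthly total is reached
BandwidthRate 3000 KB
BandwidthBurst 3000 KB
AccountingMax 900 GB
AccountingStart month 1 00:00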

I tested this by pulling a copy of the tails iso from one of my other VMs which has a high bandwidth connection and got acceptable (and expected) results. So now I can go back on-line later this month safe in the knowledge that I’m not going to blow all my bandwidth in one week.

Permanent link to this article: https://baldric.net/2012/01/12/tails-in-a-spin/

well it’s not me

xkcd cartoon number 386

With grateful thanks as always to xkcd.

Permanent link to this article: https://baldric.net/2012/01/05/well-its-not-me/

happy birthday trivia

Astonishingly, today is the fifth anniversary of my first post to trivia. So, five years ago on christmas eve, I was writing a blog post. Five years later, it is again christmas eve and what am I doing?

Hmmm.

Permanent link to this article: https://baldric.net/2011/12/24/happy-birthday-trivia/

bah, humbug

At this time of year it is traditional to receive christmas cards from people with whom you may have only infrequent, if any, contact on a normal daily basis. If you are in a relationship, these cards will often be addressed to you as a couple or family, and be signed on behalf of other couples or families. In my case, on opening such cards I often then end up shouting out something like, “Darling, who the hell are Sarah and Jimmy?” and “Did we send them a card?” (as if it mattered.)

In my view, this problem has been exacerbated by the rise of the e-card (an email substitute for those too idle, or too penny-pinching, to go to the trouble of sending actual cards through the real postal system). Maybe I’m becoming more reactionary in my old age (it happens) but e-cards are, in my view, even worse than e-books.

Strange as it may sound, most people I know use their christmas cards as decorative features by hanging them on string around doorways, or placing them on the mantle over the fireplace alongside the christmas tree. What am I supposed to do with a bloody flash animation of a kitten playing with a bauble?

Worse, these e-cards do not usually even come direct from the sender’s (known) email address but via the commercial creator’s website. This means that the email runs the risk of being treated as spam and thus not reaching the intended destination. Or, again, in my case, if they do actually reach their destination and I see an email from some unknown sender with the message “Sarah and Jimmy have sent you the attached e-card in support of save the vegetarian whales. Click here to see it”, it goes straight into the deleted pile unopened.

Hah! Take that! You aren’t going to engineer me into installing your damned trojan.

Merry Christmas.

Permanent link to this article: https://baldric.net/2011/12/24/bah-humbug/

the amnesic incognito live system

Or “tails” if you prefer, is a live CD/USB distribution based on debian which aims to help you preserve your privacy and anonymity when out and about. As the home website says, tails helps you to:

  • use the Internet anonymously almost anywhere you go and on any computer:
    all connections to the Internet are forced to go through the Tor network;
  • leave no trace on the computer you’re using unless you ask it explicitly;
  • use state-of-the-art cryptographic tools to encrypt your files, email and instant messaging.

This is a good thing (TM).

I already have a system at home which allows me to use the tor network whenever I want to be anonymous, but tails allows me to do the same thing when I’m away from that setup. I like the idea so much that I now provide a mirror for the tails distribution to complement my tor exit node. Every little helps.

Permanent link to this article: https://baldric.net/2011/12/20/the-amnesic-incognito-live-system/

tunnelling X over ssh

OK, yes, I know there are probably already a gazillion web pages on the ‘net explaining exactly how to do this, but I got caught out by a silly gotcha when I tried it a couple of days ago, so I thought I’d post a note.

Firstly, X is not exactly a secure protocol, nor is it easy to filter at NAT firewalls, so the ability to tunnel it over ssh is hugely welcome. In fact, ssh can be used to tunnel practically any other protocol you care to name, so it should be your first port of call should you wish to connect to a remote system using an insecure protocol. (I use it to wrap rsync for example).
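
The rsync case is typical. A sketch, with the paths and hostname invented for the example:

# push a local directory to a remote server, with ssh providing the transport
rsync -avz -e ssh /home/me/stuff/ me@server.example.org:/backup/stuff/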

I don’t run X on my VMs (there is no need, they don’t run desktop software) and I had not previously seen the need to run X based graphical programs on those servers. However, a couple of days ago I thought it would be really useful to run etherape on one particular remote server so that I could watch the traffic patterns. Normally I use iptraf (which is ncurses based) when I want to monitor network traffic in real time, but etherape is pretty cool and gives a nice graphical view of your network connections. But it requires an X based gui.

So. I changed the remote server’s sshd_config to enable X forwarding (“X11Forwarding no” becomes “X11Forwarding yes”) and restarted sshd. On my desktop I similarly changed my local ssh_config file to allow X forwarding (“ForwardX11 no” becomes “ForwardX11 yes”) to obviate the need to use the -X switch on the command line. I then installed etherape on the remote server and fired it up only to get the message “Error: no display specified”. Sure enough “echo $DISPLAY” showed nothing. But I had thought (and everything I had read confirmed) that ssh should take care of setting the appropriate display when X11 forwarding was set.
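
For reference, the sum total of those changes is small (file locations may vary by distro):

# on the remote server, in /etc/ssh/sshd_config
X11Forwarding yes

# on the local desktop, in /etc/ssh/ssh_config (or ~/.ssh/config)
ForwardX11 yes

# alternatively, leave the client config alone and use the switch directly
ssh -X user@remote-server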

So I then tried setting a display manually (export DISPLAY=localhost:10.0 on the remote server) and then got the response “Error: cannot open display: localhost:10.0”. So, still no deal. I spent some time scratching my head (and reading man pages) and sent off a query to my local Linux User group in parallel asking for advice. They were gentle with me.

The first, and rapid, response, said:

On the server:

sudo apt-get install xauth

Then disconnect and reconnect the client.

Jobs a good un.

Thank you Brett.

So the moral is, make sure that you have X authorisation working properly on the remote system (check for the existence of $HOME/.Xauthority) if you experience the same symptoms I did.
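
A couple of quick checks on the remote host will tell you whether you are in the same boat:

# is xauth present at all?
command -v xauth || sudo apt-get install xauth
# a successful X-forwarded login should (re)create this file
ls -l $HOME/.Xauthority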

Permanent link to this article: https://baldric.net/2011/12/19/tunnelling-x-over-ssh/

tp-link respond

A couple of weeks ago, I wrote about the problems I had with a TP-Link IP camera. Today I received a comment on that post from a guy called Luke in the TP-Link support team. In that response he apologises for the difficulties I had and promises to investigate further.

His response deserves as wide an audience as my original post, so I am drawing attention to it here.

Thank you Luke for taking the time to comment.

Permanent link to this article: https://baldric.net/2011/11/30/tp-link-respond/

no you can’t have my mobile number

I guess, like me, many parents will have facebook accounts simply as a means of communicating with their kids. In the past I have used my account as a way of finding out what my kids actually do, or what they like in the way of music, for example. This can be more fruitful than attempting a conversation with a grumpy teenager. My kids are no longer teenagers so I don’t use it much these days. However, I tried today to check my son’s page in the hope that it might give me some inspiration for a christmas present. Facebook won’t let me log on unless I give it a mobile phone number.

image of facebook login page

No Zuckerberg, you cannot have my mobile number. And I am seriously pissed off that I cannot now even get to my account to delete it.

Permanent link to this article: https://baldric.net/2011/11/23/no-you-cant-have-my-mobile-number/

the most influential people in UK IT?

This would be funny if it weren’t quite so tragic. A friend of mine has just pointed me to the Computer Weekly “second annual UKtech50” poll of “the definitive list of the real movers and shakers in UK IT – the CIOs, industry executives, public servants and business leaders driving the creation of a high-tech economy.”

The flummery goes on, “Voting has begun to find out who is the most influential person in the UK IT community. Our panel of judges has chosen the shortlist of 50 names, and we want your opinion on who should win.”

So who are these 50 top “movers and shakers” in UK IT? A depressing list of the (maybe) worthy but dull. The sort of list that the President of a local chapter of the BCS might dream up. It even includes the Cabinet Office Minister Francis Maude. I don’t think his CV contains much in the way of technical capability. With one or two exceptions (pick your own) few if any of those listed could be deemed UK IT leaders – influential maybe, but IT leaders? I doubt it.

So let’s take a look at the list of judges. This is where the tragedy is most manifest. Take a look at the bottom of that page – the section headed “Read More”. It says:

People who read this also read…

What is 3G (third generation of mobile telephony)? – Definition from Whatis.com
What is TCP/IP (Transmission Control Protocol/Internet Protocol)? – Definition from Whatis.com
What is cloud computing? – Definition from Whatis.com
What is supply chain management (SCM)? – Definition from Whatis.com

Oh deary, deary, deary me.

Permanent link to this article: https://baldric.net/2011/11/23/the-most-influential-people-in-uk-it/

google buys advertising

In an interesting reverse of the norm, google paid for three full page adverts in the guardian a couple of days ago. Today there is yet another full page ad in the same paper. I assume they have run similar campaigns in other UK newspapers over the past few days. The ads are quite intriguing in that they seem to be addressing potential concerns about the use of well established web technologies. Today’s ad, for example, was about cookies. Each ad points to a google site giving further detail.

These adverts cannot have been cheap. What are they worried about?

Permanent link to this article: https://baldric.net/2011/11/23/google-buys-advertising/

do not buy one of these

Standalone IP cameras have come down in price quite remarkably over the past few years. It is now perfectly possible to get a camera for between £50.00 and £75.00, and this makes them attractive for anyone wanting to set up simple “home surveillance” systems. I bought one recently just to see what I could realistically do with such a beast. I chose the TP-Link TL-SC3130G,

image of TP-Link IP camera

which goes for around £60.00. I bought mine from amazon. I chose this particular camera because, on paper, it looked to have a good specification at a keen price point. According to the TP Link website, the camera’s highlights include:

  • 54Mbps wireless connectivity brings flexible placement
  • Bi-directional audio allows users to listen and talk remotely
  • Excellent low light sensitivity ensures good video quality even in the dawn
  • MPEG-4/MJPEG dual streams for simultaneous remote recording and local surveillance

plus an impressive list of protocol capabilities all in a reasonably compact and attractive hardware package.

When the camera arrived I was pleased to find that the hardware was indeed quite solid and attractive. Such a shame I can’t say anything good about the software though.

As you would expect, I had to first configure the camera over a wired link. By default the camera comes up on 192.168.1.10. The login credentials are the usual “admin/admin” – which is the first thing you should change, but sadly I’ll bet that few people bother. The web interface presents the user with a set of configuration menus on the left of the screen and an image taken from the camera towards the centre of the screen. The software assumes that the user has IE and ActiveX running so for those of us with more sensible setups, some of the configuration and control options on the camera (such as snapshot, zoom and audio volume control) are unavailable. No matter, the important thing from my point of view, and the reason I bought this camera rather than its slightly cheaper brother, the SC3130, is the supposed wireless capability. At first sight, the camera and network configuration options look surprisingly comprehensive. In fact, I’d go so far as to say that the list of options available might confuse a user who had little networking experience. For example, besides the obvious options to set new static IP addressing or change to DHCP, you can change HTTP, RTP and RTSP ports, set up multicast streaming, change the multicast address, change the ports used for video and audio streaming, set viewer authentication, set the camera to use PPPoE and dynamic DNS and even send users an alert via email containing the new network settings (such as IP address) should these change. Of course, in order to do so the user must first configure email on the camera. Altogether an impressive looking range of capabilities. Again, such a shame they don’t all work.

Annoyingly, the web interface sometimes simply refused to accept changes, or the system reset the changes after reboot. I first noticed this when changing the camera’s clock setting to sync with the time on my PC. It simply refused. NTP worked eventually, but it tended to stop working for no apparent reason. But by far the worst fault was in the WiFi stack. WiFi configuration options were all accepted and it was soon possible to connect wirelessly both to configure the camera and to view either a video stream or a still image. However, as soon as the wired connection was removed, both interfaces went down. Nor was it possible to connect wirelessly if the camera was booted without a cable inserted. Now it is pretty pointless to have a WiFi camera that insists on having a wired connection present as well, and I couldn’t believe that no-one had tested this, so I assumed that there was some way to get the thing working. Besides, I hate being beaten. So I spent what was, on reflection, a disproportionately silly amount of time playing with various configuration options (DHCP vs static addressing, various combinations of UPnP and no UPnP (which involved me changing my router configs as well), changing various network port numbers), all to no avail. I searched the manufacturer’s website in case there was a new firmware image I could try, but that was a waste of time because the image on the website (1.6.17 dated 29 October 2010) was older than the firmware on the camera (1.6.18 dated 17 March 2011).

After trying umpteen variations of settings, at one point the camera froze completely and refused to boot. I had to resort to a hardware reset to get the thing back up again. Here it got weirder still. The camera came back up on 192.168.1.97 and not the default 192.168.1.10 (I found it with a sniffer). God help the average punter trying to get this thing to work.
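
For what it’s worth, the sniffing step is easy to reproduce. Assuming arp-scan is installed, something like this will turn up a freshly-reset device whatever address it has decided to adopt:

# scan the local subnet for live MAC/IP pairs
sudo arp-scan --localnet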

I sent it back, and amazon refunded my money. Do yourself a favour. Don’t even think about buying one.

Permanent link to this article: https://baldric.net/2011/11/16/do-not-buy-one-of-these/

ubuntu de-throned

For the first time since early 2005, Ubuntu has fallen off the top spot on distrowatch. The new number one, by page hit ranking, is Linux Mint.

I’m not at all surprised.

Permanent link to this article: https://baldric.net/2011/11/09/ubuntu-de-throned/

do I trust this site?

Following a visit to EFF to read an article on e-book privacy, I was met with this:

image of SSL certificate view

So. EFF uses a wildcard SSL cert issued by a company which was breached earlier this year.
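
If you want to inspect a site’s certificate yourself, without relying on the browser UI, openssl will show the subject and issuer directly. A sketch:

# fetch the server cert and print who it was issued to and by
echo | openssl s_client -connect www.eff.org:443 2>/dev/null | openssl x509 -noout -subject -issuer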

Permanent link to this article: https://baldric.net/2011/11/09/do-i-trust-this-site/

dis-unity

The reaction to Ubuntu’s move to Unity seems to be getting wider coverage. Over at LWN, Bruce Byfield blogged recently about the rift between the Ubuntu developers and its users. In particular he highlights Tal Liron’s entry to the Ubuntu launchpad bug wiki under bug number 882274. In that entry, entitled “Community engagement is broken”, Liron gently rebukes the developers for their apparent lack of engagement with the community, saying:

“The bug is easy to reproduce: open a Launchpad bug about how Unity breaks a common usage pattern, and you get a “won’t fix” status and then radio silence. The results of this bug are what seems to be a sizable community of disgruntled, dismayed and disappointed users, who go on to spread their discontent and ill will.”

Both Liron’s bug entry (and the subsequent commentary) and Byfield’s analysis of that discussion bear reading. I found myself frustrated by the obvious lack of understanding of (and impatience with) Liron’s position apparent in Mark Shuttleworth’s responses. Byfield concludes that:

“[Shuttleworth] sounds impatient, resorting to personal attacks and invoking his personal authority or the necessities of design or standard practice instead of offering explanations. At times, he seems to address issues that at best approximate what others in the discussion are saying. Exactly why this change has happened is uncertain, but it adds a sting to Shuttleworth’s once-humorous title of Benevolent Dictator for Life.”

Meanwhile, over at El Reg, Liam Proven offers his analysis of the Ubuntu upheaval. In that article, Proven describes the differences between GNOME 3, GNOME 2 and Unity and explains how these changes (or more properly, the management of these changes) have led to the difficulties now facing a wide range of users. Proven concludes:

“Ubuntu is gambling that Unity will attract floods of new Linux users in such numbers as to outweigh those abandoning it for its spin-offs and rivals. If it’s correct, then Ubuntu will continue its rise to near-total dominance of the Linux desktop. But if it’s wrong, it will leave the Linux world more fragmented than ever.”

In my view Ubuntu (or more precisely Canonical and Shuttleworth himself) is wrong and will regret this decision not to engage properly with its user base. I don’t blame them for changing the desktop; after all, the GNOME developers have forced that change upon them. But I do agree strongly with Liron’s position. Ubuntu would do well to listen more.

And in a nice summary of Xfce, Scott Gilbertson today explains why previous GNOME users are moving to that desktop in the wake of the GNOME 3 and Unity changes. It seems I’m in the company of a growing number of other users.

Permanent link to this article: https://baldric.net/2011/11/09/dis-unity/

I prefer the chip wrapper version

My newspaper of choice is the Guardian. Recently they were forced to increase the cover price and ever since have been running a series of advertisements for various forms of subscription which will lower the cost from some £35.00 pcm (if you include its sister paper the Observer on sundays) to as little as £9.99 if you go for the kindle option.

The economic case for change is unarguable. A saving of over £20 a month, plus you get the “paper” delivered to your breakfast table in seconds over the airwaves. No need to go out in the rain down to the shop to pick up a copy (I live in the sticks and the local shop won’t deliver to us). No disappointment when they have sold out (it happens). No waste paper as I immediately bin the sports section. No waste paper when I eventually discard the bits I do read. And it would mean that I actually use the kindle as something other than a rather expensive paper weight. The Grauniad even kindly offered a two week free trial if you signed up.

So I tried it. I really did. But it just didn’t work for me.

To be fair, practically all the editorial is there. And the layout is pretty good. Down the left-hand side of the screen you see the headings for the main sections – Top Stories, UK News, International, Financial etc. – whilst on the right-hand side you are given the headlines for each of the main stories in each section. The layout also makes good use of the kindle’s navigation features so it is easy to skip from one article to another or even from one section to another. But it lacks that essential aesthetic which makes a good newspaper. I’m sorry, but the medium is the message – at least it is over the breakfast table.

I don’t read a newspaper in serial, article-after-article, front-to-back form. I skip about. First I throw away that useless sport section. Then I start at the back of G2 with Steve Bell, and flick back two pages to Doonesbury. What? No Steve Bell? No Doonesbury? Oh dear. I then flick to the front of G2 while my tea is brewing and I munch my cornflakes whilst reading whatever takes my fancy. (Note to non-Guardian readers: the G2 is a tabloid-sized insert to the main paper. It contains little in the way of editorial and much in the way of entertainment. Ideal breakfast fodder.) G2 section finished (normally about the time I have finished my breakfast), I can retire to my armchair with my second cup of tea and the main paper.

And I don’t read that in serial fashion either. I skip about. I scan the pages for something I want to read first, then read that before scanning for something else. Doing so gives me a good feel for the main issues of the day. I’ll see the obvious front page article – get a paragraph or two under my belt, then flick through for further details in other articles before going back to read the main news in detail. On the way I will inevitably be exposed to advertising (none in the kindle version) and will see a wide range of pictorial editorial content (very little in the kindle). And I can fold the paper to match what I am looking at. And it doesn’t weigh much. And it has two crosswords, plus the sudoku and more Steve Bell!

I’m sorry, but a newspaper is more than just the sum of its content. I think I’ll carry on wasting 20 quid a month. And throwing away the sport section unread. And my local newsagent will continue to benefit from both the sale of the paper itself and any incidental purchase I may make whilst I am there. They wouldn’t get that if I carried on with the kindle.

Permanent link to this article: https://baldric.net/2011/11/08/i-prefer-the-chip-wrapper-version/

fully minted

After exploring the alternatives to Ubuntu, I finally settled on Linux Mint Debian Edition (LMDE) running Xfce as the desktop. I am now Ubuntu free and have a desktop that looks the way /I/ want it to look rather than the way some design nut wants it to look. I am also hopeful that the desktop will stay that way in future.

My main desktop now looks like this:

image of linux desktop

and my netbook looks like this:

image of linux desktop on my netbook

I chose LMDE rather than Xubuntu partly out of pique with the way Canonical is taking Ubuntu, and partly out of a genuine desire to move to a distro which is closer to the ideals of the FOSS community which Ubuntu used to espouse and which Debian always has done. For me, LMDE now offers the best compromise between a truly useable modern desktop (with all that implies for proprietary codecs) and the purity and stability of Debian. I know where things are in Debian and I much prefer the Debian package manager to RPM (which immediately rules out Fedora or SUSE). Having now spent some time playing with Xfce I find myself surprised that I didn’t move to it much earlier. It is clean, relatively lightweight, fast and eminently configurable.

On my main desktop machine (which is running the 64 bit version to take full advantage of the 8 Gig of RAM I have installed) everything works as it should – even the dreaded flash (yes, I occasionally watch youtube). On the netbook (32 bit version) everything except the RHS card reader works. Hot plugging works on the left, and the right /will/ work if there is an SD card in place on boot. (But no, I /still/ can’t read Sony memory sticks. I have sort of given up on that now anyway since I no longer use the PSP to watch videos.)

Now to convert my wife.

Permanent link to this article: https://baldric.net/2011/11/06/fully-minted/

there is no version 7

This week’s BOFH in El Reg rings horribly true:

“I JUST WANT MY MENU BACK!”

“You mean you don’t like the ribbon? It’s new!”

“I don’t care if it’s new – I can’t find anything!”

Back when I was a sysadmin we used to call users a “test load”.

Permanent link to this article: https://baldric.net/2011/11/04/there-is-no-version-7/

time to ditch ubuntu?

I’ve used Ubuntu on my desktops/laptops and netbook for some time now. I think my first installation was 6.06 (the release that would have been 6.04 had it not slipped by two months) and my desktops currently all run 10.04 LTS. I got over the minor irritation of the move of the window control buttons from the top right to the top left (a la Mac OSX). But I disliked the first version of 10.10 I tried on the netbook (sporting an early version of the unity desktop) so much that I quickly switched it back to 10.04.

I have used the LTS versions of Ubuntu because, in my view, they provide the best trade-off between bleeding edge and stability. I’m a huge fan of Debian and use it on my servers and slugs, but Debian is too conservative (and too purist about non-free software such as multimedia codecs) to make it a truly attractive OS for the modern desktop without a lot of additional work. So the fact that Ubuntu was based on Debian, but with a rather faster release schedule and added usability, has made it an obvious choice for some time. And it has become hugely popular. It still ranks number one at distrowatch and there are many other distributions which are based upon it. But Canonical have been taking some controversial decisions of late, many of which have split the user base.

After trialling the unity desktop in the netbook edition of Ubuntu 10.10, Canonical merged the netbook and desktop versions into one with 11.04. This meant that users upgrading from an earlier (GNOME based) version were suddenly faced with a radically different looking desktop. The GNOME desktop (called Ubuntu classic) was still available as a fallback from unity in 11.04, but from the latest release (11.10) this is no longer the case; instead you get a 2D version of unity. So, you have unity or you have a worse version of unity.

Ubuntu may be using the GNOME libraries (and it is now using the GNOME 3 libraries rather than those for GNOME 2 as it did when unity was first launched) but many people, myself included, cannot understand why Canonical did not simply work with the GNOME project on version 3. But Canonical have form here. As a company they have been criticised many times in the past for taking rather too much from the FOSS community and not putting enough back. Without Debian, Ubuntu would never have existed. Ian Murdock (the “ian” in Debian) himself expressed concern some time ago that the Ubuntu codebase could diverge too much from Debian unless Canonical developers pushed changes back into the upstream projects. Furthermore, unlike those of companies such as Intel and Redhat, Canonical developers seem to be almost entirely absent from the linux kernel development community. An interesting, indeed almost comical, statistic emerged recently showing that Microsoft was the fifth most productive contributor to the Linux 3.0 kernel, behind only Redhat, Intel, Novell and IBM. As admin magazine notes however, this position owes much to the fact that Microsoft employee K. Y. Srinivasan made 343 changes. Most of those changes were to clean up the code implementing a driver for Hyper-V virtualization. But this is just a statistical blip – I fully expect Microsoft to drop out of the top five, or even top twenty five, shortly.

Canonical also got into a spot of bother when they ditched the GNOME audio player Rhythmbox in favour of Banshee. Rhythmbox is decidedly “free software” and links users to free music downloads from Jamendo and paid-for music from Magnatune, whilst Banshee looks far more commercially oriented (it linked to Amazon’s MP3 store for downloads in mid 2010, and Canonical used it to link to its own Ubuntu One music store in the 11.04 release). Such decisions can upset people (and make Canonical begin to look like Apple). If they introduce any form of DRM then there will be hell to pay.

With the release of 11.04, Ubuntu Studio, the Ubuntu based distro aimed at multimedia creators, defaulted to retaining GNOME in preference to unity, saying in its release notes “Ubuntu Studio does not currently use Unity. As the user logs in it will default to Gnome Classic Desktop (i.e. Gnome2)”. Shortly thereafter, in May of this year, Scott Lavender, the project lead for Ubuntu Studio announced that they would move away from unity (and GNOME) and use the lightweight Xfce desktop as the default environment in future.

Criticism of Ubuntu (and of Canonical the company) has become so loud and frequent of late that Jono Bacon, the Ubuntu Community “spokesman”, reacted by founding openrespect.org, apparently as a means of deflecting some of that criticism. The openrespect website says:

“OpenRespect was founded out of a concern that discussion and discourse in the Open Source, Free Software, and Free Culture community has become a little too fiery and flamey in recent years. The goal of OpenRespect is simple: to provide a simple declaration that distills some of the core elements of showing respect to other participants in discussions.”

But as itwire points out, the timing here is rather odd since it is only now “when Canonical has its feet held to the fire, we have a new website called OpenRespect.org registered and volumes of spiel being generated by Bacon.” Quite so.

Jono Bacon has also popped up in a variety of fora getting all defensive about Canonical’s design decisions. He even fronted an article in the July 2011 issue of LinuxFormat magazine where he “interviewed” four key players at Canonical (including Mark Shuttleworth). That interview included such unbiased questions as “Unity is an exciting new vision. What are your goals and inspirations?” Worse, the article did not bother to mention that Bacon was a key Canonical employee.

I have no doubt that Canonical will make unity work. The installed base of Ubuntu users is so large that developers will be forced to make it work, but I don’t have to like it. My problem is that GNOME itself has also changed radically in the move from 2.30 to 3.0. And I don’t like that either. I find myself in good company though: back in July of this year, Linus Torvalds called GNOME 3.0 an “unholy mess” and announced that he was ditching it in favour of Xfce. Although unlike Linus, I never liked KDE, even before the KDE 4 debacle.

Permanent link to this article: https://baldric.net/2011/10/19/time-to-ditch-ubuntu/

I don’t have a start menu

You know those irritating conversations you have with “support” staff at your ISP whenever you have a problem which is even slightly off their script? Well xkcd has a solution. Use the code word “shibboleet”. It might work. Nothing else ever does.

XKCD cartoon number 806

With thanks as always to xkcd

Permanent link to this article: https://baldric.net/2011/10/19/i-dont-have-a-start-menu/

clarity is not a virtue

Picking up my copy of the second edition of K&R, I was reminded of the old obfuscated c contests. One of the earliest (anonymous) entries was this tribute to K&R’s famous printf("hello world\n");

—————————————————————————-
int i;main(){for(;i["]<i;++i){--i;}"];read('-'-'-',i+++"hell\
o, world!\n",'/'/'/'));}read(j,i,p){write(j/p+p,i---j,i/i);}
—————————————————————————-
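
Should you feel the urge to try it, something like this may coax it past the compiler (an assumption on my part: the K&R-isms draw warnings at best, hence suppressing them):

# compile quietly and run (a sufficiently forgiving compiler assumed)
cc -w -o hello hello.c && ./hello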

c is not called a flexible language for nothing.

Permanent link to this article: https://baldric.net/2011/10/14/clarity-is-not-a-virtue/

mountain streams

In response to the news of Dennis Ritchie’s death, Ted Harding, a long time member of the anglia linux users group posted an interesting comment to the list this morning. Ted has kindly given me permission to link to that comment. Like Ted, I too hope we shall be seeing proper tributes to both Dennis Ritchie and the elegance of his creations. Sadly I feel that the mainstream media may pay less attention to the passing of this man than he deserves.

Addendum

The guardian ran a reasonable obituary profile of dmr on 13 October 2011. And on 16 October, John Naughton wrote a good piece for the Guardian’s sister paper, the Observer. In that article, Naughton says:

“It’s funny how fickle fame can be. One week Steve Jobs dies and his death tops the news agendas in dozens of countries. Just over a week later, Dennis Ritchie dies and nobody – except for a few geeks – notices.”

Quite.

And Linux Magazine posted a nice article by Jon “maddog” Hall.

Permanent link to this article: https://baldric.net/2011/10/14/mountain-streams/

a double googol

It seems that google has lost a recent battle to wrest control of the goggle.com domain away from its owner. I wonder if they’ll want to have a go at me next.

Permanent link to this article: https://baldric.net/2011/10/13/a-double-googol/