we’ve moved

As I mentioned in the last post, I decided to move trivia from its old home on a shared hosting platform to my own VPS at bytemark. I also mentioned that this was proving trickier than it should – for no real good reason. However, the move is now complete and the blog is now entirely under my control. So if anything goes wrong, I have only myself to blame.

So why did it take so long? Apart from the fact that I went on holiday immediately after the last post, the main reasons are twofold: firstly, the difference in versions between that on my old host (2.2.1) and the current release (2.8.4) was sufficiently great to make the upgrade process trickier than it need have been; and secondly, and more importantly, my old provider’s DNS management process was less than helpful.

Before committing to the move, I naturally tested the installation and migration first on my new platform. This raised the problem of how I could install as “baldric.net” without clashing with the existing blog (I didn’t want to install under a different domain name for fairly obvious reasons). Changing my local DNS settings to point the domain name at my new IP address solved this problem (changing /etc/hosts would also have worked) but that meant that I could not have both old and new blogs on screen for comparison at the same time. Irritating, but not ultimately an insuperable problem. In moving to 2.8.4 I discovered that none of my (blogroll) links migrated properly and I had to recreate them all by hand. This took rather longer than I had anticipated, but it proved a useful exercise because I found some broken links in the process. They are currently still broken but at least I know that and I’ll fix them shortly. Because I use lighttpd and not the more usual apache I also had to address the problem of getting permalinks to work properly, but that didn’t prove too difficult – I’ll cover that in a separate post about wordpress on lighttpd.
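Incidentally, for anyone wanting to test a migration the same way, the /etc/hosts approach amounts to a single line on the client machine (the address below is merely a placeholder for the new server’s real IP):

192.0.2.10    baldric.net www.baldric.net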

Having got the new installation up and running to my satisfaction, I now wanted to point my domain name at the new blog. This is where I ran into some oddities in the way 1and1 set up their blog hosting and domain management. Ordinarily it is pretty easy to switch the A record for a 1and1 hosted domain (I have several) from the default to a new address. Not so if you have a blog hosted on that domain – the domain becomes “unmodifiable”. Technical support were initially not particularly helpful since they didn’t seem to understand my problem (and there were worrying echoes of my experiences with BT “support”). But this simply reaffirmed my belief that I was better off controlling my own destiny in future.

Eventually I was told that the only way I could unlock the domain to allow me to point to a new A record was to a) move the blog to a new domain (tricky if you don’t have one, and a pretty dumb idea anyway) or b) delete the blog (an even dumber idea if, like me, you are cautious enough to want to test the transition before committing). In the end I decided to move the blog to a spare domain. I’ll delete it in the next week or so. Meanwhile, if you find an apparent duplicate of trivia on a completely different domain, you know why.

Permanent link to this article: https://baldric.net/2009/09/09/weve-moved/

wordpress woes

As is common with many blogs, my public ramblings on this site are made possible through the ease of use and flexibility of the mysql/php based software known as wordpress. And again, as is common to much php/mysql based software, that package has vulnerabilities – sometimes serious, remotely exploitable vulnerabilities. When vulnerabilities are made public, and a patch to correct the problem becomes available, the correct response is to apply the patch, and quickly. In the world of mission critical software, or even in the world where your business or reputation depends upon correctly functioning, dependable and “secure” (for some arguable definition of secure) software, it is absolutely essential that you patch to correct known faults. If you don’t, and as a result you get bitten, then your business or reputation, or both, will suffer accordingly.

Yet again, as is common, I have to date used the services offered by a third party to host my blog rather than go to the trouble of managing my own installation. Many bloggers simply sign up to one of the services such as is offered by wordpress itself on the related wordpress.com site. Such sites tend to give you a blog presence of the form “yourblogname.wordpress.com” or “myblogname.blogger.com” etc. Other, usually paid for, service providers such as the one I use offer a blog with your own domain name. Whatever service you choose though, you are inevitably reliant on the service provider to ensure that the software used to host your site is patched and up to date. My own provider uses a template based approach to its blog service. This limits me (and others) to the functionality they choose to provide. In return, I expect them to ensure that the version of software they provide and support is as secure as is reasonably possible to expect for the sum I pay each month.

A couple of recent events have caused me to question this arrangement though and I am now in the process of moving this blog to one of my own servers. Firstly, wordpress itself has recently suffered from a particularly embarrassing remote exploit which allows an attacker to reset the admin password, and secondly, as I discussed in “zf05” below, the servers belonging to some supposedly security conscious individuals were compromised largely because poor patch management practices (amongst other things) left them exposed.

Time to rethink my posture.

I currently have three separate VPSs with different providers and I figure it is time I took responsibility for my own configuration management rather than relying on my current provider (which, incidentally, hasn’t updated its wordpress version for quite some time despite both this current and many earlier security updates being released). However, for a variety of “interesting” and ultimately annoying reasons, this is proving to be trickier than it should be.

I’ll post an update when I have made the transition. Meanwhile, I hope not to see any break in service – unlike the self-inflicted cock-up in transfer of one of my domains.

Watch this space.

Permanent link to this article: https://baldric.net/2009/08/26/wordpress-woes/

zf05

I really missed the old phrack magazine. Some of the “loopback” entries in particular are superb examples of technical nous, complete irreverence and deadpan humour. One of my favourites (from phrack 55) appears in my blogroll under “network (in)security”. I am particularly fond of the observation that details of how to exploit old vulnerabilities are “[ As useless as 1950’s porn. ]”. As I said, sorely missed (but now back in action, issue 66 having appeared over a year after the last release).

It would seem, however, that I have been missing a new kid on the block who follows in phrack’s footsteps. A group called zf0 (zero for owned) appears to publish a ‘zine in the mould of the phrack of old. And their latest release, zf05.txt, has been causing something of a stir because it relates details of the compromise of systems owned and/or managed by some high profile and well known personages such as Dan Kaminsky and Kevin Mitnick.

The ‘zine bears reading. The style is unmistakably “underground” and “down with the kids” and it is (unnecessarily in my view) filled with unix-geek listings of bash history files and suchlike, but its authors still manage to make the sort of pertinent comments that I so loved in phrack.

“It’s the simple stuff that works now, and will continue to work years into the future. Not only is it way easier to dev for simple mistakes, but they are easier to find and are more plentiful.”

How well patched are you?

Permanent link to this article: https://baldric.net/2009/08/02/zf05/

dns failure – a cautionary tale

I recently moved one of my domains between two registrars. It seemed like a good idea at the time, but on reflection it was both foolish and unnecessary. Unnecessary because my main requirement for moving it (greater control of my DNS records for that domain) could have been met simply by my redelegating the NS records from my old registrar’s servers to the nameservers run by my new provider; foolish because it lost me control over, and usage of, that domain for eleven (yes eleven) days. This particular domain happens to host the mailserver (and MX record) for a bunch of my other domains. So the loss of that domain meant that I also lost email functionality on all of those domains as well as on the primary domain in question. Not good. Had I been running a business webserver on that domain, or been completely reliant on the mail from that smtp host, I could have been in deep trouble. As it was, I was simply hugely inconvenienced (neither of my two main domains was affected because I kept the mail for those domains pointed at a different mailserver).

So what happened?

My new provider offers greater granularity of control over DNS records than my main registrar. Moving my DNS to them would give me complete control rather than being limited to creation of a restricted number of subdomains and new MX records. I like control. What I didn’t think through carefully enough was whether I (a) really needed that additional control and (b) really needed to actually change registrar to gain that control. As it turns out, the answer to both those questions is no – but hey, we all make mistakes.

Anyway, having convinced myself that I actually did need to move my domain to the new registrar, the following series of events lost me the domain for those eleven days.

Firstly I tried to use my new registrar’s control panel to initiate the transfer. This failed – for some technical reason which the registrar identified and fixed later. This alone should have forewarned me of impending difficulty, but no, I pressed ahead when the tech support team offered to initiate the transfer manually. I accepted.

Secondly, I created the necessary new DNS records on the new registrar’s DNS servers ready for the transfer. Naively, I believed that once the old registrar surrendered control, my new registrar’s servers would be shown as authoritative and I would have control. I also believed (again naively and incorrectly as it happens) that my old registrar would maintain its view of my domain until the delegation had switched.

Thirdly, I used my old registrar’s control panel to initiate cancellation of registration at their end and transfer to my new registrar. This is where things started to go seriously wrong. As soon as my old registrar had confirmed cancellation at their end, they effectively switched off the DNS for that domain. Presumably this is because they were no longer contractually responsible for its maintenance. But the whois records continued to show that their nameservers were authoritative for my domain for the next six days whilst the transfer was taking place. I confess to being completely bemused as to why it should take so long for this to happen, but I put that in the same category of mystery as to what happens to my money in the time I transfer sums electronically between two bank accounts – slow electrons I guess.

So now the old registrar is shown as authoritative but doesn’t answer. The new registrar has the correct records but can’t answer because it is not authoritative.

Eventually my new registrar is shown in the whois record as the correct sponsor, but the NS records of my old registrar are still shown as authoritative. Here it gets worse. The control panel for my new registrar is still broken and I have no way of changing the NS records to point to the correct servers. So I email support. And email support. And email support. Eventually I get a (deeply apologetic) response from support which says that they were so busy fixing the problem highlighted by the failure uncovered in their automatic process that they “forgot” to keep me (the customer) informed.

Now, whilst neither company concerned covered themselves in glory during this process, on reflection I am reluctant to beat them up too much because I have come to the conclusion that, technical failure aside, much of the trouble could have been avoided if I had thought carefully about what it was I was trying to achieve, and had read and carefully considered the documentation on both companies’ sites before starting the transfer. Documentation about registrant transfer is fairly clear in its warning that the process can take about five or six days. It is also not unreasonable that a company losing contracted responsibility for DNS maintenance should cease to answer queries about that domain (after all, they could be wrong…). OK – the new registrar failed big time in its customer care, but they did apologise profusely and (so far) they haven’t actually charged me anything for the transfer.

What I should have done before starting the transfer was to redelegate authority for the domain from the old registrar’s nameservers to my new registrar’s servers. That way I would not have had the long break in service. In fact, if I had thought about it carefully, I could have simply left it at that and not started the transfer of registrar at all. After all, once authority was redelegated, I would have complete control on my new servers.
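Incidentally, checking where the world currently believes the delegation points is simple enough with dig (example.com standing in for the domain concerned):

dig NS example.com +short

Had I run that during the transfer I would have seen the old registrar’s nameservers stubbornly listed throughout.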

Lesson? Once again, read the documentation. And think. I really ought to know better at my age.

Permanent link to this article: https://baldric.net/2009/08/02/dns-failure-a-cautionary-tale/

zebu update

Well google was clearly the fastest at indexing my blog. It only took a day for my nonsense phrase to appear as number one on their index. Since then, the phrase has appeared in ask, bing, clusty and yahoo, but still not in cuil, despite being visited by their robot on 29 June.

Obvious (and very obvious) conclusion – use google if you want to find absolute rubbish very, very quickly.

Permanent link to this article: https://baldric.net/2009/07/12/zebu-update/

aspire one bios updates

This post is partly for my own benefit. It records some of the most useful references to bios updates for the AAO. My own AAO is actually running a fairly early bios (3114) and deliberately so. I upgraded to 3309 in (yet another futile) attempt to get sony memory sticks to work but found that screen brightness suffered for no perceptible gain in any other functionality. So I reverted. That aside, the references here may be useful to others.

Firstly, there is considerable discussion of the relative merits of various bios versions on the Aspire One User Forum. That site also has a lot of other useful (and sometimes not so useful) discussion points. Broadly, though, I have found that forum a good starting point for searching for AAO related information.

Next up is the very useful blog site run by Macles. Macles gave a very good set of instructions on flashing the bios in this post, which has been added to, and improved with the addition of lots of step by step pictures, by “flung” at netbooktech in this posting. Note however that the link to the acerpanam site supposedly hosting the bios images is broken. The images, along with many other driver downloads, can currently be found by following links on the gd.panam.acer.com pages. Alternatively, I have found the images on the Acer Europe ftp site a good source (though it has yet to show the 3310 release).

No – my sony memory sticks still don’t work.

Permanent link to this article: https://baldric.net/2009/07/05/aspire-one-bios-updates/

tor on a vps

I value my privacy – and I dislike the increasing tendency of every commercial website under the sun to attempt to track and/or profile me. Yes, I know all the arguments in favour of advertising, and well targeted advertising at that, but I get tired of the Amazon style approach which assumes that just because I once bought a book about subject X, I would also like another book about almost the same subject. I don’t much like commercial organisations profiling me (and, incidentally, I find it highly ironic that we in the UK seem to make a much bigger fuss about potential “big brother” Government than we do about commercial data aggregation, but hey).

Sure, I routinely bin cookies, block adware and irritating pop up scripts, and use all the, now almost essential, firefox privacy plugins, but even there we still have a problem. I don’t know who wrote those plugins, I just have to trust them. That worries me. Some of the best known search engines are even more scary if you think carefully about the aggregate information they have about you.

Sometimes I care about the footprint I leave, sometimes I don’t, but the point is that I should be in control of that footprint. Increasingly that is becoming difficult. Besides being tracked by sites I visit, last year’s controversy about BT’s use of phorm is also worrying. If my ISP can track everything I do, then I face another level of difficulty in protecting my fast vanishing privacy.

Besides using a locked down browser, DNS filtering which blocks adware, cutting cookies and all the other tedious precautions I now feel are necessary to make me feel comfortable, I often use anonymous proxies when I don’t want the end site to know where I came from. But even that now looks problematic. If you use a single anonymising proxy, all you are doing is shifting the knowledge about your browsing from the end site to an intermediary. That intermediary may (indeed should) have a very strict security policy. Ideally, it should log absolutely nothing about transit traffic. But if that intermediary does log traffic data and then sells that data to a third party, you may be in an even worse position than if you had not attempted to become anonymous. Back in January of this year, Hal Roberts of Harvard University posted a blog item about GIFC selling user data. If sites such as Dynaweb are prepared to sell user data, then the future for true anonymity looks problematic. As Doc Searls said in this blog posting,

We live in a time when personalized advertising is legitimized on the supply side. (It has no demand side, other than the media who get paid to place it.) Worse, there’s a kind of gold rush going on. Even in a crapped economy, a torrent of money is flowing into online advertising of all kinds, including the “personalized” sort. No surprise that companies in the business of fighting great evils rationalize the committing of lesser ones. I’m sure they do it the usual way: It’s just advertising! And it’s personalized, so it’s good for you!

No, as Searls well knows, it is not good for you.

What to do? Enter tor and privoxy.

I first used tor some years ago in its earlier incarnation as “the onion router” (hence its name) and had used it only sporadically since. The main drawback of the early tor network was its speed, or lack of it. Tor gets its strength (anonymity) from the way it routes traffic.

how-tor-works

Tor traffic passes through a series of nodes before exiting at a node which cannot be linked back to the original source. So tor performance depends on there being a large number of both fast intermediate relays and exit nodes. Since not all tor users are prepared to run relays, let alone an exit node (it can be bandwidth expensive and in the case of an exit node can lead to your system being mistaken for a hostile, or compromised, site) tor can be slow, at times painfully slow. But recently tor has been getting faster as more relay and exit nodes are added. It is now at a state which is probably usable most of the time, so long as you are prepared to wait a little longer than is customary for some web pages to load (and you don’t use youtube…..).

When using tor recently I have tended to follow the well trodden path of local installation alongside privoxy. Because I believe in giving something back to the community if I am gaining benefit, I also set my local configuration to run as a relay. But that caused some difficulty. If we assume that my tor usage was fairly representative of the majority of tor users out there, then the fact that my relay was only operational when my client system was up and running meant that the relay would be seen by the tor network as unstable and probably slow. Indeed, the fact that I had to throttle tor usage to the minimum to stop the network from impacting unduly on my ADSL bandwidth meant that I was not entirely happy with the setup. So I stopped relaying. But that leaves me feeling that I am taking advantage of a free good when I could be contributing to that good.

Some while back I bought myself a VPS from Bytemark (an excellent, technically savvy, UK based hosting company) to run a couple of websites and an MTA. I use it now largely as a mail server (running postfix and dovecot) and the traffic is relatively low volume. That VPS is pretty small (though actually way better specced than some real servers I have run in the past) but I reasoned that I could easily run a tor relay on that machine and then connect to it remotely from my client system. I did, and it worked fine. But I soon found that the tor network seems to have a voracious appetite for bandwidth. Even with a fairly strict exit policy (no torrents allowed!) and some tight bandwidth shaping, I still found that I was using about 2 Gig of traffic per day (vnstat is useful here). Any more than that would start to encroach on my bandwidth allowance for my VPS and possibly impact on the main business use of that server. Monthly rates for VPSs are now less than I pay for my mobile phone contract (and arguably more useful than a phone contract too) so I decided to specialise and buy another VPS just for tor. I now run an exit node on a VPS with 384 MB of RAM and 150 Gig monthly traffic allowance. That server is currently throttled to about 2 Gig of traffic per day, but I will double that very shortly.
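For anyone curious, the throttling and exit policy amount to just a few lines in torrc. A sketch, with purely illustrative numbers rather than my real settings:

RelayBandwidthRate 100 KB       # sustained relay traffic limit
RelayBandwidthBurst 200 KB      # short bursts allowed above the rate
AccountingMax 2 GB              # cap total traffic per accounting period
AccountingStart day 00:00       # reset the meter daily
ExitPolicy reject *:6881-6999   # no bittorrent (on the default ports at least)
ExitPolicy accept *:80
ExitPolicy accept *:443
ExitPolicy reject *:*           # reject everything else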

Now one of the nicest things about running a tor relay is the fact that your own tor usage is masked and you may get better anonymity. I therefore run privoxy on my tor relay and proxy through from my client to that proxy which in turn chains to tor internally on my relay. However, if you simply configure your local client to proxy through to your relay in clear you are allowing your ISP (and anyone else who cares to look) to see your tor requests – not smart. So I tunnel my requests to the tor relay through ssh. My local client has an ssh listener which tunnels tor requests through to the relay and connects to privoxy on port 8118 bound to localhost on the relay. I also have a separate browser on my desktop which has as its proxy the ssh listener on my client system. For a good description of how to do this see tyranix’s howto on the tor wiki site. Now whenever I want to use tor myself I simply switch browser (and that browser is particularly dumb and stripped, and has no plugins or toolbars which could leak information). Of course, should I get really paranoid, I could always run the local browser in a VM on my desktop and reload the VM after each session.
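The tunnel itself boils down to a single ssh command on the client (relay.example.org standing in for my real relay):

ssh -f -N -L 8118:127.0.0.1:8118 me@relay.example.org

The local browser’s HTTP proxy is then set to localhost:8118, and privoxy on the relay chains to tor with the usual “forward-socks4a / 127.0.0.1:9050 .” line in its config.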

But I’m not that paranoid.

Permanent link to this article: https://baldric.net/2009/07/05/tor-on-a-vps/

wild xenomanic yiddish zebu

This post is an experiment. I have noticed from my logs that several different search engines index this blog. Ironically, the indexing has sometimes given me a little trouble when I have been searching for topics of interest to me (such as fixing the sony memory stick problem on my AAO) and my search returns my own blog as one of the answers.

So I thought I’d try to find out how quickly I could get a uniquely high rank in a range of search engines. Hence the meaningless phrase above. I shall try searching for that exact string across a range of engines over the next few weeks. At the time of posting I cannot find any page containing this phrase.

Permanent link to this article: https://baldric.net/2009/06/28/wild-xenomanic-yiddish-zebu/

jaunty netbook remix DVD iso

My daughter saw my netbook the other day and decided that she wanted UNR on her Tosh laptop to replace the 8.04 hardy I had installed for her (no-one in my family is allowed a proprietary OS – this occasionally causes some friction).

Anyway, the old Tosh she uses (which has seen various distros during its life) initially presented me with something of a challenge when she asked for UNR – it cannot boot from a USB stick. I couldn’t find an iso image of jaunty-unr so I decided to see if I could build one myself. It turned out to be quite easy. The USB stick image contains all you need to make an iso using mkisofs.

Here’s how:

– mount the USB image;
– copy the entire image structure to a new directory (call it /home/user/temp or whatever, just be sure to copy the entire structure including the hidden .disk directory);
– cd to the new directory and rename the file “syslinux.cfg” in the “syslinux” directory to “isolinux.cfg”;
– rename the “syslinux” directory itself to “isolinux” (see the example commands below);
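Those steps boil down to something like the following (assuming the image file is called jaunty-unr.img and using /home/user/temp as the working directory):

sudo mount -o loop jaunty-unr.img /mnt
cp -a /mnt/. /home/user/temp/    # -a copies everything, including the hidden .disk directory
cd /home/user/temp
mv syslinux/syslinux.cfg syslinux/isolinux.cfg
mv syslinux isolinux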

Now build an iso image with mkisofs thusly:

mkisofs -J -joliet-long -r -b isolinux/isolinux.bin -c isolinux/boot.cat -no-emul-boot -boot-load-size 4 -boot-info-table -o /home/user/outputdirectory/jaunty-unr.iso .

(where /home/user/outputdirectory is your chosen directory for building the new iso called jaunty-unr.iso – and note the trailing “.”, which tells mkisofs to use the current directory as its source)

Now simply burn the iso to DVD.

Permanent link to this article: https://baldric.net/2009/05/12/jaunty-netbook-remix-dvd-iso/

acer aspire one – a netbook experience

I mentioned in an earlier post that I had recently acquired an Acer Aspire One (AAO) netbook. I chose the AAO in preference to any other of the netbooks on the market for two reasons – firstly it looks a lot cooler than most of the competition (particularly in blue – see below), but secondly, and most importantly, the price was excellent. Apparently many people are buying the AAO with Linpus installed only to find that the machine is not compatible with Microsoft software – either that, or they are just not comfortable enough with an unfamiliar user interface to persevere. This is a shame. Linpus seems to have gained a reputation as “Fisher Price Computing” or “My First Computer”. In my view this is unfair. It does exactly what it says on the tin. The default Linpus install boots quickly, connects with both wired and wireless networks with ease and provides all that most users could want from a netbook – i.e. web browsing, email, chat and word processing.

acer aspire one

Whatever the reasons for the returns, this means that there are a fair number of perfectly good AAOs coming back to Acer. Acer in turn is punting those machines back on to the market through various resellers as “refurbished”. Whilst I may be disappointed at the lack of engagement of the buying public with a perfectly usable linux based netbook, from a purely selfish viewpoint this means that I got hold of an excellent machine at well below the original market price. My machine is the AOA 150-Ab ZG5 model. This has the 1.6 GHz N270 atom processor, 1 GB DDR2 RAM and a 120 GB fixed disk. Not so very long ago a machine with that sort of specification (processor notwithstanding) would have been priced at close to £500. I got mine for under £190 including delivery. An astounding bargain.

To be frank, I didn’t really need a netbook, but I’m a gadget freak and I couldn’t resist the added functionality offered by the AAO over my Nokia N800 internet tablet. The addition of a useable keyboard, 120 Gig of storage, and a decent screen in a package weighing just over a kilo means that I can carry the AAO in circumstances where I wouldn’t bother with a conventional laptop. And whilst the N800 is really useful for casual browsing, the screen is too small for comfort, and ironically, I still prefer my PSP for watching movies on the move. So the N800 hasn’t had the usage I expected.

There are plenty of reviews of the AAO out there already, so I won’t add much here. This post is mainly about my experience in changing the default linux install for one I find more useful. As I said above, Linpus is perfectly usable, but the configuration built for the AAO is aimed at the casual user with no previous linux experience. Most linux users dump Linpus and install their preferred distro. Indeed, there is a very active community out there discussing (in typical reserved fashion) the pros and cons of a wide variety of distros. It is relatively simple to “unlock” the default Linpus installation to gain access to the full functionality offered by that distribution, but Linpus is based on a (fairly old) version of Fedora and I much prefer the debian way of doing things. So for me, it was simply a choice between debian itself, or one of the ubuntu based distros.

I run debian on my servers (and slugs) but ubuntu on my desktops and laptops so ubuntu seemed to be the obvious way to go. Some quick research led me to the debian AAO wiki which gives some excellent advice which is applicable to all debian (and hence ubuntu) based installations. Whilst this wiki is typically thorough in the debian way, it does make installation look difficult and the sections on memory stick usage, audio and the touchpad are not encouraging for the faint hearted. I was particularly disappointed at the advice to blacklist the memory stick modules because I actually want to use sony memory sticks (remember my PSP….).

The best resource I found, and the one that I eventually relied upon for a couple of different installations was the ubuntu community page. This page is being actively updated and now offers probably the best set of advice for anyone wishing to install an ubuntu derivative on any of the AAO models.

So, having played with Linpus for all of about two days, I dumped it in favour of xubuntu 8.10. I chose that distro because it uses the xfce window manager which is satisfactorily lightweight and fast on small machines (and because the default theme is blue and looks really cool on a blue AAO – see my screenshot below).

aspire one xubuntu screenshot

By following the advice on the ubuntu community help page for 8.10 installs, I managed to get a reasonably fast, functional and attractive desktop which made the most of the (admittedly cramped) AAO screen layout. I had some trouble with the default ath_pci wireless module (as is documented) so I opted for the madwifi drivers which worked perfectly. The only functions I failed to get working successfully remained the sony memory sticks in the RHS card reader (SD cards worked fine) and the internal microphone.

Further searching led me to the kuki linux site which gives as its objective “a fully working, out of the box replacement for Linpus on the AAO”. I like the objective, but not the distro. However, that distro uses a kernel compiled by “sickboy” which promised to offer full functionality. I tried that kernel with my xubuntu installation (with madwifi wireless drivers) and indeed everything worked – except the sony memory sticks. So I decided to see what else I could do.

By now, I had been using the xubuntu installation for about three or four weeks. However, a fresh visit to the ubuntu community site led me to consider testing jaunty in a “netbook remix” form. I had earlier dismissed this option at intrepid (8.10) because it looked too flakey and seemed like an afterthought rather than a well considered desktop build. I was pleasantly surprised at the look of the “live” installation of the beta of jaunty-nr so decided to give that a go for real. Jaunty comes with kernel 2.6.28-11 which is pretty much up to date and I guessed that playing with that might give me the complete distro I wanted. I was also quite taken with the desktop itself which makes the most of the limited AAO screen real estate by dispensing with traditional gnome panels and virtual desktops and offering a “tabbed” application layout akin to that used in maemo on the Nokia N800. So, following the instructions on the UNR wiki, I downloaded the latest daily snapshot from the cd-image site, and made a new USB install stick. (Note that it is worth using decent branded USB sticks here; I had no problem with a 4 Gig Kingston stick, but an unbranded freebie I tried was useless both here and in my earlier installations.) My newly installed desktop looked as below.

aspire one ubuntu-nr screenshot

(Not blue, so not quite so cool, but hey.)

Note that the top panel only shows some limited information (battery and wireless connection status, time/date etc) whilst the left side of the screen shows menu options which would traditionally be given as drop down options from a toolbar and the right hand side of the screen is taken up by what would normally be called “places” (directories etc) on a standard ubuntu desktop. The centre of the screen gives the icons for the applications in the currently highlighted left side menu option. The overall effect is quite attractive and very easy to read. Selecting any application opens that application in full screen mode. Opening several applications leaves you with the latest application at the front and the others available as icons on the top panel. See my desktop below with firefox open as an example.

baldric UNR screenshot

The completed installation with the default kernel does not allow for pci hotplugging and the RHS card reader doesn’t work. This is a retrograde step, but conversely, wireless worked properly with the atheros ath5k module (not the ath_pci module) in my snapshot. Earlier snapshots included a kernel in which the acer_wmi module had to be blacklisted because it conflicted with the ath5k wireless module. The fix for the pci hotplug problem involved passing “pciehp.pciehp_force=1” as an option to the kernel at boot time. Whilst this fixed the RHS card reader failure, I still couldn’t get the damned reader to recognise my memory sticks.
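For reference, with grub legacy that just means appending the option to the relevant kernel line in /boot/grub/menu.lst, along these lines (kernel version and root device are illustrative):

kernel /boot/vmlinuz-2.6.28-11-generic root=UUID=xxxx ro quiet splash pciehp.pciehp_force=1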

So having found a distro I really like and can probably live with long term on my AAO, I need to address the remaining problems. Given the range of problems I was facing whichever distro I chose, I decided to bite the bullet and compile my own kernel. It was clear to me that the main problems with wireless were conflicting modules so it seemed obviously better to build a kernel with only the modules required rather than include modules which only had to be blacklisted. Similarly, the requirement to pass boot time parameters to the kernel meant that pci hotplug support and related code wasn’t modularised properly. It is pretty difficult to load and unload kernel modules which don’t exist.

It is some time since I last built a linux kernel (I think it was round about 1999 or 2000 on a redhat box) so I had to spend a fun few hours getting reacquainted with the necessary tools. Unfortunately there is a lot of old and conflicting advice around on the net about how best to do this nowadays – certainly the old “make”, “make modules_install”, “make install” routine doesn’t work these days. And mkinitrd seems long gone….. Unusually, the debian wiki site wasn’t as helpful as I expected but the ubuntu community came good again and the kernel compile advice is pretty good and reasonably up to date. Even here though, there are a couple of mistakes which gave me cause to stop and think. So, as an aide-memoire (largely for myself) I documented the steps I followed to get a kernel build for .deb installation images.

For my kernel build I took the latest available stable kernel from kernel.org (2.6.29.1) and used as a starting point the kernel config from jaunty-nr (i.e. the config for 2.6.28-11 as shipped by canonical). The standard ubuntu kernel is highly generic (and as I have found, hugely and unnecessarily bloated with unused options, debug code and unnecessary modules. I may now rebuild the kernels on my standard desktops too.) To some extent this bloat is inevitable in a kernel which is aimed at a wide range of target architectures. Canonical have also done an excellent job of ensuring that practically any kernel module you could ask for is available should you plug in that weird USB device or install a PCI card giving some obscure capability that only 5% of users will ever need. But I want a kernel which works for a particular piece of hardware, and works well on that hardware. In particular I want my sony memory sticks to be available!
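For those who want to follow suit, the sort of sequence involved looks like this – a sketch only, assuming the Debian/Ubuntu kernel-package tools are installed (the “10.00.Custom” in the package name further down is the make-kpkg default, but treat the details here as assumptions rather than my exact steps):

cd linux-2.6.29.1
cp /boot/config-2.6.28-11-generic .config    # start from the shipped jaunty config
make oldconfig                               # answer the questions for options new in 2.6.29
make menuconfig                              # then prune unwanted drivers and features
make-kpkg clean
fakeroot make-kpkg --initrd --append-to-version=-baldric kernel_image kernel_headers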

In configuring my kernel I took out all the obviously unnecessary code – stuff like support for specific hardware not in the AAO (AMD or VIA chips, parallel ports, Dell and Toshiba laptops etc. etc.), old and unnecessary networking code (apple, decnet, IPX etc. etc.) or network devices (token ring (!), ISDN etc. etc.) – sheesh, there is some cruft in there. More importantly I made sure that I built pci hotplug as a module (pciehp) and that the memory stick modules (mspro_block, jmb38x_ms and memstick) were built correctly. I also checked my config against sickboy’s (to whom I am indebted for some pointers when I was unsure of what to leave out). Sickboy has been quite brutal in excluding support for some devices I feel might be useful (largely in attached USB systems) so we differ. I feel that my kernel is still too large and could do with some pruning. But I’ve been conservative where I’m not sure of the impact of stripping out code.

My kernel now works and supports all the hardware in the AAO (well, my AAO as specified above). The only modifications required to get pci hotplug working are:

– add “pciehp” (without the quotes) to the end of /etc/modules to get the module to load.

– create a new file in /etc/modprobe.d (I called mine acer-aspire.conf) to include the line “options pciehp pciehp_force=1” (again without the quotes).
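Both changes can be made in seconds from a root shell:

echo pciehp >> /etc/modules
echo "options pciehp pciehp_force=1" > /etc/modprobe.d/acer-aspire.conf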

This means that both card readers work and are hot pluggable, as are all the USB ports. Wireless works fine, and it restarts after hibernate/suspend. Audio and the webcam also both work fine (though the internal microphone is still crackly; I get better results from the external mike). In general I’m fairly happy with the kernel build. I have posted copies of the .deb kernel image and headers here along with my config if anyone wants to try it out. I’d be grateful for any and all feedback on how it works with various other AAO configs. You can email about this at “acer[at]baldric.net“. I think the kernel should work on any AAO running debian lenny or an ubuntu distro of 8.04 or later vintage (but obviously I haven’t tested that fully). Potential users should note that both my kernel and sickboy’s omit support for IPV6 amongst other things. If you find you have a particular problem (say a missing kernel module when you plug in a USB gadget), please check the config file first. It is almost certain that I will have omitted module support for your okey-cokey 2000 USB widget.

To install the downloaded kernel package, either open it with the gdebi package installer or, from the command line, type:

dpkg -i /path/to/download/linux-image-2.6.29.1-baldric-0.7_2.6.29.1-baldric-0.7-10.00.Custom_i386.deb

(where /path/to/download/ obviously is the path to the directory containing the kernel package).

The package installer will modify your /boot/grub/menu.lst appropriately to allow you to select the new kernel. By default, your machine will boot into the new kernel so you may wish to modify the grub menu to allow you to choose which kernel to boot. I suggest you ensure that “##hiddenmenu” is actually commented out and that you set timeout to a minimum of 3 seconds to give you a chance to choose the appropriate kernel.

I have made every effort to ensure that the kernel works as described, but as always, caveat emptor. If it breaks, you get to keep the pieces. So you really need to ensure that you can boot back into a known good kernel before you boot into mine.

Enjoy.

I would be particularly grateful to hear from anyone who has managed to get sony memory sticks to work. Despite everything I have tried (all the modules load correctly) the bloody things still don’t work on my machine.

Permanent link to this article: https://baldric.net/2009/04/12/acer-aspire-one-a-netbook-experience/

bad science and worse

I’m a big fan of Ben Goldacre’s “bad science” column in the Guardian. He is particularly scathing about quackery and spurious medical science. His views of “Dr” Gillian McKeith in particular are well worth reading.

Whilst I was reading one of his columns recently, I was reminded of another “Dr” who seems to get away with hype and nonsense, one DK Matai “PhD” (though references to actually gaining the PhD are woefully thin these days), chairman of mi2g security. According to the ATCA membership page of the mi2g website:

“ATCA: The Asymmetric Threats Contingency Alliance is a philanthropic expert initiative founded in 2001 to resolve complex global challenges through collective Socratic dialogue and joint executive action to build a wisdom based global economy. Adhering to the doctrine of non-violence, ATCA addresses asymmetric threats and social opportunities arising from climate chaos and the environment; radical poverty and microfinance; geo-politics and energy; organised crime & extremism; advanced technologies — bio, info, nano, robo & AI; demographic skews and resource shortages; pandemics; financial systems and systemic risk; as well as transhumanism and ethics. Present membership of ATCA is by invitation only and has over 5,000 distinguished members from over 120 countries: including 1,000 Parliamentarians; 1,500 Chairmen and CEOs of corporations; 1,000 Heads of NGOs; 750 Directors at Academic Centres of Excellence; 500 Inventors and Original thinkers; as well as 250 Editors-in-Chief of major media. ”

(I think I’m meant to be impressed. Actually, I’m just baffled.)

Not surprisingly mi2g has recently jumped on the banking bandwagon and reinvented itself yet again, this time as a centre of expertise on the finance sector. Back in November 2002, el Reg posted an article about Matai which still bears reading, as does the earlier July article referring to the vmyths commentary on mi2g.

The really depressing point here is that the briefings all seem to come from members themselves. All that ATCA does is recycle the brief with the caveat: “Please note that the views presented by individual contributors are not necessarily representative of the views of ATCA, which is neutral. ATCA conducts collective Socratic dialogue on global opportunities and threats.”

This looks like a wonderfully inventive and highly lucrative variant on the blog theme. According to the ATCA membership pages of the website, I can receive 250 HTML briefings for £2,790.63 (including taxes) “as they are published”. This is the “gold” level of membership. The “bronze” level of membership (for £131.60 (including taxes)) would give me up to 10 HTML briefings “as they are published”. Perhaps readers of this blog would like to pay me similar amounts for something I may, or may not, write in future. I promise that the gold payer will get more than the bronze payer, but that is all.

(Interested readers are invited to do some simple on-line research. Try your favourite search engine with terms such as “hype” “mi2g” “myths” etc.)

Permanent link to this article: https://baldric.net/2009/03/29/bad-science-and-worse/

the strong blue light

The 1TB Toshiba disk I bought a couple of weeks ago to upgrade storage on the slug has one big drawback in my view. Whilst the disk itself is fine, Toshiba have made the mistake of sticking a very intense blue LED on the front panel, presumably because they think it looks “cool”. Well it isn’t, it is just irritating. The damned thing is so bright it lights up the study when the main lights are off. Two strips of black insulating tape seem to have cured it though.

Permanent link to this article: https://baldric.net/2009/03/29/the-strong-blue-light/

so what is a netbook?

Having just invested in an Acer Aspire One (which I’ll write about later), I also enjoyed this FAQ from el Reg.

is it a netbook?

Nice chart.

Permanent link to this article: https://baldric.net/2009/03/16/so-what-is-a-netbook/

a thirteen amp plug just won’t cut it

I normally read the register for its IT tech related reporting – and I enjoy it just because it is a wonderfully scurrilous rag. However, an article about the Swedish supercar maker Koenigsegg’s “Quant”, which el Reg chose to call “Mary”, piqued my interest somewhat. I can’t quite make the arithmetic work out. To quote the article:

“The Mary has a top speed of 275kph (171mph), a 0-62 time of 5.2 seconds, a range of 500km (312 miles) and is powered by two electric motors pumping out a combined 512bhp (381kW) of power and 715nm (527lb ft) of torque.

While Koenigsegg is shy on exact technical details, its press release abounds with interesting ‘facts’ – including the claim that that it will be possible to charge the Mary’s NLV-developed “redox FAES (Flow Accumulator Energy Storage) to full capacity in 20 minutes and give the vehicle a range of 500 kilometres”.”

Now if we unpick that a bit we get the following:

– the car uses 381kW of power at peak – let’s say a maximum 200kW at a sensible cruising speed of 100 kph.
– it can travel for 500 kilometres on one charge.
– it can be charged to capacity in 20 minutes.

Now 500 kilometres at 100 kph is 5 hours travel. Multiply that by 200kW and we get 1000kWh of energy. But it can be charged in 20 minutes – one third of an hour – so the charging power must be three times that figure, i.e. 3000kW.
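To put that in context (assuming a standard UK 230V supply): a 13 amp socket can deliver at most 13A × 230V ≈ 3kW. Charging at 3000kW would therefore need the equivalent of a thousand domestic sockets running flat out – or around 13,000 amps at mains voltage.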

No way can you get that through a 13 amp socket.

Permanent link to this article: https://baldric.net/2009/03/16/a-thirteen-amp-plug-just-wont-cut-it/

upgrading the slug – a lesson in addresses

My ever growing DVD collection has been taking its toll on my disk storage. Despite the fact that ripping a DVD to PSP format typically shrinks it to between 300 and 500 MB, that still means that I have over 300 GB of videos on my PC. Add to that the OGG vorbis audio collection of my ripped CDs and the usual collection of photos and other critical data that I back up to the slug and I was getting perilously close to the 500 GB limit of the attached USB disk. Time for an upgrade.

1TB disks are now appearing on the market at well under £100.00. Ebuyer are currently selling 1TB Toshiba external disks for an astonishing £69.00 inc VAT, but there were none actually in stock this weekend. Fortunately I managed to source exactly the same disk from a local supplier for only a few pounds more than the ebuyer price. Times certainly are hard. I doubt that he made much of a margin on the sale. But it made me happy and he knows I’ll go back there again.

Since I originally built my slugs last year, Debian Lenny has moved from testing to stable, and the latest Debian installer from slug-firmware.net is now “Debian/NSLU2 (armel) 5.0 Stable”. For a while the installer was available in two flavours for the ARM architecture used by the slugs. The old ARM port was called “arm”, whilst the new ARM port using the EABI (see wiki.debian.org/ArmEabiPort) is called “armel”. This port supposedly offers better support for floating point and other features. Both the arm and armel architectures are supported for Lenny (now Debian 5.0) but according to Martin Michlmayr the old arm port will be dropped after this release. So, it looks as if an OS upgrade is necessary now anyway. Unfortunately, there seems to be no easy upgrade path from arm to armel, so a reflash was in order. This took me rather longer than I had anticipated because of a stupid mistake on my part. Lesson – always document any changes you make – even on a small network……

Martin has updated his excellent installation notes to cover both the new image and the installer itself. In that note he says that the installation should take around four hours. Well mine took nearer to six because I couldn’t connect to the damned slug after reflashing with upslug2. The IP address I had previously been using on the slug wouldn’t respond at all. I tried reflashing again, then reflashing with the original Linksys image in the hope that I could then connect to the default Linksys address of 192.168.1.77 and reconfigure from there. Then reflashing again. Nothing worked.

Now, whilst my network is not overly complicated, it is segmented and I use two separate RFC1918 netblocks. I couldn’t recall using the slug on a different netblock to the one I was attempting to install on, but in a “what the hell, it’s worth a try” moment, I unhooked the slug from my internal net and stuck it on a separate switch along with a laptop to test connections. I configured the laptop with my outer net’s address and then ran nmap to scan the entire range hoping to find the slug. No joy.
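The scan itself is nothing clever – just a ping sweep of the whole netblock (the address range below is illustrative):

nmap -sP 192.168.0.0/24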

At this stage I thought that I must have fritzed the slug somehow and was about to give up. But before doing so, I switched back to testing on the original network address – i.e. I reconfigured the laptop to my internal network address range and re-ran nmap. Bingo – up popped the slug. On the same IP address as my main PC on the internal net. I then remembered that I had shuffled some machines around on the internal net and moved from DHCP to static addresses (in a “rationalisation” period a few months back). I had given the slug a new fixed IP address, but had, of course, forgotten that the old IP address would be hard-wired into the slug’s flash memory. The only way you can change this hard-wired address is through the Linksys interface, which of course I was no longer using. My reflash of the slug had removed the IP address I had configured in Debian and left the old, conflicting address, in use. And no, I have no idea why I chose to use the same address for my main PC. A great way to spend a saturday afternoon. Next time, write it all down.

Martin is right though. After the wasted time, the new install took around four hours.

Permanent link to this article: https://baldric.net/2009/03/01/upgrading-the-slug-a-lesson-in-addresses/

party like it’s 1234567890

Unix geeks the world over today celebrate the passing of unix time = 1234567890 at 23:31:30 GMT on friday the 13th of February 2009.

Personally I’ll be asleep.
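If you want to check the arithmetic, GNU date will do the conversion for you:

date -u -d @1234567890

which should print “Fri Feb 13 23:31:30 UTC 2009”.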

Permanent link to this article: https://baldric.net/2009/02/13/party-like-its-1234567890/

are we lost yet

I bought my wife a car SatNav system for christmas. She complained about the voice.

 cheap-gps-cartoon

I can’t understand why.

(My thanks again to xkcd.)

Permanent link to this article: https://baldric.net/2008/12/28/are-we-lost-yet/

small doesn’t have to mean slow

Much as I love my slugs (and low power consumption coupled with almost completely silent running means I love them a lot) I do sometimes need just a little more “grunt” than they offer. I have been running a PHP based webserver together with postfix on an old (actually very old) Compaq Armada 4160T (that’s a laptop dating from the mid 90s – look it up) simply because I happened to have it lying around when I needed to build the mailman listserver I described in an earlier post. Astonishingly that has worked well for some time – if a little too slowly.

So I recently wanted to consolidate some services (and add a few others) currently running on the Compaq and a slug and started looking for some cheap, preferably quiet and small machines which wouldn’t overtax my power bill. There are a number of NAS machines coming up which look as if they would fit the bill once reconfigured to run debian – take a look at debonaras for example – but most of them come with little memory, low CPU power and limited upgrade capability. Worse, they can be quite expensive for what they offer. For example, one of the likeliest candidates, the Thecus N2100, comes with only 128/256 Mb of RAM and a 600Mhz Intel IOP 80219 CPU yet costs around £170. For that I can get a much beefier box.

My first considered alternative was a barebones shuttle – possibly the KPC K45 which could easily take an intel core duo processor and a couple of gig of RAM. Adding a terabyte of disk would give a very useful system. However, a visit to a local specialist supplier convinced me that an Asus P1-P945 would be a better bet. My new Asus is small and quiet. The barebones system cost just over £100. I added an E2200 dual core processor, two gig of Crucial RAM, and a 500 Gig disk for a total of just over £230. A very nice system. In fact, I’m considering using one as the basis for a media center because it wouldn’t look out of place slotted under the TV. Now where’s that mythbuntu disk…..

Permanent link to this article: https://baldric.net/2008/12/27/small-doesnt-have-to-mean-slow/

egroupware mail with dovecot and postfix

I have recently built an egroupware system to be used as a social networking site. The application suite itself is relatively easy to install and configure, but the webmail system it offers (a fork of squirrelmail called felamimail) is rather poorly documented. It took me some time to figure out how to authenticate mail users in IMAP against the egroupware (mysql) database – largely because the documentation doesn’t go into details about the database structure, but also because it doesn’t give any help with choosing or configuring an IMAP server to go with it.
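To give a flavour, the heart of it is dovecot’s SQL passdb. A minimal sketch in dovecot 1.x syntax – and note the assumptions here: that the accounts live in egroupware’s egw_accounts table with account_lid/account_pwd columns (check your own schema), and that the connection credentials are placeholders:

# in /etc/dovecot/dovecot.conf
passdb sql {
  args = /etc/dovecot/dovecot-sql.conf
}

# in /etc/dovecot/dovecot-sql.conf
driver = mysql
connect = host=localhost dbname=egroupware user=dbuser password=secret
default_pass_scheme = MD5
password_query = SELECT account_lid AS user, account_pwd AS password FROM egw_accounts WHERE account_lid = '%u'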

Hence this post. I have documented how I configured the necessary components here.

Enjoy.

Permanent link to this article: https://baldric.net/2008/12/27/dovecot-and-postfix-on-debian/

and yet more DNS lunacy

A company called Unified Root is offering to register new top level domains in advance of the proposed ICANN changes. The company describes itself in the following terms: “UnifiedRoot (Unified Root) is an independent, privately owned company, based in Amsterdam, which makes corporate and public top-level domains (TLDs) available worldwide. Through our own efforts and our collaboration with other leaders in the industry, UnifiedRoot (Unified Root) intends to achieve the free-market, user-driven approach to domain names that was one of the leading principles of the founding fathers of the internet. UnifiedRoot (Unified Root) provides a simple, direct, consistent and comprehensive internet addressing system, enabling governments, businesses, ISPs, and individual “www-users” to provide easier, user-friendly access to their information on the Internet. ”

The company operates a website at tldhomepage.com which markets the new TLDs and describes how users may make use of those new TLDs by becoming “unified”. They even have a useful little button marked “UnifymeNow” which will attempt to modify your DNS settings. Yep, you guessed it – to use this service, you have to point your DNS resolver at servers owned and managed by UnifiedRoot. Whoop de do! Yet another subversion of DNS by a company outside the internet governance process.

Just out of interest I checked the availability of the TLD “.con”. It’s available.

That could be useful.

Permanent link to this article: https://baldric.net/2008/12/24/and-yet-more-dns-lunacy/

more DNS silliness

I came across an interesting post on the Avert labs site recently. That post pointed to an earlier SANS posting, which, in turn, referenced a Symantec discussion of a new Trojan called Trojan.Flush.M. This trojan is an interesting variant of a class of trojans which hijack local DNS settings to force the compromised machine to use a hostile DNS server. The hostile server will then redirect the user to fake sites – usually banks – in an attempt to extract identification and authentication credentials. As the Avert post says, there have been various types of DNS changing tactics employed in the past, but the clever tactic used by this latest trojan is that it subverts the use of DHCP on any network which uses that protocol to manage client system settings. Once the trojan has been installed on a (windows) PC it creates a new service on that PC which allows the machine to send fake DHCP offer packets to any requesting client on the network. The DHCP offer includes the address of a hostile DNS server outside the network. The neat point here is that any client system on the network, regardless of the operating system in use, can then be subverted – and without some network traffic analysis it will be very difficult to find out how the subverted machine was compromised.

But, and this is a big but, the whole attack fails when faced with a properly designed and well managed network. Consider: for the attack to be successful the subverted client must be able to make DNS requests directly to the hostile server. But no corporate network should allow a client system direct access to the net. All DNS requests should be answered by a local DNS server and that server should be the only machine which is allowed to forward DNS requests to the outside world. Indeed, that server should probably only forward DNS requests to specific servers on the company’s service provider network. The bad news of course, is that any home or SOHO network is unlikely to be well designed and protected.
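On an iptables based gateway the egress rules need only be a few lines – addresses and interface names below are purely illustrative, with 192.168.1.53 standing in for the internal DNS server and eth1 for the external interface:

iptables -A FORWARD -o eth1 -p udp --dport 53 -s 192.168.1.53 -j ACCEPT
iptables -A FORWARD -o eth1 -p tcp --dport 53 -s 192.168.1.53 -j ACCEPT
iptables -A FORWARD -o eth1 -p udp --dport 53 -j DROP
iptables -A FORWARD -o eth1 -p tcp --dport 53 -j DROP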

One of the respondents to the Avert post seems to have missed the point entirely though. He said “All the more reason to consider using trusted third party DNS networks, such as OpenDNS.”. Oh dear, that is so wrong in so many ways. Just think that through will you Jason?

Permanent link to this article: https://baldric.net/2008/12/24/more-dns-silliness/

gun, foot, shoot

As a chartered member of the British Computer Society (BCS) I recently received through the post my voting forms for the 2008 AGM. The process gives me the option of voting electronically using a website run by Electoral Reform Services. My security codes (two separate numeric IDs, one of six characters, the other of four) were printed on my personalised letter from the Society. So far so dandy.

However, the following day I received an email from Electoral Reform Services giving me exactly the same information, together with the address of the website where I may cast my votes.

Am I happy? Guess.

Permanent link to this article: https://baldric.net/2008/09/25/gun-foot-shoot/

webanalytics – just say no

I have just built myself a new intel core 2 duo based machine to replace one of my older machines which was beginning to struggle under the load of video transcoding I was placing upon it. The new machine is based on an E8400 and is nice and shiny and fast. Because it is a new build, I decided to install the OS and all my preferred applications, tools and utilities from scratch. Yes, I could have just copied my old setup, or at the least, my home directory and system configuration from my older machine, but I chose to do a completely new clean build on top of a clean install of ubuntu 8.04. I did this largely because my older system has been upgraded and “tweaked” so often I am no longer sure exactly what is on there or why. I am sure that it contains a lot of unnecessary cruft and I felt it was time for a clear out. A new build should ensure that I only installed what I actually needed. Of course I copied over my mail, bookmarks and other personal data, but the applications themselves I simply installed from new and then configured to my preferred standard.

Like most modern linux distros, Ubuntu is pretty secure straight out of the box. Gone are the (good old, bad old) days when umpteen unnecessary services were fired up by init or run out of inetd by default. But old habits die hard and I still like to check things over and stop/remove stuff I don’t want, or don’t trust. I also like to check outbound connections because a lot of programs these days have a habit of “calling home” – a habit I dislike. I noticed and cleared up one or two oddities I’d forgotten about (for example, Ubuntu uses ntpdate to call a canonical server if ntpd is not configured. Since I use my own internal ntp server, this was easy to sort). However, after clearing, or identifying all other connections I was left with one outbound http connection I didn’t recognise, and worse, it was to a network I know to be untrustworthy. The connection was to 66.235.133.2. This machine is on the omniture network. Omniture is notorious for running the deeply suspicious 2o7.net. Omniture market webanalytics services and are used by a whole range of (perfectly respectable) companies who pay them for web usage statistics. But omniture have never successfully explained why they choose to use a domain name which looks like, but isn’t, a local RFC 1918 address from the 16 bit block (e.g. 192.168.112.207). I don’t trust them, and I didn’t like the fact that my shiny new machine was connecting to them. So what was responsible? And what to do?
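The check itself is nothing clever – something like the following lists each established TCP connection together with the owning process:

sudo netstat -tnp | grep ESTABLISHED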

Well, the “what to do” bit is easy – just blackhole the whole 66.235.128.0 – 66.235.159.255 network at my firewall. But that feels a bit OTT, even for me. A bit of thought, and a bit of digging gave me a better solution, and one which incidentally solves a range of related problems. What I actually needed was a way of preventing outbound connections to any hosts I don’t like or don’t trust. So long as the IP addresses of the hosts are not hard coded in the application (as sometimes happens in trojans) the classic way to do this is to simply map the hostname to the local loopback address in your hosts file. But this can become tedious. Fortunately, it turns out that a guy called Dan Pollock maintains a pretty comprehensive hosts file on-line at someonewhocares.org. Result.
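The principle is one line per unwanted host, mapping it to loopback – the names here being merely illustrative:

127.0.0.1    2o7.net
127.0.0.1    112.2o7.net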

Because I run my own local DNS server (DNSmasq on one of the slugs) it was easy for me to add Dan’s host file to my central hosts file. So now all my machines will routinely bin any attempted outbound connection to adservers, porn sites, or whatever in the list. The downside, of course, is that this is a bit of a blunt instrument and may cause some difficulty with some sites (ebay for example). But I’m prepared to put up with that whilst I fine tune the list. I can also pull the list regularly and automatically via cron so that I stay up to date (but of course I won’t just blindly update my DNS, I’ll pull the file in for inspection and manual substitution…..).
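The cron job need be nothing more than a weekly fetch to a holding file for review – paths illustrative, and check the exact download URL on Dan’s site:

0 4 * * 1  wget -q -O /home/slug/hosts-candidate http://someonewhocares.org/hosts/hosts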

So what was making the connection? Well it looks to me as if adobe is the culprit. I had installed the acroreader plugin for firefox.

Silly me. Must remember to avoid proprietary software.

(Oh, and you just have to love omniture’s guidance on how to opt-out of their aggregation and analysis. You have to install an opt-out cookie. Oh yes, indeedy, I’ll do that.)

Permanent link to this article: https://baldric.net/2008/09/12/webanalytics-just-say-no/

french slugs?

In an earlier post I speculated that the CherryPal PC might be an option for users considering replacements for the slug. But that device has yet to hit the streets and is beginning to look suspiciously like vapourware. However, linuxdevices, the site devoted to linux on embedded devices, wrote about the interesting looking french made linutop some months back. The linutop site looks to me as if it is actually taking orders.

linutop

Now if they could just ship one with two ethernet ports, it might make a good base for a firewall.

Permanent link to this article: https://baldric.net/2008/09/12/french-slugs/