Experimenting with Tiny Core Linux on QEMU

In response to a post on the Ubuntu UK Loco mailing list today, I thought the perfect way to produce a cross-platform, stable web server… would be to create a QEMU bootable image of Tiny Core.

So, the first thing I did was to download a Tiny Core image. This I obtained from the Tiny Core Download Page. I then created a 512MB disk image to store my packages on.

qemu-img create tinycore-tce.img 512M

After a bit of experimenting, I ended up with this command to boot TinyCore. At the moment, it’s relatively cross-platform, but will need some tweaking to get to the point where I can do anything with it…

qemu -hda tinycore-tce.img -m 512 -cdrom tinycore-current.iso -boot d -net nic -net user,hostfwd=tcp:127.0.0.1:8008-:80 -vnc 127.0.0.1:0 -daemonize

So, let’s explain some of those options.

-hda tinycore-tce.img

This means: use the image we created before, and attach it as /dev/hda on the virtualised machine.

-cdrom tinycore-current.iso -boot d

Create a virtual CD using the ISO file we downloaded. Boot from the CD rather than any other media.

-m 512

Allocate the virtual machine 512MB of RAM.

-net nic -net user,hostfwd=tcp:127.0.0.1:8008-:80

Create a virtual network interface using QEMU’s user-mode networking, and forward port 8008 on the host’s 127.0.0.1 (so the forward is only accessible from the host machine, not from any other machine on the network) to port 80 on the virtual machine’s dynamically allocated IP address.

-vnc 127.0.0.1:0 -daemonize

This makes the service “headless” – basically meaning it won’t show itself, or need a terminal window open to keep it running. If you want to interact with the system, you need to VNC to localhost. If you’ve already got a VNC service running on the machine (for example, if you’re using Vino under Ubuntu), increment the :0 to something else – I used :2, but you could use anything.

At the moment, because I’ve not had much opportunity to tweak Tiny Core’s boot process, it won’t start running automatically (you have to tell it what to start when it boots), nor will it start any of the services I want from it, so I’ve had to use VNC to connect to it. I’ll be trying out more things with this over the next few days, and I’ll update this as I go.

Also, I’ve not yet tried the Windows qemu packages to make sure the same options all work there, and I’ll probably be looking into the smb option for -net user, so that the data is easily accessible without needing to drop into the qemu session just to upload a few photos. I guess we’ll see :)
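For reference, here’s roughly what that smb variant might look like. I haven’t tested this yet, so treat it as a sketch: the share path is made up, the host needs Samba installed for QEMU’s built-in SMB export to work, and the guest should see the share at \\10.0.2.4\qemu.

```shell
# Untested sketch: same boot line as above, plus an SMB export of a
# host directory (path invented) that the guest sees as \\10.0.2.4\qemu
qemu -hda tinycore-tce.img -m 512 -cdrom tinycore-current.iso -boot d \
  -net nic -net user,smb=/home/jon/tinycore-share,hostfwd=tcp:127.0.0.1:8008-:80 \
  -vnc 127.0.0.1:0 -daemonize
```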

A tip for users who SSH to a system running ecryptfs and byobu

I’ve been an Ubuntu User for a while (on and off), and a few versions back, Ubuntu added two great installed-by-default options (both of which are turned off by default), called Byobu (a Pimp-My-GnuScreen app) and ECryptFS (an “Encrypt my home directory” extension).

Until just recently, if you enabled both and then SSHed to the box using public/private keys, the fact you’d connected and authenticated with keys would be used to unlock the ECryptFS module and then start Byobu. A few months back, I noticed that if I rebooted, it wouldn’t automatically unlock the ECryptFS module, so I’d be stuck with neither having started. A few login attempts later, it was all sorted, but recently this got worse, and now every SSH session leaves me at a box with an unmounted ECryptFS module and no Byobu.

So, how does one fix such a pain? With a .profile file of course :)

SSH in, and before you unlock your ECryptFS module run this:

sudo nano .profile

You need to run the above using sudo, as the directory you land in before you unlock ECryptFS is owned by root, and you don’t have permission to write to it.

In that editor, paste this text.

#!/bin/bash
# Unlock the encrypted home directory, move into it, then start Byobu
ecryptfs-mount-private
cd
byobu-launcher

Then use Ctrl+X to exit the editor and save the file.

The next time you log in, it’ll ask you for your passphrase to unlock the ECryptFS module. Once that’s in, it’ll start Byobu. Job’s a good’n.
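As an aside, if you share the same .profile across several machines, a slightly more defensive version (my own tweak, not from the Ubuntu packages) only runs each step when the session is interactive and the tool actually exists – which also stops scp and rsync sessions from choking on profile output:

```shell
#!/bin/bash
# Defensive variant: skip each step unless we're on an interactive
# terminal and the tool is installed on this machine.
if [ -t 0 ] && command -v ecryptfs-mount-private >/dev/null 2>&1; then
  ecryptfs-mount-private
fi
cd
if [ -t 0 ] && command -v byobu-launcher >/dev/null 2>&1; then
  byobu-launcher
fi
```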

Watching for file changes on a shared linux web server

$NEWPROJECT has a script which runs daily to produce a file which will be available for download, but aside from that one expected daily task, there shouldn’t be any unexpected changes to the content on the website.

As I’m hosting this on a shared webhost, I can’t install Tripwire or anything like that, and to be honest, for what I’m using it for, I probably don’t need it. So, instead, I wrote my own really simple file-change monitor which runs as a cron job.

Here’s the code:

#!/bin/bash
# This file is called scan.sh
# Checksum every file under $HOME/$1 into the running status file
function sha512sum_files() {
  find "$HOME/$1" -type f -exec sha512sum '{}' \; >> "$SCAN_ROOT/current_status"
}
SCAN_ROOT="$HOME/scan"
# Keep the previous run's checksums for comparison (the mv fails
# harmlessly on the very first run)
mv "$SCAN_ROOT/current_status" "$SCAN_ROOT/old_status" 2>/dev/null
for DIR in site_root media/[A-Za-z]*
do
  sha512sum_files "$DIR"
done
# Print only the changed entries; cron mails any output to MAILTO
diff -U 0 "$SCAN_ROOT/old_status" "$SCAN_ROOT/current_status"
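To sanity-check the approach before wiring it into cron, the same checksum-and-diff cycle can be run against a throwaway directory (file names here are invented for the demo):

```shell
# Demo: checksum a scratch directory, change one file, checksum again,
# and confirm diff reports only the entry that changed.
DEMO=$(mktemp -d)
mkdir "$DEMO/site"
echo "version one" > "$DEMO/site/index.php"
echo "static" > "$DEMO/site/style.css"
find "$DEMO/site" -type f -exec sha512sum '{}' \; | sort -k2 > "$DEMO/old_status"
echo "version two" > "$DEMO/site/index.php"
find "$DEMO/site" -type f -exec sha512sum '{}' \; | sort -k2 > "$DEMO/current_status"
# diff exits non-zero when it finds changes, hence the || true
diff -U 0 "$DEMO/old_status" "$DEMO/current_status" || true
```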

And here’s my crontab:


MAILTO="my.email@add.ress"
# minute  hour    day-of-month  month             day-of-week      command
# (0-59)  (0-23)  (1-31)        (1-12 or Jan-Dec) (0-6 or Sun-Sat)
0,15,30,45 * * * * /home/siteuser/scan/scan.sh

And lastly, a sample of the output

--- /home/siteuser/scan/old_status 2010-10-25 14:30:03.000000000 -0700
+++ /home/siteuser/scan/current_status 2010-10-25 14:45:06.000000000 -0700
@@ -4 +4 @@
-baeb2692403619398b44a510e8ca0d49db717d1ff7e08bf1e210c260e04630606e9be2a3aa80f7db3d451e754e189d4578ec7b87db65e6729697c735713ee5ed /home/siteuser/site_root/LIBRARIES/library.php
+c4d739b3e0a778009e0d53315085d75cf8380ac431667c31b23e4b24d4db273dfc98ffad6842a1e5f59d6ea84c33ecc73bed1437e6105475fefd3f3a966de118 /home/siteuser/site_root/LIBRARIES/library.php
@@ -71 +71 @@
-88ddd746d70073183c291fa7da747d7318caa697ace37911db55afce707cd1634f213f340bb4870f1194c48292f846adaf006ad61b4ff1cb245972c26962b42d /home/siteuser/site_root/api.php
+d79e8a6e6c3db39e07c22e7b7485050007fd265ad7e9bdda728866f65638a8aa534f8cb51121a68e9287f384e8694a968b48d840d37bcd805c117ff871e7c618 /home/siteuser/site_root/api.php

While this isn’t the most technically sound way of checking for file changes (I’m sure), at least it gives me some idea, to within 15 minutes or so, of which files have changed, and a time from which to start hunting.

Weirdness with Bash functions and Curl

I’m writing a script (for $NEW_PROJECT) which, due to my inability to figure out how to compile a certain key library on Dreamhost, runs SSH to a remote box (with public/private keys and a limitation on what that key can *actually* do) to perform some off-box processing of the data.

After it’s all done, I am using curl to call the API of the project like this:

curl --fail -F "file=@`pwd`/file" -F "other=form" -F "options=are_set" http://user:password@server/api/function

Because I’m making a few calls against the API, I wrote a function like this:

function callAPI() {
  API=$1
  if [ "$2" != "" ]
  then
    API=$API/$2
  fi
  if [ "$3" != "" ]
  then
    API=$API/$3
  fi
  if [ "${OPTION}" != "" ]
  then
    FORM="${OPTION}"
  else
    FORM=""
  fi
  if [ "$DEBUG" == "1" ]
  then
    echo "curl --fail ${FORM} http://${USER}:**********@${SITE}/api/${API}"
  fi
  eval `curl --fail ${FORM} http://${USER}:${PASS}@${SITE}/api/${API} 2>/dev/null`
}

and then call it like this:

OPTION="-F \"file=@filename\" -F \"value=one\" -F \"value=two\""
callAPI function

For all the rest of my API calls (those which ask for data, rather than supply it), everything works *fine*, but as soon as I tell it to post a form containing a file, it throws this error:

curl: (26) failed creating form post data

I did some digging around, and found that this error means curl can’t read from the file. The debug line, when run outside of the script, processed the command perfectly, so what’s going on?

To be honest, in the end, I just copied the command into the body of the code, and I’m praying that I can figure out why I can’t compile this library on Dreamhost, before I need to work out why running that curl line doesn’t work from inside a function.
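My best guess, with hindsight, is that this is a word-splitting problem: when ${OPTION} is expanded unquoted, the embedded quote characters are passed to curl literally, so curl tries to open a file whose name starts with a quote – hence the “failed creating form post data”. A bash array side-steps the re-quoting entirely (a sketch with made-up option values, not the finished script):

```shell
# Sketch: each option is its own array element, so embedded spaces and
# @filenames survive intact -- no eval or manual quoting needed.
FORM_OPTS=(-F "file=@filename" -F "note=two words")

count_args() { echo "$#"; }
count_args "${FORM_OPTS[@]}"   # 4 words: each -F stays paired with its value
# curl --fail "${FORM_OPTS[@]}" "http://${USER}:${PASS}@${SITE}/api/${API}"
```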

Like the idea of GMail’s Priority Inbox, but you’ve already got “Multiple Inboxes” and you don’t want to lose them?

That’s the position I’m in. Because I use my Android phone for e-mail a lot and I don’t want my phone to beep every 5 minutes, I set up a huge bundle of filters to shunt my e-mail into various labels: for the social groups I belong to, for my SVN commits and ticket tracking, and to prioritise emails from friends and family.

OK, so technically, GMail’s Priority Inbox should automagically do some of this for me, but, well, I wanted more!

So, I thought I’d write up some short notes on how to use Priority Inbox in a way that might actually be useful.

First, turn on Priority Inbox. It’s a simple radio button, found under “Settings” -> “Priority Inbox” -> “Show Priority Inbox”. This will probably make you reload your GMail session.

Next, go back to the “Priority Inbox” settings page, and set your “Default Inbox” to “Inbox”. I like as much information as possible in my GMail screen, so I’ve got the indicators turned on and I’m overriding the filters (I don’t know if this is useful or not, but, why not, eh?)

Save your changes. Again, I’m guessing this will reload your GMail session.

Go into “Settings” -> “Multiple Inboxes” (feel free to turn it on under Labs first, if it’s not already there).

Before Priority Inbox, I had two “new” inboxes – “All Unread” and “Muted” (so that I can mark-all-as-read those mails I’d already muted but that kept on being noisy!). These two inboxes sat underneath my main inbox, but as “Priority Inbox” is supposed to go above all that lot, it’s not going to be much use after the main Inbox. So, I’ve changed my Multiple Inboxes now as follows:

  1. (in:important OR is:starred) AND is:unread [Called “Priority Inbox”]
  2. in:inbox AND -in:important AND is:unread [Called “Inbox Unread”]
  3. -in:inbox AND -in:important AND -is:muted AND is:unread [Called “Unread Other”]
  4. is:muted is:unread [Called “Muted”]

These are all configured to show 20 messages, and to sit above the Inbox. I’ll admit there is some waste in having the inbox at the bottom of the screen and again part way up, but at least now my messages are sorted (nearly) the way Google intended them to be ;)

Oh, and one nice feature from doing it this way, if an “Important” message isn’t quite important enough to disturb you on your phone (and thus is “archived” before being filed into your e-mail folders), it’ll still show up, in that top bit there… it just won’t be disturbing your sleep until you check your mailbox when you get up.

A summary of my ongoing Open Source projects

I’m a pretty frequent contributor to various Open Source projects, either when I’m starting them myself, or getting involved in someone else’s project. I thought, as I’m probably stretching myself a bit thin with these projects right now, I’d list off what I’m doing, so I can find out whether anyone’s interested in getting involved in any of them.

Need to quickly integrate some IRC into your app? Running Linux? Try ii

I know, it looks like a typo, but the tiny program ii makes IRC all better for small applications which don’t need their own re-implementation of an IRC client.

I know it’s available under Ubuntu and Debian (apt-get install ii), but I don’t know what other platforms it’s available for.

It’s not much use as a user-focused IRC client (although it would vaguely work like that with a little scripting!), but for scripts it works like a charm.
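As a taste of how it works (the server and channel names here are invented): ii turns each connection into a directory of files, with a FIFO called in that you write commands and messages to, and a plain file called out that you read.

```shell
# Hypothetical session: connect, join a channel, say something, watch
# the conversation -- all with ordinary shell redirection.
ii -s irc.freenode.net -n mybot &
echo "/j #mychannel" > ~/irc/irc.freenode.net/in
echo "hello from a script" > "$HOME/irc/irc.freenode.net/#mychannel/in"
tail -f "$HOME/irc/irc.freenode.net/#mychannel/out"
```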


Some notes on OpenSSH

At the hackspace recently, I was asked for a brief rundown of what SSH can do, and how to do it.

Just as an aside, for one-off connections to hosts, you probably don’t need to use a public/private key pair, but for regular access, it’s probably best to have a key pair, if not per-host, then per-group of hosts (for example: home servers, work servers, friends’ machines, web servers, code repositories). We’ll see how to keep these straight later in this entry. For some setups you may even want multiple keys for one host!

If you want to create a public/private key pair, you run a very simple command. There are some tweaks you can make, but here’s the basic command

ssh-keygen

Generating public/private rsa key pair.
Enter the file in which to save the key (/home/bloggsf/.ssh/id_rsa): /home/bloggsf/.ssh/hostname
Enter passphrase (empty for no passphrase): A Very Complex Passphrase
Enter same passphrase again: A Very Complex Passphrase
Your identification has been saved in /home/bloggsf/.ssh/hostname.
Your public key has been saved in /home/bloggsf/.ssh/hostname.pub.
The key fingerprint is:
00:11:22:33:44:55:66:77:88:99:aa:bb:cc:dd:ee:ff bloggsf@ur-main-machine

See that wasn’t too hard was it? Transfer the PUBLIC portion (the .pub file) to your destination box, as securely as possible, whether that’s by SFTP, putting them on a pen drive and posting it to your remote server, or something else… but those .pub files should be appended to the end of /home/USERNAME/.ssh/authorized_keys

You achieve that by typing:

cat /path/to/file.pub >> /home/username/.ssh/authorized_keys

Note that, if you don’t spell it the American way (authoriZed), it’ll completely fail to work, and you’ll stress out!
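If your local machine ships it (most OpenSSH packages do), the ssh-copy-id helper does the transfer and the append in one step, spelling included:

```shell
ssh-copy-id -i /path/to/file.pub user@host
```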

So, now that key is on your remote host, how do we do stuff with it?

1) SSH to a console (this won’t try to use the public/private key pair, unless you left the default filename when you made your key)

ssh user@host

2) SSH to a host on an unusual port

ssh user@host -p 12345

3) SSH using a private key (see towards the end of the document about public and private keys)

ssh user@host -i /path/to/private_key

4) SSH on a new port and with a private key

ssh user@host -p 54321 -i /home/user/.ssh/private_key

5) Pulling a port (e.g. VNC service) back to your local machine

ssh user@host -L 5900:127.0.0.1:5900

The format of the portion starting -L is local-port:destination-host:destination-port.

Note, I would then connect to localhost on port 5900. If you are already running a VNC service on port 5900, you would make the first port number something not already in use – I’ll show an example of this next.

6) Pulling multiple ports from different remote hosts to your local machine.
This one I do for my aunt! It forwards the VNC service to a port I’m not using at home, and also gives me access to her router from her laptop.

ssh user@host -L 1443:192.168.1.1:443 -L 5901:localhost:5900

Here I’ve used two formats for selecting what host to forward the ports from – I’ve asked the SSH server to transfer connections I make to my port 1443 to the host 192.168.1.1 on port 443. I’ve also asked it to transfer connections I make on port 5901 to the machine it resolves the name “localhost” as (probably 127.0.0.1 – a virtual IP address signifying my local machine) and to its port 5900.

7) Reverse Port Forwarding… offering services from the client end to the server end.

ssh user@host -R 1080:localhost:80

I’ve identified here the most common reason you’ll do a reverse port forward – if you’re not permitted to run sftp (in case you transfer files out of the system), but you need to transfer a file to the target host. In that case, you’d run a web server on your local machine (port 80) and access the web server over port 1080 from your destination host.

8) Running a command instead of a shell on the remote host

ssh user@host run-my-very-complex-script --with-options

9) If you only want your user to be able to use a specific command when they SSH to your host, edit their authorized_keys file, and add at the beginning:

command="/the/only/command/that/key/can/run $SSH_ORIGINAL_COMMAND" ssh-rsa ……

This command will be run instead of any commands they try to run, with the command they tried to run as options passed to it.
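Here’s a hypothetical sketch of what such a forced command could look like – the command names are invented, and a real wrapper script would simply end with allow_command "$@":

```shell
# Hypothetical forced-command wrapper: the client's original command
# line arrives as our arguments (via $SSH_ORIGINAL_COMMAND above), and
# anything outside the whitelist is refused.
allow_command() {
  case "$1" in
    status) uptime ;;
    backup) echo "starting backup" ;;
    *)      echo "Denied: $1" >&2; return 1 ;;
  esac
}
# A real wrapper script would end with: allow_command "$@"
```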

10) Make a file to make it easier for you to connect to lots of different machines without needing to remember all this lot!

The file I’m talking about is called config and is stored in /home/bloggsf/.ssh/config

If it’s not already there, create it and then start putting lines into it. Here’s what mine looks like (hosts and files changed to protect the innocent!)

Host home external.home.server.name
    Hostname external.home.server.name
    User jon
    Port 12345
    LocalForward 1080 localhost:1080
    LocalForward 9080 router:80
    LocalForward 9443 router:443
Host github github.com
    Hostname github.com
    User git
    IdentityFile /home/jon/.ssh/github_key
Host main.projectsite.com
    User auser
    RemoteForward 1080 localhost:80
Host *.projectsite.com
    User projectowner
    IdentityFile /home/jon/.ssh/supersecretproject
Host *
    IdentityFile /home/jon/.ssh/default_ssh_key
    Compression yes

The config file parser steps through from top to bottom and, for each option, uses the first value it finds (with the exception of LocalForward and RemoteForward, which accumulate), so if I SSH to a box for which no key has already been specified, it’ll use the default_ssh_key. Likewise, it’ll always try to use compression when connecting to the remote server.
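With a newer OpenSSH (6.8 or later), you can check exactly which options win for a given host with ssh -G, which never connects anywhere – handy for debugging that first-match-wins behaviour. A demo against a throwaway config file:

```shell
# Write a miniature config and ask ssh which values apply to "github".
# The Host * fallbacks only fill in options not already set above.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
Host github github.com
    Hostname github.com
    User git
Host *
    User fallback
    Compression yes
EOF
ssh -G -F "$cfg" github | grep -E '^(user|hostname|compression) '
```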

Book Review – “For The Win” and “Makers” by Cory Doctorow

I read my first Cory Doctorow book a month-or-so before the first OggCamp, September 2009. It was “Little Brother”, a “young adult” book about rebelling against the panopticon that was being created by the War on Terror. It made such an impact on me that I gave a talk at OggCamp about the technologies discussed in the book (primarily Tor and PGP) and their role in society. It went down well enough that I gave that talk again at BarCamp Manchester… a talk on a technology I’d not heard of two months before, and had significantly changed my views on how much I wanted to share with faceless companies and organisations.

My next Doctorow book was an audiobook version of “Eastern Standard Tribe”, of which I only really focused on the first chapter (it’s hard to focus on audio when you’re as much of a magpie as I am), but a passing comment in that opening chapter made me want to build a chording computer keyboard to use with my mobile phone.

Last month, I heard that “For The Win”, a follow-up young-adult story, had been released, so I eagerly reserved it from my local library, and noticed that “Makers”, a more adult novel, had also been released, so I reserved that too.

A colleague knew that I’d read and loved “Little Brother” so asked me to tell him what I thought of “For The Win”. I read it in a couple of days. Sadly, it’s not a good book: it’s far too fragmented to tell the story in a way that lets you stop for a couple of days and come back to it, and it’s desperate to explain the subtle nuances of in-game economies and unions – neither of which particularly interested me. By the end of the book, I was left wondering what the point had been – there was no real conclusion, and while a battle had been won, it was clear the war was far from over. The characters all ran together, and many were little more than stereotypical extras, whether the stereotype was racial, gender-based or even age-based.

I left that book sad that I’d read it… but, I had another Cory book to read. After all, the recent books can’t *all* be stinkers, right?

I picked up “Makers” and started reading. It’s a thicker book, and this took me nearly four days to read… although admittedly, I was building a new server part way through days two and three.

This was more like the story I’d hoped “For The Win” would be. It’s a three part story; part one is about the friendship between the two lead characters, the commercialisation and massive growth of their hobby-cum-career. Part two is where that growth suddenly died, taking all the jobs with it, and their homage to “New Work” – the name given to the outcome of part one. Part three is where a mega-corp notices they’re losing money to the homage (called “The Ride”) and they try to destroy it.

It describes my experiences and hopes for the hacker culture perfectly, wanting to build something for the sake of it, discussing the concepts behind making something great from something passé and the ideas behind making an open API to let anyone play with your ideas. It also suggests how big business doesn’t “get” the hacker culture. As with much of Cory’s work, there’s lots of scope to implement his ideas in the real world, and some of the projects he mentions, I’d love to set up at my local hackspace.

The only downside I’ve found with “Makers” is that I think there’s a lot of sex in it, both implied and referred to… I guess I don’t see the relevance of a sex scene unless it’s key to a character’s growth, and in “Makers” you could have removed 3/4 of the sex scenes and it would have been mostly the same book. I realise it explains some of the decisions in the book and gives some colour to the characters, but one of the side effects is that I can’t give this book to my 13-year-old cousin – hell, I can’t even give him “Little Brother” because of the single, solitary and distinctly unnecessary sex scene 2/3rds of the way through the book.

In summary, I’d skip “For The Win”, and read “Makers”. 2/5 and 4/5 respectively.

A warning about the evils of Facebook

Facebook is one of the current breed of “Social networking” websites – which means that they let you exchange information, pictures and videos with each other… sounds good so far, right?

Here’s where the problem is. Facebook is a company which is trying to make money. Your profile (the collection of all your information) on their website belongs to them. They can market that information to anyone and do whatever they want with it. If you put any pictures on there, then they own those photos too. On top of that, every “application” (or service that isn’t written by Facebook) knows everything about you and the people you are friends with… which means that if you’ve decided not to install an application that collects e-mail addresses, but your friend does – then that application knows your e-mail address. Wonderful!

Facebook have a real problem with their “privacy policy” and the pages which let you share details with the rest of the world – every few months they write a new version of both to give themselves even more of a chance to sell off your information, and to use your photos and videos in new and interesting ways… so much so that about a year ago, their CEO (Chief Executive Officer – the person who makes the day-to-day decisions about where the company goes next) had all his details shared publicly, because he forgot that the new privacy settings page came into force that day and he hadn’t set his details to the most private settings. This happens all the time – to the extent that another website, http://youropenbook.org, was created to show what people are making publicly available!

A few months back, Facebook changed their privacy policy again to let you log into other websites using your Facebook details, which sounds like a great idea, but it means that the website then (again) knows your e-mail address, all your friends, your birthday and (if you enter it) your phone number… not good!

Realistically, it is possible to use Facebook in a vaguely safe way if you take a lot of precautions about what you are sharing and doing on their website, but I really wouldn’t recommend using it – in fact, I’d recommend forwarding a link to this page to whoever suggested you use it, warning them not to use it either! Sadly, there’s nothing else available right now that does the same thing in a way that still maintains your privacy. I’m watching a few projects, and once something safe and easy to use comes out, I’ll let you know!

(Just as a disclaimer, I do use Facebook, but I don’t like it and I want to move away from it, PRONTO!)