Transfer my files using SFTP and SCP only?

A colleague today asked for some guidance around setting up an SFTP and SCP only account on a RedHat based Linux machine.

I sent him a collection of links, including one to the CopSSH project, and he implemented the code from that link, but then struggled when it didn’t work.

Aside from the fact that the shell wasn’t copied into /etc/shells (which wasn’t disastrous, but did mean we couldn’t reuse it later), it was still returning an error every time it was invoked.

Doing some digging and running some debugging, I noticed that pscp (the PuTTY SCP tool) uses the SFTP subsystem rather than the SCP command to upload files, so the script needs to check for the SFTP server being requested as well as for the SCP command, and the path to the SCP command needed correcting too.

Here follows a script, complete with comments. Personally, I’d save this in /bin/sftponly, created and owned by root, and set to permissions 755 (rwxr-xr-x). Then set the shell to this for each user who needs SFTP or SCP access only.

#!/bin/bash
# Based on code from http://www.itefix.no/i2/node/12366
# Amended by Jon Spriggs (jon@sprig.gs)
# Last update at 2011-09-16

# Push the whole received command into a variable
tests=`echo $*`

# Set up a state handler as false
isvalid=0

# Test for the SFTP handler.
# The 0:35 values are the start character and length of the handler string.
if [ "${tests:0:35}" == "-c /usr/libexec/openssh/sftp-server" ]; then
  # Set the state handler to true
  isvalid=1
  # Configure the handling service
  use=/usr/libexec/openssh/sftp-server
fi

# Test for the SCP handler.
if [ "${tests:0:6}" == "-c scp" ]; then
  # Set the state handler to true
  isvalid=1
  # Configure the handling service
  use=/usr/bin/scp
fi

# If the state handler is set to false (0), exit with an error message.
if [ "$isvalid" == "0" ]; then
  echo "SCP only!"
  exit 1
fi

# Run the handler
exec $use $*
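
Once the script above is saved, installing it only takes a couple of commands. A rough sketch of the steps described above (the username here is a placeholder; adjust paths to suit):

# create the script as root (paste the code above into it)
sudo nano /bin/sftponly
sudo chown root:root /bin/sftponly
sudo chmod 755 /bin/sftponly
# register it as a valid shell, so it can be reused later
echo "/bin/sftponly" | sudo tee -a /etc/shells
# set it as the shell for each user who should only get SFTP/SCP access
sudo usermod -s /bin/sftponly exampleuser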

Quick Fix for Apache CVE-2011-3192 – Ubuntu/Debian

Please note – Apache have released a fix to this issue, and as such the below guidance has now been superseded by their fix.

I have been aware of the Apache web server issue for the last few days, where a request containing a large number of overlapping byte ranges can exhaust the server’s resources and crash it. As a patch hasn’t yet been released by Apache, people are coding their own solutions, and one such solution was found at edwiget.name.

That fix was for CentOS based Linux distributions, so this re-write covers how to do the same fix under Debian based distributions.

VNC in Ubuntu 11.04 with Unity

I recently bought myself a new laptop. Sometimes, though, I want to check something on it on the rare occasion when I’ve not taken it with me. In comes VNC. Under Ubuntu 11.04, turning on VNC support is pretty straightforward.

To turn on VNC, go to the power icon in the top right corner (I think they call it the “Session Menu”, but it looks like a power button to me) and select “System Settings”. Under the “Internet and Network” heading is an option called “Remote Desktop”; click on that. Tick the top two boxes, “Allow other users to view your desktop” and “Allow other users to control your desktop”. Tick “Require the user to enter this password” (and enter a password) and “Configure network automatically to accept connections”. Untick “You must confirm each access to this machine”, select “Only display an icon when there is someone connected”, and close it.

Now, try connecting to your device, and see what happens. I had some issues with Compiz elements not rendering correctly, and found a few hints to fix it. The first says to turn on the “disable_xdamage” option. It says to use gconf-editor, but I’m SSHing in, so I need to use gconftool-2 as follows:

gconftool-2 --set "/desktop/gnome/remote_access/disable_xdamage" --type boolean "true"

Personally, I only want to ever connect over OpenVPN to this, so I added the following:

gconftool-2 --set "/desktop/gnome/remote_access/network_interface" --type string "tun0"

You may wish to only ever access it over SSH, in which case replace “tun0” with “lo”.
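
That version of the command is identical apart from the interface name:

gconftool-2 --set "/desktop/gnome/remote_access/network_interface" --type string "lo"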

Now, I next made a big mistake. I followed some duff guidance, and ended up killing my vino server (I’m still not sure if I was supposed to do this or not), but to get it back, I followed this instruction to restart it. I had to tweak it a little:

sudo x11vnc -rfbport 5901 -auth guess

Once you’ve started this, tunnel an extra port (5901) to your machine, connect a VNC client to the tunnelled port, and then go back through the options above. Exit that VNC session, and then hit Control+C in the SSH session to close the x11vnc service.
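
The tunnelling step is just SSH local port forwarding; a rough sketch, with the hostname and username as placeholders:

ssh -L 5901:localhost:5901 user@mylaptop

You’d then point your VNC client at localhost:5901.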

Experimenting with Tiny Core Linux on QEMU

In response to a post on the Ubuntu UK Loco mailing list today, I thought the perfect way to produce a cross-platform, stable web server… would be to create a QEMU bootable image of Tiny Core.

So, the first thing I did was to download a Tiny Core image. This I obtained from the Tiny Core Download Page. I then created a 512MB disk image to store my packages on.

qemu-img create tinycore-tce.img 512M

After a bit of experimenting, I ended up with this command to boot TinyCore. At the moment, it’s relatively cross-platform, but will need some tweaking to get to the point where I can do anything with it…

qemu -hda tinycore-tce.img -m 512 -cdrom tinycore-current.iso -boot d -net nic -net user,hostfwd=tcp:127.0.0.1:8008-:80 -vnc 127.0.0.1:0 -daemonize

So, let’s explain some of those options.

-hda tinycore-tce.img

This means: use the image we created before, and attach it as /dev/hda on the virtualised machine.

-cdrom tinycore-current.iso -boot d

Create a virtual CD using the ISO file we downloaded. Boot from the CD rather than any other media.

-m 512

Allocate the virtual machine 512MB of RAM.

-net nic -net user,hostfwd=tcp:127.0.0.1:8008-:80

Create a virtual network interface in “user mode” networking, and forward connections made to 127.0.0.1:8008 on the host through to port 80 on the virtual machine’s dynamically allocated IP address. Binding to 127.0.0.1 means it’s only accessible from the host machine, not from any other machine on the network.

-vnc 127.0.0.1:0 -daemonize

This makes the service “headless” – basically meaning it won’t show itself, or need a terminal window open to keep it running. If you want to interact with the system, you need to VNC to localhost. If you’ve already got a VNC service running on the machine (for example, if you’re using Vino under Ubuntu), increment the :0 to something else – I used :2, but you could use anything.
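
So, with the options above, connecting from the host machine is something like this (assuming a client that provides the usual vncviewer command):

vncviewer 127.0.0.1:0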

At the moment, because I’ve not had much opportunity to tweak TinyCore’s boot process, it won’t start running automatically (you have to tell it what to start when it boots), nor will it start any of the services I want from it, so I’ve had to use VNC to connect to it. I’ll be trying out more things with this over the next few days, and I’ll update this as I go.

Also, I’ve not tried using the Windows qemu packages to make sure the same options all work on that system, and I’ll probably be looking into using the smb switch for the -net user option, so that as much of the data as possible is accessible without needing to drop into the qemu session just to upload a few photos. I guess we’ll see :)
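
For what it’s worth, the smb switch just takes a host directory to export to the guest (it relies on Samba being installed on the host). A sketch of what that might end up looking like, with the directory as a placeholder and entirely untested so far:

qemu -hda tinycore-tce.img -m 512 -cdrom tinycore-current.iso -boot d -net nic -net user,smb=/home/jon/photos,hostfwd=tcp:127.0.0.1:8008-:80 -vnc 127.0.0.1:0 -daemonize

The guest should then be able to reach that directory via QEMU’s built-in SMB server address (10.0.2.4).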

A tip for users who SSH to a system running ecryptfs and byobu

I’ve been an Ubuntu User for a while (on and off), and a few versions back, Ubuntu added two great installed-by-default options (both of which are turned off by default), called Byobu (a Pimp-My-GnuScreen app) and ECryptFS (an “Encrypt my home directory” extension).

Until just recently, if you enabled both and then SSHed to the box using public/private keys, it would use the fact you’d connected and authenticated with keys to unlock the ECryptFS module and then start Byobu. A few months back, I noticed that if I rebooted, it wouldn’t automatically unlock the ECryptFS module, so I’d be stuck with neither having started. A few login attempts later and it was all sorted, but just recently this has got worse, and now every SSH session leaves me at a box with an unmounted ECryptFS module and no Byobu.

So, how does one fix such a pain? With a .profile file of course :)

SSH in, and before you unlock your ECryptFS module run this:

sudo nano .profile

You need to run the above using sudo, as the placeholder home directory you land in before ECryptFS is mounted isn’t writable by your user.

In that editor, paste this text.

#! /bin/bash
# Mount the encrypted home directory (this will prompt for your passphrase)
`which ecryptfs-mount-private`
# Move into the freshly mounted home directory
cd
# Start Byobu for this login session
`which byobu-launcher`

Then use Ctrl+X to exit the editor and save the file.

The next time you log in, it’ll ask you for your passphrase to unlock the ECryptFS module. Once that’s in, it’ll start Byobu. Job’s a good’n.

Watching for file changes on a shared linux web server

$NEWPROJECT has a script which runs daily to produce a file which will be available for download, but aside from that one expected daily task, there shouldn’t be any unexpected changes to the content on the website.

As I’m hosting this on a shared webhost, I can’t install Tripwire or anything like that, and to be honest, for what I’m using it for, I probably don’t need it. So, instead, I wrote my own really simple file change monitor which runs as a CronJob.

Here’s the code:

#! /bin/bash
# This file is called scan.sh
function sha512sum_files() {
  find $HOME/$DIR/* -type f -exec sha512sum '{}' \; >> $SCAN_ROOT/current_status
}
SCAN_ROOT=$HOME/scan
mv $SCAN_ROOT/current_status $SCAN_ROOT/old_status
for DIR in site_root media/[A-Za-z]*
do
  sha512sum_files
done
diff -U 0 $SCAN_ROOT/old_status $SCAN_ROOT/current_status

And here’s my crontab:


MAILTO="my.email@add.ress"
# Minute Hour Day of Month Month Day of Week Command
# (0-59) (0-23) (1-31) (1-12 or Jan-Dec) (0-6 or Sun-Sat)
0,15,30,45 * * * * /home/siteuser/scan/scan.sh

And lastly, a sample of the output

--- /home/siteuser/scan/old_status 2010-10-25 14:30:03.000000000 -0700
+++ /home/siteuser/scan/current_status 2010-10-25 14:45:06.000000000 -0700
@@ -4 +4 @@
-baeb2692403619398b44a510e8ca0d49db717d1ff7e08bf1e210c260e04630606e9be2a3aa80f7db3d451e754e189d4578ec7b87db65e6729697c735713ee5ed /home/siteuser/site_root/LIBRARIES/library.php
+c4d739b3e0a778009e0d53315085d75cf8380ac431667c31b23e4b24d4db273dfc98ffad6842a1e5f59d6ea84c33ecc73bed1437e6105475fefd3f3a966de118 /home/siteuser/site_root/LIBRARIES/library.php
@@ -71 +71 @@
-88ddd746d70073183c291fa7da747d7318caa697ace37911db55afce707cd1634f213f340bb4870f1194c48292f846adaf006ad61b4ff1cb245972c26962b42d /home/siteuser/site_root/api.php
+d79e8a6e6c3db39e07c22e7b7485050007fd265ad7e9bdda728866f65638a8aa534f8cb51121a68e9287f384e8694a968b48d840d37bcd805c117ff871e7c618 /home/siteuser/site_root/api.php

While this isn’t the most technically sound way (I’m sure) of checking for file changes, at least it gives me some idea (to within 15 minutes or so) of which files have been changed, and gives me a time to start hunting from.

A summary of my ongoing Open Source projects

I’m a pretty frequent contributor to various Open Source projects, either when I’m starting them myself, or getting involved in someone else’s project. I thought, as I’m probably stretching myself a bit thin with these projects right now, I’d list off what I’m doing, so I can find out whether anyone’s interested in getting involved in any of them.

Need to quickly integrate some IRC into your app? Running Linux? Try ii

I know, it looks like a typo, but the tool ii makes IRC all better for small applications which don’t need their own re-implementation of an IRC client.

I know it’s available under Ubuntu and Debian (apt-get install ii), but I don’t know what other platforms it’s available for.

It’s not much use as a user-focused IRC client (although it would vaguely work like that with a little scripting!), but for scripts it works like a charm.
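
The trick is that ii exposes the connection as a little directory tree: each server and channel gets a folder containing an “in” FIFO you write to and an “out” file you read from, so a script only needs to echo into a file to speak. A rough sketch, with the server, nick and channel made up, and paths assuming ii’s default ~/irc prefix:

# connect to a server in the background; ii creates ~/irc/irc.example.net/
ii -i ~/irc -s irc.example.net -n mybot &
sleep 10
# join a channel by writing a command to the server's "in" FIFO
echo "/j #mychannel" > ~/irc/irc.example.net/in
# say something in the channel
echo "hello from a shell script" > ~/irc/irc.example.net/#mychannel/in
# watch the channel log
tail -f ~/irc/irc.example.net/#mychannel/out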


Some notes on OpenSSH

At the hackspace recently, I was asked for a brief rundown of what SSH can do, and how to do it.

Just as an aside: for one-off connections to hosts you probably don’t need to use a public/private key pair, but for regular access it’s probably best to have a key pair, if not per host, then per group of hosts (for example: home servers, work servers, friends’ machines, web servers, code repositories). We’ll see how to keep these straight later in this entry. For some purposes, you may even want multiple keys for one host!

If you want to create a public/private key pair, you run a very simple command. There are some tweaks you can make, but here’s the basic command

ssh-keygen

Generating public/private key pair
Enter the file in which to save the key (/home/bloggsf/.ssh/id_rsa): /home/bloggsf/.ssh/hostname
Enter passphrase (empty for no passphrase): A Very Complex Passphrase
Enter same passphrase again: A Very Complex Passphrase
Your identification has been saved in /home/bloggsf/.ssh/hostname.
Your public key has been saved in /home/bloggsf/.ssh/hostname.pub.
The key fingerprint is:
00:11:22:33:44:55:66:77:88:99:aa:bb:cc:dd:ee:ff bloggsf@ur-main-machine

See, that wasn’t too hard, was it? Transfer the PUBLIC portion (the .pub file) to your destination box as securely as possible, whether that’s by SFTP, putting it on a pen drive and posting it to your remote server, or something else… the contents of that .pub file should be appended to the end of /home/USERNAME/.ssh/authorized_keys.

You achieve that by typing:

cat /path/to/file.pub >> /home/username/.ssh/authorized_keys

Note that, if you don’t spell it the American way (authoriZed), it’ll completely fail to work, and you’ll stress out!
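
One other thing that can catch you out: sshd is fussy about permissions on the remote side, so if the key still isn’t being picked up, it’s worth making sure the directory and file aren’t too open; typically something like:

chmod 700 /home/username/.ssh
chmod 600 /home/username/.ssh/authorized_keys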

So, now that key is on your remote host, how do we do stuff with it?

1) SSH to a console (this won’t try to use the public/private key pair, unless you left the default filename when you made your key)

ssh user@host

2) SSH to a host on an unusual port

ssh user@host -p 12345

3) SSH using a private key (see towards the end of the document about public and private keys)

ssh user@host -i /path/to/private_key

4) SSH on a new port and with a private key

ssh user@host -p 54321 -i /home/user/.ssh/private_key

5) Pulling a port (e.g. VNC service) back to your local machine

ssh user@host -L 5900:127.0.0.1:5900

The format of the portion starting -L is local-port:destination-host:destination-port.

Note, I would then connect to localhost on port 5900. If you are already running a VNC service on port 5900, you would make the first port number something not already in use – I’ll show an example of this next.

6) Pulling multiple ports from different remote hosts to your local machine.
This one I do for my aunt! It forwards the VNC service to a port I’m not using at home, and also gives me access to her router from her laptop.

ssh user@host -L 1443:192.168.1.1:443 -L 5901:localhost:5900

Here I’ve used two formats for selecting which host to forward the ports from. I’ve asked the SSH server to transfer connections I make to my port 1443 to the host 192.168.1.1 on port 443. I’ve also asked it to transfer connections I make on port 5901 to whatever the remote machine resolves the name “localhost” as (probably 127.0.0.1, the loopback address) and to its port 5900.

7) Reverse Port Forwarding… offering services from the client end to the server end.

ssh user@host -R 1080:localhost:80

I’ve described here the most common reason you’d do a reverse port forward: you’re not permitted to run SFTP on the target host (in case you transfer files out of the system), but you need to transfer a file onto it. In that case, you’d run a web server on your local machine (port 80) and access it over port 1080 from your destination host.
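
As a rough worked example (the file name is a placeholder, and any small web server would do on the local end):

# on your local machine, in the directory containing the file (port 80 needs root)
sudo python -m SimpleHTTPServer 80
# in another terminal, connect with the reverse forward in place
ssh user@host -R 1080:localhost:80
# then, inside that SSH session on the remote host
wget http://localhost:1080/file-to-transfer.tar.gz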

8) Running a command instead of a shell on the remote host

ssh user@host run-my-very-complex-script --with-options

9) If you only want your user to be able to use a specific command when they SSH to your host, edit their authorized_keys file, and add at the beginning:

command="/the/only/command/that/key/can/run $SSH_ORIGINAL_COMMAND" ssh-rsa ……

This command will be run instead of any commands they try to run, with the command they tried to run as options passed to it.
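
As a hypothetical example (the script name and key are made up), with a line like this in authorized_keys:

command="/home/backup/bin/run-backup.sh $SSH_ORIGINAL_COMMAND" ssh-rsa AAAAB3Nza... backup@client

a client running “ssh backup@host full” would actually execute “/home/backup/bin/run-backup.sh full” on the server, no matter what command they asked for.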

10) Make a file to make it easier for you to connect to lots of different machines without needing to remember all this lot!

The file I’m talking about is called config and is stored in /home/bloggsf/.ssh/config

If it’s not already there, create it and then start putting lines into it. Here’s what mine looks like (hosts and files changed to protect the innocent!)

Host home external.home.server.name
  Hostname external.home.server.name
  User jon
  Port 12345
  LocalForward 1080 localhost:1080
  LocalForward 9080 router:80
  LocalForward 9443 router:443

Host github github.com
  Hostname github.com
  User git
  IdentityFile /home/jon/.ssh/github_key

Host main.projectsite.com
  User auser
  RemoteForward 1080 localhost:80

Host *.projectsite.com
  User projectowner
  IdentityFile /home/jon/.ssh/supersecretproject

Host *
  IdentityFile /home/jon/.ssh/default_ssh_key
  Compression yes

The config file is parsed from top to bottom, and once an option has been set for a matching host, any later values for that option are ignored (with the exception of LocalForward and RemoteForward, which accumulate), so if I SSH to a box and no key has already been specified for it, it’ll use default_ssh_key. Likewise, it’ll always try to use compression when connecting to a remote server.
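
With that file in place, the earlier examples collapse down to the host aliases. For instance, connecting to the home server with its port, username and all three port forwards set up is just:

ssh home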

Using the recursive_import.php script for importing photos to the #Horde module Ansel with subdirectories

I have a problem with the excellent Horde module “Ansel” – their photo display and manipulation application – which I’m documenting-until-I-fix-it.

If you have a lot of photos and you want to import the lot in one go, there’s a script called recursive_import.php – you’ll find this under /path/to/your/horde/install/ansel/scripts/recursive_import.php and it takes the following arguments: -d /path/to/directory -u USERNAME -p PASSWORD

I’d been using it thinking it would handle directory navigation a bit better than it did, by running it as follows:

php recursive_import.php -d import_dir -u fred -p bloggs

In fact, I needed to do it like this:

php recursive_import.php -d `pwd`/import_dir -u fred -p bloggs

This is because the script navigates up and down the directory structure as it works out the contents of each directory, instead of handling the referencing properly. I plan to look at this properly tomorrow when I’ve got a day off, but if I don’t, or if the patch doesn’t get accepted, at least you know how to fix it now! :)
