2014-04-04

Automated OpenSSH Configuration Tests



When developing or fine-tuning OpenSSH configurations, testing can be quite tiresome: change the configuration, restart the server, run manual tests, repeat. Not to forget the many times when restarting the SSH server fails and you lock yourself out of your test server.

When writing a Linux Magazin article about SSH key management I wanted to show how to use OpenSSH PKI in a repeatable way. The result is an automated test suite for OpenSSH configuration:
$ ./run_demo.sh
... lots of info output running through ...
SSH PKI Demo Test Results:
Succeeded create-ca-key
Succeeded create-host-key
Succeeded sign-host-key
Succeeded create-user-root-key
Succeeded sign-user-root-key
Succeeded create-user-unpriv-key
Succeeded sign-user-unpriv-key
Succeeded test-trusting-known-hosts-via-cert-and-login-with-password
Succeeded test-that-hostname-in-cert-must-match-target-host
Succeeded test-login-with-root-key-trusted-by-cert
Succeeded test-that-username-in-cert-must-match-target-user
Succeeded test-revoked-ca-key-prevents-login
Succeeded test-revoked-user-key-prevents-login
Succeeded test-revoked-host-key-prevents-connection
Succeeded in running all tests, congratulations!
It does not require root permissions and creates a fake environment in which it can start an SSH server and connect a client to it. The test also creates the required SSH CA certificate, host and user keys to serve as a practical example of how to use the OpenSSH PKI.
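The trick is roughly this (a minimal sketch with assumed file names, not the actual test suite): generate a dedicated host key, start sshd unprivileged on a high port with its own configuration, and point the client at it explicitly.

# generate a dedicated host key and start sshd as an unprivileged user
ssh-keygen -q -t rsa -N '' -f "$PWD/testenv/host_key"
/usr/sbin/sshd -D -p 2222 -f "$PWD/testenv/sshd_config" -h "$PWD/testenv/host_key" &
# point the client at the test server with its own configuration
ssh -p 2222 -F "$PWD/testenv/ssh_config" localhost true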

Based on this script it is very easy to write your own tests that verify other aspects of OpenSSH configuration as part of your Test Driven Infrastructure.

The code is available on my GitHub repository: https://github.com/schlomo/openssh-config-test

2014-03-24

Opening a Window to a Wider World

When I bought a new Acer C720 Chromebook last week I got confirmation that times are changing: it has only an HDMI connector, no more VGA. Luckily, at ImmobilienScout24 we are also adapting: last month our big projector got an upgrade to Full HD with 16:9 wide screen, and you can now connect the computer through HDMI, too.

Since I myself had become so used to creating presentations in 4:3, I took the opportunity to remind myself and everybody else why it really pays to pay attention to this little detail.

Video is in German with English subtitles.

2014-02-25

SSH with Personal Environment

A colleague, Eric Grehm, raised an interesting challenge:

How to maintain his personal work environment (VIM settings, .bashrc ...) on all servers?

The first thought was to put this somehow into our software distribution, but we quickly realized that this would trigger needless updates on hundreds of servers. The benefit would have been that the personal work environment is already on every server upon first access.

The next idea was to switch from a pre-installed personal environment to an on-demand solution where the personal environment is transferred each time a remote connection (over SSH) is established.

A simple implementation would just do an scp before the ssh, but that entails two connections, which takes more time and might also bother the user with a double password prompt.

Side-channel data transfer

An alternative is to piggyback the file transfer onto the regular SSH connection so that the personal environment is transferred in a side channel:
  1. On the client create a directory with the files of the personal environment that need to be distributed:
    $ tree -a personal_environment
    personal_environment
    ├── .bashrc
    └── .vimrc
  2. On the client create a TAR.GZ archive of the files that need to be transferred and store this archive base64-encoded in an environment variable (which can hold up to about 127 kB):
    $ export USERHOME_DATA=$(tar -C ~/personal_environment -cz .| base64)

    I put this into a function in my .bash_profile to load on each login.
  3. Configure the SSH client to transmit this environment variable (SendEnv):
    $ cat .ssh/config
    Host dev*
            SendEnv USERHOME_DATA
  4. Configure the SSH server to accept this environment variable (AcceptEnv):
    $ sudo grep USERHOME /etc/ssh/sshd_config
    AcceptEnv USERHOME_DATA
  5. Create an sshrc script on the server that unpacks the archive from the environment variable upon login (a sketch follows below):

    (Only the last part is relevant to this topic, but if an sshrc script is provided it must also take care of xauth).
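    A minimal sketch of such a script (my assumption, not the actual one; the xauth handling follows the example in the sshd(8) man page):

    #!/bin/sh
    # /etc/ssh/sshrc -- run by sshd before the user's login shell.
    # sshd skips its own xauth call if an sshrc exists, so handle X11 here:
    if read proto cookie && [ -n "$DISPLAY" ]; then
        echo add "$DISPLAY" "$proto" "$cookie" | xauth -q -
    fi
    # unpack the personal environment if the client sent one:
    if [ -n "$USERHOME_DATA" ]; then
        echo "$USERHOME_DATA" | base64 -d | tar -C "$HOME" -xz
    fi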

Benefits

This approach has several benefits:
  • True on-demand solution: the personal environment is updated on each connection.
  • No extra connection required to transfer the data.
  • sshrc is executed before the login shell, so that even .bashrc can be transferred.
  • Scales well with an arbitrary number of users.
  • Scales well with frequent changes to the personal work environment.
The disadvantages are that the SSH configuration must be extended and that the amount of transferable data is limited to about 127 kB compressed & encoded, which I actually see as a benefit because it prevents abuse.

For me the benefits far outweigh the drawbacks, and I don't need to transfer that many files anyway. This solution fulfills all my needs without putting extra load on the servers or on our deployment infrastructure.

2014-02-13

Rough Measurement for HTTP Client Download Speed

Ever wonder if your website is slow because of the server or because of the clients?
Do you want to know how fast your clients' connections to the Internet are?
Don't want to use external tracking services, inject JavaScript, etc.?

Why not simply measure how long it takes to deliver the content from your web server to your users? Apache and nginx both support logging the total processing time of a request with suitably high precision. That measures the time from the first byte received from the client to the last byte sent back to the client.

To try out this idea I added %D to the log format for the access.log of my Apache server and wrote a little Python script to calculate the transfer speeds. With the help of the apachelog Python module, parsing the Apache access.log is really simple: the module takes a log format definition as configuration and automatically breaks down a log line into its corresponding values.
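For reference, this is the kind of configuration I mean (a sketch based on Apache's stock combined format; %D logs the request duration in microseconds, and the same format definition must be given to the parsing script):

LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %D" combined_time
CustomLog /var/log/apache2/access.log combined_time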

The script can be found together with a little explanation on my GitHub page: github.com/schlomo/apacheclientspeed.

It is rather rough; I am now collecting data to see how useful this information actually is. One thing I can already say: small files give very erratic results, bigger files yield more trustworthy numbers.

Videos are a good example; the output of this script looks like this:

$ grep mp4 /var/log/apache2/access.log | tail -n 20 | python -m apacheclientspeed
217.111.70.208 GET /c/g1/GANGNAM_320_Rf_30.mp4 HTTP/1.1 1409 KiloByte/s
217.111.70.208 GET /c/g1/GANGNAM_320_Rf_30.mp4 HTTP/1.1 119 KiloByte/s
217.111.70.208 GET /c/g1/GANGNAM_320_Rf_30.mp4 HTTP/1.1 1936 KiloByte/s
217.111.70.208 GET /c/g1/GANGNAM_320_Rf_30.mp4 HTTP/1.1 90 KiloByte/s
217.111.70.208 GET /c/g1/GANGNAM_320_Rf_30.mp4 HTTP/1.1 159 KiloByte/s
217.111.70.208 GET /c/g1/GANGNAM_320_Rf_30.mp4 HTTP/1.1 83 KiloByte/s
89.204.139.1 GET /c/g1/GANGNAM_320_Rf_30.mp4 HTTP/1.1 2067 KiloByte/s
89.204.139.1 GET /c/g1/GANGNAM_320_Rf_30.mp4 HTTP/1.1 43226 KiloByte/s
89.204.139.1 GET /c/g1/GANGNAM_320_Rf_30.mp4 HTTP/1.1 4 KiloByte/s
89.204.139.1 GET /c/g1/GANGNAM_320_Rf_30.mp4 HTTP/1.1 33491 KiloByte/s
89.204.139.1 GET /c/g1/GANGNAM_320_Rf_30.mp4 HTTP/1.1 28 KiloByte/s
89.204.139.1 GET /c/g1/GANGNAM_320_Rf_30.mp4 HTTP/1.1 0 KiloByte/s
89.204.139.1 GET /c/g1/GANGNAM_320_Rf_30.mp4 HTTP/1.1 0 KiloByte/s
89.204.139.1 GET /c/g2/GANGNAM_320_Rf_28.mp4 HTTP/1.1 0 KiloByte/s
89.204.139.1 GET /c/g1/GANGNAM_320_Rf_30.mp4 HTTP/1.1 0 KiloByte/s
89.204.139.1 GET /c/g1/GANGNAM_320_Rf_30.mp4 HTTP/1.1 0 KiloByte/s
89.204.139.1 GET /c/g1/GANGNAM_320_Rf_30.mp4 HTTP/1.1 0 KiloByte/s
89.204.139.1 GET /c/g1/GANGNAM_320_Rf_30.mp4 HTTP/1.1 0 KiloByte/s
217.111.70.208 GET /c/g1/GANGNAM_320_Rf_30.mp4 HTTP/1.1 507 KiloByte/s
80.226.1.16 GET /c/g1/GANGNAM_320_Rf_30.mp4 HTTP/1.1 46 KiloByte/s

The many requests with 0 KB/s come from iOS devices, which issue a lot of small Range requests when streaming a movie over HTTP.

2014-01-31

apt-install

Do you ever get tired of typing
sudo apt-get update && sudo apt-get install <package>
just to install one package that you added to your DEB repo? I do, and I decided to do something about it. What I really miss is the intelligence of yum, which simply updates its repo caches if they are too old.
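Conceptually that boils down to this little shell sketch (the one-hour threshold and using the binary cache file as a staleness marker are my assumptions):

#!/bin/bash
# update the APT cache only if it is older than one hour, then install
if [ -z "$(find /var/cache/apt/pkgcache.bin -mmin -60 2>/dev/null)" ]; then
    sudo apt-get update
fi
sudo apt-get install "$@"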

apt-install (github.com/schlomo/apt-install) is the next best thing. It is a simple Python script that updates the cache and installs the packages given as command line arguments. And it shows a nice GUI with a progress bar:


Turns out that all the parts are already there as part of aptdaemon. The only missing piece was putting them together into this little script:
Please note that I don't really understand how to write async code. I'll be happy about any feedback with better implementations.

2014-01-24

Simple Video Tricks

While working on the new recorder (see also the last posting) I suddenly faced several challenges with the resulting video files:
  • Merge many short chunks (50 MB each, about 30-60 seconds) into one video
  • Extract the actual talk from a longer recording, e.g. the recorder ran for one hour but the talk was only half an hour
  • Convert the video into another container format because Adobe Premiere does not like AVI files
  • Create video thumbnails
  • Convert videos to be compatible with HTML5 <video> playback
Turns out that avconv (or ffmpeg) is the Swiss Army knife for all of these tasks! I am not qualified to say which one is better; for my purposes the avconv that ships with Ubuntu is good enough. The examples given here work with both, and when I write avconv I mean both tools.

Since I don't want to degrade the video quality through repeated decode/encode steps, I always use -codec copy after the input file to simply copy over the audio and video data without re-encoding it.

Concatenate Videos

This is probably the easiest part: avconv has an input protocol called concat which simply reads several files sequentially:

avconv -i concat:"file1|file2" -codec copy out.mp4

Note: This method simply concatenates the input files as-is, which is mostly useful for transport streams. It usually fails for files that have container metadata. For those you can use the concat demuxer, which is only available in ffmpeg. Full details can be found in the ffmpeg wiki.

This script automates the process. It takes two files as arguments and concatenates all the files in the same directory starting from the first one and ending with the last one. Further arguments are passed to avconv.
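With the concat demuxer the same task looks roughly like this (a sketch with made-up file names; newer ffmpeg versions may additionally require the -safe option for paths):

# build a list file for the concat demuxer and merge without re-encoding
printf "file '%s'\n" chunk_01.avi chunk_02.avi chunk_03.avi > list.txt
ffmpeg -f concat -i list.txt -codec copy merged.avi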

Convert Video Container

One of the most common tasks is to convert the video container, e.g. from AVI to MOV or MP4:

avconv -i input.avi -codec copy output.mov

Create Video Thumbnails

Another simple task. Extract an image from the video after 4 seconds:

avconv -i input.avi -ss 4 -vframes 1 poster.jpg

Or for all the videos in a directory:

for REPLY in *.mp4 ; do avconv -i "$REPLY" -vframes 1 -ss 4 -y "$REPLY.jpg" ; done


Create HTML5 Video

Converting a video for HTML5 playback requires adhering to some standards, e.g. using a compatible profile and level. The following worked very well for me:

avconv -i INPUT-FILE -pix_fmt yuv420p -c:v libx264 -crf 30 -preset slower -profile:v Main -level 31 -s 568x320 -c:a aac -ar 22050 -ac 1 -b:a 64k -strict experimental out.mp4 ; qt-faststart out.mp4 OUTPUT-FILE.mp4 ; rm out.mp4

For smartphone consumption this is more than good enough; the result is about 3 MB/min. You can increase the video size (e.g. 854x480) or the quality (e.g. -crf 28), but this will also significantly increase the file size. qt-faststart moves some MP4 metadata to the beginning of the file so that the file can be streamed.

Create video from image

A simple way to create a 3-second video at 25 frames/s from a PNG image:

avconv -loop 1 -i image.png -r 25 -t 3 video.mp4


2014-01-15

Hostname-based Access Control for Dynamic IPs

Sometimes less is more. The simplest way to protect my private web space on my web server is this:

<Location />
    Order Deny,Allow
    Deny from All
    Allow from home.schapiro.org
</Location>

But what to do if home.schapiro.org changes its IP every 24 hours and the reverse DNS entry (PTR) is something like p5DAE56B9.dip0.t-ipconnect.de? When my computer at home connects to the web server, the source IP address is used for a reverse DNS lookup. This lookup returns the above-mentioned provider-assigned name and not home.schapiro.org, so the web server will never be able to identify this IP as belonging to my home router.

The solution is to write the IP↔Name mapping for my dynamic IPs into /etc/hosts. That way a reverse lookup on the IP will actually yield the information from /etc/hosts and not ask the DNS system.

Since I don't want to do this manually every time my IP changes, I automate it with this script. It reads host names from /etc/hosts.autoupdate and injects them into /etc/hosts:
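The core idea looks roughly like this (a simplified sketch, not the actual script):

#!/bin/bash
# refresh the /etc/hosts entries for all names listed in /etc/hosts.autoupdate
while read -r host ; do
    ip=$(dig +short "$host" | tail -n 1)
    [ -n "$ip" ] || continue                  # keep the old entry if DNS fails
    sed -i "/[[:space:]]$host\$/d" /etc/hosts # remove the stale entry
    echo "$ip $host" >> /etc/hosts            # add the current mapping
done < /etc/hosts.autoupdate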

The script is actually part of the hosts-updater DEB package, which also installs a man page and a cron job to run it every 5 minutes. As a result my own server recognizes my dynamic IPs as authorized and under their "proper" name.

2014-01-08

Simple UDP Stream Recorder

At the office I got a 3-channel digital audio/video recorder to conveniently record our talks without much human effort. The device has an analog video input for the video camera (standard resolution), a digital video input (Full HD) and an audio input.
Epiphan VGADVI Recorder
These 3 inputs are merged into a single side-by-side video where you can see the speaker next to his computer output. The video can be even larger than Full HD, for example 2688x1200 (a 768 pixel wide SD image next to a 1920 pixel wide HD image).

The device is far from cheap (list price is 1840 € + VAT) and can really do a lot. For example, it can create H.264 movies with a bitrate of up to 9 Mbit/s. It can also upload the videos to a CIFS share, but sadly that works only at a transfer speed of about 4 Mbit/s! So how could I transfer the videos at really high quality settings (9 Mbit/s) to the CIFS share? Waiting 2 hours to transfer the video of a 1-hour talk is not an option.


Linux and Open Source to the rescue!


My solution is a simple UDP stream recorder running on my desktop that receives a UDP stream of the video and saves it directly onto the CIFS share :-) UDP broadcasting of the video stream is a nice feature of the device and works at the full bitrate that it can encode. It is actually meant for live broadcasts of a talk to the Internet, but it can also serve to beam the video from the device to another computer.

Since it took quite a while to cook up this simple solution (and I did not find any satisfactory search result), here it is:
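The heart of it is a single pipeline like this (a sketch; the destination path is an assumption, and the real version runs under Upstart as described below):

# receive the UDP stream on port 6002, give up after 5 seconds of silence;
# perl creates the output file only when the first data actually arrives
socat -u -T 5 UDP-RECV:6002 - | perl -ne '
    BEGIN { use POSIX qw(strftime) }
    unless ($opened) {
        my $file = strftime "/mnt/videos/%Y-%m-%d_%H-%M-%S.ts", localtime;
        open OUT, ">", $file or die "cannot open $file: $!";
        binmode OUT;
        $opened = 1;
    }
    print OUT $_;'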

Implementation Details

This implementation uses some tricks to make it so simple:
  • socat connects incoming UDP packets on port 6002 to a program that writes the data to a file
  • Upstart keeps socat running and restarts it if it fails
  • socat terminates itself if it did not receive anything for 5 seconds
  • The Perl code
    • uses perl -n as the read loop
    • creates the destination file only if there is actually some data to write.
      No data = no file
    • uses the current date and time as filename - at the time it actually starts to receive something
  • Logging (only errors) is done via syslog
As a result, each time the recorder is used the videos (H.264 in an MPEG TS container) appear instantly on the file share. Just connect the device and the UDP stream recorder automatically records everything.

2013-12-27

Automated Raspbian Setup for Raspberry Pi

Recently we got a whole bunch of Raspberry Pi systems at work - the cheapest platform for building Dashboards.

Everybody loves those cute little boxes - but nobody wants to deal with the setup and maintenance. The kiosk-browser package is of course also the basis of our Pi-based setup. But how does it get onto the Pi?

The solution is a Bash script: rpi-image-creator, available in the ImmobilienScout24 GitHub project. It automates the setup of a Pi with Raspbian by downloading a Raspbian image, customizing it a bit and writing it to an SD card. The reasons for writing my own script were the following features:

  1. It creates the file systems on the SD card instead of dumping a raw image onto it. That way the partitions are all aligned properly and have the right size from the beginning; no later resizing is needed.
  2. It removes all the stuff that Raspbian runs at the first boot, so that the resulting SD cards are production-ready (IMHO the goal of any automation).
  3. It does some basic setup, adds my SSH keys and disables the password of the standard user.
  4. It configures fully automated updates.
  5. It adds our internal DEB repo with the internal GPG key and installs packages from there. In our case those packages do most of the customization work, so that this script stays focused on the Pi-specific stuff.
As a result I now have a one-step process that turns a fresh SD card into a ready-to-use system.

The script uses some tricks to do its work:
  • Use qemu-arm-static to chroot into the Raspbian image on my desktop (see the sketch after this list). There are a whole load of other recipes for doing that on the Internet.
  • Temporarily disable the optimized memcpy and memset functions in /etc/ld.so.preload, as otherwise qemu-arm-static will just crash without a clear error message.
  • Mount the SD card file systems with aggressive caching to speed up writing to them. The ext4 filesystem is configured with the journal_data mount option to increase resilience against sudden power loss.
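The chroot trick, stripped to its essence, looks roughly like this (a sketch; device and package names are placeholders, and the qemu-user-static package must be installed so that binfmt_misc can run ARM binaries):

# mount the Raspbian root filesystem from the SD card
mount /dev/sdX2 /mnt/raspbian
# copy the static ARM emulator into the image so chroot can run ARM binaries
cp /usr/bin/qemu-arm-static /mnt/raspbian/usr/bin/
# comment out the ARM-optimized memcpy/memset that crash qemu-arm-static
sed -i -e 's/^/#/' /mnt/raspbian/etc/ld.so.preload
# now work inside the image as if it was a running Pi
chroot /mnt/raspbian apt-get install -y some-package
# re-enable the optimized functions before booting on real hardware
sed -i -e 's/^#//' /mnt/raspbian/etc/ld.so.preload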
The script can be customized by editing the variables at the top:
The two is24- DEB packages do most of the setup work. If there is enough demand I can see if I can get them released.

2013-12-11

RSH Pitfall: Remote Exit Code

While writing some test scripts that use rsh (see below for why) to run commands on a remote test server, I noticed that rsh and ssh have a significant difference:

ssh reports the remote exit code and rsh does not

As a result my tests did not actually test anything; the error condition was never triggered:
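A quick demonstration of the difference (hypothetical host name):

rsh testhost false ; echo $?   # prints 0 - the remote failure is invisible
ssh testhost false ; echo $?   # prints 1 - the remote exit code is reported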

My solution is this rsh wrapper:
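The sketch below shows the idea (not the actual wrapper; the marker string is my invention): append an echo of the remote exit code to the command and convert the marker back into a local exit code.

#!/bin/bash
# rsh wrapper that propagates the remote exit code
host="$1" ; shift
rsh "$host" "$@" '; echo "REMOTE_EXIT_CODE=$?"' | {
    code=1
    while read -r line ; do
        case "$line" in
            REMOTE_EXIT_CODE=*) code=${line#REMOTE_EXIT_CODE=} ;;
            *) printf '%s\n' "$line" ;;
        esac
    done
    exit "$code"
}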

The reason for using rsh instead of ssh is very simple: In a fully automated environment it provides the same level of security as ssh without the added trouble of maintaining it:

With ssh I need to make sure that it continues to work after all the SSH host keys change (e.g. after I reinstall the target). Also, to allow unattended operation I must deploy the SSH private keys, so that in the end others could also extract and use them. I would then have to rely on IP/hostname restrictions on the SSH server side to restrict the use of the private key.

With rsh I don't need to worry about deploying or maintaining keys and just configure the RSH server to allow connections from my build hosts. Reinstalling either the build hosts or the target host does not change the name so that the RSH connection continues to work.

A couple of months ago I also co-authored an article about SSH security in automated environments in the German Linux Magazin, called "Automatische Schlüsselverifikation" (automatic key verification). The article goes deeper into the challenge of using SSH in a fully automated environment and suggests exploring Kerberized RSH as an easier-to-maintain alternative.