2014-10-26

DevOpsDays Berlin 2014

Update: Read my (German) conference report on heise developer.

Last week I was at the DevOps Days Berlin 2014, this time at the Kalkscheune, a much better location than the Urania from last year. With 250 people the conference was not too full, and the location was well equipped to handle that number.

Proving that DevOps is more about people and culture, most talks were not very technical but emphasized the need to bring everybody along on the journey to DevOps.

A technical bonus was Simon Eskildsen's talk "Docker at Shopify", which was the first time I heard about a successful Docker implementation in production.

Always good to know is the difference between effective and efficient, as explained by Alex Schwartz in "DevOps means effectiveness first". DevOps is actually a way to optimize for effectiveness before optimizing for efficiency.

Microsoft and SAP gave talks about DevOps in their world - quite impressive to see DevOps being mainstream.

My own contribution was an ignite talk about ImmobilienScout24 and the Cloud:


And I am also a certified DevOps now:

2014-07-27

EuroPython 2014

One full week of Python power is almost more than one can take, but missing it would have been even worse.

This was my first EuroPython, and with 1200 participants it was a big upgrade compared to the two previous PyCon.DE events in which I participated. The location (Berlin Congress Center) deserves kudos, along with the perfect organization.

The Wifi worked really well (except for a WAN problem on Tuesday which was fixed quickly) and everybody loved the catering. They even had kosher, halal and vegan food (preordered), which is highly unusual for German conferences. Most amazing was the video crew, who managed to upload all videos within about one hour after a talk was given.

I managed to give three talks:

  • DevOps Risk Mitigation: How we use Test Driven Infrastructure at ImmobilienScout24 as part of our general automation to reduce the risk of giving everybody access everywhere. (Access Slides or Watch Video)
  • YAML Reader: Lightning talk about the yamlreader Python library, which provides a wrapper for the yaml.safe_load function that merges several YAML files. yamlreader is the base for most of the modularized configuration in our Python software; a short usage sketch follows after this list. (Access Slides or Watch Video)
  • Open Source Sponsoring
    About why your company should invest in Open Source projects instead of in proprietary software. I did not plan this talk, but a speaker did not show up and I jumped in. (Access Slides or Watch Video)
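Speaking of yamlreader, here is a short usage sketch (the conf.d path is invented); yaml_load merges every matching file into a single data structure:

from yamlreader import yaml_load

# Merge all YAML files in a (hypothetical) conf.d directory into one
# dict; later files override earlier ones on conflicting keys.
config = yaml_load("/etc/myapp/conf.d/*.yaml")
print(config)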
I very much enjoyed the international audience at the conference and hope to be able to attend next year's event as well.




2014-07-01

iPXE - The Versatile Boot Loader

iPXE is a lesser-known Open Source PXE boot loader which offers many interesting features, e.g. booting over HTTP, iSCSI and AoE and a built-in scripting language.

Talk & Article

Since iPXE plays a role in the ImmobilienScout24 boot automation, I gave a talk about it at LinuxTag 2014. The talk is half an hour long and gives a quick introduction to iPXE. It covers build, configuration & scripting and shows how to develop boot scripts for iPXE with a very short feedback cycle.



Download the slides for the talk and the audio recording as a podcast.

At the conference the German Linux Magazin became interested in the topic and asked me to write an article about iPXE:

Der vielseitige Netzwerk-Bootloader I-PXE
Linux Magazin 08/2014


Demo Scripts

For the article I created a bunch of demo scripts that are available on Gist. To try them out follow these steps:
  1. Install QEMU, usually part of your Linux distro but also available for other platforms.
  2. Download my pre-built iPXE boot kernel ipxe.lkrn
  3. Start QEMU with ipxe.lkrn and the URL to the demo script:
    qemu -kernel ipxe.lkrn -append 'dhcp && chain http://goo.gl/j8MbXI'
  4. Try out the various options. The login will accept any password that is the reverse of the username.
The demo script looks like this:
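(The original script is embedded from a Gist; the following is only a stand-in sketch with invented menu entries to show the iPXE scripting style.)

#!ipxe
menu iPXE demo
item local    Boot from the first local disk
item shell    Drop into the iPXE shell
item reboot   Reboot the machine
choose target && goto ${target}
:local
sanboot --no-describe --drive 0x80 || goto shell
:shell
shell
:reboot
reboot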

And the QEMU boot looks like this:
(Screenshot: ipxe-qemu-demo2-menu.png - the iPXE demo menu booting in QEMU)

Try it out

Anybody struggling with PXELINUX should definitely check out iPXE to see if it is a better fit for their needs.

2014-06-26

automirror - Automate Linux Screen Mirroring


I do a lot of pair work, and many times I connect a large TV or projector to my laptop for others to see what I am doing.

Unfortunately the display resolution of my laptop never matches that of the other display, and Linux tends to choose 1024x768 as the highest compatible resolution. This is of course totally useless for doing any real work.

My preferred solution for this problem is to use X scaling to bridge the resolution gap between the different screens.

Since none of the regular display configuration tools support scaling, I ended up typing this line very often:

xrandr --output LVDS1 --mode 1600x900 --output HDMI3 --mode 1920x1080 --scale-from 1600x900

Eventually I got fed up and decided to automate the process. The result is automirror, a little Bash script that automatically configures all attached displays in a mirror configuration. automirror is available at https://github.com/schlomo/automirror.

Typical Use Cases

Connecting a Full HD 1920x1080 display via HDMI to my 1600x900 laptop. In this case automirror will simply configure the HDMI device with 1920x1080 and scale the 1600x900 laptop display. As a result I stay with the full resolution on my laptop display and it also looks nice on the projector.

Another case is where I work with a 1920x1200 computer monitor and add a 1920x1080 projector as a second display. Again the highest common resolution offered by both devices is 1024x768. automirror will recognize my 1920x1200 display as the primary display and scale its image to 1920x1080 on the secondary display, which is hardly noticeable.
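The core of the idea can be sketched in a few lines of Bash (the real automirror does much more careful mode selection; the xrandr parsing here is a simplified assumption):

#!/bin/bash
# Mirror the first connected output onto the second one via scaling.
outputs=( $(xrandr | awk '/ connected/ {print $1}') )
# The line after an output's "connected" line holds its preferred mode.
mode_of() { xrandr | awk -v o="$1" '$1 == o {getline; print $1; exit}'; }
primary=$(mode_of "${outputs[0]}")
secondary=$(mode_of "${outputs[1]}")
xrandr --output "${outputs[0]}" --mode "$primary" \
       --output "${outputs[1]}" --mode "$secondary" --scale-from "$primary"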

It is recommended to configure a hotkey to run automirror so that one can run it even if the display configuration is heavily messed up. In rare cases it might be necessary to run automirror more than once for xrandr to configure the displays correctly.

2014-06-20

Granting root access in a DevOps world

At the 2014-06 Berlin DevOps Meetup this week we had an interesting fish bowl discussion about

What is the risk of giving DEVs root access in production?

Since I suggested the topic, I was asked to give a short introduction:


The discussion that followed was surprising in several aspects:
  • A major concern is safeguarding the production data, but nobody had a really good solution for that. Many people have more problems with Developers seeing live customer data than with Developers changing something in production.
  • "Nobody should have root" was proposed by a security specialist, but he had no practical working example for this approach.
  • The question is tightly coupled to the degree of automation. The more automation you have the less need for anybody (Dev or Ops) to use their root privileges.
  • Not everybody who has root access knows what to do with it; Developers are sometimes afraid of using their power if granted root.
  • This is mostly a question for larger companies and classical IT organizations. Small companies and startups just give root to everybody who knows what to do.
For me this was the first such discussion where nobody tried to prove that Developers should on principle not get root access. The Test Driven Infrastructure fish bowl at the Berlin DevOps Meetup 2013-12 last year also touched upon this topic, and there the discussion was much more against giving root access to Developers.

My personal opinion is that in a DevOps world people are the focus of our interest. Official titles and organizational positions should matter less than what people actually do. We should therefore give root access to people based on:
  • Trust to act in our common interest
  • Commitment to fix everything they break
  • Skills to tread carefully in our production environment

2014-06-13

My SMART TV - Linux For The Win

I love my "smart" TV - it has Linux inside, which is the basis for a whole range of nice hacks.

TV Router

The most important one is that the TV is actually a wireless router that provides Internet via Ethernet to my TV rack. Usually the Ethernet connection is used by the Playstation or a Raspberry Pi.
The original reason for this hack was simple: the Playstation 3 has really, really bad Wifi reception, which made watching Netflix nearly impossible and the unavoidable PS3 updates painfully long. The USB Wifi adapter connected to the TV has much better reception; sharing it with the PS3 solved all the performance problems.

Samsung Linux TV

And here comes the good part. The TV (Samsung LE32C650) runs Linux inside and there is an Open Source project (SamyGO) that "opens up" the TV firmware and extends this Linux with useful tools.

In my case I only had to enable IP forwarding, configure a static IP on the Ethernet interface (eth0) and start a DHCP server on it. The Samsung kernel already includes IP forwarding (thanks!) and the DHCP server is part of the Busybox that comes with SamyGO.
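For illustration, the whole setup boils down to something like this sketch (run inside the SamyGO environment; addresses and the udhcpd config file are assumptions):

# Share the TV's Wifi uplink with the devices on the Ethernet port
echo 1 > /proc/sys/net/ipv4/ip_forward        # enable routing
ifconfig eth0 192.168.77.1 netmask 255.255.255.0 up
udhcpd /etc/udhcpd.conf                       # Busybox DHCP server for eth0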

NFS

Another benefit from rooting the TV is the option to add NFS support. The TV has a great media player that plays almost all file formats, even with subtitles and multiple audio tracks. The player can fast forward/rewind and even remembers the last playback position for each video. But all of these nice features only work when playing videos from USB storage, not over DLNA.

Thanks to SamyGO it is possible to mount an NFS share onto a directory on a USB stick. The TV thinks that the NFS share is on the USB stick and happily plays all the videos with all the fancy features.
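The mount itself is a one-liner; a sketch (server, export and the USB mount point are assumptions and differ per TV model):

# Overlay an NFS export onto a directory that already exists on the USB stick
mount -t nfs -o nolock 192.168.1.10:/export/videos /dtv/usb/sda1/videos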

Wife Acceptance Factor

Back in 2010, when I bought the TV, this was a really cool solution with a high WAF because both watching TV and playing videos from our collection work with the same remote control. Nowadays I would probably just attach a Raspberry Pi (with OpenELEC) to the TV and enjoy the seamless integration thanks to HDMI CEC. But it is still nice to know that I can extend my TV to better serve our needs.

I can only hope that the next TV will be equally hack friendly.

2014-05-28

Win-Win: Employer Branding and Corporate Social Responsibility

Does your company care about employer branding? Probably yes.

Does your company care about corporate social responsibility? Probably yes.

Does your company combine these two to create a win-win situation? Most likely not!

Take my employer ImmobilienScout24 as a typical example: the about us page mentions that ImmobilienScout24 is a great place to work (4th in our region), and the CSR team talks about the company's social engagement, e.g. blood donations or the social day where all employees donate their work time to non-profit organizations.

However, there is no obvious connection between these two things.

I would like to suggest a simple way how to combine both employer branding and corporate social responsibility:

A company should make it a priority to support charitable organizations and social projects related to their own employees.

Examples:
  • Sponsor non-profit organizations or neighborhood/community projects that employees are involved with.
  • On social day, go to schools and kindergartens where employees are parents.
  • Involve employees who are in the red cross or similar organizations to organize the annual blood drive.
  • Support local or neighborhood charity organizations instead of global ones.
Basically the idea is that CSR activities should be geared around the employees' private lives and activities.

This will create a win-win situation and especially help to retain employees, because they get additional fulfillment and satisfaction from their employer supporting their social engagement.

There are no added costs involved; it is enough to change the way CSR budgets are spent.

I mostly hear these arguments against this idea:
  1. CSR spending must be charitable beyond doubt; employee projects could be too narrowly targeted to count as generally charitable.
  2. Employee-oriented sponsoring would lead to envy between colleagues.
  3. The danger of personal enrichment or employees taking personal advantage is too high.
  4. Niche projects and small target groups would get a disproportionate share of the funding.
  5. Employees who are less outspoken or less engaged would be disadvantaged.
All these arguments are certainly valid and represent the fear that "something could go wrong". Of course sponsoring a large and well-established institution is much easier and safer, but also much less gratifying - and much less outstanding or worthy of press attention.

I believe that all these concerns can be addressed by establishing simple rules for the funding:
  • Communicate the concept of employee-oriented CSR funding to all employees so that everybody understands the value of making CSR spending more personal and more related to the people.
  • Make CSR funding very transparent - from the internal application through the reasons for each decision to the detailed spending report.
  • Publish follow-ups on past fundings to ensure sustainable spending and to give positive examples.
  • Make a very visible call for participation to invite all employees to suggest organizations and projects they care about.
  • Not every single project must be charitable for the general population - all projects taken together should have a sufficiently wide spread.
With these rules a company can easily resolve the concerns preventing the beneficial combination of CSR spending and employer branding.

The following links discuss this idea in part without drawing the obvious conclusion that smarter CSR funding could improve employer branding for free:
Image: © Can Stock Photo Inc. / ribah2012 and / mindscanner

2014-05-22

Adding Custom Menus for Linux Desktops

The "Start Menu" of a Linux Desktop usually comes with a predefined set of categories that make up the sub menus. If you have a lot of custom applications then you might want to group them under a dedicated sub menu instead of having them spread out over all the menu categories.

Adding sub menus and new categories on Linux Desktops is defined in the Desktop Menu Specification in Appendix C. It turns out that it is really simple and the following example from ImmobilienScout24 can serve as a base for your own custom menu.

You will need the following parts:
  1. A Desktop file using a custom category
  2. A Directory file defining the icon and description for the new sub menu
  3. The icon for the sub menu
  4. An XML file describing how to integrate the new sub menu into the menu structure and which categories of Desktop files to show in the new menu
The Desktop file describes the menu entry, in this example the VPN client:
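Such a Desktop file could look like this minimal sketch (Name, Exec and Icon are invented):

[Desktop Entry]
Type=Application
Name=IS24 VPN Client
Exec=is24-vpn-client
Icon=is24-vpn
Categories=Network;X-IS24;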
The important part here is the Categories entry which specifies a generic category (Network) and a new custom category (X-IS24). The Desktop Menu Specification states that custom categories must start with X-. The Desktop file usually goes to /usr/share/applications.

The Directory file also conforms to the Desktop Entry Specification but is of Type Directory:
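A minimal sketch, typically installed as /usr/share/desktop-directories/is24.directory (names invented):

[Desktop Entry]
Type=Directory
Name=ImmobilienScout24
Icon=is24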
The XML file is usually placed in /etc/xdg/menus/applications-merged and extends the menu structure with the new sub menu, tying together the categories and the Directory file:
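A sketch of such a merged menu file (the Directory file name must match the one above):

<!DOCTYPE Menu PUBLIC "-//freedesktop//DTD Menu 1.0//EN"
 "http://www.freedesktop.org/standards/menu-spec/1.0/menu.dtd">
<Menu>
  <Name>Applications</Name>
  <Menu>
    <Name>ImmobilienScout24</Name>
    <Directory>is24.directory</Directory>
    <Include>
      <Category>X-IS24</Category>
    </Include>
  </Menu>
  <Menu>
    <Name>Network</Name>
    <Exclude>
      <Category>X-IS24</Category>
    </Exclude>
  </Menu>
</Menu>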
In this case we also exclude the X-IS24 category from the Network category so that our menu entries will not show up in several sub menus.

KDE, Gnome Classic, XFCE and other desktops with a regular menu all follow the same standards and show the new sub menu. Unity and Gnome 3 seem to have a fixed set of built-in categories and don't show the new sub menu as a new category.

2014-05-15

Simple Video Presentation with Raspberry Pi

Playing videos in an endless loop is a common problem:
  • Product demos at a trade show or fair
  • Infomercials in a public place or foyer
  • Background fun at a party
  • ...
When I faced this problem at the last LinuxTag we did not want to take a full-blown computer with us but to make do with a Raspberry Pi. The question was how to turn the Pi into a simple video player with a minimum amount of fuss.

The solution is simple and elegant:
  1. Install OpenELEC (a Kodi distribution) on an SD card
  2. Boot it up once on the Pi to initialize the storage partition
  3. Add the following file to the storage partition as .kodi/userdata/autoexec.py (a minimal sketch follows after this list)
  4. Add any number of multimedia files to the storage partition under videos/
  5. Boot up the Pi and enjoy your videos
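The original autoexec.py is embedded from a Gist; a minimal sketch of the idea (the Kodi builtin calls and the /storage path are my assumptions):

import xbmc

# Play everything in the videos folder and repeat forever
xbmc.executebuiltin('PlayMedia(/storage/videos/, isdir)')
xbmc.executebuiltin('PlayerControl(RepeatAll)')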
You can also interrupt the playback and use OpenELEC normally. To go back to the automatic playback simply reboot the system.

And here is our booth with the demo videos in front:

Update 2016-05-13: Adjusted for Kodi instead of XBMC. Everything else works as before.

2014-05-02

Simple file patching with sed

Patching configuration files is the bread-and-butter job of every configuration management system. In our package-based deployment world we try to keep patching to the absolute minimum, usually just to "enable" modularized configuration patterns.

The best example is the Apache webserver, where we have a wrapper RPM package with a %post script that simply replaces (rather than patches) the upstream configuration with a few include lines:
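The embedded example boils down to something like this sketch (paths and the minimal set of directives are assumptions):

# RPM %post sketch: replace the upstream httpd.conf instead of patching it
cat > /etc/httpd/conf/httpd.conf <<EOF
# Installed by %{name}-%{version}-%{release} - do not edit
ServerRoot /etc/httpd
Include conf.modules.d/*.conf
Include conf.d/*.conf
EOF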

Sadly there is still a lot of software that does not support includes in its configuration. For these we of course have to patch the existing configuration and use this short and simple config patcher in our RPM %post scripts, for example like this for sshd_config:
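The snippet itself is embedded from a Gist; the idea can be sketched like this (the settings are examples):

# RPM %post sketch: first prepend our settings plus a marker comment,
# then delete the old occurrences wherever they were in the file
sed -i -e '1i # Patched by %{name}-%{version}-%{release}\nPermitRootLogin no\nPasswordAuthentication no' \
    -e '/^\(PermitRootLogin\|PasswordAuthentication\)/d' /etc/ssh/sshd_config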

The trick of this snippet is that in the end the changed parts are always at the top of the file. It is also important to always embed some information about the cause of the patch so that one can easily find out who or what is responsible for the file. The %-variables are filled in by RPM and provide precise information about which package caused the change.

2014-04-04

Automated OpenSSH Configuration Tests



When developing or fine-tuning OpenSSH configurations, testing can be quite tiresome: change the configuration, restart the server, run manual tests, repeat. Not to mention the many times when restarting the SSH server does not work and you lock yourself out of your test server.

When writing a Linux Magazin article about SSH key management I wanted to show how to use OpenSSH PKI in a repeatable way. The result is an automated test suite for OpenSSH configuration:
$ ./run_demo.sh
... lots of info output running through ...
SSH PKI Demo Test Results:

Succeeded create-ca-key
Succeeded create-host-key
Succeeded sign-host-key
Succeeded create-user-root-key
Succeeded sign-user-root-key
Succeeded create-user-unpriv-key
Succeeded sign-user-unpriv-key
Succeeded test-trusting-known-hosts-via-cert-and-login-with-password
Succeeded test-that-hostname-in-cert-must-match-target-host
Succeeded test-login-with-root-key-trusted-by-cert
Succeeded test-that-username-in-cert-must-match-target-user
Succeeded test-revoked-ca-key-prevents-login
Succeeded test-revoked-user-key-prevents-login
Succeeded test-revoked-host-key-prevents-connection
Succeeded in running all tests, congratulations!

It does not require root permissions and creates a fake environment in which it can start an SSH server and connect a client to it. The test also creates the required SSH CA certificate, host and user keys to serve as a practical example of how to use OpenSSH PKI.

Based on this script it is very easy to write your own tests that verify other aspects of OpenSSH configuration as part of your Test Driven Infrastructure.

The code is available on my GitHub repository: https://github.com/schlomo/openssh-config-test

2014-03-24

Opening a Window to a Wider World

When I bought a new Acer C720 Chromebook last week I got confirmation that times are changing: it has only an HDMI connector, no more VGA. Luckily, at ImmobilienScout24 we are also adapting: last month our big projector got an upgrade to Full HD with 16:9 widescreen, and you can now connect your computer through HDMI, too.

Since I had become so used to creating presentations in 4:3, I took the opportunity to remind myself and everybody else why it really pays to pay attention to this little detail.

Video is in German with English subtitles.

2014-02-25

SSH with Personal Environment

A colleague, Eric Grehm, raised an interesting challenge:

How to maintain his personal work environment (VIM settings, .bashrc ...) on all servers?

The first thought was to put this somehow into our software distribution, but we quickly realized that this would trigger needless updates on hundreds of servers. The benefit would have been that the personal work environment is already on every server upon first access.

The next idea was to switch from a pre-installed personal environment to an on-demand solution where the personal environment is transferred each time a remote connection (over SSH) is established.

A simple implementation would just do an scp before the ssh, but that entails two connections, which takes more time and might also bother the user with a double password prompt.

Side-channel data transfer

An alternative is to piggyback the file transfer onto the regular SSH connection so that the personal environment is transferred in a side channel:
  1. On the client create a directory with the files of the personal environment that need to be distributed:
    $ tree -a personal_environment
    personal_environment
    ├── .bashrc
    └── .vimrc
  2. On the client create a TAR.GZ archive with the files that need to be transferred and store this archive base64-encoded in an environment variable (which can hold up to about 127 kB):
    $ export USERHOME_DATA=$(tar -C ~/personal_environment -cz .| base64)

    I put this into a function in my .bash_profile to load on each login.
  3. Configure the SSH client to transmit this environment variable (SendEnv):
    $ cat .ssh/config
    Host dev*
            SendEnv USERHOME_DATA
  4. Configure the SSH server to accept this environment variable (AcceptEnv):
    $ sudo grep USERHOME /etc/ssh/sshd_config
    AcceptEnv USERHOME_DATA
  5. Create an sshrc script on the server that unpacks the archive from the environment variable upon login (see the sketch after this list):

    (Only the last part is relevant to this topic, but if an sshrc script is provided it must also take care of xauth).
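A sketch of such a script as /etc/ssh/sshrc (the xauth part is adapted from the example in the sshd man page):

#!/bin/sh
# sshd skips its built-in X11 cookie handling when an sshrc exists,
# so an sshrc must do the xauth work itself
if read proto cookie && [ -n "$DISPLAY" ]; then
    if [ "$(echo "$DISPLAY" | cut -c1-10)" = "localhost:" ]; then
        # X11UseLocalhost=yes
        echo add "unix:$(echo "$DISPLAY" | cut -c11-)" "$proto" "$cookie"
    else
        # X11UseLocalhost=no
        echo add "$DISPLAY" "$proto" "$cookie"
    fi | xauth -q -
fi

# unpack the personal environment that the SSH client sent along
if [ -n "$USERHOME_DATA" ]; then
    echo "$USERHOME_DATA" | base64 -d | tar -C "$HOME" -xz
fi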

Benefits

This approach has several benefits:
  • True on-demand solution: the personal environment is updated on each connection.
  • No extra connection required to transfer data.
  • sshrc is executed before the login shell, so even .bashrc can be transferred.
  • Scales well with any number of users.
  • Scales well with frequent changes to the personal work environment.
The disadvantages are that the SSH configuration must be extended and that the amount of transferable data is limited to about 127 kB compressed & encoded - which I actually see as a benefit because it prevents abuse.

For me the benefits by far outweigh the problems, and I don't need to transfer that many files anyway. This solution fulfills all my needs without putting extra load on the servers or on our deployment infrastructure.

2014-02-13

Rough Measurement for HTTP Client Download Speed

Image: Henrik G. Vogel / pixelio.de
Ever wonder if your website is slow because of the server or because of the clients?
Do you want to know how fast your clients' connections to the Internet are?
Don't want to use external tracking services, inject JavaScript etc.?

Why not simply measure how long it takes to deliver the content from your webserver to your users? Apache and nginx both support logging the total processing time of a request with suitably high precision. That measures the time from the first byte received from the client to the last byte sent back to the client.

To try out this idea I added %D to the log format for access.log of my Apache server and wrote a little Python script to calculate the transfer speeds. With the help of the apachelog Python module parsing the Apache access.log is really simple. This module takes a log format definition as configuration and automatically breaks down a log line into the corresponding values.
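For Apache this is a one-line change to the log format (the format name is arbitrary; %D logs the request duration in microseconds):

LogFormat "%h %l %u %t \"%r\" %>s %b %D" combined_time
CustomLog /var/log/apache2/access.log combined_time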

The script can be found together with a little explanation on my GitHub page: github.com/schlomo/apacheclientspeed.

It is rather rough; I am now collecting data to see how useful this information actually is. One thing I can already say: small files give very erratic results, bigger files yield more trustworthy numbers.

Videos are a good example; for them the output of the script looks like this:

$ grep mp4 /var/log/apache2/access.log | tail -n 20 | python -m apacheclientspeed
217.111.70.208 GET /c/g1/GANGNAM_320_Rf_30.mp4 HTTP/1.1 1409 KiloByte/s
217.111.70.208 GET /c/g1/GANGNAM_320_Rf_30.mp4 HTTP/1.1 119 KiloByte/s
217.111.70.208 GET /c/g1/GANGNAM_320_Rf_30.mp4 HTTP/1.1 1936 KiloByte/s
217.111.70.208 GET /c/g1/GANGNAM_320_Rf_30.mp4 HTTP/1.1 90 KiloByte/s
217.111.70.208 GET /c/g1/GANGNAM_320_Rf_30.mp4 HTTP/1.1 159 KiloByte/s
217.111.70.208 GET /c/g1/GANGNAM_320_Rf_30.mp4 HTTP/1.1 83 KiloByte/s
89.204.139.1 GET /c/g1/GANGNAM_320_Rf_30.mp4 HTTP/1.1 2067 KiloByte/s
89.204.139.1 GET /c/g1/GANGNAM_320_Rf_30.mp4 HTTP/1.1 43226 KiloByte/s
89.204.139.1 GET /c/g1/GANGNAM_320_Rf_30.mp4 HTTP/1.1 4 KiloByte/s
89.204.139.1 GET /c/g1/GANGNAM_320_Rf_30.mp4 HTTP/1.1 33491 KiloByte/s
89.204.139.1 GET /c/g1/GANGNAM_320_Rf_30.mp4 HTTP/1.1 28 KiloByte/s
89.204.139.1 GET /c/g1/GANGNAM_320_Rf_30.mp4 HTTP/1.1 0 KiloByte/s
89.204.139.1 GET /c/g1/GANGNAM_320_Rf_30.mp4 HTTP/1.1 0 KiloByte/s
89.204.139.1 GET /c/g2/GANGNAM_320_Rf_28.mp4 HTTP/1.1 0 KiloByte/s
89.204.139.1 GET /c/g1/GANGNAM_320_Rf_30.mp4 HTTP/1.1 0 KiloByte/s
89.204.139.1 GET /c/g1/GANGNAM_320_Rf_30.mp4 HTTP/1.1 0 KiloByte/s
89.204.139.1 GET /c/g1/GANGNAM_320_Rf_30.mp4 HTTP/1.1 0 KiloByte/s
89.204.139.1 GET /c/g1/GANGNAM_320_Rf_30.mp4 HTTP/1.1 0 KiloByte/s
217.111.70.208 GET /c/g1/GANGNAM_320_Rf_30.mp4 HTTP/1.1 507 KiloByte/s
80.226.1.16 GET /c/g1/GANGNAM_320_Rf_30.mp4 HTTP/1.1 46 KiloByte/s

The many requests with 0 KB/s are made by iOS devices. They do a lot of smaller Range requests when streaming a movie over HTTP.

2014-01-31

apt-install

Do you ever get tired of typing
sudo apt-get update && sudo apt-get install <package>
just to install one package that you added to your DEB repo? I do, and I decided to do something about it. What I really miss is the intelligence of yum, which simply updates its repo caches if they are too old.
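That yum-like behaviour can be approximated in a few lines of shell (a sketch, not the actual apt-install; the freshness test and the one-hour threshold are arbitrary assumptions):

#!/bin/bash
# Update the package cache only if it is older than 60 minutes,
# then install whatever was given on the command line.
if [ -z "$(find /var/lib/apt/lists -maxdepth 0 -mmin -60)" ]; then
    sudo apt-get update
fi
sudo apt-get install "$@"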

apt-install (github.com/schlomo/apt-install) is the next best thing. It is a simple Python script that updates the cache and installs the packages given as command line arguments. And it shows a nice GUI with a progress bar:


It turns out that all the parts are already there as part of aptdaemon. The only thing missing was putting them together into this little script:
Please note that I actually don't understand how to write async code at all. I'll be happy about any feedback with better implementations.

2014-01-24

Simple Video Tricks

While working on the new recorder (see also the last posting) I suddenly faced several challenges with the resulting video files:
  • Many short chunks (50MB each, about 30-60 seconds) need to be merged
  • Extract the actual talk from a longer recording, e.g. the recorder was on for one hour but the talk was only half an hour
  • Convert the video into another container format because Adobe Premiere does not like AVI files
  • Create video thumbnails
  • Convert videos to be compatible with HTML5 <video> playback
It turns out that avconv (or ffmpeg) is the Swiss Army knife for all of these tasks! I am not qualified to say which is better; for my purposes the avconv that ships with Ubuntu is good enough. The examples given here work with both tools - when I write avconv I mean both.

Since I don't want to degrade the video quality through repeated decode/encode steps, I always use -codec copy after the input file to simply copy over the audio and video data without re-encoding it.

Concatenate Videos

This is probably the easiest part: avconv has an input protocol called concat which simply reads several files sequentially:

avconv -i 'concat:file1|file2' -codec copy out.mp4

(Note the quotes: without them the shell would interpret the | as a pipe.)

Note: This method simply concatenates the input files as-is. It is mostly useful for transport streams and usually fails for files that have container metadata. For those you can use the concat demuxer that is only available in ffmpeg. Full details can be found in the ffmpeg wiki.

This script automates the process. It takes two files as arguments and concatenates all the files in the same directory, starting from the first one and ending with the last one. Further arguments are passed to avconv.
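A sketch of such a helper (the real script lives in the linked Gist; file names are matched verbatim and error handling is omitted):

#!/bin/bash
# Usage: concat-videos FIRST LAST [avconv options ...]
first="$1" last="$2"; shift 2
dir=$(dirname "$first")
# Build "file1|file2|..." from all files between FIRST and LAST in ls order
files=$(cd "$dir" && ls | sed -n "/^$(basename "$first")\$/,/^$(basename "$last")\$/p" | paste -s -d '|')
cd "$dir" && avconv -i "concat:$files" -codec copy "$@"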

Convert Video Container

One of the most common tasks is to convert the video container, e.g. from AVI to MOV or MP4:

avconv -i input.avi -codec copy output.mov

Create Video Thumbnails

Another simple task. Extract an image from the video after 4 seconds:

avconv -i input.avi -ss 4 -vframes 1 poster.jpg

Or for all the videos in a directory:

ls *.mp4 | while read ; do avconv -i "$REPLY" -vframes 1 -ss 4 -y "$REPLY.jpg" ; done


Create HTML5 Video

Converting a video for HTML5 playback requires sticking to some standards, e.g. using a compatible profile and level. The following worked very well for me:

avconv -i INPUT-FILE -pix_fmt yuv420p -c:v libx264 -crf 30 -preset slower -profile:v Main -level 31 -s 568x320 -c:a aac -ar 22050 -ac 1 -b:a 64k -strict experimental out.mp4 ; qt-faststart out.mp4 OUTPUT-FILE.mp4 ; rm out.mp4

For smartphone consumption this is more than good enough. The result is about 3 MB/min. You can increase the video size (e.g. 854x480) or quality (e.g. -crf 28), but this will also significantly increase the file size. qt-faststart moves some MP4 metadata to the beginning of the file so that it can be streamed.

Create video from image

A simple way to create a 3-second video at 25 frames/s from a PNG image:

avconv -loop 1 -i image.png -r 25 -t 3 video.mp4


2014-01-15

Hostname-based Access Control for Dynamic IPs

Sometimes less is more. The simplest way to protect my private web space on my web server is this:

<Location />
    Order Deny,Allow
    Deny from All
    Allow from home.schapiro.org
</Location>

But what to do if home.schapiro.org changes its IP every 24 hours and the reverse DNS entry (PTR) is something like p5DAE56B9.dip0.t-ipconnect.de? When my computer at home connects to the web server, the source IP address is used for a reverse DNS lookup. This lookup returns the above-mentioned provider-assigned name and not home.schapiro.org, so the web server will never be able to identify this IP as belonging to my home router.

The solution is to write the IP↔name mapping for my dynamic IPs into /etc/hosts. That way a reverse lookup on the IP will actually yield the information from /etc/hosts instead of asking the DNS system.

Since I don't want to do this manually every time my IP changes, I automate it with this script. It reads host names from /etc/hosts.autoupdate and injects them into /etc/hosts:
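The packaged script does more validation; a sketch of the core loop (note that the lookup must bypass /etc/hosts - otherwise it would read back its own stale entry - hence dig instead of getent):

#!/bin/bash
# Refresh the /etc/hosts entries for all names in /etc/hosts.autoupdate
while read -r name ; do
    [ -z "$name" ] && continue
    ip=$(dig +short "$name" A | tail -n 1)    # ask DNS directly
    [ -n "$ip" ] || continue
    sed -i "/[[:space:]]$name\$/d" /etc/hosts # drop the old entry
    echo "$ip $name" >> /etc/hosts
done < /etc/hosts.autoupdate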

The script is actually part of the hosts-updater DEB package, which also installs a man page and a cron job to run it every 5 minutes. As a result my own server recognizes my dynamic IPs as authorized and under their "proper" name.

2014-01-08

Simple UDP Stream Recorder

At the office I got a 3-channel digital audio/video recorder to conveniently record our talks without much human effort. The device has an analog video input for the video camera (standard resolution), a digital video input (Full HD) and an audio input.
Epiphan VGADVI Recorder
These 3 inputs are merged into a single side-by-side video where you can see the speaker next to their computer output. The video can be even larger than Full HD, for example 2688x1200 (a 768 pixel wide SD image next to a 1920 pixel wide HD image):

The device is far from cheap (list price is 1840 € + VAT) and can really do a lot. For example, it can create H.264 movies with a bitrate of up to 9 Mbit/s. It can also upload the videos to a CIFS share, but sadly that works only at a transfer speed of about 4 Mbit/s! So how could I transfer the videos at really high quality settings (9 Mbit/s) to the CIFS share? Waiting 2 hours to transfer the videos of a 1-hour talk is not an option.


Linux and Open Source to the rescue!


My solution is a simple UDP stream recorder running on my desktop that receives a UDP stream of the video and saves it directly onto the CIFS share :-) UDP broadcasting of the video stream is a nice feature of the device and works at the full bitrate that it can encode. It is actually meant for live broadcasts of a talk to the Internet, but it can also serve to beam the video from the device to another computer.

Since it took quite a while to cook up this simple solution (and I did not find any satisfactory search result), here it is:
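The original is embedded from a Gist; here is a sketch of the two pieces (port 6002 matches the description below, paths and file names are assumptions):

# /etc/init/udp-recorder.conf (Upstart): keep socat running
start on filesystem
respawn
exec socat -u -T 5 UDP-RECV:6002 EXEC:/usr/local/bin/write-udp-stream

# /usr/local/bin/write-udp-stream: create the file only when data arrives
#!/usr/bin/perl -n
use POSIX qw(strftime);
unless ($out) {
    # no data = no file: open only on the first received packet
    open($out, ">", strftime("/media/cifs/%Y-%m-%d_%H-%M-%S.ts", localtime))
        or die "open: $!";
    binmode($out);
}
print $out $_;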

Implementation Details

This implementation uses some tricks to make it so simple:
  • socat connects incoming UDP packets on port 6002 to a program that writes the data to a file
  • Upstart keeps socat running and restarts it if it fails
  • socat terminates itself if it did not receive anything for 5 seconds
  • The Perl code
    • uses perl -n as the read loop
    • creates the destination file only if there is actually some data to write.
      No data = no file
    • uses the current date and time as filename - at the time it actually starts to receive something
  • Logging (only errors) is done via syslog
As a result, each time the recorder is used the videos (H.264 in an MPEG TS container) appear instantly on the file share. Just connect the device and the UDP stream recorder automatically records everything.