Exploring Academia

Last week I attended the Multikonferenz Software Engineering & Management 2015 in Dresden, hosted by the Gesellschaft für Informatik.

My topic was Test Driven Development, but I had to rework my original talk to fit into 20 minutes and to be much less technical. As a result I created a completely new, fast-paced talk that draws a story line from DevOps through Test-Driven Infrastructure Development to Risk Mitigation.

The conference is very different from the tech conferences I usually attend. First, I really was the only person in a T-Shirt :-/. Second, I was apparently invited as the "practitioner" while everybody else was there to talk about academic research, mostly in the form of a bachelor's or master's thesis.

As interesting as the topics were, there was hardly anything even remotely related to my "practical" work :-(

While I still find it worthwhile to bring the two worlds (academic and practical) closer together, this conference has some way to go if it wants to achieve this goal. Maybe it would help to team up with an established tech conference and simply hold two conferences at the same time and place, allowing people to freely wander between the worlds.

I also had some spare time and visited the Gläserne Manufaktur where VW assembles Phaeton and Bentley cars. They take pride in the fact that 95% of the work is done manually, but sadly nobody asked me about my T-Shirt:
I am squinting so much because the sun was really bright that day. In the background is an XL1, a car that consumes less than 1ℓ of fuel per 100km.


A Nice Day at CeBIT 2015

After many years of abstinence I went back to visit the CeBIT today, and actually enjoyed it a lot. It is funny to see how everything is new but nothing has changed. From the oversized booths of the big players like IBM and Microsoft to the tiny stalls of Asian bank note counting machine vendors. From the large and somewhat empty government-IT-oriented booths to meeting old acquaintances and friends.
But there are also several notably new things to see: for example, Huawei presents itself as an important global player with a huge booth next to IBM.
I only managed to visit a third of the exhibition, but it was more than I could absorb in a single day. Nevertheless, my mission was accomplished by giving a talk about “Open Source, Agile and DevOps at ImmobilienScout24”. The talk is much more high-level than my usual talks and tries to give a walk-through overview. About 60-80 people attended my talk, and the questions showed that the topic was relevant for the audience. So maybe giving management-level talks is the right thing to do for CeBIT.
Meeting people is the other thing that still works really well at the CeBIT. Without a prior appointment I was able to meet with Jürgen Seeger from iX magazine about my next ideas for articles, and with people from SEP about better integrating their backup tool SESAM with Relax-and-Recover.
The new CeBIT concept of focusing on the professional audience seems to work: I noticed far fewer bag-toting, swag-hunting people than last time. All in all I think that attending for one day is worth the trouble and enough to cover the important meetings.

Random Impressions

IBM's Watson wants to be a physician.

Video conferencing with life-sized counterparts. 4K really does make a difference!

Why buy 4 screens if you can buy 1 (QM85D)? Samsung has a lot more to offer than just phones.

Definitely my next TV (QM105D). 105", 21:9 ratio and 2.5 meters wide.

Another multimedia vendor? WRONG! This is "just" a storage box!

Though it seems like storage is no longer the main focus for QNAP.

Cyber crime is big - cyber police still small

Virtual rollercoaster at Heise - barf bags not included.

Deutsche Telekom always has a big booth and represents the top of German IT development. To underline the "Internet of Things" theme, a bunch of robot arms danced with magenta umbrellas.

Dropbox comes to CeBIT in an attempt to win business customers. The data is still hosted in the USA, but the coffee was great.

And finally, even the weather was nice today.


Injecting a Layer of Automation

Relax and Recover is the leading Open Source solution for automated Linux disaster recovery. It was once the pride of my work and is now totally irrelevant at my current job at ImmobilienScout24.

Why? Simply because at ImmobilienScout24 we invest our time into automating the setup of our servers instead of investing in the ability to automatically recover a manually configured system. This sounds simple, but it is actually a large amount of work and not done in a few days. However, if you persist and manage to achieve the goal, the rewards are much bigger: we don't need to be afraid of trouble, because based on our automation we can be sure to reinstall our servers in a very short time.

The following idea can help to bridge the gap if you cannot simply automate all your systems but still want to have a simplified backup and disaster recovery solution:

Inject a layer of automation under the running system.

The provisioning and configuration of the automation layer should of course be fully automated. The actual system stays manually configured, but now runs inside a Linux container (LXC, Docker, a plain chroot ...) and otherwise stays as it was before. The resource overhead introduced by the Linux container and an additional SSH daemon is negligible for most setups.
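A minimal sketch of this injection with a plain chroot could look as follows. All paths, the SSH port and the function names are hypothetical examples, not a finished tool; a real setup would be driven by your configuration management.

```shell
# Sketch: inject an automation layer under a manually configured system.
# Paths and port are hypothetical examples.

# Copy the legacy root filesystem into a directory that will serve as
# the container root (done once, e.g. from a rescue system).
prepare_legacy_root() {
    mkdir -p /srv/legacy-root
    rsync -aAXH --exclude=/srv/legacy-root / /srv/legacy-root/
}

# Start the unchanged legacy system inside a plain chroot with its own
# SSH daemon on a separate port, so admins can keep working "as before".
start_legacy_container() {
    mount -t proc proc /srv/legacy-root/proc
    mount --rbind /dev /srv/legacy-root/dev
    mount --rbind /sys /srv/legacy-root/sys
    chroot /srv/legacy-root /usr/sbin/sshd -p 2222
}
```

With systemd-nspawn or LXC the same idea needs even less plumbing, since those tools take care of the bind mounts and process isolation themselves.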

The problem of backup and disaster recovery for systems is converted into a problem of backup and restore for data, which is fundamentally simpler because one can always restore into the same Linux container environment. Running the backup from the automation layer also allows using smarter backup technologies, like LVM or file system snapshots, with much less effort.
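As an illustration of a snapshot-based backup taken from the automation layer, here is a rough sketch. The volume group, logical volume and backup paths are made-up examples; any file-based backup tool could replace the tar call.

```shell
# Sketch: file-level backup of the container data via an LVM snapshot,
# taken from the automation layer while the legacy container keeps running.
# Volume group, LV names and paths are hypothetical examples.
backup_legacy_data() {
    # Create a consistent point-in-time snapshot of the legacy root LV.
    lvcreate --snapshot --size 5G --name legacy-snap /dev/vg0/legacy-root
    mkdir -p /mnt/legacy-snap
    mount -o ro /dev/vg0/legacy-snap /mnt/legacy-snap
    # Any file-based backup tool works here; tar is just an example.
    tar -C /mnt/legacy-snap -czf "/backup/legacy-$(date +%F).tar.gz" .
    umount /mnt/legacy-snap
    lvremove -f /dev/vg0/legacy-snap
}
```

Because the restore target is always the same container environment, restoring means unpacking the archive into a fresh container root and starting it again.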

I don't mean to belittle the effort it takes to build a proper backup and restore solution, especially for servers with a high change rate in their persistent data. This holds true for any database like MySQL and is even more difficult for distributed database systems like MongoDB. The challenge of creating a robust backup and restore solution stays the same regardless of the disaster recovery question. Disaster recovery is always an on-top effort that complements the regular backup system.

The benefit of this suggestion lies in the fact that it replaces the effort for disaster recovery with an investment in systems automation. That approach yields much more value: a typical admin will use systems automation much more often than disaster recovery. Another way to see this difference is that disaster recovery optimizes the past while systems automation optimizes the future.

The automation layer can also be based on one of the minimal operating systems like CoreOS, Snappy Ubuntu Core or Red Hat Atomic Host. In that case new services can be established with full automation as Docker images, opening up a natural road to migrating the platform to be fully automated, while gracefully handling the manually set up legacy systems without disturbing the idea of an automated platform.
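One possible way to move such a legacy system onto a Docker-based minimal host is to import its root filesystem as an image. This is only a sketch under assumptions: the paths, image tag and container command are invented examples, and the mutable data is deliberately kept outside the image so that backup remains a pure data problem.

```shell
# Sketch: wrap an existing, manually configured root filesystem into a
# Docker image. Paths, image name and bind mounts are hypothetical.
legacy_to_docker_image() {
    # Pack the legacy root filesystem and import it as a Docker image.
    tar -C /srv/legacy-root -c . | docker import - legacy/app:manual
}

run_legacy_container() {
    # Keep the mutable data on the host so it can be backed up separately.
    docker run -d --name legacy-app \
        -v /srv/legacy-data:/var/lib/app \
        legacy/app:manual /usr/sbin/sshd -D -p 2222
}
```

From there, new services live next to the legacy container as regularly built images, and the legacy image is simply the one that never gets rebuilt.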

If you already have a fully automated platform but suffer from a few manually operated legacy systems, then this approach can also serve as a migration strategy to encapsulate those legacy systems and keep them running as-is.

Update 12.03.2015: Added a short info about Relax and Recover and explained better why it pays more to invest into automation instead of disaster recovery.