2018-03-23

microXchg 2018: Serverless and Containers

The two days of "The Microservices Conference in Berlin - 2018" were packed with talks that clearly demonstrate that the world of microservices is based mainly on two topics: serverless computing and containers. Users either go to the public cloud providers and use the existing frameworks there, e.g. Lambda, API Gateway, DynamoDB etc. on AWS, or they build their own platforms on top of Kubernetes (or a Kubernetes variant).

The videos are available on the microXchg YouTube channel, including my own talks.

The slides for my own talks are available via SlideShare:

Root for all - measuring DevOps adoption - microxchg 2018 - Schlomo Schapiro, see also Root for All - A DevOps Measure? blog article


Kubernetes - Shifting the mindset from servers to containers - microxchg 2018 - Schlomo Schapiro, see also Using Kubernetes with Multiple Containers for Initialization and Maintenance blog article.

2018-01-22

Cloud means DevOps - No Cloud without DevOps

I strongly believe that you can't be successful in the Cloud without also adopting DevOps. Here is why.

My latest definition of DevOps is
  • if every person uses the same tool for the same job
  • codified knowledge:
    everybody contributes his part to common automation
  • if all people have the same privileges in their tooling
  • if human error is equally possible for Dev and Ops
  • replacing people interfaces by automated decisions and processes
but most of all DevOps is the result of doing the right thing and not a process, methodology or even tool set of its own.

Looking closely at public cloud vendors and their interfaces I see a close correlation with this DevOps definition. Cloud vendors
  • give every person the same interfaces and tools to work with
  • are API based and make it really simple to code the entire setup
  • give all users the same privileges - that of a customer
  • let all their users make the same mistakes indiscriminately
  • provide most change requests through automation and automated decisions
For example, an engineering team working with an AWS account starts out working DevOps style. As long as all the team members have the same permissions in their AWS account and share the same duties towards their organization this remains the case. DevOps starts breaking only when the organization doesn't trust the team to be fully responsible for their account and imposes restrictions so that the team cannot - for example - create IAM roles.

In such a case it is the customer who introduced a new management process that breaks the DevOps definition given above: now some engineers have more privileges in their tooling and different engineers use different tools for the same job. The security engineers review, approve and apply IAM roles within AWS, while all other developers cannot use the AWS interface for this but must submit their IAM roles to the security engineers via some other process. Even if this process is highly or fully automated, the developers cannot use AWS with its native tooling like CloudFormation.

Maybe the company policies require this kind of split responsibility. However, most likely the underlying motivation is not one of preventing DevOps or hindering developers. Typically this split responsibility was the only practical solution available to establish security governance over the AWS account.

Automating such challenges is also the road to achieving or keeping true DevOps: automate all processes and remove the human factor from as many decisions as possible. Let the developers work with the native tooling without introducing abstractions as security gateways. Use these DevOps principles as guidelines to design a good solution that works for all.

A workable solution for the IAM role example given above could be an automated process that checks new IAM roles as soon as they are created. With CloudWatch Events and Lambda functions this is not a big challenge to implement. If a new or modified IAM role violates the company policies, simply delete it (or revoke all trusts) and alert the person who created it. They will be thankful for the help in keeping the policies and for being able to keep working with the native tooling.
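
A minimal sketch of such a checker could be a Node.js Lambda function behind a CloudWatch Events rule for the CloudTrail CreateRole event; the allow-list, the compliance rule and the notification below are placeholders for whatever your company policy actually requires:
const AWS = require('aws-sdk');
const iam = new AWS.IAM();

// Placeholder policy: only allow trust relationships to an allow-list of AWS services.
const ALLOWED_PRINCIPALS = ['lambda.amazonaws.com', 'ec2.amazonaws.com'];

function isCompliant(assumeRolePolicyDocument) {
  // CloudTrail may deliver the policy document URL-encoded.
  let doc = assumeRolePolicyDocument;
  try { doc = decodeURIComponent(doc); } catch (e) { /* already decoded */ }
  const policy = JSON.parse(doc);
  return policy.Statement.every(statement =>
    statement.Principal && ALLOWED_PRINCIPALS.includes(statement.Principal.Service));
}

exports.handler = async (event) => {
  // The CloudWatch Events rule forwards the CloudTrail "CreateRole" API call.
  const { roleName, assumeRolePolicyDocument } = event.detail.requestParameters;
  if (isCompliant(assumeRolePolicyDocument)) {
    return `role ${roleName} is compliant`;
  }
  // Non-compliant: remove the freshly created role again and alert the creator,
  // e.g. via SNS (notification left out for brevity).
  await iam.deleteRole({ RoleName: roleName }).promise();
  return `deleted non-compliant role ${roleName}`;
};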

In turn, the degree to which all people enjoy the same privileges can also be used to actually measure DevOps adoption.

2017-09-15

Simple build tooling for frontend web applications (gulp demo)

Please read this article in the GitHub Source repo for full context.

TL;DR: Why it pays to use professional tooling even for small and insignificant projects.
I used to write my build tools in Bash and "automate" stuff via Makefiles. I used to create my websites manually without any build tool (since I just edit the HTML, CSS and JS files directly).
It turns out that this is actually a big waste of my time. It also prevents me from adopting standard solutions for common problems. A specific example is the problem of proxies and browsers caching static assets like CSS and JS files. The symptom is that I have to press F5 repeatedly to see a change in my code.
The best practice solution is to change the filename of the static asset each time the content changes. Without automation this is way beyond my manual editing, which is why I so far didn't use this simple trick.
This little demo project for a static website shows how easy it actually is to set up and use professional build tooling for websites and how much one can benefit from it.
This project contains branches numbered by the steps in this tutorial; you can switch between the branches to see the individual steps.

Step 1

Install NodeJS; on Ubuntu you can simply run sudo apt install nodejs-legacy npm.
Create a directory for your project, e.g. gulp-demo and change to the directory. Run npm init -y to initialize the NodeJS environment. It will create a package.json file that describes the project, the node modules it uses and which custom scripts you want to run.

Step 2

Let's create a source directory named src and add a simple website with an HTML file for the content, a CSS file for the styling and a JS file for browser-side code.
You can simply copy my example src folder or add your own.
Open the index.html file in a browser to see that it works and looks like this:
[screenshot: the example website in the browser]

Step 3

I found gulp to be the "right" combination between features and ease-of-use. To get started with gulp we simply install a bunch of node modules and create a simple Javascript file that automates building our website:
npm install -D del gulp gulp-apimocker gulp-footer gulp-if gulp-load-plugins gulp-rev gulp-rev-replace
Thanks to the -D option npm saves this list of modules in the package.json file so that we can later on, e.g. after a fresh checkout, reinstall all of that with a simple npm install.
Gulp recipes (called "tasks") are actually a Javascript program stored in a "gulp file", which is simply a file named gulpfile.js in the top level directory of a project. An initial gulp file can be as simple as this:
const gulp = require('gulp');

gulp.task('clean', function() {
  return require('del')(['out']);
});

gulp.task('build', ['clean'], function(){
  return gulp.src(['src/**'])
    .pipe(gulp.dest('out'));
})

gulp.task('default', ['build']);
I find this easy enough to read; please look at other gulp tutorials and the documentation for more details.
To run this program from the command line we add this custom script to the package.json:
{
  ...
  "scripts": {
    "build": "gulp build"
  }
  ...
}
The effect is that we can invoke gulp to run the build task like this:
$ npm run build

> gulp-intro@1.0.0 build /.../gulp-demo
> gulp build

[14:58:11] Using gulpfile /.../gulp-demo/gulpfile.js
[14:58:11] Starting 'clean'...
[14:58:11] Finished 'clean' after 26 ms
[14:58:11] Starting 'build'...
[14:58:11] Finished 'build' after 30 ms
Now you can inspect the built website in the out/ directory. It will look exactly like the website in the src/ directory because the only thing that we ask gulp to do is to copy the files from src/ to out/.

Step 4

Just copying files is obviously not interesting. The first really useful feature is a local development webserver to see the website in a browser. It should automatically run the build task whenever I change a file in the src/ directory.
I like the apimocker webserver. It can not only serve static files but also mock API calls or pass API calls through to a real backend server.
The following gulp file adds a new task apimocker that not only starts the web server but also starts a watcher that re-runs the build task each time some source file changes:
const gulp = require('gulp');
const $ = require('gulp-load-plugins')();

gulp.task('clean', function() {
  return require('del')(['out']);
});

gulp.task('build', ['clean'], function(){
  return gulp.src(['src/**'])
    .pipe(gulp.dest('out'));
})

gulp.task('apimocker', ['build'], function(){
  gulp.watch('src/**', ['build'])
    .on('change', function(event) {
      console.log('File ' + event.path + ' was ' + event.type + ', running tasks...');
    });
  return $.apimocker.start({
    staticDirectory: 'out',
    staticPath: '/'
  });
});

gulp.task('default', ['build']);
We can again add this task as a custom script in npm via this addition to the package.json:
{
  ...
  "scripts": {
    "build": "gulp build",
    "dev": "gulp apimocker"
  },
  ...
}
Run the development server with npm run dev. Open a web browser and go to http://localhost:8888 to see the website. If you now change a file in the src/ directory, gulp immediately rebuilds the website so that you only need to reload the page in the browser.

Step 5

Now that we have covered the basics we can come to the first real feature: automatically hashing the file names of static assets. Gulp already has plugins for this task (as it has for almost any other task): gulp-rev and gulp-rev-replace.
We add the modules in the build task to change the files passing through the pipeline:
gulp.task('build', ['clean'], function(){
  return gulp.src(['src/**'])
    .pipe($.if('*.js', $.rev()))
    .pipe($.if('*.css', $.rev()))
    .pipe($.revReplace())
    .pipe(gulp.dest('out'));
})
The if module selects matching files (CSS and JS) which are then passed to the rev module that creates a hash based on the content and renames the file. Finally the revReplace module patches the HTML files with the new file names.
To check the effect, run the development server again with npm run dev and have a look at the sources in the web browser (e.g. press F12 in Chrome):
[screenshot: browser developer tools showing the hashed asset file names]
Instead of the style.css file we now see a style-5193e54fcb.css file and similar for the JS file.

Step 6

Another common problem we can solve now is displaying the version of the software in the website. While there are many different ways to achieve this, here is my (currently) preferred one: In HTML I create an empty <div id="version"></div> element. In CSS I use the content attribute to set the actual content. I use the build pipeline to append styles with that content to the CSS file.
The actual version is set from outside the build tool. Since there are typically both a version and a release (see Meaningful Versions with Continuous Everything), I use two environment variables: GIT_VERSION for the software version from the git repo and VERSION for the build version, typically set by the build automation.
const versioncss = `

/* appended by gulp */
#version::after {
  content: "${ process.env.GIT_VERSION || "unknown GIT_VERSION" }";
}
#version:hover::after {
  content: "${ process.env.VERSION || "unknown VERSION" }";
}
`
gulp.task('build', ['clean'], function(){
  return gulp.src(['src/**'])
    .pipe($.if('*.js', $.rev()))
    .pipe($.if('style.css', $.footer(versioncss)))
    .pipe($.if('*.css', $.rev()))
    .pipe($.revReplace())
    .pipe(gulp.dest('out'));
})
First we create a piece of CSS that sets the content for the #version DIV. Javascript Template Literals serve to easily include the values from the environment variables or to use a default value.
In the package.json we can now set the GIT_VERSION variable:
{
  ...
  "scripts": {
    "build": "GIT_VERSION=$(git describe --tags --always --dirty) gulp build",
    "dev": "GIT_VERSION=$(git describe --tags --always --dirty) gulp apimocker"
  },
  ...
}
When running the build or dev npm scripts we can then also set the VERSION variable:
$ VERSION=15 npm run dev

> gulp-intro@1.0.0 dev /.../gulp-demo
> GIT_VERSION=$(git describe --tags --always --dirty) gulp apimocker

[16:48:40] Using gulpfile /.../gulp-demo/gulpfile.js
[16:48:40] Starting 'clean'...
[16:48:40] Finished 'clean' after 24 ms
[16:48:40] Starting 'build'...
[16:48:41] Finished 'build' after 106 ms
[16:48:41] Starting 'apimocker'...
No config file path set.
Mock server listening on port 8888
[16:48:41] Finished 'apimocker' after 175 ms
And then the website shows both versions:
[screen recording: the website displaying both version strings]

Conclusion

Learning a new trick can save a lot of time. With the basic setup done it is now very easy to use more modern web development tools like Less instead of CSS and TypeScript instead of JavaScript. The gulp website lists many plugins that solve almost any problem related to modern web development.
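
For example, compiling Less as part of the existing build task boils down to adding one more pipeline step. This is only a sketch and assumes the gulp-less plugin is installed via npm install -D gulp-less and that the Less sources live in src/ with a .less extension:
gulp.task('build', ['clean'], function(){
  return gulp.src(['src/**'])
    .pipe($.if('*.less', $.less()))   // compile .less files into .css
    .pipe($.if('*.js', $.rev()))
    .pipe($.if('style.css', $.footer(versioncss)))
    .pipe($.if('*.css', $.rev()))
    .pipe($.revReplace())
    .pipe(gulp.dest('out'));
})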

2017-08-10

Meaningful Versions with Continuous Everything

Q: How should I version my software? A: Automated!

All continuous delivery processes follow the same basic pattern:

Engineers working on source code, configuration or other content commit their work into a git repository (or another version control system, git is used here as an example). A build system is triggered with the new git commit revision and creates binary and deployment artefacts and also applies the deployments.

Although this pattern exists in many different flavors, at the core it is always the same concept. When we think about creating a version string the following requirements apply:
  • Every change in any of the involved repositories or systems must lead to a new version to ensure traceability of changes.
  • A new version must be sorted lexicographically after all previous versions to ensure reliable updates.
  • Versions must be independent of the process execution times (e.g. in the case of overlapping builds) to ensure a strict ordering of the artefact version according to the changes.
  • Versions must be machine-readable and unambiguous to support automation.
  • For every version component there must be a single source of truth to make it easy to analyse issues and track back changes to their source.
Every continuous delivery process has at least two main players: The source repository and the build tool. Looking at the complete process allows us to identify the different parts that should contribute to a unique version string, in order of their significance:

1. Version from Source Code

The first version component depends only on the source code and is independent from the build tooling. All of its parts must be derived only from the source code repository. This version is sometimes also called the software version.

1.1 Static Source Version

The most significant part of the version is the one that is set manually in the source code. This can be for example a simple VERSION file or also a git tag. It typically denotes a compatibility promise to the users of the software. Semantic Versioning is the most common versioning scheme here.

To automate this version one could think about analysing an API definition like OpenAPI or data descriptions in order to determine breaking changes. Each breaking change should increment the major version, additions should increment the minor version and everything else the patch version.

In a continuous delivery world we can often reduce this semantic version to a single number that denotes breaking changes.

1.2 Source Version Counter

Every commit in the source repository can potentially produce a new result that is published. To enforce the creation of a new version with every change, a common practice is adding an automatically calculated and strictly increasing component to the version. For Subversion this is usually the revision. For git we can use the git commit count as given by git rev-list HEAD --count --no-merges

If the project uses git tags then the following git command generates the complete and unique version string: git describe --tags --always --dirty=-changed which looks like this: 2.2-22-g26587455 (22 is the commit count since the 2.2 tag). In this case the version also contains the short commit hash from git which identifies the exact state.

2. Version from Build System

The build tools and automation also influence the resulting binaries. To distinguish building the same source version with different build tooling we make sure that the build tooling also contributes to the resulting version string. To better distinguish between the version from source code and the version from the build tooling I like to use the old term release for the version from the build tooling.

2.1 Tool Version

All the tooling that builds our software can be summarized with a version number or string. The build system should be aware of its version and set this version.

If your build system doesn't have such a version then this is a sign that you don't practice continuous delivery for the build automation itself. You can leave this version out and rely only on the build counter.

2.2 Build Counter

The last component of the version is a counter that is simply incremented for each build. It ensures that repeated builds from the same source yield different versions. It is important that the build counter is determined at the very beginning of the build process.

If possible, use a build counter that is globally unique - at least for each source repository. Timestamps are not reliable and depend on the quality of the time synchronization.

Versions for Continuous Delivery

If all your systems are continuously built and deployed then there is a big chance that you don't need semantic versioning. In this case you can simplify the version schema to use only the automatic counters for version and release:
<git revision counter>.<build counter>
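A small Node.js helper can assemble such a version string; this is only a sketch and assumes that the build system exports its build counter in a BUILD_NUMBER environment variable:
// version.js - prints "<git revision counter>.<build counter>"
const { execSync } = require('child_process');

// Single source of truth for the version part: the git history.
const gitCounter = execSync('git rev-list HEAD --count --no-merges').toString().trim();

// Single source of truth for the release part: the build system (assumed to set BUILD_NUMBER).
const buildCounter = process.env.BUILD_NUMBER || '0';

console.log(`${gitCounter}.${buildCounter}`);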
On the other hand you might want to add more components to your version strings to reflect modularized source repos. For example, if you keep the operational configuration separate from the actual source code then you might want to have a version with three parts, again in simplified form:
<git revision counter>.<config revision counter>.<build counter>
The most important takeaway is to automate the generation of version strings as much as possible. In the world of continuous delivery it becomes possible to see versions as a technical number without any attached emotions or special meaning.

Version numbers should be cattle, not pets.

2017-08-04

Favor Dependencies over Includes

Most deployment tools support dependencies or includes or even both. In most cases one should use dependencies and not includes — here is why.
Includes work on the implementation level and dependencies work at the level of functionality. As we strive to modularize everything, dependencies have the benefit of separating between the implementations while includes actually couple the implementations.

Dependencies work by providing an interface of functionality that the users can rely upon - even if the implementation changes over time. They also create an abstraction layer (the dependency tree) that can be used to describe a large system of components via a short list of names and their dependencies. Dependencies therefore allow us to focus on smaller building blocks without worrying much about the other parts.

To conclude, please use dependencies if possible. They will give you a clean and maintainable system design.

2017-07-21

Web UI Testing Made Easy with Zalenium

So far I was always afraid to mess with UI tests and SeleniumHQ. Thanks to Zalenium, a dockerized "it just works" Selenium Grid from Zalando, I finally managed to start writing UI tests.

Zalenium takes away all the pain of setting up Selenium with suitable browsers and keeps all of that nicely contained within Docker containers (docker-selenium). Zalenium also handles spawning more browser containers on demand and even integrates with cloud-based selenium providers (Sauce Labs, BrowserStack, TestingBot).

To demonstrate how easy it is to get started I set up a little demo project (written in Python with Flask) that you can use for inspiration. The target application is developed and tested on the local machine (or a build agent) while the test browsers run in Docker and are completely independent from my desktop browser.
A major challenge for this setup is accessing the application that runs on the host from within the Docker containers. Docker's network isolation normally prevents this kind of access. The solution lies in running the Docker containers without network isolation (docker --net=host) so that the browsers running in Docker can access the application running on the host.

Prerequisites

First install the prerequisites. For all programming languages you will need Docker to run the Zalenium and Leo Gallucci's SeleniumHQ Docker containers.
For my demo project, written in Python, you will also need Python 3 and virtualenv. On Ubuntu you would run apt install python3-dev virtualenv.

Next you need to install the necessary libraries for your programming language, for my demo project there is a requirements.txt.

Example Application and Tests

The example application is a simple web application: it shows two pages, with a link from the first page to the second.

The integration tests check if the page loads and if the link from the main page to the second page is present. For this check the test loads the main page, locates the link and clicks on it - just like a user accessing the website would do.
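
The demo project implements these tests in Python; as an illustration, the same check written in JavaScript with selenium-webdriver against the Zalenium grid could look roughly like this (the application URL and the link text are assumptions):
const { Builder, By, until } = require('selenium-webdriver');

(async function testLinkToSecondPage() {
  // Connect to the Zalenium grid instead of a locally installed browser.
  const driver = await new Builder()
    .forBrowser('chrome')
    .usingServer('http://localhost:4444/wd/hub')
    .build();
  try {
    // Assumption: the Flask app under test listens on the host on port 5000.
    await driver.get('http://localhost:5000/');
    // Assumption: the link to the second page carries this link text.
    const link = await driver.findElement(By.linkText('Second page'));
    await link.click();
    await driver.wait(until.urlContains('second'), 5000);
  } finally {
    await driver.quit();
  }
})();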

Automate

To automate this kind of test we can simply start the Zalenium Docker containers before the actual test runner.

The wrapper script in the demo project creates a new virtualenv, installs all the dependencies, starts Zalenium, runs the tests and stops Zalenium again. On my laptop this entire process takes only 40 seconds (without downloading any Docker images) so that I can run UI tests locally without much delay.

I hope that these few lines of code and Zalenium will also help you to start writing UI tests for your own projects.

2017-06-29

Setting Custom Page Size in Google Docs - My First Published Google Apps Script Add-On

While Google Docs is a great productivity tool, it still lacks some very simple and common functionality, for example setting a custom page size. Google Slides and Google Drawings allow setting custom sizes, but Google Docs does not.

Luckily there are several add-ons available for this purpose, for example Page Sizer is a little open source add-on on the Chrome Web Store.

Unfortunately, in many enterprise setups of G Suite access to the Chrome Web Store and to Google Drive add-ons is disabled for security reasons: the admins cannot whitelist individual add-ons and are afraid of add-ons that leak company data. Admins can only whitelist add-ons from the G Suite Marketplace.

The Google Apps Script code to change the page size is actually really simple, for example to set the page size to A1 you need only this single line of code:
DocumentApp.getActiveDocument()
  .getBody()
  .setAttributes({
    "PAGE_WIDTH": 1684,
    "PAGE_HEIGHT": 2384
  });
To solve this problem for everybody I created a simple Docs add-on Set A* Page Size that adds a menu to set the page size to any of 4A0 - A10.
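As a rough sketch (the function name, the menu entry and the A4 size in points are illustrative, not the exact add-on code), the menu is wired up like this:
// Adds a "Set Page Size" menu when the document is opened.
function onOpen() {
  DocumentApp.getUi()
    .createMenu('Set Page Size')
    .addItem('A4 portrait', 'setPageSizeA4')
    .addToUi();
}

// Menu handler: page sizes are given in points (1/72 inch), A4 = 595 x 842.
function setPageSizeA4() {
  DocumentApp.getActiveDocument().getBody().setAttributes({
    "PAGE_WIDTH": 595,
    "PAGE_HEIGHT": 842
  });
}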
Users can use this add-on in three modes:
  • Install the add-on in their Google Docs. This works for gmail.com accounts and for G Suite accounts that allow add-on installation, and it makes the add-on available in all their documents.
  • Ask their domain admins to add the add-on from the G Suite Marketplace. This adds the add-on for all users and all their documents in the domain. The source code is public (MIT License) and open for review.
  • Copy & paste the code into their own document. This requires no extra permissions and does not involve the domain admins. It adds the add-on only to the current document.
To use the code in your own document follow these steps:
  1. Copy the code from the Script Editor of this Document into the Script Editor of your own Document.
  2. Close and Open your document.
  3. Use the Set Page Size menu to set a custom page size.
I hope that you will find this little add-on useful and that you can learn something about Google Apps Scripting from it.