This post is about setting up Continuous Delivery, with a focus on optimizing the delivery part. The example I've sketched is a .NET platform, but in the final setup I'll point out where you might choose other options to accommodate, for instance, a Java stack.
A couple of years ago at Sogyo I was in a team of developers building a data gathering and visualisation application for a company specializing in industrial laundry automation.
At the time we had a Hudson build server, running our unit tests and some code quality checks. It also built our releasable packages, both for the acceptance platform and for the production platform, doing the configuration transforms at build time.
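For readers unfamiliar with build-time configuration transforms in .NET: they are typically expressed as XDT transform files that patch the base `Web.config` per target platform. A minimal sketch (the connection-string name and server are hypothetical):

```xml
<!-- Web.Release.config: patches applied to Web.config when building
     the production package. Names and values here are made up. -->
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <connectionStrings>
    <!-- Replace the development connection string with the production one -->
    <add name="LaundryDb"
         connectionString="Server=prod-sql;Database=Laundry;Integrated Security=True"
         xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
  </connectionStrings>
  <system.web>
    <!-- Strip the debug flag for release builds -->
    <compilation xdt:Transform="RemoveAttributes(debug)" />
  </system.web>
</configuration>
```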
Within our team we sometimes forgot to prepare these config transformations, and the final deploy had manual steps that were easy to miss, so we wanted to test and automate the deploy. We introduced a test server: we added configuration transforms for this environment, set up a spare machine as a server, and did a scripted deploy to it after every successful run of the build, unit tests and quality checks. We introduced the rule that a feature could only be set to resolved if it worked on the test machine. Around that time we also switched to the Jenkins fork of Hudson.
This approach had several advantages:
- It was easy to implement, as we already had Jenkins (the upgrade from Hudson was easy enough)
- We automatically tested the deployment script on every run
- We detected database changes and other dependency issues early during development
When I first heard about Continuous Delivery, I thought we were already doing it: we deployed the latest version every time someone checked something into SVN. The only thing missing was the continuous part, as the trigger for building an acceptance test version was starting the Jenkins job by hand. Over time, though, I felt we lacked control over all of this. Every configuration change for a live server (renaming an SQL server, or even using a new domain the customer bought) meant we had to change our configuration transforms, check them into SVN and wait for all the steps to complete (build, unit tests, quality checks, test deploy, acceptance deploy, release). Jenkins was in the middle of all this, and flow was achieved by tagging versions in SVN.
For the final setup we had several requirements:
- It should be possible to deploy the same artifacts to other machines without rebuilding everything from scratch
- Previous versions of the artifacts should still be available without rebuilding from source
- Flow from source to deployment should allow for mixing of components
To have multiple versions of the build artifacts available in deployable form, we needed an artifact repository. Because of our experience with PowerShell (our previous deployment was fully PowerShell based) and NuGet, we decided to go for a NuGet repository for our .NET projects and have Chocolatey provide these as deployables.
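A Chocolatey-consumable package is just a NuGet package with a `.nuspec` definition and an install script. A minimal sketch, with a hypothetical id, version and file layout:

```xml
<?xml version="1.0"?>
<!-- laundry-dashboard.nuspec: everything here is illustrative, not our
     actual package. Chocolatey runs tools\chocolateyInstall.ps1 on install. -->
<package>
  <metadata>
    <id>laundry-dashboard</id>
    <version>1.4.2</version>
    <authors>Team</authors>
    <description>Web dashboard, packaged as a deployable for Chocolatey.</description>
  </metadata>
  <files>
    <file src="build\**" target="content" />
    <file src="tools\chocolateyInstall.ps1" target="tools" />
  </files>
</package>
```

Every successful build pushes a new version of such a package to the repository, which is what makes redeploying or rolling back without a rebuild possible.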
Sonatype Nexus, which is basically a NuGet and Maven repository with a proxy, might be something to look into if you want a single artifact repository for both Java and .NET, though the NuGet (.NET) support is not included in the OSS version. If you're working on a Java-only stack, any Maven repository will do.
To easily deploy our packages and trigger the configuration transformations, we needed deployment automation. We decided to use Puppet, as we found that Chocolatey could easily provide the packages for Puppet to use. Furthermore, we liked the fact that Puppet is used to manage server parks at companies like Google (all the non-cloud parts), Red Hat and several of our customers. There are many alternatives, like DeployIt, which focus on release management, but we preferred a more DevOps-minded approach, so we chose configuration management.
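Hooking Chocolatey into Puppet comes down to using Chocolatey as a package provider, so a deploy is an ordinary `package` resource. A minimal sketch; the package name and version are hypothetical, and the provider assumes the Chocolatey module for Puppet is installed:

```puppet
# Keep the server on a specific version of the package built by CI.
# Bumping the version here is what rolls a new release out.
package { 'laundry-dashboard':
  ensure   => '1.4.2',
  provider => 'chocolatey',
}
```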
This configuration management system (Puppet in our case) is also responsible for the configuration transformation during install. As these configurations are XML config files, the changes are fairly easy to implement: either provide the Chocolatey/NuGet packages with the correct values during install, or transform the configuration files themselves after the deploy.
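For the second option, transforming after the deploy, one way is to let Puppet overwrite the config file from a template filled with environment-specific values. A sketch in which the path, variable and template name are all hypothetical:

```puppet
# Environment-specific value; in practice this would come from Hiera
# or node classification rather than being hard-coded.
$db_server = 'acc-sql'

# Rewrite the deployed Web.config from a template once the package is in place.
file { 'C:/inetpub/laundry-dashboard/Web.config':
  ensure  => file,
  content => template('laundry/web.config.erb'),
  require => Package['laundry-dashboard'],
}
```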
The links from source to acceptance/production were now ready, so we could have kept Jenkins as the coordinator for all this. But then you either have one big job coordinating the whole process, where you have to search for what went wrong and why, or you have a lot of little jobs with flow arranged through triggering, making it hard to find out why a certain step was triggered in complex scenarios. That's why we preferred Go: it is aimed at exactly this orchestration goal, keeping you informed of what's happening across the whole flow.
Overview of Continuous Delivery setup
The final result will be a setup where Go orchestrates and monitors all steps: it first triggers a build in Jenkins, which pushes its build artifacts into the artifact repository. Go then triggers Puppet, which deploys the package from the artifact repository to a test server. After the deploy is complete, Go triggers Jenkins to run tests against the test server. If all goes as planned, the deployment to acceptance test is triggered, with its corresponding tests, followed by a deploy to production. The same artifact repository is used for every deploy.
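In Go, such a flow is modelled as a pipeline of stages, each of which only starts when the previous one passes. A heavily simplified sketch in the style of Go's XML configuration; the URLs, job names and commands are all made up for illustration:

```xml
<!-- Sketch of one pipeline: build in Jenkins, then deploy via Puppet.
     In the real setup, acceptance and production would be further stages
     or downstream pipelines. -->
<pipeline name="laundry-dashboard">
  <materials>
    <svn url="http://svn.example.com/laundry/trunk" />
  </materials>
  <stage name="build">
    <jobs>
      <job name="trigger-jenkins-build">
        <tasks>
          <exec command="curl">
            <arg>-X</arg><arg>POST</arg>
            <arg>http://jenkins.example.com/job/build-and-publish/build</arg>
          </exec>
        </tasks>
      </job>
    </jobs>
  </stage>
  <stage name="deploy-test">
    <jobs>
      <job name="puppet-deploy">
        <tasks>
          <exec command="puppet">
            <arg>agent</arg><arg>--test</arg>
          </exec>
        </tasks>
      </job>
    </jobs>
  </stage>
</pipeline>
```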
The following image gives an overview of the flow.
The sample I've drawn here is slightly simplified: it uses Jenkins for all tests. You could, for example, trigger a LoadRunner test run directly from Go without Jenkins, without changing the flow much.