Backing up your Jenkins configuration

After your team has been using Jenkins for a while you’ll want to start thinking about backing up your configuration regularly. It’s the first step toward making your Jenkins installation “highly available” (quotes intentional). One day I stopped and asked myself, “How painful would it be if I had to re-create all of our configuration data from scratch?” The answer was that it would probably cost us several days of reduced productivity, which was not a risk worth taking given the very rapid pace of development and deployment we’re trying to sustain. I looked at a few different backup solutions, and eventually ended up just extending a generic one that we already use in-house.

Before I get into the different ways you can do this, I highly recommend that you take a look at this white paper that Jenkins creator Kohsuke Kawaguchi wrote, 7 Ways to Optimize Jenkins [pdf]. One section is about backups, and gives a lot of good advice. This is what I used to determine which things were most important to back up.

There are a few Jenkins plugins that will handle backups for you. The Backup plugin is no longer maintained and hasn’t had a release since 2011. It also backs up everything in your JENKINS_HOME directory, which is unnecessary and will quickly become a massive amount of data. Not recommended.

The SCM Sync Configuration plugin will write your Jenkins and individual job configurations to an SCM system. It currently only supports Git and Subversion, and every time you change a config and save it the plugin will interrupt you to ask for a comment to go with the SCM commit. I would find that annoying because I make configuration changes regularly, and I already use the Job Config History plugin to track, diff, and revert changes when necessary with no SCM required.

The thinBackup plugin only backs up essential configuration and has a bunch of good configuration options so you can tune it to your needs (back up plugins too, for example). It can also restore your backup for you. This is probably the best of the plugin options, but it can only store the backup locally on your Jenkins master. You would still have to script a separate process to move the backups off to another system for safe keeping. That’s what keeps me from using it.

At BrightRoll we are heavy users of Amazon AWS, and we have a skeleton BASH script (!) that can be run from cron to back up data to S3, typically as a tar file. My Jenkins version of that runs these commands to generate the tar file:

nice -n 19 tar --exclude-from jenkins_backup_exclusions -zcf "$DUMP" "$JENKINS_HOME"

Not rocket science, that. What I really need to show you is the contents of jenkins_backup_exclusions, because that’s what tells tar what to leave out of the backup. We used to back up our entire JENKINS_HOME, until the day I needed to restore it and found out the backup was 600MB and was going to take a while to pull out of S3. Now the backups are 138MB, which is not bad for 114 jobs. Here’s what we exclude:


All of those paths are relative to JENKINS_HOME. config-history contains all the configuration history managed by the Job Config History plugin that I mentioned earlier. No need to back that up. You don’t need to back up any job’s workspace, most likely, nor any archived build artifacts. You can always download your plugins again, but keeping these would probably make complete restoration faster. They bloat the backup by 224MB in my case though. [See update below.] The war and cache directories are 69MB and 79MB, respectively, in my case. Also not necessary for a restoration. You’ll want to look at the contents of your own JENKINS_HOME and possibly add other things to the exclusions list, depending on what plugins you use and so on.
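The exclusions list itself appears to have been an image in the original post, so here is a reconstruction based purely on the paragraph above. The exact patterns are my guesses, not the original file; check them against your own JENKINS_HOME before relying on them:

```
config-history
jobs/*/workspace
jobs/*/builds/*/archive
plugins/*/
war
cache
```

GNU tar’s --exclude-from takes one glob pattern per line. Note, per the update at the end of this post, that only the unpacked plugin directories are excluded here; the .jpi files under plugins/ stay in the backup, which is all you need to restore.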

So our BASH script just runs in crontab every night, creates this tar.gz file and uploads it to S3 using Amazon’s standard Linux command line tools. The storage costs there are low enough that we can justify keeping backups indefinitely, so I can restore from any point in time. This has come in handy on a few occasions where a job configuration was broken and no one realized it for a long time.
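The nightly job might look roughly like the sketch below. The original script isn’t shown in the post, so the bucket name, paths, and use of the AWS CLI are my assumptions, not BrightRoll’s actual tooling:

```shell
#!/usr/bin/env bash
# Nightly Jenkins backup sketch: tar up JENKINS_HOME (minus exclusions)
# and push the archive to S3. Bucket and paths are hypothetical.
set -euo pipefail

JENKINS_HOME="${JENKINS_HOME:-/var/lib/jenkins}"
EXCLUSIONS="jenkins_backup_exclusions"
S3_BUCKET="s3://example-backups/jenkins"            # hypothetical bucket
DUMP="/tmp/jenkins_backup_$(date +%Y%m%d).tar.gz"   # date-stamped archive name

# Only attempt the backup when both the home dir and exclusions file exist.
if [ -d "$JENKINS_HOME" ] && [ -f "$EXCLUSIONS" ]; then
  nice -n 19 tar --exclude-from "$EXCLUSIONS" -zcf "$DUMP" "$JENKINS_HOME"
  aws s3 cp "$DUMP" "$S3_BUCKET/$(basename "$DUMP")"
  rm -f "$DUMP"
fi
```

Run from a nightly crontab entry, this leaves one date-stamped archive per night in the bucket; since nothing deletes old archives, you keep the point-in-time history described above.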

So that’s it! Now you have no excuse to not perform backups of your Jenkins configs. Get to work! Let me know in the comments if you have any questions.

[Updated 11/18/2013]
I didn’t look closely enough at what I’m doing with the backup exclusions list, and it has been a while since I set this up. In the case of the plugins directory, I’m only excluding the uncompressed contents of each plugin and any previous plugin versions that Jenkins keeps (so you can roll back if you find a bug). The actual plugin .jpi files are backed up, and that’s all you need to do a restoration.


Useful Jenkins plugins – Conditional BuildStep

The Conditional BuildStep plugin fills a need that I’ve felt in Jenkins for a long time: in the core Jenkins, there is no easy way to do conditional (if/then) logic. For example, in my installation I create parameterized builds, where one parameter is a dropdown list of environments to deploy to if the build succeeds. Deployment is then handled by a different Jenkins job.

Before Conditional BuildStep came along I tried some other workarounds. At first I used a shell script build step to evaluate the $DEPLOY_ENV environment variable. If it wasn’t the default value of “none,” I would use curl to trigger the downstream job via HTTP and pass it the appropriate parameters in the URL. There are two problems with this solution: first, since you’re triggering the deploy from the “build” phase of your first job, it happens before, and with no knowledge of, any of the post-build actions. Second, with this setup the first build job has no official connection to the deploy job, as far as Jenkins is concerned, so you can’t do things like make the build job wait and fail if the deploy fails.
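That old shell-script workaround looked roughly like this. The job name “deploy”, the default JENKINS_URL, and the single parameter are placeholders, not the actual configuration:

```shell
#!/usr/bin/env bash
# Old workaround sketch: a shell build step that triggers a downstream
# deploy job over HTTP when the user picked a real environment.
# Job name and parameter names are hypothetical.
DEPLOY_ENV="${DEPLOY_ENV:-none}"

if [ "$DEPLOY_ENV" != "none" ]; then
  # buildWithParameters is Jenkins' remote-trigger endpoint for
  # parameterized jobs; parameters are passed in the query string.
  curl -fsS -X POST \
    "${JENKINS_URL:-http://jenkins.example.com}/job/deploy/buildWithParameters?DEPLOY_ENV=${DEPLOY_ENV}"
fi
```

Note that curl returns as soon as the downstream job is queued, which is exactly the fire-and-forget problem described above: the build job has no way to wait on, or fail with, the deploy.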

Another approach that I used for a longer time was to use the Promoted Builds plugin in all my main build jobs. I created a sort of “shim” job that would evaluate $DEPLOY_ENV and fail if it was “none.” In my main build I set up Promoted Builds to trigger the shim job with the params from the main one, and if the shim was successful then trigger the deploy job. At least in this way the deploy happened after all the post-build steps in the main job, but I still had the problem of not being able to make the main job wait for the deploy to finish.

Conditional BuildStep was the answer to my prayers, and I’m now using it in two places in my builds. The first is in the build phase, to determine if we should create a package. Then, in the post-build phase, we use it to trigger the deploy job if appropriate. The configuration is not always that intuitive; here is a screenshot.

In this example, our build has a choice/dropdown menu parameter called DEPLOY_ENV where the user selects which environment they want to deploy to, if any. If they have chosen something then the first thing we need to do (after compiling and running tests, and assuming all that passes) is build a package.
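The run condition here boils down to a string comparison. Expressed as plain shell (my paraphrase of the plugin’s “Strings match” condition, not the actual config), the logic is:

```shell
# Build a package only when the user selected a real environment in the
# DEPLOY_ENV dropdown; "none" is the default, meaning no deploy.
DEPLOY_ENV="${DEPLOY_ENV:-none}"

if [ "$DEPLOY_ENV" != "none" ]; then
  echo "building package for $DEPLOY_ENV"
  # ...packaging commands would go here (hypothetical)...
fi
```

The advantage of the plugin over this inline shell is that the guarded step is a real build step, so it can be any step type, not just a shell script.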


Once a package has been built, in the post-build phase we want to trigger a deploy.

Due to a truly pernicious confluence of bugs that all interact with each other, there is no clean way to add a conditional build step as a post-build action. The resolution of JENKINS-14494 will help with that. I also use matrix/multi-configuration builds everywhere, which seem to be sort of the red-headed stepchild of Jenkins job types. Many plugins at best simply don’t support matrix jobs, and at worst blow up when you try to use them in that context. Our solution is to use the PostBuildScript plugin, which lets you use build steps in a post-build context in matrix jobs. It’s a plugin to call another plugin. Gross, but it works.

I will talk about matrix jobs more in an upcoming post.