Tuesday, May 22, 2012

How to Use Jenkins for Job Chaining and Visualizations

We like to share useful Jenkins How-To's with the community, so here's an awesome guest post from Toomas Römer at ZeroTurnaround...

Job chaining in Jenkins is the process of automatically starting other job(s) after the execution of a job. This approach lets you build multi-step automation pipelines or trigger the rebuild of a project if one of its dependencies is updated. In this article, we will look at a couple of plugins for Jenkins job chaining and see how to use them to build and visualize these pipelines.
  • Out of the Box Solution
  • Build Pipeline Plugin
  • Parameterized Trigger Plugin
  • Downstream Buildview Plugin
  • Conclusions

Out of the Box Solution – Build Other Projects

Jenkins has a built-in feature to build other projects. It is in the Post-build Actions section. You can specify the projects that you want to build after this project is built (you can trigger more than one). So whenever project A is built, you will trigger the building of project B. You can also specify the conditions under which the other jobs are built. Most often you are interested in continuing with the pipeline only if the job is successful, but your mileage may vary.

[Screenshot: the "Build other projects" post-build action configured in project A]

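If you ever need to script the same effect, a shell build step can hit Jenkins' remote trigger API instead. This is only a minimal sketch: the Jenkins URL is a placeholder, and it assumes project B has "Trigger builds remotely" enabled with an authentication token of "secret".

    # Trigger project B from a build step in project A.
    # Assumes project B has "Trigger builds remotely" checked with the
    # token "secret"; adjust the Jenkins URL to match your server.
    curl "http://jenkins.example.com/job/projectB/build?token=secret"
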
One thing to remember here is that this feature has two configuration locations. You can configure project A and specify a post-build action as in the previous screenshot. Alternatively, you can configure this from project B and say "build project B only after project A is built". You don't have to fill out both; change one and the other is updated automatically. See the next screenshot for the second option.

[Screenshot: project B's "Build after other projects are built" option]

Build Pipeline Plugin

The Build Pipeline Plugin is an interesting one. Its main features are visualization of the pipeline and a manual trigger for continuous delivery purposes. The configuration is a separate post-build action where you can configure which projects should be built after project A. By default the triggering is actually done manually by the end user! If you want certain steps of the pipeline to be automatic, you have to use the built-in job chaining (see the Out of the Box Solution above for more details).

[Screenshot: the Build Pipeline Plugin's post-build action configuration]

The pipeline plugin offers a very good visualization of the pipeline. By configuring a new Jenkins view and choosing which job is the first in the pipeline, you get a visualization of the whole pipeline. In the screenshot, be sure to note that one of the steps is manual and the rest are automatic. The manual one can be triggered from the very same view.

[Screenshot: a Build Pipeline view visualizing the whole pipeline, including one manual step]

Parameterized Trigger Plugin

The Parameterized Trigger Plugin is another triggering plugin, but with a twist: this plugin lets you configure more aspects of the triggering logic. It covers the basic Out of the Box Solution features and adds many more. The most important one is the option to trigger the next build with parameters. For example, by defining SOURCE_BUILD_NUMBER=${BUILD_NUMBER} you are able to use the variable $SOURCE_BUILD_NUMBER in project B. This way project B can, for example, fetch the artifact built by the previous job from your central artifact repository using that build number.

[Screenshot: the Parameterized Trigger post-build action with predefined parameters]

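To make that concrete, here is a minimal sketch of both halves; the artifact repository URL and artifact name are made-up placeholders, so adjust them to your own setup.

    # Project A: in the Parameterized Trigger post-build action, add a
    # "Predefined parameters" entry (properties format):
    #
    #   SOURCE_BUILD_NUMBER=${BUILD_NUMBER}
    #
    # Project B: a shell build step can then fetch the artifact that the
    # upstream build published to the (hypothetical) artifact repository.
    curl -f -o app.jar \
      "http://repo.example.com/artifacts/projectA/${SOURCE_BUILD_NUMBER}/app.jar"
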
Downstream Buildview Plugin

The Downstream Buildview Plugin is a plugin that does not do job chaining itself, but provides a means to visualize the pipeline. It is similar to the Build Pipeline view, but more dynamic: you can click on any item in the build history and have its pipeline visualized.

[Screenshot: the Downstream Buildview of a single build's pipeline]

Conclusions

The main feature that makes Jenkins so good is that there is always a plugin for what you need. Of course, the same fact also highlights its biggest weakness: it is rather difficult to choose the correct plugin, and very often you need a couple of plugins to achieve your goal. The same is true for job chaining and visualization.

The job chaining features that we covered in this post all provide the minimum functionality of triggering other jobs. The Parameterized Trigger plugin is the most feature-rich, but lacks manual triggering. The Build Pipeline plugin only offers manual triggering, and you need to arrange automatic triggering yourself (using the built-in feature, for example).

From the visualization side, the Build Pipeline plugin is definitely the best looking. At the same time, the plugin does not support passing parameters (the latest alpha build is a bit better), and once the pipeline gets long it gets a bit ugly. We do like being able to define a separate view and then always stay on top of the pipeline. The Downstream Buildview plugin gives you great flexibility and insight into job chaining, but does not enforce any kind of process.

So, those are the Jenkins plugins that we use at ZeroTurnaround for job chaining and visualization. Do you use the same tools? If not, can you recommend any others? Which are your favorites? Please leave comments below!

Toomas Römer is the co-founder and product lead of ZeroTurnaround. Once a Linux junkie, he was fooled by Apple into a proprietary OS and devices. He is a big fan of JUGs, OSS communities and beer. He blogs at dow.ngra.de, tweets from @toomasr and also runs the non-profit chesspastebin.com website. In his spare time he crashes Lexuses while test-driving them, and plays chess, Go and StarCraft. Looks can fool you; he will probably beat you at squash. You can connect with Toomas on LinkedIn.

6 comments:

  1. Hi

    Honestly, I find Jenkins job chaining solutions (with or without the plugins you described) very cumbersome. Maybe it is because I used to work a lot with TeamCity before, or the organization of my jobs/builds is not suited to Jenkins...
    Normally I create multiple jobs: compile/junit tests, integration tests, deployment, e2e tests. In TeamCity I can define dependencies between jobs which are unidirectional: i.e., if I specify that the integration_tests job depends on compile, then running integration_tests will trigger the compile job (actually TeamCity checks whether running the upstream jobs is necessary). On the other hand, running compile will not trigger the integration_tests job after it has completed!
    This all helps me to reuse my jobs easily: if I want my app deployed to staging, I just run the deploy job, which starts all the jobs it depends on, and that's all.
    Unfortunately I can't mimic this kind of job organization in Jenkins, since every time I run the compile job, all downstream jobs are run afterwards.
    If I want the compilation job to run after each commit, but also want to reuse it for my deployment job, I am stuck... Obviously I can create a copy of the compile job and wire the new job into the deployment pipeline, but then you have two copies which you need to keep in sync in case of any compile-related modifications.

  2. We like to keep our number of jobs low too. We have not been missing the feature you are describing, but I think it depends on how you look at it.

    If we take the Jenkins way, then when you press Trigger (see the Build Pipeline plugin screenshot) you can be sure that the pipeline up to that point has succeeded. As I understand it, if you do the same in TeamCity then you will

    a) need to wait for the dependencies to finish
    b) need to investigate if one dependency fails

    In Jenkins, on the other hand, you already know that the pipeline up to that point is good. So it looks like a tradeoff between time, pipeline health info and the number of jobs.

  3. Regarding TeamCity behavior: as I stated in my previous post, TeamCity is smart enough to check whether a dependent job needs to be run. So when you press start on the integration-testing job that depends on the compile-package job, TeamCity checks whether compile-package must be run (for example by checking if new commits arrived after the last run of compile-package), and if it does not, it runs integration-testing only (which can consume the artifact from compile-package).
    I understand that everyone must build their own structure/organization of jobs. Unfortunately the organization which worked really well for us in TeamCity cannot be easily migrated to Jenkins. Honestly, I thought that the one we had (and like very much) was quite typical, but that is not the case for Jenkins.
    You are right that if you start the last job (or one of the last jobs) in a chain, first all the dependencies must be run, and then some or even all of them can fail. Investigation in this case is not a big deal, but you have to resolve all the problems to finish the pipeline.
    On the other side, by using the Build Pipeline plugin you can be sure that if you get to the manual step, all previous steps were successful; so assuming you start your pipeline on each commit, you can complete the pipeline with each commit easily.
    Nevertheless it creates a couple of questions in my head:
    How do you trigger the first job in the chain: with a cron expression, on each commit? What happens if some of the jobs after the first one should run at different intervals or with different triggers than the first job in the chain, i.e. what if performance tests need to run at night while the regular compile-unittest job runs on each commit?

    I think that both the Jenkins and TeamCity models are imperfect, and there should be some high-level structure that could assemble pipelines from individual jobs. Jobs should have no direct relationships with each other (only store/share artifacts) and the composition logic should be encapsulated in that high-level structure.

  4. The first job is triggered in most cases by polling the SCM (it depends on what you need). If some downstream jobs should run on a different interval, then maybe they are not supposed to be in the pipeline? I agree that for pipelining neither of them is perfect, but this is due to the fact that today's CI servers have turned into orchestration servers (I like to call them Enterprise Cron after a beer) and pipelining is just a small portion of their main job.
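    For example, a "Poll SCM" schedule like the one below (standard cron syntax) makes Jenkins check the repository every five minutes and start the chain only when new commits have arrived:

        # "Poll SCM" schedule for the first job in the chain; the
        # downstream jobs are then started by the job chaining itself.
        */5 * * * *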

  5. If it happens that you have multiple jobs which are triggered at different intervals (on each commit/nightly/once a week) or in different ways (automatic, manual), then each of these jobs must be completely separate and cannot reuse/reference "shared" jobs. It leads to a huge amount of duplication... which has hurt me a few times in the past.

    Replies
    1. Definitely. If you want to re-use your pipeline steps for non-pipeline work then you will need extra work. I've also been hit by the duplication and then the maintenance overhead :(
