Tuesday, May 22, 2012

How to Use Jenkins for Job Chaining and Visualizations

We like to share useful Jenkins How-To's with the community, so here's an awesome guest post from Toomas Römer at ZeroTurnaround...

Job chaining in Jenkins is the process of automatically starting other job(s) after the execution of a job. This approach lets you build multi-step automation pipelines or trigger the rebuild of a project if one of its dependencies is updated. In this article, we will look at a couple of plugins for Jenkins job chaining and see how to use them to build and visualize these pipelines.
  • Out of the Box Solution
  • Build Pipeline Plugin
  • Parameterized Trigger Plugin
  • Downstream Buildview Plugin
  • Conclusions

Out of the box solution – Build Other Projects

Jenkins has a built-in feature to build other projects. It is in the Post-build Actions section: you can specify the projects that you want to build after this project is built (you can trigger more than one). So whenever project A is built, you will trigger the building of project B. You can also specify the conditions under which the other jobs are built. Most often you are interested in continuing with the pipeline only if the job is successful, but your mileage might vary.

One thing to remember here is that this feature has two configuration locations. You can configure project A and specify a post-build action as in the previous screenshot. Alternatively, you can configure this from project B and say "build project B only after project A is built". You don't have to fill out both; change one and the other is updated. See the next screenshot for the second option.
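For the curious, this built-in trigger ends up in project A's config.xml as a BuildTrigger publisher. A rough sketch of the relevant fragment follows (element names recalled from Jenkins core, so check them against a real job's config.xml before relying on them):

```xml
<publishers>
  <hudson.tasks.BuildTrigger>
    <!-- comma-separated list of downstream jobs to trigger -->
    <childProjects>project-B</childProjects>
    <!-- only trigger when this build is stable -->
    <threshold>
      <name>SUCCESS</name>
      <ordinal>0</ordinal>
      <color>BLUE</color>
    </threshold>
  </hudson.tasks.BuildTrigger>
</publishers>
```

Editing the XML by hand is rarely necessary; the UI writes this for you, but it is handy to know what to look for when jobs are configured from scripts or templates.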

Build Pipeline Plugin

The Build Pipeline Plugin is an interesting one. Its main features are visualization of the pipeline and a manual trigger for continuous delivery purposes. The configuration is a separate post-build action where you can configure which projects should be built after project A. By default, the triggering is actually done manually by the end user! If you want certain steps of the pipeline to be automatic, you have to use the built-in job chaining (see the Out of the Box Solution for more details).


The pipeline plugin offers a very good visualization of the pipeline. By configuring a new Jenkins view and choosing which job is the first in the pipeline, you get a visualization of the whole pipeline. In the screenshot, be sure to note that one of the steps is manual and the rest are automatic. The manual step can be triggered from the very same view.

Parameterized Trigger Plugin

The Parameterized Trigger Plugin is another triggering plugin, but with a twist: it lets you configure more aspects of the triggering logic. It covers the basic Out of the Box Solution features and adds many more. The most important one is the option to trigger the next build with parameters. For example, by defining SOURCE_BUILD_NUMBER=${BUILD_NUMBER} you are able to use the variable $SOURCE_BUILD_NUMBER in project B. This way you can, for example, fetch the artifact built by the previous job from your central artifact repository using the ${BUILD_NUMBER}.
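As a sketch, suppose project A passes SOURCE_BUILD_NUMBER=${BUILD_NUMBER} as a predefined parameter; a shell build step in project B could then use it to fetch the matching artifact. The repository URL and artifact name below are invented for illustration:

```shell
#!/bin/sh
# Hypothetical downstream build step for project B. The Parameterized
# Trigger plugin injects SOURCE_BUILD_NUMBER into the environment;
# fall back to "unknown" so the script also runs outside Jenkins.
: "${SOURCE_BUILD_NUMBER:=unknown}"

# Made-up artifact coordinates -- adjust for your own repository layout.
ARTIFACT_URL="http://repo.example.com/myapp/myapp-${SOURCE_BUILD_NUMBER}.jar"

echo "would fetch ${ARTIFACT_URL}"
# e.g. curl -fsSL -o myapp.jar "$ARTIFACT_URL"
```

The key point is simply that the upstream build number travels with the trigger, so the downstream job always operates on exactly the artifact that was just built.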

Downstream Buildview Plugin

The Downstream Buildview Plugin does not do job chaining itself, but provides a means to visualize the pipeline. It is similar to the Build Pipeline view, but more dynamic: you can click on any item in the build history and have its pipeline visualized.



Conclusions

The main feature that makes Jenkins so good is that there is always a plugin for what you need. Of course, the same fact also highlights its biggest weakness: it is rather difficult to choose the correct plugin, and very often you need a couple of plugins to achieve your goal. The same is true for job chaining and visualization.

The job chaining features that we covered in this post all provide the minimum functionality – triggering other jobs. The Parameterized Trigger plugin is the most feature-rich, but lacks the manual triggering. The Build Pipeline only offers manual triggering and you need to figure out automatic triggering yourself (using the built-in feature for example).

From the visualization side, the Build Pipeline plugin is definitely the best looking. At the same time, the plugin does not support passing parameters (the latest alpha build is a bit better), and once the pipeline gets long it gets a bit ugly. We do like being able to define a separate view and thus always stay on top of your pipeline. The Downstream Buildview plugin gives you great flexibility and insight into job chaining, but does not enforce any kind of process.

So, those are the Jenkins plugins that we use at ZeroTurnaround for job chaining and visualization. Do you use the same tools? If not, can you recommend any others? Which are your favorites? Please leave comments below!

Toomas Römer is the co-founder and product lead of ZeroTurnaround. Once a Linux junkie, he was fooled by Apple into proprietary OS and devices. He is a big fan of JUGs, OSS communities and beer. He blogs at dow.ngra.de, tweets from @toomasr and also runs the non-profit chesspastebin.com website. In his spare time he crashes Lexuses while test driving them, plays chess, Go and Starcraft. Looks can fool you; he will probably beat you in Squash. You can connect with Toomas on LinkedIn.

Friday, May 18, 2012

BuildHive: Build GitHub Projects in Cloud-enabled Community Jenkins

If you are short on time - go see it in action. For the rest:

If you know CloudBees, you know that we are passionate about continuous integration (CI). We have been responsible for expanding the definition of PaaS in the market to encompass continuous integration. However, if truth be told, we have been a bit of the bad guys: we have pushed open source projects to face up to the status quo and make decisions.

Let me explain: open source projects thrive on interactions based around the code. Any code that makes it into the repository has to be built and tested. Today, committers push their code, and an on-premises Jenkins builds a job. The project administrators usually set up a public-facing Jenkins instance to show what's happening (see example). However, for resource-poor or fledgling OSS projects, it can be years before they get to this world. So many projects just skip this step.

This is where we stepped in: we made it extremely easy to host projects on CloudBees DEV@cloud, and we have a program for FOSS projects to host their projects with us for free. This meant that we took away any excuse for administrators not to have CI. In the process, we ended up becoming this nagging voice in a community member's head, asking them to set up CI for their projects.

Who likes to be a nagger or the bad guys? Definitely not us!

Everything has to be easy, if not easier…

So over the last few months we have worked to silence (in a good way) that nagging voice in a community member's head. 

Today, we launch BuildHive. BuildHive is Jenkins for the community and works with projects hosted on GitHub. Administrators log in with GitHub and enable their projects for BuildHive with one click. BuildHive sniffs multiple project types - today Ant, Maven, Gradle, sbt (Scala) and Rake (Ruby) - and automatically sets up a corresponding build. For the majority of projects, users will need absolutely no configuration changes.
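The sniffing idea itself is simple to illustrate: look for each build tool's marker file in the checked-out repository. BuildHive's real detection is more involved and extensible, but a rough shell equivalent of the core check might look like this:

```shell
#!/bin/sh
# Rough sketch of build-tool "sniffing": guess a repository's build
# tool from its marker files. Illustration only -- not BuildHive's
# actual implementation.
detect_build_type() {
  dir=$1
  if   [ -f "$dir/pom.xml" ];      then echo "maven"
  elif [ -f "$dir/build.gradle" ]; then echo "gradle"
  elif [ -f "$dir/build.sbt" ];    then echo "sbt"
  elif [ -f "$dir/Rakefile" ];     then echo "rake"
  elif [ -f "$dir/build.xml" ];    then echo "ant"
  else echo "unknown"
  fi
}
```

Once the type is known, a matching template build (mvn install, gradle build, and so on) can be set up automatically, which is why most projects need no configuration at all.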

Every time there is a commit in the project, BuildHive will build the project, and the status will be shown on the community dashboard. Administrators no longer have to figure out how (and where) to set up their on-premises Jenkins, or put in work to expose it to their community.

Apart from the obvious benefit of easily CI-enabling jobs on GitHub and consequently improving project health, BuildHive will fundamentally move the starting conversation in OSS contributions from source to builds. Today, project owners get pull requests from contributors with an email indicating that the code built successfully. The owner still ends up pulling in the code, merging it in and building it. If the code does not build, administrators work with contributors to fix these issues. This is a significant investment for an administrator (multiplied by the 10s-100s of pull requests for a project). With BuildHive in place, the onus moves from the administrator and contributor to BuildHive. BuildHive automatically merges a pull request and builds it, so now a pull request is accompanied by its build status. If the build failed, the administrator does not have to entertain the request, saving them hours and perhaps giving them a nice weekend in the process :-).

BuildHive builds on our investments in our Jenkins-in-the-cloud product (DEV@cloud) and the Templates plugin from our on-premises offering, Jenkins Enterprise. If, as an administrator, you need more cloud resources (like executors) and complete Jenkins configurability, your GitHub projects can use DEV@cloud.

Honestly, it was more work explaining it than it is to actually use it. Let's just get rolling and see it in action.

- Harpreet Singh
Senior Director, Product Management

Follow CloudBees:


Thursday, May 17, 2012

Announcing BuildHive!

I'm very excited to announce one of the projects that I've been spending a lot of time on lately — BuildHive! It started as my Christmas/New Year break hack project, but since then it has grown into a real project inside CloudBees.

BuildHive is a free service that lets you set up Jenkins-based continuous integration build/test jobs for your GitHub repositories with just a few mouse clicks. It is freely available for anyone to use.

The top page shows recent builds that have happened on BuildHive. If you keep the page open for a while, you'll see new cards appear from the left for new builds in semi-real time:

Let's click the big red "Add your Git repositories" button to set up CI jobs for your repositories. It'll first ask you to approve GitHub OAuth integration, so you need to click "Allow" to let us see your repositories and install hooks:

Once logged in, you'll see the screen to select repositories from GitHub. It will show all your personal repositories as well as repositories from any organizations that you belong to. If you have too many repositories, use the filter text box to narrow down the candidates.

All you have to do to set up a CI job is click "Enable". We'll sniff your repository to guess the initial build configuration. The auto-sniffing of the repository contents is extensible; the initial set is geared toward Java projects (Ant, Maven, Gradle), but covers some Ruby projects as well.
In any case, auto-sniffing can only do so much. You can tweak the settings via the configuration screen (for example, to update where the notification e-mails are sent):

When you enable a repository, we auto-install necessary hooks for you, so that it'll build your project every time someone pushes to your repository. In addition to that, we'll speculatively build incoming pull requests to your repository (by building the result of the merge between the incoming commit and the current tip of the branch for which the pull request is sent), and report the status as a comment to the pull request:

So you can use that as one of the inputs to determine if you are going to merge a pull request or not.
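The speculative merge described above can be approximated with plain git commands. This is only a sketch of the idea (BuildHive does this server-side, and real conflict handling is more involved); the repo and branch names are illustrative:

```shell
#!/bin/sh
# Sketch: merge SOURCE into TARGET on a throwaway branch and report
# whether the merge is clean, leaving TARGET itself untouched. A CI
# job would run the actual build on the throwaway branch afterwards.
merge_and_check() {
  repo=$1 target=$2 source=$3
  ( cd "$repo" &&
    git checkout -q -b "try-merge-$source" "$target" &&
    git merge -q --no-edit "$source" &&
    echo "merge of $source into $target is clean" )
}
```

Because the merge happens on a scratch branch, a failed or conflicting merge can simply be reported on the pull request without ever disturbing the real branch.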
Behind the scenes, many of the features in BuildHive rely on our value-add plugins for Jenkins Enterprise by CloudBees, which are available for customers to use on their own Jenkins instances. For example, we use the Templates plugin to model various project types and for auto-sniffing, and we use the Validated Merge plugin to speculatively build pull requests. So, while it isn't as easy as it could be, our customers can set up a similar environment on their own Jenkins instance, or re-use those pieces to create similar but different workflows.

And needless to say, there are many other open-source plugins at play, too — it really shows off the power of extensibility in Jenkins.

--Kohsuke Kawaguchi
Founder, Jenkins
CloudBees, Inc.



Monday, May 14, 2012

Follow-up on HP's Cloud Announcement and Other News...

Last week, HP made an impressive splash, launching not only the official public beta of its IaaS solution, HP Cloud Services, but also announcing 40 partners that will run on HP Cloud Services. CloudBees was one of these key partners.

"Designed with OpenStack technology, the open-sourced-based architecture ensures no vendor lock-in, improves developer productivity, features a full stack of easy-to-use tools for faster time to code, provides access to a rich partner ecosystem, and is backed by personalized customer support."

Here is a round-up of some of the more insightful coverage.

The Register:
Good discussion of the HP offering and the fact that HP hasn't really embraced Microsoft as much as it had intended back in 2010. Instead, HP made the move to OpenStack, after joining OpenStack less than a year ago.

In this article, reporter Joab Jackson called out seven of the 40 partners HP announced, identifying them as part of the "impressive roster" of partners HP announced. CloudBees was one of the seven.

Analyst Barb Darrow called out CloudBees in her summary of nine of the 40 HP partners included in the HP announcements.

TechTarget - SearchSOA:
This article focuses on the importance to HP of a pervasive ecosystem for its platform, and identifies CloudBees in a very short list of vendors who represent "some major forces in various cloud arenas."

Unrelated to the HP news, CloudBees was also called out in a SearchCloudComputing article focused solely on CloudBees and a CloudBees customer (the article can be accessed only if you are registered with TechTarget). This is a great article that highlights how ARTstor leveraged the CloudBees PaaS to improve its business and how it went about selecting CloudBees as its PaaS solution. Michelle Boisvert's article, "Is PaaS just another four-letter word in cloud computing?" argues that the noise around PaaS is rightly warranted. She introduces ARTstor as a company under pressure to quickly take advantage of the cloud's benefits. Overall, a great piece for CloudBees that communicates a customer's perspective on why CloudBees is the front runner in the PaaS market!

Happy Reading!




Friday, May 11, 2012

Jenkins User Conference Paris Summary

The first stop of the Jenkins User Conference world tour this year was Paris, where there's a considerable concentration of Jenkins developers and users (sometimes those of us on the other side of the Atlantic call them "the French gang"). The event was held a day before Devoxx France, in the hope that we would attract more attendees.

I believe 100+ people actually showed up, and we had a full day divided into two tracks, talking about all things Jenkins. While many were French, some of the attendees came from all over Europe. I was able to see some familiar faces, as well as those whom I had only known by name.
I tried to get in and out of both tracks to get a sense of what was going on, so that I could report out later; here are my notes.

I kicked off the whole day with a keynote, looking back at what we've done since we became Jenkins. I looked at various activities in the community, such as LTS, Jenkins CIA, Ruby plugin development, and UI enhancements. I updated my adoption statistics slides (we are happy to report that we crossed 40K installations in our tracking!), and reported that JFrog is now hosting the repositories that we rely on for development. I showed some of what we've been working on lately at CloudBees, such as the upcoming version of Jenkins Enterprise by CloudBees that supports high availability, our giving away the Folder plugin for free (as in beer), and previews of some not-yet-public features, a treat only for those who came!

In the first slot, Gregory Boissinot went through a plugin development workshop. This was actually something I really wanted to understand, so that I could get an objective view of where the pitfalls are. Even though the talk was in French, I did understand the code he was showing, and I took some notes about having some kind of skeleton code generator. For example, there's a common pattern for building a UI-bound model object (for asking the user to enter data that has structure, persisting it, and so on), and having a code-generator command line tool (like jenkins.rb has) could be really handy.

In another room, Nicolas and Mathieu were showing their "Build Flow" plugin, which lets you write a workflow in Groovy DSL. Choreographing a complex workflow that involves multiple jobs is a common challenge among many Jenkins users, and so this talk was well attended, and I'm really looking forward to seeing this plugin mature (there's a separate effort to integrate BPMN workflow into Jenkins, see more about that here.) One thing I learned about Groovy DSL since then is the AST transformation. I'm thinking it might allow us to convert the DSL workflow script into a continuation passing style so that you can suspend/resume workflow at arbitrary point.
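To give a feel for the Build Flow approach, a script choreographing a short pipeline might look roughly like this. The job names are invented, and the script only runs inside the plugin's Groovy DSL environment, so treat it as a sketch of the syntax rather than a standalone program:

```groovy
// Build Flow DSL sketch -- runs inside Jenkins' Build Flow plugin only.
build("compile")            // run the "compile" job and wait for it
parallel (                  // then fan out to two test jobs at once
    { build("test-linux") },
    { build("test-windows") }
)
build("deploy")             // finally, run "deploy" if everything above passed
```

Expressing the whole workflow in one script like this avoids scattering the chaining configuration across many individual jobs.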

The day was so packed that we didn't even waste the lunch hour! While attendees were eating, we had lightning talks in the room. Olivier showed off how Apache runs Jenkins, which is quite sizable; then I pitched in for Domonik, who couldn't make it to the conference, and covered the Scriptler plugin. Vincent followed and covered the similar Groovy system console. Harpreet then closed off the lunch lightning talks by showing the Templates plugin in Jenkins Enterprise by CloudBees.

In the afternoon, Arnaud, one of our French gang, showed how you can set up iOS development on Jenkins (from code change to test, to the delivery of the binaries to actual phones). Bruno then did a demo of how he uses DEV@cloud and RUN@cloud to quickly set up continuous deployment for Java webapps. For system integrators that deal with lots of projects, I think it is a great combination (for example, allowing you to hand over the entire development environment to the customer when the project is over).
While all that was going on in one room, in another room Lars Kruse showed off how the old meets the new: taking ClearCase UCM and using it to do validated merge, in which only the changes tested by Jenkins become visible to the rest of the team. I personally don't know much about ClearCase, but it was very interesting that emerging techniques like validated merge can be applied to more traditional SCM tools. He also said his company works with clients to develop custom Jenkins plugins. I always felt that any big company adopting Jenkins needs some custom glue plugins, and I regularly come across such companies, but CloudBees can only help so many. It's great to see that more help is available now!

The talk that followed was from Julien Carsique of Nuxeo, discussing how he manages and improves the CI environment for his organization. I regret that I didn't take detailed notes, but I think this was one of the best presentations of the day for me. I remember thinking that if we had a best Jenkins administrator award for those who push things to the limit and beyond, he would be my top pick. IIRC, he had a major project with multiple Maven modules spanning different repos. He set up Jenkins such that any change triggers a cascade of new builds of downstream jobs, which later fan out to cross-platform test jobs, and then he made the whole thing visual so you can track exactly where time is spent and how changes propagate. I think this was very inspirational to many fellow Jenkins users, and I hope he will put his slides somewhere so that other people can mimic what he's done.
Back in the big room, my fellow colleagues Stephen and Harpreet did the only introductory talk of the whole day, going through checklists for production Jenkins deployments, recapping why you want CI, etc. (And I always forget that there are still many who don't know much about Jenkins!)

It was also great to see and hear Sebastian Bergmann, the guy behind Jenkins PHP, talk about Jenkins and PHP integrations. I wish we had more of those people who bridge our community to different communities and help us spread the ideas. He even kindly gave me his Jenkins/PHP book and signed it for me!

Aside from talks, food was great, especially for those of us who came from the U.S.

I got some good inspiration about where we need to work. I also managed to implement the search filter in the update center during the day, in response to the valid complaint from Sebastian. For virtual communities like ours, it's really good to meet people in person and put faces to names. Build automation engineers are often somewhat lonely in their respective organizations — there just aren't that many people who get excited about automating things away, so having so many like-minded folks in one room was by itself a great experience.

On the things-to-improve side, I felt that workshops were tricky to do in a limited time and in a big room. Maybe it would work better if there were a smaller room where a smaller number of people could gather and hack away (probably with some time slots designated for specific topics); then we could collectively merge pending important pull requests, teach how to develop plugins, ask others to look at our plugins, etc. There can also be a valid discussion about a JUC run nicely in exchange for an admission fee vs. a JUC run cheaply but free.

In any case, I think the quality of the presentations was very good, and knowing local Jenkins developers and users will help expand your horizons. As I said in the beginning, we are taking JUC around the world this year. The one in New York is already coming up next week (May 17), followed by Herzelia, Israel (July 5), San Francisco (September 30) and Antwerp (November 13).

Please register while seats are still available (and the cost is even lower during the Early Bird registration period)!

Kohsuke Kawaguchi
Founder, Jenkins Community and
Elite Developer, CloudBees



Thursday, May 10, 2012

CloudBees is Live on HP Cloud Services


When we announced AnyCloud in February, we discussed our ability to deploy to OpenStack-based providers. Behind the scenes, that meant we had been working closely with HP Cloud Services to take advantage of the great work they've done and which is now available to the general public. What that means for you as a CloudBees customer is that you can make use of HP as a trusted cloud infrastructure provider with a real enterprise sensibility, probably also a provider your company already has an existing relationship with. So you get all the advantages of the CloudBees PaaS (productivity, faster development, built-in continuous integration and delivery, world class service, all at aggressively low pricing) along with the backing of a tremendous company with over $120 billion in revenue that isn't trying to convince you their packaged middleware is taking you to the cloud.      

Let's take a look at some specifics about the CloudBees PaaS on HP Cloud...

OpenStack.  CloudBees was born on AWS, but with our AnyCloud offering, we've been running on vSphere and OpenStack-based infrastructure, too.  Since we have an abstraction layer in the CloudBees Platform to work across IaaS providers, I asked Michael Neale, the lead on our HP Cloud work, to comment on what he found on HP compared to others. Here are his thoughts:
"HP cloud is becoming quite full featured - yet still OpenStack-standards based.  I can use the nova CLI for instance. The Web UI seems fast, friendly and easy to use.  They have done a great job with OpenStack so far. As HP rolls out more images and features, it looks super competitive - and so far staying pretty simple to use. Having a fully featured infrastructure cloud to compete with AWS is encouraging.  It will be interesting to track the roll out of features."
The Experience.  Probably the most exciting thing about CloudBees on HP Cloud Services is... that it just looks and feels like CloudBees once you've registered your HP Cloud Services resources with us.  We treat those resources like other "dedicated" resources already available on CloudBees.  So, you can deploy to HP Cloud Services and set up continuous deployment to HP from Jenkins hosted on DEV@cloud.  When you associate server resources from your HP Cloud Services account with CloudBees, we manage them on your behalf as you deploy, scale down-up-and-out, monitor and make use of CloudBees partner services, like New Relic.

Why CloudBees?  You'll find other PaaS players on HP Cloud Services, so what sets CloudBees apart? In addition to the usual differentiators, the power of AnyCloud really shines brightly with HP Cloud Services.  That means we manage the entire stack for you, and you're not installing or maintaining the PaaS itself or the underlying runtime stack.  You can use other public cloud resources, or target the HP cloud, all from a single environment or command line.  Many customers would like a combination of elastic, public cloud resources for some of their activities, like dev/test, together with their HP Cloud Services resources for other activities. In both cases, though, they want a consistent experience, with CloudBees managing the workloads and underlying infrastructure in a way that maximizes utilization and minimizes their operational burden.  Only CloudBees AnyCloud offers that kind of flexibility.

If you're an HP Cloud customer exploring CloudBees, contact us to see the CloudBees PaaS in action on HP Cloud Services.  If you're already a CloudBees customer, then you can now use not just AnyCloud, but the HP Cloud.

Learn more!
Learn how to deploy Java applications with the CloudBees PaaS on HP Cloud Services

Steven G. Harris, Senior VP of Products



Wednesday, May 2, 2012

Upcoming CloudBees Webinar: Application Lifecycle Management with PaaS

Are you getting the most out of your cloud-based application development lifecycle? Did you know that you can improve the entire application lifecycle - from build to deployment to running production applications in the cloud? At CloudBees, we work hard so you won't have to.  Your CloudBees account comes packed full of a wide range of tools that allow you to instantly set up a full-cycle development environment with project collaboration, Jenkins continuous integration, and multi-stage (development/test/staging/production) environments.

Take advantage of our free Application Lifecycle Management with Platform as a Service (PaaS) training as we demonstrate how you can optimize productivity within your CloudBees account to create application lifecycle processes that fit the needs of your team.  By the end of this one-hour session, you'll be able to:
  • Set up a collaborative development environment
  • Create developer application sandboxes for "under development" features
  • Inject application configuration and data sources
  • Combine CloudBees services to create a continuous development process
  • Roll out application updates using multi-stage deployments
  • Manage application configuration and data sources for each deployment environment
  • And much more 
Date: May 8, 2012
Time: 1PM - 2PM EDT

Attendance is limited, so register now!

Spike Washburn
VP of Engineering
