Thursday, September 17, 2015

Docker Hub 2.0 Integration with the CloudBees Jenkins Platform

Docker Hub 2.0 has just been announced: what a nice opportunity to discuss Jenkins integration!

For this blog post, I'll present a specific Docker Hub use case: how to access the Docker Hub registry and manage your credentials in Jenkins jobs.

The Ops team is responsible for maintaining a curated base image with a common application runtime. As the company is building Java apps, they bundle Oracle JDK and Tomcat, applying security updates as needed.

The Ops team uses the CloudBees Docker Build and Publish plugin to build a Docker image in a clean environment and push it to a private repository on Docker Hub. Integration with Jenkins credentials makes this easy, and the plugin lets them both publish the base image as "latest" and track every change with a dedicated tag per build.
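Under the hood, the plugin's behavior is roughly equivalent to the following Docker CLI commands (the repository name and build number are illustrative):

    docker build -t acme/base-image:latest .
    # keep a per-build tag so any previous base image can be traced or restored
    docker tag acme/base-image:latest acme/base-image:build-42
    # authenticate (the plugin does this with the credentials stored in Jenkins)
    docker login
    docker push acme/base-image:latest
    docker push acme/base-image:build-42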


The Dev team is very productive, producing thousands of lines of Java code and relying on Jenkins to ensure the code follows coding and test-coverage standards while packaging the application.

During the build, they also include the packaged WAR file in a new Docker image, relying on Ops' base image. To do this, they just have to write a minimalist Dockerfile and add it to their Git repository. They can use this image to run advanced tests and reproduce the exact production environment (even on their laptops for diagnostic purposes if needed). The Ops team is confident in such an image, as they know the base image is safe.
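Such a Dockerfile can be as short as two lines. A minimal sketch, assuming the Ops base image ships Tomcat under the standard /usr/local/tomcat path (image and application names here are hypothetical):

    # build on the Ops-curated base image (JDK + Tomcat)
    FROM acme/base-image:latest
    # drop the packaged application into Tomcat's deployment directory
    COPY target/myapp.war /usr/local/tomcat/webapps/myapp.war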

They have also installed the Jenkins DockerHub Notification plugin, so they can configure the job to run whenever the Docker base image is updated on the Hub. With this setup, they know the latest build will always rely on the latest base image, including all the important security fixes the Ops team cares about.

This scenario has been tested on DockerHub 2.0 and works like a charm. Updating the base image sources on GitHub triggers a build of the base-image job, which is then published to DockerHub 2.0.
Jenkins detects these changes to the DockerHub-hosted images, and jobs that depend on the upstream base-image* are rebuilt, tested, and published (and possibly released).

The Ops team is happy with this: their fear of developers running ancient Docker images full of security holes is calmed by knowing that simply updating the base image will notify and automatically update all the projects that depend on it.

    An actual company would probably have a more sophisticated deployment pipeline than outlined above, with validation steps (and possibly approval) for each image.

To learn more about Docker integration with the CloudBees Jenkins Platform, be sure to read the companion post, Architecture: Integrating the CloudBees Jenkins Platform with Docker Hub (below).

    You can read more documentation about CloudBees and Docker containers here.

* Note: the new Docker Workflow feature will automatically register for changes to base images if you use it to build out your pipeline.

The teams' logos come from a source I recommend you follow; you may not learn much, but you should get some good laughs.

    Nicolas De Loof
    Software Engineer

    Nicolas De Loof is based in Rennes, France. Read more about Nicolas in his meet the bees blog post, and follow him on Twitter.

    Architecture: Integrating the CloudBees Jenkins Platform with Docker Hub 2.0

    Docker is an incredibly hot topic these days. Its role in Jenkins infrastructures will soon become predominant as companies are discovering how Docker fits within their own environments as well as how to use Docker and Jenkins together most effectively across their software delivery pipelines.

    The major use cases for Docker in a Jenkins infrastructure are:
• Customize the build environment: Different applications often require different build tools, and some of these tools require root permissions to be installed on the build servers (X11/Xvfb and Firefox for headless tests such as Selenium, ImageMagick...). Jenkins admins once solved this problem by increasing the number of flavors of Jenkins slaves, but that approach was limited by hardware constraints and was not flexible for project teams. The CloudBees Docker Custom Build Environment Plugin and the CloudBees Docker Workflow Plugin offer a new way to solve this challenge with much more flexibility, allowing Jenkins admins to manage only one flavor of Jenkins slaves—Docker-enabled slaves—and let each project team customize its build environment by running its jobs in Docker containers.
• Ship applications as Docker images: More and more applications are shipped as Docker images (instead of war/exe/... files), and the Continuous Integration platform has to build and publish these Docker images (see the sketch after this list).
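With the CloudBees Docker Workflow plugin, the second use case can be scripted directly in a Workflow job. A minimal sketch, assuming a "docker" slave label, a hypothetical repository, and Docker Hub credentials stored in Jenkins under the (hypothetical) id "docker-hub-credentials":

    node('docker') {
        git ''
        // build the image from the Dockerfile at the root of the repository
        def image ='acme/myapp')
        // push it to Docker Hub using the credentials stored in Jenkins
        docker.withRegistry('', 'docker-hub-credentials') {
            image.push('latest')
        }
    }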

For these scenarios, the Jenkins infrastructure needs access to a Docker registry, both to retrieve/pull the Docker images used on Docker-enabled slaves and to store/push the Docker images created by Jenkins builds.

    Docker Hub

The Docker Hub is the cloud-based registry service offered by Docker, Inc. It combines the "official" registry of public images, on which nearly every Docker user relies, with a private registry that allows users to manage private images.

Integrating a Jenkins infrastructure with Docker Hub requires architecture decisions similar to those required to integrate a Jenkins infrastructure with online services such as GitHub or BitBucket.

    Direct connectivity from the Jenkins infrastructure to Docker Hub

    The most straightforward solution is to simply open network connectivity (http and https) from the Jenkins slaves to Docker Hub.

Architecture: Jenkins infrastructure and Docker Hub

    Connecting the Jenkins infrastructure to Docker Hub through a proxy

Many organisations prefer to secure the connectivity of the Jenkins infrastructure to the "outside world" with firewalls and proxies.

To do so, it is necessary to declare the HTTP proxy in the configuration of the Docker daemon on each Jenkins slave, as documented in Docker Documentation - Control and configure Docker with systemd - HTTP Proxy.

Sample /etc/systemd/system/docker.service.d/http-proxy.conf (the proxy host and port are illustrative):
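    [Service]
    # hypothetical proxy endpoint; replace with your own
    Environment="HTTP_PROXY="
    Environment="HTTPS_PROXY="
    # hosts that should bypass the proxy
    Environment="NO_PROXY=localhost,127.0.0.1"

After adding the file, reload systemd and restart the Docker daemon for the change to take effect (systemctl daemon-reload && systemctl restart docker).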


Architecture: Jenkins infrastructure and Docker Hub through an HTTP proxy

    Private Docker Registries behind firewalls?

This blog post covered how to integrate a Jenkins infrastructure with the Docker Hub public registry service. We will cover the integration of a Jenkins infrastructure with a private registry behind the firewall in a separate post.

    Accessing the Docker Hub registry in Jenkins jobs

To see how to access the Docker Hub registry and manage your credentials in Jenkins jobs, please read Nicolas De Loof's blog post Docker Hub 2.0 Integration with the CloudBees Jenkins Platform and watch the screencast:

    Cyrille Le Clerc is a product manager at CloudBees, with more than 15 years of experience in Java technologies. He came to CloudBees from Xebia, where he was CTO and architect. Cyrille was an early adopter of the “You Build It, You Run It” model that he put in place for a number of high-volume websites. He naturally embraced the DevOps culture, as well as cloud computing. He has implemented both for his customers. Cyrille is very active in the Java community as the creator of the embedded-jmxtrans open source project and as a speaker at conferences.

    Tuesday, September 1, 2015

    Jenkins Community Survey - Your Chance to Be Heard!

    Just as in past years, CloudBees is again working with the community to sponsor a survey. The goal is for the community to get some objective insights into what Jenkins users would like to see in the Jenkins project.

    The survey will be open until the end of September. This is your chance to be heard and to have a say in development priorities for Jenkins. Why not take it now? 

    We understand the value for the community in learning what users want and how they are using Jenkins, so we are providing an added incentive for community members to fill out the survey. We have donated a $100 Amazon gift card that will be randomly awarded to a lucky survey taker. 

As with most giveaways...there are always terms and conditions. So now, the boring legal stuff.

    Fine print:
1. The survey will be open from September 1 to September 30, 2015. If you submit a completed survey, we will enter you to win a $100 Amazon gift certificate. Yeah, you can only enter the contest once, so please don’t over-stuff the survey box. After the survey closes, we’ll draw a name to choose the winner…and maybe it will be you!
2. If you do not supply your name and email address, you are not eligible to win. Think about it – we have no way to contact you. If you do supply your name and email address, we’ll send you the survey results.
3. The Amazon gift card can only be won by someone who lives in a country where you can buy from Amazon. If you live in a country without Amazon access, we will send you $100 via PayPal. If you live in a country under U.S. embargo, we’re sorry, but there’s not much we can do here.
4. You must be 18 years old or older (20 or older in Japan).
5. You must use Jenkins or be affiliated with its use.
6. The winner is responsible for any federal, state and local taxes, import taxes and fees that may apply.
7. This survey is administered by CloudBees, Inc., 2001 Gateway Place, Suite 670W, San Jose, CA 95110, +1-408-805-3552. If you’d like to send us feedback or have questions, please email us. And no, we do not accept bribes to rig the contest. :)
8. Regardless of whether you win the Amazon gift card, you will have the satisfaction of knowing that you’re providing input that will help make Jenkins even better. Thank you in advance for sharing your thoughts with the community!
9. Oh, and no purchase necessary!

    Thursday, August 27, 2015

    Jenkins User Conference U.S. West Speaker Highlight: Kaj Kandler

    When Kaj attended JUC Boston in 2014, he was surprised to see how many enterprise Jenkins users had developed plugins to use for themselves. In his Jenkins blog post, Kaj shares some insight on developing enterprise-ready plugins.

This post on the Jenkins blog is by Kaj Kandler, Integration Manager at Black Duck Software, Inc. If you have your ticket to JUC U.S. West, you can attend his talk "Making Plugins that are Enterprise Ready" on Day 1.

    Still need your ticket to JUC? If you register with a friend you can get 2 tickets for the price of 1! Register here for the last Jenkins User Conference of the year: JUC U.S. West.

    Thank you to the sponsors of the Jenkins User Conference World Tour:

    Volume 9 of the Jenkins Newsletter: Continuous Information is out!

    The next issue of the Jenkins Newsletter, Continuous Information is out!

There has been so much Jenkins content from all over the world, from events to articles, blogs, training and everything in between:

    • Learn more about how Jenkins works with technologies like Kubernetes, Docker and Postman
    • Find a Meetup near you or another Jenkins event in your area
    • Find the latest news about Jenkins User Conference U.S. West
    • Read some articles and blog posts and expand your Jenkins knowledge

Catch up on the latest Jenkins news and sign up to receive Continuous Information directly in your inbox every quarter.

    Tuesday, August 25, 2015

    JUC Session Blog Series: Christian Lipphardt, JUC Europe

    At the Jenkins user conference in London this year I stumbled into what turned out to be the most interesting session to my mind, From Virtual Machines to Containers: Achieving Continuous Integration, Build Reproducibility, Isolation and Scalability (a mouthful), from folks at a software shop by the name of Camunda.

    The key aspect of this talk was the extension of the “code-as-configuration” model to nearly the entire Jenkins installation. Starting from a chaotic set of hundreds of hand-maintained jobs, corresponding to many product versions tested across various environmental combinations (I suppose beyond the abilities of the Matrix Project plugin to handle naturally), they wanted to move to a more controlled and reproducible definition.

    Many people have long recognized the need to keep job configuration in regular project source control rather than requiring it to be stored in $JENKINS_HOME (and, worse, edited from the UI). This has led to all sorts of solutions, including the Literate plugin a few years back, and now various initialization modes of Workflow that I am working on, not to mention the Templates plugin in CloudBees Jenkins Enterprise.

    In the case of Camunda they went with the Job DSL plugin, which has the advantage of being able to generate a variable number of job definitions from one script and some inputs (it can also interoperate meaningfully with other plugins in this space). This plugin also provides some opportunity for unit-testing its output, and interactively examining differences in output from build to build (harking back to a theme I encountered at JUC East).
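As a rough illustration of that pattern, a single Job DSL script can stamp out one job per tested environment from a list. A sketch under assumed names (the job names, repository URL and Maven profiles below are hypothetical, not Camunda's actual setup):

    // seed job script: generates one build job per database environment
    def databases = ['postgresql-9.4', 'mysql-5.6', 'oracle-12c']
    databases.each { db ->
        job("platform-ci-${db}") {
            scm {
                git('')
            }
            steps {
                // run the test suite with the matching Maven profile
                maven("verify -P${db}")
            }
        }
    }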

    They took the further step of making the entire Jenkins installation be stood up from scratch in a Docker container from a versioned declaration, including pinned plugin versions. This is certainly not the first time I have heard of an organization doing that, but it remains unusual. (What about Credentials, you might ask? I am guessing they have few real secrets, since for reproducibility and scalability they are also using containerized test environments, which can use dummy passwords.)
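The official Jenkins Docker image lends itself to this style of versioned, from-scratch setup. A minimal sketch, assuming a plugins.txt that pins each plugin to a version (artifactId:version per line), following the image's documented plugins.txt convention:

    # pin the Jenkins version for reproducibility
    FROM jenkins:1.609.2
    # pre-install a pinned set of plugins
    COPY plugins.txt /usr/share/jenkins/plugins.txt
    RUN /usr/local/bin/ /usr/share/jenkins/plugins.txt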

    As a nice touch, they added Elasticsearch/Kibana statistics for their system, including Docker image usage and reports on unstable (“flaky”?) tests. CloudBees Jenkins Operations Center customers would get this sort of functionality out of the box, though I expect we need to expand the data sources streamed to CJOC to cover more domains of interest to developers. (The management, as opposed to reporting/analysis, features of CJOC are probably unwanted if you are defining your Jenkins environment as code.)

    One awkward point I saw in their otherwise impressive setup was the handling of Docker images used for isolated build environments. They are using the Docker plugin’s cloud provider to offer elastic slaves according to a defined image, but since different jobs need different images, and cloud definitions are global, they had to resort to using (Groovy) scripting to inject the desired cloud configurations. More natural is to have a single cloud that can supply a generic Docker-capable slave (the slave agent itself can also be inside a Docker container), where the job directly requests a particular image for its build steps. The CloudBees Docker Custom Build Environment plugin can manage this, as can the CloudBees Docker Workflow plugin my team worked on recently. Full interoperation with Swarm and Docker Machine takes a bit more work; my colleague Nicolas de Loof has been thinking about this.
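The inside pattern mentioned above looks roughly like this in a Workflow script (the slave label, repository URL and build command are illustrative):

    node('docker') {
        git ''
        // run the build steps inside a throwaway Maven container;
        // the workspace is mounted into the container automatically
        docker.image('maven:3.3.3-jdk-8').inside {
            sh 'mvn -B clean verify'
        }
    }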

    The other missing piece was fully automated testing of the system, particularly Jenkins plugin updates. For now it seems they prototype such updates manually in a temporary copy of the infrastructure, using a special environment variable as a “dry-run” switch to prevent effects from leaking into the outside world. (Probably Jenkins should define an API for such a switch to be interpreted by popular plugins, so that the SMTP code in the Mailer plugin would print a message to some log rather than really sending mail, etc.) It would be great to see someone writing tests atop the Jenkins “acceptance test harness” to validate site-specific functions, with a custom launcher for their Jenkins service.

    All told, a thought-provoking presentation, and I hope to see a follow-up next year with their next steps!

    We hope you enjoyed JUC Europe! 

    Here is the abstract for Christian's talk "From Virtual Machines to Containers: Achieving Continuous Integration, Build Reproducibility, Isolation and Scalability." 

Here are the slides for his talk, and here is the video.

    If you would like to attend JUC, there is one date left! Register for JUC U.S. West, September 2-3.

    Monday, August 24, 2015

    Managing a Jenkins Docker Infrastructure: Docker Garbage Collector

Using Docker for Continuous Delivery is great. It brings development teams impressive flexibility, as they can manage environments and test resources by themselves while, at the same time, enforcing clean isolation from other teams sharing the same host resources.

But a side effect of enabling Docker on a build infrastructure is disk usage, as pulling various Docker images consumes hundreds of megabytes. The layered architecture of Docker images ensures that you'll share the lower-level layers as much as possible. However, as those layers get updated with various fixes and upgrades, the previous ones remain on disk and can result, after a few months, in huge disk usage within /var/lib/docker.
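You can get a quick feel for the problem on any Docker host (paths and output will vary with your setup):

    # total disk used by the Docker daemon's data directory
    du -sh /var/lib/docker
    # list dangling images, i.e. layers no longer referenced by any tag
    docker images --filter dangling=true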

Jenkins monitoring can alert on disk consumption on build executors, but a more proactive solution is preferable to simply taking the node offline until an administrator handles the issue by SSH-ing into the server.
Docker does not offer a standard way to address image garbage collection, so most production teams have created their own tooling, including the folks at Spotify, who open-sourced their docker-gc script.

On a Jenkins infrastructure, a scheduled task can be created to run this maintenance script on all nodes. I did this for my own usage (after I had to handle a filesystem-full error). To run the script on all Docker-enabled nodes, I'm using a Workflow job; Workflow makes it pretty trivial to set up such a GC.

The script I'm using relies on a "docker" label applied to all nodes with Docker support. Jenkins.instance.getLabel("docker").nodes returns all the build nodes with this label, so I can iterate over them and run a Workflow node() block to execute the docker-gc script within a sh shell script step (the script URL below is assumed; point it at your copy of docker-gc):

    def nodes = Jenkins.instance.getLabel("docker").nodes
    for (n in nodes) {
        node(n.nodeName) {
            // fetch and run Spotify's docker-gc script on this node
            sh 'wget -q -O - | bash'
        }
    }

The docker-gc script checks for images not used by any container: when an image already existed during the last run of the script and is still not used by a container, it gets removed.

I hope the Docker project will soon release an official docker-gc command. This would benefit infrastructure teams by eliminating the need to re-invent custom solutions to the same common issue.