Thursday, August 27, 2015

Jenkins User Conference U.S. West Speaker Highlight: Kaj Kandler

When Kaj attended JUC Boston in 2014, he was surprised to see how many enterprise Jenkins users had developed plugins to use for themselves. In his Jenkins blog post, Kaj shares some insight on developing enterprise-ready plugins.

This post on the Jenkins blog is by Kaj Kandler, Integration Manager at Black Duck Software, Inc. If you have your ticket to JUC U.S. West, you can attend his talk "Making Plugins that are Enterprise Ready" on Day 1.

Still need your ticket to JUC? If you register with a friend you can get 2 tickets for the price of 1! Register here for the last Jenkins User Conference of the year: JUC U.S. West.


Thank you to the sponsors of the Jenkins User Conference World Tour:



Volume 9 of the Jenkins Newsletter: Continuous Information is out!

The next issue of the Jenkins Newsletter, Continuous Information, is out!

There has been so much Jenkins content from all over the world: events, articles, blog posts, training and everything in between:

  • Learn more about how Jenkins works with technologies like Kubernetes, Docker and Postman
  • Find a Meetup near you or another Jenkins event in your area
  • Find the latest news about Jenkins User Conference U.S. West
  • Read some articles and blog posts and expand your Jenkins knowledge

Catch up on the latest Jenkins news and sign up to receive Continuous Information directly in your inbox every quarter.

Tuesday, August 25, 2015

JUC Session Blog Series: Christian Lipphardt, JUC Europe

At the Jenkins user conference in London this year I stumbled into what turned out to be the most interesting session to my mind, From Virtual Machines to Containers: Achieving Continuous Integration, Build Reproducibility, Isolation and Scalability (a mouthful), from folks at a software shop by the name of Camunda.

The key aspect of this talk was the extension of the “code-as-configuration” model to nearly the entire Jenkins installation. Starting from a chaotic set of hundreds of hand-maintained jobs, corresponding to many product versions tested across various environmental combinations (I suppose beyond the abilities of the Matrix Project plugin to handle naturally), they wanted to move to a more controlled and reproducible definition.

Many people have long recognized the need to keep job configuration in regular project source control rather than requiring it to be stored in $JENKINS_HOME (and, worse, edited from the UI). This has led to all sorts of solutions, including the Literate plugin a few years back, and now various initialization modes of Workflow that I am working on, not to mention the Templates plugin in CloudBees Jenkins Enterprise.

In the case of Camunda they went with the Job DSL plugin, which has the advantage of being able to generate a variable number of job definitions from one script and some inputs (it can also interoperate meaningfully with other plugins in this space). This plugin also provides some opportunity for unit-testing its output, and interactively examining differences in output from build to build (harking back to a theme I encountered at JUC East).
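To give a flavor of the approach, a minimal Job DSL script in this spirit might generate one job per tested product version. The job names, repository URL, branches and Maven goals below are invented for illustration and are not Camunda's actual setup:

```groovy
// Hypothetical sketch: generate one build job per product version.
// All names and the repository URL are illustrative.
def versions = ['7.3', '7.4', '7.5']

versions.each { version ->
    job("platform-${version}-build") {
        scm {
            // One release branch per product version (assumed layout)
            git('https://example.com/platform.git', "releases/${version}")
        }
        triggers {
            scm('H/15 * * * *')   // poll source control every ~15 minutes
        }
        steps {
            maven("clean verify -Pversion-${version}")
        }
    }
}
```

Because the set of jobs is derived from a simple list, adding a product version or an environment axis becomes a one-line change in source control rather than hand-editing many jobs in the UI.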

They took the further step of making the entire Jenkins installation be stood up from scratch in a Docker container from a versioned declaration, including pinned plugin versions. This is certainly not the first time I have heard of an organization doing that, but it remains unusual. (What about Credentials, you might ask? I am guessing they have few real secrets, since for reproducibility and scalability they are also using containerized test environments, which can use dummy passwords.)
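As a sketch of that idea (not their actual setup), a versioned Jenkins master image based on the official `jenkins` Docker image of the era could pin its plugin set like this; the tag, file names and directory layout are illustrative:

```dockerfile
# Illustrative only: a Jenkins master stood up from a versioned declaration.
FROM jenkins:1.609.2

# plugins.txt lists "pluginId:version" pairs, so plugin versions are pinned
# and the whole installation is reproducible from source control.
COPY plugins.txt /usr/share/jenkins/plugins.txt
RUN /usr/local/bin/plugins.sh /usr/share/jenkins/plugins.txt

# Job definitions and global configuration, also kept in source control,
# are copied into the reference directory used to seed JENKINS_HOME.
COPY config/ /usr/share/jenkins/ref/
```

Rebuilding the image from the same declaration yields the same Jenkins, which is exactly what makes upgrades and rollbacks auditable.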

As a nice touch, they added Elasticsearch/Kibana statistics for their system, including Docker image usage and reports on unstable (“flaky”?) tests. CloudBees Jenkins Operations Center customers would get this sort of functionality out of the box, though I expect we need to expand the data sources streamed to CJOC to cover more domains of interest to developers. (The management, as opposed to reporting/analysis, features of CJOC are probably unwanted if you are defining your Jenkins environment as code.)

One awkward point I saw in their otherwise impressive setup was the handling of Docker images used for isolated build environments. They are using the Docker plugin’s cloud provider to offer elastic slaves according to a defined image, but since different jobs need different images, and cloud definitions are global, they had to resort to using (Groovy) scripting to inject the desired cloud configurations. More natural is to have a single cloud that can supply a generic Docker-capable slave (the slave agent itself can also be inside a Docker container), where the job directly requests a particular image for its build steps. The CloudBees Docker Custom Build Environment plugin can manage this, as can the CloudBees Docker Workflow plugin my team worked on recently. Full interoperation with Swarm and Docker Machine takes a bit more work; my colleague Nicolas de Loof has been thinking about this.

The other missing piece was fully automated testing of the system, particularly Jenkins plugin updates. For now it seems they prototype such updates manually in a temporary copy of the infrastructure, using a special environment variable as a “dry-run” switch to prevent effects from leaking into the outside world. (Probably Jenkins should define an API for such a switch to be interpreted by popular plugins, so that the SMTP code in the Mailer plugin would print a message to some log rather than really sending mail, etc.) It would be great to see someone writing tests atop the Jenkins “acceptance test harness” to validate site-specific functions, with a custom launcher for their Jenkins service.

All told, a thought-provoking presentation, and I hope to see a follow-up next year with their next steps!

We hope you enjoyed JUC Europe! 

Here is the abstract for Christian's talk "From Virtual Machines to Containers: Achieving Continuous Integration, Build Reproducibility, Isolation and Scalability." 

Here are the slides for his talk and here is the video.

If you would like to attend JUC, there is one date left! Register for JUC U.S. West, September 2-3.

Monday, August 24, 2015

Managing a Jenkins Docker Infrastructure: Docker Garbage Collector

Using Docker for Continuous Delivery is great. It gives development teams impressive flexibility, as they can manage environments and test resources by themselves and, at the same time, enforce clean isolation from other teams sharing the same host resources.

But a side effect of enabling Docker on the build infrastructure is disk usage, as pulling various Docker images consumes hundreds of megabytes. The layered architecture of Docker images ensures that lower-level layers are shared as much as possible. However, as those layers are updated with various fixes and upgrades, the previous ones remain on disk and can, after a few months, result in huge disk usage within /var/lib/docker.

Jenkins monitors can alert on disk consumption on build executors. However, a more proactive solution is preferable to simply taking the node offline until an administrator SSHes into the server and handles the issue.
Docker does not offer a standard way to garbage-collect images, so most production teams have created their own tools, including the folks at Spotify, who open-sourced their docker-gc script.

On a Jenkins infrastructure, a scheduled task can be created to run this maintenance script on all nodes. I set this up for my own use (after having to handle a filesystem-full error). To run the script on all Docker-enabled nodes, I'm using a workflow job; Workflow makes it pretty trivial to set up such a GC.




The script I'm using relies on a "docker" label being applied to all nodes with Docker support. Jenkins.instance.getLabel("docker").nodes returns all the build nodes with this label, so I can iterate over them and run a workflow node() block to execute the docker-gc script within a sh shell script command:

def nodes = Jenkins.instance.getLabel("docker").nodes
for (n in nodes) {
    node(n.nodeName) {
        sh 'wget -q -O - https://raw.githubusercontent.com/spotify/docker-gc/master/docker-gc | bash'
    }
}

The docker-gc script only considers images not used by any container: if an image was already present during the last run of the script and is still not used by a container, it gets removed.

I hope that the Docker project will soon release an official docker-gc command. This would benefit infrastructure teams by eliminating the need to re-invent custom solutions to the same common issue.

Thursday, August 20, 2015

JUC Session Blog Series: Tom Canova, JUC U.S. East

I was pleased to be able to attend the D.C. Jenkins user conference this year, where I gave a talk on the progress of the Workflow plugin suite for Jenkins. One highlight was seeing Jenkins Workflows with Parallel Steps Boosts Productivity and Quality by Tom Canova of ibmchefwatson.com. Naturally the title made me curious: how were people in the field using parallelism in workflows?

The project he works on is a little unusual for someone coming from the software-delivery mindset, since while the ultimate deliverable is still software, what Jenkins is spending most of its time on is running that software (rather than a compiler or automated tests): the result is a summary of a big set of online recipes crunched through some natural language processing into a machine-friendly format. Each “build” is a dry-run of Chef Watson’s preparation for the dinner service, if you will.

Since slicing & dicing all that messy web HTML can take a long time, Tom’s process follows a pretty standard three-stage fork-join model. In the first stage, one Jenkins slave finds a site index with a list of recipes, collecting a list of every recipe to be processed. In the main, second stage, a number of distributed slaves each pick up a subset of recipes, parse them, and dump the JSON result into Cloudant, using a 5 GB heap. Finally all the results are summarized and archived, and some follow-on jobs are triggered (I think in part as a workaround for missing Workflow plugin integrations). All told, the parallelization can cut a twenty-hour build into two hours, giving developers quicker feedback. Doing this from a traditional “freestyle” project would be tough—you would really need to set up a custom grid engine instead of using the Jenkins slave network you already have.
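That fork-join shape can be sketched with the Workflow plugin's parallel step. The node labels, shard count and script names below are hypothetical, not Tom's actual pipeline:

```groovy
// Hypothetical sketch of a three-stage fork-join workflow.
node('indexer') {
    sh './collect-recipe-list.sh'        // stage 1: build the list of recipes
    stash name: 'recipes', includes: 'recipes-*.txt'
}

def branches = [:]
for (int i = 0; i < 4; i++) {
    def shard = i                        // capture the loop variable for the closure
    branches["parse-${shard}"] = {
        node('parser') {                 // stage 2: each slave parses one subset
            unstash 'recipes'
            sh "./parse-recipes.sh recipes-${shard}.txt"
        }
    }
}
parallel branches                        // run all shards concurrently

node('indexer') {
    sh './summarize-results.sh'          // stage 3: join and summarize
}
```

The parallel step blocks until every branch finishes, which is what gives the clean join point before the summary stage.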

Another unusual aspect of Tom’s setup was that the build history was really curated. Whereas some teams treat Jenkins builds as dispensable records created and then trimmed at a furious rate, here there may only be a few a week, and each one is examined by the developers to see how their changes affected the sample output. (The analysis is put right in the build description.)

One interesting thing the developers do is interactively compare output from one build to another. After all, they want to judge whether their code changes produced reasonable changes in the result, or whether unexpected and unwanted effects arose in real data sets. For this they just do a diff (I think outside Jenkins) between build artifacts. After the talk I suggested to Tom that it would be useful for “someone” to write a Jenkins plugin which displays the diff between matching build artifacts of consecutive builds. This reminded me of something my team started producing when I worked on NetBeans: a readable summary of the changes in major application features from one build to the next.
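The comparison itself can be as simple as a unified diff between the summary artifacts of two consecutive builds. The file names and contents here are made up for illustration:

```shell
# Fabricate two hypothetical build artifacts to illustrate the comparison
printf 'recipes: 1200\nerrors: 3\n' > build-41-summary.txt
printf 'recipes: 1250\nerrors: 0\n' > build-42-summary.txt

# diff exits non-zero when the files differ, so tolerate that in a script
diff -u build-41-summary.txt build-42-summary.txt || true
```

A reviewer scanning the `+`/`-` lines can quickly judge whether the deltas look like the intended effect of the code change.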

As a final note, I did try to get some meal advice from the live system. Whether I can convince my wife to let me cook this is another matter:

Basque Red Beet Pasta Salad

1 poblano pepper
½lb fusilli
½c cranberry juice
1½c crumbled queso blanco
3T achiote paste
5 red beets
3c cubed, peeled butternut squash
3 halved tomatoes
¼c olive oil
½T chopped candied ginger
cocoa

Hmm. Looks like Jenkins still has its job cut out for it!

We hope you enjoyed JUC U.S. East!
Here is the abstract for Tom's talk "Jenkins Workflows with Parallel Steps Boosts Productivity and Quality." 
Here are the slides for his talk and here is the video.

If you would like to attend JUC, there is one date left! Register for JUC U.S. West, September 2-3.

Tuesday, August 18, 2015

CloudBees Jenkins Platform on Amazon Web Services

CloudBees Jenkins Platform available on AWS Marketplace


We are delighted to announce the immediate availability of CloudBees Jenkins Platform 15.05 on the AWS Marketplace.

The two components of the CloudBees Jenkins Platform are offered in a bring-your-own-license mode with a free trial.
With these AWS Marketplace offerings, you can seamlessly provision virtual machines for Jenkins masters and Operations Centers and interact directly with AWS services, including Amazon EC2, S3, Route 53 and Lambda, from within Jenkins.


CloudBees Jenkins Platform on AWS Marketplace


Virtual Machine Specifications


CloudBees Jenkins Enterprise and CloudBees Jenkins Operations Center AWS Marketplace AMIs are built with the following components:

  • Ubuntu 14.04 LTS (Trusty Tahr)
  • OpenJDK 8
    • Installed as a Debian package from the "ppa:openjdk-r/ppa" repository
  • CloudBees Jenkins Enterprise (resp. CloudBees Jenkins Operations Center)
    • Installed as a Debian package
    • Running as a SystemD service
    • Listening on port 8080 (resp. 8888)
    • JENKINS_HOME set to "/var/lib/jenkins"
  • Git
    • Installed as a Debian package from the "ppa:git-core/ppa" repository
  • HAProxy
    • Installed as a Debian package from the "ppa:vbernat/haproxy-1.5" repository
    • Listening on port 80 and forwarding to the Jenkins process (port 8080 resp. 8888)
    • Capable of listening on HTTPS:443 if configured (docs here)
  • SSH connection
    • Listening on port 22
    • User "ubuntu", SSH public key (aka EC2 key pair) provisioned through AWS management console. This user has "sudo" privileges.
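For reference, the HTTPS listener mentioned above could be declared in HAProxy 1.5 along these lines; the certificate path and backend name are illustrative, so follow the linked docs for the supported configuration:

```
frontend jenkins-https
    bind *:443 ssl crt /etc/ssl/private/jenkins.pem
    mode http
    default_backend jenkins

backend jenkins
    mode http
    server master 127.0.0.1:8080
```

HAProxy 1.5 terminates TLS itself with `bind ... ssl crt`, so the Jenkins process behind it keeps listening on plain HTTP.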


Security and Maintenance of the Servers


  • Firewall: firewall rules are defined in the AWS Management Console with EC2 Security Groups. CloudBees recommends restricting access (inbound rules) to a limited IP range rather than allowing the entire internet to access the VM; this is particularly important for the SSH and HTTP protocols. Deploying the VM in an Amazon VPC instead of "EC2 Classic" offers finer-grained security settings.
  • OS Administrators are invited to frequently apply security fixes to the operating system of the VM ("sudo apt-get update" then "sudo apt-get upgrade")
  • Jenkins Administrators are invited to frequently upgrade the Jenkins plugins and the Jenkins core through the Jenkins administration console
  • Jenkins Administrators are invited to secure their Jenkins server by enabling authentication and authorization on their newly created instances
  • Jenkins Administrators are invited to connect slave nodes to the Jenkins masters according to the needs of the project teams (CentOS, Ubuntu, Red Hat Enterprise Linux, Windows Server...) and to disable builds on the masters
  • Jenkins Administrators are invited to frequently back up the Jenkins data (aka JENKINS_HOME) using the CloudBees Backup Plugin and/or by backing up the VM file system through AWS EC2 services (EBS snapshot ...)

Licensing


CloudBees Jenkins Platform is distributed on the AWS Marketplace in a Bring Your Own License mode. You can provision your virtual machines with the marketplace images and then enter your license details or start a free evaluation from the welcome screen of the created Jenkins instance.

Screencast: Installing CloudBees Jenkins Enterprise on Amazon Web Services


This screencast shows how to install a CloudBees Jenkins Enterprise VM on Amazon Web Services using the AWS Marketplace. The installation of CloudBees Jenkins Operations Center is similar; you just choose CloudBees Jenkins Operations Center instead of CloudBees Jenkins Enterprise in the marketplace.




More Resources

Jenkins User Conference U.S. West Speaker Highlight: Andrew Phillips

In his presentation, Andrew will be taking a broader view than his talk at JUC U.S. East and will discuss common challenges you may come across and the solutions that you may need when moving from Continuous Integration to Continuous Delivery.

This post on the Jenkins blog is by Andrew Phillips, VP, Product Management, XebiaLabs. If you have your ticket to JUC U.S. West, you can attend his talk "Sometimes Even the Best Butler Needs a Footman: Building an Enterprise Continuous Delivery Machine Around Jenkins" on Day 1.

Still need your ticket to JUC? If you register with a friend you can get 2 tickets for the price of 1! Register here for the last Jenkins User Conference of the year: JUC U.S. West.


Thank you to the sponsors of the Jenkins User Conference World Tour: