Wednesday, July 30, 2014

Continuous Delivery: Deliver Software Faster and with Lower Risk

Continuous Delivery is a methodology that allows you to deliver software faster and with lower risk. Continuous delivery is an extension of continuous integration - a development practice that has permeated organizations that have adopted agile development.

Recently, DZone conducted a survey of 500+ IT professionals about continuous delivery, the results of which CloudBees has summarized in an infographic.

Find out:
  • The percentage of people that think they are following continuous delivery practices versus the percentage of people that actually are, according to the definition of continuous delivery
  • Internal groups that provide production support, such as development, operations or DevOps
  • Which team is responsible for actual code deployment
  • How pervasive version control is for tracking IT configuration
  • The length of time it takes organizations from code commit to production deployment
  • Barriers to adopting continuous delivery (hint: they aren't technical in nature)

Tuesday, July 29, 2014

Continuous Integration for node.js with Jenkins

This is part of a series of blog posts in which various CloudBees technical experts have summarized presentations from the Jenkins User Conferences (JUC). This post is written by Steven Christou, technical support engineer, CloudBees, about a presentation given by Baruch Sadogursky, JFrog, at JUC Boston.



Fully automating a continuous integration system for Node.js, from development to testing to deployment on production servers, can be a challenge. Most Node.js developers are familiar with npm, which I learned does not stand for "Node Package Manager": officially it is a recursive backronym for "npm is not an acronym." An npm package contains a program described by its package.json file. For a Java developer, an npm package is similar to a JAR, and the npm registry is similar to Maven Central. What would happen if the main npm registry, https://www.npmjs.org/, went down? At that moment Node.js developers would be stuck waiting for npmjs.org to return to normal status, or they could run their own private registry.



Baruch Sadogursky, JFrog
That sounds easier said than done, though. According to http://isaacs.iriscouch.com/registry, the current size of the registry is 450.378 gigabytes of binaries. Out of all of those 450 gigabytes, how many of the packages will your developers actually use?

Enter Artifactory: a repository manager that bridges the gap between developers and the npm registry, npmjs.org. Artifactory acts as a proxy between your developers and Jenkins instances on one side and the outside world on the other. When I (a developer) require a new package and declare a new dependency in my code, Artifactory pulls the necessary package from npmjs.org and caches it. After the code has been committed with the new package dependency, Jenkins is then able to fetch the same package from Artifactory. In this scenario, if npmjs.org ever goes down, testing in Jenkins will never halt, because Jenkins can still obtain the necessary dependencies from the Artifactory server.
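Pointing clients at the proxy is a one-line change. A sketch, assuming a hypothetical Artifactory host (the exact repository path depends on your Artifactory setup):

    # resolve npm packages through the Artifactory proxy instead of npmjs.org directly
    npm config set registry http://artifactory.example.com/artifactory/api/npm/npm-remote/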



Building code through an Artifactory server also eliminates the need for users to check out and build their dependencies themselves, which would be time-consuming. Dependencies could also end up in an inconsistent state if my build environment differs from that of other users or of the Jenkins server. Another advantage is that Jenkins can record information about the packages that were used during the build.

Overall, using a repository manager like Artifactory as a proxy between your Jenkins instance and the npm registry, npmjs.org, helps you maintain true continuous integration: your developers and Jenkins instances are not impacted when the npm registry is down or otherwise unavailable. Adding an Artifactory server to manage package dependencies thus helps keep integration continuous.

Steven Christou
Technical Support Engineer
CloudBees

Steven works on providing bug fixes to CloudBees customers for Jenkins, Jenkins plugins and Jenkins enterprise plugins. He has a great passion for software development and extensive experience with Hudson and Jenkins. Follow him on Twitter.

Friday, July 25, 2014

Continuous Delivery and Workflow

This is part of a series of blog posts in which various CloudBees technical experts have summarized presentations from the Jenkins User Conferences (JUC). This post is written by Jesse Glick, software developer, CloudBees, about a presentation given by himself and Kohsuke Kawaguchi, as well as a session given by Alex Manly, MidVision. Both sessions are from JUC Boston.

At the Jenkins User Conference in Boston this year, Kohsuke and I gave a session, Workflow in Jenkins, where for the first time we spoke to a general audience about the project we started to add a new job type to Jenkins that can manage complex and long-running processes. If you have not heard about Workflow yet, take a look at its project page, which gives background and also links to our slides. I was thrilled to see the level of interest and to hear confirmation that we picked the right problem to solve.

Alex Manly, MidVision
A later session by Alex Manly of MidVision (Stairway to Heaven: 10 Best Practices for Enterprise Continuous Delivery with Jenkins) focused on the theory and practice of CD, such as the advantages of pull (or “convergent”) deployment at large scale on homogeneous servers, as opposed to “pushing” new versions immediately after they are built, as well as deployment scenarios, especially for WebSphere. Since I am only a spectator when it comes to dealing with industrial-scale deployments like that, while listening to this talk I thought about how Workflow could help smooth out some of the nitty-gritty of getting such practices set up on Jenkins.

One thing Alex emphasized was the importance of evaluating the “cost of non-automation” when setting up CD: you should “take the big wins first,” meaning that steps which are run only once in a blue moon, or are just really hard to get a machine to do exactly right all the time, can be left for humans until there is a pressing need to change that. This is why we treated the human input step as a crucial feature for Workflow: you need to leave a space for a qualified person to at least approve what Jenkins is doing, and maybe give it some information too. With a background in regulatory compliance, Alex did remind the audience that these approvals need to be audited, so I have made a note to fix the input step to keep an audit trail recording the authorized user.
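The input step itself is tiny. A minimal sketch in Workflow's Groovy DSL (the node label and scripts are invented):

    node('builder') {
        sh './build-and-stage.sh'      // hypothetical build/staging script
    }
    // the flow now pauses until a qualified person approves in the Jenkins UI
    input 'Staging looks good; deploy to production?'
    node('builder') {
        sh './deploy-production.sh'    // hypothetical deployment script
    }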

The most important practice, though, seemed to be “Build Once, Deploy Anywhere”: you should ensure the integrity of a build package destined for deployment, ideally a single compressed file with a known checksum (“Fingerprint” to Jenkins), matched to an SCM tag, with the SCM commit ID in its manifest. Honoring this constraint means that you are always deploying exactly the same file, and you can always trace a problem in production back to the revision of the software it is running. There should also be a Definitive Software Library such as Nexus where this file is stored and from which it is deployed.

One important advantage of Workflow is that you can keep metadata like commit IDs, checksums, timestamps, and so on as local variables, as well as keep a workspace (i.e., slave directory) locked and available for either the entire duration of the flow or only some parts of it. This makes it easy for your flow to track the SCM commit ID long enough to bake it into a manifest while keeping a big workspace open on a slow slave with the SCM checkout; then checksum the final build product and deploy it to Nexus, releasing the workspace; then acquire a fast slave with a smaller workspace to host some functional tests, with the Nexus download URL for the artifact still easily accessible; and finally switch to a weak slave to schedule deployment and wait. Whereas a setup using traditional job chaining would require you to carefully pass around artifacts, workspace copies, and variables (parameters) from one job to the next, with a lot of glue code to reconstruct information an earlier step already had, in a Workflow everything can remain in scope as long as you need it.
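A sketch of that shape of flow in Workflow's Groovy DSL (the labels, URLs and commands are all invented here, not taken from the talk):

    def commitId                      // an ordinary local variable, in scope flow-wide
    node('slow-big-disk') {           // big workspace on a slow slave for the checkout
        git url: 'https://github.com/example/app.git'
        sh 'git rev-parse HEAD > .commit'
        commitId = readFile('.commit').trim()
        sh "make package COMMIT=${commitId}"                  // bake the ID into the manifest
        sh 'curl -T app.tgz http://nexus.example.com/repo/'   // build once, publish once
    }
    node('fast-small') {              // smaller workspace on a fast slave for tests
        sh 'curl -O http://nexus.example.com/repo/app.tgz'    // exactly the same bits
        sh './run-functional-tests.sh app.tgz'
    }

Note that commitId simply stays in scope from one slave to the next; there is no parameter-passing glue, because there is only one flow.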

The biggest thing Alex treated as important that is not really available in Workflow today is matrix combinations (for testing, or in some cases also for building): determining the effects of different operating systems/architectures, databases, JDKs or other frameworks, browsers, and so on. Jenkins matrix projects also offer “touchstone builds” that let you first verify that a canonical combination looks OK before spending time and money on the exotic ones. Certainly you can run whatever matrix combinations you like from a Workflow: just write some nested for-loops, each grabbing a slave if it needs one, maybe using the parallel construction to run several at once. But there is not yet any way of reporting the results in a pretty table; until then, the whole flow run is essentially pass/fail. And of course you would like to track historical behavior, so you can see that Windows Java 6 tests started failing with a commit done a week ago, while tests on Firefox just started failing due to an unrelated commit. So matrix reporting is a feature we need to include in our plans.
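In the meantime, a hand-rolled matrix in a flow might look like this sketch (the axes, labels and test script are invented):

    def oses = ['linux', 'osx']
    def jdks = ['jdk6', 'jdk7']

    node('linux && jdk7') {            // "touchstone": try the canonical combination first
        sh './run-tests.sh jdk7'
    }

    def branches = [:]
    for (int i = 0; i < oses.size(); i++) {
        for (int j = 0; j < jdks.size(); j++) {
            def os = oses[i]
            def jdk = jdks[j]
            branches[os + '-' + jdk] = {
                node(os) {             // each combination grabs its own slave
                    sh "./run-tests.sh ${jdk}"
                }
            }
        }
    }
    parallel branches                  // run the remaining combinations all at once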

All in all, it was a fun day and I am looking forward to seeing what people are continuously delivering at next year’s conference!


Jesse Glick
Developer Extraordinaire
CloudBees

Jesse Glick is a developer for CloudBees and is based in Boston. He works with Jenkins every single day. Read more about Jesse on the Meet the Bees blog post about him.


Tuesday, July 22, 2014

Automating CD Pipelines with Jenkins - Part 1: Vagrant, Fabric and Selenium

This is part of a series of blog posts in which various CloudBees technical experts have summarized presentations from the Jenkins User Conferences (JUC). This post is written by Tracy Kennedy, solutions architect, CloudBees, about a session given by Hoi Tsang, DealerTrack, at JUC Boston.


There’s a gold standard for the software development lifecycle that seemingly every shop aspires to, yet few have achieved: a complete continuous delivery pipeline with Jenkins that automatically pulls from an SCM repository on each commit, then compiles the code, packages the app and runs all unit/acceptance/static analysis tests in parallel.

Integration testing on the app then runs in mini-stacks provided by Vagrant, and if the build passes all testing, Jenkins stores the binary in a repository as a release candidate until it passes QA. Jenkins then plucks the release from the repository to deploy it to production servers, which are created on demand by a provisioning and configuration management tool like Chef.

The nitty gritty details of the actual steps may vary from shop to shop, but based on my interactions with potential CloudBees customers and the talks at the 2014 Boston JUC, this pipeline seems to be what many high-level execs aspire to see their organization achieving in the next few years.

Jenkins + Vagrant, Fabric and Selenium
Hoi Tsang of DealerTrack gave a wonderful overview of how DealerTrack accomplished such a pipeline in his talk: “Distributed Scrum Development w/ Jenkins, Vagrant, Fabric and Selenium.”

As Tsang explained, integration can be a problem, and it’s an unfortunately expensive problem to fix. He explained that it was best to think of integration as a multiplication problem, where

Hoi Tsang, DealerTrack
practice × precision × discipline = perfection

When it comes to Scrum, which Tsang likened to “driving really fast on a curvy road,” nearly all of the attendees at Tsang’s JUC session practiced it, and almost all confirmed that they do test-driven development.

In Tsang’s case, DealerTrack was also a test-driven development shop and had the goals of writing more meaningful use cases and defining meaningful test data.

To accomplish this, DealerTrack set up Jenkins and installed several plugins: the Build Pipeline plugin, Cobertura and Violations, to name a few. They also created build and deployment jobs: the builds were triggered by code commits and schedules, and the builds triggered tests whose pass/fail rules were defined by each DealerTrack team (a sketch of one such gate follows the list). Their particular rules were:
  • All unit tests passed
  • Code coverage > 90%
  • Code standard > 90%
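DealerTrack's actual scripts were not shown, but assuming a Python suite measured with coverage.py, a Jenkins build step enforcing the coverage rule could be as small as:

    # hypothetical Jenkins shell build step
    coverage run -m unittest discover tests   # run the unit tests under coverage.py
    coverage xml                              # report in the format the Cobertura plugin reads
    coverage report --fail-under=90           # non-zero exit code marks the build as failed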
DealerTrack had their Jenkins master control a Selenium hub backed by a grid of dedicated VMs/boxes registered to it. Test cases were distributed across the grid, and the results were reported back to the associated Jenkins jobs.
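From an individual test's point of view, the hub is just a remote WebDriver endpoint. A sketch in Python (the hub and application URLs are invented):

    # the hub farms this session out to whichever grid VM matches the capabilities
    from selenium import webdriver

    driver = webdriver.Remote(
        command_executor='http://selenium-hub.example.com:4444/wd/hub',
        desired_capabilities={'browserName': 'firefox'})
    driver.get('http://app-under-test.example.com/login')
    assert 'Login' in driver.title
    driver.quit()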

The builds would also be subject to an automated integration build, which relied on Vagrant to define mini-stacks for the integration tests to run in: checking out source code into a folder shared with a virtual machine, launching the VM, preparing and running the tests, then cleaning up the test space. Although this approach to integration testing takes longer, Tsang argued that it provides a more realistic testing environment.
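The rough shape of such a build step, as a sketch (the test script is invented; /vagrant is Vagrant's default shared folder):

    vagrant up                                           # boot the mini-stack from the Vagrantfile
    vagrant ssh -c '/vagrant/run-integration-tests.sh'   # run the tests inside the VM
    RESULT=$?
    vagrant destroy -f                                   # clean up the test space either way
    exit $RESULT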

If the build passed, its artifact would be uploaded to an internally-hosted repository, and reports on the code standards and code coverage would be published. This would also trigger a documentation generation job.

According to Tsang, DealerTrack also managed to set up an automated deployment flow, where Jenkins would pick up a build from the internal repository, tunnel into the development server, then drop off the artifact and deploy the build. They accomplished this using Python Fabric, a library and command-line tool that streamlines the use of SSH for application deployment and system administration tasks.
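A minimal fabfile in that spirit (the host, paths and commands are invented):

    # fabfile.py -- run from a Jenkins shell step with: fab deploy
    from fabric.api import env, put, run

    env.hosts = ['dev.example.com']                    # the server Jenkins tunnels into

    def deploy(artifact='dist/app.tar.gz'):
        """Drop off the artifact over SSH and deploy it."""
        put(artifact, '/tmp/app.tar.gz')               # drop off the artifact
        run('tar -xzf /tmp/app.tar.gz -C /opt/app')    # unpack the new build
        run('/opt/app/bin/restart-app')                # hypothetical restart hook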

Tsang explained that DealerTrack had a central Jenkins master to maintain the build pipeline, but split the work between each team’s assigned slave and assigned testing server. Dedicated slaves worked on the more important jobs, which allowed branch merging to be accomplished 30% faster.

Stay tuned for Part 2!


Tracy Kennedy
Solutions Architect
CloudBees

As a solutions architect, Tracy's main focus is reaching out to CloudBees customers on the continuous delivery cloud platform and showing them how to use the platform to its fullest potential. (Meet the Bees blog post coming soon!) For now, follow her on Twitter.

Friday, July 18, 2014

To Successfully Adopt Continuous Delivery, Organizations Need To Change

In a recent Forrester Research report, Modern Application Delivery Demands a Modern Organization, Kurt Bittner, John Rymer, Chris Hines and Diego Lo Giudice review the differences between the organization of yesterday and the 'modern organization' of today, and the shifts needed to keep up with not only customer demand, but also the success of more agile competitors.

Bottlenecks

When you look at the structure of a successful organization, it is rare to find silos of any sort. That is because when you shift the emphasis from optimizing individual performance to a team-based structure focused on optimizing delivery, you get faster output. Why?

When an individual is focused on their own task list, priorities slip for other projects, which get held up. This ultimately creates a bottleneck of work. What is the natural thing to do while you wait for someone else to finish the next step of a project, or for a superior to tell you what to do next? You start something else. Because you are now working on some new project, a new bottleneck forms when you are needed again: you are no longer available, and someone else is waiting on you. They start a new project while they wait, and so on and so forth.

The Culture Shift

We are members of a multi-tasking culture: we must always be busy. This is not always good. In the modern organization, resources are dedicated and at the ready to move projects along, even if they are sometimes underutilized. People do not rush off to new projects, so they are available the moment they are needed.

Going back to the silo vs. team approach, we start to see less specialization and more focus on distributing knowledge. Now a whole team, rather than one person, can be next in line. It’s about cross-functional teams vs. superstars.

The focus also needs to change. Our culture wants us to win the Employee of the Month award and achieve personal objectives, but what if we focused less on how much we could get out of top performers and more on how much output we could deliver to our customers?

This would mean another huge cultural shift, and this time it is about the management team. Management must be agile and allow teams to make decisions quickly, without having to cut through yards of red tape to get something across the finish line. It’s more about holding your team accountable vs. tracking and monitoring their every move.

The report concludes by stating: “While process and automation are essential enablers of these better results, organization culture, structure, and management approach are the true enablers of better business results.”

Continuous delivery can be a tremendous game changer for your organization, but the organization needs to modernize for that change to succeed.



Christina Pappas
Marketing Funnel Manager
CloudBees

Follow her on Twitter.

Tuesday, July 15, 2014

The Butler and the Snake: Continuous Integration for Python by Timo Stollenwerk, Plone Foundation

This is the first in a series of blog posts in which various CloudBees technical experts will summarize presentations from the Jenkins User Conferences. This first post is written by Félix Belzunce, solutions architect, CloudBees.

At the Jenkins User Conference/Europe, held in Berlin on June 25, Timo Stollenwerk of Plone Foundation presented how the Plone community uses Jenkins to build, test and deliver Python-based software projects. Timo went through some of the CI rules and talked about the main tools you should take a look at for implementing Python CI.

For open source projects implementing CI, the most important thing besides version control and automated builds is the agreement of the team. In small development teams, that is an easy task most of the time, but not in big teams or in open source projects where you need to follow some rules.

When implementing CI, it is always a good practice to build per commit and then notify the responsible team as to the outcome. This makes the integration process easier and avoids "Integration Hell." The Jenkins dashboard and the Email-ext plugin can help accomplish this. Also, the Role-based Access Control plugin can be useful to set up roles for your organization, so your developers can access the Jenkins dashboard while you can be sure that nobody can change their job configurations.


Java developers usually use Gradle, Maven or Ant as automated build tools, but in Python there are different tools you should consider, like Buildout, pip, tox and ShiningPanda. Regarding testing and acceptance testing, Timo mentioned a number of tools as well.
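tox, for example, drives a test suite across several interpreters from one small config file. A minimal sketch (the environment list and test command are assumptions):

    # tox.ini: run the suite against each Python version Jenkins provides
    [tox]
    envlist = py26,py27

    [testenv]
    deps = -rrequirements.txt
    commands = python -m unittest discover tests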


Due to Python's dynamic nature, static analysis has become somewhat essential. If you plan to implement it in your organization, I recommend reading this article, which compares different tools for Python static analysis, some of which Timo also mentioned.
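Most of these tools drop into a Jenkins build step as a single command. For example, with pylint (just one of the commonly compared tools; the module name is a placeholder):

    # write findings in a format the Violations plugin understands;
    # '|| true' keeps lint findings from aborting the shell step outright
    pylint --output-format=parseable mypackage > pylint.out || true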

Regarding scalability, when you are running long builds you could start facing some issues. A good practice here is not to run any builds on your master and to let your slaves do the job.

If you have several jobs involved in launching a daemon process, you should ensure that each job uses unique TCP port numbers. If you don't do this, two jobs running on the same machine may use the same port and end up interfering with one another. In this case, the Port Allocator Plugin can help you out.
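For instance, if the plugin is configured to allocate a port into an environment variable (the variable name below is whatever you choose in the job configuration, and the scripts are invented):

    # $DAEMON_PORT is assigned per-build, so concurrent jobs on one slave never clash
    python -m SimpleHTTPServer $DAEMON_PORT &
    ./run-tests-against.sh http://localhost:$DAEMON_PORT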

The CloudBees Long Running Build plugin and the NIO SSH Slaves plugin can also be helpful if you want a build to pick up where it left off after a Jenkins crash, rather than starting from scratch, or if you want to increase the number of executors attached to your Jenkins master while maintaining the same performance.

For the release process, Timo explained that the Jenkins Pipeline plugin can be combined with some Python-specific tools like zest.releaser or devpi.

Get Timo's slides and (when videos are posted) watch the video of his JUC Europe session.



Félix Belzunce
Solutions Architect
CloudBees

Félix Belzunce is a solutions architect for CloudBees based in Europe. He focuses on continuous delivery. Read more about him on his Meet the Bees blog post and follow him on Twitter.

Thursday, July 10, 2014

CloudBees Announces Public Sector Partnership with DLT Solutions


Continuous delivery is becoming a main initiative across all vertical industries in commercial and private markets. The ability of IT teams to deliver quality software on an hourly/daily/weekly basis is the new standard.

The public sector has the same need to accelerate application delivery for important governmental initiatives. To make access to the CloudBees Continuous Delivery Platform easier for the public sector, CloudBees and DLT Solutions have formally partnered to provide Jenkins Enterprise by CloudBees and Jenkins Operations Center by CloudBees to federal, state and local governmental entities.

With Jenkins Enterprise by CloudBees now offered by DLT Solutions, public sector agencies have access to our 23 proprietary plugins (along with 900+ OSS plugins) and will receive professional support for their Jenkins continuous integration/continuous delivery implementation.

Some of our most popular plugins can be utilized to:
  • Eliminate downtime by automatically spinning up a secondary master when the primary master fails with the High Availability plugin
  • Push security features and rights onto downstream groups, teams and users with Role-based Access Control
  • Auto-scale slave machines when you have builds starved for resources by “renting” unused VMware vCenter virtual machines with the VMware vCenter Auto-Scaling plugin
Try a free evaluation of Jenkins Enterprise by CloudBees or read more about the plugins provided with it.

For departments using larger installations of Jenkins, CloudBees and DLT Solutions propose Jenkins Operations Center by CloudBees to:
  • Access any Jenkins master in the enterprise. Easily manage and navigate between masters (optionally with SSO)
  • Add masters to scale Jenkins horizontally, instead of adding executors to a single master. Ensure no single point of failure
  • Push security configurations to downstream masters, ensuring compliance
  • Use the Update Center plugin to automatically ensure approved plugin versions are used across all masters
Try a free evaluation of Jenkins Operations Center by CloudBees, or watch a video about Jenkins Operations Center by CloudBees.

The CloudBees offerings, combined with DLT Solutions’ 20+ years of public sector know-how, make it easier to support and optimize Jenkins in the civilian, federal and SLED branches of government.

For more information about the newly established CloudBees and DLT Solutions partnership read the news release.

We are proud to partner with our friends at DLT Solutions to bring continuous delivery to governmental organizations.

Zackary Mahon
Business Development Manager
CloudBees