Thursday, March 31, 2011

A Glance into the Future of Continuous Integration, and Other Upcoming CloudBees Events

We’re excited to be involved in the first Silicon Valley Continuous Integration Summit, which is taking place on April 7 at LinkedIn in Mountain View, California.

Our own Kohsuke Kawaguchi (@kohsukekawa) joins an outstanding line-up of speakers to talk about the latest in continuous integration and the state of the Jenkins project. Already, a great list of folks have RSVP'ed. If you can’t join in person, LinkedIn is generously providing online streaming -- just sign up for the details!

Aside from the CI Summit, it will be a very busy spring for the Bees. Here’s a short list of where you can expect to see us the next two months.

April 5: Jenkins training, led by Kohsuke in the Bay Area
April 7: Silicon Valley CI Summit (Bay Area)
April 20: Oakland JUG (Bay Area)
May 2-5: JAX Germany – Kohsuke speaking on the state of continuous integration with Jenkins
May 3-6: JBoss World (Boston) – Come see us at booth #1011 (and use our registration code to get $150 off your registration: RHCUSTCLB34)
May 18: MIT CIO Symposium (Boston)
May 20: Jenkins Meetup Tokyo – Kohsuke speaking
May 27: What's Next (Paris) – Kohsuke speaking
May 31: Skills Matter (London) – Kohsuke speaking on “Continuous Integration with Jenkins”
June 1: Jenkins training at Skills Matter in London, led by Kohsuke

Follow CloudBees:
Facebook Twitter

Wednesday, March 30, 2011

What is a PaaS, After All?

There is a growing interest in PaaS offerings, with Gartner going as far as predicting that 2011 will be the year of the PaaS. Yet, when talking to people, I realize that there is not a good understanding of what a PaaS is, what it does and how it differs from traditional middleware layers. This is a frequent problem with fast-growing technologies: the usage of acronyms describing those technologies tends to get ahead of the real understanding of what they do.

So, what’s a PaaS? The most frequent answer you will get is that a PaaS is an IaaS (pronounced “yass”) with “some” traditional middleware software pre-installed on top of its servers. As developers, we have become used to thinking about our environment in terms of “servers”. The server has become our unit of work and our unit of provisioning; it has defined how far we can scale vertically, when we have to start scaling horizontally, and what that means in terms of application configuration, deployment, etc. Consequently, it should come as no surprise that most of us would come to think of a PaaS as an IaaS with some traditional middleware layer pre-installed on top of it.

The problem with this “naïve” approach is that it doesn’t really solve anything new. Au contraire, it tends to make things more complex, since you typically have less control over a cloud infrastructure than over your own servers.

What problem are we trying to solve?

Traditionally, one of the main difficulties companies face when developing a new service is the high level of friction between IT operations and development teams. The two teams have different aims, different timelines and different worries. Put simply, a different DNA. Yet, bringing an application to life is a long sequence of high-friction steps involving both IT and development. Some of those steps are one-time activities, some are recurring, but none of them is trivial. Want to add more nodes to your cluster? Want to push a new version to production? Want to take a snapshot of your running environment for some testing? These are all high-friction activities leading to high costs and poor time-to-market.

How could a PaaS help us drop that friction?

The key concept here is to change the level of abstraction that developers have to deal with. Application developers should not have to worry about servers, virtual machines, application servers, deployment scenarios, clustering, etc. The only things developers should have to worry about are … applications. Developers should be able to interact with a platform that provides a self-service environment for all of their typical activities… without any dependency on or interaction with IT. Setting up a new application, testing it, deploying it to production, deploying a new version, auto-scaling it, etc. should all be managed by the platform, not by a mix of IT and developers. It is then up to the PaaS to map that high-level abstraction (applications) onto lower-level Lego blocks (servers).

In summary, a proper PaaS should put developers back in charge of their applications.

Just as importantly, a PaaS is not only a huge productivity and time-to-market boost for developers; it is also a great time and energy saver for IT teams. By leveraging a PaaS, IT no longer has to manage a myriad of applications of various sizes, each with its own requirements and timeline. Instead, IT ends up managing a single entity, a single “container”: the PaaS. This structure provides a clean demarcation line between IT and development.

Cloud vs. Cloud

What should be apparent by now is that the cloud attributes (pay as you go, self-service, elasticity, etc.) are not linked in any way, shape or form to “IT resources” such as servers. Those attributes are generic and can be applied to any abstraction that makes sense to the problem we are trying to solve. And so the job of a PaaS is to “cloudify” applications, not servers.

The reality

That’s for the theory at least. When you look at the PaaS solutions on the market, you’ll find lots of different approaches.

A number of PaaS implementations out there are still very much examples of the naïve “IaaS+middleware” approach described above. PaaSes such as Azure or Beanstalk follow this approach pretty closely. They are what I’d call “first generation” PaaS: they map legacy middleware on top of cloud infrastructure. In doing so, you get the advantages (and some of the inconveniences) of an IaaS, but none of the advantages a PaaS can bring.

Other PaaSes, on the other hand -- I’d call them “second generation” PaaS -- do offer that kind of abstraction. Those include CloudBees, obviously, but also Google App Engine. The problem with some of those implementations is that in order to achieve proper “server abstraction” and offer an application-centric view of the world, they clutter their containers with plenty of constraints that developers have to follow (such as the inability to create threads, or a maximum duration for HTTP invocations). The problem with that approach is that middleware developers tend to be smart, and those kinds of constraints do not fly for long.

If you want to get a better feel for what a PaaS has to offer, give CloudBees a try; it is the best way to understand what it brings. Take an existing application and deploy it fast on RUN@cloud -- we are waiting for your feedback.


Sacha Labourey, CEO
Follow CloudBees:
Facebook Twitter

Monday, March 28, 2011

March Newsletter: "Mastering Jenkins Security" Webinar on Mar 31st, 10am PT and Other News...

Here is a copy of our March newsletter...

The hive has been a busy place over the last few weeks! First, we’re thrilled to report CloudBees won a CloudCamp Cloudy award for Best Cloud Innovation: Editor's Choice. There’s plenty of other buzz too – check it out below. Hopefully you’ll find some useful resources to help you make the most of your Jenkins continuous integration server as well as enjoy hassle-free builds and deployments in the cloud.

Hot Stuff

  • Mastering Jenkins/Hudson Security – New Webinar, March 31 at 10am PT
In this new webinar, Jenkins/Hudson creator Kohsuke Kawaguchi shows you the ins and outs of securing Jenkins. You’ll learn the best ways to control access to Jenkins, the workings of the authentication and authorization mechanisms, the security design of Jenkins and much more. More info
  • Automated Continuous Deployment from Build to Deploy with DEV@cloud and RUN@cloud
Continuous deployment is all the rage. This tutorial will help you get started down that path. Take a look
  • Featured Blog: Jenkins vs. Hudson – Time to Upgrade!
Bob Bickel expounds on the reasons why Jenkins will continue to win over Hudson, complete with interesting stats on community adoption. Read Bob’s blog
  • Eclipse Toolkit for Jenkins – Try the Beta
Now you can manage and monitor Jenkins/Hudson and your DEV@cloud account conveniently from within Eclipse. Find details and a video here

Upcoming Events

  • Mastering Continuous Integration with Jenkins Training -- A one-day course conducted by Jenkins/Hudson creator Kohsuke Kawaguchi in the San Francisco Bay Area on April 5 and in London on June 1. Learn more
  • Silicon Valley Continuous Integration Summit: A glimpse into the future of CI -- April 7 at LinkedIn in Mountain View, CA: Kohsuke Kawaguchi will present the latest and greatest Jenkins developments in his “Status of the Jenkins Project – Expect a Better Experience” talk. Space is limited, so sign up soon.
  • Red Hat Summit and JBoss World 2011 -- Join CloudBees in Boston May 3-6 and keep on top of the latest cloud developments. Register with our special discount code, RHCUSTCLB34, and enjoy a $150 discount.
If you have any suggestions of resources you’d find helpful, Jenkins plug-ins you’re dying to see in Nectar, or other thoughts, we welcome your feedback. Please drop us a note or find us on Twitter or Facebook.

Follow CloudBees:
Facebook Twitter

Friday, March 25, 2011

Hiring! Sales Engineer and Consultant

Do you want to join the CloudBees team? We are hiring a "Sales Engineer and Consultant".

You can find the job description here.


Sacha Labourey, CEO
Follow CloudBees:
Facebook Twitter

Wednesday, March 23, 2011

Welcoming Our Newest Bees…

Today, we're thrilled to welcome some more "bees" to the CloudBees team.

As you may have heard over the Twitterverse, we have brought on some great technical talent in recent weeks:
  • Stephen Connolly, one of the original committers on Jenkins/Hudson, an avid contributor on Codehaus and an Apache Maven PMC member.
  • Paul Sandoz, previously of Sun Microsystems, was a member of the GlassFish team, and co-spec lead of JAX-RS and Jersey.
  • Ben Walding, CTO and long-serving operations manager of Codehaus.

Stephen and Paul will be working on Jenkins with Kohsuke and making our Jenkins offerings even better, while Ben focuses on our forge effort for DEV@cloud as well as lots of DevOps magic. We already have a great engineering team, and these guys only make us stronger.

On the business side, we also brought on John Vigeant to run business development. Growing our partner ecosystem is one of our major goals this year, and we are looking forward to seeing John expand the list of major IT players we work with. John comes to us with some impressive achievements at XenSource, where he helped forge many of XenSource's most strategic partnerships, including with Citrix -- which eventually acquired the company. John also augments our Boston presence, where we will be building out our sales and marketing team and our company headquarters.

For me, it has been incredibly rewarding to see CloudBees go from an idea to a real, global company -- and to be honored as we were recently by the editors/organizers behind the Cloudy Awards who named us as their choice for the best cloud innovation in 2010 (thank you!). This year is shaping up to be all that we expected, with some serious activities in the Platform as a Service (PaaS) market; and not surprisingly, Gartner recently declared 2011 to be the year of the PaaS.

We intend to be one of the top PaaS players.


Sacha Labourey, CEO
Follow CloudBees:
Facebook Twitter

Writing Automatic Tool Installer for Jenkins

As you can see with Ant, Maven, the JDK, and so on, Jenkins can automate the installation of tools needed for builds. This page discusses how a plugin developer can add this capability to their own plugin, using Gradle as an example.

Write a crawler

The first piece you need is a crawler, which generates metadata that in turn tells Jenkins where to download the tools from. The goal is to produce a JSONP file that looks like this:

'hudson.plugins.gradle.GradleInstaller',{"list": [
  {
    "id": "1.0-milestone-1",
    "name": "Gradle 1.0-milestone-1",
    "url": ""
  },
  {
    "id": "0.9.2",
    "name": "Gradle 0.9.2",
    "url": ""
  }
]}
A few notes about the structure of this JSON file. The first "hudson.plugins.gradle.GradleInstaller" portion is the fully-qualified class name that you'll be writing later. Then a list of tuples follows, where each tuple contains a unique ID, a human-readable display name, and a URL to download a zip file from. The list should be sorted so that newer versions appear first; this is the order users will see in their drop-down combo box.
The crawler is a program that generates this file. It can be any program, but this is the crawler that produces the above JSONP, written in Groovy. Once you have it ready, you can run it on your own machine, or we can run it for you on our CI infrastructure. Please drop us a note on the dev list so that we can discuss.
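To make the metadata format concrete, here is a hypothetical sketch of such a crawler as a small shell script (the real one is written in Groovy). The version list and the example.org URL pattern are placeholders, not real endpoints; a real crawler would discover the versions by scraping the tool's download site.

```shell
#!/bin/sh
# Hypothetical crawler sketch: emit the JSONP metadata shown above.
# VERSIONS and the URL pattern are placeholders; list newest first,
# since that is the order Jenkins shows in the drop-down.
set -e
CLASS="hudson.plugins.gradle.GradleInstaller"
VERSIONS="1.0-milestone-1 0.9.2"

{
  printf "'%s',{\"list\": [\n" "$CLASS"
  first=1
  for v in $VERSIONS; do
    [ "$first" -eq 1 ] || printf ",\n"
    first=0
    printf '  {"id": "%s", "name": "Gradle %s", "url": "https://example.org/gradle-%s-bin.zip"}' "$v" "$v" "$v"
  done
  printf "\n]}\n"
} > gradle-installer.jsonp
```

The output has the same shape as the JSONP above: the fully-qualified class name, followed by a newest-first list of id/name/url tuples.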

Write an installer

Next, you write a new extension point implementation for the installer. This code tells Jenkins that you have an auto-installer for this tool. Gradle follows the standard file structure, so there's really no need to override any behaviour of the installer.
Every time the administrator sets up a new Gradle installation, a new GradleInstaller instance will be created, and it gets the ID that you set in the metadata JSON file above. The isApplicable method says that this installer only applies to GradleInstallation; that is, using this installer for Ant doesn't make sense.

public class GradleInstaller extends DownloadFromUrlInstaller {
    public GradleInstaller(String id) {
        super(id);
    }

    @Extension
    public static final class DescriptorImpl extends DownloadFromUrlInstaller.DescriptorImpl<GradleInstaller> {
        public String getDisplayName() {
            return "Install from";
        }

        @Override
        public boolean isApplicable(Class<? extends ToolInstallation> toolType) {
            return toolType == GradleInstallation.class;
        }
    }
}
Make Auto-installer a default option

You can make the auto-installer selected by default when the user adds a new tool installation. This is desirable, since there's really no reason for users to run around installing their own tools. To do so, add the getDefaultInstallers method to your ToolInstallation's descriptor, like this:
public class GradleInstallation extends ToolInstallation {
    ...

    @Extension
    public static class DescriptorImpl extends ToolDescriptor<GradleInstallation> {
        @Override
        public List<? extends ToolInstaller> getDefaultInstallers() {
            return Collections.singletonList(new GradleInstaller(null));
        }
    }
}
That's it. It wasn't that hard, was it?

More complex installation scenarios

What's discussed on this page takes advantage of the stock implementation in Jenkins that's suitable for simple tool installations that only involve unzipping a zip file. If your tool installation scenario is more complex, you can still handle it by extending ToolInstaller directly instead of DownloadFromUrlInstaller. See the JDKInstaller class in the core as a starting point. It involves going through the gated download link via page scraping, choosing the right bundle based on the platform, and then installing the tool by executing an installer.

- Kohsuke Kawaguchi

Tuesday, March 15, 2011

Why Smart, Efficient Backup and Restore Techniques Are Essential with Jenkins Production Server

Tips from Jenkins/Hudson Founder Kohsuke Kawaguchi:

If you're like me or other typical folks out there, you've probably been postponing backups because you have more important things to worry about. But as you surely know, it's very important to have a backup, and better late than never!

In addition to disaster recovery, backups are useful insurance against accidental configuration changes, which might be discovered long after they were made. A regular backup system lets you go back in time to find the correct settings. A key tip to ensure reliable, optimized production operation with Jenkins is to make sure you keep up on backups. Even if you are already running Jenkins, it's not too late to start taking backups.

First, let's look at backup planning.

Jenkins stores everything under the Jenkins Home directory, $JENKINS_HOME (to find the $JENKINS_HOME location, go to the Configure System menu), so the easiest way to back it up is to simply back up the entire $JENKINS_HOME directory. Even if you have a distributed Jenkins setup, you do not need to back up anything on the slave side.

Another backup planning issue is whether to do backups on live instances without taking Jenkins offline. Fortunately, Jenkins is designed so that doing a live backup works fine – configuration changes are atomic, so backups can be done without affecting a running instance.

Now, let's look at how you can optimize backups.

Optimization 1: Back up a subset of $JENKINS_HOME
Although $JENKINS_HOME is the only directory you need to back up, there's a catch: this directory can become rather large. To save space, consider what parts of this directory you really need to back up and back them up selectively.

The bulk of your data, including your job configuration and past build records, lives in the /jobs directory. The /jobs directory holds information pertaining to all the jobs you create in Jenkins. Its directory structure looks like this:

- builds (build records)

- builds/*/archive (archived artifacts)

- workspace (checked-out workspace)
The /builds directory stores past build records, so if you're interested in configuration only, don't back up the builds. Or perhaps you need to keep build records but can afford to throw away archived artifacts (which are the actual produced binaries); you can do this by excluding builds/*/archive. Note that these artifacts can be pretty big, so excluding them can yield substantial savings.

Note that the following directories contain bits that can be easily recreated, so you don't need to include these in the backup:

- /war (exploded war)

- /cache (downloaded tools)

- /tools (extracted tools)

Finally, the workspace directory contains the files checked out from the version control system. Normally these directories can be safely thrown away: if you need to recover, Jenkins can always perform a clean checkout, so there's usually no need to back up your workspace.
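Putting these exclusions together, a selective backup can be sketched with GNU tar. This is only a sketch under stated assumptions: it relies on GNU tar's exclude-pattern semantics, and it fabricates a tiny demo $JENKINS_HOME layout purely so the commands can run anywhere. In practice you would point JENKINS_HOME at your real directory and skip the setup lines.

```shell
#!/bin/sh
# Demo setup: fabricate a minimal $JENKINS_HOME layout for illustration.
# In real use, point JENKINS_HOME at your actual Jenkins home instead.
set -e
JENKINS_HOME=./demo_jenkins_home
mkdir -p "$JENKINS_HOME/jobs/myjob/builds/1/archive" \
         "$JENKINS_HOME/jobs/myjob/workspace" \
         "$JENKINS_HOME/war" "$JENKINS_HOME/cache" "$JENKINS_HOME/tools"
echo '<project/>' > "$JENKINS_HOME/jobs/myjob/config.xml"
echo 'binary'     > "$JENKINS_HOME/jobs/myjob/builds/1/archive/app.war"
echo 'build log'  > "$JENKINS_HOME/jobs/myjob/builds/1/log"

# Selective backup: keep configuration and build records; drop archived
# artifacts, workspaces, and the easily recreated war/cache/tools dirs.
tar czf jenkins-backup.tar.gz \
    --exclude='./jobs/*/builds/*/archive' \
    --exclude='./jobs/*/workspace' \
    --exclude='./war' --exclude='./cache' --exclude='./tools' \
    -C "$JENKINS_HOME" .
```

Restoring is then just a matter of extracting the archive into a fresh directory and pointing JENKINS_HOME at it, as described in the restore section later in this post.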

Optimization 2: Use OS-level Snapshots
If you want maximum consistency in your backups, use the snapshot capability of your file system. Although you can take live backups, they take a long time to run, so you risk capturing different data at different points in time... which may or may not be a real concern.
Snapshots solve this problem. Many file systems let you take snapshots, including Linux Logical Volume Manager (LVM) and Solaris ZFS (which also lets you take incremental backups). Some separate storage devices also let you create snapshots at the storage level.

Now, let's test and restore.

Nothing is worse than thinking you have a backup and then when disaster hits, finding out you can't actually recover. So it's worth testing to make sure you have a proper backup.

The JENKINS_HOME directory is "relocate-able" – meaning you can extract it anywhere and it still works. Here’s the easiest way to test a restoration:

- Copy the backup Home directory somewhere on your machine, such as ~/backup_test

- Set JENKINS_HOME as an environment property and point to backup_test; for example, export JENKINS_HOME=~/backup_test

- Run java -jar jenkins.war --httpPort=9999

This sequence of commands will pick up the new JENKINS_HOME pointing at the backup_test directory. You can use this instance of Jenkins to make sure your backup works. Be sure to specify an unused HTTP port so you don’t collide with the real instance – otherwise the server won’t start!

While Jenkins is not difficult to set up or configure, you will get better results, support more projects and save administration time if you know the tips, tricks and optimal settings that can make your installation function most effectively.

This is just one tip, but I share several more in our article, "7 Ways to Optimize Jenkins" (download). If you prefer slides and my voice, we also have the recorded webinar on the CloudBees resource page (scroll down to bottom), as well as a list of the top questions asked by attendees during the webinar. I'll also be in the San Francisco Bay area on April 5th running a training course for those who want to master Jenkins.

If there are other topics related to Jenkins that you'd like me to address, feel free to leave a comment!

Follow CloudBees:
Facebook Twitter

Monday, March 14, 2011

Eclipse Toolkit for Jenkins and DEV@cloud

Over the last few months, we have received requests asking us if developers could interact with DEV@cloud service through Eclipse.

We have recently published a beta of our Eclipse toolkit that does just that, plus more. The CloudBees Eclipse Toolkit allows developers to manage DEV@cloud and Jenkins CI (formerly known as Hudson CI) through an Eclipse plugin. In addition to managing Jenkins and DEV@cloud, developers can interact with Git/SVN forges on CloudBees through the plugin. You can find details and a video to help you get started here.

- Harpreet

Follow CloudBees:
Facebook Twitter

Tuesday, March 8, 2011

White paper: "7 Ways to Optimize Jenkins"

Back in January, I did a webinar discussing a checklist for production Jenkins deployments. The main content of that webinar is now available in a white paper. Hopefully this makes it easier for more people to get their deployment "right"!

After the first webinar, people gave me various ideas about what they wanted to hear in future webinars, so I'm looking forward to doing more. (And if you have suggestions about what you'd like to hear, please let me know!)

Follow CloudBees:
Facebook Twitter

Monday, March 7, 2011

NoSQL and CloudBees

So we are occasionally asked if CloudBees supports or will support NoSQL databases (we actually use them ourselves internally in a few places).

We can of course support these, via some excellent 3rd parties who provide these as a service themselves.

The two I will mention (which you may not have heard of) are hosted MongoDB and CouchDB services - and both have free plans you can happily try!

Currently you will need to sign up with the one you wish to use, and then bring your credentials over to your CloudBees application or build setup - but you should experience good performance and low latency when accessing these services from the public cloud.

-- Michael Neale, Elite Developer and Architect

Follow CloudBees:
Facebook Twitter

Tuesday, March 1, 2011

CloudBees in the News: Open(?) PaaS

Yesterday, CloudBees was mentioned as an interesting player in an article about the openness of cloud platforms.

In the article, Bharath Chandrasekhar mentions VMware's OpenPaaS as an "open" solution -- as long as you are okay with being locked into the Spring Framework. He notes that VMware's definition of an open PaaS simply means adding support for the Spring Framework. To quote him: "Strictly speaking, one of Trend’s own product lines can become an OpenPaaS provider by adding support to Spring Framework!"

The article is timely as CloudBees is running a webinar on Mar 2nd 10am PST where CloudBees architects will demonstrate running Spring (and Java EE) web apps in the CloudBees PaaS. By the VMware definition we are already an OpenPaaS solution :-).

CloudBees' vision goes beyond VMware's definition: we believe in letting developers code to open standards and open source solutions without being tied to a particular IaaS provider.
Today, the platform lets users build JVM-based applications using Jenkins CI and deploy them to the CloudBees PaaS.

Join us in the webinar to see how you can build and deploy Spring/Java EE web apps in an open PaaS!
- Harpreet Singh, Senior Director of Product Management

Follow CloudBees:
Facebook Twitter