Monday, January 30, 2012

Who Will Make the Best PaaS this Sunday?

Guess the score, win a Kindle Fire...



Everyone around here is really buzzing with excitement about the big game this weekend. The CloudBees Woburn Hive is filled with rabid Patriots fans (being Boston-based and all), but despite our inherent bias, we want to give everyone outside the New England area a chance to express enthusiasm for their favorite team. So... support your team (and Platform as a Service and CloudBees) and keep the buzz going in one of these fun ways:

1)    Twitter:  Visit our SuperPaaS Game page and Tweet with hashtag #SuperPaaSPats or #SuperPaaSGiants to support your favorite team. Make sure to include your score prediction in the tweet!

2)    Facebook: Go to our Facebook page, “like” CloudBees, and leave a comment with your forecast of the score.

Encourage your friends to do the same, because the closest score guess wins a Kindle Fire from CloudBees.

Help us keep friendly rivalry and the SuperBuzz going! And may the best team win...

Contest update:
Here is our winner - Fran Garcia, Spain - with his new Kindle (we really like the webpage he is currently surfing)! Congratulations, Fran!

The Fine Print
-    Predictions must be entered before 6pm EST on Game Day, February 5th, in order to be eligible. (The game starts at 6:30pm EST.)
-    On Facebook, you must “like” CloudBees in order to win.
-    On Twitter, you must include the #SuperPaaSGiants or #SuperPaaSPats hashtag in order to win (otherwise we can’t find you).
-    Please tweet and comment as much as you like, but each person gets only one score prediction. Your last one is the one that counts!
-    If the winner comes from Twitter, we'll notify you by replying @you, so please keep an eye out and reply back/follow us so we can contact you. If we don’t hear from you after 3 days, we’ll pick someone else.
-    If there’s a tie between two people, we’ll give out two Kindles. If more than two folks are tied, we’ll draw two names to determine the winners.
-    The closest prediction must have correctly chosen the winning team (tweeted on either hashtag) and have the closest point spread (differences totaled over both teams). For example, let's say the final score is 24-21 Patriots. If one person predicted the Patriots to win 21-20 (3 + 1 = 4 points cumulative difference), and someone else predicted 24-18 (0 + 3 = 3 points cumulative), the second would win.

The Really Fine Print
-    Kindles can only be won by someone who lives in a country to which the US is authorized to ship technology. If you live in a country under U.S. embargo, we're very sorry, but there's not much we can do about this.
-    You must be 18 years old or older to win (20 or older in Japan).
-    The winner is responsible for any federal, state and local taxes, import taxes and fees that may apply. Blah blah blah. We have to say this.
-    This little bit of fun is administered by CloudBees, Inc., 400 Trade Center, Suite 4950, Woburn, MA 01801, +1.781.404.5100. If you'd like to send us feedback or have questions, drop us a note. And no, we do not accept bribes to rig it. :)



Controlling What You See (with the View Job Filters Jenkins Plugin)

Overview
When you first start using Jenkins for continuous integration, you will most likely have only a few projects, and the default main screen will suit your needs perfectly.

After a while, however, you will have more and more jobs in your Jenkins server and the default screen may no longer be adequate.

Stable Release Version
The latest release is 1.18, released in September 2011; it has no known issues.

Requirements for Plugin Use
Jenkins 1.398 or newer

Step-by-Step Instructions on How to Use the View Job Filters Plugin:


Installation
  1. Go to your Jenkins instance's root page.
  2. If your Jenkins instance has security enabled, log in as a user who has the Overall | Administer permission.
  3. Select the Manage Jenkins link on the left-hand side of the screen.
  4. Select the Manage Plugins link.
  5. On the Available tab, select the View Job Filters Plugin and click the Download and Install button at the bottom of the page.
  6. (If you are using a version of Jenkins prior to 1.442) Restart Jenkins once the plugins are downloaded.
Configuration
This plugin does not have any global configuration options; instead, it adds additional functionality to the views available within Jenkins.

If you have not created any views, your system will be using the default “All” view. This view is read-only (see the links at the bottom of this page for how to edit your “All” view), so in order to get started with this plugin, you first need to create a view.

  1. From your Jenkins instance's root page, there is a tab called “+” at the end of all the tabs. Click on that tab to create a new view.
  2. Give the view a name, and select the type of view you want to create. View types are an extension point that other plugins can contribute. Jenkins has one built in view type, List View.
If you have already created a view, just select that view and click the “Edit View” button. In either case, you should now be at the following screen:

View Configuration screen with the View Job Filters plugin installed, while in the process of adding a view job filter.

Jenkins has some rudimentary filtering built in: a status filter, a manual set of check-boxes, and a regular expression. So by way of comparison, and so that you know where any issues you encounter are originating, here is the same screen without the View Job Filters plugin installed:

View Configuration screen without the View Job Filters plugin.

To start filtering the jobs in your view, you click on the Add Job Filter button, and select the type of filter to add. There are multiple types of filters, which I will describe in a moment, but first I need to explain how the filters work.

A view starts off with an empty set of jobs. To this are added all the manually selected jobs, followed by any jobs matched by the regular expression; this initial set is then filtered by the built-in status filter. The View Job Filters then operate on the resulting set, adding and removing jobs in sequence. Most of the filters provided by View Job Filters have a “Match Type” field, which determines whether they add jobs to, or remove jobs from, the set. The filters apply in the order in which you define them (and you can re-order them by drag and drop). Once all the filters have been applied, you have the final set of jobs that will be displayed in the view.

The Match Types are:
  • Include Matched - adds to the set any jobs that the filter rule matches and that are not already in it.
  • Include Unmatched - adds to the set any jobs that the filter rule does not match and that are not already in it.
  • Exclude Matched - checks every job in the set and removes those that match the filter rule.
  • Exclude Unmatched - checks every job in the set and removes those that do not match the filter rule.
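
If the set mechanics are easier to follow in code, here is a minimal, hypothetical Java sketch (these are not the plugin's actual classes) of a filter chain operating on the running set of jobs:

    import java.util.*;

    public class FilterChainSketch {
        enum MatchType { INCLUDE_MATCHED, INCLUDE_UNMATCHED, EXCLUDE_MATCHED, EXCLUDE_UNMATCHED }

        interface JobFilter {
            boolean matches(String job);
            MatchType getMatchType();
        }

        // Applies each filter, in order, to the set built by the built-in selection.
        static Set<String> apply(Set<String> initial, List<String> allJobs, List<JobFilter> filters) {
            Set<String> view = new LinkedHashSet<String>(initial);
            for (JobFilter f : filters) {
                for (String job : allJobs) {
                    boolean m = f.matches(job);
                    switch (f.getMatchType()) {
                        case INCLUDE_MATCHED:   if (m)  view.add(job);    break; // add matching jobs
                        case INCLUDE_UNMATCHED: if (!m) view.add(job);    break; // add non-matching jobs
                        case EXCLUDE_MATCHED:   if (m)  view.remove(job); break; // drop matching jobs
                        case EXCLUDE_UNMATCHED: if (!m) view.remove(job); break; // drop non-matching jobs
                    }
                }
            }
            return view; // the final set of jobs displayed in the view
        }
    }

Each filter sees the set exactly as the filters before it left it, which is why the ordering matters.
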
This is a very powerful mechanism that allows you to create a view containing only the jobs you want to see... though at times it can seem like you are back in school doing set theory while trying to figure out exactly how to cut those Venn Diagrams to get at the set of jobs that you want to see.

The View Job Filters plugin exposes an extension point for other plugins to add additional types of filters, but here are the types of filters that are currently built into the plugin:
  • All Jobs: This adds all the jobs to the view. You could select all the checkboxes, or specify a regex of “.*”, but this is by far the easiest way to start with all the jobs, whether the filters you add afterwards will trim the set back or you want to customize the “All” view.
  • Build Statuses Filter: This adds/removes jobs based on their current status, i.e. whether they are currently building; were never built; or are currently in the build queue. By using the Include/Exclude Unmatched match type you can invert the selection, i.e. whether they are currently not building; have been built at least once; or are not in the build queue.
  • Build Trend Filter: This adds/removes jobs based on recent events. In some cases the Build Statuses Filter or the Job Statuses Filter will be a simpler way to get the same result. Some of the criteria you can construct with this filter are:
    • All jobs that ran in the last 5 hours;
    • All jobs that have been unstable for the last 7 builds;
    • All jobs that have at least one stable build in the last 10 days;
    • All jobs that have not been run in the last 30 days;
    • All jobs that have been triggered by an SCM change within the last week.
  • Job Statuses Filter: This adds/removes jobs based on the job status, i.e. Stable; Failed; Unstable; Aborted; Disabled.
  • Job Type Filter: This adds/removes jobs based on the type of job. For example, I use this filter to identify any Maven 2/3 project type jobs so that I can give out to the people creating them, thereby continuing my long-standing disagreement with Kohsuke over whether the Maven 2/3 project type is a good idea or not ;-)
  • Logged-in User Relevance Filter: This adds/removes jobs based on their relevance to the logged in user. For example: matching jobs that were started by the user, or where the user committed changes to the source code of the job; matching jobs with a name that contains the user's name or login id.
  • Other Views Filter: This adds/removes jobs based on whether they are in a different view's set of jobs. Note: you can create a circular logic of death if View A has an Other Views Filter based on View B, which in turn has an Other Views Filter based on View A. There are longer and shorter routes to such circularity, but it all amounts to a dog chasing its tail.
  • Parameterized Jobs Filters: This allows adding/removing parameterized jobs based on whether regular expressions match the job parameters. If you need this one, you are doing very fancy stuff altogether. An example use case: if you have a job parameter that selects the database the job runs its tests against, you could create a view that selects all the jobs targeting a specific database.
  • Regular Expression Job Filter: This allows using a regular expression to match against one of: the job name; the job description; the SCM configuration (e.g. to select a specific branch); the email recipients; the Maven configuration; or the job schedule.
  • SCM Type Filter: This allows filtering based on the SCM type, for example to identify all the projects still using CVS, or all the projects that have migrated to Git.
  • Unclassified Jobs: This allows finding all the jobs that are not in a view already. Note: you can create a circular logic of death, so if you are using this filter, make sure you put it in one and only one view.
  • User Permissions for Jobs: This allows filtering jobs based on the logged-in user's permissions for the job, i.e. whether they can configure the job, build it, or access its workspace.
Once you have created your chain of filters, you can just save the view to see the set of jobs in that view.

Tips & Tricks, and How to Use It on DEV@cloud/RUN@cloud
  • Start simple - this is a very powerful plugin, and you can build very complex chains of filters. For example, a view of all Matrix build jobs that are in the build queue, have at least one non-stable build in the last week, are currently stable, were started by the current user, are not in view B, have a build parameter with a name matching “[dD]atabase” and a value of “test2”, have an SCM configuration matching “svn:.*/foobar/.*”, and cannot be configured by the currently logged-in user may make perfect sense to you. But start with something a little simpler first and work your way up to that complexity... you really need hundreds of jobs before very complex filters become necessary.
  • Unclassified Jobs and Other Views Filter should be used with care. Both of these filters have the capacity to create circular logic loops. If you use one of them in one and only one view, you have nothing to worry about, but once you think you need to use another one, break out your set theory math book from school to make sure you won't be creating a circular logic loop.
  • When you have a system with more than about 25-50 jobs, the default “All” view can become useless, as it can be hard to find the jobs you want. The Jenkins Wiki describes how you can edit or replace the “All” view. Alternatively you can keep the “All” view but just pick a different view as the default view from the main Jenkins configuration screen. It can be useful to have the default view just show jobs that are relevant to the logged in user.
  • The View Job Filters plugin is not currently available on DEV@cloud, but it will be shortly. Installation will be just as it is for a standalone Jenkins instance.
  • If you are creating views which are filtered based on job parameters, or based on being relevant to the currently logged in user, you may want to use the Build Filter Wrapper column feature. Some examples might help:
    • You create a view which consists of all jobs that the currently logged in user committed to in the past week. You use the Build Filter Wrapper column to replace the default Status, Weather and Last Success/Failure columns so that they only display the results from the builds that the currently logged in user committed to. That way the logged in user can see which builds they broke, as opposed to builds that they committed to recently but that didn't break.
    • You create a view which consists of all jobs that have pushed code into the production server (i.e. where the “deployment target” parameter was equal to “production” or some such criteria). You use the Build Filter Wrapper column to replace the default Status, Weather and Last Success/Failure columns so that they only display the results from the actual deployments into production and the subsequent deployments into test are filtered out of the view... that way if the screen is full of blue balls and sunny skies, you are a happy camper!
Known Issues
There are no known issues with the current release.

Jenkins Support Comes to WANdisco’s uberSVN Platform


If you use uberSVN, we have good news - now it's even easier to use Jenkins with it! Last year, our friends at WANdisco announced that Jenkins is fully integrated with uberSVN and available in the uberAPPS Store.

WANdisco's uberSVN is a freely available, open ALM tool that transforms Apache Subversion into a versatile platform designed for social coding. uberAPPS provides an array of tested and certified developer tools – both free and paid – in one user-friendly location. Here you can quickly download a version of Jenkins that comes pre-configured to work with Subversion and uberSVN (if you want to see a great overview/demo, go here). uberSVN also gives you "one throat to choke" for products, services, and support.

Now, backed by CloudBees expertise, uberSVN users can add advanced Jenkins support to the mix, giving them peace of mind when they depend on Jenkins for their crucial development processes. Since 82% of the Jenkins users we recently surveyed told us they consider Jenkins mission-critical, backing your Jenkins with formal support is a very good thing!

WANdisco's Jenkins support includes:
  • 24-by-7 worldwide coverage
  • Online, email and phone support
  • Named support contacts
  • Online case tracking
  • Access to highly experienced Subversion and Jenkins support staff
  • One-hour response times with a Platinum or Platinum Plus package
With our team of world-renowned Jenkins experts, including Jenkins founder Kohsuke Kawaguchi, CloudBees provides higher-level support for any advanced technical issues that uberSVN/Jenkins users may encounter.

Not using uberSVN? No worries, you can still get support for Jenkins through a Jenkins Enterprise by CloudBees subscription.

Want to learn more? Visit the WANdisco blog or view the press release. Also, we'll be announcing a Jenkins Tips training webinar soon, so stay tuned!


Thursday, January 26, 2012

Writing Programs that Drive Jenkins


One of the interesting discussions during SCALE 10x was about using Jenkins as a piece of bigger software. The person I spoke with was interested in using Jenkins to run some business-analysis operations, and wanted a separate interface for business-oriented people. This is an emerging but common theme I hear from many users. Another company, in San Jose, actually built a workflow engine that uses Jenkins as a piece of a bigger application (aside from the actual build and test, the workflow involves reviews, approvals, etc.), and GitHub's Janky can be classified as one such app, too.

This is something I have always believed in: every piece of software needs to be usable by a layer above. Or put another way, every piece of software should be usable as a library.

So in this post I'm going to discuss various ways you can programmatically drive Jenkins.

Let's start with the REST API of Jenkins. For most of the data Jenkins renders as HTML, you can access an XML version and a JSON version (as well as a few other formats, like a Python literal fragment) by adding /api to the page URL (see http://ci.jenkins-ci.org/api for an example). Those pages discuss further REST API calls where applicable. For example, you can POST to certain URLs to create or update job definitions.

If you are going to use the REST API, you might find Jenkins auto-discovery useful. You can discover Jenkins on the local subnet via UDP broadcast or DNS multicast. There's also a distinctive HTTP header on the top page of Jenkins ("X-Jenkins", carrying the version number) that allows your application to verify it's talking to a real Jenkins server, as well as an instance ID that allows you to identify Jenkins instances. These features allow you to build smarter applications.

For a Jenkins instance protected by some authentication mechanism, you can use the user name + API key with HTTP basic auth (and I want to add OAuth support here, too).
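
Putting those pieces together, here is a minimal Java sketch (the host, user name, and API key are hypothetical placeholders) that authenticates with basic auth, checks the version header, and fetches the JSON flavor of the top page:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class JenkinsRestSketch {
        public static void main(String[] args) throws Exception {
            URL url = new URL("http://jenkins.example.com/api/json");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            // User name + API key over HTTP basic auth (only needed on a protected Jenkins).
            String auth = "alice:0123456789abcdef0123456789abcdef";
            conn.setRequestProperty("Authorization", "Basic "
                    + javax.xml.bind.DatatypeConverter.printBase64Binary(auth.getBytes("UTF-8")));
            // The version header confirms this is really a Jenkins server.
            System.out.println("Version: " + conn.getHeaderField("X-Jenkins"));
            BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream(), "UTF-8"));
            for (String line; (line = in.readLine()) != null; ) {
                System.out.println(line); // the JSON rendering of the top page
            }
            in.close();
        }
    }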

The REST API is great -- it's programming-language agnostic, and it is also convenient that neither the server nor the client has to trust the other. But these APIs are bound by the request/response nature of the HTTP protocol.

Another great integration point for Jenkins is the CLI. This uses the same underlying technology that drives the master/slave architecture, which makes command-line clients a lot more intelligent. For example, the REST API exposes a URL you can POST to in order to start a build. But the equivalent CLI command can block until the build is complete (with the exit code indicating the status), or run SCM polling first and proceed to build only when a change is detected, or let you perform a parameterized build with multiple file uploads very easily. For a protected Jenkins, the CLI supports SSH public key authentication to securely authenticate the client.
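
For example, a parameterized build that blocks until completion looks something like "java -jar jenkins-cli.jar -s http://your-jenkins/ build my-job -s -p TARGET=staging" (treat the job name and parameter as hypothetical, and check the exact flags against the CLI's built-in help for your version).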

A slightly different version of the CLI is "Jenkins as SSH server". Jenkins speaks the server side of the SSH protocol, and allows regular SSH clients to execute a subset of CLI commands. In this way, you don't need a Java runtime installed on the client side to drive Jenkins.

These two integration APIs are often much easier to script against than the REST API.

Those APIs are available for non-privileged users, and they are great for small scale integrations. But for more sophisticated integration needs, we have additional APIs.

One is REST API access to the Groovy console, which allows administrator users to run arbitrary Groovy scripts inside the Jenkins master JVM (you submit the script as the POST payload and get the output back as the HTTP response). This lets you tap into all of the Jenkins object models. Unlike the plain REST API, here you ship the computation to the data, so you can get a lot done in one round-trip. You can do the same through the CLI, which also gives you access to the stdin/stdout of the CLI.
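
As a sketch (the host and credentials are illustrative; the console's plain-text endpoint is /scriptText, and if CSRF protection is enabled you will also need to send a crumb):

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.net.URLEncoder;

    public class GroovyConsoleSketch {
        public static void main(String[] args) throws Exception {
            // The plain-text flavor of the Groovy console; requires administrator credentials.
            URL url = new URL("http://jenkins.example.com/scriptText");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            conn.setRequestProperty("Authorization", "Basic "
                    + javax.xml.bind.DatatypeConverter.printBase64Binary(
                            "admin:0123456789abcdef".getBytes("UTF-8")));
            conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
            String script = "println(jenkins.model.Jenkins.getInstance().getItems().size())";
            OutputStream out = conn.getOutputStream();
            out.write(("script=" + URLEncoder.encode(script, "UTF-8")).getBytes("UTF-8"));
            out.close();
            // The HTTP response body is whatever the script printed on the master.
            BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream(), "UTF-8"));
            for (String line; (line = in.readLine()) != null; ) {
                System.out.println(line);
            }
            in.close();
        }
    }
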
The other sophisticated integration API I want to cover is the remoting API, which does Java RPC (not to be confused with the remote API, a synonym for the REST API). The remoting API is the underlying protocol we use for master/slave communications, and it revolves around shipping a closure (and the code associated with it) from one JVM to another, executing it, and getting the result back. If your application runs elsewhere, you can establish a remoting channel with the Jenkins master, prepare a Callable object, have the master execute it, and receive the result back in your JVM.

There's an example of this available. You bootstrap it the same way the CLI client talks to the master, then you "upgrade" the communication channel by activating the remote code download support (which requires the administrator privilege, for obvious reasons).
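
In code, the shape of it is roughly this (a sketch, not a drop-in program; establishing the Channel is the part the CLI-style bootstrap handles for you):

    import hudson.remoting.Callable;
    import hudson.remoting.Channel;

    public class RemotingSketch {
        // Ships the closure below to the master, runs it there, and returns the result.
        static int countJobs(Channel channelToMaster) throws Exception {
            return channelToMaster.call(new Callable<Integer, RuntimeException>() {
                public Integer call() {
                    // This body executes inside the Jenkins master JVM,
                    // with the full Jenkins object model available.
                    return jenkins.model.Jenkins.getInstance().getItems().size();
                }
            });
        }
    }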

The great thing about this is that your data remains a rich Java object model all the way through; you never have to translate it into an external serialization format like XML or JSON. This greatly simplifies your program.

I think this list covers all the major integration APIs that Jenkins offers. If you are building any interesting applications that use Jenkins as a building block, please share your experience so that we can make it better!

Kohsuke Kawaguchi
Jenkins Founder, Elite Developer & Architect

Wednesday, January 25, 2012

Painless Maven Builds with Jenkins

Overview

One of the great things about Maven is that it provides a standard project and build layout, along with a standard set of "goals." Not only does this make it easier for developers to get up to speed on a new project, but it also allows Jenkins to provide special support for Maven projects, reducing the configuration needed while enhancing the build report automatically. That is the goal of the Maven plugin for Jenkins.

The central feature of the Maven plugin for Jenkins is the Maven 2/3 project type. Thanks to the Maven project object model (POM), this project type can automatically provide the following features:
  • Archive artifacts produced by a build
  • Publish test results
  • Trigger jobs for projects which are downstream dependencies
  • Deploy your artifacts to a Maven repository
  • Breakout test results by module
  • Optionally rebuild only changed modules, speeding your builds
All of the above can be accomplished with free-style builds, but they require more configuration on your part.

Requirements for Plugin Use

All Jenkins releases have the Maven plugin included. You must also have at least one Maven installation.

How to Use It

First, you must configure a Maven installation (this step can be skipped if you are using DEV@cloud). This can be done by going to the system configuration screen (Manage Jenkins -> Configure System). In the "Maven Installations" section, 1) click the Add button, 2) give it a name such as "Maven 3.0.3", and then 3) choose the version from the drop-down.


Now, Jenkins will automatically install this version any time it's needed (on any new build machines, for example) by downloading it from Apache and unzipping it.

Next, create a new Maven job by 1) clicking "New Job" in the left-hand menu, 2) giving it a name, and 3) choosing "Build a Maven 2/3 project".



You will then be presented with the job configuration screen. On this page, you need to provide 1) the SCM and 2) the Maven goals to call. That's it! Choose the SCM you want to use (we'll use Git), and then specify the Maven goals to call. We'll use "clean site install" so we can see the full effect.

...


Now, just click Build Now, then click on the progress bar in the left-hand "Build Executor Status" to watch Jenkins install Maven, check out your project, and build it using Maven. The build output should look something like this:


Now that the project is built, we can navigate to the detail page for our Maven module (project page -> Modules link on the left). For each module, Jenkins displays:




  1. A link to the module's workspace
  2. The module's artifacts
  3. A clickable test result trend
  4. Recent changes (just for that module!)

What's unique about the Maven job type is that these links and reports are automatically broken down by module for you. You can even customize notifications for this module by clicking the "Configure" link on the left.

Finally, if we had configured Jenkins to build any downstream dependencies of this project, they would automatically start building after this build completed.

Tips

When running multiple builds simultaneously on the same slave, they will share the same local Maven repository. This can cause problems if two builds are trying to update the same artifact simultaneously, since local Maven repositories are not designed for concurrent access. One long-standing solution to this is to use the "Use private Maven repository" option in the "Advanced" section of the Maven build. This will create an isolated local Maven repository for the job (in $WORKSPACE/.repository) which prevents these problems. Jenkins releases since 1.448 let you specify a Maven repository per executor, which is a more efficient way to solve the same problem.
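
(Under the hood, the private-repository option is roughly equivalent to launching Maven with a job-local repository path, along the lines of "mvn -Dmaven.repo.local=$WORKSPACE/.repository clean install"; the isolation costs you some disk space and artifact re-downloading, which is why the per-executor repository is the better trade-off when it's available.)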

If you are using the Maven plugin, you should also investigate the M2 Release Plugin for automating releases with one click. The M2 Extra Steps plugin lets you run arbitrary build steps before and after your build. Finally, the new Config File Provider plugin lets you maintain different settings.xml files that can be referenced by your Maven builds.

Maven is a trademark of the Apache Software Foundation.

Ryan Campbell, Developer

Postgres in the Cloud Goodness with CloudBees


Today, EnterpriseDB announced the availability of Postgres Plus Cloud Database. Some of the advantages of this DBaaS (Database as a Service) are point-and-click provisioning, online backups, automatic scaling and failover.

Developers who prefer Postgres can now use this service with the CloudBees Platform as a Service (PaaS) to build database-backed applications.

Configuring web applications (on CloudBees) to talk to the database requires minimal configuration changes. Developers end up just changing their datasource configuration settings.
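
For instance, a plain JDBC smoke test against the new database amounts to little more than pointing the driver at the new URL (every connection detail below is a hypothetical placeholder; substitute the values from your Postgres Plus Cloud Database console):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class PostgresSmokeTest {
        public static void main(String[] args) throws Exception {
            Class.forName("org.postgresql.Driver"); // the Postgres JDBC driver
            Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://db.example.com:5432/mydb", "myuser", "secret");
            Statement st = conn.createStatement();
            ResultSet rs = st.executeQuery("SELECT version()");
            if (rs.next()) {
                System.out.println(rs.getString(1)); // prints the PostgreSQL server version
            }
            conn.close();
        }
    }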

I'd recommend that developers adapt an existing application to play with this database. The delta-upload feature in CloudBees (only changed files are uploaded during redeploys) makes this process fairly painless (dare I say enjoyable :-)).

A detailed article that builds a web application from scratch, configures Maven to pull in the right Postgres libraries, and uses JPA and JDBC is available here. The source code of the application is available on GitHub. Feel free to use the code to get started quickly.

- Harpreet Singh, Senior Director of Product Management


Tuesday, January 24, 2012

Securing the Cloud: Part 2 - Managing Security Around Remote Login and Development


In Securing the Cloud: Part 1, we looked at the ways in which developers at CloudBees manage credentials. In today's post, we'll look at how we manage security around remote login and remote development. 
Like the previous post, because a large portion of our infrastructure is in the Amazon Web Services environment (AWS), this post will specifically focus on that platform.
Remote Server Login

One major advantage in using the CloudBees Platform as a Service (PaaS) is that you do not have to manage servers anymore. Using our platform, developers develop, deploy and scale applications with minimal server interaction.
However, behind the scenes, CloudBees engineers do need to manage the server lifecycle: not only for instances that run customer code, but for the web proxying layer, databases, Git/SVN repos, and many other administrative systems. In the previous post, we discussed the credentials that allow developers to see, and perhaps manage, the lifecycle of these servers. However, we also need to manage the ability to log in to these machines remotely to perform maintenance or fix problems that may occur. In addition, we need to limit traffic from the outside world in a way that allows applications to work, but does not allow malicious attempts to break into the systems.
 
Locked Down Access

Our first strategy is to make prodigious use of EC2 security groups and rules. Each of our instances serves a particular role, and as such is tied to a security group that reflects that role. Our application servers, our proxying layer, and our databases each have separate EC2 security groups attached to them. On the DEV@cloud side, our Jenkins master instances, the executor machines, and the proxying layer also have their own EC2 security groups.
Within these security groups we restrict outside traffic to only the ports needed, and also limit internal traffic between the EC2 security groups where things need to "talk" to each other. For example, our web proxying layer allows outside traffic on ports 80 and 443 - and that's it. Our application servers don't allow outside traffic at all, and only accept connections to specific ports coming from the web proxying layer. This tiered, locked-down approach ensures we don't succumb to attackers looking for a backdoor into our environment.
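
(With the classic EC2 command-line tools, a rule like that looks roughly like "ec2-authorize web-proxy -P tcp -p 443 -s 0.0.0.0/0"; the group name here is hypothetical, and the exact syntax depends on the tooling you use.)
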
Backdoors 

Of course, we still DO need backdoors into the systems in order for our own team to get in and perform administrative tasks. Most commonly this includes remote login (SSH) to a server, but also includes access to backend web interfaces to monitor application health or observe application metrics in order to solve issues.
To keep these backdoors as secure as possible, we hide them all behind a Virtual Private Network (VPN) that is accessible only to CloudBees developers. We use OpenVPN, a userspace SSL VPN that tunnels traffic over UDP. Each developer who needs access is given a private key for the VPN. Once established on the VPN, the developer has access to the ports needed to get into the system.
 
Note that this doesn't mean they automatically have access INTO the systems; it just means they have access to the mechanisms for getting into the systems. Case in point: once on the VPN, developers have access to port 22 (SSH) on our various machines. However, this still doesn't mean they have the keys to actually log in to those systems - that is a separate credentialing and distribution mechanism, handled on an as-needed basis.
This two-layer approach gives us a high level of security, while still maintaining usability for our development team.
Handling Problems 

While it provides security, the VPN can still be a source of friction. Maintenance, or an unplanned outage of the VPN itself, can halt developer progress across the entire system. In a way, the VPN becomes a single point of failure for our team's ability to handle system-level issues, should they occur.
To handle this, we allow our administrators to make temporary rule changes to the EC2 security groups. This keeps work on system issues moving if the VPN itself becomes a bottleneck to progress. As an example, they can open SSH access to a specific external IP address a developer is using, letting that developer log in while bypassing the VPN. This change can only be made by an administrator.
In addition, our security group rules are monitored by an external script on a nightly basis. The script compares the state of the security group rules against a known-good state stored in a Git repository; any deviations are noted and an email is generated. This lets all administrators keep tabs on rule changes and ensure "temporary" changes get reverted, or are made permanent by adding them to the Git repository of "good" rules.
We feel that our VPN approach, coupled with continuous auditing of security group rules against a known standard, provides us with a very high level of overall security around external facing access into our critical infrastructure. This, in turn, provides our customers with the highest levels of security against intrusion and potential data theft.
In my third and final post on the topic of security, we'll look at how we manage credential access to external services that developers may need to use.
-- Caleb Tennis, Elite Developer
Read Parts 1 and 3 in Caleb's Securing the Cloud blog series.

Wednesday, January 18, 2012

You Ain’t Nothin’ but a Clound-Dog… with an iPad

In late December, we launched a little challenge: publish a sample "Hello Java in the Cloud" application on the CloudBees PaaS, which takes only minutes, and, if you're lucky, win an iPad 2! If you did something creative with your app, you got entered three times to win.

Well, Hannu Leinonen in Helsinki got lucky! His app, featuring a face that can break hearts, got entered three times for coolness, and the third entry scored the big prize.


Even better, for every app deployed and tweeted, CloudBees promised to donate $5 to the International Committee of the Red Cross… and for every additional tweet (by anyone), another $1. Many thanks to all of you who participated! Thanks to you, we’re donating several hundred $$ to the Red Cross!

Special recognition also goes out to Fran Garcia, who not only created a clever app, but who raised the most money for the Red Cross with 80 retweets!


If you missed this round of the challenge, no worries – stay tuned because we’ll be doing a similar one next month! This time we'll feature cats, cloud magic and more tweets…


Tuesday, January 17, 2012

Emma Plugin for Jenkins: Easy Code Coverage Reports


Overview

Data lovers, rejoice: Jenkins and Emma can help satisfy your urge to quantify your tests' code coverage. Behold, the Emma Plugin for Jenkins. For the uninitiated, code coverage (also called "test coverage") is a measure of how well a test suite exercises a given code base. These tools report what percentage of your packages, classes, methods, or lines of code are covered by your automated tests. While good code coverage is not necessarily an indicator of quality, poor code coverage often correlates with frequent regressions and a fragile code base (results may vary).

Emma is a free code coverage tool written in Java, for Java. Emma generates code coverage reports based on automated (or even manual!) tests of instrumented Java code. The Maven plugin for Emma will automate the code instrumentation, run the tests and generate the reports for you, so that you only have to type "mvn emma:emma" at the command line to get a coverage report. There are even Ant tasks for Emma, in case your code base is stuck in 2001. Likewise, a Gradle plugin is also available. (Hail, programmer of the future, does the world end in 2012?).

It is important to understand that Jenkins does not really know how to run Emma for you (a common misconception). The Emma Plugin for Jenkins is a reporting plugin. This means it runs after the build and tests are complete, in order to process the Emma output into a form which Jenkins understands. You are responsible for invoking Emma for your build using one of the approaches outlined above.

Stable Release Version

The current stable release is 1.26.

Requirements For Use

  • Jenkins 1.398 or newer.
  • A build which generates the Emma report using one of the approaches above.

Instructions

You can install the Emma plugin by going to your Update Center (click "Manage Jenkins", then "Manage Plugins") and selecting the Emma Plugin:

Then click the Install button.

Next, edit your job. In the Post-build Actions section, check "Record Emma coverage report". In the text box provided, enter the path-like pattern that tells Jenkins where to look for the reports (for instance, "**/coverage.xml"). If you leave it blank, Jenkins will search your entire workspace for the coverage.xml generated by Emma (which may take a while if your workspace is large!).


Once you run your build, your job's index page will show a graph plotting your code coverage over time; you'll see it change with each build as you add code and tests.

This graph, like all standard Jenkins graphs, lets you click on the graph to drill down to a specific build. When you do, you'll see the Emma report for that build:


You can then drill down to individual classes, but coverage is only reported at the method level. Ideally, the plugin would let you view the source file with individual lines highlighted to show line-level coverage, but it does not support that yet. (Sounds like a great feature for someone to contribute to this plugin!)

Alternatives

There are a few alternatives to Emma. Cobertura is another open source code coverage tool for Java, which also (surprise!) has a Jenkins plugin. Clover is a proprietary coverage engine from the fine folks at Atlassian, and it too has a Jenkins plugin. Finally, Sonar also provides a view into Emma code coverage results and helps you track coverage over time, along with lots of other juicy metrics about your code. Even better, you can get a Sonar instance integrated with your DEV@cloud Jenkins instance, making it even easier to get started tracking your code coverage with Jenkins and Emma.

Ryan Campbell
Developer
