Thursday, June 28, 2012

Don't Shoot the Messenger (the PMD Plugin)

by Stephen Connolly, CloudBees
Stephen has nearly twenty years experience in software development. He is involved in a number of open source projects including Jenkins. Stephen was one of the first non-Sun committers to the Jenkins project and is the person directly responsible for the weather icons. Stephen lives in Dublin, Ireland - where the weather icons are particularly useful. Follow Stephen on Twitter and on his blog.


For the Java developer, there are three real go-to tools for static analysis. On one end of the spectrum you have Checkstyle, and on the other you have FindBugs. Checkstyle looks at your source code and compares it to the coding style rules you have defined. FindBugs looks at the compiled byte code for patterns that are usually associated with bugs. Sitting firmly in the middle of all of this is PMD, which looks for patterns in your source code... a kind of hybrid between FindBugs and Checkstyle.

Each tool has its advantages, and in general I just use all three... as long as I make sure I am not using conflicting rulesets across the tools (i.e., it is not a good idea to have Checkstyle enforce the opposite of PMD).

PMD's sweet spot is programming style... as opposed to coding style... though programming style is somewhat harder to define. 

Stable Release Version
The latest release is 3.28, released in March 2012.

Requirements for Plugin Use
This plugin requires Jenkins 1.409 or newer, as well as the Analysis-Core plugin version 1.41.

Step-by-Step Instructions on How to Use the PMD Plugin


  1. Go to your Jenkins instance's root page.
  2. If your Jenkins instance has security enabled, log in as a user who has the Overall | Administer permission.
  3. Select the Manage Jenkins link on the left-hand side of the screen.
  4. Select the Manage Plugins link.
  5. On the Available tab, select the PMD Plugin and click the Download and Install button at the bottom of the page. (All the required dependent plugins will automatically be downloaded for you.)
  6. (If you are using a version of Jenkins prior to 1.442) Restart Jenkins once the plugins are downloaded.


Before you can use this plugin you must ensure that your job is generating PMD reports. The Jenkins plugin will not run PMD for you. It will report the results that your build produces.

With Maven-based projects, this is usually a case of ensuring that the maven-pmd-plugin is executed during your build. With ANT-based projects, you will need to ensure that your build invokes the PMD ANT tasks, while other build systems will have their own integrations.
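With Maven, for example, a minimal way to produce the report is to invoke the plugin's pmd goal directly (a sketch assuming the standard project layout and the plugin's default output path):

```shell
# Runs PMD and writes target/pmd.xml, which the Jenkins plugin then collects.
mvn pmd:pmd
```

If you already bind the maven-pmd-plugin to a lifecycle phase in your POM, a plain `mvn verify` (or whatever phase you bound it to) will produce the same report as part of the regular build.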

Usually the PMD results will be saved in an XML file called pmd.xml. If that is the case, then enabling the plugin is just a matter of selecting the Publish PMD analysis results checkbox in the Post-build Actions:

With the Freestyle project type there is a text box where you can enter the filename pattern that the plugin will use to find the PMD XML results. The text box assumes that the pattern is **/pmd.xml unless you enter an alternative pattern, so 99 times out of 100 you can just leave the text box empty.
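To convince yourself what that default pattern picks up, here is a small simulation of a multi-module build layout (the module names are made up):

```shell
# Fake a two-module build, each module producing a target/pmd.xml report,
# then list what a **/pmd.xml-style recursive search would match:
mkdir -p module-a/target module-b/target
touch module-a/target/pmd.xml module-b/target/pmd.xml
find . -name pmd.xml | sort
```

Both per-module reports match, which is why the recursive default works unchanged for single- and multi-module builds alike.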

If you are using the Maven project type, the plugin will capture the XML filename(s) from the Maven plugin, so there is no need to configure the filename pattern and, as a result, there is no text box to fill in!

Tips & Tricks
There are some additional advanced options available if you click on the Advanced button for the PMD analysis plugin:

  • By default, the plugin only runs for stable or unstable builds (on the assumption that you only run the PMD reports when the code compiles). If you need the reports to be collected for every build, just enable the Run always option.
  • If you are using a Freestyle project with an ANT or Maven multi-module project, you may want to see the reports broken down by module. You can ask the plugin to try to auto-detect the modular structure of your build by enabling the Detect modules option.
  • You have a project with 10,000 PMD errors. You don't want to fix all of them this sprint, but you want to make some progress, i.e. get down to 9,500 -- you certainly don't want things getting worse. The solution here is to use a mix of the Health and Status thresholds:

  • Set 0% health to the current number of PMD errors, e.g. 10,000. Set 100% health to somewhere between 20 and 50% better than your target, e.g. 9,300. Set the status thresholds so that unstable sits about 10% of the way to your target, e.g. 9,950, and failed slightly worse than where you are now, e.g. 10,001:

  • The result will be that developers will be prodded into fixing some PMD issues (as the build will be called out as unstable) and prevented from letting things get worse (as the build will be marked as failed if that happens) and once some progress has been made, the weather reports will start to improve, giving a nice subtle nudge... just the kind of positive feedback that works.
  • The PMD plugin can be somewhat demanding on memory. If your project has a very large number of PMD violations, you may have to resort to either fixing a large chunk of them or switching to the Violations plugin, which uses a different parsing engine and usually maintains a lower memory footprint.
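As a sanity check, the arithmetic behind those example threshold numbers can be sketched in shell (the 40% and 10% factors are just one reading of the guidance above, not plugin settings):

```shell
CURRENT=10000                       # PMD errors today
TARGET=9500                         # where you want to be by sprint end
STEP=$(( CURRENT - TARGET ))        # planned improvement: 500 errors

HEALTH_0=$CURRENT                              # 0% health at today's count
HEALTH_100=$(( TARGET - STEP * 40 / 100 ))     # 100% health: ~40% beyond target
UNSTABLE=$(( CURRENT - STEP * 10 / 100 ))      # unstable: 10% of the way there
FAILED=$(( CURRENT + 1 ))                      # failed: any regression
echo "$HEALTH_0 $HEALTH_100 $UNSTABLE $FAILED"
```

Plugging in the article's numbers reproduces the 10,000 / 9,300 / 9,950 / 10,001 settings described above.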
How to Use it on DEV@cloud
If you are using the CloudBees Platform as a Service (PaaS), the plugin is identical to configure on DEV@cloud.

Any Known Issues

Relevant Documentation

- Stephen Connolly
Elite Developer and Architect

Follow CloudBees:


Wednesday, June 27, 2012

New Relic Launches App Speed Index and Custom Dashboards

This guest blog post was contributed by Bill Hodak, director of product marketing at New Relic, an application performance management vendor and integrated CloudBees partner.

New Relic is announcing the availability of two awesome new features, and thanks to our SaaS model, our customers have immediate access to them. When you log in or sign up, you'll automatically get one or both of these features (feature #2 requires New Relic Pro). And since New Relic is integrated with CloudBees RUN@cloud, you can activate it from within CloudBees in just a click or two. 

New Feature #1: App Speed Index
Think your app is fast? Stop guessing and start knowing with the App Speed Index. The App Speed Index leverages our Big Data to provide Big Insight to our customers. New Relic collects over 55 billion performance metrics and monitors 1.5 billion page loads on behalf of our 25,000 customers and their 450,000 application instances. All of that data equates to 3.5 terabytes of new data collected, stored and analyzed each day.

With the App Speed Index, our customers will be able to classify their application into a Peer Group of similar applications (ex. eCommerce, SaaS, Gaming, and Consumer Internet applications) and benchmark their app with industry peers. Find out your percentile rank within your peer group for end user and application response times, error rates, and application availability to find out how fast you really are.

Learn more about the App Speed Index here, or check out this blog post. And don't forget to check out our living infographic, updated daily to show how the peer groups rank by performance and availability. It even lists the fastest applications monitored by New Relic!

New Feature #2: Custom Dashboards
Have you ever wanted to see Network I/O graphs and End User Response Time graphs on the same dashboard?  What about some custom business metrics and application response time? Now you can with Custom Dashboards. With Custom Dashboards you can build any dashboard with any data that tickles your fancy. The best part about it? No Coding Required!  With Custom Dashboards all you have to do is click and pick, drag and drop, or instant copy an existing New Relic graph and boom — you’ve got a Custom Dashboard. This feature is only available to our Pro customers. So if you're not currently a Pro customer, just sign up or upgrade to get access to Custom Dashboards. Learn more about Custom Dashboards by reading this blog post.

New Relic and CloudBees have partnered to make New Relic Standard available to all CloudBees customers free of charge. If you’re not yet a customer, sign up today! All accounts start with 14 days of Pro, for free.

Tuesday, June 26, 2012

Jenkins Protip – Artifact Propagation

Here's another great guest post from Toomas Römer, co-founder and Director of Engineering at ZeroTurnaround
Jenkins is a continuous integration tool that is very often used as an orchestration tool for continuous deployment and delivery. Today, we will look at how to do artifact propagation in a Jenkins pipeline for Continuous Delivery. Some of the questions I'm thinking about are:

  • How to make sure that different jobs in the pipeline use the same artifacts?
  • How will these jobs get hold of those artifacts in a distributed environment?
  • How can we make sure that 3rd party services also have access to those artifacts?

There are multiple ways to support this, and in this article we will take a look at two of them. To see how to chain jobs together, please refer to How to use Jenkins for Job Chaining and Visualizations.

Copy Artifact Plugin

Of course there is a plugin for that! One of the best things about Jenkins is that it has a massive library of plugins (I wrote a popular post on my Top 10 Must-have Jenkins features/plugins if you want to see more).
Check out the Copy Artifact Plugin. The core concept of the plugin is that your Jenkins job needs to archive artifacts. Basically this means that if you produce a JAR/WAR/EAR or some other artifact then you instruct Jenkins to save it for later use. Once your jobs are doing that you can tell other jobs to start using those artifacts.
Archiving is nothing more than specifying a Post-build Action in the job configuration. Specify the artifact to archive, or use a wildcard to archive more than one. In this screenshot we are archiving the WAR archive from target/lr-demo.war.
In the following screenshot I have configured a job to use this artifact. I've outlined in the Pre Steps section that I will be using an artifact from another job, build-chat; the artifact is target/lr-demo.war and it will get copied to copied-tmp. It will keep the folder structure of the source, so the file will end up as copied-tmp/target/lr-demo.war. There is an option to override that.

Your Favourite Artifact Repository

Although the Copy Artifact plugin is a quick and easy way to distribute your archive from one job to another, at some point you will want your company artifact repository to know about these builds. At ZeroTurnaround, for example, we treat the Jenkins cluster as disposable – the cluster might blow up any second and we'll re-provision. We hold our artifacts in Nexus (we are also considering Artifactory). This means we want our artifacts, snapshots and releases to end up in that central repository instead.
One quick way to get them there is to use mvn deploy:deploy-file. This does not even require your project to be set up as a Maven project. For example, the following command deploys your artifact to your repository with the Jenkins BUILD_NUMBER embedded in the version.

mvn deploy:deploy-file -Durl=http://your-repo -DrepositoryId=your-repo-id \
     -Dfile=filename.jar \
     -DgroupId=groupId \
     -DartifactId=artifactId \
     -Dpackaging=jar \
     -Dversion=ver-${BUILD_NUMBER}-SNAPSHOT
The next step is to use this very same artifact in your other jobs. There you can use the dependency:get goal of the maven-dependency-plugin. Before version 2.4 it was not so easy to download from the repository to a pre-defined location (you needed a pom.xml), but now you can execute the following command to download the artifact to the current folder.

mvn org.apache.maven.plugins:maven-dependency-plugin:2.4:get \
    -DrepoUrl=repo-name::repo-id::http://your-repo \
    -Dartifact=groupId:artifactId:ver-${BUILD_NUMBER}-SNAPSHOT -Ddest=zt-zip.jar
Of course, the question now is how this other job will know ${BUILD_NUMBER}. Well, there are a couple of options. If you have a pipeline, then be sure to use the proper trigger plugin (see How to use Jenkins for Job Chaining and Visualizations). Another option is to take the latest snapshot from the repository, or to parameterize the build that requires the build number.
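The parameterized approach can be sketched with Jenkins' remote build trigger (the host and job names here are hypothetical, and the downstream job must declare the matching parameter):

```shell
# From the upstream job's shell step: trigger the downstream job,
# handing over this build's number as a parameter.
curl -X POST "http://jenkins.example.com/job/deploy-chat/buildWithParameters?UPSTREAM_BUILD=${BUILD_NUMBER}"
```

The downstream job can then substitute ${UPSTREAM_BUILD} into the dependency:get command above instead of its own ${BUILD_NUMBER}.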


Jenkins comes with a plugin that does the trick and works perfectly for most situations. It is called the Copy Artifact plugin. It does require some setup: your projects must archive artifacts, and your other jobs need to copy those artifacts as a pre step.
The other option is to use your favourite artifact repository. Your Jenkins job uploads the artifact and optionally embeds some more information there, the build number for example, and then your other jobs can use the artifact from the repository. This also means that your other systems that are already using your artifact repository will always see the latest and greatest versions.
If you know any other good ways for artifact propagation in Jenkins then do let us know!
ZeroTurnaround is delighted to be sponsoring the Jenkins User Conference in San Francisco on September 30! Join this gathering of Jenkins experts to learn similarly-useful Jenkins tips and tricks.

Toomas Römer

Toomas Römer is the co-founder and Director of Engineering of ZeroTurnaround. Once a Linux junkie, he was fooled by Apple into a proprietary OS and devices. He is a big fan of JUGs, OSS communities and beer. He blogs at, tweets from @toomasr and also runs the non-profit website. In his spare time he crashes Lexuses while test driving them, plays chess, Go and Starcraft. Looks can fool you; he will probably beat you in Squash. You can connect with Toomas on LinkedIn.

Friday, June 22, 2012

The Cloud as a Tectonic Shift in IT: The Irrelevance of Infrastructure as a Service (IaaS)

This is the second in a series of four blogs about The Cloud as a Tectonic Shift in IT, authored by Sacha Labourey, CEO, CloudBees. In the series, Sacha examines the huge disruption happening across the IT industry today, looks at the effect it is having on traditional IT concepts and reviews the new IT service and consumption models that have emerged as a result of the cloud. Finally, Sacha makes some predictions about where this tectonic shift will lead us in the future.

The move to the cloud represents one of the largest paradigm shifts to ever affect IT. More than just a simple technology evolution, the cloud fundamentally changes many of the cornerstones on which IT evolved. From redefining the concepts of operating systems and middleware, to revolutionizing the way IT services are built and consumed, the cloud is ushering in an era of change unlike any we have ever seen.

In Part 1: The Industrialization of IT, Sacha examined the evolution of electricity and the development of standards for operating and using it. He then compared the evolution of grid delivery, instant-on access and pay-as-you-go models in the power industry with the parallel evolution occurring now in IT service delivery.

In Part 2 (below), Sacha takes a closer look at Infrastructure as a Service (IaaS).

The Irrelevance of Infrastructure as a Service (IaaS)

Because they are not a straightforward evolution, true paradigm shifts make it hard to foresee what’s to come. Yet, when facing such drastic change, the natural response is to try to replicate what we know and have done to date in the new environment.

One of IT’s occupations is to build software stacks, which often include virtualization technologies, OS, middleware, databases and more. The stack is capable of evolving over time and integrating new software versions and patches with as little disruption as possible. While there is a lot of science to it, there is also quite a bit of art in there.

Consequently, when discovering the cloud, IT initially tried mimicking what they had been doing for decades: Pick a server, install an OS and some additional software and configure, manage, patch and monitor it. But in the cloud, the server is not a recognizable, brand-name server running in a private data center. Instead, it is a "no-name," standardized virtual server running in Amazon's worldwide data center infrastructure. So if the brand doesn't matter at all, why not just cut and paste our last 30 years of blueprints and apply them to the cloud? With no CAPEX investment required and very little time needed to provision new servers, it's a change for the better. However, this scenario fails to capitalize on all the cloud has to offer. Here, it is merely an evolution of the traditional on-premise model, not a real paradigm shift.

Initially applied to servers, under the name Infrastructure as a Service (IaaS), the founding attributes of the cloud – elastic, on-demand, pay-as-you-go capabilities – can actually be applied to other layers of the stack. Two of those layers are Platform as a Service (PaaS) and Software as a Service (SaaS). Let’s examine how IaaS, PaaS and SaaS all relate to each other.

IaaS vs. PaaS vs. SaaS

IaaS sits at one extreme of the cloud continuum. It provides a way for users to consume the basic building blocks of IT – the compute, network and storage layers – as a service. This is probably the most flexible layer of all, but also the most complex. It is flexible because access to a bare-bones server allows you to install whatever you want on that machine, including a specific OS, a specific version of that OS, low-level drivers and more. With IaaS, you are really working with systems – not just applications or data, but a full-fledged stack of software.

But in doing so, you end-up performing a lot of low-level IT tasks. Worse yet, executing these in the cloud is typically harder than doing it on-premise. Cloud environments are implemented based on the assumption that they might come and go. To take advantage of elasticity, new servers must be spawned automatically, requiring that all of the typical IT tasks be highly automated – not just the setup of new instances, but also the synchronization of those resources with each other.

For example, using the underlying IaaS-API, the provisioning logic must be able to dynamically update load-balancers whenever clusters get modified, discover and collect under-used resources and so on. And while you are not in charge of the hardware maintenance, you are still very much in charge of patching, upgrades and other software maintenance. The typical audience for IaaS consists of system operation and DevOps engineers who are replicating the on-premise art of IT in the cloud. If you are interested in migrating an existing system – and not just a specific application – from an on-premise server to the cloud, you will want to use IaaS.

At the other extreme of the continuum sits SaaS. Whenever a business need is generic enough, chances are high that you'll find a company providing the solution as a ready-to-use service. Typical examples include CRM, ERP, support portals and collaboration tools. If you can find a SaaS deployment that fits your requirements, you will realize that it typically offers a huge productivity boost. It can often be customized to fit specific requirements, but if you have other requirements that are not covered by the solution, you may be stuck.

It is important to note that a growing number of SaaS solutions have started offering a set of APIs that will automate specific tasks through an external script instead of, for example, having to rely purely on a human clicking on a GUI. This capability becomes very important as the number of SaaS solutions being consumed grows, as companies will need these APIs to synchronize and/or integrate some areas of their SaaS solutions.

Yet, those APIs will typically not be able to change the behavior of the solution per se, but only impact it at its periphery. Consequently, SaaS solutions typically deliver great productivity gains as long as they offer what you need. The typical audience for SaaS can be any business end user but you’ll also find a number of solutions, such as e-mail gateways and source code analysis, aimed at more technically minded end users, like developers.

In between IaaS and SaaS sits PaaS. If you are interested in developing and deploying custom applications, then PaaS is for you. You can see PaaS as “the middleware of the cloud.” It provides an abstraction layer over low-level IT elements, including servers, storage and networking, and enables software developers to work with such concepts as “applications,” “databases,” “source-code building” and “application testing.” With PaaS, you don’t have to worry about setting up servers, firewalls, build farms, load-balancers or databases. You’ll only have to focus on your business needs and what your application needs to do; the PaaS vendor provides the rest.

This is a very important distinction that might not seem obvious at first: IaaS focuses on hosting full system stacks, while PaaS focuses on applications. The PaaS provider delivers a set of pre-defined, state-of-the-art application runtimes following the best practices in that space, which developers use to deploy their applications. PaaS users should never have to care about what OS, load-balancer and configuration is being used, or whether or not it should be upgraded. Those are the concerns of the PaaS provider. And while it might be more efficient to request an 80-volt plug for a toaster – or a customized runtime environment for just one specific application – this is what kills the ability for a provider to offer a high-quality service at a highly competitive price.

The typical audience for PaaS consists of software developers and architects.

So, Which One Should I Use?

The big question that many businesses face as they move to the cloud revolves around which layer they should be using.

Most companies are already using some sort of SaaS solution, such as Google Mail, or Zendesk. But, whenever the need for “custom” work arises, they’ll typically adopt an IaaS solution. The reason is simple: building software stacks is what they are doing today on-premise. This is really the evolutionary approach that helps a company change, while remaining in a comfort zone. But they quickly realize that while the first steps are trivial, building a fully automated, scalable and resilient infrastructure is more complex than doing it on-premise – and requires different, quite specialized technical talents.

If a company is interested in running existing legacy systems, then this might be the right solution. Whenever you want to customize a system at a relatively low level, you need to have full access to the server – as well as specific drivers and a specific version of the file system or database. But remember that in this case, the IaaS provider will only help you by maintaining the hardware equipment and environment – you won’t receive any help detecting issues with your stack, or patching and upgrading it. All of these activities remain in your hands.

Should you really move existing systems “as-is” to the cloud? This is not always the best choice. A better course of action is to leave the existing legacy systems and stacks that require a specific configuration on-premise. So, the next time you have a new business problem to solve, ask yourself a very simple question: Can I find a SaaS solution that fits my needs? If the answer is “yes,” then take it – you’ll get the biggest productivity gain by using an already implemented, already tested, fully maintained offering.

But if the answer is “no,” it probably means you must implement some custom service. In that case, you will want to use PaaS, not IaaS. As a company, your added value lies in your ability to create new services that help you differentiate from the competition – not managing servers, firewalls and load-balancers.

The bottom line is that 5 to 10 years from now, existing systems will be de-provisioned and replaced by SaaS solutions, and new custom applications will be developed and deployed on PaaS. IaaS will be comparatively too complex to use and require sophisticated – and hard to recruit – talents.

Will IaaS Disappear?

Much like power plants are not likely to disappear, IaaS is here to stay. However, PaaS and SaaS vendors will emerge as the dominant visible species – and a number of SaaS solutions will actually be delivered using PaaS.

PaaS and SaaS vendors will be the ones caring about the infrastructure layers, as well as servers, load-balancing, backup and maintenance. You won’t.

Does this mean that IT consumers should not care about what IaaS framework their PaaS/SaaS solution is based on? Not at all. But they should care for different reasons.

First of all, customers will want to make sure some applications are co-located for latency reasons and they will want to make sure that others are hosted in a reputable country, typically for legal reasons.

Customers will also want to know what uptime SLA the IaaS solution is held to, as they will not directly manage this relationship nor use these resources. Instead, they will use a PaaS vendor as a proxy to address these underlying concerns.

Last but not least, all IaaS solutions are not created equal. Some offer services that go well beyond the mere renting of computing and storage resources – such as offering a Content Delivery Network (CDN) or providing access to sophisticated database engines. When running on such an IaaS architecture, the PaaS/SaaS provider could resell those additional services, provide the end customer with an abstract interface to access them in a vendor-neutral fashion, or, more simply, let the customer use them. In the latter case, the customer will want to make sure it is running on that particular IaaS, despite delegating systems and software stacks to a PaaS/SaaS provider.

As we’ve seen, “the cloud” is actually composed of several distinct layers, each providing value in a very different fashion to certain audiences. In particular, IaaS, through vendors like Amazon, catalyzes the adoption of the cloud and is getting an enormous amount of attention. But as cloud layers mature and enterprises become more educated about them, focus will shift away from IaaS and move towards PaaS and SaaS, the more directly actionable layers. Therefore, unless you are a PaaS vendor yourself, chances are you won’t directly care about IaaS; you’ll just move right to PaaS and SaaS solutions.

Up next in the blog series: The Death of Operating Systems (as we know them). 

To learn more:


Sacha Labourey

The Cloud as a Tectonic Shift in IT: 
  • Part 1, The Industrialization of IT: Sacha compares the evolution of electricity and its delivery with the evolution of IT. He draws parallels between the instant-on access and pay-as-you-go models in the power industry and the evolution occurring now in IT service delivery.
  • Part 2, The Irrelevance of Infrastructure as a Service (IaaS), see above


Tuesday, June 19, 2012

Android CI on DEV@cloud

Today's blog is a guest blog by Ashley Willis, Android developer extraordinaire. Read all about the nifty project she did for K-9 Mail, using BuildHive and DEV@cloud!

In my spare time (which I've had a lot of recently), I've been developing for K-9 Mail. We recently had a security/privacy bug fix go out that broke K-9 on older Android versions (I won't mention any names), and then the quick fix for that didn't work for some users with a certain setup. Yes, we've had a complete lack of QA, but I can count on one hand the number of active developers we have—and, no, I'm not counting in binary!

Shortly after that, someone mentioned BuildHive. I decided to check it out, thinking it might help us avoid such situations in the future (assuming we actually write some tests to cover more than 5% of our code). BuildHive is quick and easy to get started with if you're using GitHub (which K-9 is), but it doesn't quite meet my needs. DEV@cloud, also from CloudBees, gives you the full Jenkins product, though, and you don't have to be using GitHub, or even Git. Also, CloudBees is nice enough to have special accounts for FOSS projects, which is great, as firing up an Android emulator and running tests on it takes some time, and then you might want to test on every target you support for your stable branch.

Building an Android project on CloudBees shouldn't be any different than building any other Java project—it's getting the emulator going that's the challenge. There's an Android Emulator plugin for Jenkins, but it, among some other Jenkins plugins, is not available on free accounts. And even if you do have access to it, it starts up before anything else is done, wasting valuable EC2 time if your build fails horribly and there's nothing to install on it (it seems like a great plugin if you're running your own Jenkins multi-core server, though). So, I wrote a Bash script (available here) which does much of what the plugin does, but when I want it done—after a successful compile (which could be slower with an emulator running in the background).

For our Jenkins setup, I created a job named master. Miscellaneous options I filled in include our Google Code website and GitHub project (which I think is not needed, as CloudBees has no special permissions on our public repo), as well as a description.

I also have a Job Notification Endpoint pointing to a port on my computer that lets me know when jobs are started and finished—this was useful before we had access to the IRC plugin, which is not standard with free accounts. Below that is the option to Restrict where this project can be run, where you can enter m1.large if your account has access to that. I do this when I'm creating a new AVD snapshot, as they take quite some time; otherwise I don't restrict it and it defaults (usually?) to m1.small.

Since we are using GitHub, I selected Git under Source Code Management. I have the Repository URL as git://, then click on Advanced and set the Name as origin and the Refspec as +refs/heads/master:refs/remotes/origin/master. Then for Branches to build I have origin/master, then click on Advanced and select Fast remote polling. All this gets this job to pay attention to only the master branch, and I have another job for our stable branch.

Under Build Triggers I selected Build when a change is pushed to GitHub so that whenever anyone pushes to our repo, DEV@cloud will automatically build and test. Under Build Environment I selected Add timestamps to the Console Output so I can see how long various steps take.

Now onto the main Build section. You can add various steps here, from invoking Ant or Maven, installing Android packages if using the Android Emulator plugin, and so on, but I only have one—Execute shell, which contains a list of commands to run:

rm -f;
$ANDROID_HOME/tools/android update project --path ./;
cp -f tests/;

# path to cloudbees private storage:
export PRIVATE=${JENKINS_HOME/home/private}

# always sign with the same debug key:
mkdir -p  ~/.android
cp -f $PRIVATE/debug.keystore ~/.android/

# build the project:
cd tests/
ant all clean
bash $PRIVATE/ emma debug artifacts

# start/create the emulator:
export AVD_NAME=android-7
export AVD_TARGET=android-7
bash $PRIVATE/ -n $AVD_NAME -t $AVD_TARGET -c 10M
source $WORKSPACE/.adbports

# do tests and such:
ANDROID_ADB_SERVER_PORT=$ANDROID_ADB_SERVER_PORT bash $PRIVATE/ -Dadb.device.arg=-e emma installd test
cd ..
bash $PRIVATE/ javadoc > javadoc.log # the log is ignored, but building javadoc spews tons of junk in general

# fix output from running via
eval `grep -P '^(DIST|BETA)_' $PRIVATE/`
find javadoc/ lint-results.xml monkey.txt tests/coverage.xml tests/junit-report.xml -type f -print0 | \
    xargs -0 perl -pi -e"s|$BETA_PROJECT|$DIST_PROJECT|g"
find javadoc/ -type f -print0 | xargs -0 perl -pi -e"s|$BETA_LOGTAG|$DIST_LOGTAG|g"
if [[ "${DIST_TLD}" != "${BETA_TLD}" ]]; then
    mv javadoc/${BETA_TLD} javadoc/${DIST_TLD}
fi
if [[ "${DIST_DOMAIN}" != "${BETA_DOMAIN}" ]]; then
    mv javadoc/${DIST_TLD}/${BETA_DOMAIN} javadoc/${DIST_TLD}/${DIST_DOMAIN}
fi
if [[ "${DIST_PROJECT}" != "${BETA_PROJECT}" ]]; then
    mv javadoc/${DIST_TLD}/${DIST_DOMAIN}/${BETA_PROJECT} javadoc/${DIST_TLD}/${DIST_DOMAIN}/${DIST_PROJECT}
fi

# kill the emulator:
kill `cat /tmp/$USER-$`
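One detail in the script above worth unpacking is the `${JENKINS_HOME/home/private}` line: bash's `${VAR/pattern/replacement}` expansion replaces the first match, turning the home mount path into the sibling private-storage path. With a made-up value (DEV@cloud's real path differs):

```shell
# Hypothetical path, for illustration only:
JENKINS_HOME=/scratch/hudson/home/myaccount
# Replace the first occurrence of "home" with "private":
PRIVATE=${JENKINS_HOME/home/private}
echo "$PRIVATE"
```

The expansion is purely textual, so it only works as intended when "home" appears exactly once in the path.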

I've added some extra build targets to our Ant project for lint-xml, javadoc, and monkey (commented out above); as well as artifacts, which copies both the project and the test project to unique names based on Jenkins build properties. I also overrode Android's test target in tests/build.xml in order to get code coverage and the JUnit report in XML format. For the JUnit support, I replaced the default test runner with android-junit-report by dropping the source in tests/src/. Take a look at build.xml, build_common.xml, and tests/build.xml from K-9 Mail.

Also notice the environment variables set above: AVD_NAME=android-7 and AVD_TARGET=android-7. AVD_TARGET must be set to a valid target installed on DEV@cloud (they have the standard ones and then some), whereas AVD_NAME can be set to whatever you want. I'm thinking about adding a feature to the script to randomly or sequentially go through the different AVDs, one per build, unless the testing failed on the previous build. On our stable branch I plan on going through all the generic targets we support. The script that creates and starts AVDs must be called by bash, since CloudBees private storage is mounted without execute permissions.

Finally, concerning the building: some Ant targets are called with the script (available in the auto-avd repo) instead of ant. This Bash script with Perl one-liners temporarily renames the project (it can also be used to completely fork a project with an entirely new name), so that the created apk files have a different project name. This allows the build and tests to be installed and run on any device which already has our project installed, without interfering with the active installation, so we can get a better idea of what might be going wrong on a particular user's device. You'll need to edit its variables to suit your project before using it. The above code also fixes the reports and Javadoc so they don't have the temporary name in them.

Last but not least, after the build is complete, I have the following Post-build Actions selected. These publish the various reports and do notifications. You might not have all of these with a free account, and some of these require installing the Jenkins plugins at 

   Publish Android Lint results with the file lint-results.xml
   Scan for compiler warnings with Parser Java Compiler (javac)
   Archive the artifacts with files 
   Publish Android monkey tester result with filename monkey.txt
   Publish JUnit test result report with file tests/junit-report.xml
   Publish Javadoc with directory javadoc
   Record Emma coverage report with file tests/coverage.xml
   E-mail Notification with Send separate e-mails to individuals who broke the build checked
   IRC Notification with Notification Strategy set to all and Channel Notification Message set to Summary, SCM changes and failed tests.

When it's all finished, our IRC channel is notified with a summary, including which tests failed if any. Also, Javadoc, Lint, EMMA coverage, and JUnit reports are published on the job page, and the artifacts are available for download and can be tested by anybody. And that's all pretty nifty. :)

Hopefully I've made these scripts general-purpose enough for any project, but they might require some tweaks depending on how your project is laid out. If you have suggestions for improvement, please let me know—preferably via a pull request or issue report at

Ashley Willis

To read Ashley's original blog:
