Thursday, September 18, 2014

Customer Spotlight: Choose Digital

At CloudBees, we have a lot of innovative customers. They’ve established leadership positions in the marketplace with their great ideas, hard work and a little help from the CloudBees Continuous Delivery Platform.

This post is the first of several that we will run from time to time, highlighting various CloudBees customers. In this first post, we head to Miami to visit Mario Cruz, co-founder and CTO of Choose Digital (recently acquired by Viggle).

Mario, tell us about yourself.
I'm a technologist, born in Cuba and now living in the Miami area. I've been developing and marketing B2B and B2C technology solutions for over 20 years.

Tell us about Choose Digital.
We developed a private-label digital marketplace that has enabled companies to launch a digital content strategy incorporating the latest in music, movies, TV shows, eBooks and audiobooks. SkyMall, Marriott, United Airlines and others have tapped into our platform to up-level initiatives such as customer loyalty programs, promotional offers, affinity sales channels and digital retail roll-outs. We’ve had great success providing a streamlined channel, helping companies navigate around licensing conflicts, reduce brand friction and take control of usage data. We’ve also provided solutions for musicians and authors to market their work directly to fans and monetize their social media followings.

What did you do before you started Choose Digital?
I've had a bunch of jobs in the technology space. I spent three years as CTO of Grass Roots America, a provider of global performance improvement solutions for employees, channels and consumers. I oversaw the business's technology, infrastructure and information security in the Americas region. Before that, I worked for five years as CIO of Rewards Network, operator of loyalty dining programs in the U.S. for most major airlines, hotels and credit card companies.

What kinds of challenges did you face at Choose Digital that spurred you to start working with CloudBees?
We felt we had to be first to market and we dedicated all our resources to this goal. We didn't have time for long development and integration cycles. We didn't want to worry about setting up and maintaining a Java infrastructure, so we adopted Jenkins in the cloud - the CloudBees cloud platform. We were up and running with DEV@cloud in just one day. And using CloudBees ClickStarts, we were able to set up new projects in about an hour. If we'd had to set up our own hardware or use an IaaS solution, development would have taken three to five times as long, and costs would have been multiplied by a factor of 10 to 15.

Can you talk about your experience with Continuous Delivery, using CloudBees’ technology?
Using a continuous delivery model, we’re able to experiment cheaply and quickly, with low risk. We’re able to run every step of the process in a streamlined manner. Every update kicks off a series of tests, and once the tests pass, the update deploys to production. Everything is automated using Jenkins and deployed to CloudBees. Rather than wait for new versions, we can constantly push, build in improvements and be confident that production will never be more than a couple of hours behind. This gives us control over our development process and instills a certain amount of trust within the staff that projects we undertake will get done on time, on budget and with the quality that we need.

Your business is all about helping companies make strategic use of digital content. What do you like to listen to, read and watch in your spare time?
I’m in the right profession because I’m a huge consumer of content myself – all kinds.

My favorite book is probably “Bluebeard,” by Kurt Vonnegut. It’s about an abstract expressionist painter who, in typical Vonnegut form, has some eccentric ideas about how to create and promote art. The first movie I ever saw was “Raiders of the Lost Ark.” It made me want to travel the world, and luckily my technology career has allowed me to do that. Going way back, my first 45 record was “Freeze Frame” by the J. Geils Band and my first album was “Ghost in the Machine” by the Police.

I'm still a big music guy. I play drums in a band called Switch, which plays all kinds of music, from the Doobie Brothers to 4 Non Blondes. I used to be in a bunch of other bands called The Pull, Premonition and Wisdom of Crocodiles. (To see/hear Mario playing the drums in his band, go to this post by Mario.)

So, what’s next for you?
Now that Choose Digital has been acquired by Viggle, my goal is to make sure Viggle members get the best media rewards for doing things they love to do - like watching TV and listening to music - while continuing to innovate on our platform.

Read the case study about Mario and his team at Choose Digital
Follow Mario on Twitter: @mariocruz

Monday, September 15, 2014

Webinar Q&A: Continuous Delivery with Jenkins and Puppet - Debug Bad Bits in Production

Thank you to everyone who joined us on our webinar.


We presented:


  • How to build a modern continuous delivery pipeline with Jenkins
  • How to connect Jenkins and Puppet so that Dev and Ops teams can determine what happens on the other side of the house and closely interact to debug issues in production environments


The webinar recording is available here.


Following are answers to questions we received during the webinar:
________________________________________________________________

Q: Is Puppet serving as the orchestrator for Jenkins?
A: Not quite - the tools run independently but communicate with each other. The demo will make it clear.

Q: Can JMeter be plugged in with Jenkins for Continuous testing?
A: Yes it can. 

Q: When we say continuous testing do we mean automated testing here?
A: Continuous Testing = automated testing for each commit made to the source repository.

Q: What drivers or plugins are required? Can I get a website where I can get this info?
A: https://wiki.jenkins-ci.org/display/JENKINS/JMeter+Plugin

Q: With JMeter can we run a load test using the build in Jenkins, or how can we do continuous testing with this combination?
A: JMeter would typically be used in a load testing stage. It depends on how you set up your workflow/pipeline. You shouldn't run performance tests on every commit; continuous testing covers every commit, and ideally you will have additional testing stages (such as load testing) further along the pipeline.
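For example, a dedicated load test job or stage usually invokes JMeter in non-GUI mode and saves the results file for a reporting plugin (such as the one linked above) to chart. A minimal sketch - the test plan and output file names are just placeholders:

    # run the test plan headless and record results for later reporting
    jmeter -n -t load-test-plan.jmx -l results.jtl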

Q: Can Puppet work with VM's
A: Yes, Puppet can work with VMs. Puppet agents live at the OS level, and can be deployed to virtual machines or bare hardware. Puppet is agnostic to where or how it has been deployed. We do have some hooks and integrations around provisioning new VMs as well.

Q: I'm curious that I don't see AWS/EC2 under "Virtual & Cloud" for Puppet along with VMware, Xen, Azure ... is there a reason? Any concerns I should have about compatibility with EC2 infrastructure?
A:  No, there are no concerns around EC2. Puppet runs great in EC2 and we have many customers running their infrastructure with Puppet in Amazon's cloud.

Q: Are you going to share these scripts somewhere?
A: The demo write-up is available on the CloudBees developer wiki. The jenkinsci infrastructure is available at https://github.com/jenkinsci/infra-puppet

Q: I understand that Puppet helps create an MD5 hash file of the war file for build deployments. Could you provide a basic definition of what Puppet is and what Docker is?
A: Puppet (stealing from the Puppet page)

Puppet Enterprise (PE) uses Puppet as the core of its configuration management features. Puppet models desired system states, enforces those states, and reports any variances so you can track what Puppet is doing.

To model system states, Puppet uses a declarative, resource-based language: this means a user describes a desired final state (e.g. “this package must be installed” or “this service must be running”) rather than describing a series of steps to execute.
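As a minimal sketch of that declarative style (the package and service names are illustrative and assume a RHEL-like system):

    # write a throwaway manifest describing the desired end state,
    # then let Puppet enforce it; re-running is safe because Puppet
    # only acts when the actual state drifts from the declared one
    cat > site.pp <<'EOF'
    package { 'ntp':
      ensure => installed,
    }
    service { 'ntpd':
      ensure  => running,
      enable  => true,
      require => Package['ntp'],
    }
    EOF
    puppet apply site.pp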

Docker (stealing from Docker.io)

Docker is an open platform for developers and sysadmins to build, ship, and run distributed applications. Consisting of Docker Engine, a portable, lightweight runtime and packaging tool, and Docker Hub, a cloud service for sharing applications and automating workflows, Docker enables apps to be quickly assembled from components and eliminates the friction between development, QA, and production environments. As a result, IT can ship faster and run the same app, unchanged, on laptops, data center VMs, and any cloud.



Q: Will this work with SVN too?
A: There is an equivalent version of Validated Merge for Jenkins that our team has pushed out in OSS.

Q: Will Validated Merge work with an SVN repo too?
A: See above.

Q: Is an equivalent to the gated repo available with subversion?  It's a great idea; a while back I'd worked with a similar homegrown solution for Perforce.
A: See above.

Q: What's the difference between open source Jenkins & CloudBees's version?
A: See this link.

Q: Where can I get a quote if I want to buy?
A: Email sales@cloudbees.com

Q: Does Puppet require root access on a Unix host? What privileges would it require as a user?
A: The Puppet agent typically runs as root in order to be able to fully configure the system, but it does not require those privileges. When running as a non-privileged user, it will only be able to manage aspects of the system the user has permissions for.

Q: When Harpreet was doing the Traceability demo, the Jenkins screen that showed the artifact deployment state had a field for 'Previous version' that was blank. Why was that empty? What value would normally be in there, the MD5 hash of the previous artifact?
A: Those values would change if I had checked in new code, thus altering the MD5 hash. Since I was just rebuilding the same image in the demo, the hashes are the same and hence there is no previous version.

Q: Is Puppet capable of working with IBM solutions, like WebSphere?
A: Yes. In general, if it's possible to manage or modify an application from the command line of a system, it is possible to build a Puppet model for it. Check out forge.puppetlabs.com for 2500+ examples of pre-built community and supported modules.
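For example, pulling a pre-built module down from the Forge is a one-liner (the module name here is just an illustration):

    # install the apache module and its dependencies from forge.puppetlabs.com
    puppet module install puppetlabs-apache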

Q: I read that about the agent, but what about the master? If not, can you run Puppet without a master?
A: The master is effectively a web service, which does not require root privileges, so it too can be run without root. For testing and development, you can run Puppet in a stand-alone mode using the `puppet apply` family of commands.
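For instance, a masterless, one-off run might look like this (the file resource is hypothetical, and site.pp is assumed to be a local manifest):

    # apply a single resource locally, no master involved
    puppet apply -e 'file { "/tmp/hello.txt": ensure => present, content => "hello\n" }'

    # dry-run a local manifest to see what would change, without changing it
    puppet apply --noop site.pp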

Q: Does Puppet need Vagrant to run, or can we run it directly on the VM?
A: Puppet can be run directly on a VM. It does not have dependencies on Vagrant or any other specific virtualization/cloud management software.

Q: How does this facility compare with the pre-commit check-in provided by Visual Studio Env?
A: I am not familiar with Visual Studio Env, but the documentation indicates that those are just environment variables that are injected into builds. If so, then Jenkins can understand environment variables.



-- Harpreet Singh

Harpreet is vice president of product management at CloudBees. 
Follow Harpreet on Twitter
-- Reid Vandewiele
www.puppetlabs.com


Reid is a technical solutions engineer at Puppet Labs, Inc.


Thursday, September 11, 2014

CloudBees Becomes the Enterprise Jenkins Company

Since we founded the company back in 2010, CloudBees has always had the vision of helping enterprises accelerate the way they develop and deploy applications. To that end, we delivered a PaaS that covered the entire application lifecycle, from development, continuous integration and deployment to staging and production. As part of this platform, Jenkins always played a prominent role. Based on popular demand for Jenkins CI, we quickly responded and also provided an on-premise Jenkins distribution, Jenkins Enterprise by CloudBees.

Initially, Jenkins Enterprise by CloudBees customers were mainly using Jenkins on-premise for CI workloads. But in the last two years, a growing number of customers have pursued an extensive Continuous Delivery strategy and Jenkins has moved from a developer-centric tool to a company-wide Continuous Delivery hub, orchestrating many of the key company IT assets.

For CloudBees, this shift has translated into massive growth of our Jenkins Enterprise by CloudBees business and has forced us to reflect on how we see our future. Since a number of CloudBees employees, advisors and investors are ex-JBossians, we’ve had the chance to witness first-hand what a successful open source phenomenon is and how it can translate into a successful business model, while respecting its independence and further fueling its growth. Consequently, it quickly became obvious to us that we had to re-focus the company to become the Enterprise Jenkins Company, both on-premise and in the cloud, and hence exit the runtime PaaS business (RUN@cloud & WEAVE@cloud). While this wasn’t a decision we took lightly (we are still PaaS lovers!), it is the right decision for the company.

With regard to our existing RUN@cloud customers, we’ve already reached out to each of them to make sure they’re being taken care of. We’ve published a detailed migration guide and have set up a migration task force that will help them with any questions related to the migration of their applications. (Read our FAQ for RUN@cloud customers.) We’ve also worked with a number of third-party PaaS providers and will be able to perform introductions as needed. We’ve always claimed that our PaaS, based on open standards and open source (Tomcat, JBoss, MongoDB, MySQL, etc.), would not lock customers in, so we think those migrations should be relatively painless. In any case, we’ll do everything we can to make all customer transitions a success.

From a Jenkins portfolio standpoint, refocusing the company means we will be able to significantly increase our engineering contribution to Jenkins, both in the open source community as well as in our enterprise products. Kohsuke Kawaguchi, founder of Jenkins and CTO at CloudBees, is also making sure that what we do as a company preserves the interests of the community.

Our Jenkins-based portfolio will fit a wide range of deployment scenarios:
  • Running Jenkins Enterprise by CloudBees within enterprises on native hardware or virtualized environments, thanks to our enterprise extensions (such as role-based access control, clustering, vSphere support, etc.)
  • Running Jenkins Enterprise by CloudBees on private and public cloud environments, making it possible for enterprises to leverage the elastic and self-service cloud attributes offered by those cloud layers. On that topic, see the Pivotal partnership we announced today. I also blogged about the new partnership here.
  • Consuming Jenkins as a service, fully managed for you by CloudBees in the public cloud, thanks to our DEV@cloud offering (soon to be renamed “CloudBees Jenkins as a Service”).

Furthermore, thanks to CloudBees Jenkins Operations Center, you’ll be able to run Jenkins Enterprise by CloudBees at scale on any mix of the above scenarios (native hardware, private cloud, public cloud and SaaS), all managed and monitored from a central point.

From a market standpoint, several powerful waves are re-shaping the IT landscape as we know it today: Continuous Delivery, Cloud and DevOps. A number of companies sit at the intersection of those forces: Amazon, Google, Chef, Puppet, Atlassian, Docker, CloudBees, etc. We think those companies are in a strategic position to become tomorrow’s leading IT vendors.

Onward,

Sacha

Additional Resources
Read the press release about our new Jenkins focus
Read our FAQ for RUN@cloud customers
Read Steve Harris's blog







Sacha Labourey is the CEO and founder of CloudBees.

CloudBees Partners with Pivotal

Today, Pivotal and CloudBees are announcing a strategic partnership, one that sits at the intersection of two very powerful waves that are re-shaping the IT landscape as we know it today: Cloud and Continuous Delivery.

Pivotal has been executing on an ambitious platform strategy that makes it possible for enterprises to benefit from a wide range of services within their existing datacenter: from Infrastructure as a Service  (IaaS) up to Platform as a Service (PaaS), as well as a very valuable service, Pivotal Network, that makes it trivial to deploy certified third-party solutions on your Pivotal private cloud. (To read Pivotal's view on the partnership, check out the blog authored by Nima Badiey, head of ecosystem partnerships and business development for Cloud Foundry.)

As such, our teams have been working closely on delivering a CloudBees Jenkins Enterprise solution specifically crafted for Pivotal CF. It will feature a unique user experience and will be leveraging Pivotal’s cloud layer to provide self-service and elasticity to CloudBees Jenkins Enterprise users. We expect our common solution to be available on Pivotal CF later this year, and we will be iteratively increasing the feature set.

Given Jenkins’ flexibility, Pivotal customers will be using our combined offering in a variety of ways but two leading scenarios are already emerging.

The first scenario is for Pivotal developers to use Jenkins to perform continuous integration and continuous delivery of applications deployed on top of the Pivotal CF PaaS. CloudBees Jenkins Enterprise provides an integration with the CloudFoundry PaaS API that makes the application deployment process very smooth and straightforward. This first scenario provides first class support for continuous delivery to Pivotal CF developers.

The second scenario focuses on enterprises relying on Jenkins for continuous integration and/or continuous delivery of existing (non-Pivotal CF-based) applications. Thanks to the Pivotal/CloudBees partnership, companies will ultimately be able to leverage the Pivotal cloud to benefit from elastic build capacity as well as the ability to provision more resources on-demand, in a self-service fashion.

The CloudBees team is very proud to partner with Pivotal and bring Pivotal users access to CloudBees Jenkins Enterprise, the leading continuous delivery solution.

Onward,

Sacha







Sacha Labourey is the CEO and founder of CloudBees.

Reflections on the PaaS Marketplace

[Image: Cairn from the Canadian Arctic Expedition]
Entering the PaaS marketplace in 2010 resembled a polar expedition near the turn of the last century: lots of preparation and fundraising required, not a lot of information about what you’d encounter on the journey, life-and-death decision-making along the way, shifting and difficult terrain in unpredictable conditions, and intense competition for the prize. At least we didn’t have to eat the dogs.

In case you missed it, CloudBees announced that we’ll no longer offer our runtime PaaS, RUN@cloud. Instead, we’re focusing on our growing Jenkins Enterprise by CloudBees subscription business - on-prem, in the cloud, and connecting the two - and the continuous delivery space where Jenkins plays such a key role. Jenkins has been at the core of our PaaS offering all the way along, so in some ways, this is less of a pivot than a re-focusing. Still, it’s an important event for CloudBees customers, many of whom rely on our runtime services and the integrated dev-to-deployment model we offer. We’ll continue to support those customers on RUN@cloud for an extended period and help them transition as painlessly as possible to alternatives (read our FAQ about the RUN@cloud news). Given our open PaaS approach and the range of offerings in the marketplace, the transition will be non-trivial, but manageable (read our transition documentation). Given that background, I wanted to share some thoughts behind our move and what we see going on in the PaaS marketplace.

A Platform, Of Sorts
[Image: By Agrant141, CC-BY-SA-3.0]
As a team, we come from a platform background. To us, cloud changes the equation in how people build, deploy and manage applications. So, the platforms we’re all used to building on top of - like Java - need to change scope and style to be effective. That idea has driven a lot of what we delivered at CloudBees. It’s why Jenkins was such a big part of the offering, because from our perspective Continuous Integration and Continuous Delivery really needed to be integral to the experience when you’re delivering as-a-service with elastic resources, on-demand. I think we have been proven right. Doubts? Take a look at what Google is doing with the Google Cloud Platform. They agree with us and they built their solution around Jenkins. This is also why primarily runtime-deployment-focused PaaS offerings like Pivotal’s Cloud Foundry partner with us on Jenkins.

What’s changed, then?
  • Service - IaaS platform affinity. IaaS providers, but particularly AWS and Google, are moving up-stack rapidly, fleshing out a wider and wider array of very capable services. These services often come with rich APIs that are part of the IaaS-provider’s platform. Google Cloud Services is a good example. If you’re an Android developer, it’s your go-to toolbox to unlock location and notification services. It also incentivizes you to use Google identity and runtime GAE services. The same is true on AWS and Azure with some different slants and degrees of lock-in. Expect the same on any public cloud offering that aims to succeed longer term. This upstack march by the IaaS vendors blurs the line on PaaS value. PaaS vendors like CloudBees can make it easy to consume these IaaS-native services, but how the value sorts itself out for end-users between “PaaS-native” services and those coming directly from the IaaS provider is unclear.
  • What’s a platform? Who’s to say that AWS Elastic Beanstalk is less of a platform than what CloudBees offers? I’d like to think I have some experience and credibility to speak to the topic, and I can assure you ours is superior in all ways that matter technically. But in the end, if a bunch of Ruby scripts pushing CloudFormation templates make it as simple to deploy, update, and monitor a Java app as CloudBees does, those distinctions just don’t matter to most users. This is not to say that Beanstalk is functionally equivalent to CloudBees today, because it isn’t. But it’s a lot closer than it was two years ago. The integration with VPC is front-and-center, because, well, they are AWS and as an end-user, you’re using your own account with it, while we are managing the PaaS on your behalf. My point here is that our emphasis on platform value, which was very much a differentiator two years ago, is less of one today and will continue to decrease even as we add feature/functionality. Is that because we are being outpaced by competitors who were behind? No, it’s because as IaaS-native services expand their scope and the platform itself changes (see next point), the extra value that can be added by a pure-play PaaS gets boxed-in.
  • Commoditization of platform. There is a lot going on in this area that is hard to capture succinctly. First, there is the Cloud Foundry effect. Cloud Foundry has executed well on an innovate-leverage-commoditize (ILC) strategy using open source and ecosystem as the key weapons in that approach. Without any serious presence in public cloud, Pivotal Cloud Foundry has produced partnerships with the largest, established players in enterprise middleware and apps. In turn, that middleware marketplace ($20B) is prime hunting ground for PaaS, and Cloud Foundry has served up fresh hope to IT people searching desperately for a private cloud strategy with roots in open source. Glimmers of hope for success in on-prem private PaaS in the enterprise act as a damper on public cloud PaaS adoption, making a risk-averse enterprise marketplace even more sluggish. Second, thanks to Docker, the containerization of apps - a mainstay implementation strategy of PaaS providers like CloudBees - is becoming “standard” and simple for everyone to use. It’s been embraced by Google as a means to make their offering more customizable, and even Amazon hasn’t been able to ignore it. This shift changes the PaaS equation again, because combining Docker with infrastructure automation tools like Chef and Puppet starts to look a lot like PaaS. New tools like Mesos also change the landscape when combined with Docker. Granted for those paying attention to details, Docker still has some holes in it, but don’t expect those to remain unplugged for long.
  • It’s about service. There is a clear dividing line among PaaS players between fully-managed (think: CloudBees, Heroku) and self-managed (think: any on-prem solution, AWS Elastic Beanstalk). Broadly speaking, the startups and SME customers tend to lean toward the fully-managed side, while the larger enterprises lean toward the self-managed side. The platform changes I was covering above continue to make self-service easier, while reducing the perceived value of the fully-managed approach. I say “perceived” because the gap between the perceived and actual effort to implement a PaaS and operate it at scale is huge. It’s something that is hard for people to understand, especially if they haven’t lived through it. But, perception is reality at the buying stage, even if the reality bites at delivery. The technology and organizational investment of Heroku and CloudBees to operate at scale and to deliver deep, quality service is significant, but the perception gap leads people to equate it to the labor associated with answering PagerDuties and Nagios alerts. Furthermore, as the IaaS players move more up-stack, and customers consume a broader mixture of self-service and fully-managed value-add services, the gap increases. The other difference between fully-managed vs. self-service centers around the delivery model. When you deliver as-a-service, like we do with the CloudBees PaaS, you have advantages that are not available to on-prem software delivery and support models. But, from a CloudBees perspective, with a large, growing business delivering to on-premise Jenkins Enterprise users, we really need to think of our fully-managed Jenkins more as a SaaS, not just a component of a broader PaaS offering.
What does all this change mean to the PaaS marketplace? In addition to the moves I noted earlier, you can already observe some of the impact:
  • Google consolidated their PaaS GAE and IaaS GCE stories into a single, powerful developer-savvy Google Cloud Platform story, with more consistency no doubt on the way from the mobile side of the house.
  • CenturyLink bought AppFog and Tier3, putting the combined IaaS and PaaS pieces in place to move up from being just a hosting provider.
  • IBM moved all SmartCloud Enterprise efforts onto Softlayer and consolidated PaaS efforts behind the Cloud Foundry based BlueMix to extend the life of WebSphere in the cloud. At the same time, the introduction of UrbanCode gives them DevOps coolness, at least as much coolness as a blue shop can handle.
  • Microsoft blurred the line between Azure PaaS and a real public IaaS, a clear recognition that combined there is more value and better ways to appeal to a broader audience.
  • DotCloud pivoted to become Docker, re-purposing their internal containerization investments and de-emphasizing their PaaS business.
  • Heroku aligned more closely with the Salesforce side of the house in Heroku1 - you know, the part with access to enterprise companies with deep pockets who already trust Salesforce with some of their most sensitive information.
  • Rackspace, caught in the middle without an IaaS or PaaS card to play, is floundering and looking for a buyer.
  • In a classic enemy-of-my-enemy confederation, traditional enterprise players have lined up behind OpenStack. Because of its open source heritage, Red Hat is well positioned to grab the leadership ring in what appears to be a contentious, political, but perhaps too-big-to-fail mess.
  • Looking to avoid the messiness of OpenStack but to obtain an aura of community governance around its Cloud Foundry efforts, Pivotal created a new pay-to-play Cloud Foundry Foundation and notched up a broad range of enterprise participants.
  • Amidst all this, Amazon just continues their relentless pace to add more services, the latest onslaught being aimed at mobile and collaboration.
Taken together, these changes demonstrate market consolidation, platform commoditization, a continued strength of on-prem solutions in the enterprise, and the important strategic leverage to be obtained by combining IaaS, PaaS and managed service offerings. Longer term, it calls into question whether there will even be a PaaS marketplace that is identifiable except by the most academic of distinctions. These are not trends we can ignore, particularly when we have a successful and growing business centered on Jenkins.

[Image: Amundsen Expedition]
So, we’re emerging from our PaaS polar expedition. Like a triumphant Amundsen, we are leaving behind some noble competitors. We’re taking what we’ve learned and are applying the lessons toward new adventures. Jenkins is an incredible phenomenon. It’s built around an amazing open source community that is populated with passionate advocates. With its Continuous Integration roots, Jenkins sits at the center of the fundamental changes cloud has ushered in to software development - the same ones that brought CloudBees into existence in the PaaS world. Join us and follow us as we push the boundaries of Continuous Delivery using Jenkins, and as we work with the community to make sure Jenkins continues to be the tool of choice for software development and delivery both on-premise and in the cloud.


Resources:





Steven Harris is senior vice president of products at CloudBees (and a fan of Roald Amundsen). 
Follow Steve on Twitter.

Wednesday, September 10, 2014

Advanced Git with Jenkins

This is part of a series of blog posts in which various CloudBees technical experts have summarized presentations from the Jenkins User Conferences (JUC). This post is written by Harpreet Singh, VP Product Management, CloudBees, about a presentation given by Christopher Orr of iosphere GmbH at JUC Berlin.

Git has become the repository of choice for developers everywhere, and Jenkins supports Git very well. In the talk, Christopher shed light on advanced configuration options for the Git plugin. Cloning extremely large repositories is an expensive proposition, and he outlined a solution for speeding up builds with large repositories.

Advanced Git Options
There are three main axes for building projects: What, When and How.
[Screenshot: Git plugin options]
What to build:
The refspec option in Jenkins lets you choose what to build. By default, the plugin builds the master branch; this can be overridden with wildcards to build specific feature branches or tags. For example:


  • */feature/* will build a specific feature branch
  • */tags/beta/* will build a beta version of a specific tag
  • +refs/pull/*:refs/remotes/origin/pull/* will build pull requests from GitHub

The default strategy is to build the branches you specify. So, for example, if the refspec is */release/*, branches release/1.0 and release/2.0 will be built, while branches feature/123 and bugfix/123 will be ignored. To build feature/123 and bugfix/123 instead, you can flip this around by choosing the Inverse strategy.

[Screenshot: Choosing the build strategy]

When to build:
Generally, polling should not be used; webhooks are the preferred option when configuring jobs. On the other hand, if you have a project that needs to be built nightly, and only if a commit made it to the repository during the day, that can easily be set up as follows:
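One common way to do this (a sketch, assuming the standard "Poll SCM" trigger) is to give the job a nightly polling schedule; Jenkins then only starts a build when the poll finds new commits:

    # Build Triggers > Poll SCM schedule (Jenkins cron syntax)
    # poll once per night between midnight and 6am; build only if there are new commits
    H H(0-5) * * *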



How to build:
A git clone operation is performed to fetch the repository before building it. The clone can be sped up by using a shallow clone (no history is cloned). Builds can be sped up further by using a "reference repo" during the clone operation: the repository is cloned once to a local directory, and from then on this local repository is used for subsequent clone operations, with the network only needed for whatever isn't already available locally. Ideally, you line these up: a shallow clone for the first (fast) clone, and a reference repo for faster builds subsequently.


[Screenshot: Equivalent to the git clone --reference option]
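For orientation, a rough command-line analogue of these two options (the repository URL and local cache path below are made up for illustration):

    # shallow clone: fetch only the latest commit, no history
    git clone --depth 1 https://github.com/example/big-repo.git

    # reference repo: borrow objects from a local mirror so that only
    # missing objects are fetched over the network
    git clone --reference /var/cache/git/big-repo.git https://github.com/example/big-repo.git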



Working with Large Repositories
The iosphere team uses the reference repository approach to speed up builds. They have augmented this approach by inserting a proxy server (git-webhook-proxy [1]) between the actual repository and Jenkins, so clones are made from this proxy server. The slave setup plugin copies the workspace over to the slaves (over NAS) and builds proceed from there. Since network access is restricted to the proxy server and each slave makes a local copy, this speeds up builds considerably.


[Diagram: git-webhook-proxy, used to speed up workspace clones]

The git-webhook-proxy option seems a compelling solution, well worth investigating if your team is trying to speed up builds.

[1] git-webhook-proxy



-- Harpreet Singh

Harpreet is vice president of product management at CloudBees. 
Follow Harpreet on Twitter




Monday, September 8, 2014

[Infographic] Need To Deliver Software Faster? Continuous Delivery May Be The Answer

More and more organizations are realizing the impact of delivering applications in an accelerated manner. Many of those seeking to do so are leveraging DevOps functions internally and moving towards Continuous Delivery. Did you know that 40% of companies practicing Continuous Delivery increased the frequency of code delivery by 10% or more in the past 12 months?

Do you need to deliver software faster? This infographic, based on the DevOps and Continuous Delivery survey conducted by EMA, shows why Continuous Delivery may be the answer.


Download your copy of the DevOps and Continuous Delivery paper to read the entire report based on the EMA survey.


Christina Pappas
Marketing Funnel Manager
CloudBees

Follow her on Twitter