Wednesday, August 20, 2014

Integrated Pipelines with Jenkins CI

This is part of a series of blog posts in which various CloudBees technical experts have summarized presentations from the Jenkins User Conferences (JUC). This post, written by Félix Belzunce, solutions architect at CloudBees, covers a presentation given by Mark Rendell of Accenture at JUC Berlin.

Integrated Pipelines is a pattern that Mark Rendell uses at Accenture to reduce the complexity of integrating different packages when they come from different source control repositories.

The image below, which was one of the slides Mark presented, represents the problem of building several packages that will need to be integrated at some point. Which build version to use, how to manage the control flow and what exactly to release are the main pain points when you are working on such an integration.


Mark proposes a solution in which you create not only a CI pipeline but also an integration pipeline to fix the problem. To stop displaying all the downstream jobs inside the pipeline view, Mark uses a Groovy script. For deploying the right version of the application, several approaches could be used: Maven, Nexus or even a simple plain text file.


The pattern can scale up, but using this same concept for microservices can indeed be a big challenge, as the number of pipelines grows significantly. As Mark pointed out, the pattern is not limited to microservices or applications: the same concept in Jenkins can also be used to manage your infrastructure when you do continuous delivery.

You might use similar job configurations across your different pipelines. The CloudBees Templates plugin is useful for templatizing your different jobs, saving you time and making the process more reliable. It also allows you to make a one-time modification in the template that is automatically pushed to all the jobs, without going individually from one job to another.

View the slides and video from this talk here.



Félix Belzunce
Solutions Architect
CloudBees

Félix Belzunce is a solutions architect for CloudBees based in Europe. He focuses on continuous delivery. Read more about him on his Meet the Bees blog post and follow him on Twitter.

Thursday, August 14, 2014

Webinar Q&A: "Scaling Jenkins in the Enterprise"

Thank you to everyone who joined us for our webinar; the recording is now available.

Below are some of the questions we received during the webinar:

Q: How do you implement HA on the data layer (jobs)?  Do you have the data hosted on a network drive?

A: Yes - the 2 masters (primary and failover) share a filesystem visible to both over a network. You can read about HA setup here.
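
For example, on a typical setup both masters mount the same $JENKINS_HOME from shared storage; the NFS server name and export path below are hypothetical:

```
# /etc/fstab entry, identical on the primary and failover master
# (server name and export path are made up for illustration)
nfs-server.example.com:/export/jenkins  /var/lib/jenkins  nfs  rw,hard,intr  0  0
```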

Q: I would like to know how to have a different UI instead of Jenkins UI. If I want to customize the Jenkins UI what needs to be done?

A: There are plugins in the open source community that offer customizable UIs for Jenkins: Simple Theme Plugin is one popular example.

Q: I want to have a new UI for Jenkins. I want to limit certain things for the Jenkins user.

A: Interesting. What types of things? A lot of the Jenkins Enterprise plugins allow admins to exercise stricter limits on different roles' access to certain functions in Jenkins, whether that be through templating or role-based access control with folders. The Jenkins Enterprise templates also allow you to “hide” some configuration parameters.

Q: Let's take a simple example. I want to have a very simple UI for a parameterized build where a user can submit the SRC path and the build script name. He submits that job by specifying the above two values. How can we have a very simple UI instead of the Jenkins UI?

A: Okay - this is exactly the use case that the job template was designed for. See the last image in the job template tutorial.

Q: Looks like it will work. How can I get rid of the left-hand Jenkins menu?

A: You can remove most of the options in that menu with the Role-Based Access Control plugin: you can remove certain roles' ability to create new jobs, configure the system, kick off builds, delete projects, and see changes/the workspace, which will remove almost all of the options in that menu.

Q: We use the open source version of Jenkins and we have been facing an issue with parsing the console log. We use curl, and there is a limit of 10,000 lines on the console text displayed. Will this enterprise edition handle that issue?

A: It sounds like you're seeing Run.doConsoleText being truncated, though there shouldn't be a 10,000-line limit: I just checked the sources, and consoleText sends the full log, regardless of size.
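
For reference, a minimal sketch of fetching the full log through the plain-text endpoint rather than the paginated HTML console view; the host and job name here are hypothetical:

```python
# Sketch: build the URL for Jenkins' consoleText endpoint, which returns
# a build's complete console log as plain text. Host and job are made up.
import urllib.request


def console_text_url(base, job, build="lastBuild"):
    """URL of the plain-text console log for a given job and build."""
    return f"{base}/job/{job}/{build}/consoleText"


url = console_text_url("https://jenkins.example.com", "my-app")
# log = urllib.request.urlopen(url).read().decode("utf-8")  # needs a live server
```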

Q: Is there a customizable workflow capability to allow me to configure some change control and release management process for enterprise?

A: The Jenkins community is currently developing a workflow plugin (0.1-beta at the moment). Jesse Glick, engineer at CloudBees, did a presentation about it at the '14 Boston JUC. CloudBees is working on enterprise workflow features such as checkpoints as a part of Jenkins Enterprise by CloudBees.

Q: Is there any framework/processes/checklists that you follow to ensure the consistency/security of multi-tenant slaves across multiple masters?

A: Please see the recording of the webinar for the answer.

Q: Is there a way to version control job configuration?

A: Yes - CloudBees offers a Backup Plugin that allows you to store your job configs in a tarball. You can set how long to retain these configs and how many to keep, just as you would for a job's run history. You can also use the Jenkins Job Configuration History plugin.

Q: Is this backup plugin available with the open source version of Jenkins?

A: The backup plugin that I'm speaking of is only part of the Jenkins Enterprise package of plugins.

Q: How is the environment specific deployment done through same project configuration in Jenkins?

A: You can use the CloudBees Templates plugin to define projects, and then have a job template take environment variables, either pulling them from a parent folder with Groovy scripting or taking them from user input with the parameterized builds plugin:
http://developer-blog.cloudbees.com/2013/07/jenkins-template-plugin-and-build.html
http://jenkins-enterprise.cloudbees.com/docs/user-guide-bundle/template-sect-job.html
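
As an illustration, a parameterized job can be triggered over HTTP with the environment passed as a query parameter; the host, job name and parameter name below are hypothetical:

```python
# Sketch: build the URL used to trigger a parameterized Jenkins job via
# the buildWithParameters endpoint (triggered with an HTTP POST).
# Host, job and parameter names are made up for illustration.
import urllib.parse


def build_with_parameters_url(base, job, params):
    """URL for triggering a parameterized build with the given parameters."""
    return f"{base}/job/{job}/buildWithParameters?" + urllib.parse.urlencode(params)


url = build_with_parameters_url("https://jenkins.example.com", "deploy",
                                {"TARGET_ENV": "staging"})
```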

Q: Do we need to purchase additional licenses if we want to set up an upgrade/evaluate validation master and slaves, as you recommend?

A: For testing environments, CloudBees subscription pricing is different (cheaper). For evaluation, I recommend just doing a trial of both to see which fits your needs better. You can request a 30-day trial of Jenkins Enterprise here.

Q: Is this LDAP group access only available in the enterprise version? I am asking whether I can make it so that some users can only see the jobs of their group.

A: Jenkins OSS supports LDAP authentication. The Role Based Access Control authorization provided by Jenkins Enterprise by CloudBees allows you to apply RBAC security on groups defined in LDAP. You can then put the jobs in folders using the Folders/Folders Plus Plugin and assign read/write/etc permissions over those folders using the CloudBees RBAC plugin.

Q: Another question. What's the difference between having dedicated slaves with your plugin/add-on and just adding another slave with another label?

A: Dedicated slaves cannot be shared with another master - only with the master they have been assigned to - whereas shared slaves with just labels are still open for use by any master that can connect to them.

Q: At this moment my organization is planning to implement open source Jenkins. Does CloudBees provide training or ad hoc consultancy for a client's environment in order to implement Jenkins with best practices, saving time, money and resources?

A: CloudBees service partners provide consulting and training. The training program is written by CloudBees.

Q: Can I use LDAP for authentication, but create and manage groups (and membership) locally in Jenkins? For us, creating groups and managing them in the corporate LDAP is a very heavyweight process (plus, it supports only static LDAP groups, not dynamic ones). Clarification: we have a corporate LDAP and want to use it for authentication, but I do not want to use LDAP to host or manage groups in any way. I want to do that in Jenkins.

A: Yes. With the Role Based Access Control security provided by Jenkins Enterprise by CloudBees, you can declare users in LDAP and declare groups and associate users in Jenkins. A Jenkins group can combine users and groups declared in LDAP. You can define users in your authentication backend (LDAP, Active Directory, the Jenkins internal user database, OpenID SSO, Google Apps SSO ...) and manage security groups in Jenkins with the CloudBees RBAC plugin.

Q: Is the controlled slaves feature available in the Enterprise version only?

A: Yes - this is a feature of the CloudBees Folders Plus plugin.

Q: Can I start implementing Jenkins Operations Center as a monitoring layer for teams that have set up with Jenkins OSS? Over time I would move them to Jenkins Enterprise, but we need to progress in small iterative stages.

A: Jenkins OSS masters must be converted into Jenkins Enterprise by CloudBees masters. You can do this either by installing the package provided by CloudBees or by installing the “Enterprise by CloudBees” plugin available in the update center of your Jenkins console. Please remember that a Jenkins OSS master must be upgraded to the LTS or to the ‘tip’ before installing the “Enterprise by CloudBees” plugin.

Q: What is the purpose of the HA proxy?

A: HAProxy is an example of a load balancer used to set up high availability of Jenkins Enterprise by CloudBees (JEBC) masters (it could also be another load balancer such as F5 BIG-IP, Cisco ...). More details are available on the JEBC High Availability page and in the JEBC User Guide / High Availability.

Q: If builds run on slaves and Jenkins Operations Center manages them, what is the use of masters?

A: JOC is the orchestrator. It manages which slaves are in the pool, which masters need a slave, and which masters are connected. The masters are still where the jobs/workflows are configured and where the results are published.

Q: Is there a functionality for a preflight/proof build - i.e. the build with the local Dev changes grabbed from developer's desktop?

A: Jenkins Enterprise by CloudBees offers the Validated Merge plugin that allows the developer to validate their code before pushing it to the source code repository.

Q: Currently we are using the OSS version with 1 master and 18 slaves with 60 executors, and we face performance issues; our workaround is to bounce the server once a week. Any clue to debug the issue?

A: We would need more information to help diagnose performance problems, but with the CloudBees Support plugin in conjunction with a CloudBees support plan, you can always create a Support Bundle and send it to our support team along with a description of your performance problem.

Q: How do I create dummy users and assign passwords (not using LDAP, AD or any security tool) just for testing my trial Jenkins jobs? (Jenkins open source)

A: Use the "Mock Security Realm" plugin and add dummy users with the syntax "username groupname" under the global security settings.
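
For example, a mock realm definition is just one "username groupname..." line per user; the users and groups below are made up:

```
alice developers
bob developers qa
carol admins
```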

Q: Can you have shared slave groups?  For example, slave group "A"  and within it have sub group "A-Linux5", "A-Linux6", etc...

A: Yes, you can do this with folders in Jenkins Operations Center. A detailed tutorial is available here.

For example, with groups “us-east” and “us-west”, you could create folders “us-east” and “us-west”:
  • In the “us-west” folder, you would declare the masters and slaves of the West Coast (e.g. san-jose-master-1, palo-alto-master-1, san-jose-slave-linux-1, san-francisco-slave-linux-1 ...).
  • In the “us-east” folder, you would declare the masters and slaves of the East Coast (e.g. nyc-master-1 ...).
Thanks to this, the West Coast masters will share the West Coast slaves. More subtle scenarios can be implemented with hierarchies of folders, as explained in the tutorial.



--- Tracy Kennedy & Cyrille Le Clerc


Tracy Kennedy
Solutions Architect
CloudBees

As a solutions architect, Tracy's main focus is reaching out to CloudBees customers on the continuous delivery cloud platform and showing them how to use the platform to its fullest potential. Read her Meet the Bees blog post and follow her on Twitter.


Cyrille Le Clerc
Elite Architect
CloudBees

Cyrille Le Clerc is an elite architect at CloudBees, with more than 12 years of experience in Java technologies. He came to CloudBees from Xebia, where he was CTO and architect. Cyrille was an early adopter of the “You Build It, You Run It” model that he put in place for a number of high-volume websites. He naturally embraced the DevOps culture, as well as cloud computing. He has implemented both for his customers. Cyrille is very active in the Java community as the creator of the embedded-jmxtrans open source project and as a speaker at conferences.

Building Resilient Jenkins Infrastructure

This is part of a series of blog posts in which various CloudBees technical experts have summarized presentations from the Jenkins User Conferences (JUC). This post, written by Harpreet Singh, VP of product management at CloudBees, covers a presentation given by Kohsuke Kawaguchi from CloudBees at JUC Boston.

A talk by Kohsuke Kawaguchi is always exciting. It gets triply exciting when his talk bundles three in one. 

Scaling Jenkins horizontally
Kohsuke outlined how organizations scale Jenkins, either vertically or organically (numerous Jenkins masters abound in the organization). He made the case that the way forward is to scale horizontally: a Jenkins Operations Center by CloudBees master manages multiple Jenkins masters in the organization. This approach helps organizations share resources (slaves) and have a unified security model through the Role-Based Access Control plugin from CloudBees.
Jenkins Operations Center by CloudBees

This architecture lets administrators maintain a few big Jenkins masters that can be managed by the operations center. This effectively builds an infrastructure that fails less and recovers from failures faster.


Right sized Jenkins masters
Bursting to the cloud (through CloudBees DEV@cloud)
He then switched gears to address a use case where teams can start using cloud resources when they run out of build capacity on their local build farm. He walked through the underlying technology pieces built at CloudBees using LXC.

CloudBursting: Supported by LXC containers on CloudBees

The neat thing about this technology is that we have used it to offer OS X build slaves in the cloud.
We have an article [2] that highlights how to use cloud bursting with CloudBees. The key advantage is that users pay for builds by the minute.

Traceability
Organizations are looking at continuous delivery to deliver software more often. They often use Jenkins to build binaries and tools such as Puppet and Chef to deploy those binaries to production. However, if something does go wrong in the production environment, it is quite a challenge to tie the failure back to the commit that caused it. The traceability work in Jenkins ties up this loose end: post-deployment, Puppet/Chef notifies a Jenkins plugin, and Jenkins calculates the artifact's fingerprint and maintains it in its internal database. This fingerprint can be used to track where commits have landed and to help diagnose failures faster. We have an article [3] that describes how to set this up with Puppet.
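
A Jenkins fingerprint is simply the MD5 checksum of a file's contents, so the value reported back from a Puppet/Chef deployment can be reproduced independently; a minimal sketch:

```python
# Sketch: compute the MD5 checksum that Jenkins uses as a file's fingerprint.
import hashlib


def fingerprint(path):
    """Return the hex MD5 digest of a file, read in chunks."""
    md5 = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            md5.update(chunk)
    return md5.hexdigest()
```

Matching this digest against the fingerprint database is what lets Jenkins say which build produced the binary now running in production.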

Fingerprints flow through Jenkins, Puppet and Chef


-- Harpreet Singh

Harpreet is vice president of product management at CloudBees. 
Follow Harpreet on Twitter



Tuesday, August 12, 2014

Automation, Innovation and Continuous Delivery - Mario Cruz

This is part of a series of blog posts in which various CloudBees technical experts have summarized presentations from the Jenkins User Conferences (JUC). This post, written by Steve Harris, SVP of products at CloudBees, covers a presentation given by Mario Cruz of Choose Digital at JUC Boston.

Choose Digital is a longstanding CloudBees customer, and Mario Cruz, founder and CTO has been a vocal supporter of CloudBees and continuous delivery. So, it was fun to have a chance to hear Mario talk about how they use continuous delivery to fuel innovation at Choose Digital at the recent Jenkins User Conference in Boston (slides, video).

Mario began by talking about what Choose Digital does as a business. They host millions of music downloads, along with movies, TV shows and eBooks, which they offer as a service in a kind of "white-label iTunes". Choose Digital's service is used by companies like United, Marriott and Skymall to offer rewards. Pretty much all of this runs on CloudBees and is delivered using Jenkins as their continuous delivery engine.

The thesis of Mario's presentation is that innovation is really the next evolution of continuous delivery. From my perspective, this is probably the biggest strategic advantage a continuous delivery organization gets from its investment. Still, it's hard to quantify, and it can come across as marketing hot air or the search for unicorns. Being able to experiment cheaply and quickly, with low risk, and have an ability to make data-driven product choices are huge advantages that a continuous delivery shop has over its more traditional competition. Fortunately, Mario is able to speak from experience!

To set the stage, he covered Choose Digital's automation and testing processes. They are a complete continuous delivery shop - every check-in kicks off a set of tests, and if successful, deploys to production. Everything is automated using Jenkins and deployed to CloudBees. They are constantly pushing, constantly building, and their production systems are "never more than a couple of hours behind". The rest of Mario's talk was about the practices, both operational and cultural, they have used to get to this continuous delivery nirvana. Some of Choose Digital's practices include:

  • Developer control. They follow the Amazon "write the press release first" style. Very short specs identify what they want to achieve, but the developer is given control over how to make that happen; i.e., specs identify the "what" not the "how", so that developers are in control and empowered. But, this requires...
  • Trust. Their culture and processes disincentivize the need for heroes, and force a degree of excellence from everyone. For that to work, they need a...
  • Blameless culture. Tools like extensive logging and monitoring give everyone what they need to find and fix issues quickly and efficiently.
  • Core not context. They ruthlessly offload anything that is not core to their business. Mario talked about avoiding "smart people disease", where smart people are attracted to hard problem solving, even if it's not what they should be doing. By offloading infrastructure, and even running of Jenkins, to service providers who are specialists in their area, Choose Digital has been able to stay hyper-focused on their business and quickly improve their offerings. In particular, that means...
  • No heavy lifting. Just because you're capable and might even be great at some of the heavy lifting to support infrastructure or some technical area (like search), that's not what you should be doing if it's not a core part of the business. This is one of the main reasons Choose Digital is using CloudBees and AWS services.
  • Responsibility. If you write code at Choose Digital, you are on call to support it when it's deployed. To me the goodness enabled by this simple rule is one of the biggest wins of the as-a-service continuous delivery model (everything at Choose Digital is API-accessed by their customers).
  • Use feature flags. Mario went into some detail about how Choose Digital uses feature flags to enable them to deliver incrementally, experiment, do A-B testing, and even interact with specific customers directly and in proofs of concept.
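
That last practice is concrete enough to sketch. A minimal, illustrative feature-flag check (not Choose Digital's actual code; all flag and customer names are made up) shows how a flag can be flipped globally or enabled for a single customer, which is what makes incremental delivery, A/B tests and per-customer proofs of concept cheap:

```python
# Minimal feature-flag sketch. A flag can be on globally, off globally,
# or enabled only for specific customers (targeted rollout).
FLAGS = {
    "new_search":  {"enabled": True,  "customers": set()},
    "beta_player": {"enabled": False, "customers": {"acme"}},
}


def is_enabled(flag, customer=None):
    """True if the flag is on globally or targeted at this customer."""
    cfg = FLAGS.get(flag)
    if cfg is None:
        return False  # unknown flags default to off
    if customer is not None and customer in cfg["customers"]:
        return True   # targeted rollout for a specific customer
    return cfg["enabled"]
```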

Mario is a quotable guy, but I'd say the money quote of his presentation was:
"Once you make every developer in the room part of what makes the company's bottom line move forward, they'll start thinking like that."
In a lot of ways, that's what continuous delivery is all about. It's great to have customers who walk the walk and talk the talk. Thanks, Mario!



Steven Harris is senior vice president of products at CloudBees. 
Follow Steve on Twitter.

Monday, August 11, 2014

Meet the Bees: Tracy Kennedy


At CloudBees, we have a lot of seriously talented developers. They work hard behind the scenes to keep the CloudBees continuous delivery solutions (both cloud and on-premise) up-to-date with all the latest and greatest technologies, gizmos and overall stuff that makes it easy for you to develop amazing software.

In this Meet the Bees post, we buzz over to our Richmond office to catch up with Tracy Kennedy, a solutions architect at CloudBees.


Tracy has a bit of an eccentric background. In college, she studied journalism and, in 2010, interned for the investigative unit of NBC Nightly News. She won a Hearst Award for a report she did about her state’s delegates browsing Facebook and shopping during one of the last legislative sessions of the season. She had several of her stories published in newspapers around the state. Sounds like the beginnings of a great journalistic career, right?

Well, by the time she graduated, Tracy ended up being completely burned out and very cynical about the news industry. Instead of trying to get a job in journalism, she wanted to make a career change.

Tracy's dad was a programmer and he offered to pay for her to study computer science in a post-bachelor’s program at her local university. He had wanted her to study computer science when she first started college, but idealistic Tracy wanted to first save the world with her hard-hitting reporting skills. She now took him up on his offer, and surprisingly, found she had a knack for technology.

Tracy landed a job at a small web development shop in Richmond as a QA and documentation contractor. The work tickled her journalistic skills as well as her newly budding computer science skills and she had a great opportunity to be mentored by some really talented web developers and other technical folks while she was there.

By the time Tracy felt ready to look for more permanent work, she had finished some hobby projects of her own that furthered her programming skills better than any class she had taken. It was also at that time that Mike Lambert, VP of Sales - Americas at CloudBees, was looking for someone with Tracy's skills and experience.

You can follow Tracy on Twitter: @Tracy_Kennedy

Who are you? What is your role at CloudBees?
My name is Tracy Kennedy and I’m a solutions architect/sherpa at CloudBees.


My primary role is to reach out to customers on our continuous delivery cloud platform and assist them in on-boarding and learning how to use the platform to its fullest potential. However, I work on other things, too. My role actually varies wildly; it really just depends on what the current needs of the organization are.


Tracy with her dog Oliver.
I’ve dabbled in some light marketing by writing emails for and sometimes creating customer communication campaigns, done lots of QA work when debugging our automated sherpa funnel campaign and do a bit of sales engineering, as well, since I’m physically located in the Richmond sales office. I also write some of our documentation as I find the time and identify the need for it.


Lately, I’ve also been spending a good chunk of my week working on updating our Jenkins training materials for use by our CloudBees Service Partners and laying the foundation for future sherpa outreach campaigns.

When those projects are done, I plan on going back to work on a Selenium bot that will automate a lot of my weekly tasks involving the collection of customer outreach statistics. I’m hoping that bot will give me more free time to spend learning about Jenkins Enterprise by CloudBees and Jenkins Operations Center by CloudBees - our on-premise Jenkins solutions, and to create some ClickStacks for RUN@cloud.


What makes CloudBees different from other PaaS and cloud computing companies?
CloudBees has a really, really excellent "Jenkins story" as the business guys like to say, and that story is really almost like a Dr. Seuss book in its elegant simplicity. Ahem:
Not only is Tracy a poet, but she is a budding actress!
Here she is as an extra in a Lifetime movie.


I can use Jenkins on DEV@cloud
I can hide Jenkins from a crowd


I can load Jenkins to on-premise machines
I can access Jenkins by many means


I can use Jenkins to group my jobs  
I can use Jenkins to change templated gobs


I can use Jenkins to build mobile apps
I can use Jenkins to check code for cracks


I can keep Jenkins up when a master is down
I can “rent” slaves to Jenkins instances all around


I can use Jenkins here or there,
I can use Jenkins anywhere.


Don’t worry; I have no plans on quitting my day job to become a poet laureate!


What are CloudBees customers like? What does a typical day look like for you?
CloudBees PaaS customers can range from university students to enterprise consultants. It’s also not uncommon to see old school web gurus open an account and “play around” with it in an attempt to understand this crazy new cloud/PaaS sensation.

I’ve even seen some non-computer science engineers on our platform who are just trying to learn how to program, and those are my favorite customers to interact with since they’re almost always very bright and seem to have an unparalleled respect for the art of creating web applications. It’s always a great delight to be able to “sherpa” them along on their web dev journey and to see them succeed as a result.

As for my typical day, I actually keep track of each of my days’ activities in a Google Calendar, so I can give you a pretty accurate timeline of my average day:


8:30 or 8:45 am - Roll into the Richmond office, grab some coffee. Start reading emails that I received overnight and start replying as needed. Check the engineering chat for any callouts to me and check Skype for any missed messages.


9:30 am - Either start responding to customer emails or start working on whatever the major project of the day is. If it’s something serious or due ASAP, I throw my headphones on to help me concentrate and tune out the sales calls going on around me.


12:00 pm - Lunch at my desk while I read articles on either arstechnica.com, theatlantic.com, or one of my local news sites.


1:00 pm - Usually by this point, someone will have asked me to review an email or answer a potential customer’s question, so this is when I start working on answering those requests.
Tracy after doing the CrossFit workout "Cindy XXX."



3:00 pm - Start moving forward a non-urgent project by contacting the appropriate parties or doing the relevant research.


The end of my day varies depending on the day of the week:

  • Monday/Wednesday - 4:00 pm  - Leave to go to class
  • Tuesday/Thursday - 5 pm  - Leave for the gym
  • Friday - 5:30 pm  - Leave for home

Tracy's motorcycle: a 1979 Honda CM400
In my spare time, video games are a fun escape for me and they give me a cheap way of tickling my desire to see new places. Sometimes I spend my Friday nights playing as a zombie-apocalypse survivor in DayZ and exploring a pseudo-Czech Republic with nothing but a fireman’s axe to protect me from the zombie hordes.

On the weekends I spend my time playing catch-up on chores, hanging out with my awesome and super-spoiled doggie and going on mini-adventures with my boyfriend. Richmond has a lot of really beautiful parks, and we hike through one of them each weekend if the weather’s conducive to it.


When I can get more spare time during the week, I plan on finishing restoring my motorcycle and actually riding it, renovating my home office into a gigantic closet for all of my shoes and girly things, and learning how to self-service my car.




What is your favorite form of social media and why?
Twitter -- I enjoy the simplicity of it, how well it works even when my wi-fi or cellular data connection is terrible, and how easy it makes following my favorite news outlets.

Something we all have in common these days is the constant use of technology. What’s your favorite gadget and why?
While I’d love to name some clever or obscure gadget that will blow everyone’s mind, the truth is that I’d be completely lost without my Android smartphone. I use it to manage my time via Google Calendar, check all 10 million of my email accounts with some ease and stay up to date on any breaking news events. Google Maps also keeps me from getting hopelessly lost when driving outside of my usual routes.

Favorite Game of Thrones character? Why is this character your favorite?
Sansa Stark, Game of Thrones
Please note that book-wise I’m only on “Storm of Swords” and that I’m completely caught up on the HBO show, so I’m only naming my favorite character based on what I’ve seen and read so far. Some light spoilers below:

While I know she’s not the most popular character, I really like Sansa Stark. Sure, she’s not the typical heroine who wields swords or always does the right thing, but that’s part of her appeal to me. I like to root for the underdogs, and here we have this flawed teenager who’s struggling to survive her unwitting entanglement in an incredibly dangerous political game. She has no fighting skills, no political leverage beyond her name, and no true allies, and she’s trapped in a city with and by her psychopathic ex-fiancé whose favorite past time is to literally torture her.


The odds of Sansa surviving such a situation seem very slim, and yet despite her naïveté, she’s managing to do just that while the more conventional “heroes” of the story are dropping like flies. I could very well see her learning lessons from the fallen’s mistakes and applying them to any leadership roles she takes on in the future. Is she perhaps a future Queen of the North? I wouldn’t discount it.

Sansa is a bright girl with the right name and the right disposition to gracefully handle any misfortunes thrown her way, and aren't grace, intelligence and a noble lineage all the right traits for a queen? I think so, but we'll just have to see if George R.R. Martin agrees.

Thursday, August 7, 2014

Amadeus Contribution to the Jenkins Literate Plugin and the Plugin's Value

This is one in a series of blog posts in which various CloudBees technical experts have summarized presentations from the Jenkins User Conferences (JUC). This post, written by Valentina Armenise, solutions architect at CloudBees, covers a presentation called "Going Literate in Amadeus" given by Vincent Latombe of Amadeus at JUC Berlin.

The Literate plugin is built on the concept of literate programming, introduced by Donald Knuth: the idea that a program can be described in natural language, such as English, rather than in a programming language. The description is translated automatically into the source code used by the scripts, in a process completely transparent to the users.

The Literate plugin is built on top of two APIs:
  • Literate API, responsible for translating the descriptive language into source code
  • Branch API, which is the toolkit for handling multi-branch projects:
    • SCM API - provides the capability to interact with multiple heads of the repository
    • capability to tag some branches as untrusted and skip those
    • capability to discard builds
    • foundation for multi-branch freestyle projects
    • foundation for multi-branch template projects

Basically, the Literate plugin lets you describe your environment, together with the build steps your job requires, in a simple file (either a marker file or the README.md). The Literate plugin queries the repository looking for one or more branches that contain this descriptive file. If more than one branch contains the file, and is thus eligible to be built in a literate way, and no specific branch is specified in the job, then the branches are built in parallel. This means that you can create multi-branch projects where each branch requires different build steps or simply a different environment.
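To make this concrete, a literate build description might look something like the hypothetical README.md below. The section names, environment labels and build command here are illustrative assumptions, not necessarily the plugin's exact conventions:

```markdown
# Acme Webapp

Ordinary project documentation lives here; the plugin only
reads the build-related sections below.

# Environments

* linux
* java-7

# Build

    mvn clean verify
```

Because the file lives on each branch, one branch could declare a different environment list or build command than another, and Jenkins would build them accordingly.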

The use of the Literate plugin becomes quite interesting when you need to define templates with customizable variables or to whitelist build sections.

Amadeus has invested resources in Jenkins in order to accomplish continuous integration. Over the years, they have specialized in using the Literate plugin to make job creation easier and have become contributors to the plugin.
Vincent Latombe presenting his talk at JUC Berlin.
Click here to watch the video.
And click here to see the slides.

In particular, Amadeus invested resources in enhancing the plugin's usability by introducing support for YAML, a descriptive language that leaves less room for error than the traditional Markdown, which is too open-ended.
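A YAML version of the same idea might look like the sketch below; the field names are hypothetical, chosen only to show why a stricter, schema-friendly format leaves less room for error than free-form Markdown:

```yaml
# Hypothetical YAML build description (field names are illustrative)
environments:
  - linux
  - java-7
build:
  - mvn clean verify
```

Unlike Markdown, a YAML document like this can be validated against a schema before the build even starts, so a typo in a section name fails fast instead of being silently ignored.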

How do we see the Literate plugin today?

With the introduction of CI, there are ongoing conversations about the best approach to merging and pulling changes to repositories.

Some people support the “feature branching” approach, where each new feature is a new branch and is committed to the mainline only when ready to be released in order to provide isolation among branches and stability of the trunk.

Although this approach is criticized by many who think it is too risky to commit a whole new feature at once, it can be the best approach when the new feature is completely isolated from the rest (a completely new module), or in open source projects where a new feature is developed without deadlines and, thus, can take quite a while to complete.

The Literate plugin works really well with the feature branching approach described above, since it would be possible to define different build steps for each branch and, thus, for each feature.

Also, this approach gets along really well with the concept of continuous delivery, where the main idea is that the trunk has to be continuously shippable into production.

How does it integrate with CD tools?

Today, we’re moving from implementing CI to implementing CD: Jenkins is no longer a tool for developers only, but is now capturing the interest of DevOps.

By using plugins to implement deployment pipelines (e.g., the Build Pipeline plugin, Build Flow plugin and Promotion plugin), Jenkins is able to handle all the phases of the software lifecycle.
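As a rough illustration of the pipeline plugins mentioned above, the Build Flow plugin lets you orchestrate downstream jobs with a small Groovy DSL. The job names below are made up for the example:

```groovy
// Sketch of a delivery pipeline in the Build Flow plugin's Groovy DSL.
// Job names ("app-build", etc.) are illustrative assumptions.
build("app-build")
parallel(
    { build("app-unit-tests") },
    { build("app-integration-tests") }
)
build("app-deploy-staging")
```

Each `build(...)` call triggers a Jenkins job and waits for it to finish, so the flow above builds the application, fans out the test jobs in parallel and only deploys to staging if everything before it succeeded.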

The definition of environments and agents to build and deploy to is provided with integration to Puppet and Chef. These tools can be used to describe the configuration of the environment and apply the changes on the target machines before deployment.

At the same time, virtualization technologies that allow you to create software containers, such as Docker, are getting more and more popular.

How could literate builds take part in the CD process?

As said before, one of the things the Literate plugin simplifies is the definition of multiple environments and build steps through a single file: the build definition is stored in the same SCM as the project being built.

This means that the Literate plugin gets along really well with the infrastructure as code approach and tools like Docker or Puppet where all the necessary files are stored in the SCM. Docker, in particular, could be a good candidate to work with this plugin, since a Docker image is completely described by a single file (the Dockerfile) and it’s totally self-contained in the SCM.
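For example, a minimal Dockerfile kept alongside the literate build description could fully define the build environment in the same repository. The base image and package versions below are assumptions for illustration:

```dockerfile
# Illustrative Dockerfile stored in the same SCM as the build
# description; base image and tool versions are assumptions.
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y openjdk-7-jdk maven
COPY . /app
WORKDIR /app
CMD ["mvn", "clean", "verify"]
```

With both files versioned together, checking out any branch gives you the build steps and the exact environment they expect, which is the essence of the infrastructure-as-code approach described above.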

What's next?

Amadeus is looking to add new features to the plugin in the near future:
  • Integration with GitHub, Bitbucket and Stash pull request support
  • Integration with isolation features (i.e. sandbox commands within the container)

Do you want to know more?




Valentina Armenise
Solutions Architect, CloudBees

Follow Valentina on Twitter.