Is Jenkins the best CI tool?

Bernhard Cygan, December 15, 2015

The big internet retailers and social media giants are all talking about continuous delivery today. For everyone else, a quiet revolution is taking place: companies of all kinds, from startups to automakers to mid-sized firms, are looking for ways to improve their software faster than the competition. Everyone says to do it, but what is the best way to start?

I've learned the hard way how to build PaaS and SaaS frameworks from scratch, and there may not be a truly convenient way to go about it. Still, I can offer a few tips on how development teams can move from Continuous Integration (CI) to Continuous Delivery (CD) using the powerful workflow functionality in Jenkins [1].

Continuous Integration (CI) is the process by which the working copies of all developers are merged into a shared mainline several times a day. This gives developers quicker feedback in the event their code causes a bug. The process is illustrated in Figure 1.

This typical CI process gives developers feedback if their code fails any of the integration tests. What fundamentally distinguishes CD from CI, however, is that CD not only validates the code, but also prepares the application for real operating conditions, even taking into account how user behavior can affect the application "in real use".

Before going into further detail, one common misunderstanding should be cleared up: with continuous delivery, the decision as to whether and when to deploy software to the production environment is not made automatically. A fully automated process can, however, be triggered manually if desired.

That is the scope of "Continuous Deployment", a DevOps process in which artifacts are automatically promoted through the stages of a build pipeline. If all stages pass, the software is finally released into the production environment. When the automated tests in a production-like environment pass, the automated releases of the generated artifacts ultimately put the software into productive use without any human intervention. This is like continuous delivery, but completely automated.

This can be useful for some teams and not for others. In any case, one must draw a clear distinction when internally advocating the need for continuous delivery. From a technical point of view, introducing continuous deployment does not pose any particular risk; at the organizational level, however, the project can prove a bit tricky, as it entails a perceived loss of control, which in some cases can become a barrier to adoption.

What are the advantages of CD over CI?

CI provides a framework for development teams to divide tasks sensibly. However, the software release still culminates in a big "showdown", since the code must pass the necessary tests and quality controls before it finally goes into operation. This means that features implemented by an individual developer remain unreleased for some time and are not available to users during that period. Because of the long cycle times, the developer receives no timely feedback on whether the implemented features actually work as intended.

By expanding from CI to CD, new features and improvements can be brought to market more quickly. This is achieved by making much smaller incremental changes to the software and tracking those incremental changes with measurable feedback. This also makes it easier to make changes or restore previous states if things don't go according to plan.

All tests are repeated each time before the software is promoted to the next stage, which ensures higher overall quality. By inserting "quality gates" at important stages of the pipeline, you can ensure that non-functional (e.g. economic) requirements are also met. CD applies the same software principles as CI and makes the artifacts traceable (who did what, and when?) from the source code to the delivery of the software applications.

People play as important a role as tools, but implementing a software delivery pipeline in itself creates a shared vision and a common driving force for the whole process and all of the teams involved.

Try it out!

End of the theory lesson: how is it done? Let's start with a typical CI pipeline for a Java project built with Maven. Such a pipeline looks something like this:

  • mvn compile - compile the source code
  • mvn test - run the unit tests
  • mvn package - package the compiled code into a JAR or WAR file
  • mvn install - install the artifact into the local repository
  • mvn deploy - copy the artifact to a remote repository

    Note: the steps above will typically all run in a single build job!
  • Upload the WAR file to the test system
  • Run the integration tests
  • Tests passed - done!

Is that really all?

At this point we have an application that can be validated and tested.

In principle, one runs mvn deploy and then uploads the WAR file to an application container for test purposes. This is usually done by two Jenkins jobs (see Figure 3).

Before software can be released, however, one must consider what its operation requires, in which environment it will run and how it will be used.

  • What about changes to the runtime environment? (Linux / Windows, Tomcat / Jetty / WebSphere, MySQL / DB2 / Microsoft SQL Server)
  • What about stress testing? "Those are carried out manually before each release."
  • What about performance regressions? "Manual click-through tests are carried out before each release."
  • What about scalability? "The servers handled the last release well."

The most important aspect of designing a CD pipeline is building confidence step by step from development to production. This requires a detailed understanding of the processes as well as new strategies and technologies to deal with the complexities involved.

Is the software rollout aimed at a clean installation or a system upgrade, and which patches are installed? The mantra of CD is "automate everything". Tests therefore need to be built in, and certain strategies also need to be considered, for example containerization with Docker to handle changes in the target environment, as sketched below.
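
To illustrate, here is a minimal sketch of that idea, assuming a workflow agent labelled "docker" that has Docker installed, a hypothetical repository URL and the official Maven image:

    // Sketch: build and test inside a Docker container so that the runtime
    // (JDK, OS libraries) is pinned and reproducible across environments.
    node('docker') {                                    // assumed agent label
        git url: 'https://example.com/scm/my-app.git'   // hypothetical repository
        // the official Maven image stands in for the pinned target runtime
        sh 'docker run --rm -v "$PWD":/app -w /app maven:3-jdk-8 mvn -B verify'
    }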

Databases can add further complexity. How do you avoid problems if a deployment affects not just one but several databases at the same time?
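
One common tactic, sketched here with Flyway as an example migration tool and purely illustrative connection details, is to apply versioned schema migrations to every affected database in a dedicated pipeline stage:

    // Sketch: apply the same versioned migrations to every affected database;
    // a failure stops the pipeline before the application is deployed.
    node {
        stage 'Migrate databases'
        for (url in ['jdbc:mysql://db1.example.com/app',
                     'jdbc:mysql://db2.example.com/app']) {
            // DB_PASS is an assumed environment variable on the agent
            sh "flyway -url=${url} -user=app -password=\$DB_PASS migrate"
        }
    }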

Performance tests are an Achilles' heel when introducing web-scale applications. For example: can the performance of a virtualized server be measured, and does the multi-tenant hardware deliver the required performance? With ad hoc tests, there is a risk that the tests will simply be forgotten. If you run them every night instead, you can quickly determine whether a change speeds up or slows down the application. This depends on the infrastructure as well as the application code, and it requires explicit criteria: what is acceptable and what is not?

Test cases must be created that are relevant to the application. Automated tests are beyond the scope of this article, but attending to all of the points above increases confidence that the software is fit for its operating environment.

What can you improve and achieve with CD?

The good news is that creating a CD pipeline with the Jenkins workflow DSL (a Groovy-based domain-specific language) isn't difficult. The CD process can be automated with just a few lines of code.

As a first step, you can convert the two CI jobs into a workflow:

Listing 1:
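
A minimal sketch of what such a workflow script could look like, with hypothetical repository and host names:

    // Sketch: both CI jobs (build, then deploy to test) combined in one
    // Jenkins workflow script; all names are assumptions.
    node {
        stage 'Build'
        git url: 'https://example.com/scm/my-app.git'   // hypothetical repository
        sh 'mvn -B clean package'                       // compile, unit-test, package

        stage 'Deploy to test'
        // copy the WAR into the test container (host and path are assumptions)
        sh 'scp target/my-app.war deploy@test-host:/opt/tomcat/webapps/'
    }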

 

If these tests pass, further stress tests can be used as a quality gate:

Listing 2:
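
Again a sketch rather than a definitive implementation; JMeter stands in as an example load tool, and the threshold check is a hypothetical script:

    node('stress') {                                    // assumed agent label
        stage 'Stress test'
        // JMeter in non-GUI mode; the test plan is a hypothetical example
        sh 'jmeter -n -t stress-plan.jmx -l results.jtl'
        // hypothetical script that exits non-zero, and thereby stops the
        // pipeline, if the measured values miss the agreed thresholds
        sh './check-thresholds.sh results.jtl'
    }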

 

The following stages are combined and automated in the workflow job:

  • Compilation
  • Integration tests
  • Stress tests as a quality gate in a typical environment
  • Load tests as a quality gate in all required environments
  • If all quality gates pass, release of all changes to production

Listing 3:
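
Putting it all together, a sketch of the complete workflow, reusing the hypothetical names from above:

    node {
        stage 'Build'
        git url: 'https://example.com/scm/my-app.git'   // hypothetical repository
        sh 'mvn -B clean package'

        stage 'Integration tests'
        sh 'mvn -B verify'

        stage 'Stress tests'
        sh 'jmeter -n -t stress-plan.jmx -l stress.jtl'
        sh './check-thresholds.sh stress.jtl'

        stage 'Load tests'
        sh 'jmeter -n -t load-plan.jmx -l load.jtl'
        sh './check-thresholds.sh load.jtl'

        stage 'Release'
        // reached only if every quality gate above has passed
        sh './release-to-production.sh'                 // hypothetical script
    }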

What is the next step? You can, for example, test against a copy of the CI pipeline. Coordination with the operations department also makes sense: the better the cooperation with the Ops staff, the earlier you can deal with potential errors and build a working CD pipeline on that basis.

Handling the complexity

So far, so good. We have seen how a workflow can reduce complexity by orchestrating a single CD pipeline. With multiple pipelines, further efficiency gains can be achieved by converting a pipeline that has proven reliable into a template, which can then be applied to other pipelines with similar requirements.

First, you capture everything the parallel jobs have in common and model the variable parts as attributes. This is shown below, with the differences between the template definition and the original workflow job called out in the leading comment.

Listing 4:
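
A sketch of what the template definition could look like, assuming the CloudBees templates plugin, which substitutes attribute values such as ${repoUrl} into the script before it runs:

    // Differences from the original workflow job: the hard-coded repository
    // URL and deployment target have been replaced by the template
    // attributes ${repoUrl} and ${deployTarget} (names are illustrative).
    node {
        stage 'Build'
        git url: '${repoUrl}'
        sh 'mvn -B clean package'

        stage 'Deploy to test'
        sh 'scp target/*.war ${deployTarget}'
    }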

 

You can take this one step further and make the template a blueprint for the pipeline itself. This may not look very impressive (see Figure 5), but that is exactly the point: just like a master slide in PowerPoint or a stylesheet for a website, the template abstracts and contains all the elements needed to build a working pipeline. This approach has, of course, been familiar for many years from working with build jobs.

The workflow defined earlier has the form shown in Figure 5. In the next step, a template for use in Jenkins is generated from it.

A pluginSource attribute is defined as shown in Figure 6.

The variable in Figure 6 produces the dialog in Figure 7; for demonstration purposes, the Jenkins Subversion plugin is built here.

In practice, the Jenkins Stage View looks like that of a normal job, but instead of the job definition, a link to the template "Build and deploy Jenkins plugin" appears (see Figure 8).

Templates have the great advantage that they support full inheritance. This means that, as with the PowerPoint master analogy, every change to the template is picked up by every dependent build job on its next run.

This allows a drastic reduction in the number and complexity of the build job definitions that must be maintained, which in turn frees up time for more productive (and more valuable) activities.

Conclusion

This article has outlined the key steps in making the transition to CD with Jenkins workflows. Every developer is encouraged to explore for themselves how to take the first steps toward faster software delivery, be it in a single commercial project or by experimenting with a TYPO3 installation in the cloud outside regular working hours.

Of course, you need a good process and good people to be successful. When CD is introduced, everyone involved has to change the way they work, so the needs of all parties must be taken into account. An operations team will likely work with different tools and priorities than a QA team, but both bring extremely valuable experience to the table. The transition from CI to CD is a consultation-intensive process and requires cultural change, but here too you can start with a small step, for example by having coffee with the company's own Linux guru.

[1] Jenkins