What does it take to create a winning software company? The ability to deliver valuable software, and to deliver it fast. How can we guarantee this high-speed service? With a Continuous Delivery (CD) process, supported by a well-honed Continuous Integration (CI) mechanism that provides flawless delivery, especially as a platform's components grow in number and dependencies.
This picture perfectly summarizes the virtuous CI/CD loop, which any DevOps engineer should have pinned above their desk:
In this article we will focus on the left side of the loop, the product’s journey from code to test.
When working with source code, git is the de facto standard. Here at BOOM, we use GitHub to manage the code lifecycle (alternatives include Gitea or Bitbucket). Each project has its own repository, which can be accessed by various team members with different roles. We use the "develop" branch to build the staging releases, and the "master" branch to build the production releases.
So far, so good. But how should actions performed on git repositories (e.g. pull requests and merges) be managed? How can artifacts be deployed in a controlled way across various environments?
The answer is a CI/CD tool.
At BOOM, at the very beginning, we used GitHub Actions for CI and Ansible/AWX for CD. This solution works when few software components are involved, but shows its limits as soon as your roadmap points to a distributed software model with many components and dependencies. Over time, the need to write shared libraries (e.g. logging libraries) or packages (e.g. React component libraries) used by more than one software component became more pressing, requiring maintenance and more efficient management of the entire ecosystem.
In my past life, I gained extensive experience with Jenkins, with all its pros and cons. But at BOOM, we are curious and eager to try new technologies and see if they fit our needs. So, together with the Engineering team, we decided to evaluate and try out various solutions, including some SaaS offerings, taking the following aspects into account:
After an evaluation period in which we tested a number of tools (CircleCI, Travis CI, TeamCity, Bamboo), we decided to go with Drone CI (https://www.drone.io) as the core of our CI/CD.
Drone offers us everything we need, in particular:
it’s open source, developed by a large community, and open to outside contributions;
it’s easy to install and maintain;
it’s Docker-based: everything runs in containers;
it offers native GitHub, GitLab, Bitbucket (and many other) integrations;
it adopts a YAML-based configuration, embracing the pipeline-as-code principle;
it’s easily scalable (and has an autoscaling feature on major cloud providers);
it includes many working plugins maintained by the community, and writing ad-hoc plugins or extensions is not complex;
it provides out-of-the-box secret management (but using an external system is also possible).
The basic concepts of Drone are quite similar to those of other CI tools. Any action performed on a git repository triggers Drone via webhook. If a pipeline is defined for the specific repository (i.e. a .drone.yml file is present in the repository root), Drone will analyze it and perform the requested actions. Whether the pipeline runs is decided through trigger definitions:
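A trigger block matching the branches and events described below looks like this (a reconstructed sketch in standard .drone.yml syntax):

```yaml
trigger:
  branch:
    - develop
    - master
  event:
    - pull_request
    - push
```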
In this specific scenario, the pipeline will run if - and only if - the target branch is either “develop” or “master”, and if the event is either “pull_request” or “push”.
Each pipeline is built using a sequence of steps, each one described with a syntax, such as:
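A build step along these lines can be sketched as follows (the step name is illustrative; the image and commands match the description that follows):

```yaml
steps:
  - name: build
    image: maven:3.6.3-jdk-11
    commands:
      - mvn clean
      - mvn install
```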
This is very easy to read: using the image maven:3.6.3-jdk-11, we run mvn clean and then mvn install. The following one needs no explanation.
But where are these actions performed? Where is the source code? As mentioned at the beginning, well-defined actions performed on a git repository trigger Drone via webhook. Drone takes care of cloning the git repository and sharing its content with all containers, mounting a specific path (/drone/src) in each of them and setting the container home there as well. As a result, files added to this folder in one phase are available in later phases. For example, the build output of the previous mvn command can be used to execute unit tests:
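A sketch of such a unit-test step (the step name and Maven goal are assumptions; the step reuses the build output in the shared /drone/src workspace):

```yaml
# another entry under the pipeline's steps: section
- name: unit-tests
  image: maven:3.6.3-jdk-11
  commands:
    - mvn test
```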
and maybe another one can be used to perform integration tests:
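For example, assuming a typical Maven setup where integration tests are bound to the verify phase, such a step might look like this (step name and goal are assumptions):

```yaml
# another entry under the pipeline's steps: section
- name: integration-tests
  image: maven:3.6.3-jdk-11
  commands:
    - mvn verify
```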
As the example above shows, we use simple docker containers to perform various steps, the majority of which are standard containers. Complex pipelines can also be built just by adding new steps until the desired results are achieved.
One of the most powerful features of Drone is the concept of a service. Sometimes, performing a specific task (e.g. integration tests) requires a supporting service such as a Redis instance or a Postgres instance. With many SaaS CI services, this means resorting to the docker-in-docker (dind) feature. With Drone, you can just define a service:
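A minimal sketch of such a service definition (the image tag, credentials, and database name are placeholders):

```yaml
services:
  - name: postgres
    image: postgres:13
    environment:
      POSTGRES_USER: test
      POSTGRES_PASSWORD: test
      POSTGRES_DB: testdb
```

Steps in the same pipeline can then reach the database at the hostname `postgres` (the service name) on its standard port.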
and Drone will take care of spinning up the needed Postgres instance, then killing it once the pipeline ends. What's left to do? Just instruct the test step to use this Postgres instance.
If none of the available plugins fit your needs, you can write your own. But what is a Drone plugin? Simple: it’s a container running code! And although Go is the preferred language to write plugins, it is possible to use another language.
Let’s look at this step:
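A sketch of such a step (the registry prefix, setting names, and values are placeholders; the image name and tag match the description that follows):

```yaml
- name: my-plugin-step
  image: my-registry/my-plugin:1.1.0
  settings:
    foo: hello
    bar: world
```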
and suppose you pushed a container my-plugin with tag 1.1.0 to your preferred image registry.
When executing this step, Drone will download your plugin image and run what is defined in its Dockerfile, after setting two environment variables, PLUGIN_FOO and PLUGIN_BAR, to the values defined in the step. Of course, this works fine for simple plugins; when they are more complex, it's better to use the drone-plugin-starter and write them in Go.
Test & test reports
Let's go back to the testing phase of our pipeline. As previously described, test steps can be added for unit and integration testing. The same strategy can also be applied to add steps performing other kinds of testing, such as Cypress tests, Postman tests, etc. Writing steps for these scenarios is, again, a matter of starting a suitable container and running commands in it. But what about test reports? Unlike Jenkins, where test results are attached to the pipeline run by a suitable plugin and accessed via the Jenkins UI, Drone is just a pipeline executor. It offers a nice UI, but it provides information strictly related to builds, nothing else. So how can test results be collected and made available to the engineering team?
The solution we found was an open-source project called Allure Docker Service, which provides a way of storing and organizing test results on a project basis. It is composed of an API layer (responsible for content ingestion and management) and a UI that allows browsing the reports easily and intuitively. It can accept reports in various formats (JUnit, TestNG, Allure and more) and supplies both a trend view for each project and a detailed view per run and test case. Our approach is to perform the following tasks:
running the various tests in a specific container and writing the test results to the shared file system;
sending the reports to our allure-service instance via its API, using a Drone plugin developed internally.
In other words,
Drone executes tests
Drone sends the test results to the Allure Docker Service
The test reports are made available to the engineering team through the Web GUI provided by Allure Docker Service.
For example, in the specific case of Cypress tests, this is the code snippet we use in our pipeline:
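A sketch of what such a snippet can look like (the images, the plugin name, and the setting keys below are all placeholders; only the paths match the description that follows):

```yaml
- name: cypress-tests
  image: cypress/included:9.7.0
  commands:
    # Allure results are written under /drone/src/cypress-results/allure,
    # i.e. inside the workspace shared by all steps
    - cypress run

- name: publish-reports
  # hypothetical internally developed plugin image
  image: my-registry/allure-uploader:latest
  settings:
    results_dir: cypress-results/allure
    project_id: my-project
```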
The first step runs the cypress tests and stores the results in allure native format under /drone/src/cypress-results/allure, while the second one sends the results to the allure-service on our system.
It might seem like a workaround for the fact that Drone is just a pipeline executor, but in my experience, the best way to operate is to have each platform component in charge of a single task. Monolithic applications (such as Jenkins), where everything is collapsed into one system, can be an issue when implementing changes; loosely coupled components, on the other hand, make it possible to change one element without changing everything else as well.
The final result of a CI pipeline should be an artifact that can be used in any environment (staging, pre-production, production etc.). At the moment, our platform has three kinds of artifacts:
Docker images are stored on ECR, whereas we use Nexus Repository Manager OSS for npm packages and Java libraries. Drone makes it very easy to create these artifacts and push them to the appropriate location. For example, when dealing with Docker images, the following step is more than enough:
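A sketch using the community ECR plugin (the registry, repo, region, and secret names are placeholders; the step that extracts the version from the pom.xml into the tag is omitted here):

```yaml
- name: publish-image
  image: plugins/ecr
  settings:
    registry: 123456789012.dkr.ecr.eu-west-1.amazonaws.com
    repo: my-service
    region: eu-west-1
    tags:
      - latest
    access_key:
      from_secret: aws_access_key_id
    secret_key:
      from_secret: aws_secret_access_key
```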
As a result, a new version of the image will be pushed to your ECR, tagged with the version found in the pom.xml.

In this article, we described why we chose Drone for our CI and how we use it together with other tools to provide our engineering team with a top-class experience. The journey was very interesting and not always easy, but we were able to overcome various issues and take advantage of the ecosystem we built.
In the next Tech Talks article, we will discuss the right side of the virtuous loop, which focuses on the methods we use for CD.