
Advanced CI/CD with Azure DevOps

The idea for this blog post series arose from a customer project in which we introduced CI/CD because the manual workload was no longer manageable. The following instructions are therefore fresh from practical experience. For simplicity, we have shortened the long road of trial and error and present only the final results here. The code snippets are exemplary but sufficient to demonstrate the functionality.

Part 1: Go, Docker and self-hosted build agents

What is Azure DevOps?

Azure DevOps is a web platform from Microsoft that provides tools for various areas surrounding IT projects:
  • Azure Boards for project management
  • Azure Pipelines for CI/CD
  • Azure Repos for source code management
  • Azure Test Plans for manual testing
  • Azure Artifacts for artifact management
[Image: overview of the Azure DevOps services]
The tools work hand in hand: for example, work items from Azure Boards can be linked to pull requests in Azure Repos. Before a pull request can be merged, a pipeline in Azure Pipelines must confirm the correctness of the code and finally uploads an artifact to Azure Artifacts. In this blog post series, we will only use Azure Repos and Azure Pipelines.

The first CI pipeline – Go and Docker

Our first use case is a microservice written in Go that is to be deployed using Docker. We will create a CI pipeline that will:
  • Build and test the microservice
  • Build a Docker image
  • Upload the Docker image to a Docker registry
Azure DevOps offers two ways to create pipelines: via a graphical user interface or via YAML files that are checked in to a git repo. Typically, this file is committed to the root of the git repo under the name azure-pipelines.yaml (although the name can be chosen freely). Since we want to develop our pipelines as a team, document them, track changes, and reuse sections, we decided to take the advanced option with the YAML files. The microservice is very simple: it launches an HTTP server with a REST endpoint:
[Code listings: go.mod, main.go, Dockerfile]
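The original listings are embedded as screenshots in the post; a minimal sketch of what such a service could look like follows. The module name, port, endpoint path, and response text are assumptions, not the original code:

```go
// main.go – a minimal HTTP server with a single REST endpoint.
package main

import (
	"fmt"
	"log"
	"net/http"
)

// handler answers requests to the endpoint with a fixed text.
func handler(w http.ResponseWriter, r *http.Request) {
	fmt.Fprint(w, "Hello, World!")
}

func main() {
	http.HandleFunc("/hello", handler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

The accompanying go.mod simply declares the module, and the Dockerfile builds the binary in a multi-stage build (Go version and base image tags are likewise assumptions):

```
module microservice

go 1.17
```

```dockerfile
# Dockerfile – compile in a full Go image, run in a minimal image.
FROM golang:1.17 AS build
WORKDIR /app
COPY go.mod ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 go build -o /microservice

FROM alpine:3
COPY --from=build /microservice /microservice
EXPOSE 8080
ENTRYPOINT ["/microservice"]
```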
Now for the exciting part: the CI pipeline. In Azure DevOps, the executable part of a pipeline consists of stages, a stage consists of jobs, and a job consists of steps. For our simple case, a single stage with one job is quite sufficient. The functionality of a step is described by a task; for example, there is a Go task, a Docker task, and a Git checkout task. The Bash and PowerShell tasks even let us execute custom scripts, and for even more complex cases you can develop your own tasks in TypeScript. Since the build process is already completely defined in the Dockerfile, we only need docker build and docker push as build steps. For this, we use the Docker task. Besides the actual build process, we can also define in the pipeline:
  • for which events the pipeline should be triggered automatically
  • which variables and variable groups are to be used
  • whether the pipeline should be parameterized
  • whether additional git repositories should be checked out
All these settings can also be omitted, in which case the pipeline is automatically triggered for every git push, has no variables or parameters, and only its own git repository is checked out. This is exactly the behavior we want for our pipeline. Within the pipeline, we have access to some predefined variables that tell us, among other things, the name, organization, and checked-out commit of the git repo. A list of all predefined variables can be found in the official Azure DevOps documentation. Of course, you can also define your own variables at runtime to pass data between steps. We use
  • $(Build.SourcesDirectory), the path on the build agent where the git repo is checked out, as a path prefix for accessing files.
  • $(Build.Repository.Name), the name of the git repo, as the Docker image name.
  • $(Build.SourceVersion), the commit hash as the Docker image tag.
The pipeline definition then looks like this:
[Code listing: azure-pipelines.yaml]
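The original YAML is shown as a screenshot; based on the description above, a sketch of what it plausibly contains (the Microsoft-hosted ubuntu-latest pool is an assumption):

```yaml
# azure-pipelines.yaml – build the Docker image and push it to Docker Hub.
pool:
  vmImage: 'ubuntu-latest'

steps:
  - task: Docker@2
    displayName: Build and push Docker image
    inputs:
      containerRegistry: 'docker-hub'           # name of the service connection
      repository: '$(Build.Repository.Name)'    # image name = name of the git repo
      command: 'buildAndPush'
      Dockerfile: '$(Build.SourcesDirectory)/Dockerfile'
      tags: '$(Build.SourceVersion)'            # image tag = commit hash
```

Note that trigger, variables, and parameters are omitted, which gives us exactly the default behavior described above.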
We have specified 'docker-hub' as the target container registry here, which is a reference to a so-called service connection. A service connection describes a connection to an external service; it allows the pipeline to use the service without any credentials being stored directly in the pipeline. To create a new service connection, we go to the "Project Settings" page and from there to "Pipelines" → "Service Connections". Here we create a connection to a Docker registry on Docker Hub. The prerequisite for this is a (free) Docker account. Of course, any other Docker registry could also be used.
[Screenshot: creating a Docker Registry service connection]
Next, we need to enter our credentials for the Docker Hub account. Important: you have to create an access token on Docker Hub beforehand.
[Screenshot: entering the Docker ID and access token]
After committing all the files in git, all we need to do is create the pipeline in the Azure DevOps interface, pointing it to our azure-pipelines.yaml file. To do this, we go to “Pipelines” and then click on “Create Pipeline”:
[Screenshot: the "Create Pipeline" button]
Our azure-pipelines.yaml file resides in Azure Repos:
[Screenshot: selecting Azure Repos as the code location]
After selecting our git repo, Azure DevOps automatically recognizes our azure-pipelines.yaml because it is the only YAML file in that repo.
[Screenshot: review of the detected azure-pipelines.yaml]
A well-considered click on “Run” and we can finally see the fruits of our labor:
[Screenshot: successful pipeline run]
As we can see, the Docker image was built and automatically uploaded to Docker Hub:
[Screenshot: the image on Docker Hub]
A quick test in a local shell confirms that everything worked fine and the Docker image can now be pulled from anywhere:
[Screenshot: pulling and calling the service from a local shell]
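The screenshot is not reproduced here; such a test might look like the following, where the Docker Hub user, image name, and tag are placeholders, and the port and /hello endpoint come from the earlier sketch:

```bash
# Pull and run the freshly pushed image, then call the REST endpoint.
docker run -d -p 8080:8080 <dockerhub-user>/<repo>:<commit-hash>
curl http://localhost:8080/hello
```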

Unit Tests and Code Coverage

Automated testing is part of every good pipeline. That's why we will now add a unit test to our Go project and a step in the pipeline that executes the test. If any test fails, the pipeline should terminate, and the Docker image should neither be built nor pushed. Our test sends an HTTP request to the standalone server and checks the response:
[Code listing: main_test.go]
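The original test is again a screenshot; one way to write such a test, assuming the handler from the sketch above, uses the standard library's httptest package:

```go
// main_test.go – verify that the endpoint returns the expected response.
package main

import (
	"io"
	"net/http"
	"net/http/httptest"
	"testing"
)

func TestHandler(t *testing.T) {
	// Start a throwaway HTTP server that serves our handler.
	srv := httptest.NewServer(http.HandlerFunc(handler))
	defer srv.Close()

	// Send a real HTTP request to the test server.
	resp, err := http.Get(srv.URL + "/hello")
	if err != nil {
		t.Fatalf("request failed: %v", err)
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusOK {
		t.Fatalf("unexpected status: %d", resp.StatusCode)
	}
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		t.Fatalf("reading body failed: %v", err)
	}
	if got := string(body); got != "Hello, World!" {
		t.Errorf("unexpected response: %q", got)
	}
}
```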
Locally, the test works already:
[Screenshot: local go test run]
Next, we add the test execution to the pipeline. Azure DevOps provides two predefined tasks for Go projects: GoTool and Go. With GoTool we select the Go version for the pipeline; with Go we can run Go commands. Before running the tests, we build our Go project. Although this wouldn't be strictly necessary, it helps in debugging to know whether an error occurs during the build (syntax error) or only when running the tests (semantic error). The build itself requires two steps: go mod download to download the libraries and go build to compile. The tests are then executed using go test. Now we are ready to extend the pipeline as follows:
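The extended step list is not preserved in the scraped post; a sketch of the described steps, with the Go version as an assumption:

```yaml
steps:
  - task: GoTool@0
    inputs:
      version: '1.17'            # assumed Go version

  - task: Go@0
    displayName: go mod download
    inputs:
      command: 'custom'
      customCommand: 'mod'
      arguments: 'download'
      workingDirectory: '$(Build.SourcesDirectory)'

  - task: Go@0
    displayName: go build
    inputs:
      command: 'build'
      workingDirectory: '$(Build.SourcesDirectory)'

  - task: Go@0
    displayName: go test
    inputs:
      command: 'test'
      arguments: './...'
      workingDirectory: '$(Build.SourcesDirectory)'

  # ...followed by the Docker@2 build-and-push task from above, unchanged.
```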
After committing and pushing in git, the pipeline should automatically start, build the application, and execute the test:
[Screenshot: pipeline run with passing test]
We’re beginning to get a feel for CI and how to implement it in Azure DevOps. To make sure that the negative case also works, we now change the code so that the test fails:
[Code listing: modified main.go]
As expected, the pipeline fails and aborts before the Docker image is built:
[Screenshot: pipeline aborted after failed test]
However, to find out specifically which test failed and why, we need to look in the logs. For a single test this is not a problem, but with hundreds of tests we don't have time to scroll through thousands of log lines to find the one that failed. We also don't immediately see what percentage of tests failed. Fortunately, Azure DevOps provides an interface for publishing test results in JUnit XML format. To make use of this feature, we need to convert the output of go test into this format. Luckily, someone else has already done this work for us and written a corresponding Go tool: https://github.com/jstemmer/go-junit-report.

We are also interested in test coverage. For this, too, there is an interface in Azure DevOps and ready-made tools for converting the coverage data into the right format. For this whole process, we create a Bash task that does the following: first it downloads the necessary tools, then it runs the tests, remembering the return code for later. We want to use the return code of go test as the return code of the whole step, so that Azure DevOps knows whether the step failed or not; but before exiting, we need to prepare the report and coverage, both in case of success and in case of failure.

Afterwards, we add the two tasks PublishTestResults and PublishCodeCoverageResults to the pipeline. Here it is important to add condition: succeededOrFailed(). Normally, subsequent steps are not executed if a step fails (the default is condition: succeeded()); with condition: succeededOrFailed() they are executed even if previous steps failed, but, unlike condition: always(), not if the pipeline was manually aborted. Side note for builds that are going to run on a self-hosted build agent: the PublishCodeCoverageResults task expects the build agent to have a .NET runtime installed. Here is the finished pipeline:
[Code listing: the finished azure-pipelines.yaml]
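The finished YAML is again a screenshot in the original post; the following sketch reproduces the described logic. The Go version, the file names, and the coverage converter (gocover-cobertura, which the post does not name explicitly) are assumptions:

```yaml
steps:
  - task: GoTool@0
    inputs:
      version: '1.17'                  # assumed Go version

  - task: Bash@3
    displayName: Build, test, and convert reports
    inputs:
      targetType: 'inline'
      workingDirectory: '$(Build.SourcesDirectory)'
      script: |
        # Download the converters for JUnit XML and Cobertura coverage.
        go install github.com/jstemmer/go-junit-report@latest
        go install github.com/t-yuki/gocover-cobertura@latest
        export PATH="$PATH:`go env GOPATH`/bin"   # backticks avoid Azure's $() macro syntax

        go mod download
        go build

        # Run the tests, remembering the return code for later.
        go test -v -coverprofile=coverage.txt ./... 2>&1 | tee test-output.txt
        TEST_RC=${PIPESTATUS[0]}

        # Prepare report and coverage, in case of success and failure alike.
        go-junit-report < test-output.txt > report.xml
        gocover-cobertura < coverage.txt > coverage.xml

        # Use the return code of go test as the return code of the whole step.
        exit $TEST_RC

  - task: PublishTestResults@2
    condition: succeededOrFailed()
    inputs:
      testResultsFormat: 'JUnit'
      testResultsFiles: '$(Build.SourcesDirectory)/report.xml'

  - task: PublishCodeCoverageResults@1
    condition: succeededOrFailed()
    inputs:
      codeCoverageTool: 'Cobertura'
      summaryFileLocation: '$(Build.SourcesDirectory)/coverage.xml'

  - task: Docker@2
    displayName: Build and push Docker image
    inputs:
      containerRegistry: 'docker-hub'
      repository: '$(Build.Repository.Name)'
      command: 'buildAndPush'
      Dockerfile: '$(Build.SourcesDirectory)/Dockerfile'
      tags: '$(Build.SourceVersion)'
```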
After a successful run of the pipeline, we now see the test results and coverage graphically displayed in two new tabs:
[Screenshots: the new test results and code coverage tabs]
Similarly, in case of a failed test, we see an accurate message about the cause:
[Screenshot: report of the failed test]

Conclusion

Azure DevOps offers a good way to create pipelines quickly and conveniently. The graphical user interface is simple, easy to understand, and a good option for beginners. Connecting to external services is also quick and simple. At the same time, the switch to YAML syntax makes pipelines suitable for more complex use cases and allows you to define pipelines of almost unlimited complexity. In this blog post, we have only looked at a fraction of the features of Azure DevOps. In part 2 we will
  • create another Go project, with its own pipeline, that our first Go project uses as a dependency.
  • create a pipeline template that can be reused across multiple pipelines.
  • add an intelligent versioning algorithm based on Semantic Versioning to the pipeline.

Dennis Heller
Dennis Heller loves a varied workday and therefore enjoys working on projects where, in addition to development tasks, he also takes on responsibility in project management, in onboarding new team members, and as a software architect. He has experience with Kubernetes, Docker, Linux servers, Java, Python, PHP, and Go, as well as JavaScript, TypeScript, and Vue.js.
