Assembling DevOps bricks – A technical illustration

This article will not explain the overall concepts and principles around DevOps. Nor will it discuss the related organizational and cultural changes.

It will instead give an example of a concrete implementation of a DevOps technical infrastructure, since such an infrastructure is a cornerstone of every DevOps practice. Be aware, however, that it will not give you “the” right solution for implementing your DevOps infrastructure. There are actually plenty of different solutions and tools that can help in setting up a DevOps technical architecture, and these technical choices will differ from company to company, depending on the defined goals, the constraints (technical, financial…) and the current state.

We have built a proof of concept aimed at validating an architecture and at making sure that some DevOps tools and solutions could play nicely together.

Here are the tools and solutions on which this proof of concept is built:



Microservices:

Instead of having a single massive system, which would be hard to maintain and to evolve, we decompose our system into a set of loosely-coupled collaborating services. Each service implements a set of narrowly related functions. The collaboration is done through well-defined interfaces.

In our proof of concept, we build a website which consists of 3 “microservices”: a Web interface (developed with Angular 2) and 2 backends which expose some REST APIs.


This microservices approach brings several benefits:

  • Evolutions are easier to design, as the complexity is broken down into multiple components
  • Several teams of developers can work independently and simultaneously on the different components
  • The releases of these components can be built, tested (and, under certain conditions, deployed) independently

Be aware, however, that microservices bring their own challenges:

  • Versioning and backward compatibility of the APIs
  • Service Discovery (Auto-Discovery or Service Registry)
  • The need for a fault-tolerant architecture and appropriate monitoring tools


Docker Containers:

Docker is a good fit for a DevOps architecture based on microservices:

  • Common mechanisms for deployment, even across multiple heterogeneous components and application stacks
  • Optimization of computing resources (multiple containers can be hosted on the same host)
  • Resource isolation between containers (CPU, memory, block I/O, network, etc.)
  • Fault isolation (a buggy container will not harm the other co-hosted containers)

Our 3 application components are built as Docker images and deployed as Docker Containers.
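
As an illustration, here is the kind of Dockerfile one of the backends could use. This is a minimal sketch: the base image, port and artifact name are assumptions for illustration, not the actual files of the proof of concept.

```dockerfile
# Hypothetical Dockerfile for one of the REST backends
# (base image, port and jar name are illustrative assumptions)
FROM openjdk:8-jre-alpine

# The artifact built earlier in the pipeline is baked into the image
COPY target/backend1.jar /opt/app/backend1.jar

# The REST API is exposed on this port
EXPOSE 8080

CMD ["java", "-jar", "/opt/app/backend1.jar"]
```

Because the middleware, configuration and dependencies travel inside the image, the exact same artifact runs in the test and production environments.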



Jenkins:

Jenkins is an open source product that supports your Continuous Integration / Continuous Delivery pipelines. It is highly flexible and configurable, and it benefits from a rich ecosystem of plugins.

Our proof of concept is based on the latest Jenkins 2.0 release, as we take advantage of the new “Pipeline” feature:

  • Nice interface for real-time visualization and control of your pipelines
  • Pipeline as Code (Groovy scripting)
  • Support for parallelism and distributed jobs within pipelines
  • Ability to suspend/resume executing jobs

Jenkins is the cornerstone of this proof of concept architecture. Three pipelines are implemented, one for each of our application components. Each of these pipelines performs the following activities: retrieve the source code from Git, perform unit testing, build, store the artifacts, generate a temporary test environment, perform tests, deploy into production.
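
Such a pipeline could be sketched as follows with the scripted “Pipeline as Code” syntax. This is a hedged illustration: the stage contents, Maven commands and helper script names are assumptions, not the actual Jenkinsfile of the proof of concept.

```groovy
// Illustrative scripted Pipeline sketch (commands and script names are assumptions)
node {
    stage('Checkout') {
        checkout scm                              // retrieve the source code from Git
    }
    stage('Unit tests') {
        sh 'mvn test'                             // run the unit tests
        junit 'target/surefire-reports/*.xml'     // publish the test results
    }
    stage('Build & store artifacts') {
        sh 'mvn package'                          // build the artifact
        sh 'docker build -t backend1:${BUILD_NUMBER} .'   // package it as a Docker image
    }
    stage('Integration tests') {
        // instantiate a temporary test environment, run the tests, tear it down
        sh './create-test-env.sh && ./run-tests.sh && ./destroy-test-env.sh'
    }
    stage('Deploy to production') {
        sh './deploy-production.sh'
    }
}
```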



Included and automated Quality Assurance:

Quality Assurance is fully integrated into our 3 Jenkins pipelines:

  • Unit Testing and Code Coverage control (JUnit, Cobertura)
  • Static Code Analysis (PMD, SonarQube)
  • Functional tests with real browsers (Selenium)
  • Tests of the REST APIs (SoapUI)
  • Performance tests (JMeter, Gatling)
  • Security tests (OWASP ZAP)

We can choose to fail the pipeline if a test fails or if a metric reported by a test exceeds a given threshold. Besides, all the reports are easily accessible from the build view in Jenkins.
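
For example, the `junit` step marks the build UNSTABLE when the published results contain failures, and the pipeline can promote that to a hard failure. This snippet is an illustrative sketch (the report path is an assumption):

```groovy
// Illustrative sketch: abort the pipeline when tests fail
junit 'reports/**/*.xml'     // publishes results; sets the build to UNSTABLE on failures
if (currentBuild.result == 'UNSTABLE') {
    error 'A test failed or a quality metric breached its threshold: aborting the pipeline'
}
```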


“Everything as code” and Source Control Repository as source of everything:

Everything is code:

  • Architecture: AWS CloudFormation templates (JSON format) define the network layout (VPC, subnets), the access control rules (Security Groups), the permissions (IAM Policies), the setup of our Jenkins server and the setup of our production environment.
  • Jenkins Pipelines: Groovy Scripts
  • Packaging of the application along with the needed middleware, configuration, and dependencies: Dockerfile
  • Application Source Code
  • Test plans/suites used by the testing tools within the Jenkins pipelines: the testers commit their test resources (test plans/suites for the third-party testing tools) to Git, and the Jenkins pipelines retrieve them from Git in order to launch the related tests.
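
To give a taste of the “Architecture as code” item, here is the shape of a CloudFormation fragment declaring a subnet and a Security Group. Resource names and CIDR ranges are illustrative assumptions, and the referenced VPC is assumed to be declared elsewhere in the template:

```json
{
  "Resources": {
    "AppSubnet": {
      "Type": "AWS::EC2::Subnet",
      "Properties": {
        "VpcId": { "Ref": "AppVPC" },
        "CidrBlock": "10.0.1.0/24"
      }
    },
    "WebSecurityGroup": {
      "Type": "AWS::EC2::SecurityGroup",
      "Properties": {
        "GroupDescription": "Allow inbound HTTP to the Web container",
        "VpcId": { "Ref": "AppVPC" },
        "SecurityGroupIngress": [
          { "IpProtocol": "tcp", "FromPort": 80, "ToPort": 80, "CidrIp": "0.0.0.0/0" }
        ]
      }
    }
  }
}
```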

At first glance, “everything is code” could look like the passing fancy of an extremist developer. But it actually helps significantly in achieving the following goals:

  • Full automation (do not spend time on repetitive tasks: automated systems will perform them more efficiently and more securely than you)
  • Everything is source-controlled in Git: track changes, roll back easily to a previous version, rebuild easily from scratch

From this perspective, Git is the source of everything. A Jenkins pipeline is automatically started whenever a commit is pushed to the Git repository related to that pipeline.


Public Cloud (AWS):

We chose Amazon AWS for hosting our proof of concept.

Unless you already have your own private cloud (where the same can be achieved), a public cloud enables a quick start. Going to the public cloud is especially efficient for the two following purposes:

  • Take advantage of the proposed Cloud services, in addition to the infrastructure and the computing and storage resources. You will be far more efficient if you use built-in, managed services instead of rebuilding everything by yourself (AWS-managed RDS databases, DynamoDB, CloudWatch for monitoring, AWS CloudFormation for building your infrastructure, AWS Elasticsearch for storing and indexing your logs, AWS ECS for the management of clusters of Docker containers…)


  • Cut down the costs of computing resources that are used only sporadically. In our case, this is especially true for the Test and Development environments, which are generated on demand and are ephemeral.

For our proof of concept application, we use the following architecture in the AWS Cloud:

Note: in a “real” environment, we would have some separate subnets for the Web containers and the backend ones.


We rely on the EC2 Container Service (ECS), which is a management solution for Docker containers. Here is how it works:

  • We create an ECS Task Definition, which references the Docker images (stored in the EC2 Container Registry) that we want to instantiate. It also states which fraction of the CPU / memory is allocated to each container. You can also add container-specific settings: environment variables, volume mount points, mapped ports.
  • We associate this ECS Task Definition to a newly created ECS Cluster. This ECS Cluster will automatically instantiate and start the Docker containers defined in the Task Definitions, onto the EC2 instances which are part of this cluster.
  • The EC2 instances of the ECS Cluster are automatically launched by an Auto Scaling Group, whose Launch Configuration contains what is needed to automatically associate the EC2 instances with the ECS Cluster (the ECS agent is installed and launched on the EC2 instances, and a configuration file used by this agent mentions the name of the targeted ECS Cluster)
  • The Elastic Load Balancer balances the incoming traffic for the 3 different containers (3 different ports) across the healthy EC2 instances of the Auto Scaling Group.
  • CloudWatch alarms are defined for when the CPU of the EC2 instances goes above a high threshold or below a low one. If one of these thresholds is breached, the Auto Scaling Group automatically scales out or in (automatically adding or removing an EC2 instance). When a new EC2 instance is added, it is automatically configured with the expected Docker containers by the ECS Cluster.
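
To give an idea, a simplified ECS Task Definition registering the Web container could look like this. The family name, image path, ports, sizes and environment variable are illustrative assumptions:

```json
{
  "family": "poc-web",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "<account-id>.dkr.ecr.<region>.amazonaws.com/poc-web:latest",
      "cpu": 256,
      "memory": 512,
      "portMappings": [
        { "containerPort": 80, "hostPort": 80 }
      ],
      "environment": [
        { "name": "BACKEND1_URL", "value": "http://backend1-elb.example.com" }
      ]
    }
  ]
}
```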

The only shortcoming of ECS we noticed might be the lack of a Service Discovery feature. In our small proof of concept, as a workaround, the URLs of the Elastic Load Balancer for Backends 1 and 2 are passed as environment variables to the Web Docker container. In the Dockerfile of this Web container, the values of these environment variables are written to a configuration file, which is then used by the web application.
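
This workaround can be sketched as a small entrypoint script: at container start-up, the backend URLs received as environment variables are materialized into the configuration file read by the web application. The variable names, defaults and file path below are assumptions for the sketch:

```shell
#!/bin/sh
# Sketch of a container entrypoint: write the backend URLs passed as
# environment variables into a config file for the web application.
# Variable names, defaults and file path are illustrative assumptions.
: "${BACKEND1_URL:=http://localhost:8081}"   # default used when the variable is not set
: "${BACKEND2_URL:=http://localhost:8082}"

cat > /tmp/config.json <<EOF
{
  "backend1Url": "${BACKEND1_URL}",
  "backend2Url": "${BACKEND2_URL}"
}
EOF
```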


Temporary and “production-like” Development and Test environments:

The Jenkins pipelines are completely independent from each other for running their integration tests: each pipeline instantiates a temporary, independent test environment for carrying out its defined tests. These test environments are decommissioned as soon as the tests are finished, so the related costs remain affordable.

These ephemeral test environments are:

  • Complete: all the other microservices are instantiated (we pick up their latest stable Docker image), along with the one being built and tested by the given pipeline. It is thus possible to perform real end-to-end functional tests.
  • Production-like: the architecture is the same as in production, and identical configuration of the other microservices is guaranteed through the Docker containerization. No risk of “yes, but it worked in Test!”.


Similarly, this proof of concept proposes the same kind of ephemeral, complete and production-like environments for the developers. We strongly believe that overall quality is significantly enhanced when developers can easily and quickly test the effects of their code changes on a complete and production-like Development environment. No risk of “yes, but it worked on my Development Environment!”.


We went even further and fully integrated this support for temporary Development Environments into the IDE (Eclipse):

  • The Lambda functions that start and stop a given Development Environment are called directly from a menu item within Eclipse.
  • The deployment of the generated artifact for the developed service can also be performed directly from Eclipse. We use the “AWS Toolkit” plugin for Eclipse and AWS CodeDeploy for the actual deployment of the artifact onto the newly created EC2 instance.
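
As an illustration, the Lambda function that starts a Development Environment could look like the following sketch. The “Developer” tag, the event format and the function names are assumptions for illustration, not the actual code of the proof of concept:

```python
# Hypothetical sketch of the Lambda that starts a developer's temporary
# environment. The "Developer" tag and event format are assumptions.

def instances_for_developer(ec2, developer):
    """Return the IDs of the EC2 instances tagged for the given developer."""
    response = ec2.describe_instances(
        Filters=[{"Name": "tag:Developer", "Values": [developer]}]
    )
    return [
        instance["InstanceId"]
        for reservation in response["Reservations"]
        for instance in reservation["Instances"]
    ]

def lambda_handler(event, context):
    import boto3  # imported lazily so the helper above can be tested offline
    ec2 = boto3.client("ec2")
    instance_ids = instances_for_developer(ec2, event["developer"])
    if instance_ids:
        ec2.start_instances(InstanceIds=instance_ids)
    return {"started": instance_ids}
```

A twin “stop” function would call `ec2.stop_instances` with the same instance IDs.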



Conclusion:

This technical article gave you a quick preview of the kind of technical infrastructure that can enable a DevOps practice. Such an infrastructure (along with all the necessary cultural and organizational changes) can really help you achieve the following goals:

  • Increase quality and control, reduce the risks
  • Increase agility, reduce the time to market
  • Put control back into the hands of the business: the paradox is that setting up such a DevOps technical architecture requires a lot of technical effort and skill. But once it is done, you can confidently give control back to the business, as it becomes easier, faster and more secure to develop, test and deploy any new business need.
  • Cut down the following costs:

- Indirect costs induced by low quality and/or delays

- Development costs (complexity is better managed, better development environments, higher efficiency)

- Operational costs (most of the repetitive tasks for building, testing and deploying the application are automated)

- Infrastructure costs (in production your system can automatically scale in and out; for test and development, temporary environments are instantiated on demand)

SOGETI Switzerland can help you to define and implement the DevOps technical architecture, which will best fit your needs. 


Arnaud Landié - Senior Consultant at Sogeti Switzerland

  • Pierre Schuffenecker
    Practice Leader Digital, Mobile & IoT for the German Speaking Part
    +41 (0) 76 811 11 92
  • Luis Marcos
    Practice Leader Digital, Mobile & IoT for the French Speaking Part
    +41 (0) 79 653 69 02