In my previous blog post, we started to explore how to shorten the Time-to-Market (TTM) of your simple application. We made some improvements, and to go further we will now leverage containerization. As in part 1, I intend to explain this simply so it is accessible to everyone.

The Current Status

Your application has been running for a while now, but you have come to realize it has a few limitations. Because your application is monolithic, it is not easy to scale: the frontend requirements are different from those of the backend database. The frontend needs to absorb a changing amount of traffic that varies throughout the day, for example. The database, on the other hand, mainly needs redundancy.

Your application is successful and you need to deploy more frontends to cope with the increase in traffic. With your current application design, that means deploying another frontend and backend on another server. You may then need to repeat that operation again, and now your application is clumsy. Each frontend writes into its own database, which is not good practice. Instead, we should have, for example, two databases (one active, one standby) and several frontends that all write into the same active database. The data is then regularly replicated to the standby one.

Third Acceleration

A better way to design our application is to split the frontend and the backend into two separate packages. We can then decouple both components and deploy them separately. We can now increase the number of frontends as needed without touching the backend.

Another improvement is to install each of these two components in a separate Virtual Machine (VM) instead of a physical server. Having a VM allows each developer to have their own isolated environment in which to deploy your application. They can then pass that VM to the operations team, and it should work right away. It is good for operations to also run some tests; however, for production it is better to rebuild the whole environment cleanly. Also, VMs are not easily scalable, so this is only a small improvement. The intent here is to give you a point of comparison between a VM and the containers we will see in the next section.

Your TTM at this stage is as shown below:

Fourth Acceleration

At this stage, what can improve our TTM is to leverage containerization. A container is different from a VM because it doesn't contain a whole Operating System; it contains only what it needs to operate. It is therefore smaller in size and starts faster than a VM.

You now have to convert your application into containers. Basically, each component becomes a separate container. Depending on the size of your application, as well as the number of developers working on it, you may have to split the frontend itself into several containers. Good practice is to put one function into one container and keep the whole loosely coupled. Each container stays independent and can be upgraded without impacting the others. This process is called containerization, and your application can now be called a microservices or cloud-native application. If you come across these words, you will now know what they mean: an application that is a group of containers working together.

To keep it simple, we will have just 2 containers: one for the frontend and one for the backend. The script we used in part 1 to create the environment and install the application is now replaced by a set of instructions used to build what we call an image. This image contains all the environment and dependencies required by the application. It is immutable and can be handed to operations, who can safely use it in production. It is cleaner than a VM and much smaller, so it is the preferred choice.
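As a sketch, that set of build instructions might look like the Dockerfile below. Everything here is illustrative: it assumes a hypothetical Node.js frontend, and the base image, port, and start command are assumptions, not part of the original application.

```dockerfile
# Hypothetical Dockerfile for the frontend image (names are illustrative)
FROM node:20-alpine          # minimal base image, not a full Operating System
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev        # install only the production dependencies
COPY . .
EXPOSE 8080                  # port the frontend listens on
CMD ["node", "server.js"]    # command that starts the application
```

Building it (for example with `docker build -t frontend:1.0 .`) produces an immutable image that operations can deploy as-is.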

A container runs that image to bring the component up. A container can be killed and re-created very quickly. It is also very easy to add frontend capacity, for example, by just launching a new container from the frontend image. However, this creation is still manual, as containers can't scale automatically on their own.
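With our two images, a minimal Compose file can describe the two containers together. This is only a sketch: the service names, image tags, and ports are assumptions for illustration.

```yaml
# docker-compose.yml - illustrative sketch, names and images are assumptions
services:
  frontend:
    image: frontend:1.0
    ports:
      - "8080:8080"
    depends_on:
      - backend
  backend:
    image: backend:1.0
    volumes:
      - db-data:/var/lib/data   # keep the database data outside the container
volumes:
  db-data:
```

Adding frontend capacity is then a one-liner, such as `docker compose up -d --scale frontend=3`, but someone still has to run it by hand.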

The speed of our TTM improved again:

Fifth Acceleration

When your application has a lot of containers and the need to scale out (add containers) or scale in (remove containers) becomes frequent, doing it manually is cumbersome. This is the time to think about hosting your application in a Kubernetes cluster. Kubernetes is a container orchestration tool that provides plenty of features to give your application flexibility and scalability. Container autoscaling is one of them (in Kubernetes a container is actually wrapped in what is called a Pod, but let's keep it simple). If the frontend becomes too loaded, Kubernetes automatically adds a new one based on criteria you configure. Conversely, if the traffic is very low, one frontend is removed. This is a simple example to give you an idea of it.
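Those scaling criteria are expressed declaratively. Below is a minimal sketch of a HorizontalPodAutoscaler, assuming a frontend workload named `frontend` and a CPU threshold chosen purely for illustration:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: frontend-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: frontend             # the workload to scale (assumed name)
  minReplicas: 2               # never go below two frontends
  maxReplicas: 10              # cap the scale-out
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80   # add Pods above 80% average CPU usage
```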

Another feature of Kubernetes is the ability to easily deploy replicas of your containers. Each replica is hosted on a separate node, making your application resilient to hardware failure.
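Replicas are also declared rather than created by hand. Here is a sketch of a Deployment asking for three copies of the frontend; the image name is an assumption, and note that spreading Pods across nodes is up to the scheduler (it can be reinforced with anti-affinity rules):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3                  # Kubernetes keeps three Pods running at all times
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: frontend
          image: frontend:1.0  # assumed image name
          ports:
            - containerPort: 8080
```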

Kubernetes also provides the ability to deploy a new release of your application with different strategies, avoiding or reducing downtime.
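One such strategy is the rolling update, which replaces Pods gradually instead of all at once. The fragment below would sit inside a Deployment `spec`; the exact numbers are illustrative:

```yaml
# Fragment of a Deployment spec - values chosen for illustration
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxUnavailable: 0   # never take a Pod down before its replacement is ready
    maxSurge: 1         # create at most one extra Pod during the rollout
```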

From there, it is even possible to give developers access to the Kubernetes cluster, where they can deploy their applications directly by themselves! They get a separate, production-like environment for their tests. There is then no separation anymore between developers and operations, and this is the spirit of you-know-which-word (I assume you've read part 1!).

If you want to learn more about Docker containers and Kubernetes, our Training course can be just what you need! Check it out!

Now our TTM improved a bit more, as the deployment of your application is quicker:

The Red Hat flavour of Kubernetes is called OpenShift, which provides additional features on top of vanilla Kubernetes. OpenShift has been optimized to let a developer quickly deploy their application directly from its code. The code becomes an image that is used to deploy containers in the OpenShift cluster. All of these intermediary steps are taken care of by OpenShift. This is what a pipeline does, without naming it! That is automation pushed to its finest, and this is the ultimate goal to reach. In this case our TTM is approaching the speed of light:

Kubernetes is a complex technology, and if you don't use it yet, you will have some homework to do to decide which flavour and which platform you are going to use:

Some Final Thoughts

For some small applications, this embedded OpenShift pipeline can be enough; however, for bigger applications there is a need for a more complex infrastructure to go from the code to a Kubernetes cluster. This is where you'll use more advanced pipeline tools. In a nutshell, a pipeline builds a new image from your source code when it is completed and pushes it into a registry. Then, a container is created and deployed in a Kubernetes cluster using that new image from the registry.
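As an illustrative sketch, such a pipeline could look like the GitLab-CI-style file below. The registry URL, image name, and Deployment name are all assumptions, not a recommendation for a specific tool:

```yaml
# .gitlab-ci.yml - hypothetical pipeline sketch, all names are assumptions
stages:
  - build
  - deploy

build-image:
  stage: build
  script:
    # build a new image from the source code and push it into a registry
    - docker build -t registry.example.com/myapp/frontend:$CI_COMMIT_SHORT_SHA .
    - docker push registry.example.com/myapp/frontend:$CI_COMMIT_SHORT_SHA

deploy:
  stage: deploy
  script:
    # point the running Deployment at the new image from the registry
    - kubectl set image deployment/frontend frontend=registry.example.com/myapp/frontend:$CI_COMMIT_SHORT_SHA
```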

When your application is in production and used by end-users, you can collect metrics and feedback from them that will be used to improve it. With a fast TTM, you can then quickly react to provide a feature in high demand, innovate with smart add-ons, or just fix a bug. Everything will flow better, your application will shine, and your business will thrive.

To keep your TTM on track or give it a nudge, you also need someone who has a view of the whole process, from code development to production, and can help solve issues along the way. The Release Manager fits perfectly in this role. You can check out the excellent article by my colleague Emmanuel Wagner, who shares his experience on this very topic.


Containerization, with the flexibility offered by Kubernetes, is a major step toward getting your application to market faster. Using Red Hat OpenShift, with its simple embedded pipeline, allows a developer to quickly deploy your application without any knowledge of containers or Kubernetes. They can focus solely on development and quickly see the final result.

I hope you've enjoyed this journey and now have a clearer idea of what the DevOps process is through this simple example. To be successful with your excellent application, remember to start with small improvements and build on them!