Is the future of applications “containe(r)d”?

Reading Time: 4 minutes

Well, I’d be lying if I said I hadn’t heard of containers in the past few years, although I’ve barely used them at my job (solutions engineer). Shouldn’t I consider them more often whenever my customers deploy applications on their servers?

Like many people working as sysadmins these days, I am used to virtualization, in particular virtual machines (VMs). Customers still use them (and will continue to) to deploy their applications inside an OS, which delivers great benefits compared to the legacy bare-metal (mainframe) era.

Although there are other ways to deploy your applications (it always depends on the application, but let’s take a general approach), containers are always on the minds of CxO people because of their advantages over virtualization and the trend they have become in recent years.

But which is the correct approach for an application? As always, it depends, but I am going to talk about the technologies used today and the trend I can see.

Talking about virtualization…

For many years, virtual machines have been the way to go for deploying applications on servers. You make the most of the hardware by running a thin “OS” (the hypervisor) and, on top of it, your VMs, to which you can assign virtual resources as you wish.

This has been (and continues to be) the first approach for many new companies, as it’s now quite standardized.

In my opinion, VMware is the best-known vendor, offering its ESXi hypervisor, which has proved to be the standard for VMs.

I am not going to dive into this, as you can find more information on Google (or whatever search engine you prefer).


Talking about containers…

It’s well known that the most popular container runtime is Docker, although Podman seems to be its direct rival (a perspective based only on my limited understanding).

Also, segmenting your application into different services (containers) will normally let you scale, perform, etc. better than running it on VMs.

The principal benefit of running services in containers is that you have a single OS on which every container runs. All the dependencies of the application you are deploying travel “with the container”, and the isolation between containers is managed by the container runtime (Docker in the image that you see below):
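The “dependencies travel with the container” idea comes from layered images: each layer adds or overrides files, and the runtime presents the merged result to the process. A minimal sketch of that merge, with plain dicts standing in for filesystem layers (all names here are hypothetical, not a real image):

```python
# Each "layer" maps a file path to its contents; later layers win,
# which is roughly how a union filesystem (e.g. overlayfs) resolves
# conflicts between image layers.
def merge_layers(*layers):
    view = {}
    for layer in layers:        # apply layers bottom-up
        view.update(layer)      # later entries override earlier ones
    return view

base_os   = {"/bin/sh": "shell v1", "/etc/os-release": "debian"}
runtime   = {"/usr/bin/python3": "python 3.9"}
app_layer = {"/app/main.py": "print('hi')", "/bin/sh": "shell v2"}

rootfs = merge_layers(base_os, runtime, app_layer)
# The container sees a single filesystem where the app layer's
# /bin/sh has overridden the base one.
```

The same mechanism is why two containers on one host can ship different versions of the same library without conflicts: each one only sees its own merged view.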

Is there a mix? Something…

The main question I had the first time I saw how a container runtime works was: how “isolated” is each container from the others? And what about a security breach in the shared OS?

And that’s when I found a mix of both worlds (which I really need to dig into).

Kata Containers is a promising open-source project that combines both worlds, trying to deliver the best features of each technology.

By running a dedicated kernel (part of the operating system), it provides isolation at many resource levels, such as network, memory, and I/O, without the performance disadvantages that virtual machines have.

It also removes the need to nest containers inside virtual machines (something I do like in some environments, to add isolation beyond what a container runtime alone can provide).

Obviously, not everything is good: it has some limitations regarding networking, host resources, and more. You can find them here.


We don’t know the future, but containers will surely remain a trend for many years.

You can see that even VMware has embedded Kubernetes (a container orchestrator) in its products.

So, will everything be containers, a mix of containers and VMs, or could something like Kata Containers be the next big thing?

We will see. For now, let me research this open-source project some more and see how it really delivers!


Finishing a Computer Engineering degree with DevOps stuff

Reading Time: 4 minutes

It’s been more than a month since I published something here, but I’ve changed my learning focus quite a lot: from CCNA to DevOps things.

TL;DR: I will be building an automated CI/CD pipeline for my final assignment, focusing on tools installed and configured on-premises, although there will be some cloud services, like the front end.

Also, I forgot to mention that last month I started the final semester of the Computer Engineering degree I began back in 2014 (oh my!), and I expect to finish it (if I pass the last “subject”) in January 2021!

And now let’s move to the point.

In this last semester, I have to deliver the “final assignment”, which consists of a project of my own that will be documented and then defended (virtually, given the current circumstances) before a university panel.

In my case, I finally decided to get into the DevOps world, and my assignment is building a production CI/CD pipeline.

Some sort of introduction…

I suppose you’re aware of the current trending topic of containers and container orchestrators, in particular (you know them) Docker and Kubernetes.

Those two are the most used technologies in the DevOps world because they work great together, although there are alternatives that could work just as well.

Regarding DevOps, you probably know it is a culture that follows a set of practices where software development and IT operations are combined in order to speed up and improve the application delivery process (a.k.a. the SDLC).

Continuing with DevOps, there is a pipeline, or process, which combines the practices of CI (Continuous Integration) and CD (Continuous Delivery), and that’s the process I am going to describe and build for my final assignment.
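To make the CI/CD idea concrete, here is a minimal, hypothetical sketch of a fail-fast pipeline: stages run in order, and the first failure stops the run, which is roughly what a real CI server (Jenkins, GitLab CI, etc.) does for you. The stage names and lambdas are placeholders, not a real build:

```python
def run_pipeline(stages):
    """Run (name, func) stages in order; stop at the first failure.

    Returns (completed_stage_names, failed_stage_name_or_None)."""
    completed = []
    for name, stage in stages:
        if not stage():              # each stage returns True on success
            return completed, name   # fail fast: later stages never run
        completed.append(name)
    return completed, None

# Placeholder stages standing in for real build/test/deploy steps.
pipeline = [
    ("build",  lambda: True),
    ("test",   lambda: True),
    ("deploy", lambda: True),
]

completed, failed = run_pipeline(pipeline)
# completed == ["build", "test", "deploy"], failed is None
```

The fail-fast behaviour is the whole point: a broken test stage means the deploy stage is never reached, so bad code doesn’t get shipped automatically.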

But…why this topic?

Good question… I know almost nothing about that world, and it’s an approach that suits many enterprises, though not all of them.

And it’s the same with containers: not every application should run in containers. But if re-coding your app to split it into micro-services would make it better, would you do it (probably at the cost of large amounts of money)?

Anyway, why am I choosing this topic?

I think it’s a great opportunity to finally look into this area, where developers need to push updates to production apps as fast as possible. We saw that even VMware focused on Kubernetes in its product catalog, so maybe you should take a look as well…

But not just because VMware did it: we are moving towards a faster world where everything is becoming more and more automated.

Just deploying containers and building micro-services may make you the coolest guy in the world, but in my opinion, knowing the use cases and some tools to provision and automate lots of things will make you smarter.

I believe this will help me gain knowledge in those areas and advance my career; therefore, I will be sharing all the useful information I research during the entire project.


It is known that there are many ways to build a CI/CD pipeline and many tools you can use for each phase, but in this project I will start with the “foundation” of the main tools (container runtime, container orchestrator, configuration and provisioning management, etc.).

All of them will be hosted on on-premises infrastructure, instead of going to the cloud, where there are plenty of tools that integrate many things and will help you avoid problems and headaches.

So, basically, I am aiming to build everything on-premises except the service itself (a web application), which will be hosted in the cloud in order to achieve better availability, resiliency, etc.

Therefore, the objective is a mixed on-premises and cloud CI/CD pipeline, with the main focus on the process rather than the application’s code.

That doesn’t mean the process where the developer pushes code to a repository (CI) will be neglected; in fact, some of the developer-facing tools will probably be cloud-based due to the simplicity that adds, but I can’t promise that will be my final approach.



In short, I am aiming to gain knowledge about this new area where developers and operations meet, and where “everything” is automated (or at least a great part of it).

Although there are many tools to build a CI/CD pipeline, learning which tools to use in each phase, and how and why they are chosen, will be key to clearly understanding the whole process from a technical perspective.

I forgot to mention that there are other things, like IaC (Infrastructure as Code) and version control, which are handy everywhere but especially in this environment: with code you can keep different versions and avoid more errors than you would by provisioning resources manually.
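The IaC point can be illustrated with a tiny desired-state sketch: the infrastructure is described in code (a set of server names here), that description lives in version control, and a reconcile step computes what to create or remove instead of someone clicking around manually. All the names below are made up for illustration:

```python
def reconcile(desired, actual):
    """Compare the desired infrastructure (defined in versioned code)
    with what actually exists, and return the actions needed."""
    to_create = sorted(set(desired) - set(actual))  # missing resources
    to_delete = sorted(set(actual) - set(desired))  # leftover resources
    return {"create": to_create, "delete": to_delete}

# The desired state is just code, so every change to it is diffable
# and reviewable before it touches any real server.
desired = {"web-01", "web-02", "db-01"}
actual  = {"web-01", "db-old"}

plan = reconcile(desired, actual)
# plan == {"create": ["db-01", "web-02"], "delete": ["db-old"]}
```

Real IaC tools (Terraform, Ansible, etc.) do essentially this diff-and-apply loop, just against real cloud and on-premises APIs instead of a set of strings.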