New role, moving to the cloud!

Reading Time: 3 minutes

Finally, I am writing something here.

It’s been almost 5 months since I wrote my last blog post. Time went by really fast, as my final project kept me quite busy between November ’20 and January ’21.

After finishing my bachelor’s degree final project in January, I’ve been enjoying some free time for myself, and with that I “forgot” about writing (a.k.a. procrastination). But not anymore, as I plan to start writing again like I did in the past.

More on that project and the bachelor’s degree in the next blog posts!

Getting to the point… I am writing this because finally, after waiting for 5 months (internal arrangements), I am moving to another position! So I am changing roles, but not companies.

Therefore, starting today, I’ll be a “Cloud SRE” within NTT.

Cloud what?

Wait… what the hell is a “Cloud SRE”?

I know… it’s a weird name, but it’s the official title for the position. A better name would probably be Cloud Automation Engineer (or similar), as I am not going to work closely with developers on their pipelines.

I’ll be working with technologies and tools like Git, Ansible, Terraform, Packer, and more.

Containers and Kubernetes (very well known) are also on the menu but the biggest change for me will be the infrastructure.

I’ll be working only with cloud providers, specifically GCP and Azure for now; we will see later on.

Challenges ahead

Well, it might sound like a mere position change, but for me it’s a huge one.

Going from VMware, Microsoft, and some Linux technologies on mostly on-premises infrastructure to mainly Linux with Git, Ansible, and containers in the cloud will be a huge change (one I am eager to start!).

So basically Infrastructure as Code (IaC), which is becoming more common these days and is quite convenient for the cloud.
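
To give a taste of what that looks like, here is a minimal, hypothetical Terraform sketch for GCP (the project ID, names, and machine type are made up for illustration; it assumes Terraform and GCP credentials are already set up):

```terraform
# Hypothetical example: one small VM on GCP, described as code.
provider "google" {
  project = "my-demo-project"   # made-up project ID
  region  = "europe-west1"
}

resource "google_compute_instance" "web" {
  name         = "web-01"
  machine_type = "e2-small"
  zone         = "europe-west1-b"

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-11"
    }
  }

  network_interface {
    network = "default"
  }
}
```

Running `terraform plan` and then `terraform apply` against a file like this creates the VM, and the file itself lives in Git, so the infrastructure can be versioned and reviewed like any other code.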

This doesn’t mean I won’t be posting more VMware things, but expect more container, cloud (GCP and Azure), and Linux content, as I will be learning a lot about them 🙂

P.S. This change will have a massive impact on my Windows-to-Linux journey, which I expect to write about shortly!

Is the applications’ future “containe(r)d”?

Reading Time: 4 minutes

Well, I’d be lying if I said I hadn’t heard of containers in the past few years, although I have barely used them at my job (solutions engineer). Shouldn’t I consider them more and more as a replacement for how my customers deploy applications on their servers?

Like many people working as sysadmins these days, I am used to working with virtualization, in particular virtual machines (VMs). Customers are still using them (and will continue to) to deploy their applications on an OS, which delivers great benefits over the legacy approach of the bare-metal (mainframe) era.

Although there are other ways to deploy your applications (it always depends on the application, but let’s take a general approach), containers are always on the minds of CxO people because of their advantages over virtualization and the trend they have become in recent years.

But which is the correct approach for an application? As always, it depends, but I am going to talk about the technologies used now and the trend that I can see.

Talking about virtualization…

For many years, virtual machines have been the way to go for deploying applications on servers. You get the most out of the hardware by running an “OS” (the hypervisor), and inside it you run your VMs, to which you assign virtual resources as you desire.

This has been (and continues to be) the first approach for many new companies, as it’s now quite standardized.

In my opinion, VMware is the best-known vendor, offering the ESXi hypervisor, which has proven to be the standard for VMs.

I am not going to dive deeper into this, as you can search for more information on Google (or whatever search engine you like to use).

 

Talking about containers…

It’s well known that the most famous container runtime is Docker, although Podman seems to be its direct rival (a perspective based only on my limited understanding).

Also, segmenting your application into different services (containers) will normally let you scale, perform, etc. better than running it on VMs.

The principal benefit of running services in containers is that you have a single OS on which every container runs. All the dependencies for the application you are deploying ship inside the container, and the isolation between containers is managed by the container runtime (Docker in the image below):
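
A quick, hypothetical way to see this shared-kernel model in action (assuming Docker is installed; the image and container names are just illustrative):

```shell
# Two isolated containers, one host kernel.
docker run -d --name app1 nginx:alpine
docker run -d --name app2 nginx:alpine

uname -r                    # kernel version on the host
docker exec app1 uname -r   # same version: the container shares the host kernel
docker exec app2 uname -r   # same again; isolation comes from the runtime,
                            # not from separate kernels
```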

Is there a mix? Something…

The main problem I saw the first time I looked at how a container runtime works is: how “isolated” is each container from the others? What about a security breach in the OS?

And that’s when I found a mix of both worlds (which I really need to dig into).

Kata Containers is a promising open-source project that combines both worlds, trying to deliver the best features of each technology.

By running a dedicated kernel (part of the operating system), it provides isolation at many resource levels, such as network, memory, and I/O, without the performance disadvantages that virtual machines have.

It removes the need to nest containers inside virtual machines (a pattern I do like in some environments to get stronger isolation than a container runtime alone can provide).
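
As a sketch of how this looks in practice (assuming Docker has been configured with the Kata runtime; the runtime name, kata-runtime here, depends on the installation):

```shell
# Run a container under Kata instead of the default runc runtime.
docker run --rm --runtime kata-runtime alpine uname -r
# Because Kata boots a lightweight VM with its own guest kernel,
# the kernel version printed can differ from the host's `uname -r`.
```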

Obviously, not everything is good; it has some limitations around networking, host resources, and more. You can find them here.

Summary

We don’t know the future, but containers will surely remain a trend for many years.

You can see that even VMware embedded Kubernetes (a container orchestrator) in their products.

So, will it be all containers, a mix of containers and VMs, or could something like Kata Containers be the next thing?

We will see. For now, let me research this last open-source project some more and see how it really delivers!

 

Finishing a Computer Engineering degree with DevOps stuff

Reading Time: 4 minutes

It’s been more than one month since I published something here, but I’ve changed my learning focus quite a lot, moving from the CCNA to DevOps things.

TL;DR: I will be building an automated CI/CD pipeline for my final assignment, focusing on tools installed and configured on-premises, although there will be cloud services, like the front end.

Also, I forgot to mention that last month I started the last semester of the Computer Engineering degree I began back in 2014 (oh my!), and I expect to finish it (if I pass the last “subject”) in January 2021!

And now let’s move to the point.

In this last semester, I have to deliver the “final assignment”, which consists of a project of my own that will be documented and then defended (virtually, given the current circumstances) before a university panel.

In my case, I finally decided to get into the DevOps world, and my assignment is building a production CI/CD pipeline.

Some sort of introduction…

I suppose you’re aware of the trending topic of containers and container orchestrators, in particular (you know them) Docker and Kubernetes.

These two are the most used technologies in the DevOps world because they work great together, although there are alternatives that could work just as well.

Regarding DevOps, you probably know it is a culture that follows a set of practices where the software development world and IT operations are combined in order to speed up and improve the application delivery process (a.k.a. the SDLC).

Continuing with DevOps, there is a pipeline or process that combines the practices of CI (Continuous Integration) and CD (Continuous Delivery), and that’s the process I am going to describe and build for my final assignment.
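
As a rough illustration of what such a pipeline looks like (a hypothetical, minimal GitLab CI file, not my final design; image, script, and deployment names are made up):

```yaml
stages: [build, test, deploy]

build-image:
  stage: build
  script:
    - docker build -t myapp:$CI_COMMIT_SHORT_SHA .

unit-tests:
  stage: test
  script:
    - docker run --rm myapp:$CI_COMMIT_SHORT_SHA ./run-tests.sh

deploy:
  stage: deploy
  script:
    - kubectl set image deployment/myapp myapp=myapp:$CI_COMMIT_SHORT_SHA
```

Every push triggers the stages in order: build the container image, test it, and roll it out to the orchestrator.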

But…why this topic?

Good question… I know almost nothing about that world. DevOps is a good approach for many enterprises, but not for all of them.

And the same goes for containers: not every application should be in containers. But if you could re-code your app to split it into microservices and make it better, would you do it (which probably means spending large amounts of money)?

Anyway, why am I choosing this topic?

I think it’s a great opportunity to finally take a look into this area, where developers need to push updates to production apps as fast as possible. We saw that even VMware focused on Kubernetes in their product catalog, so maybe you should take a look as well…

But not just because VMware did it; we are moving to a faster world where everything is becoming more and more automated.

Just deploying containers and building microservices might make you the coolest guy in the world, but in my opinion, knowing the use cases and some tools to provision and automate lots of items will make you smarter.

I believe this will help me gain knowledge in those areas and advance my career; therefore, I will be sharing all the useful information I research during the entire project.

How?

It is well known that there are many ways to build a CI/CD pipeline and many tools you can use for each phase, but in this project I will try to start with the “foundation” of the main tools used (container runtime, container orchestrator, configuration and provisioning management, etc.).

All of them will be hosted on on-premises infrastructure, instead of going to the cloud, where there are a lot of tools that integrate many things and help you avoid problems and headaches.

So basically, I am aiming to build everything on-premises except the service itself (which will be a web application); that will be hosted in the cloud, in order to achieve a better service in terms of availability, resiliency, etc.

Therefore, the objective is a mixed on-premises and cloud CI/CD pipeline, with the main focus on the process and not on the application code.

That doesn’t mean the process where the developer pushes code to a repository (CI) will be neglected; in fact, some of the developer-facing tools will probably be cloud-based due to the simplicity they add, but I can’t guarantee that this will be my final approach.

 

Summary

In short, I am aiming to gain knowledge about this new area where developers and operations meet, and “everything” is automated (or at least a great part of it).

Although there are many tools to build a CI/CD pipeline, learning which tools to use in each phase, and how and why they are chosen, will be key to clearly understanding the whole process from a technical perspective.

I forgot to mention that there are other things, like IaC (Infrastructure as Code) and version control, which are handy everywhere but especially in this environment: with code you can keep different versions and avoid more errors than when provisioning resources manually.

Increasing the heap memory on vCSA 6.7 services

Reading Time: 2 minutes

For some reason, our monitoring was randomly alerting that the “vsphere-ui” service on the vCSA was having problems. From the user perspective, we only noticed some slowness when navigating the HTML5 client.

Taking a quick look at the VAMI, I saw this message from the VMware vSphere Client service:

The server is running low on heap memory (>90% utilized.)

So it was time to solve those random alerts about memory utilization.

Let’s work a bit…

Accessing the vCSA via SSH (using PuTTY):
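
Per the KB linked at the end of this post, you can first list the current allocation from the appliance shell (flag per KB 2150757):

```shell
# Show the current memory allocated to the vsphere-ui service
cloudvm-ram-size -l vsphere-ui
```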

I can see the service has 1110 MB assigned. Since the deployed vCenter Server Appliance VM has 16 GB of RAM allocated (you can also see in the previous screenshot how much is assigned), I decided to give it roughly 1.5x (1665 MB), rounded to a sum of powers of 2:
512 + 1024 = 1536 MB.

Executed:

cloudvm-ram-size -C 1536 vsphere-ui 

 

Now, restart the affected service:

service-control --stop vsphere-ui;service-control --start vsphere-ui; 

And now check the allocated memory for the service we configured:

It seems the vCSA itself adjusted the value to what it considers best, so there is nothing for us to modify there. In the end, this service’s memory allocation changed from 1110 MB to 1792 MB.

 

Final note: obviously, other services were modified and now have less memory allocated; in general, the vCSA took a bit of memory allocation from each service (the most impacted was vmware-vpxd, with ~300 MB).

 

All this information can also be reviewed in this KB: https://kb.vmware.com/s/article/2150757

That’s all for this quick post!

 

WSFC – Validate Configuration wizard error

Reading Time: 2 minutes

This is a short post about Windows Server Failover Clustering (WSFC) and a problem I found when adding the nodes of a cluster using the “Validate a Configuration” wizard.

Running this wizard is recommended after configuring your nodes and before creating the cluster, in order to spot any misconfigurations.

So now, let’s go into the problem.

 

The issue

In the wizard, trying to add (in my example) the second node shows an error:

Failed to access remote registry on <FQDNoftheserver>. Ensure the remote registry service is running, and have remote administration enabled.

 

Possible solutions

  • Execute in PowerShell (PS): winrm quickconfig

This will set up WinRM (Windows Remote Management); more information in this link.

  • Review the NIC settings on the affected node:

Check the options “File and Print Sharing for Microsoft Networks” and “Client for Microsoft Networks” on the NIC through which you’re adding the node (based on what’s registered in DNS):

  •  Review that the “Remote Registry” service is set to “Automatic (Trigger Start)”.
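
The first and third fixes above can be sketched as commands (standard Windows tooling; run in an elevated PowerShell session on the affected node):

```powershell
# 1) Enable WinRM (Windows Remote Management)
winrm quickconfig -force

# 2) Make sure the Remote Registry service starts automatically and is running
#    (Set-Service cannot express the "(Trigger Start)" variant; plain
#    Automatic also keeps the service available to the wizard)
Set-Service -Name RemoteRegistry -StartupType Automatic
Start-Service -Name RemoteRegistry
```

The NIC bindings (“File and Print Sharing for Microsoft Networks” and “Client for Microsoft Networks”) still need to be checked in the adapter’s properties GUI.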

 

After that, you shouldn’t have any problems adding your nodes to the cluster from the wizard:

Now, you could continue with the testing options and so on, but this post is only meant to explain the error and how to solve it.

 

That concludes this quick post about Windows Server Failover Clustering and an issue you can find while trying to validate the configuration of your cluster from the wizard.