Veeam – Backup VMs in remote sites

Reading Time: 6 minutes

I was wondering why I haven’t talked about Veeam yet, when I use it almost every day in my job, not only administering backups but also doing new implementations.

Recently, I had to implement a design where I needed to back up VMs in remote sites, but not to centralized storage: the backups would be stored on each remote site’s local storage.

So, by deploying a VM that runs the backup proxy service and also acts as the backup repository, we can accomplish the goal. By using the local storage on each remote site, we save bandwidth and speed up both backup and restore of those remote virtual machines.



The scenario is the following: a dedicated VM running Windows Server 2016 Standard (a.k.a. W2016 STD) acting as both backup proxy and backup repository on each remote site, with Veeam B&R installed on the main site (the cloud, we could say).

This is the high level design:


So, we are going to back up all the VMs that are hosted in the remote ESXi hosts and also save the backup data in the local storage.

As said before, this way we save bandwidth and gain speed in both the backup and restore processes, should we need to perform either of them.

We will assume that we have a vCenter deployed and Veeam B&R installed, with the vCenter and all the remote ESXi hosts already added to the Veeam B&R console.



The implementation is pretty straightforward: we will deploy a dedicated VM on each remote site and then perform the following high-level steps:

– Backup repository: we are going to add a hard disk to the remote VM and use it as the backup repository for the site. We will take advantage of the capabilities of Windows Server 2016 and use ReFS as the filesystem for the added disk.

– Backup proxy: we just need to deploy the backup proxy service from the Veeam B&R console to the VM we are using. The backup proxy is the component that processes jobs and delivers backup traffic.

So, let’s go through each step!


Backup proxy service

First, our Windows guest OS VM is joined to the domain, so we won’t have any problems resolving its name or authenticating with domain account credentials.

Let’s add the proxy by going to the Backup Infrastructure tab > Backup Proxy > Add VMware Backup Proxy…

As this is a new server for Veeam, we will have to add it as a “server” by pressing “Add New…”:

Then, this window will appear, just enter the FQDN of your server:

Choose credentials and click Apply to install the transport service:

After that, you will be able to choose the newly added server from the drop-down menu:

Now, let’s configure the Transport mode and Datastores for this proxy (as in the previous screenshot):

And for the datastores, choose the ones that are connected to the ESXi host where the VM is hosted by selecting Manual Selection and adding them:


After configuring that, you will have the same configuration as in this screenshot:

Finally, just hit Next and apply any kind of traffic rule if you want:

Now, finish, and the proxy will be fully configured and ready.
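If you have many remote sites, the same result can be scripted with Veeam’s PowerShell snap-in instead of clicking through the wizard. This is just a minimal sketch, run on the B&R server; the server name and credential name are hypothetical placeholders:

```powershell
# Minimal sketch using the Veeam PowerShell snap-in (run on the B&R server).
# "eur-proxy01.domain.local" and "DOMAIN\veeam_svc" are hypothetical placeholders.
Add-PSSnapin VeeamPSSnapin

$creds = Get-VBRCredentials -Name "DOMAIN\veeam_svc"

# Add the remote VM as a managed Windows server, then register it as a VMware backup proxy
Add-VBRWinServer -Name "eur-proxy01.domain.local" -Credentials $creds
Add-VBRViProxy -Server (Get-VBRServer -Name "eur-proxy01.domain.local") -Description "EUR site proxy"
```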



We chose these options because they are the best fit for our deployment: a Windows VM that also hosts the backup repository where the backups will be stored.

For more detailed options about the Backup Proxy service go here.

After configuring each backup proxy, we will have a bunch of them in the Backup Proxies tab:


Backup repository configuration

For this step, I suggest following this article.

Basically, we just have to add a new hard disk to our dedicated VM as Thick Provision Eager Zeroed, format it as ReFS and, finally, add the backup repository in the Veeam B&R console.
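Inside the guest, the disk preparation can be done with the built-in Storage cmdlets. A minimal sketch, assuming the new disk shows up as disk number 1 and we want drive letter D (a 64 KB allocation unit is the usual recommendation for backup repositories):

```powershell
# Bring the new disk online, partition it and format it as ReFS with a 64 KB unit size.
# Disk number 1 and drive letter D are assumptions; check Get-Disk first.
Initialize-Disk -Number 1 -PartitionStyle GPT
New-Partition -DiskNumber 1 -UseMaximumSize -DriveLetter D
Format-Volume -DriveLetter D -FileSystem ReFS -AllocationUnitSize 65536 -NewFileSystemLabel "Backups"
```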

That article also explains the benefits of ReFS, so I think it’s more detailed and easier to follow.

After we configure all the backup repositories, we will have the same amount as the backup proxies:


As you can see in the previous screenshot, the path (D:\Backups) is on the disk that we added to the VM on each remote site. We pointed the backup repository at that path because, as explained before, that disk is formatted with ReFS.

Backup job configuration

After configuring the backup proxy and backup repositories on each site, we are ready for the last step: configuring the backup job.

Go to the Home tab and then Backup… > Virtual Machine:

Now, step by step, pick a name for the job:

Proceed to select the VMs you want to back up (in our case, the ones in the EUR site):


Let’s continue, and in the Backup proxy section, click Choose… and select the corresponding backup proxy (EUR_proxy):


Press OK and go to Advanced. Configure it like this if you want synthetic full backups:


And then the monthly health check (recommended):

Accept, and here is the summary for this step (we will keep 7 restore points in our case):

Configure any option as you like (none in my case):

And finally, set the schedule that you want and finish the configuration of the job!

And that would be all for this remote site. We have to do the same for the other remote sites and our job will be done!
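Repeating the wizard on many sites can get tedious; the job creation can also be sketched with Veeam’s PowerShell snap-in. The job, VM and repository names below are hypothetical placeholders:

```powershell
# Minimal sketch: create a backup job for the EUR site pointing at its local repository.
# The VM name pattern, repository name and job name are hypothetical placeholders.
Add-PSSnapin VeeamPSSnapin

$vms  = Find-VBRViEntity -Name "EUR-*"              # VMs of the remote site
$repo = Get-VBRBackupRepository -Name "EUR_repo"    # local repository on that site

Add-VBRViBackupJob -Name "EUR - Daily Backup" -Entity $vms -BackupRepository $repo
```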



Finally, with this design you will be able to back up remote sites and store the backups in the local storage of each site.

If you don’t want to use a dedicated VM as a backup proxy, you can install the backup proxy service on a VM with low usage; however, it’s recommended to use a dedicated VM that holds both the backup proxy service and the backup repository (the attached virtual hard disk).



Migrating ADFS from 2012 R2 (3.0 v) to 2016 (4.0 v.)

Reading Time: 5 minutes

Today I will explain how to migrate ADFS from 2012 R2 (v3.0) to 2016 (v4.0) with almost no downtime. The overall process consists of adding the new ADFS server to the farm, assigning the primary role to the new ADFS server, making some changes, and then we’re done.


The current environment is:

  • 1 x WAP Server (W2012 R2)
  • 1 x ADFS Server (W2012 R2)

No applications published, just an Office 365 Relying party trust.

A DNS A record that points to the ADFS IP address.


And the future environment will be:

  • 1 x WAP Server (W2016) -> Not in this post
  • 1 x ADFS Server (W2016) -> In this post

Planning for your ADFS Migration

  1. Active Directory schema update using ‘ADPrep’ with the Windows Server 2016 additions (not necessary in my case)
  2. Build a Windows Server 2016 server with ADFS and join it to the existing farm.
  3. Promote one of the ADFS 2016 servers as “primary” of the farm, and point all other secondary servers to the new “primary”.
  4. Change the DNS record to the new server’s IP address.
  5. Raise the Farm Behavior Level (FBL) to ‘2016’.
  6. Test that the setup works correctly.
  7. Remove the old ADFS server (W2012 R2) from the farm.

Upgrading Schema

Now, time to upgrade the schema of the AD:

Mount the Windows Server 2016 Datacenter installation media and run:

Adprep /forestprep

In my case, it was already updated (my domain is at the W2012 R2 level, so it seems I didn’t need it).


Installing and configuring ADFS

Once we have deployed a new Windows Server 2016 machine and joined it to our domain…

Install the role of ADFS in your target server and then continue with the post-deployment config:


Provide an account with Domain Administrator permissions:


Provide your federation service name. You can review it on the current primary ADFS server by clicking Properties on the root folder of the ADFS console:


In our case “”:


Specify your SSL certificate (usually your wildcard):


Then, provide a service account (a managed service account is recommended):


Review your configuration and, after the pre-requisite checks, proceed with the “Configure” button:


After the installation, you will see some warnings; they will be fixed later by rebooting the server and making this new server the primary ADFS server in the farm:

Then, we will proceed to reboot our server.


Configuring as a “PrimaryComputer” in the ADFS farm

Once the machine has restarted, open the ADFS Management Console, and you’ll notice it’s not the primary federation server in the farm.

Open a PS console and execute:

Set-AdfsSyncProperties -Role PrimaryComputer


After that, we can access the ADFS console from our new ADFS server without the warning:


Execute this on the other ADFS servers (we will point the new ADFS server as the PRIMARY):

Set-AdfsSyncProperties -Role SecondaryComputer -PrimaryComputerName <new-ADFS-server-FQDN>

Then, we will check on our old ADFS server that everything is correct:
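The roles can be verified from PowerShell on any farm member by checking the sync properties; on the old (now secondary) server you should see the new primary’s FQDN. A quick sketch:

```powershell
# On the old (now secondary) ADFS server: Role should be SecondaryComputer
# and PrimaryComputerName should show the new ADFS server.
Get-AdfsSyncProperties | Select-Object Role, PrimaryComputerName
```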

Details to bear in mind

So, in my case, I have a DNS A record that points to an IP address (the ADFS server).

After promoting the new server, I had to modify the hosts file on the WAP server in the DMZ to point to the new server!

Also, I updated the DNS A record on the internal DNS with the new server’s IP address.
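For reference, the hosts-file entry on the WAP server looks like this (the IP address and federation service name are illustrative, not from my environment):

```
# C:\Windows\System32\drivers\etc\hosts on the WAP server
10.0.0.25    adfs.contoso.com
```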



Error with O365 relying party trust

After migrating the service from ADFS 3.0 (W2012 R2) to ADFS 4.0 (W2016), I faced a problem when updating the O365 relying party trust.

The solution was to apply a fix described by Microsoft:

Basically, what you have to do is add a couple of registry values on this new ADFS server, because it’s Windows Server 2016 running ADFS 4.0.

Once you apply the fix, reboot the server and it works flawlessly!


Testing the new setup

To check that it’s really working, try to log into your Office 365 portal; it must redirect you to the portal of your federation service.

As the WAP service isn’t migrated yet, it should still respond correctly; but if the configuration is wrong, it won’t be able to retrieve the configuration from the ADFS service.

Removing the old ADFS server

Once you have tested that it works correctly (both ADFS servers will have the configuration replicated), you can remove the role from the old one (which now holds the secondary role) and then remove it from the domain.

With that done, you will have a fresh new Windows Server 2016 ADFS server and no “old” ADFS servers.



And that’s all. In the future I will do another post about the WAP service migration, which is easier than this one. I hope this can help someone.

Exam 70-743, Upgrading MCSA Windows Server 2016 experience

Reading Time: 3 minutes

I will quickly explain my experience with Exam 70-743, Upgrading Your Skills to MCSA: Windows Server 2016, which I took last April.

It’s been a while since I took a Microsoft exam (the latest was in 2013, I think); as you probably know, this kind of exam is multiple-choice or single-choice.

Throughout my career, I have seen a lot of people cheat on these exams by memorizing the questions you can find on the internet and finishing in just 20 minutes.

Although I envied those people because they weren’t putting in the same effort as I did, in the end this translated into almost no real knowledge: they weren’t familiar with what they had supposedly practiced, nor with all the features that Windows Server offers.

So, I encourage you to study the materials and practice in order to learn and bring value to yourself if you want to use these technologies from Microsoft.

The blueprint and webpage for this exam are the following:


About the exam

In my case, although I am experienced with Windows Server, this kind of upgrade exam, which is essentially a 3-in-1 exam, can be scary for someone who’s new or hasn’t touched many of the roles that Windows Server has.

Even though I have installed almost all the roles in Windows Server 2016, some of them aren’t so common, and you should practice them in a homelab (the best way to make them stick in your mind).

There are around 60 questions (the quantity may differ) chosen from the following exams:

Regarding the questions, there is a mix of drag-and-drop, radio buttons, checkboxes… you know, the usual ones in this kind of exam.

Important: be aware that the “Nano Server” feature was removed/deprecated from Windows Server 2016 some time ago; here is the official post from Microsoft:

Also read about the changes made to this exam in the official change document that Microsoft provides (it’s in the blueprint):

So, even if you see a lot of information about Nano Server in guides or courses, in my case I didn’t find any question in the exam related to it (as it was deprecated years ago).


Resources and suggestions

As a resource, I mainly used this course from Pluralsight (not free):

There are a lot of videos there; I watched the ones covering topics I felt less confident about and practiced in the lab. Also, I do recommend that you use PowerShell to install and configure everything you can; this way, you will get used to it.

As this is a 3-in-1 exam, the range of features and roles to know is huge. Knowing a bit of everything will help you pass but, without practice, you won’t get anywhere…

Having experience helps a lot, but if that’s not your case, focus on the roles and features you have never used or are not used to using (ADFS, NPS, RRAS, Hyper-V, etc.).



To conclude, I can say it’s a fair exam, and maybe a bit challenging, but you will pass if you practice a lot with all the roles that Windows Server 2016 offers and know the differences from Windows Server 2012 R2.

Also, the most important thing, I think… practice with PowerShell. It won’t only help you with the exam, but also in your future!






Cohesity Build Day Live

Reading Time: 8 minutes

I am going to share some thoughts and opinions about a recent video from Cohesity Build Day Live, recorded with the Build Day Live! team.

Disclosure: This post is sponsored by Cohesity.

First, just let me briefly introduce what Cohesity is:

Cohesity is a platform focused on managing and consolidating secondary applications and data. It provides an IT administrator with a unified view of, and simpler access to, that converged secondary data, such as system backups and analytics.


Now, let’s dive into the topic.

In the video, you will see Alastair Cooke and Bharath Nagaraj building a Cohesity cluster from scratch, configuring jobs, updating the physical appliance, restoring some data and showing some other cool stuff.

I really like this kind of video because you can see how they install a cluster, configure it and resolve any problems that happen, in real time, without cuts.

Also, you can see how little time it takes to deploy and configure a Cohesity cluster (just some minutes), or even to upgrade the whole cluster (node by node) while some protection jobs (backup jobs) are running.


In this case, they use a physical unit to deploy the solution: a 2U enclosure with 4 servers/nodes inside (blade server type).

It comes, like most other solutions, with 1GbE ports for management purposes and 10GbE ports for production data.

As this is an HCI solution, it comes with the storage and compute resources necessary to process and store all the data (a PCIe flash card and hard drives in each node).


Cluster configuration and UI

To configure the cluster, you won’t need a lot of data to fill in or much knowledge: they configure the cluster in a way that is easier and more straightforward than I expected.

In a real scenario, a Cohesity engineer will do it for you thus, this is just to let you know of the simplicity of it.

The UI is simple and clean, the home dashboard looks nice with some graphics regarding your Storage Reduction, Health, Throughput, Protection Runs, etc.


As you probably guessed, it backs up your vSphere/Hyper-V/Nutanix environment like other products do: you configure a backup policy with a schedule, retention time, etc., and then you configure a protection job, which is the backup job associated with a policy.

Just register the hypervisor of your choice and basically, you’re ready to back up your virtual servers (VMs).

One option I really liked when registering a hypervisor was “Auto Cancel Backup if Datastore is running low on space”: the DataPlatform solution is aware of the datastore’s free space and can save you from a big problem there…


Regarding granularity, there are a lot of options to select when you create a protection job (DBs, virtual/physical servers, etc.), but in the video they protected only VMs and Office 365 mailboxes, in different backup policies.

It’s great that when you are creating a protection job (a.k.a. backup job), you can select an object like a cluster or a folder with some particular VMs and then check the “Autoprotect” option to ensure that new VMs added to that object (folder, cluster, etc.) will be automatically protected.


Regarding long-term retention, you can add an external target like a NAS or any cloud (AWS, Azure, GCP, etc.) to store your archive backups.

This option adds great value to your strategy because, when storing large amounts of data for several years, you usually don’t want to keep it locally or even on a NAS.

In my opinion, having the flexibility to store it in any cloud can save you a lot of headaches, despite the money you must pay for the cloud service (which nowadays almost every company does).

So, within a backup policy, select the Archival option and then add as many external targets as you want to store your long-term backups.



Your backup strategy is useless unless you can restore from it…

I do like some points about this section that make it so simple to restore, from a single file (which you can even download) to tons of virtual machines restored to your virtual environment.

– Single file restore

If you want to recover a file, you don’t have to remember the date, where it was, etc. Simply search for the name of the file (or the portion you remember) and the entire cluster will be searched for you:

And then, once you find the file, look at the options we have:

First, pick the date and then you can choose the usual option (Recover to Server) or… download it on the spot (a cool option there).

It looks like a painless and simple way to restore files that probably even a non-tech person could use.

– Instant mass restore

Now, going bigger, let’s talk about Cohesity’s instant mass restore of virtual machines. As the Cohesity platform is designed with a distributed architecture where there isn’t a centralized bottleneck, it can restore tons of VMs much faster than other products.

When recovering a lot of VMs, in the background (you can watch it in your vSphere environment) it mounts an NFS datastore and brings up all the VMs you requested (quite fast, to be honest).

– Office 365

Finally, the last thing to show you is the option to back up your Office 365 environment. You can integrate Cohesity with Office 365 and run protection jobs, associated with a policy, to consolidate all that data within the same platform.


The process is straightforward: select a package from your local computer or get it from the internet; this makes it easy to do it yourself.

One thing that stuck in my mind was that, while some protection jobs were running, you are able to upgrade the whole cluster (node by node) non-disruptively.

As the entire solution is designed to tolerate one node failure (N+1 redundancy), you can upgrade one node without disrupting the service.

As said before, the Cohesity platform is based on a distributed architecture, so, in case a reboot is required after upgrading one of the nodes, you will only lose the bandwidth coming from that node, without impacting the rest of the environment.


Cohesity Helios is the console that lets you manage and view all your clusters from one place. As it lives in the cloud, you only have to register your Cohesity appliance and it will show up in the Helios console.

The Helios dashboard is similar to a Cohesity management dashboard, but with the ability to manage all your clusters from that single pane of glass.

And that’s not all… Helios lets you install applications!

Yes, you can choose to install applications on one of your clusters without anything else. Helios deploys the app within a container (using Kubernetes) on that cluster, without you having to worry about the underlying infrastructure.

Just install, configure and run your app (as it sounds).

For example, running Splunk to gather data analytics on your clusters, without having to worry about deploying it, is really a nice feature to look at.

I’ve never seen a feature like that, and it really surprised me when I saw it. A nice additional value to consider when using Helios with your Cohesity platform.

Other use cases

As the Cohesity platform is cloud and hypervisor agnostic, you can protect objects on any cloud (Azure/GCP/AWS) or any hypervisor (Hyper-V/VMware/Nutanix), but can you imagine what else you can do?

Well, you can use it to migrate VMs between different environments! It’s a great use case: you can back up your whole vSphere environment and move it to Nutanix, for example, or to Azure.

Obviously, there is work to do afterwards but, for me, the simplicity this gives you is massive.

That’s all…

We saw a lot of things about the Cohesity platform and how it can help your company achieve data consolidation: backing up from different clouds and environments (cloud and hypervisor agnostic), establishing SLAs for your services (by configuring policies), recovering tons of VMs, and other features like Helios, a cloud console that brings you a unified view of your Cohesity environment, analytics for all your data and even the ability to deploy applications without needing any extra resources.

If you are interested in more content, check the Cohesity Build Day Live web page or the official Cohesity web page.


vCSA 6.x installer error: “No networks on the host. Cannot proceed with the installation.”

Reading Time: 2 minutes

This is a quick post about an error I have sometimes found while deploying a new vCenter Server Appliance with an embedded PSC using the vCSA 6.x installer.

The problem

In my case, I was trying to install vCSA 6.5 without DNS (this is why the system name is an IP address and the DNS server is the appliance itself). Also, notice that the network section is empty:

If you try to continue with the installation, it will show you an error:

No networks on the host. Cannot proceed with the installation.



I checked the ESXi host and, obviously, it had other port groups created on a standard virtual switch. So, what was the problem? Why couldn’t I see them in the drop-down list?


Searching the internet, I found this:

That web page mentions the “VM Network” port group, a default port group created when you deploy an ESXi host. In my case, the host was auto-deployed with different port groups and that one didn’t exist.

Hence, I decided to create a port group called “VM Network” on the host where I was trying to deploy the vCSA and… it worked!
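If you want to create it without the host client, a quick PowerCLI sketch does the same job. The host name and “vSwitch0” are assumptions; adjust them to your environment:

```powershell
# PowerCLI sketch: create the "VM Network" port group on the host's standard switch.
# "esxi01.lab.local" and "vSwitch0" are hypothetical placeholders.
Connect-VIServer -Server "esxi01.lab.local"
Get-VirtualSwitch -VMHost "esxi01.lab.local" -Name "vSwitch0" | New-VirtualPortGroup -Name "VM Network"
```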

Now, I can see that port group, and I was able to continue the installation successfully!

It seems that you must have this port group if you are deploying a vCSA, at least from your PC. So, bear this in mind if you are trying to deploy a new vCSA and you don’t have the default port groups.


I hope this helps if someone has this issue.

Cloning virtual machines in vSphere series – Part 3: Linked Clones

Reading Time: 7 minutes

Let’s start with part 3 of the cloning virtual machines in vSphere series. I am going to talk about another type of clone: linked clones!


All articles regarding cloning virtual machines in vSphere series:

Part 1: Types of clone
Part 2: Full Clone
Part 4: Instant Clones (not published yet)

As said in previous articles, this series is only focused on VMware vSphere; hence, VMware Horizon View is not covered (linked clones are commonly used in that product).

What is a linked clone?

What can you see here?

Keanu Reeves has been linked-cloned!

In the previous image, you can see 3 different characters, but they share something in common… the actor!


Linked clones are the same! A linked clone is a type of clone (a copy of a virtual machine) where the parent VM shares its virtual disks with its clones.

The linked clone is created from a snapshot of the parent VM and, because of that, it will have the same state the parent had when the snapshot was taken.


When the linked clone is created, its virtual disk (.vmdk file) references the snapshot of the parent VM, which leads to some unique characteristics:

  • The clone is dependent on the parent VM because they share virtual disks. If you delete the parent VM’s snapshot, the clone’s virtual disk will be corrupted.
  • Even though both VMs share storage, any changes performed in the clone won’t affect the parent VM, and vice versa.
  • The linked clone will have the exact same data as the source VM because it was created from a snapshot.
  • The space savings are obvious: the clone only writes new modifications to its own virtual disk. So, the clone’s virtual disk size will be only the amount of data that changed after it was created!

General process

  1. Use or prepare a VM (Parent VM) that will be used as a master/parent to deploy the linked clones
  2. Power-off the Parent VM (Recommended but not mandatory)
  3. Perform a snapshot of the VM.
  4. Time to create Linked clones referencing the snapshot we created previously.
  5. Power-on the clones and customize them (apply customization specifications for example).
  6. (Extra) Before powering on the linked clone, take another snapshot of the clone to use as a rollback point (if the end user needs it).
  7. (Extra) Power on the clone; it is ready to be delivered.
  8. (Extra plus) If you decide to keep the linked clone for any reason, you can perform a full clone of it and it will become an independent VM!


In the next section, I will show you how to create a linked clone of a Windows VM with PowerCLI; in my case, I will use a customization specification within the script when launching the clone.

Lab time!

Here we have the VM that we’re going to use as our Parent VM:

Name: SQLMasterVM
Disk allocation: around 35 GB, summing both disks.

Inside the Guest OS:


Space allocated in DS:


What are we going to do?

  1. Shut down the master image VM that hosts some DBs.
  2. Create a snapshot while the VM is powered off to ensure that it is consistent (this VM has SQL Server installed, so this is even more recommended).
  3. Perform the linked clone via PowerCLI.
  4. Start the VM (we aren’t going to do the extra step of creating a snapshot of the clone) and use a customization specification to fully customize the clone.

All of this will be performed by this simple script:

##Creating SQL Linked Clone from a Parent VM "SQLMasterVM"
$OSSpec = Get-OSCustomizationSpec -Name 'Win-SQL'
$BaseVM = "SQLMasterVM"
$LinkedVM = "SQL-LC1"

# Delete snapshots on the Parent VM
Get-Snapshot -VM $BaseVM | Remove-Snapshot -Confirm:$false
Start-Sleep -Seconds 2

#Create snapshot
New-Snapshot -VM $BaseVM -Name "Linked-Snapshot" -Description "Snapshot for linked clones for $LinkedVM"

#Gather information of the created snapshot
$snapshotParent = Get-Snapshot -VM $BaseVM | Select Name
$snapshotParent = $snapshotParent.Name
Start-Sleep -Seconds 5

#Create Linked Clone referencing snapshot and start the VM.
New-VM -Name $LinkedVM -VM $BaseVM -Datastore "VMS" -ResourcePool (Get-Cluster -Name Gaiden-Cluster | Get-ResourcePool) -OSCustomizationSpec $OSSpec -LinkedClone -ReferenceSnapshot $snapshotParent -DiskStorageFormat Thin
Start-VM -VM $LinkedVM

In this script, I am also using the OSCustomizationSpec parameter in the command that creates the linked clone, to change the IP and name of the resulting clone and rejoin it to the domain. I am also changing the SQL instance name, because in my case it’s a server with MSSQL Server installed.

Once the script finishes, a new linked clone named “SQL-LC1” is created and powered on.


We can see how little time it takes to create a linked clone (5 seconds):

And now look at the storage allocated by the linked clone (powered off), approximately 750 MB:


After the Linked clone is created and powered on, you can do whatever you want.

I had to wait some minutes (around 10 in my case) until the OS customization specification finished all the specified actions (power on the VM, join the domain, reboot the VM, execute a script to update the SQL instance, etc.).

Here is the “real” space allocated after the linked clone booted up and I logged in with a user, around 4 GB:


A look inside the guest OS of the linked clone (new hostname, new IP, and the same storage as the parent VM):

Use cases

It’s commonly used in VDI and DEV environments but here are some examples:

  • Desktop Deployment
  • QA
  • Bug testing
  • DB server testing
  • File server testing
  • General testing


Benefits and limits

Let’s summarize which are the benefits and limits that we can find in linked clones:


  • Super-fast cloning compared to a full/normal clone: it takes seconds instead of minutes to clone large VMs.
  • Space savings, because changes are stored in a separate disk (the clone’s delta disk).
  • Useful for development environments; and if you want to keep a clone, just perform a full clone of it!
  • You can deploy as many linked clones as you want; they all reference the snapshot on the parent VM, so there is no growing disk chain (except for the snapshot you created, of course).
  • Ongoing changes made to the virtual disk of the source VM don’t affect the linked clones, and changes to the disk of the linked clone don’t affect the parent.
  • Cloning can be performed with the parent VM powered on, but there will be some performance degradation and probably inconsistent data (if, for example, the parent VM hosts a DB).


  • It is recommended (but not mandatory) that the parent VM be powered off.
  • There is a storage/disk dependency: the linked clone is created from the parent VM’s snapshot, so if you delete that snapshot, inconsistencies will occur in the clone (and in the end you will have to delete the clone).
  • Performance on the clone will be impacted (as the virtual machines are sharing storage).


To conclude

Linked clones have multiple benefits compared to full clones, and many use cases, as we saw before.

You can easily replicate the state of a VM (via a snapshot) and deploy linked clones to your end users, with all the benefits such as space savings and deployment speed.

To end this series, we will look at instant clones, another type of clone that is even faster than linked clones but, with some particularities.

Experience at a local VMUG – Barcelona VMUG

Reading Time: 3 minutes

I am going to talk about my experience at a local VMUG, in my case the Barcelona (BCN) VMUG. It was my first time at a local VMUG event, which is why I decided to share my thoughts about it.

VMUG – What is?

As you probably know, VMUG stands for VMware User Group. Basically, it is an international community of people who share their experiences and discuss topics related to VMware and other technologies.

At your local VMUG, you will find VMUG members (whom you can also meet online or in person) and sponsors. You will meet many people passionate about VMware and can connect with them, so I highly recommend attending VMUG events: it’s a great way to learn about concepts and technologies and to connect with a lot of people.

There are many communities around the world; if you want to find your local VMUG, go and register at


VMUG – Why attend?

Because you won’t regret it!

I hadn’t imagined that the atmosphere and the people would be so good; everyone is passionate about technology, mainly VMware, and you can hear their stories and share experiences that you have probably both lived through.

Hence, it doesn’t matter if you’re presenting or just attending your local VMUG; the point is to go there, meet people and learn from the sessions! You will find that a lot of people share a passion for VMware and technology, so don’t be shy: say hello and try to meet them.

Presenting at a local VMUG

I’ve been involved in this local VMUG for approximately a year but never had the chance to attend an event. I hadn’t heard anything about a new event since VMworld, so I asked on the VMUG forum when the next one would be, and it ended with me presenting a session myself.

In my case, I presented a session (in Spanish) about clones in vSphere (I’ll write a post about presenting in the near future) on March 15th.

Don’t expect something like my case: I was lucky enough to present at my first event but, usually, it’s better to go there and meet the people before presenting, not the other way around.

Final thoughts

I am glad that I was finally able to attend and meet a lot of VMUG members; you can learn a lot from people through their sessions and also make connections that will be valuable in the future.

I would highly recommend attending to experience the passion of VMUG members and the knowledge you can gather from them.

Also, I would like to thank the VMUG leaders of the BCN VMUG for letting me present, and the sponsors who make it possible.

I hope to see everyone in the next event!

Cloning virtual machines in vSphere series – Part 2: Full clone

Reading Time: 5 minutes

Continuing with the cloning virtual machines in vSphere series, today I am going to write about the full clone, how it works and some useful information about it.

So, let’s talk about clones… but just full clones.


How does it work?

As you probably know, a full clone is an exact copy of a source VM, meaning that everything from the parent VM is copied (VM settings, disks, files, etc.).

This operation can be performed whether the parent VM is powered off or powered on and, if the parent has snapshots, they are consolidated in the clone once the operation is done.


When you clone a VM, be aware that all data will be identical, so if you power on the clone without performing any customization you will probably have conflicts with IPs, MAC addresses, SIDs (Windows), etc.

The great thing about a full clone is that, after the cloning operation finishes, the clone is an independent virtual machine that shares nothing with the parent virtual machine (from a compute and storage perspective within vSphere).

Ways to do it

First of all, you will need VMware vCenter to do it.

There are other (unofficial) ways, like copying all the data related to the virtual machine (.vmdk and .vmx files) and then registering the “new” VM with another name.
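As a rough sketch of that unofficial approach (the datastore path and host name below are made up for illustration), after copying the files you could register the copied .vmx with PowerCLI:

```powershell
# Register a copied .vmx file as a "new" VM on a host (unofficial approach).
# "[Datastore1] ClonedVM/ClonedVM.vmx" and "esxi01.lab.local" are example names.
New-VM -VMFilePath "[Datastore1] ClonedVM/ClonedVM.vmx" -VMHost "esxi01.lab.local"
```

Remember that the registered VM keeps everything from the original, so rename it and fix its network identity before powering it on alongside the source.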

Let’s continue with the usual ways:

vSphere Web Client

You can do it through the vSphere Web Client; it’s as simple as right-clicking a VM -> “Clone to Virtual Machine…”:


It takes some time (depending on the storage the source VM has allocated), but in the end you will have your new clone.

You are likely more familiar with deploying templates…

Deploying a template is the same as cloning but, aside from copying the data from the parent virtual machine, vSphere lets you customize the deployed VM, so you can create many clones with different configurations as you wish.
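For instance, a template deployment with a customization specification could look like this PowerCLI sketch (the template, host and spec names are hypothetical):

```powershell
# Deploy a new VM from a template and apply an OS customization specification,
# which avoids IP/SID/hostname conflicts with other VMs from the same template.
# "W2016-Template", "esxi01.lab.local" and "LabCustSpec" are example names.
New-VM -Name "AppServer01" -Template "W2016-Template" `
       -VMHost "esxi01.lab.local" -OSCustomizationSpec "LabCustSpec"
```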


Of course, you can do it with PowerCLI. These are the minimal parameters needed to perform it (the DiskStorageFormat parameter is optional but recommended because, by default, all disks will be converted to Thick Provision Eager Zeroed):

New-VM -Name <CloneName> -VMHost <VMHost> -VM <VirtualMachineSourceVM> [-DiskStorageFormat <VirtualDiskStorageFormat>]

In the previous screenshot, you can see the minimum parameters required to perform a full clone, if you want to see more options you can check it here.

As you can see in the code, it’s similar to deploying a template, isn’t it?

Use cases

The main use case is deploying from a template; maybe we are not aware of it, but deploying from a template is just cloning our source VM (the master template) and then customizing it.

I have seen many customers use it as a “rollback” when they have to perform a destructive task within the guest OS: just shut down the parent VM and power on the clone.

If you think a snapshot can do the same as a clone, well, not always… some applications don’t handle quiesced snapshots well.

This is why, as a solution, you can create a full clone while the virtual machine is powered off and then have a copy that is consistent and free of corruption.

Another use case could be a full clone to use in other environments. Although there are better ways to do this (with other products), when the guest OS has many customizations this can be an alternative to re-creating the entire virtual machine.

Benefits and limitations

The benefits of a full clone were mentioned before:

  • If the cloning operation is executed when the source VM is powered off, it can be used as a rollback in many cases (there are better options like a VM backup but, it can help a lot).
  • Creation of an independent VM that shares nothing with the source VM.
  • Used in templates, so, they are very useful!

These are limitations rather than disadvantages:

  • It takes some time to create a full clone (depending on the allocated storage), as it has to copy all the storage from the source VM.
  • It can only be performed with VMware vCenter (there are other ways as I explained before but they are not official).
  • If done while the VM is powered on, it has an impact on the source VM that the business may notice, so it isn’t the best option while the virtual machine is running.


To sum up, a full clone is a great way to get an identical copy of another VM to use as a permanent virtual machine once you configure it accordingly.

As said before, it is the same as deploying a template, because deploying a template is just cloning a VM and then customizing it.

It usually takes some minutes to finish the clone (depending on the storage allocated to the parent VM), which is why there are other, faster ways to deploy clones (more on that in the next posts!).

Cloning virtual machines in vSphere series – Part 1: Types of clone

Reading Time: 4 minutes

Most of you already know how to clone virtual machines within vSphere, meaning cloning from the vSphere Web Client within vCenter; but beyond that, there are other types of clones you can use in vCenter, like linked clones or instant clones (aka Project Fargo/VMfork).

Because there is a lot to discuss about each type of clone, I decided to write a short series of posts about cloning VMs!


Types of clone

Here I will summarize each type of clone that exists in vSphere; some of them are used in different products or interfaces but, in the end, all of them are accessible through PowerCLI.

Full clone

This is the “classic” clone you can perform from the vSphere Web Client regardless of the VM’s power state (on or off): a full copy of the VM.

If you want to perform a consistent clone, it’s recommended that you power off the VM and then perform the clone.

This is an independent copy with no dependency on the parent virtual machine once the clone completes (meaning you can remove the parent VM if you need to).

The main advantage is that you can have a reliable copy of the Parent VM (remember this is not a backup) if you want to replace it. As this is a full copy of the VM (it will copy the entire disk), this might take several minutes depending on the size of the VM.

After you perform it, remember that everything will be the same: all configuration (SID, network configuration, hostname, etc.) within the VM will be identical, hence it can lead to problems if both VMs co-exist at the same time without the proper reconfiguration.

Full clone

Linked clone

A linked clone is a clone made from a snapshot of the parent VM. This means that both VMs (the linked clone and the parent VM) share virtual disks.

So, the linked clone is dependent on the parent VM, meaning the linked clone needs access to the parent VM’s disks. As a best practice, the clone should be made while the parent VM is powered off.

Once a linked clone is created, changes on the parent VM don’t affect the linked clone and, the other way around, changes in the linked clone don’t affect the original VM. The main benefits of using linked clones are:

  • Disk space savings, because only the differences between the base snapshot and the linked clone are allocated.
  • Quick deployment of tens or hundreds of VMs, as there is no need to copy the entire disk.

This technology is commonly used in VMware Horizon View to rapidly deploy a lot of desktop VMs. The thing is that we can also use it with PowerCLI, without Horizon View, for more use cases.
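As a sketch of how that looks in PowerCLI (the VM, snapshot and host names below are hypothetical), New-VM supports the -LinkedClone and -ReferenceSnapshot parameters:

```powershell
# Create a linked clone from an existing snapshot of the parent VM.
# Only the deltas from "BaseSnapshot" consume new disk space.
# "ParentVM", "BaseSnapshot" and "esxi01.lab.local" are example names.
$parent   = Get-VM -Name "ParentVM"
$snapshot = Get-Snapshot -VM $parent -Name "BaseSnapshot"
New-VM -Name "LinkedClone01" -VM $parent -LinkedClone `
       -ReferenceSnapshot $snapshot -VMHost "esxi01.lab.local"
```

Because the clone depends on that snapshot, don’t delete the parent VM or its snapshot while linked clones are still using them.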

Linked clone

Instant clone

Similar to the linked clone, the instant clone is like an improved version of linked clone technology. This is something “new” in vSphere 6.7, as it is now available through the API.

Like with linked clone technology, there is a parent VM which shares its disk with the clone (the instant clone) but, in this case, it shares its memory too (even if TPS is disabled).

There are two types of instant clones that I will explain in more detail in the next posts but, as a summary, you can take an instant clone of a source VM at a point in time and deliver as many VMs (instant clones) as you want.

The parent VM must be powered on, unlike with the other types of clones; this way it provides an even faster way to deploy VMs, because the instant clones don’t need to be power-cycled.

As benefits, we get the same ones as with linked clone technology, plus memory efficiency (because memory is shared between VMs) and the ability to resume the VM at a point in time without power cycling the clone.

On the other hand, depending on which type of instant clone you use, you can end up with a lot of delta disks.
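Since instant clones are exposed through the vSphere 6.7 API rather than a dedicated PowerCLI cmdlet, one possible sketch is to call the InstantClone_Task API method directly from PowerCLI (the VM names here are hypothetical, and the parent must be powered on):

```powershell
# Instant-clone a running parent VM via the vSphere 6.7 API.
# The new VM shares disk and memory with the parent at this point in time.
# "ParentVM" and "InstantClone01" are example names.
$parentView = Get-VM -Name "ParentVM" | Get-View
$spec = New-Object VMware.Vim.VirtualMachineInstantCloneSpec
$spec.Name = "InstantClone01"
$spec.Location = New-Object VMware.Vim.VirtualMachineRelocateSpec
$parentView.InstantClone_Task($spec)
```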

Instant clone


I have tried to summarize each type of clone we can perform within vSphere; if you want to read more, stay tuned for more detail in the coming series of posts about cloning VMs within vSphere.


IT skills: Which gap to fill?

Reading Time: 5 minutes

This post is about a nice article from Eric Shanks that I read a while ago:

I decided to write about it because it has been on my mind ever since he published it. In particular, I will talk about what we should learn in order to fill the gap for a position/job in any technology, and about my experience.

Which path to choose?

I started learning Microsoft (MS) products and then moved on to other technologies like virtualization, cloud, storage, etc. In any case, you don’t have to worry about which your first technology will be, because you can move on later if you realize it doesn’t fit you.

For example, you can start learning Linux (I wish I had taken that path!) instead of Windows and then move to Windows, or start learning about storage; any path is correct as long as you continue to learn, whether the same technology or a new one.

When you start learning, you have an idea of what you want to learn (networking, cloud, storage, etc.) and then try to master that technology or move on to another one. You must start somewhere in order to grow your knowledge; later you will decide which technology or technologies you want to focus on.

As soon as you start, you will notice whether you like it or not, but choose one!

Knowledge is lost like a tank with a leak

As I mentioned at the beginning of this post, Eric Shanks described it in his blog as filling a tank (which drains as time passes) with your knowledge:

When you learn a new topic, it stays in your mind (it won’t drain) as long as you keep using it, and over time you will reach different levels of knowledge within the topic: in some years you may achieve an advanced level and then perhaps an expert one, or maybe not, since not everyone learns with the same ease. Also remember that to maintain a certain level you must keep your knowledge up to date (the topic is always gaining new updates and features), which means more effort and time to put in.

You can try to learn other things and reach the same level of knowledge in each one, but it can be tough to do… you must spend a lot of time updating and maintaining the different topics. So, we should try to fill our tank with the knowledge we really want (by testing, reading, writing, etc.); in this way we can maintain a great level of wisdom but, in the end, nothing is more useful than practising!

In my personal experience, when you are learning something new (by reading or watching videos), in the end there is nothing more useful than practising it in a lab, even if you already know the steps.

It’s a great way to ingrain those steps in your mind, and if you fail when deploying a new product or feature, it’s even better 🙂

Filling the gap, but which?

This is what I really wanted to talk about.

Sometimes, it can be really hard to achieve a new position because you notice that there are some gaps between your skills and the desired position/job.

About dream jobs…

For example, if your dream position is Technical Architect, you must be good enough in a lot of topics (networking, storage, virtualization, soft skills, etc.), but it can be very difficult to gain expertise in each area (not to mention the soft skills needed).

As you know, each area has sub-areas. For example, if you want to learn VMware, you can learn the server virtualization platform (vSphere), but there are other areas like the storage virtualization platform (vSAN) and the network virtualization and security platform (NSX).

So you can be an expert on vSphere, but a lot of positions will also demand that you know about vSAN or NSX, for example, while you spent all your time and effort becoming a great vSphere expert. Then, do you really need to gain an advanced level before applying to a position that requires it?

Well, it depends; you should figure it out with the recruiter, but at least gain an entry level in the skills the job requests.

We can conclude that to achieve such a position you must approach each technology, which usually takes many years of experience and learning.

About filling the gap

Another example: if you are a Windows administrator and now want to learn about Azure, it’s related to Microsoft but it’s different from learning another OS. In this case, you should gain knowledge in areas like storage, networking and maybe some coding skills.

Hence, we can conclude that you should try to learn at least a bit of every technology (related to your main knowledge) and try to maintain and update your knowledge in those areas.

Also, bear in mind that the more you learn about different topics, the more you “lose” in other ones; so my personal advice is to stay close to what you want to learn, but don’t be shy about learning what you don’t.